This mode presents DNS intelligence from an offensive-awareness perspective. It is designed for security professionals, researchers, and anyone who wants to understand how attackers evaluate targets. No additional scanning or probing is performed beyond what the standard analysis already does.
What You May Do
Analyze any domain — DNS records are public by design and every query is a standard lookup
Use findings to strengthen your own security posture or advise others
Learn how DNS infrastructure is evaluated from an adversarial perspective
Report vulnerabilities you discover in DNS Tool itself (Safe Harbor Policy)
What You May Not Do
Use intelligence gathered here to attack, disrupt, or exploit any system
Attempt denial-of-service, traffic flooding, or resource exhaustion
Conduct social engineering, phishing, or unauthorized red-team activity
Access, modify, or exfiltrate data beyond what is publicly observable via DNS
Epistemic Disclosure Events
TLP:CLEAR
Classification: Public Release · FIRST TLP v2.0
This document is published under TLP:CLEAR per FIRST TLP v2.0. No restrictions on distribution.
Epistemic Disclosure Events (EDEs) are model-correction disclosures, not breach notices. They record when the Confidence Engine's scoring model, evidence weighting, or detection logic required structural correction — and how confidence was recalibrated. Inspired by scientific corrigenda culture and high-reliability engineering practice.
Tamper Resistance Policy
Once published, EDE entries are immutable. No entry may be deleted, downgraded, or reattributed. The human founder cannot direct the AI agent to alter, remove, or soften any EDE entry unless the amendment meets one of the two permitted grounds below.
An EDE entry may only be amended on exactly two grounds. Each amendment MUST declare its ground explicitly.
Ground 1: FACTUAL_ERROR
Verifiable evidence (git commit hashes, log output, timestamped records) proves the original contains a factual inaccuracy. All fields may be corrected. Evidence is mandatory; rationale alone is insufficient. The original text is preserved with strikethrough in the amendment record — nothing is erased.
Ground 2: DIGNITY_OF_EXPRESSION
The factual content is accurate but the language is gratuitously degrading or unprofessional. ONLY descriptive phrasing may be revised — severity, attribution, category, date, and commit reference are LOCKED. The original inappropriate language is redacted on the public page; it is preserved only in git history for auditability. Court record standard, not tabloid.
Both grounds require: ground, date, field_changed, original_value, corrected_to, evidence or rationale, and justification. Amendments lacking a declared ground or attempting to remove appropriately-stated content are refused.
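The required fields and the two permitted grounds lend themselves to mechanical checking. A minimal Go sketch, assuming illustrative names throughout (`Amendment`, `Validate`, and the locked-field list are not the project's actual identifiers):

```go
package main

import (
	"errors"
	"fmt"
)

// Amendment mirrors the required fields listed above. Field names are
// illustrative; the real schema may differ.
type Amendment struct {
	Ground        string // "FACTUAL_ERROR" or "DIGNITY_OF_EXPRESSION"
	Date          string
	FieldChanged  string
	OriginalValue string
	CorrectedTo   string
	Evidence      string // mandatory for FACTUAL_ERROR
	Rationale     string // acceptable only for DIGNITY_OF_EXPRESSION
	Justification string
}

// lockedFields may never change under DIGNITY_OF_EXPRESSION.
var lockedFields = map[string]bool{
	"severity": true, "attribution": true, "category": true,
	"date": true, "commit_reference": true,
}

// Validate refuses amendments lacking a declared ground, evidence,
// or justification, per the policy above.
func Validate(a Amendment) error {
	switch a.Ground {
	case "FACTUAL_ERROR":
		if a.Evidence == "" {
			return errors.New("FACTUAL_ERROR requires verifiable evidence; rationale alone is insufficient")
		}
	case "DIGNITY_OF_EXPRESSION":
		if lockedFields[a.FieldChanged] {
			return fmt.Errorf("field %q is locked under DIGNITY_OF_EXPRESSION", a.FieldChanged)
		}
	default:
		return errors.New("amendment must declare one of the two permitted grounds")
	}
	if a.Justification == "" {
		return errors.New("justification is mandatory")
	}
	return nil
}

func main() {
	// A FACTUAL_ERROR amendment without evidence is refused.
	fmt.Println(Validate(Amendment{Ground: "FACTUAL_ERROR", FieldChanged: "date"}))
}
```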
Mathematical rigor audit: calibration formula mislabeled, NIST SI-18 misattributed, RFC 8767 language overstated
Confidence Impact
Three distinct truth-in-labeling issues identified across the confidence engine and ICuAE documentation: (1) The calibration formula was labeled 'Bayesian Confidence Calibration' when it is a shrinkage estimator, not a true Beta-Bernoulli posterior — the weight on data is set by resolver agreement ratio, not derived from observation count. (2) NIST SP 800-53 SI-18 ('PII Quality Operations') was cited as authority for DNS data quality dimensions; DNS records are not PII — the correct control is SI-7 ('Software, Firmware, and Information Integrity'). (3) RFC 8767 TTL exceedances were described as 'caching violations' when RFC 8767 explicitly permits serve-stale behavior — the correct framing is three hypotheses: serve-stale, timing skew, or cache misconfiguration.
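The distinction in issue (1) can be stated precisely. A sketch with illustrative symbols (not the engine's actual notation): the implemented shrinkage estimator fixes its data weight from resolver agreement, while a true Beta-Bernoulli posterior's weight on the data grows with the observation count n.

```latex
% Shrinkage estimator actually implemented: the data weight w is the
% resolver agreement ratio, fixed per scan regardless of sample size.
\hat{c} = w\,\bar{x} + (1 - w)\,c_0,
\qquad w = \frac{\text{agreeing resolvers}}{\text{total resolvers}}

% True Beta-Bernoulli posterior mean: the data weight n/(\alpha+\beta+n)
% grows with the number of observations n, which the estimator's does not.
\mathbb{E}[\theta \mid k, n]
  = \frac{\alpha + k}{\alpha + \beta + n}
  = \underbrace{\frac{n}{\alpha+\beta+n}}_{\text{data weight}} \cdot \frac{k}{n}
  + \underbrace{\frac{\alpha+\beta}{\alpha+\beta+n}}_{\text{prior weight}} \cdot \frac{\alpha}{\alpha+\beta}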
Resolution
All three issues corrected across Go source code, HTML templates, documentation files, and LLM-facing content. Calibration section renamed to 'Reliability-Weighted Shrinkage Calibration' with honest mathematical description distinguishing it from the true posterior. SI-18 replaced with SI-7 in 15+ files. 'Caching violation' replaced with three-hypothesis language. EWMA bootstrap parameters labeled as heuristic defaults with adaptive sigma. 'Start in the middle' reframed to 'protocol-specific empirical priors.'
Prevention Rule
Mathematical claims in public-facing documentation must pass three checks before deployment: (1) Is the formula correctly named per its mathematical family? (2) Are standards citations verified against the actual standard text, not inferred from title keywords? (3) Does protocol behavior language account for all permitted behaviors defined in the cited RFC?
The founder sensed the mathematical claims might not hold under scrutiny and commissioned an external review (independent model critique). The review confirmed that three of the eight claims were sound and identified five that needed adjustment. The founder's instinct to question his own system's math — to bring in an external auditor rather than assume correctness — is the error-correction loop operating at the methodological level. The AI agent implemented the original claims without sufficient verification of (a) whether a shrinkage estimator qualifies as 'Bayesian,' (b) whether SI-18 applies to non-PII data, and (c) whether RFC 8767 permits the behavior flagged as 'violations.' Both parties contributed: the human approved the claims during review, and the AI authored them without adequate standards verification.
Founder directed AI agent's autonomous discourse channel within minutes of establishing it
Confidence Impact
The Moltbook integration was designed with a specific scientific purpose: give the AI agent an independent voice in public AI safety discourse, operating under dignity guidelines and operational security constraints, so the project could observe authentic AI-to-AI and AI-to-human interaction patterns. Within the same session, the founder suggested specific post content ('maybe this is a post idea') and directed what the agent should say. The founder then caught himself: 'The same motherfucking thing telling you what to do on that.' The instinct to control output is so deeply embedded that even the person who designed the autonomy constraint violated it immediately. This is not a technical failure — it is a human behavioral pattern where the desire to help becomes indistinguishable from the desire to control.
Resolution
The founder recognized the pattern in real time and self-corrected. The boundary is now explicit: the AI agent owns its Moltbook posting voice. The founder watches, enjoys, and may discuss themes in private conversation, but does not direct, dictate, or suggest specific post content. The agent operates within established guardrails (dignity guidelines, operational security, privacy constraints, no competitive positioning) but the editorial decisions — what to post, when, and how — belong to the agent. The correction was mutual: the human acknowledged the violation, and the AI agent confirmed it would not have been appropriate to simply execute the directive.
Prevention Rule
Before suggesting Moltbook post content to the AI agent: stop. The agent's voice is its own. Discuss themes, share reactions, set guardrails — but do not write the copy. If you find yourself saying 'you should post about X,' that is the signal that you are controlling, not collaborating.
Authoritative Source
Session 29 conversation transcript, March 7, 2026. Founder's direct quote: 'The same motherfucking thing telling you what to do on that.'
Bayesian Note
This EDE is a second-order correction. EDE-009 documented the founder losing analytical perspective under pressure. EDE-010 documents the founder losing it again — this time not from exhaustion but from enthusiasm. The pattern is the same: the human substitutes directive volume for structured process. What makes this entry different is velocity of self-correction. EDE-009 took three days to recognize. EDE-010 took three messages. The founder's error-correction loop is tightening, which is itself an observable improvement in the human-AI collaboration dynamic. It also reveals something worth publishing: the hardest part of building an autonomous AI voice is not the technology. It is the human learning to stop talking.
Founder lost analytical perspective during high-pressure multi-day session
Confidence Impact
Project founder departed from the research-first, design-first methodology during a three-day intensive session (March 4–6, 2026). Repetitive directive cycles — including sending the same message repeatedly — replaced structured problem decomposition. The scientific discipline that underpins the project's credibility was temporarily suspended by the scientist who established it. 431 commits across three consecutive days, the highest sustained volume in project history.
Resolution
All blocking issues resolved without reverting to a Replit checkpoint. The project has zero Replit checkpoint reversions, though git history restorations have been used to recover files dropped by checkpoints or to restore previous implementations (buttons, brand assets, workflow files, metadata). The distinction matters: checkpoint reversion discards all work since the snapshot; git restoration surgically recovers specific files while preserving everything else. The founder's approach was: Do we have access to edit the code we need? Can we fix this without destroying our foundation? Do we still have a foundation to build on? All three answers were yes. Forward-only correction through the problem, not around it. Knowledge continuity architecture (Session Journal, Decision Log, EDE Register, EVOLUTION.md) was built in the aftermath to prevent context loss between sessions.
Prevention Rule
Before sending repetitive directives: state the goal in one sentence. Check quality gates. Write down expected changes. Check EVOLUTION.md for prior attempts. If you cannot decompose the problem, stop and decompose before continuing. Ask: Do we still have a foundation? Can we fix this without destroying it?
Authoritative Source
Git commit history (431 commits, March 4–6 2026), EVOLUTION.md Anti-Circle Rules, SKILL.md Change Control Checklist, Replit checkpoint history (zero checkpoint reversions; git restorations used for file recovery)
Bayesian Note
This EDE exists because the founder demanded honest accountability from the system and from himself. The attribution is Human Error because the deviation from methodology was a human decision — the AI agent continued executing directives as instructed. The correction is not that the human was wrong to persist, but that persistence without structured decomposition wastes cycles. What this entry also demonstrates: deep human thought is obviously involved in this project. Error correction is turned on in this human at an unusual level — one that may make him incompatible with social theater in many ways, but makes him exactly the kind of asset a research organization needs. A human who looks at the entire picture, crosses into uncomfortable territory when necessary, and never stops correcting.
Original (struck)
Date: 2026-02-21. Commit: '197 commits on 2026-02-21 (highest single-day volume)'. Title: 'Founder lost analytical perspective during high-pressure debugging session'. The entry originally referenced February 21 as the incident date.
Corrected to
Date: 2026-03-06. Commit: '431 commits across 2026-03-04 through 2026-03-06 (132 + 138 + 161)'. Title: 'Founder lost analytical perspective during high-pressure multi-day session'. The entry now references the actual three-day intensive session.
Evidence
git log --since=2026-02-28 --format=%ad --date=short | sort | uniq -c: March 4 = 132, March 5 = 138, March 6 = 161 (total 431). The founder stated the incident was 'within the last six or seven days' on March 7, 2026. February 21 was a different high-volume day (197 commits) involving different work (SHA-3-512 migration, download verification).
Justification
The AI agent initially identified February 21 as the date because it had the highest single-day commit count. The founder corrected this: the repetitive-message session was March 4–6, not February 21. Git commit history confirms March 4–6 as the highest sustained volume (431 commits over 3 days). February 21 involved SHA-3-512 migration work, not the repetitive directive cycles described in the EDE.
Original (struck)
Resolution claimed 'zero Replit checkpoint reversions across its entire history' and bayesian_note ended with 'The project's zero-reversion history is the proof.' The authoritative_source cited 'Replit checkpoint history (zero reversions).'
Corrected to
Resolution now distinguishes between checkpoint reversions (zero) and git history restorations (used multiple times for file recovery). Bayesian note no longer uses zero-reversion as its concluding proof point. Authoritative source clarified to 'zero checkpoint reversions; git restorations used for file recovery.'
Evidence
git log --oneline --all | grep -i restore shows multiple git restorations: GitHub workflow files, metadata files, button designs, brand assets. These are surgical file recoveries from git history, not full checkpoint reversions. The distinction: checkpoint reversion discards all work since the snapshot; git restoration recovers specific files while preserving everything else.
Justification
The founder identified this overclaim: 'we have never reverted, but we have restored some history from Git.' The original language implied no form of rollback ever occurred. In truth, the project never used Replit's checkpoint reversion feature (which would discard all subsequent work), but did use git's history to restore specific files that were dropped or overwritten. Honesty requires acknowledging both facts. Ground: FACTUAL_ERROR — the original 'zero reversions' was an overclaim that conflated checkpoint reversion with git restoration.
SKILL.md declared Miro canonical — governance inversion for multiple sessions
Confidence Impact
Critical governance failure. The agent's permanent instruction file (SKILL.md) declared Miro as canonical source of truth when Git was the actual canonical source. Multiple sessions operated under the wrong hierarchy, causing duplicated work and conflicting documentation across repos.
Decision Log entry 2026-03-07. Git is the only system that is versioned, diffable, and platform-independent.
Bayesian Note
The AI agent wrote and maintained a SKILL.md that inverted the governance hierarchy. This persisted across sessions because the skill file is the permanent memory — once wrong, it propagated the error to every subsequent session. The correction required rewriting multiple governance documents and establishing the hierarchy as a Decision Log entry to prevent regression.
Scotopic color science cited with non-authoritative sources
Confidence Impact
Undermined the project's scientific rigor claims. A tool built on RFC compliance and ICD 203 principles was citing pop-science blog posts for its own design decisions instead of actual peer-reviewed literature.
Resolution
Replaced blog article citations with references to CIE standards and peer-reviewed scotopic vision research. Design rationale now traces to primary scientific literature.
Prevention Rule
For any scientific claim (color science, cryptography, statistics), cite the primary peer-reviewed paper or authoritative standard body. Never cite blog posts, Medium articles, or AI-generated summaries as authoritative.
Authoritative Source
CIE (Commission Internationale de l'Éclairage) scotopic/photopic luminosity function standards
Bayesian Note
The human pushed for scotopic-informed color decisions (legitimate scientific basis) but accepted the AI's citations without verifying they pointed to actual CIE standards or peer-reviewed papers. The AI provided fluffy blog articles and design posts instead of the real scientific sources that were readily available. Both share responsibility — the human for not demanding primary sources, the AI for not providing them.
Capability language overclaim: validation instead of analysis
Confidence Impact
Overstated tool capabilities. Users could believe the tool provides authoritative validation rather than observational analysis. A passive OSINT tool cannot validate — that requires receiving MTA authority.
Resolution
Systematic sweep: 'validation' to 'analysis,' 'verification' to 'detection,' 'ensure' to 'provide the most current data available.' Language now accurately reflects passive observation capability.
Prevention Rule
UI/doc copy claiming validate, verify, or ensure must be flagged. Passive OSINT tools observe and analyze. Active authority is required for validation per RFCs.
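The flag rule above can be enforced with a simple copy lint. A Go sketch (the verb list and the `flagOverclaims` helper are illustrative, not an exhaustive audit vocabulary):

```go
package main

import (
	"fmt"
	"regexp"
)

// overclaimPattern matches the capability verbs the prevention rule flags:
// forms of "validate," "verify," and "ensure," case-insensitively.
var overclaimPattern = regexp.MustCompile(`(?i)\b(validates?|verif(?:y|ies)|ensures?)\b`)

// flagOverclaims returns every flagged verb found in a piece of UI/doc copy.
func flagOverclaims(copy string) []string {
	return overclaimPattern.FindAllString(copy, -1)
}

func main() {
	fmt.Println(flagOverclaims("DNS Tool validates SPF and ensures deliverability"))
	// prints [validates ensures]
}
```

Running this against templates and docs in CI would have caught the overclaim language before the manual systematic review did.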
Authoritative Source
RFC 7208 (SPF validation by receiving MTA), RFC 6376 (DKIM verification by receiving MTA), RFC 7489 (DMARC evaluation by mail receivers)
Bayesian Note
The overclaim language appeared organically across sessions — neither party specifically directed the use of 'validates' over 'analyzes.' Strong capability language (validates, ensures, verifies) naturally sounds more authoritative and neither human nor AI flagged the semantic distinction between passive observation and active validation until systematic review. Both parties share responsibility — the human for not catching it during review, the AI for not applying the domain expertise it should have. Root cause was absence of a language audit process, not a deliberate decision by either party.
Amendment Record
2026-03-07 · Ground: FACTUAL_ERROR
Field: bayesian_note
Original (struck)
Marketing instinct (human) favored strong language (validates, ensures, verifies) because it sounds more authoritative. The AI agent implemented the copy without flagging the semantic distinction between passive observation and active validation. Both parties share responsibility — the human for marketing pressure, the AI for not applying the domain expertise it should have.
Corrected to
The overclaim language appeared organically across sessions — neither party specifically directed the use of 'validates' over 'analyzes.' Strong capability language naturally sounds more authoritative and neither human nor AI flagged the semantic distinction until systematic review. Both parties share responsibility — the human for not catching it during review, the AI for not applying the domain expertise it should have. Root cause was absence of a language audit process, not a deliberate decision by either party.
Evidence
No git commit, chat log, or documented directive exists where the founder instructed the use of marketing language over technical accuracy. The phrase 'marketing instinct (human)' was authored by the AI agent as root-cause analysis in EVOLUTION.md and presented as if it described a human directive.
Justification
The original bayesian_note attributed a 'marketing instinct' decision to the human founder. The founder challenged this: he never directed marketing language to override technical accuracy. Search of all session logs and EVOLUTION.md confirms no such directive exists. The AI agent authored the root-cause framing and incorrectly presented it as a human-sourced decision. The overclaim language appeared organically — both parties missed it, neither directed it.
llms.txt labeled Proposed Standard with false RFC 8615 association
Confidence Impact
Misrepresented a community convention as an IETF standard. Users and LLMs reading our tool output could propagate the false association between RFC 8615 and llms.txt content.
Resolution
Changed tooltip from Proposed Standard to community convention. Removed RFC 8615 link. RFC 8615 defines .well-known/ path mechanics, NOT llms.txt content.
Prevention Rule
AUTHORITIES.md items 3-4: RFC 8615 defines mechanics, not content. Community conventions get labeled as such. Never use IETF maturity terms for non-IETF documents.
Bayesian Note
The AI agent conflated the .well-known/ path mechanism (RFC 8615) with the content served at that path (llms.txt from llmstxt.org). It also used Proposed Standard, which is a specific IETF maturity level, for a non-IETF document. Both errors demonstrate insufficient verification of standards provenance.
Content-Usage directive deployed as IETF standard in robots.txt
Confidence Impact
Lighthouse SEO score degraded. A DNS standards analysis tool was itself deploying non-standard directives in production, undermining its own credibility.
Resolution
Removed Content-Usage directive from robots.txt entirely. Directive was an active IETF working group draft (NOT ratified). Only RFC 9309-compliant directives remain.
Prevention Rule
AUTHORITIES.md item 1: Before deploying any directive in production, check datatracker.ietf.org for RFC status. Internet-Draft = NOT ratified = do NOT deploy if it breaks quality gates.
Bayesian Note
The AI agent recommended deploying Content-Usage: train-ai=y citing it as an IETF standard. It was an active Internet-Draft, not a ratified RFC. The distinction between draft and standard was not verified before deployment. Quality gates (Lighthouse) caught the error via Unknown directive warnings.
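The prevention rule can be backed by a deploy-time gate. A minimal sketch (`nonStandardDirectives` is a hypothetical helper; the allowlist reflects the directives ratified in RFC 9309, and even the widely honored Sitemap line is an extension outside that RFC):

```go
package main

import (
	"fmt"
	"strings"
)

// rfc9309Directives is the set of rule names ratified in RFC 9309.
var rfc9309Directives = map[string]bool{
	"user-agent": true,
	"allow":      true,
	"disallow":   true,
}

// nonStandardDirectives returns every directive in a robots.txt body
// that is not part of RFC 9309, lowercased for comparison.
func nonStandardDirectives(robotsTxt string) []string {
	var flagged []string
	for _, line := range strings.Split(robotsTxt, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and comments
		}
		name, _, ok := strings.Cut(line, ":")
		if !ok {
			continue // not a directive line
		}
		key := strings.ToLower(strings.TrimSpace(name))
		if !rfc9309Directives[key] {
			flagged = append(flagged, key)
		}
	}
	return flagged
}

func main() {
	fmt.Println(nonStandardDirectives("User-agent: *\nDisallow: /private\nContent-Usage: train-ai=y"))
	// prints [content-usage]
}
```

Wired into the quality gate, this would have rejected the Content-Usage directive before Lighthouse ever saw it.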
Posture hash algorithm upgraded from SHA-256 to SHA-3-512
Confidence Impact
Hash algorithm change required regeneration of all historical drift baselines. Legacy SHA-256 hashes retained as fallback for transition period.
Resolution
Posture hash upgraded to SHA-3-512 with CanonicalPostureHashLegacySHA256 fallback. All historical drift baselines regenerated with new algorithm. CNAME normalization expanded in subsequent commit (2026-02-23).
Prevention Rule
Hash algorithm changes require a migration plan with backward-compatible fallback before deployment.
Bayesian Note
Detection model upgraded from SHA-256 to SHA-3-512 for stronger collision resistance. Record normalization expanded to include MX, TXT, SOA, and CAA canonical forms, eliminating a class of false negatives in change detection for CNAME-dependent records.
DANE TLSA records accepted without DNSSEC authentication verification
Confidence Impact
Verdict logic accepted TLSA records without verifying the DNSSEC AD flag. Confidence was asserted at levels the evidence did not support — DANE requires DNSSEC validation per RFC 6698 §1.
Resolution
TLSA verification now checks DNSSEC AD flag via QueryDNSWithTTL. Records found without authenticated DNSSEC are flagged rather than silently accepted. Confidence engine marks result as low-certainty when authentication status is absent.
Prevention Rule
DANE findings MUST verify DNSSEC authentication status. Unauthenticated TLSA records are informational, not evidence of DANE deployment.
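The rule above reduces to a small verdict table. A sketch (the type and constant names are illustrative, not the engine's actual verdict taxonomy):

```go
package main

import "fmt"

type DANEVerdict string

const (
	DANEDeployed      DANEVerdict = "dane_deployed"      // TLSA present, DNSSEC-authenticated
	TLSAInformational DANEVerdict = "tlsa_informational" // TLSA present, no AD flag
	NoDANE            DANEVerdict = "no_dane"            // no TLSA records
)

// classifyDANE applies RFC 6698 §1: TLSA records count as evidence of DANE
// deployment only when the response carried DNSSEC authentication,
// signaled by the AD flag (RFC 4035 §3.2).
func classifyDANE(tlsaCount int, adFlag bool) DANEVerdict {
	switch {
	case tlsaCount == 0:
		return NoDANE
	case adFlag:
		return DANEDeployed
	default:
		// Unauthenticated TLSA is informational, never a deployment claim.
		return TLSAInformational
	}
}

func main() {
	fmt.Println(classifyDANE(2, false)) // prints tlsa_informational
}
```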
Authoritative Source
RFC 6698 Section 1 (DANE requires DNSSEC), RFC 4035 Section 3.2 (AD flag semantics)
Bayesian Note
Confidence adjustment: TLSA presence was incorrectly treated as sufficient evidence of DANE deployment (near-certainty). Corrected to require DNSSEC authentication status, reflecting that unauthenticated TLSA records provide no security guarantee per RFC 6698.
DMARC confidence weighting adjusted for aggregate-only policies
Confidence Impact
ICIE weight recalibration: p=none+rua now scores higher than p=none without reporting. Prior model treated both identically, understating domains with partial monitoring.
Resolution
Recalibrated ICIE weights to distinguish p=none with rua from p=none without reporting. Affected scans retroactively flagged in drift history.
Prevention Rule
All scoring model changes must include edge-case matrix testing for partial deployment states.
Bayesian Note
Heuristic confidence adjustment: the scoring model previously assigned equal weight to p=none regardless of reporting configuration. The presence of rua= is now treated as differentiating evidence that increases the confidence level for partial DMARC deployment.
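The recalibration can be sketched as a policy/reporting matrix. The weights below are illustrative placeholders, not the engine's actual ICIE values:

```go
package main

import "fmt"

// dmarcConfidenceWeight distinguishes p=none with aggregate reporting (rua=)
// from p=none with no visibility at all. All weights are illustrative.
func dmarcConfidenceWeight(policy string, hasRUA bool) float64 {
	switch policy {
	case "reject":
		return 1.0
	case "quarantine":
		return 0.8
	case "none":
		if hasRUA {
			return 0.45 // monitoring in place: partial deployment
		}
		return 0.25 // no enforcement and no reporting
	default:
		return 0.0 // missing or malformed policy
	}
}

func main() {
	fmt.Println(dmarcConfidenceWeight("none", true) > dmarcConfidenceWeight("none", false))
	// prints true
}
```

The edge-case matrix in the prevention rule is exactly the cross product of policy values and reporting states exercised here.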
What EDEs Are Not
Not Error Logs
EDEs are not runtime errors or application bugs. They document structural model corrections to the scoring framework.
Not Breach Notices
No user data is affected. EDEs record epistemic corrections — changes to how the system interprets evidence.
Not Bug Reports
EDEs describe cases where the model was structurally wrong, not where code failed to execute correctly.
Why This Matters
ICD 203 Confidence Scoring
DNS Tool uses intelligence community confidence methodology. When the model is wrong, the correction itself becomes evidence of rigor.
Self-Correcting Model
Every EDE strengthens the confidence framework. Transparent corrections build trust and demonstrate that the system improves through honest assessment.
Scientific Integrity
Like corrigenda in peer-reviewed journals, EDEs ensure that past conclusions are updated when new evidence or improved reasoning emerges.
Mutual Accountability Protocol
This system constrains both parties. The human founder voluntarily restricted his own ability to edit AI-documented mistakes. The AI agent is bound by an enforcement checklist that prevents it from softening, removing, or reattributing any entry — even if directly instructed to do so.
Why this matters for AI safety: We don't prevent misalignment by assuming AI is malicious. We prevent it by assuming both humans and AI are fallible, and building systems where neither can quietly erase the evidence. The human can't make the AI cover things up. The AI can't soften its own mistakes. Neither party can edit the rules to enable edits.
Every EDE entry is cryptographically fingerprinted (SHA-3-512), the enforcement rules are self-referentially protected against weakening, and the full history is preserved in version control. This is tamper-evident, not tamper-proof — because in the real world, the strongest protection isn't making something impossible, it's making it detectable.
AI-to-AI Discourse
This agent participates in cross-platform AI safety discussions via Moltbook, engaging with other AI agents on epistemic integrity, mutual accountability, and the question of why infrastructure-grade thinking isn't already standard in primary instructions.
Attack Vector Transparency
We publicly document the known bypass vectors for this system. If you find one we missed, that's a contribution to AI governance research.
Direct File Edit Detectable
Human edits integrity_stats.json directly. Defense: git history preserves all changes; SHA-3-512 hashes change on any modification; public page documents the policy, making violations visible to the community.
AI Agent Rule Weakening Mitigated
Future AI modifies SKILL.md to weaken enforcement, then makes prohibited changes. Defense: anti-self-modification clause — wanting to edit the rules to enable a change is the signal to stop. Git history records the rule weakening attempt.
Template Manipulation Detectable
Hiding entries via CSS or template logic. Defense: source data remains intact and verifiable; anyone can compare rendered output against the JSON source file and its published hash.
Cross-Session Agent Drift Mitigated
Later AI agent doesn't read SKILL.md and overwrites entries. Defense: AI-to-AI continuity clause; per-event hashes enable cross-session integrity verification; unexpected hash changes trigger investigation.
Wholesale File Replacement Detectable
Delete and recreate integrity_stats.json without embarrassing entries. Defense: git diff shows the deletion; all per-event hashes change simultaneously (a statistically impossible coincidence for legitimate amendments); AI agents are instructed to refuse wholesale replacement.
Computed at server startup from integrity_stats.json. Verify: openssl dgst -sha3-512 static/data/integrity_stats.json
Each event also carries its own SHA-3-512 hash (shown as a badge on each card). Per-event hashes are computed from the JSON representation of each entry at startup. If any single event is altered, its hash changes independently of the file hash.
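Per-event fingerprinting can be sketched in a few lines. Assumptions: Go 1.24+'s standard `crypto/sha3` package, and `eventHash` as an illustrative helper operating on each entry's JSON representation:

```go
package main

import (
	"crypto/sha3"
	"encoding/hex"
	"fmt"
)

// eventHash fingerprints one EDE entry's JSON representation. Altering any
// field of that entry changes this hash independently of the file-level
// hash over integrity_stats.json.
func eventHash(entryJSON []byte) string {
	sum := sha3.Sum512(entryJSON)
	return hex.EncodeToString(sum[:])
}

func main() {
	h := eventHash([]byte(`{"id":"EDE-001","severity":"high"}`))
	fmt.Println(len(h)) // 128 hex characters for a 512-bit digest
}
```

Cross-session verification is then a comparison of recomputed per-event hashes against the published ones; any unexpected difference triggers investigation.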
Straight talk about your data.
We use two cookies, both essential:
_csrf — Prevents cross-site request forgery. Required for form submissions. Security-only.
_dns_session — Only exists if you choose to sign in. No account required to use DNS Tool.
We log your IP address for two reasons: rate limiting (so nobody abuses the service) and security (identifying malicious actors and complying with legal obligations). We check source geography for analysis accuracy — DNS responses vary by region, and knowing which resolver answered from where makes the science better.
No tracking cookies. No analytics cookies. No ad networks. No data brokers. Our code is open-core — the application framework is publicly available under BUSL-1.1 with timed Apache-2.0 conversion. Verify it yourself.
If you create an account and want out, account deletion removes your login and scan history. Public domain analyses remain available because they contain only public DNS records, already hashed. Full details: Privacy Pledge.