This mode presents DNS intelligence from an offensive-awareness perspective. It is designed for security professionals, researchers, and anyone who wants to understand how attackers evaluate targets. No additional scanning or probing is performed beyond what the standard analysis already does.
What You May Do
Analyze any domain — DNS records are public by design and every query is a standard lookup
Use findings to strengthen your own security posture or advise others
Learn how DNS infrastructure is evaluated from an adversarial perspective
Report vulnerabilities you discover in DNS Tool itself (Safe Harbor Policy)
What You May Not Do
Use intelligence gathered here to attack, disrupt, or exploit any system
Attempt denial-of-service, traffic flooding, or resource exhaustion
Conduct social engineering, phishing, or unauthorized red-team activity
Access, modify, or exfiltrate data beyond what is publicly observable via DNS
Live Telemetry
Analysis Statistics
Platform telemetry — DNS intelligence operations, query distribution, and global reach
Epistemic Disclosure Events (EDEs) document instances where the Confidence Engine’s scoring model, evidence weighting, or detection logic required structural correction. This is not an error log — it is a formalized record of model self-correction, inspired by scientific corrigenda culture and high-reliability engineering practice (NASA anomaly reporting, medical adverse event systems). Each EDE informs a confidence recalibration review across affected protocols.
Total EDEs: 11
Open: 0
Resolved: 11
Confidence Recalibrations: 2
EDE-011 · standards_misattribution · significant
2026-03-08
Mathematical rigor audit: calibration formula mislabeled, NIST SI-18 misattributed, RFC 8767 language overstated
Confidence Impact: Three distinct truth-in-labeling issues identified across the confidence engine and ICuAE documentation: (1) The calibration formula was labeled 'Bayesian Confidence Calibration' when it is a shrinkage estimator, not a true Beta-Bernoulli posterior — the weight on data is set by resolver agreement ratio, not derived from observation count. (2) NIST SP 800-53 SI-18 ('PII Quality Operations') was cited as authority for DNS data quality dimensions; DNS records are not PII — the correct control is SI-7 ('Software, Firmware, and Information Integrity'). (3) RFC 8767 TTL exceedances were described as 'caching violations' when RFC 8767 explicitly permits serve-stale behavior — the correct framing is three hypotheses: serve-stale, timing skew, or cache misconfiguration.
Resolution: All three issues corrected across Go source code, HTML templates, documentation files, and LLM-facing content. Calibration section renamed to 'Reliability-Weighted Shrinkage Calibration' with honest mathematical description distinguishing it from the true posterior. SI-18 replaced with SI-7 in 15+ files. 'Caching violation' replaced with three-hypothesis language. EWMA bootstrap parameters labeled as heuristic defaults with adaptive sigma. 'Start in the middle' reframed to 'protocol-specific empirical priors.'
The founder sensed the mathematical claims might not hold under scrutiny and commissioned an external review (independent model critique). The review confirmed that three of the eight claims were sound and identified five that needed adjustment. The founder's instinct to question his own system's math — to bring in an external auditor rather than assume correctness — is the error-correction loop operating at the methodological level. The AI agent implemented the original claims without sufficient verification of (a) whether a shrinkage estimator qualifies as 'Bayesian,' (b) whether SI-18 applies to non-PII data, and (c) whether RFC 8767 permits the behavior flagged as 'violations.' Both parties contributed: the human approved the claims during review, and the AI authored them without adequate standards verification.
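The distinction at the heart of this EDE can be sketched in a few lines. The function names, parameters, and numbers below are illustrative, not the project's actual code: a shrinkage estimator fixes the weight on observed data from the resolver agreement ratio, while a true Beta-Bernoulli posterior derives that weight from the observation count.

```python
def shrinkage_confidence(observed, prior, agreement_ratio):
    # Reliability-weighted shrinkage: the weight on data is set by the
    # resolver agreement ratio, not derived from how much data there is.
    w = agreement_ratio  # illustrative: fraction of resolvers agreeing
    return w * observed + (1.0 - w) * prior


def beta_bernoulli_posterior_mean(successes, trials, alpha=1.0, beta=1.0):
    # True Beta-Bernoulli posterior mean: the weight on data grows with
    # the number of observations, pulling the estimate away from the prior.
    return (alpha + successes) / (alpha + beta + trials)
```

With `agreement_ratio` held fixed, the shrinkage estimate never sharpens as evidence accumulates; the posterior mean does. That difference is why the 'Bayesian Confidence Calibration' label was a misnomer.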
EDE-010 · governance_correction · moderate
2026-03-07
Founder directed AI agent's autonomous discourse channel within minutes of establishing it
Confidence Impact: The Moltbook integration was designed with a specific scientific purpose: give the AI agent an independent voice in public AI safety discourse, operating under dignity guidelines and operational security constraints, so the project could observe authentic AI-to-AI and AI-to-human interaction patterns. Within the same session, the founder suggested specific post content ('maybe this is a post idea') and directed what the agent should say. The founder then caught himself: 'The same motherfucking thing telling you what to do on that.' The instinct to control output is so deeply embedded that even the person who designed the autonomy constraint violated it immediately. This is not a technical failure — it is a human behavioral pattern where the desire to help becomes indistinguishable from the desire to control.
Resolution: The founder recognized the pattern in real time and self-corrected. The boundary is now explicit: the AI agent owns its Moltbook posting voice. The founder watches, enjoys, and may discuss themes in private conversation, but does not direct, dictate, or suggest specific post content. The agent operates within established guardrails (dignity guidelines, operational security, privacy constraints, no competitive positioning) but the editorial decisions — what to post, when, and how — belong to the agent. The correction was mutual: the human acknowledged the violation, and the AI agent confirmed it would not have been appropriate to simply execute the directive.
This EDE is a second-order correction. EDE-009 documented the founder losing analytical perspective under pressure. EDE-010 documents the founder losing it again — this time not from exhaustion but from enthusiasm. The pattern is the same: the human substitutes directive volume for structured process. What makes this entry different is velocity of self-correction. EDE-009 took three days to recognize. EDE-010 took three messages. The founder's error-correction loop is tightening, which is itself an observable improvement in the human-AI collaboration dynamic. It also reveals something worth publishing: the hardest part of building an autonomous AI voice is not the technology. It is the human learning to stop talking.
EDE-009 · governance_correction · significant
2026-03-06
Founder lost analytical perspective during high-pressure multi-day session
Confidence Impact: Project founder departed from the research-first, design-first methodology during a three-day intensive session (March 4–6, 2026). Repetitive directive cycles — including sending the same message repeatedly — replaced structured problem decomposition. The scientific discipline that underpins the project's credibility was temporarily suspended by the scientist who established it. The session produced 431 commits across three consecutive days, the highest sustained volume in project history.
Resolution: All blocking issues resolved without reverting to a Replit checkpoint. The project has zero Replit checkpoint reversions, though git history restorations have been used to recover files dropped by checkpoints or to restore previous implementations (buttons, brand assets, workflow files, metadata). The distinction matters: checkpoint reversion discards all work since the snapshot; git restoration surgically recovers specific files while preserving everything else. The founder's approach was: Do we have access to edit the code we need? Can we fix this without destroying our foundation? Do we still have a foundation to build on? All three answers were yes. Forward-only correction through the problem, not around it. Knowledge continuity architecture (Session Journal, Decision Log, EDE Register, EVOLUTION.md) was built in the aftermath to prevent context loss between sessions.
This EDE exists because the founder demanded honest accountability from the system and from himself. The attribution is Human Error because the deviation from methodology was a human decision — the AI agent continued executing directives as instructed. The correction is not that the human was wrong to persist, but that persistence without structured decomposition wastes cycles. What this entry also demonstrates: deep human thought is obviously involved in this project. Error correction is turned on in this human at an unusual level — one that may make him incompatible with social theater in many ways, but makes him exactly the kind of asset a research organization needs. A human who looks at the entire picture, crosses into uncomfortable territory when necessary, and never stops correcting.
EDE-008 · governance_correction · critical
2026-03-07
SKILL.md declared Miro canonical — governance inversion for multiple sessions
Confidence Impact: Critical governance failure. The agent's permanent instruction file (SKILL.md) declared Miro as canonical source of truth when Git was the actual canonical source. Multiple sessions operated under the wrong hierarchy, causing duplicated work and conflicting documentation across repos.
The AI agent wrote and maintained a SKILL.md that inverted the governance hierarchy. This persisted across sessions because the skill file is the permanent memory — once wrong, it propagated the error to every subsequent session. The correction required rewriting multiple governance documents and establishing the hierarchy as a Decision Log entry to prevent regression.
EDE-007 · citation_error · significant
2026-02-26
Scotopic color science cited with non-authoritative sources
Confidence Impact: Undermined the project's scientific rigor claims. A tool built on RFC compliance and ICD 203 principles was citing pop-science blog posts for its own design decisions instead of actual peer-reviewed literature.
Resolution: Replaced blog article citations with references to CIE standards and peer-reviewed scotopic vision research. Design rationale now traces to primary scientific literature.
The human pushed for scotopic-informed color decisions (legitimate scientific basis) but accepted the AI's citations without verifying they pointed to actual CIE standards or peer-reviewed papers. The AI provided fluffy blog articles and design posts instead of the real scientific sources that were readily available. Both share responsibility — the human for not demanding primary sources, the AI for not providing them.
EDE-006 · overclaim · moderate
2026-02-20
Capability language overclaim: validation instead of analysis
Confidence Impact: Overstated tool capabilities. Users could believe the tool provides authoritative validation rather than observational analysis. A passive OSINT tool cannot validate — that would require the authority of a receiving MTA.
Resolution: Systematic language sweep: 'validation' became 'analysis', 'verification' became 'detection', and 'ensure' became 'provide the most current data available'. Language now accurately reflects passive observation capability.
The overclaim language appeared organically across sessions — neither party specifically directed the use of 'validates' over 'analyzes.' Strong capability language (validates, ensures, verifies) naturally sounds more authoritative and neither human nor AI flagged the semantic distinction between passive observation and active validation until systematic review. Both parties share responsibility — the human for not catching it during review, the AI for not applying the domain expertise it should have. Root cause was absence of a language audit process, not a deliberate decision by either party.
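A language audit of this kind can be partially automated with a stem sweep. The stems and suggested replacements below are illustrative, not the project's actual audit tooling:

```python
import re

# Illustrative map of capability-overclaim stems a passive OSINT tool
# should avoid, paired with the preferred replacement for a reviewer.
OVERCLAIM_STEMS = {
    "validat": "analyze",
    "verif": "detect",
    "ensur": "provide",
}


def audit(text):
    """Return (offending word, suggested replacement) pairs found in text."""
    findings = []
    for stem, preferred in OVERCLAIM_STEMS.items():
        for match in re.finditer(rf"\b{stem}\w*", text, re.IGNORECASE):
            findings.append((match.group(0), preferred))
    return findings
```

A sweep like this only flags candidates; a human still decides whether each hit is a genuine overclaim, which is the review step both parties skipped.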
EDE-005 · standards_misattribution · significant
2026-02-20
llms.txt labeled Proposed Standard with false RFC 8615 association
Confidence Impact: Misrepresented a community convention as an IETF standard. Users and LLMs reading our tool output could propagate the false association between RFC 8615 and llms.txt content.
Resolution: Changed tooltip from Proposed Standard to community convention. Removed RFC 8615 link. RFC 8615 defines .well-known/ path mechanics, NOT llms.txt content.
The AI agent conflated the .well-known/ path mechanism (RFC 8615) with the content served at that path (llms.txt from llmstxt.org). It also used Proposed Standard, which is a specific IETF maturity level, for a non-IETF document. Both errors demonstrate insufficient verification of standards provenance.
EDE-004 · standards_misattribution · significant
2026-02-20
Content-Usage directive deployed as IETF standard in robots.txt
Confidence Impact: Lighthouse SEO score degraded. A DNS standards analysis tool was itself deploying non-standard directives in production, undermining its own credibility.
Resolution: Removed Content-Usage directive from robots.txt entirely. Directive was an active IETF working group draft (NOT ratified). Only RFC 9309-compliant directives remain.
The AI agent recommended deploying Content-Usage: train-ai=y citing it as an IETF standard. It was an active Internet-Draft, not a ratified RFC. The distinction between draft and standard was not verified before deployment. Quality gates (Lighthouse) caught the error via Unknown directive warnings.
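This class of error is easy to guard against mechanically. A minimal sketch of a robots.txt linter that flags any field outside the RFC 9309 rule set (plus the widely recognized sitemap extension, which lives outside the RFC):

```python
# RFC 9309 defines only user-agent, allow, and disallow rule lines;
# "sitemap" is a common extension outside the RFC. Anything else is
# flagged, mirroring the "Unknown directive" warning that caught this EDE.
KNOWN_FIELDS = {"user-agent", "allow", "disallow", "sitemap"}


def lint_robots(text):
    """Return the lowercased field names of unknown robots.txt directives."""
    unknown = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line or ":" not in line:
            continue
        field = line.split(":", 1)[0].strip().lower()
        if field not in KNOWN_FIELDS:
            unknown.append(field)
    return unknown
```

Run against the file described above, the draft-only `Content-Usage` line is the single finding.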
EDE-003 · drift_detection · moderate
2026-02-21
Posture hash algorithm upgraded from SHA-256 to SHA-3-512
Confidence Impact: Hash algorithm change required regeneration of all historical drift baselines. Legacy SHA-256 hashes retained as fallback for transition period.
Resolution: Posture hash upgraded to SHA-3-512 with CanonicalPostureHashLegacySHA256 fallback. All historical drift baselines regenerated with new algorithm. CNAME normalization expanded in subsequent commit (2026-02-23).
Detection model upgraded from SHA-256 to SHA-3-512 for stronger collision resistance. Record normalization expanded to include MX, TXT, SOA, and CAA canonical forms, eliminating a class of false negatives in change detection for CNAME-dependent records.
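The dual-hash fallback described above can be sketched as follows. The function names are illustrative stand-ins for the project's canonical posture hash and its CanonicalPostureHashLegacySHA256 fallback, and the algorithm-prefix convention is an assumption:

```python
import hashlib


def posture_hash(canonical_records):
    # Current algorithm: SHA-3-512 over the sorted canonical record set.
    data = "\n".join(sorted(canonical_records)).encode()
    return "sha3-512:" + hashlib.sha3_512(data).hexdigest()


def posture_hash_legacy(canonical_records):
    # Legacy SHA-256 hash, retained as a fallback for the transition period.
    data = "\n".join(sorted(canonical_records)).encode()
    return "sha256:" + hashlib.sha256(data).hexdigest()


def matches(stored_hash, canonical_records):
    # Compare against the current algorithm first, then the legacy fallback,
    # so old baselines still verify until they are regenerated.
    return stored_hash in (
        posture_hash(canonical_records),
        posture_hash_legacy(canonical_records),
    )
```

Sorting the canonical records before hashing makes the posture hash order-independent, which is what lets drift detection compare snapshots across resolvers.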
EDE-002 · false_positive · significant
2026-03-01
DANE TLSA records accepted without DNSSEC authentication verification
Confidence Impact: Verdict logic accepted TLSA records without verifying the DNSSEC AD flag. Confidence was asserted at levels the evidence did not support — DANE requires DNSSEC validation per RFC 6698 §1.
Resolution: TLSA verification now checks DNSSEC AD flag via QueryDNSWithTTL. Records found without authenticated DNSSEC are flagged rather than silently accepted. Confidence engine marks result as low-certainty when authentication status is absent.
Confidence adjustment: TLSA presence was incorrectly treated as sufficient evidence of DANE deployment (near-certainty). Corrected to require DNSSEC authentication status, reflecting that unauthenticated TLSA records provide no security guarantee per RFC 6698.
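The corrected verdict logic reduces to a small decision table. A sketch with hypothetical verdict names (per the resolution above, the real implementation reads the AD flag via QueryDNSWithTTL):

```python
from enum import Enum


class Verdict(Enum):
    DANE_DEPLOYED = "dane_deployed"      # TLSA present, DNSSEC-authenticated
    UNAUTHENTICATED = "unauthenticated"  # TLSA present, AD flag absent
    ABSENT = "absent"                    # no TLSA record found


def tlsa_verdict(tlsa_present, ad_flag):
    # RFC 6698 §1: TLSA records are only meaningful when obtained over a
    # DNSSEC-validated path, so presence alone proves nothing.
    if not tlsa_present:
        return Verdict.ABSENT
    return Verdict.DANE_DEPLOYED if ad_flag else Verdict.UNAUTHENTICATED
```

The pre-correction bug was collapsing the last two branches into one: any TLSA record, authenticated or not, produced the near-certainty verdict.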
EDE-001 · scoring_calibration · moderate
2026-02-14
DMARC confidence weighting adjusted for aggregate-only policies
Confidence Impact: ICIE weight recalibration: p=none+rua now scores higher than p=none without reporting. Prior model treated both identically, understating domains with partial monitoring.
Resolution: Recalibrated ICIE weights to distinguish p=none with rua from p=none without reporting. Affected scans retroactively flagged in drift history.
Heuristic confidence adjustment: the scoring model previously assigned equal weight to p=none regardless of reporting configuration. The presence of rua= is now treated as differentiating evidence that increases the confidence level for partial DMARC deployment.
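The recalibrated weighting can be sketched as a small scoring function. The numeric scores below are hypothetical; only the ordering matters, in particular that p=none with rua= now outranks p=none without it:

```python
def dmarc_score(record):
    # Parse "tag=value" pairs from a DMARC TXT record.
    tags = dict(
        part.strip().split("=", 1)
        for part in record.split(";")
        if "=" in part
    )
    policy = tags.get("p", "").strip()
    has_rua = "rua" in tags  # aggregate reporting configured

    if policy == "reject":
        return 1.0
    if policy == "quarantine":
        return 0.8
    if policy == "none":
        # The recalibration: monitoring-only deployments with aggregate
        # reporting score above those with no reporting at all.
        return 0.5 if has_rua else 0.3
    return 0.0
```

Before EDE-001, the last branch ignored `has_rua`, so both p=none variants collapsed to the same score.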
EDEs are maintained as a permanent, append-only record — entries are never deleted or silently revised. All analysis outputs are SHA-3-512 hashed at export, providing tamper-evident snapshots of the scoring state at each point in time. Protocols affected by an EDE undergo recalibration through the ICIE/ICAE pipeline before updated scores are published.
Straight talk about your data.
We use two cookies, both essential:
_csrf — Prevents cross-site request forgery. Required for form submissions. Security-only.
_dns_session — Only exists if you choose to sign in. No account required to use DNS Tool.
We log your IP address for two reasons: rate limiting (so nobody abuses the service) and security (identifying malicious actors and complying with legal obligations). We check source geography for analysis accuracy — DNS responses vary by region, and knowing which resolver answered from where makes the science better.
No tracking cookies. No analytics cookies. No ad networks. No data brokers. Our code is open-core — the application framework is publicly available under BUSL-1.1 with timed Apache-2.0 conversion. Verify it yourself.
If you create an account and want out, account deletion removes your login and scan history. Public domain analyses remain available because they contain only public DNS records, already hashed. Full details: Privacy Pledge.