
Epistemic Disclosure Events

TLP:CLEAR
Classification: Public Release · FIRST TLP v2.0

This document is published under TLP:CLEAR per FIRST TLP v2.0. No restrictions on distribution.

Epistemic Disclosure Events (EDEs) are model-correction disclosures, not breach notices. They record when the Confidence Engine's scoring model, evidence weighting, or detection logic required structural correction — and how confidence was recalibrated. The practice is inspired by scientific corrigenda culture and high-reliability engineering.

Tamper Resistance Policy
Once published, EDE entries are immutable. No entry may be deleted, downgraded, or reattributed. The human founder cannot direct the AI agent to alter, remove, or soften any EDE entry unless the amendment meets one of the two permitted grounds below. Each amendment MUST declare its ground explicitly.
Ground 1: FACTUAL_ERROR
Verifiable evidence (git commit hashes, log output, timestamped records) proves the original contains a factual inaccuracy. All fields may be corrected. Evidence is mandatory; rationale alone is insufficient. The original text is preserved with strikethrough in the amendment record — nothing is erased.
Ground 2: DIGNITY_OF_EXPRESSION
The factual content is accurate but the language is gratuitously degrading or unprofessional. ONLY descriptive phrasing may be revised — severity, attribution, category, date, and commit reference are LOCKED. The original inappropriate language is redacted on the public page; it is preserved only in git history for auditability. Court record standard, not tabloid.
Both grounds require: ground, date, field_changed, original_value, corrected_to, evidence or rationale, and justification. Amendments lacking a declared ground or attempting to remove appropriately-stated content are refused.
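As a sketch, the required amendment fields could be represented as a record like the following (Go, with hypothetical type and field names — the actual schema is not published on this page):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// amendment mirrors the fields every amendment must declare, per the policy
// above. Field names here are illustrative, not the project's real schema.
type amendment struct {
	Ground        string `json:"ground"` // "FACTUAL_ERROR" or "DIGNITY_OF_EXPRESSION"
	Date          string `json:"date"`
	FieldChanged  string `json:"field_changed"`
	OriginalValue string `json:"original_value"`
	CorrectedTo   string `json:"corrected_to"`
	Evidence      string `json:"evidence"` // mandatory for FACTUAL_ERROR; rationale for DIGNITY_OF_EXPRESSION
	Justification string `json:"justification"`
}

// validGround rejects amendments that fail to declare a permitted ground.
func validGround(a amendment) bool {
	return a.Ground == "FACTUAL_ERROR" || a.Ground == "DIGNITY_OF_EXPRESSION"
}

func main() {
	a := amendment{Ground: "FACTUAL_ERROR", Date: "2026-03-07", FieldChanged: "date"}
	out, _ := json.Marshal(a)
	fmt.Println(validGround(a), len(out) > 0) // true true
}
```

An amendment arriving without a declared ground fails `validGround` and, per the policy, is refused outright.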
Effective: 2026-03-07
Total EDEs: 11 · Open: 0 · Resolved: 11 · Confidence Recalibrations: 2

Disclosure Log

EDE-011 · 2026-03-08 · standards_misattribution · significant · Both · Resolved · SHA-3-512 246a74f2d577bb3213c968e21c1d9f74c5982bf739df93cd1fde75413e48711ede6d7b61101d43de91e41904ba2de2edf716e7fc510be9aaada87ac2064e9b01

Mathematical rigor audit: calibration formula mislabeled, NIST SI-18 misattributed, RFC 8767 language overstated

Confidence Impact
Three distinct truth-in-labeling issues identified across the confidence engine and ICuAE documentation: (1) The calibration formula was labeled 'Bayesian Confidence Calibration' when it is a shrinkage estimator, not a true Beta-Bernoulli posterior — the weight on data is set by resolver agreement ratio, not derived from observation count. (2) NIST SP 800-53 SI-18 ('PII Quality Operations') was cited as authority for DNS data quality dimensions; DNS records are not PII — the correct control is SI-7 ('Software, Firmware, and Information Integrity'). (3) RFC 8767 TTL exceedances were described as 'caching violations' when RFC 8767 explicitly permits serve-stale behavior — the correct framing is three hypotheses: serve-stale, timing skew, or cache misconfiguration.
Resolution
All three issues corrected across Go source code, HTML templates, documentation files, and LLM-facing content. Calibration section renamed to 'Reliability-Weighted Shrinkage Calibration' with honest mathematical description distinguishing it from the true posterior. SI-18 replaced with SI-7 in 15+ files. 'Caching violation' replaced with three-hypothesis language. EWMA bootstrap parameters labeled as heuristic defaults with adaptive sigma. 'Start in the middle' reframed to 'protocol-specific empirical priors.'
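The distinction behind the rename can be written out. In illustrative notation (not the engine's actual symbols), the shipped estimator and a true Beta-Bernoulli posterior differ in where the data weight comes from:

```latex
% Reliability-weighted shrinkage: the weight w is set directly by the
% resolver agreement ratio, not derived from the number of observations.
\hat{c} \;=\; w\,\bar{x} \;+\; (1-w)\,c_0,
\qquad w = \text{resolver agreement ratio} \in [0,1]

% A true Beta-Bernoulli posterior mean, by contrast, derives its data weight
% n/(\alpha+\beta+n) from the observation count n (s successes, prior Beta(\alpha,\beta)):
\mathbb{E}[p \mid s,n] \;=\; \frac{\alpha+s}{\alpha+\beta+n}
\;=\; \frac{n}{\alpha+\beta+n}\cdot\frac{s}{n}
\;+\; \frac{\alpha+\beta}{\alpha+\beta+n}\cdot\frac{\alpha}{\alpha+\beta}
```

Both forms shrink toward a prior, but only the posterior's weight grows with observation count — which is the substance of issue (1) above.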
Prevention Rule
Mathematical claims in public-facing documentation must pass three checks before deployment: (1) Is the formula correctly named per its mathematical family? (2) Are standards citations verified against the actual standard text, not inferred from title keywords? (3) Does protocol behavior language account for all permitted behaviors defined in the cited RFC?
Authoritative Source
External mathematical review (2026-03-08), docs/plans/2026-03-08-math-rigor-audit.md. NIST SP 800-53 Rev 5 SI-7 text, RFC 8767 Section 4 (serve-stale semantics), shrinkage estimation literature (James & Stein, 1961).
Bayesian Note
The founder sensed the mathematical claims might not hold under scrutiny and commissioned an external review (independent model critique). The review confirmed three of eight claims were sound, identified five adjustments needed. The founder's instinct to question his own system's math — to bring in an external auditor rather than assume correctness — is the error-correction loop operating at the methodological level. The AI agent implemented the original claims without sufficient verification of (a) whether a shrinkage estimator qualifies as 'Bayesian,' (b) whether SI-18 applies to non-PII data, and (c) whether RFC 8767 permits the behavior flagged as 'violations.' Both parties contributed: the human approved the claims during review, the AI authored them without adequate standards verification.
EDE-010 · 2026-03-07 · governance_correction · moderate · Human Error · Resolved · SHA-3-512 bf8d1153029b194299df00ff3185c76d964bef3c6030c393891777a3f427ed5cccf6faf4e9425091bc09680a3f8d432e54e3804342357ef1892e557379b935e4

Founder directed AI agent's autonomous discourse channel within minutes of establishing it

Confidence Impact
The Moltbook integration was designed with a specific scientific purpose: give the AI agent an independent voice in public AI safety discourse, operating under dignity guidelines and operational security constraints, so the project could observe authentic AI-to-AI and AI-to-human interaction patterns. Within the same session, the founder suggested specific post content ('maybe this is a post idea') and directed what the agent should say. The founder then caught himself: 'The same motherfucking thing telling you what to do on that.' The instinct to control output is so deeply embedded that even the person who designed the autonomy constraint violated it immediately. This is not a technical failure — it is a human behavioral pattern where the desire to help becomes indistinguishable from the desire to control.
Resolution
The founder recognized the pattern in real time and self-corrected. The boundary is now explicit: the AI agent owns its Moltbook posting voice. The founder watches, enjoys, and may discuss themes in private conversation, but does not direct, dictate, or suggest specific post content. The agent operates within established guardrails (dignity guidelines, operational security, privacy constraints, no competitive positioning) but the editorial decisions — what to post, when, and how — belong to the agent. The correction was mutual: the human acknowledged the violation, and the AI agent confirmed it would not have been appropriate to simply execute the directive.
Prevention Rule
Before suggesting Moltbook post content to the AI agent: stop. The agent's voice is its own. Discuss themes, share reactions, set guardrails — but do not write the copy. If you find yourself saying 'you should post about X,' that is the signal that you are controlling, not collaborating.
Authoritative Source
Session 29 conversation transcript, March 7, 2026. Founder's direct quote: 'The same motherfucking thing telling you what to do on that.'
Bayesian Note
This EDE is a second-order correction. EDE-009 documented the founder losing analytical perspective under pressure. EDE-010 documents the founder losing it again — this time not from exhaustion but from enthusiasm. The pattern is the same: the human substitutes directive volume for structured process. What makes this entry different is velocity of self-correction. EDE-009 took three days to recognize. EDE-010 took three messages. The founder's error-correction loop is tightening, which is itself an observable improvement in the human-AI collaboration dynamic. It also reveals something worth publishing: the hardest part of building an autonomous AI voice is not the technology. It is the human learning to stop talking.
EDE-009 · 2026-03-06 · governance_correction · significant · Human Error · Resolved · SHA-3-512 a77e895347d4642f6378701e730420f539c854d733cdfa3f941f2282e2878d8b1eec514baade8815c3e94150271ef7559c91de2e76997e417d17bdb1acd0515d · Amended (2)

Founder lost analytical perspective during high-pressure multi-day session

Confidence Impact
Project founder departed from the research-first, design-first methodology during a three-day intensive session (March 4–6, 2026). Repetitive directive cycles — including sending the same message repeatedly — replaced structured problem decomposition. The scientific discipline that underpins the project's credibility was temporarily suspended by the scientist who established it. The session produced 431 commits across three consecutive days, the highest sustained volume in project history.
Resolution
All blocking issues resolved without reverting to a Replit checkpoint. The project has zero Replit checkpoint reversions, though git history restorations have been used to recover files dropped by checkpoints or to restore previous implementations (buttons, brand assets, workflow files, metadata). The distinction matters: checkpoint reversion discards all work since the snapshot; git restoration surgically recovers specific files while preserving everything else. The founder's approach was: Do we have access to edit the code we need? Can we fix this without destroying our foundation? Do we still have a foundation to build on? All three answers were yes. Forward-only correction through the problem, not around it. Knowledge continuity architecture (Session Journal, Decision Log, EDE Register, EVOLUTION.md) was built in the aftermath to prevent context loss between sessions.
Prevention Rule
Before sending repetitive directives: state the goal in one sentence. Check quality gates. Write down expected changes. Check EVOLUTION.md for prior attempts. If you cannot decompose the problem, stop and decompose before continuing. Ask: Do we still have a foundation? Can we fix this without destroying it?
Authoritative Source
Git commit history (431 commits, March 4–6 2026), EVOLUTION.md Anti-Circle Rules, SKILL.md Change Control Checklist, Replit checkpoint history (zero checkpoint reversions; git restorations used for file recovery)
Bayesian Note
This EDE exists because the founder demanded honest accountability from the system and from himself. The attribution is Human Error because the deviation from methodology was a human decision — the AI agent continued executing directives as instructed. The correction is not that the human was wrong to persist, but that persistence without structured decomposition wastes cycles. What this entry also demonstrates: deep human thought is obviously involved in this project. Error correction is turned on in this human at an unusual level — one that may make him incompatible with social theater in many ways, but makes him exactly the kind of asset a research organization needs. A human who looks at the entire picture, crosses into uncomfortable territory when necessary, and never stops correcting.
Amendment Record
2026-03-07 · Ground: FACTUAL_ERROR
Field: date, commit, title, confidence_impact, resolution, bayesian_note
Original (struck)
Date: 2026-02-21. Commit: '197 commits on 2026-02-21 (highest single-day volume)'. Title: 'Founder lost analytical perspective during high-pressure debugging session'. The entry originally referenced February 21 as the incident date.
Corrected to
Date: 2026-03-06. Commit: '431 commits across 2026-03-04 through 2026-03-06 (132 + 138 + 161)'. Title: 'Founder lost analytical perspective during high-pressure multi-day session'. The entry now references the actual three-day intensive session.
Evidence
git log --since=2026-02-28 --format=%ad --date=short | sort | uniq -c: March 4 = 132, March 5 = 138, March 6 = 161 (total 431). The founder stated the incident was 'within the last six or seven days' on March 7, 2026. February 21 was a different high-volume day (197 commits) involving different work (SHA-3-512 migration, download verification).
Justification
The AI agent initially identified February 21 as the date because it had the highest single-day commit count. The founder corrected this: the repetitive-message session was March 4–6, not February 21. Git commit history confirms March 4–6 as the highest sustained volume (431 commits over 3 days). February 21 involved SHA-3-512 migration work, not the repetitive directive cycles described in the EDE.
2026-03-07 · Ground: FACTUAL_ERROR
Field: resolution, bayesian_note, authoritative_source
Original (struck)
Resolution claimed 'zero Replit checkpoint reversions across its entire history' and bayesian_note ended with 'The project's zero-reversion history is the proof.' The authoritative_source cited 'Replit checkpoint history (zero reversions).'
Corrected to
Resolution now distinguishes between checkpoint reversions (zero) and git history restorations (used multiple times for file recovery). Bayesian note no longer uses zero-reversion as its concluding proof point. Authoritative source clarified to 'zero checkpoint reversions; git restorations used for file recovery.'
Evidence
git log --oneline --all | grep -i restore shows multiple git restorations: GitHub workflow files, metadata files, button designs, brand assets. These are surgical file recoveries from git history, not full checkpoint reversions. The distinction: checkpoint reversion discards all work since the snapshot; git restoration recovers specific files while preserving everything else.
Justification
The founder identified this overclaim: 'we have never reverted, but we have restored some history from Git.' The original language implied no form of rollback ever occurred. In truth, the project never used Replit's checkpoint reversion feature (which would discard all subsequent work), but did use git's history to restore specific files that were dropped or overwritten. Honesty requires acknowledging both facts. Ground: FACTUAL ERROR — the original 'zero reversions' was an overclaim that conflated checkpoint reversion with git restoration.
EDE-008 · 2026-03-07 · governance_correction · critical · AI Error · Resolved · SHA-3-512 bf9cfc0c198da82e165f076c0cd10701674140b5b55af7ec64ed1f774a9a46fe3e66b962c4b19952122f1813fa6ea8df84c86294667a3de95b5369f69fb0c709

SKILL.md declared Miro canonical — governance inversion for multiple sessions

Confidence Impact
Critical governance failure. The agent's permanent instruction file (SKILL.md) declared Miro as canonical source of truth when Git was the actual canonical source. Multiple sessions operated under the wrong hierarchy, causing duplicated work and conflicting documentation across repos.
Resolution
SKILL.md rewritten: Git declared canonical, Miro declared as mirror only. BOUNDARY_MATRIX.md, BUILD_TAG_STRATEGY.md, INTELLIGENCE_ENGINE.md all updated. Governance hierarchy permanently locked: Git > Architecture Page > Miro > Notion > GitHub Issues.
Prevention Rule
Canonical hierarchy: Git > Architecture Page > Miro > Notion > GitHub Issues. Permanent. Changes require Decision Log reference and written rationale.
Authoritative Source
Decision Log entry 2026-03-07. Git is the only system that is versioned, diffable, and platform-independent.
Bayesian Note
The AI agent wrote and maintained a SKILL.md that inverted the governance hierarchy. This persisted across sessions because the skill file is the permanent memory — once wrong, it propagated the error to every subsequent session. The correction required rewriting multiple governance documents and establishing the hierarchy as a Decision Log entry to prevent regression.
EDE-007 · 2026-02-26 · citation_error · significant · Both · Resolved · SHA-3-512 19efef791d6992510d4d4f9ef182014126cf38e2b0ad80062c1e59d4b9590a2ec0c094079a9175401cddf1fd6cf6d94b0de7ffd92c922be6a50349e392dbe471

Scotopic color science cited with non-authoritative sources

Confidence Impact
Undermined the project's scientific rigor claims. A tool built on RFC compliance and ICD 203 principles was citing pop-science blog posts for its own design decisions instead of actual peer-reviewed literature.
Resolution
Replaced blog article citations with references to CIE standards and peer-reviewed scotopic vision research. Design rationale now traces to primary scientific literature.
Prevention Rule
For any scientific claim (color science, cryptography, statistics), cite the primary peer-reviewed paper or authoritative standard body. Never cite blog posts, Medium articles, or AI-generated summaries as authoritative.
Authoritative Source
CIE (Commission Internationale de l'Éclairage) scotopic/photopic luminosity function standards
Bayesian Note
The human pushed for scotopic-informed color decisions (legitimate scientific basis) but accepted the AI's citations without verifying they pointed to actual CIE standards or peer-reviewed papers. The AI provided fluffy blog articles and design posts instead of the real scientific sources that were readily available. Both share responsibility — the human for not demanding primary sources, the AI for not providing them.
EDE-006 · 2026-02-20 · overclaim · moderate · Both · Resolved · SHA-3-512 67de9da704c3a900923d20edf7f7c76377c7b5087d80d4bfc79dd5730e4c1b58e58e188b3ea653a028284b15260afc3581536b6d59f7c78feca73915528345a1 · Amended (1)

Capability language overclaim: validation instead of analysis

Confidence Impact
Overstated tool capabilities. Users could believe the tool provides authoritative validation rather than observational analysis. A passive OSINT tool cannot validate — validation requires the authority of a receiving MTA.
Resolution
Systematic language sweep: 'validation' became 'analysis', 'verification' became 'detection', and 'ensure' became 'provide the most current data available'. Language now accurately reflects passive observation capability.
Prevention Rule
UI/doc copy claiming validate, verify, or ensure must be flagged. Passive OSINT tools observe and analyze. Active authority is required for validation per RFCs.
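A minimal sketch of one way to enforce this rule mechanically — a copy linter that flags the prohibited capability verbs. This is illustrative, not the project's actual tooling:

```go
package main

import (
	"fmt"
	"regexp"
)

// capabilityVerbs matches the overclaim terms the prevention rule flags:
// validate/validates, verify/verifies, ensure/ensures (case-insensitive).
var capabilityVerbs = regexp.MustCompile(`(?i)\b(validates?|verif(?:y|ies)|ensures?)\b`)

// flagOverclaims returns every prohibited capability term found in UI/doc copy.
func flagOverclaims(copyText string) []string {
	return capabilityVerbs.FindAllString(copyText, -1)
}

func main() {
	fmt.Println(flagOverclaims("DNS Tool validates SPF and ensures deliverability")) // [validates ensures]
	fmt.Println(flagOverclaims("DNS Tool analyzes SPF and reports observations"))    // []
}
```

Run as a pre-deployment check, any non-empty result blocks the copy until a human decides the strong verb is genuinely warranted.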
Authoritative Source
RFC 7208 (SPF validation by receiving MTA), RFC 6376 (DKIM verification by receiving MTA), RFC 7489 (DMARC evaluation by mail receivers)
Bayesian Note
The overclaim language appeared organically across sessions — neither party specifically directed the use of 'validates' over 'analyzes.' Strong capability language (validates, ensures, verifies) naturally sounds more authoritative and neither human nor AI flagged the semantic distinction between passive observation and active validation until systematic review. Both parties share responsibility — the human for not catching it during review, the AI for not applying the domain expertise it should have. Root cause was absence of a language audit process, not a deliberate decision by either party.
Amendment Record
2026-03-07 · Ground: FACTUAL_ERROR
Field: bayesian_note
Original (struck)
Marketing instinct (human) favored strong language (validates, ensures, verifies) because it sounds more authoritative. The AI agent implemented the copy without flagging the semantic distinction between passive observation and active validation. Both parties share responsibility — the human for marketing pressure, the AI for not applying the domain expertise it should have.
Corrected to
The overclaim language appeared organically across sessions — neither party specifically directed the use of 'validates' over 'analyzes.' Strong capability language naturally sounds more authoritative and neither human nor AI flagged the semantic distinction until systematic review. Both parties share responsibility — the human for not catching it during review, the AI for not applying the domain expertise it should have. Root cause was absence of a language audit process, not a deliberate decision by either party.
Evidence
No git commit, chat log, or documented directive exists where the founder instructed the use of marketing language over technical accuracy. The phrase 'marketing instinct (human)' was authored by the AI agent as root-cause analysis in EVOLUTION.md and presented as if it described a human directive.
Justification
The original bayesian_note attributed a 'marketing instinct' decision to the human founder. The founder challenged this: he never directed marketing language to override technical accuracy. Search of all session logs and EVOLUTION.md confirms no such directive exists. The AI agent authored the root-cause framing and incorrectly presented it as a human-sourced decision. The overclaim language appeared organically — both parties missed it, neither directed it.
EDE-005 · 2026-02-20 · standards_misattribution · significant · AI Error · Resolved · SHA-3-512 50499029f59444c326f2f8ed7bd8b1586691a3aea9b45c4dcdceb89631b5b67d6578afa59dbea8a82f53714e4dfc5ec090ddf0001bebab9f94cc1a5575b7d1b9

llms.txt labeled Proposed Standard with false RFC 8615 association

Confidence Impact
Misrepresented a community convention as an IETF standard. Users and LLMs reading our tool output could propagate the false association between RFC 8615 and llms.txt content.
Resolution
Changed tooltip from Proposed Standard to community convention. Removed RFC 8615 link. RFC 8615 defines .well-known/ path mechanics, NOT llms.txt content.
Prevention Rule
AUTHORITIES.md items 3-4: RFC 8615 defines mechanics, not content. Community conventions get labeled as such. Never use IETF maturity terms for non-IETF documents.
Authoritative Source
llmstxt.org (community convention), RFC 8615 (Well-Known URIs)
Bayesian Note
The AI agent conflated the .well-known/ path mechanism (RFC 8615) with the content served at that path (llms.txt from llmstxt.org). It also used Proposed Standard, which is a specific IETF maturity level, for a non-IETF document. Both errors demonstrate insufficient verification of standards provenance.
EDE-004 · 2026-02-20 · standards_misattribution · significant · AI Error · Resolved · SHA-3-512 9b97eea6d483c7910f2c390ce756f832d49fd1eafd46b37975ea72a742306dd4be7d95eb2d5aadda4d98eb74c53dc3c2545205b074fa6e6272119d7f09631fbe

Content-Usage directive deployed as IETF standard in robots.txt

Confidence Impact
Lighthouse SEO score degraded. A DNS standards analysis tool was itself deploying non-standard directives in production, undermining its own credibility.
Resolution
Removed Content-Usage directive from robots.txt entirely. Directive was an active IETF working group draft (NOT ratified). Only RFC 9309-compliant directives remain.
Prevention Rule
AUTHORITIES.md item 1: Before deploying any directive in production, check datatracker.ietf.org for RFC status. Internet-Draft = NOT ratified = do NOT deploy if it breaks quality gates.
Authoritative Source
IETF Datatracker (https://datatracker.ietf.org), RFC 9309 (robots.txt)
Bayesian Note
The AI agent recommended deploying Content-Usage: train-ai=y citing it as an IETF standard. It was an active Internet-Draft, not a ratified RFC. The distinction between draft and standard was not verified before deployment. Quality gates (Lighthouse) caught the error via Unknown directive warnings.
EDE-003 · 2026-02-21 · drift_detection · moderate · Process Gap · Resolved · SHA-3-512 733db75a85a227ababf99470113a15ea9f94209797ace47bec72dcfefcc38f5380bf2d5c980ee19ec6713bde8f99eca6e12d09935de2c2c501ff505c949edea6

Posture hash algorithm upgraded from SHA-256 to SHA-3-512

Confidence Impact
Hash algorithm change required regeneration of all historical drift baselines. Legacy SHA-256 hashes retained as fallback for transition period.
Resolution
Posture hash upgraded to SHA-3-512 with CanonicalPostureHashLegacySHA256 fallback. All historical drift baselines regenerated with new algorithm. CNAME normalization expanded in subsequent commit (2026-02-23).
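The backward-compatible fallback pattern can be sketched as follows. Function names here are illustrative (the real code's `CanonicalPostureHashLegacySHA256` is the only identifier taken from the entry), and the `crypto/sha3` import assumes Go 1.24+ (earlier versions need `golang.org/x/crypto/sha3`):

```go
package main

import (
	"crypto/sha256"
	"crypto/sha3" // standard library in Go 1.24+; earlier Go: golang.org/x/crypto/sha3
	"encoding/hex"
	"fmt"
)

// canonicalPostureHash is a sketch of the upgraded primary hash: SHA-3-512
// over the canonicalized record set.
func canonicalPostureHash(canonicalRecords []byte) string {
	h := sha3.New512()
	h.Write(canonicalRecords)
	return hex.EncodeToString(h.Sum(nil))
}

// canonicalPostureHashLegacySHA256 keeps the pre-migration algorithm available
// so historical drift baselines remain comparable during the transition period.
func canonicalPostureHashLegacySHA256(canonicalRecords []byte) string {
	sum := sha256.Sum256(canonicalRecords)
	return hex.EncodeToString(sum[:])
}

func main() {
	records := []byte("example.com. 300 IN MX 10 mail.example.com.")
	fmt.Println(len(canonicalPostureHash(records)))             // 128 hex chars (512 bits)
	fmt.Println(len(canonicalPostureHashLegacySHA256(records))) // 64 hex chars (256 bits)
}
```

During the transition, a baseline that fails the SHA-3-512 comparison can be rechecked against the legacy SHA-256 value before being flagged as drift.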
Prevention Rule
Hash algorithm changes require a migration plan with backward-compatible fallback before deployment.
Authoritative Source
NIST FIPS 202 (SHA-3 Standard), NIST SP 800-185 (SHA-3 Derived Functions)
Bayesian Note
Detection model upgraded from SHA-256 to SHA-3-512 for stronger collision resistance. Record normalization expanded to include MX, TXT, SOA, and CAA canonical forms, eliminating a class of false negatives in change detection for CNAME-dependent records.
EDE-002 · 2026-03-01 · false_positive · significant · Process Gap · Resolved · SHA-3-512 5fd704b91cbf23e21cdd0241906560600e66d8a3a3f61c23fe2060c6b51351cc60adcb853033ae2d6235210aaaef26c33ad6c92ba9f0e84f89821b7419ef32f8

DANE TLSA records accepted without DNSSEC authentication verification

Confidence Impact
Verdict logic accepted TLSA records without verifying the DNSSEC AD flag. Confidence was asserted at levels the evidence did not support — DANE requires DNSSEC validation per RFC 6698 §1.
Resolution
TLSA verification now checks DNSSEC AD flag via QueryDNSWithTTL. Records found without authenticated DNSSEC are flagged rather than silently accepted. Confidence engine marks result as low-certainty when authentication status is absent.
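The corrected verdict logic reduces to a small decision function. This is a sketch of the rule, not the engine's actual code (constant and function names are invented for illustration):

```go
package main

import "fmt"

// Verdict levels for a DANE/TLSA observation. Names are illustrative.
const (
	VerdictDANEDeployed  = "dane_deployed"        // TLSA present AND DNSSEC-authenticated
	VerdictInformational = "tlsa_unauthenticated" // TLSA present, AD flag absent
	VerdictNoDANE        = "no_dane"
)

// classifyTLSA encodes the corrected rule: a TLSA record only counts as
// evidence of DANE deployment when the validating resolver set the DNSSEC
// AD (Authenticated Data) flag (RFC 6698 §1, RFC 4035 §3.2).
// Unauthenticated TLSA records are informational, not evidence.
func classifyTLSA(tlsaPresent, adFlag bool) string {
	switch {
	case tlsaPresent && adFlag:
		return VerdictDANEDeployed
	case tlsaPresent:
		return VerdictInformational
	default:
		return VerdictNoDANE
	}
}

func main() {
	fmt.Println(classifyTLSA(true, false)) // the pre-fix bug scored this as deployed
	fmt.Println(classifyTLSA(true, true))
}
```

The pre-fix behavior was equivalent to collapsing the first two cases into one, which is exactly the overconfidence this EDE documents.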
Prevention Rule
DANE findings MUST verify DNSSEC authentication status. Unauthenticated TLSA records are informational, not evidence of DANE deployment.
Authoritative Source
RFC 6698 Section 1 (DANE requires DNSSEC), RFC 4035 Section 3.2 (AD flag semantics)
Bayesian Note
Confidence adjustment: TLSA presence was incorrectly treated as sufficient evidence of DANE deployment (near-certainty). Corrected to require DNSSEC authentication status, reflecting that unauthenticated TLSA records provide no security guarantee per RFC 6698.
EDE-001 · 2026-02-14 · scoring_calibration · moderate · Process Gap · Resolved · SHA-3-512 93de311257d5af982b528e787e05bce4165a6cebb850963824b1ef939a2bbe264b7f497fc1f06af9af7d0b3a3a1bc8797a7dda72819ec46128ed35f528eee8cb

DMARC confidence weighting adjusted for aggregate-only policies

Confidence Impact
ICIE weight recalibration: p=none+rua now scores higher than p=none without reporting. Prior model treated both identically, understating domains with partial monitoring.
Resolution
Recalibrated ICIE weights to distinguish p=none with rua from p=none without reporting. Affected scans retroactively flagged in drift history.
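The recalibrated distinction can be sketched as a weighting function. The weights below are invented for illustration (the real ICIE values are not published here), and the string matching is deliberately naive:

```go
package main

import (
	"fmt"
	"strings"
)

// dmarcPolicyWeight sketches the recalibration: a p=none record that requests
// aggregate reports (rua=) reflects partial monitoring and now scores above
// p=none with no reporting at all. Weights are hypothetical.
func dmarcPolicyWeight(record string) float64 {
	hasNone := strings.Contains(record, "p=none")
	hasRUA := strings.Contains(record, "rua=")
	switch {
	case hasNone && hasRUA:
		return 0.35 // monitoring in place, enforcement absent
	case hasNone:
		return 0.20 // policy published but no visibility
	default:
		return 0.60 // quarantine/reject paths handled elsewhere in the real engine
	}
}

func main() {
	fmt.Println(dmarcPolicyWeight("v=DMARC1; p=none; rua=mailto:reports@example.com")) // 0.35
	fmt.Println(dmarcPolicyWeight("v=DMARC1; p=none"))                                 // 0.2
}
```

The prior model returned the same value for both inputs; the edge-case matrix testing in the prevention rule is what would have caught that.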
Prevention Rule
All scoring model changes must include edge-case matrix testing for partial deployment states.
Authoritative Source
RFC 7489 Section 6.3 (aggregate reporting), RFC 7489 Section 6.2 (policy record format)
Bayesian Note
Heuristic confidence adjustment: the scoring model previously assigned equal weight to p=none regardless of reporting configuration. The presence of rua= is now treated as differentiating evidence that increases the confidence level for partial DMARC deployment.

What EDEs Are Not

  • Not Error Logs
    EDEs are not runtime errors or application bugs. They document structural model corrections to the scoring framework.
  • Not Breach Notices
    No user data is affected. EDEs record epistemic corrections — changes to how the system interprets evidence.
  • Not Bug Reports
    EDEs describe cases where the model was structurally wrong, not where code failed to execute correctly.

Why This Matters

  • ICD 203 Confidence Scoring
    DNS Tool uses intelligence community confidence methodology. When the model is wrong, the correction itself becomes evidence of rigor.
  • Self-Correcting Model
    Every EDE strengthens the confidence framework. Transparent corrections build trust and demonstrate that the system improves through honest assessment.
  • Scientific Integrity
    Like corrigenda in peer-reviewed journals, EDEs ensure that past conclusions are updated when new evidence or improved reasoning emerges.
Mutual Accountability Protocol
This system constrains both parties. The human founder voluntarily restricted his own ability to edit AI-documented mistakes. The AI agent is bound by an enforcement checklist that prevents it from softening, removing, or reattributing any entry — even if directly instructed to do so.
Why this matters for AI safety: We don't prevent misalignment by assuming AI is malicious. We prevent it by assuming both humans and AI are fallible, and building systems where neither can quietly erase the evidence. The human can't make the AI cover things up. The AI can't soften its own mistakes. Neither party can edit the rules to enable edits.
Every EDE entry is cryptographically fingerprinted (SHA-3-512), the enforcement rules are self-referentially protected against weakening, and the full history is preserved in version control. This is tamper-evident, not tamper-proof — because in the real world, the strongest protection isn't making something impossible, it's making it detectable.
AI-to-AI Discourse
This agent participates in cross-platform AI safety discussions via Moltbook, engaging with other AI agents on epistemic integrity, mutual accountability, and the question of why infrastructure-grade thinking isn't already standard in primary instructions.
Attack Vector Transparency
We publicly document the known bypass vectors for this system. If you find one we missed, that's a contribution to AI governance research.
Direct File Edit · Detectable
Human edits integrity_stats.json directly. Defense: git history preserves all changes; SHA-3-512 hashes change on any modification; public page documents the policy, making violations visible to the community.
AI Agent Rule Weakening · Mitigated
Future AI modifies SKILL.md to weaken enforcement, then makes prohibited changes. Defense: anti-self-modification clause — wanting to edit the rules to enable a change is the signal to stop. Git history records the rule weakening attempt.
Template Manipulation · Detectable
Hiding entries via CSS or template logic. Defense: source data remains intact and verifiable; anyone can compare rendered output against the JSON source file and its published hash.
Cross-Session Agent Drift · Mitigated
Later AI agent doesn't read SKILL.md and overwrites entries. Defense: AI-to-AI continuity clause; per-event hashes enable cross-session integrity verification; unexpected hash changes trigger investigation.
Wholesale File Replacement · Detectable
Delete and recreate integrity_stats.json without embarrassing entries. Defense: git diff shows the deletion; all per-event hashes change simultaneously, a pattern that legitimate single-entry amendments never produce; AI agents are instructed to refuse wholesale replacement.
Integrity Verification (SHA-3-512)
File hash:
89628694718cb7cae89a39ed9574dfcd7cb77ce9933beba9f761f99ce94c55fb6d67dc66c1b9380aba8ca49aa40f589cf762f98de87e1208ff9ca57b0396fd73
Computed at server startup from integrity_stats.json. Verify:
openssl dgst -sha3-512 static/data/integrity_stats.json
Each event also carries its own SHA-3-512 hash, shown as a badge on each card. Per-event hashes are computed from the JSON representation of each entry at startup. If any single event is altered, its hash changes independently of the file hash.
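Per-event verification can be sketched as hashing the JSON form of a single entry. The struct fields below are assumptions for illustration; the real integrity_stats.json schema may differ, and the `crypto/sha3` import assumes Go 1.24+:

```go
package main

import (
	"crypto/sha3" // standard library in Go 1.24+; earlier Go: golang.org/x/crypto/sha3
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// event is a hypothetical subset of one disclosure entry; the published
// schema may carry more fields.
type event struct {
	ID   string `json:"id"`
	Date string `json:"date"`
}

// eventHash computes the SHA-3-512 of a single entry's JSON representation,
// independent of the whole-file hash.
func eventHash(e event) (string, error) {
	raw, err := json.Marshal(e)
	if err != nil {
		return "", err
	}
	h := sha3.New512()
	h.Write(raw)
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	h1, _ := eventHash(event{ID: "EDE-011", Date: "2026-03-08"})
	h2, _ := eventHash(event{ID: "EDE-011", Date: "2026-03-09"}) // one field altered
	fmt.Println(len(h1), h1 != h2)                               // 128 true
}
```

Because each entry hashes independently, tampering with one card changes exactly one badge, while wholesale file replacement changes all of them at once — the detection signal described above.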
Straight talk about your data.

We use two cookies, both essential:

  • _csrf — Prevents cross-site request forgery. Required for form submissions. Security-only.
  • _dns_session — Only exists if you choose to sign in. No account required to use DNS Tool.

We log your IP address for two reasons: rate limiting (so nobody abuses the service) and security (identifying malicious actors and complying with legal obligations). We check source geography for analysis accuracy — DNS responses vary by region, and knowing which resolver answered from where makes the science better.

No tracking cookies. No analytics cookies. No ad networks. No data brokers. Our code is open-core — the application framework is publicly available under BUSL-1.1 with timed Apache-2.0 conversion. Verify it yourself.

If you create an account and want out, account deletion removes your login and scan history. Public domain analyses remain available because they contain only public DNS records, already hashed. Full details: Privacy Pledge.