Our Methodology
A score without a citation is an opinion. We give you a cited finding, tell you where it came from, and show you the terminal command to verify it yourself — because we’ve seen what happens when an engineer trusts a tool that was optimizing for clean-looking output instead of accurate output.
Why This Rigor Exists
A domain registered for ten years with no published mail policy is not dormant — it is exposed. Without SPF, DMARC, and a Null MX record, any domain becomes a free spoofing vector. Reputation systems do not distinguish between negligence and compromise. They only see the absence of published policy.
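A registered domain that sends no mail can still publish a complete policy in three records. The following zone-file fragment is illustrative only (example.com stands in for your domain; adapt names and policies to your own zone):

```zone
example.com.        IN MX   0 .                    ; Null MX (RFC 7505): this domain accepts no mail
example.com.        IN TXT  "v=spf1 -all"          ; SPF (RFC 7208): no host may send as this domain
_dmarc.example.com. IN TXT  "v=DMARC1; p=reject;"  ; DMARC (RFC 7489): reject anything that claims to
```

With these published, reputation systems see an explicit policy instead of silence.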
Five Perspectives, One Algorithm
A single viewpoint misses things. An intelligence officer quantifies confidence but won't catch a misconfigured TTL. A DNS engineer reads RFC chapter and verse but won't frame findings for a board meeting. A hacker spots the attack surface but won't write the remediation steps. An executive needs a clear verdict but shouldn't have to parse RRSIG expiration dates. A field technician needs to know what to paste into Cloudflare, not why ICD 203 matters.
DNS Tool encodes all five as a precision algorithm. Each perspective watches for the blind spots the others miss. We call this Symbiotic Security — not a philosophy, but an architecture with a human face.
Quantifies what you can trust. Runs dual confidence engines — ICAE measures accuracy (are the findings correct?), ICuAE measures currency (are they fresh?). Merges both into an ICD 203 confidence level so every finding carries its own epistemic weight. If a score is HIGH, you know why. If it's LOW, you know which factor is limiting.
ICAE · ICuAE · Unified Confidence
Reads the RFCs so you don't have to guess. Every finding maps to a specific RFC paragraph — not a vendor interpretation, not a best-practice blog post. SPF follows RFC 7208. DMARC follows RFC 7489. DNSSEC follows RFC 4033–4035. Delegation consistency, nameserver fleet health, SOA compliance — all grounded in the standards that define how DNS actually works.
RFC Analysis · Delegation · Fleet Matrix
Sees what the attacker sees. Covert Recon Mode reframes the same data through an adversarial lens — a scotopic-optimized red interface grounded in published vision science research. Zone transfer exposure, open recursion, lame delegation, NSEC zone walking, SPF ranges that authorize entire netblocks. Multi-layer subdomain discovery from certificate transparency and DNS intelligence surfaces exposed assets — with a full download. The configuration data doesn't change. The questions do.
Covert Recon · Attack Surface · Subdomain Discovery
Gets the verdict without the noise. The Executive Brief distills every finding into a clear posture summary — what's secure, what's at risk, what needs attention. No RRSIG expiration dates, no TTL arithmetic, no RFC section numbers. Just the strategic picture with enough context to make a decision or ask the right follow-up question.
Executive Brief · Posture Score · Risk Summary
Gets actionable remediation, not a lecture. TTL Tuner recommends specific values for your context. Big Picture Questions reframe configuration details as strategic decisions. Provider-aware notes explain when a finding is a Cloudflare constraint vs. a genuine misconfiguration. DMARC guidance tells you whether the fix is yours or your provider's. Every finding connects to what you actually do next.
TTL Tuner · Big Picture Questions · Remediation
These aren't personas or marketing segments. They're implemented as distinct template outputs, scoring engines, and guidance layers — each backed by deployed code you can verify in our architecture and confidence documentation.
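As an illustration of the dual-engine idea (not the deployed scoring code), the merge can be sketched as taking the weaker of the two scores, so that stale evidence caps confidence no matter how cleanly the data parsed. The `min()` rule and the thresholds here are assumptions made for the sketch:

```python
def icd203_band(accuracy: float, currency: float) -> str:
    """Illustrative merge of an accuracy score (ICAE-style) and a
    currency score (ICuAE-style) into an ICD 203 confidence band.
    The min() merge and the band thresholds are assumptions: stale
    evidence caps confidence regardless of how accurate the parse was."""
    combined = min(accuracy, currency)
    if combined >= 0.8:
        return "HIGH"
    if combined >= 0.5:
        return "MODERATE"
    return "LOW"
```

Under this sketch, `icd203_band(0.95, 0.40)` returns `"LOW"`: accuracy was never the limiting factor; currency was, and the band tells you so.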
The Verification Principle
DNS security analysis demands a position on a spectrum between two failure modes. The pressure to simplify pulls toward one extreme or the other — and the pull is rarely acknowledged. We chose the middle, and built the math to defend it.
One extreme: accept every record at face value. If the syntax parses, call it “configured.” Ignore context, ignore drift, ignore what the record actually authorizes.
The other extreme: flag everything as dangerous. Inflate severity to sell remediation. Infer malicious intent from common misconfigurations.
Here’s what the math means in plain language: if you walk in already certain — “this domain is fine” or “this domain is compromised” — then no amount of evidence will change your mind. That’s a dogmatic prior, and it’s the silent default when DNS analysis skips confidence disclosure. “Configured = safe” or “unusual = dangerous” — never questioned. We don’t work that way. Our scoring starts from protocol-specific empirical priors — never at absolute certainty or absolute doubt — and moves with the evidence. When evidence is ambiguous, confidence drops — visibly. No finding is ever asserted without telling you how certain we actually are. See the Confidence Engine for the full framework.
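The dogmatic-prior point can be made concrete with one Bayesian update in odds form. This is a toy illustration, not the deployed engine: a prior of exactly 0 or 1 is a fixed point that no evidence can move, while any prior strictly in between shifts with the evidence and stays put when the evidence is ambiguous (a likelihood ratio of 1):

```python
def update_confidence(prior: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form. A dogmatic prior (exactly
    0.0 or 1.0) is a fixed point: no evidence can ever move it."""
    if prior in (0.0, 1.0):
        return prior  # certainty is immune to evidence
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)
```

The analyst who walks in at `prior=1.0` ("this domain is fine") learns nothing from even strongly contrary evidence: `update_confidence(1.0, 0.01)` is still `1.0`. A prior of 0.7 moves up under supporting evidence, down under contrary evidence, and stays at 0.7 when the evidence is uninformative.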
Four Reasons “Record Present” Isn’t Enough
A record existing is not the same as a record working. DNS security requires looking past the presence check — here’s why:
v=spf1 ip4:10.0.0.0/8 ~all passes every simple checker — and authorizes 16.7 million IPs to send your email.
CISA guidance leans toward -all. RFC 7208 permits ~all. We make that tension explicit and cite the basis for both.
A record existed when you checked — but has it been stable? Fast-flux DNS rotates records every few minutes. Our drift engine detects it statistically.
A 300s TTL on a stable zone means thousands of unnecessary queries per hour. We tell you the right TTL for your context, with RFC backing.
Why DNS?
If you want to understand how the connected internet works — how email is secured, how domains are authenticated, how trust is established between systems — DNS is where you start. If you don't understand DNS, you can't understand the internet; and if you don't understand the internet, you can't understand cybersecurity. DNS is the foundation of both.
But DNS is not a static database. It's a distributed, eventually consistent, human-operated system. Records propagate across thousands of resolvers with different caching behaviors. Configurations are set by people — TLD engineers, hosting providers, federal agencies issuing directives — and every one of those humans can introduce errors that standard tools miss because they only take a single snapshot of a single moment from a single vantage point.
DNS Tool was built to handle that complexity honestly, not to hide it behind a simple letter grade.
Addressing the Criticism
We've received feedback that this level of rigor is "too much" for a DNS checker. That applying time-series drift detection, evidence-weighted confidence scoring, and multi-probe verification to DNS records is excessive. That we're over-engineering.
We understand the reaction. When most DNS checkers are simple scripts with minimal depth, anything that goes deeper looks like it's trying too hard. That's the heuristic trap — pattern-matching against a low bar and treating the outlier as suspicious rather than serious.
But the criticism reveals the actual problem: DNS security has historically been under-engineered relative to what it actually requires. Email security misconfiguration causes real, costly harm. A record can exist, pass a syntax check, and still be operationally dangerous. The problem was never too small for rigor. The industry just decided not to apply it.
Yes, you can use DNS Tool as a DNS checker — and it will give you accurate results. But we built it to go further: cited findings, epistemic provenance, and time-series verification for when a simple check isn't enough. It's a forensic audit engine for the protocol that underpins every email, every certificate, and every domain on the internet. If all you need today is a quick check, start there. The depth is here when you're ready for it.
And yes, there are Easter eggs. Hacker poems in the console. An adversarial red theme built on scotopic vision science (DTIC AD0639176, MIL-STD-3009, MIL-STD-1472G). RFC 1392 nods in the source. These aren't decoration — they're signals to a specific subculture of security professionals who recognize the difference between aesthetic choices and engineering choices. Underneath the aesthetic: EWMA control charts with 3σ control limits, evidence-weighted confidence scoring inspired by Bayesian reasoning, SHA-3-512 tamper-evident audit trails, structured JSON export, PWA offline support, and a drift engine that statistically detects whether a record is stable or flickering. The surface is a toolkit. The core is a high-assurance engine.
On the Intelligence Community Framing
We've also heard the narrower critique: that applying Intelligence Community methodology (ICD 203 confidence taxonomy, ICAE/ICuAE engine naming, TLP classification) to a DNS tool is theatrical. That the science is real but the packaging is cosplay.
We'll address that directly.
ICD 203 was designed for one problem: an operator making a high-stakes decision based on incomplete, potentially ambiguous data needs structured confidence assessment. That is exactly what DNS security analysis is. SPF misconfiguration is the attack surface exploited in Business Email Compromise — the single most financially damaging cybercrime category ($2.9 billion in 2023, FBI IC3). When an engineer evaluates an SPF finding, they need to know: was it observed directly or inferred? Was it confirmed across multiple resolvers or seen by only one? Is the evidence degraded by caching artifacts? ICD 203 provides that structure. The alternative — presenting every finding as equally certain — is what causes the alert fatigue that makes security tools useless.
The naming convention (ICAE — Intelligence Confidence Audit Engine, ICuAE — Intelligence Currency Assurance Engine) serves a specific engineering purpose: it disambiguates two subsystems that solve fundamentally different problems. ICAE answers "Did we interpret the data correctly?" ICuAE answers "Is the data still valid?" Those are different failure modes requiring different test methodologies. The names enforce that separation in code, documentation, and conversation. We could have called them "Accuracy Checker" and "Freshness Checker" — but those names would have been wrong. Accuracy implies measurement precision; confidence implies epistemic rigor about the basis for a conclusion. Currency implies temporal validity of evidence, not just "is it fresh." The IC terminology is precise because the IC spent decades making it precise.
As for the aesthetic: if you've been to DEF CON lately, you know that the people writing the CVEs, breaking the protocols, and publishing the research also appreciate hacker culture, Easter eggs, and adversarial thinking as intrinsic motivation. These are not mutually exclusive communities. The person who finds the Morse code Easter egg in our Covert Recon Mode might be the same person who found CVE-2024-49040. Dismissing the aesthetic as cosplay says more about the critic's assumptions than about the engineering underneath.
We built DNS Tool for the engineer who wants cited findings, the executive who needs a clear verdict, and the security researcher who appreciates both the science and the craft. Serving all three at once is hard. We did it anyway.
Where Simplicity Fails
Here are four real examples of why "does the record exist?" is a dangerous question to stop at:
An SPF record like v=spf1 a mx ip4:10.0.0.0/8 ~all is syntactically perfect. It passes every simple checker. But it authorizes 16.7 million IP addresses to send email on your behalf. A tool that reports "SPF: Present" just told you everything is fine. It isn't.
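The arithmetic behind that claim is easy to verify with the Python standard library, no DNS tooling required:

```python
import ipaddress

# The netblock an "ip4:10.0.0.0/8" SPF mechanism authorizes: a /8
# leaves 24 host bits, i.e. 2**24 addresses.
net = ipaddress.ip_network("10.0.0.0/8")
print(f"{net.num_addresses:,}")  # 16,777,216 addresses authorized to send as you
```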
CISA's BOD 18-01 requires valid SPF and uses v=spf1 -all for non-sending domains. RFC 7208 permits ~all as softfail and leaves disposition to local policy. Stricter enforcement requires confidence that legitimate senders are fully accounted for. We surface that tension, cite both sources, and let the engineer decide with full context.
A DNS record that existed when you checked doesn't mean it's been stable. Fast-flux DNS — a technique used by malware — rotates records every few minutes. A snapshot scanner says "record present." Our drift engine uses Exponentially Weighted Moving Average (EWMA) control charts to statistically determine whether a record is stable or flickering. That's not overkill — it's the only way to detect that class of threat.
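To make the EWMA idea concrete, here is a textbook EWMA control chart applied to a per-poll change count, such as how many records in the answer differ from the previous poll. In practice the baseline mean and sigma would be estimated from a calibration window; this sketch is illustrative and is not the deployed drift engine:

```python
import math

class EwmaChart:
    """Textbook EWMA control chart. Flags when the smoothed statistic
    leaves the steady-state band mu0 +/- L * sigma * sqrt(lam / (2 - lam))."""

    def __init__(self, mu0: float, sigma: float, lam: float = 0.2, L: float = 3.0):
        self.mu0 = mu0    # baseline mean of the change count (from calibration)
        self.z = mu0      # EWMA statistic, seeded at the baseline
        self.lam = lam    # smoothing weight given to the newest sample
        self.limit = L * sigma * math.sqrt(lam / (2.0 - lam))  # 3-sigma band

    def update(self, x: float) -> bool:
        """Fold in one observation; return True if drift is signaled."""
        self.z = (1.0 - self.lam) * self.z + self.lam * x
        return abs(self.z - self.mu0) > self.limit
```

A stable zone keeps the statistic pinned near its baseline; a fast-flux zone that swaps several records on every poll walks it out of the control band within a couple of observations.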
TTL is rarely examined in DNS audits. But a record set to 300 seconds (5 minutes) when the zone is stable means thousands of unnecessary upstream queries per hour — increased cost, increased attack surface, increased latency. A TTL of 3600 for a stable record reduces resolver load and improves cache efficiency (RFC 1035 §3.2.1). Conversely, during a migration or incident response, a low TTL is essential for rapid propagation. The right TTL depends on context, and we surface that context with specific recommendations.
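The query-volume claim is simple arithmetic, assuming each caching resolver re-fetches the record once per TTL expiry (the fleet of 1,000 resolvers below is a made-up illustration, not a measurement):

```python
def upstream_fetches_per_hour(ttl_seconds: int, resolvers: int) -> int:
    """Cache expiries per hour per resolver, times the resolver fleet."""
    return (3600 // ttl_seconds) * resolvers

print(upstream_fetches_per_hour(300, 1000))   # 300s TTL: 12 expiries/h x 1000 = 12000
print(upstream_fetches_per_hour(3600, 1000))  # 3600s TTL: 1 expiry/h x 1000 = 1000
```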
Chain of Custody, Not Opinions
In a high-reliability organization, if a DNS configuration is flagged, the first question from the sysadmin is: "According to whom?"
Our analysis engine doesn't invent interpretations. Every test case maps to a specific RFC paragraph. SPF evaluation follows RFC 7208. DMARC alignment follows RFC 7489. DANE/TLSA follows RFC 6698. DNSSEC validation follows RFC 4033–4035. By mapping findings to standards, we remove the opinion from the audit. We aren't the judge — the IETF standards are.
We use ICD 203's confidence language not as a credential, but because it was designed to locate the weakest link in an intelligence chain — and in DNS, that weakest link is usually human. Whether it's a TLD engineer, a misconfigured TTL, or a federal directive that outpaced industry readiness, our confidence engine is built to locate the failure wherever it occurs.
Opinions about facts — that's just politics.
The Death of the Black Box
Every analysis includes "Verify It Yourself" terminal commands — dig, openssl, curl — so any analyst can independently reproduce our findings. This isn't decoration. It's how scientific papers are structured: conclusion, methodology, raw data. We provide all three.
Our confidence engine explicitly distinguishes between what we've directly observed, what we've inferred from patterns, and what came from third-party enrichment:
- Observed — Directly witnessed in authoritative data
- Inferred — Derived from patterns in primary data
- Third-party — Sourced from external enrichment
A verdict without provenance is unfalsifiable — you can’t know if a finding was observed, inferred, or pulled from a third-party API unless the tool tells you. Making that distinction explicit is the only way to build trust with the engineers who have to act on the findings.
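One way to make that distinction unskippable is to encode it in the type system, so a finding cannot exist without a provenance tag. This is an illustrative shape, not our actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    OBSERVED = "observed"        # directly witnessed in authoritative data
    INFERRED = "inferred"        # derived from patterns in primary data
    THIRD_PARTY = "third_party"  # sourced from external enrichment

@dataclass(frozen=True)
class Finding:
    summary: str
    provenance: Provenance  # required field: no finding without a source class
    confidence: str         # e.g. an ICD 203 band such as "MODERATE"

f = Finding("SPF softfail (~all) on a sending domain", Provenance.OBSERVED, "HIGH")
```

Because `provenance` is a required constructor argument, any code path that produces a finding is forced to declare how it knows what it claims.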
Rigor Is Dogfooding
There is a credibility problem in security tooling: flagging vulnerabilities on other domains while your own security headers are incomplete, your own Content Security Policy is missing, and your own performance is an afterthought. If we can’t secure our own web delivery, why should anyone trust our DNS analysis?
We hold ourselves to the same standards that Google, Mozilla, and the security community created for exactly this purpose. Not as vanity metrics — as proof that we practice what we measure.
This philosophy extends to how our scanners and probes interact with the systems they analyze. Every DNS query, every SMTP probe, every certificate check is designed to be a good internet citizen — optimized query patterns, respectful rate limiting, symbiotic interaction with the infrastructure we depend on. We don't just measure good behavior. We practice it. If we're going to tell you your TTL is too aggressive, ours had better be right too.
This isn't theoretical. Over the years, multiple professional development agencies — real enterprise shops — were hired to build and maintain this platform. Every one promised Lighthouse, Observatory, and security compliance up front. Every one failed to deliver it. The authoritative tools to verify quality existed the entire time. They just weren't used. That experience is why every quality gate now runs on every build, not as a final check, but as a build requirement. The tools are right there. The only question is whether you use them.
This is also our contribution standard. We build with coverage, tests, and quality gates from the first commit — not as a cleanup step before release. If you have a good idea, we want to hear it. But if a pull request regresses Lighthouse, breaks Observatory, or skips test coverage, it goes back. The gates aren't negotiable. They exist so that every contributor inherits the same discipline the project was built with.
Built for the Next Engineer
Technical debt is a security risk. A system that can't be understood can't be trusted, and a system that can't be maintained will eventually be replaced by something less rigorous. We build modular, documented, and traceable infrastructure — not because it's faster to build, but because it's safer to inherit.
Every report is cryptographically sealed with SHA-3-512 hashing per NIST FIPS 202. Every architecture decision is documented. Every quality gate runs on every build. We are building for 2036, not just 2026. The line between an over-engineered project and a high-reliability system is traceability — and ours traces from finding to RFC to terminal command to test case.
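Sealing a report is a short exercise with Python's hashlib, which implements the FIPS 202 SHA-3 family. The canonicalization step here (sorted keys, tight separators) is an assumption of the sketch; any stable serialization works, as long as sealer and verifier agree on it:

```python
import hashlib
import json

def seal_report(report: dict) -> str:
    """SHA-3-512 digest (FIPS 202) over a canonical JSON serialization."""
    canonical = json.dumps(report, sort_keys=True, separators=(",", ":"))
    return hashlib.sha3_512(canonical.encode("utf-8")).hexdigest()

digest = seal_report({"domain": "example.com", "spf": "v=spf1 -all"})
print(len(digest))  # 128 hex characters = 512 bits
```

Re-running `seal_report` over the same findings must reproduce the digest bit-for-bit; any change to the report, however small, produces a different digest, which is what makes the trail tamper-evident.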
Big Picture Questions
DNS security isn't one personality type's problem. The cybersecurity world includes hackers and pentesters, red teamers and blue teamers, compliance officers and field engineers, self-taught builders and formally trained analysts. They think differently, prioritize differently, and need different entry points into the same data.
That's why DNS Tool includes a dedicated Covert Recon Mode — a scientifically designed adversarial interface built on scotopic vision research. But the red interface isn't the point. The point is perspective. Covert Recon Mode gives you the adversarial viewpoint — the same configuration data, reframed through the lens of an attacker looking for weaknesses rather than an administrator confirming compliance. That cognitive shift is what separates security awareness from security theater. The interface is intentionally segregated so that executives reviewing an Engineer's Report never encounter a hacker aesthetic they didn't choose — but for the operators who need it, the adversarial lens is always available.
It also gives you the loot. Subdomain discovery from certificate transparency, passive DNS intelligence, and recursive enumeration surfaces assets that the domain owner may not realize are exposed — downloadable as a complete inventory. When sysadmins call asking how to hide their subdomains, it means the recon is working.
And yes, there are Easter eggs hidden throughout the platform, because intrinsic motivation — the trait research most consistently links to sustained excellence — is what turns a scan into a learning experience.
Our Big Picture Questions are designed to meet every one of these operators. They're the questions that make you think — that reframe a configuration detail as a strategic decision, or surface a compliance gap that a scan result alone would bury.
DNS Tool is built to educate as much as it evaluates. Every finding is an opportunity to teach the why behind the configuration — not just flag the what.
The Ethical Line
Our multi-probe infrastructure uses Nmap — one of the most powerful reconnaissance tools ever built. We allow exactly six NSE scripts: ssl-cert, http-title, http-headers, dns-zone-transfer, banner, and smtp-commands. That’s it.
We deliberately excluded every vulnerability scanning, exploitation, and brute-force script category. Would running vuln scripts produce richer intelligence? Absolutely — we’d see actual CVEs on nameservers, exploitable TLS weaknesses, misconfigured SMTP relays. The data would be better. The ethics wouldn’t be.
There’s a difference between observing what’s publicly visible and probing for weaknesses. We chose the boundary, documented it, and made it public. Read the full rationale in our Rules of Engagement.
Validation Status Matrix
Honest assessment of where each component stands. Science requires transparency about what’s proven, what’s in progress, and what’s next.
| Component | Status | Standards | Validated | Next Step |
|---|---|---|---|---|
| ICAE Confidence Engine | Deployed | ICD 203 | 129 test cases | External audit |
| ICuAE Currency Engine | Deployed | ICD 203, ISO/IEC 25012, RFC 8767, NIST SP 800-53 | 29 test cases | Longitudinal evaluation |
| RFC Protocol Analysis | Deployed | RFC 7208, 6376, 7489, etc. | Golden fixtures | Continuous expansion |
| Drift Engine | Deployed | Novel | Operational (3 EDEs) | Timeline visualization |
| Covert Recon Mode | Deployed | MIL-STD-1472H, 3009, CIE 1951 | Standards-informed | Spectroradiometric study |
| Topology Solver | Beta | Graph drawing metrics | Benchmark in progress | ELK baseline + significance |
| EDE System | Deployed | Corrigenda practice | 3 events tracked | Continued governance |
| Notification Pipeline | Deployed | — | Operational | Email + HTTPS enforcement |
A Note from the Builder
I don't have a computer science degree. I'm self-taught. Twenty-seven years of field engineering — standing in front of clients who needed a trustworthy answer, not a confident-looking one. Everything in this platform was learned from reading the standards, studying the science, and applying it with the discipline of someone who couldn't afford to get it wrong.
The people who built this didn't learn it casually. They were in a lab — studying RFCs at midnight, reverse-engineering protocol behavior, testing edge cases that no customer would ever report. They chose that work over easier paths because the problem was interesting enough to deserve the effort. That's what intrinsic motivation looks like when it meets infrastructure.
Over those years, I've mentored a lot of young IT professionals just starting out — helping them understand how to do things correctly from the foundation up. DNS Tool carries that same philosophy. It's built to teach the why, not just report the what. If you want to understand how the connected internet actually works — how domains are authenticated, how email is secured, how trust is established — DNS is where you start. We're here to help.
Here's something I've learned: whenever someone reaches out about a security product or partnership, the first thing I do is scan their domain. If they haven't secured their own email authentication, their own DNSSEC, their own TLS — I have a hard time believing they have the philosophy or discipline to protect mine. DNS doesn't lie. It's the most honest audit you can run on anyone.
This project exists because security intelligence should be transparent, reproducible, and grounded in evidence. Not because it's trendy. Because after 27 years of watching tools optimize for comfortable output instead of honest, neutral truth, I decided to build one that reports the findings as they are — not as you'd prefer them to be. We still care deeply about craft and design, because intrinsic motivation matters and learning should be engaging. But the visuals never soften the data.
I wanted to be a scientist my whole life. My father gave me access in the late 70s and early 80s — to databases, to tools, to the spark. Tandy TRS-80s, early modems, the thrill of watching data move across a wire. After that window closed, I never had access to the expensive databases and research tools I needed. But the spark was already lit. The intrinsic motivation was hard-coded into the firmware.
Twenty-seven years later, modern tools collapsed the barriers that used to require entire teams. Open-source infrastructure, cloud computing, AI-assisted development — one person can now build what previously required a frontend developer, a backend developer, a sysadmin, a release engineer, and a security analyst. DNS Tool is the expression of that drive — a scientific instrument built by someone who couldn’t afford to wait for permission.
Yes, this platform uses AI-assisted development. Most serious software does now. But AI tools optimize for done unless you redirect them toward correct. Left to defaults, they produce generic layouts, placeholder copy, and features described in documentation but never implemented. The difference is the specification. Every RFC citation, every quality gate, every confidence taxonomy, every scientific color validation — those existed as requirements before a single line was generated. The AI was the tool. The 27 years of field engineering — knowing exactly what correct output looks like in this domain — that was the specification. You can’t hand a tool an ambiguous request and expect precision. You have to define what excellence looks like before a single line is written. We did.
We're not perfect — we're human too. But we promise we're here to learn, to improve, and to help. If you find something we got wrong, tell us. That's how this gets better.
