Executive Summary
Most large organizations believe they are protected against DDoS because they have paid for protection. They have a CDN contract, a WAF contract, an upstream scrubbing service, and usually a managed SOC. The spend is in place, the logos are in the deck, and the assumption is that the problem is solved.
The assumption is wrong, and it is wrong in a specific, measurable way. Protection that is purchased is not the same as protection that is reachable. Protection that is reachable is not the same as protection that is correctly configured. And protection that is correctly configured today is not the same as protection that will still be correctly configured next quarter.
This paper lays out what goes wrong in that chain, documents it with DDactic's own scan dataset, and describes the methodology we use to close the gap. It covers:
- The shift of DDoS to the application and protocol layer, and why volumetric defenses alone are no longer enough
- The configuration gap we consistently find in Israeli and European enterprise assets
- Why annual penetration testing does not catch DDoS exposure, and what has to change
- DDactic's 6-stage pipeline, the 233-entry attack taxonomy built on 23 core mechanisms, and the Open Protection Index scoring model
- A practical maturity framework and concrete success metrics
1. The DDoS Landscape in 2026
1.1 Volume is up. The shape has changed.
The public threat reports agree on the direction, even when the numbers differ. Cloudflare's Q4 2024 DDoS report recorded more than 21.3 million attacks mitigated over the year, a 53% increase year over year. NETSCOUT's 1H 2023 DDoS Threat Intelligence Report counted roughly 7.9 million attacks globally in the first half of 2023 alone. Corero, Arbor, and Akamai all describe the same general trend: more attacks, shorter durations, and a much larger share running at the application layer.
What that means practically is that the attacks landing on an enterprise today are not usually the 1 Tbps headline-grabbing floods. They are application-layer events that fit underneath the capacity thresholds of most cloud scrubbing services, and they land on whichever asset is not behind the CDN.
1.2 Why application-layer attacks succeed
The economics of attacking the application layer are asymmetric in the attacker's favor. Attacks such as HTTP/2 Rapid Reset (CVE-2023-44487) demonstrated that a single client can generate tens of thousands of wasted request-handling cycles per second on an unpatched server from a single residential internet connection. Slow-read, Slowloris, RUDY, large-download, and GraphQL introspection attacks all share this property: they consume a backend resource that is far scarcer than bandwidth. In practice, these attacks succeed for four reasons (a back-of-the-envelope cost sketch follows the list):
- They fit well under the capacity ceilings of scrubbing contracts (often under 1 Gbps)
- They imitate legitimate user traffic, which defeats coarse rate limits
- They exhaust backend resources (CPU, sockets, DB connections, worker pools) rather than the pipe
- They often exploit expensive or unoptimized endpoints that nobody rate-limits individually (search, export, login)
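The sketch below makes the asymmetry concrete. The per-request costs are illustrative assumptions, not measurements: it compares a cheap cached GET against a slow-read large download at the same attacker bandwidth.

```python
# Illustrative cost asymmetry: attacker bandwidth vs. backend worker time.
# Per-request costs are assumed example values, not measurements.

ATTACKER_BANDWIDTH_BPS = 10_000_000  # ~10 Mbit/s: one residential uplink

# (request bytes on the wire, seconds a backend worker is held per request)
profiles = {
    "cached GET": (200, 0.001),          # static asset served from cache
    "slow-read download": (200, 30.0),   # large response drained at a trickle
}

for name, (req_bytes, worker_seconds) in profiles.items():
    requests_per_sec = ATTACKER_BANDWIDTH_BPS / 8 / req_bytes
    workers_held = requests_per_sec * worker_seconds
    print(f"{name}: ~{requests_per_sec:,.0f} req/s keeps "
          f"~{workers_held:,.0f} workers busy concurrently")
```

With these example values, the slow-read profile pins orders of magnitude more worker capacity than the cached GET at identical attacker bandwidth. In practice the attacker saturates the worker pool or connection table long before those raw numbers are reached, which is exactly the point: the scarce resource is worker time, not the pipe.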
For a fuller breakdown of the protocol-level anatomy, including the 23 core server-side mechanisms and the HTTP/2 and HTTP/3 frame-level behaviors, see our live reference page at ddactic.net/ddos-anatomy. This paper summarizes those mechanisms; the reference page visualizes them interactively.
1.3 The cost of downtime
The often-repeated "$5,600 per minute" figure originates from Gartner research published in 2014. That number is more than a decade old at this point. More recent data paints a wider range:
| Source | Figure | Notes |
|---|---|---|
| Ponemon Institute (2016) | ~$9,000/min | Average cost per minute of an unplanned data center outage |
| ITIC 2022 Hourly Cost of Downtime Survey | $301K to $5M+/hr | For large enterprises; 44% reported >$1M/hr, 91% of mid/large enterprises reported >$300K/hr |
| Gartner (2014, still widely cited) | ~$5,600/min | Median; the original source of the widely-quoted figure, highly dependent on sector and size |
| Uptime Institute Annual Outage Analysis 2023 | 25% of outages cost >$1M | Share trending up year over year |
The useful takeaway is not a single dollar figure. It is that credible third-party research, from Ponemon in 2016 through ITIC and the Uptime Institute in 2022 and 2023, consistently places major outages in six- to seven-figure ranges per hour for any enterprise above mid-size. Any DDoS resilience conversation should be anchored to the client's actual revenue-per-hour, not a stock average.
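A minimal sketch of that anchoring, with every input a placeholder for the client's own figures:

```python
# Anchor outage cost to the client's own numbers, not a stock average.
# All inputs below are placeholders; substitute real figures per business unit.
revenue_per_hour = 250_000   # direct revenue through the affected service
degradation = 0.8            # fraction of that revenue lost while degraded
outage_hours = 3.5
recovery_cost = 40_000       # overtime, SLA credits, incident response

estimated_cost = revenue_per_hour * degradation * outage_hours + recovery_cost
print(f"Estimated outage cost: ${estimated_cost:,.0f}")
```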
2. The Configuration Gap
2.1 Protection purchased is not protection applied
Almost every organization we scan has a cloud WAF or CDN relationship. Very few have that protection in front of every asset that belongs to them. The reason is that the inventory drifted after the contract was signed.
A marketing team launches a microsite. A developer spins up a status page. The company acquires a subsidiary with its own domains. A mobile team stands up an API gateway on a different DNS label than the main product. Each of those assets belongs to the organization, each is reachable from the internet, and each needs to be explicitly added to the protection stack. Most of the time, no one does that.
2.2 The DDactic dataset
The findings below come from DDactic's own scan corpus as of April 2026. The dataset is not synthetic: it is every real-target scan we have run through the production pipeline, filtered to assets the scanned organizations actually own (parked domains, expired registrations, and obvious third-party SaaS subdomains are excluded before counting).
2.3 The three shapes of configuration failure
Origin exposure
CDNs only protect traffic routed through them. If an attacker can reach the origin directly, the CDN is a decoration. Origin IPs leak through many small channels: historical DNS (Shodan, Censys, passive DNS feeds), TLS certificates with wildcard SANs that predate the migration, direct-to-origin subdomains used for deployments, SMTP headers leaking the sending server, staging hostnames that were never rotated, and load balancers that respond on a direct IP. DDactic's discovery stage aggregates all of these signals into a single origin-exposure score per asset.
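As a minimal sketch of what an origin-exposure check reduces to, the probe below asks whether a candidate origin IP answers for the protected hostname when the edge is bypassed. The hostname and candidate IPs are placeholders; in production the candidates come from the leak channels listed above. Run only against authorized targets.

```python
# Minimal origin-exposure probe: does a candidate origin IP answer for the
# protected hostname when the CDN edge is bypassed entirely?
import requests

HOSTNAME = "www.example.com"                           # placeholder target
CANDIDATE_ORIGINS = ["203.0.113.10", "203.0.113.22"]   # placeholder leaked IPs

for ip in CANDIDATE_ORIGINS:
    try:
        # Route the request to the raw IP but present the real Host header,
        # so a virtual-hosted origin will still answer for the site.
        r = requests.get(f"https://{ip}/", headers={"Host": HOSTNAME},
                         timeout=5, verify=False)
        print(f"{ip}: HTTP {r.status_code}, server={r.headers.get('Server')}")
    except requests.RequestException as exc:
        print(f"{ip}: unreachable ({exc.__class__.__name__})")
```

Any candidate that returns the real application content is a direct-to-origin path, and the edge contract is decorative for that asset until the origin is locked down.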
Shadow subdomains
In most scans we find more DNS labels than the target organization itself has in its asset inventory. The typical pattern in one 843-subdomain Israeli enterprise batch was that the top-level marketing domain was protected, the login portal was protected, and most of the rest, such as legacy regional domains, vendor-operated subdomains, and developer-oriented labels, were not.
Protection drift
Even assets that are behind a CDN or cloud WAF often have the protection in a weak configuration: caching disabled because of a past incident, a bot-challenge tier that was set to "monitor" and never moved to "enforce", a rate-limit rule that exists but is orders of magnitude higher than real traffic. We capture these as protection mode, not just presence-or-absence, in the Open Protection Index (see section 6).
3. Why Traditional Testing Fails
3.1 Annual pentest was not designed for DDoS
Penetration testing produces real value for authentication flaws, authorization bugs, and application logic. It does not produce useful data about DDoS resilience, because DDoS resilience is a function of three things a typical pentest engagement does not test:
| Limitation of annual pentest | Why it matters for DDoS |
|---|---|
| Point-in-time engagement | Misses configuration drift between tests; DNS changes, new assets, and rule edits happen weekly |
| Narrow authorized scope | Shadow subdomains and third-party-hosted properties are usually outside scope, and are exactly where exposure lives |
| Cannot generate load safely | Testers stop at "I could send a lot of traffic here"; they do not generate the realistic load needed to actually characterize the defense |
| Broad security focus | Budget goes to application logic and AuthN/Z, not to rate-limit and cache-hit mapping |
3.2 The production-risk paradox
- Useful testing requires generating something close to a real attack profile
- Generating a real attack profile risks degrading production
- So most organizations either skip realistic load testing entirely, or restrict it so tightly that the results are not informative
3.3 Point-in-time drift
Most CDN, WAF, and rate-limit configurations drift. A typical trajectory looks like this: an incident occurs, a rule is tightened, a week later customer support flags false positives on a legitimate customer, the rule gets relaxed, no one retests. Within six months the protection is in a degraded state relative to the day of the original annual pentest. Continuous probing is the only way to catch this.
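Catching that trajectory mechanically is a diff problem. A minimal sketch, assuming a simplified snapshot shape (hostname mapped to observed protection mode) rather than the real scan artifacts:

```python
# Sketch of drift detection between two scan snapshots. The snapshot shape
# (hostname -> observed protection mode) is a simplification of real artifacts.
old = {"www.example.com": "enforce", "api.example.com": "enforce",
       "status.example.com": "challenge"}
new = {"www.example.com": "enforce", "api.example.com": "monitor",
       "promo.example.com": "none"}

for host in sorted(old.keys() | new.keys()):
    before, after = old.get(host, "absent"), new.get(host, "absent")
    if before != after:
        print(f"DRIFT {host}: {before} -> {after}")
# DRIFT api.example.com: enforce -> monitor
# DRIFT promo.example.com: absent -> none
# DRIFT status.example.com: challenge -> absent
```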
4. DDactic's Approach: The 6-Stage Pipeline
4.1 Why six stages, not one
DDoS resilience is not a single finding. It is the intersection of what assets exist, how they resolve, what is listening on them, what the application looks like from the outside, whether the identities behind them have been compromised, and how the defenses actually behave under load. We decompose that into six stages so that each can be rerun independently, streamed to the dashboard in real time, and verified.
| Stage | What it does | Artifacts produced |
|---|---|---|
| 1. Discovery | SLD + subdomain enumeration, passive DNS, ASN and cloud-provider mapping | Topology JSON, origin-exposure candidates |
| 2. Port Scan | Non-intrusive port sweep via socket and nmap modes | Open-port map per asset |
| 3. L7 Recon | HTTP/HTTPS probing with Playwright rendering, DNS, SMTP, SIP, WebRTC (D2R) | Tech stack per asset, WAF/CDN vendor detection, sourcemap extractions |
| 4. Breach Cross-Reference | HIBP, DeHashed, LeakCheck, LeakIX lookups against discovered domains and identities | Credential and infostealer exposure signals |
| 5. Active Recon | Tiered path probing (5a), deep crawl (5b), and opt-in measurement (5c) for rate limits, cache, timing | Sensitive-path findings, endpoint characterization, rate-limit bisect output |
| 6. Attack Simulation + AI Analysis | Maps findings to the 233-entry attack taxonomy, generates a tailored test plan, produces OPI score | Test plan JSON, OPI score, prioritized vectors |
Each stage produces S3-backed artifacts that stream into the client dashboard as they complete. Stages can be rerun individually (stage-level reprobe) so an organization can validate a single hardening change without paying for a full re-scan.
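As an illustration of what a stage-level reprobe looks like in practice, here is a minimal sketch; the endpoint shape, field names, and identifiers are hypothetical stand-ins, not the production API schema:

```python
import json

# Hypothetical stage-level reprobe request: rerun only stage 3 (L7 recon)
# for a single asset after a hardening change, instead of a full re-scan.
# All identifiers below are placeholders.
reprobe_request = {
    "scan_id": "scan-0000",
    "stage": 3,
    "assets": ["api.example.com"],
    "reason": "validate WAF tier moved from monitor to enforce",
}
print(json.dumps(reprobe_request, indent=2))
```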
4.2 The attack taxonomy, abbreviated
The test-plan generator in stage 6 draws from an internal taxonomy of 233 attack entries that reduce to 23 core server-side mechanisms. Each entry is tagged as one of three types: a documented CVE (such as CVE-2023-44487 Rapid Reset), a design vector (how the protocol is supposed to behave, used against itself), or a technique (composition of primitives, such as multi-vector pulse-wave). The table below is the abbreviated, category-level view. The full taxonomy is visualized interactively at ddactic.net/ddos-anatomy.
| Category | Count | Representative entries |
|---|---|---|
| L3 Network | 11 | ICMP flood, BlackNurse, IP fragment, GRE/ESP/AH tunnel |
| L4 TCP | 11 | SYN, ACK, FIN, RST, PSH+ACK, Sockstress, Tsunami |
| L4 UDP | 5 | UDP flood, DNS water torture, QUIC flood |
| Amplification | 14 | NTP, DNS, SSDP, SNMP, CLDAP, Memcached, Middlebox |
| L7 HTTP/1.1 | 13 | GET/POST floods, cache bust, large-download, login, GraphQL, ReDoS |
| L7 HTTP/2 | 5 | Rapid Reset (CVE-2023-44487), PING, SETTINGS, CONTINUATION |
| L7 QUIC / HTTP/3 | 8 | Initial flood, CID exhaust, version negotiation, 0-RTT replay |
| Low and slow | 7 | Slowloris, RUDY, slow read, TLS renegotiation |
| L7 SIP / SMTP / Remote / Directory | 21 | INVITE/BYE/REGISTER, RCPT TO, SSH/RDP handshake, LDAP bind |
| IoT / Streaming / Messaging | 15 | MQTT, CoAP, Modbus, RTSP, RTMP, WebSocket, gRPC |
| Multi-vector + escalation | 14 | Pulse wave, protocol hop, carpet bomb, CDN bypass, challenge bypass |
The entry count is an order of magnitude larger than the mechanism count because many entries share an underlying mechanism. For example, "GET flood", "POST flood", and "HEAD flood" are three separate entries but map to the same mechanism (worker-pool starvation). The 23 mechanisms are what a defense team has to actually design for; the 233 entries are what an attacker actually sends.
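A minimal sketch of that entries-to-mechanisms collapse. Entry names follow the examples above; the mechanism labels other than worker-pool starvation are illustrative, not the taxonomy's own wording:

```python
# A few taxonomy entries collapsed onto a shared core mechanism.
# Mechanism labels here are illustrative stand-ins for the internal taxonomy.
ENTRY_TO_MECHANISM = {
    "GET flood": "worker-pool starvation",
    "POST flood": "worker-pool starvation",
    "HEAD flood": "worker-pool starvation",
    "Slowloris": "connection-slot exhaustion",
    "RUDY": "connection-slot exhaustion",
    "HTTP/2 Rapid Reset (CVE-2023-44487)": "stream-churn request amplification",
}

mechanisms = sorted(set(ENTRY_TO_MECHANISM.values()))
print(f"{len(ENTRY_TO_MECHANISM)} entries -> {len(mechanisms)} mechanisms")
# 6 entries -> 3 mechanisms
```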
5. Technical Deep Dive
5.1 Active recon in tiers
Stage 5 is where DDactic differs most visibly from traditional DAST tooling. It has three sub-modes and three intensity tiers, and a client can scope any combination.
5a · Path probe
Sensitive-path HEAD checks, tech-stack-adaptive. Three tiers: light (~40 paths, compliance-safe), moderate (~300-500 paths, source-map mining and framework-aware), deep (~2,000+ paths, full fuzz).
5b · Deep crawl
Playwright-rendered link following. Discovers image-resize endpoints, API pagination, JS-rendered routes, downloadable resources. Sourcemap extraction surfaces internal API routes and variable names.
5c · Measurement
Opt-in burst probing: per-endpoint rate-limit bisect, response-timing baseline, cache-hit characterization, compression-ratio anomalies. May trigger SOC alerts, so it is always explicit. A sketch of the bisect logic follows.
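The per-endpoint rate-limit bisect mentioned under 5c reduces to a binary search over request rate. A minimal sketch, with `send_burst` standing in for the real probe harness; run only inside an authorized, opt-in scope:

```python
# Sketch of a per-endpoint rate-limit bisect: binary-search the request rate
# at which an endpoint starts returning HTTP 429. send_burst is a placeholder
# for the real probe harness. Authorized, opt-in targets only.
import time
import requests

def send_burst(url: str, rate: int, seconds: int = 2) -> bool:
    """Return True if the endpoint throttled (HTTP 429) at this request rate."""
    for _ in range(rate * seconds):
        if requests.get(url, timeout=5).status_code == 429:
            return True
        time.sleep(1 / rate)   # space requests to approximate the target rate
    return False

def bisect_rate_limit(url: str, low: int = 1, high: int = 512) -> int:
    """Smallest req/s in [low, high] that triggers throttling.

    Returns the upper bound if no throttling was observed in range."""
    while low < high:
        mid = (low + high) // 2
        if send_burst(url, mid):
            high = mid
        else:
            low = mid + 1
        time.sleep(5)  # let any sliding window drain between probes
    return low
```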
5.2 Rendering and JS reverse engineering
The L7 recon stage runs a Playwright-backed browser, not a static HTTP client. That matters because most modern bot protection and WAF products ship JS challenges that refuse to classify a client as human until the JS has executed and reported back. Running a headful-equivalent browser lets DDactic see what the real application looks like, including SPA-rendered routes and GraphQL schemas exposed through introspection.
On top of that, the recon engine carries vendor-specific JavaScript reverse-engineering handlers for 20+ bot-protection and WAF vendors. Each handler knows how that vendor's challenge script produces its client fingerprint, so the scanner can identify the vendor reliably even when the vendor is not named in HTTP headers. This is how we distinguish "a cloud WAF is present" from "no cloud WAF is visible" as a defensible dataset claim.
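A stripped-down sketch of that detection approach, assuming Playwright for Python. The signature table is a tiny illustrative subset matched by URL substring; the production handlers reverse the challenge scripts themselves rather than matching script URLs:

```python
# Sketch: identify bot-protection / WAF vendors by the challenge scripts a
# rendered page actually loads. Signatures below are illustrative substrings.
from playwright.sync_api import sync_playwright

VENDOR_SCRIPT_SIGNATURES = {
    "cloudflare": "challenges.cloudflare.com",
    "akamai-bot-manager": "akam",      # illustrative substring
    "imperva": "_incapsula_",          # illustrative substring
}

def detect_vendor(url: str) -> list[str]:
    hits: set[str] = set()

    def on_request(req):
        # Record every script the page loads after JS has executed.
        if req.resource_type == "script":
            for vendor, sig in VENDOR_SCRIPT_SIGNATURES.items():
                if sig in req.url:
                    hits.add(vendor)

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.on("request", on_request)
        page.goto(url, wait_until="networkidle")
        browser.close()
    return sorted(hits)
```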
5.3 AI CAPTCHA solver
For 25 CAPTCHA and bot-challenge vendors we maintain an AI-assisted solver used strictly for scan continuation on authorized targets. This is not an attack tool. It exists so that stage 5b and 5c can reach application screens behind a challenge and measure them, instead of stopping at the challenge page and reporting nothing. Usage is logged per scan and gated on authorization.
5.4 The fleet: 23-24 cloud platforms
Attack simulation, when it is authorized and scoped, runs from a multi-ASN fleet distributed across 23-24 cloud platforms with on-demand capacity of roughly 1,400 bots. The fleet includes AWS, GCP, Azure (restricted to Azure-hosted targets per Microsoft TOS), Alibaba, Tencent, IBM, DigitalOcean, Hetzner, Scaleway, Vultr, Linode, OVH, Fly.io, IONOS, Exoscale, Civo, Kamatera, UpCloud, Cherry, LunaNode, Leaseweb, Zenlayer, Hostinger, and Infomaniak. Not every enrolled platform is practically deployable at all times: OVH has seen account-specific suspensions, Infomaniak's API has been intermittently unavailable, and ServerSpace, an additional enrolled platform, is L7-only.
The multi-platform spread matters for two reasons. First, realistic attack traffic originates from many ASNs and many residential-adjacent IP reputations, not from a single cloud. Second, different platforms have different upstream behaviors: some rate-limit outbound UDP, some null-route SYN floods within minutes, some pass raw traffic cleanly. Characterizing a target's defense requires varying the source.
5.5 Target API Intel and mTLS flows
For customer portals and API gateways that sit behind mTLS, OAuth, or a custom authentication scheme, DDactic exposes a Target API Intel workflow where the client uploads a HAR capture of an authorized session. The scanner then crafts legitimate-looking requests from that baseline and measures rate limits and auth-flow resilience without replaying credentials outside the scoped test. This closes the gap that most external scanners have on anything gated by anything stronger than a bearer token.
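A minimal sketch of the HAR-derived baseline, following the standard HAR 1.2 schema; the filename is a placeholder, and the real workflow keeps credentials inside the scoped test:

```python
# Sketch: pull one authorized request out of a client-supplied HAR capture
# and re-issue it as a measurement baseline. Field access follows HAR 1.2.
import json
import requests

with open("authorized_session.har") as f:          # placeholder filename
    entries = json.load(f)["log"]["entries"]

req = entries[0]["request"]                         # first captured request
headers = {h["name"]: h["value"] for h in req["headers"]
           if not h["name"].startswith(":")}        # drop HTTP/2 pseudo-headers

resp = requests.request(req["method"], req["url"], headers=headers, timeout=10)
print(resp.status_code, resp.headers.get("Retry-After"))
```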
5.6 Breach cross-reference
Stage 4 cross-references the discovered domains and identities against HIBP, DeHashed, LeakCheck, LeakIX, and Hudson Rock infostealer data. This matters for DDoS planning because credential stuffing and account-takeover traffic is often indistinguishable from a legitimate login surge until it overwhelms the login service. Having the breach surface mapped at scan time lets the test plan anticipate it.
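For HIBP specifically, the lookup is a straightforward authenticated call against the real v3 API (a key and a user-agent header are required); the account shown is a placeholder, and the other sources follow the same pattern against their own endpoints:

```python
# Sketch of a breach lookup against the HIBP v3 API for one discovered
# identity. DeHashed, LeakCheck, and LeakIX follow the same request pattern.
import requests

HIBP_KEY = "..."                       # load from a secret store, not source
account = "ops@example.com"            # placeholder discovered identity

r = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{account}",
    headers={"hibp-api-key": HIBP_KEY, "user-agent": "ddactic-scan"},
    params={"truncateResponse": "false"},
    timeout=10,
)
if r.status_code == 404:
    print("no known breaches")         # 404 means the account was not found
else:
    r.raise_for_status()
    for breach in r.json():
        print(breach["Name"], breach["BreachDate"])
```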
6. Hardening: The Open Protection Index
6.1 What OPI measures
The Open Protection Index (OPI) is DDactic's attempt to move beyond binary "WAF yes / WAF no" scoring. OPI is computed per asset, on a 0-100 scale, and combines four components (a weighting sketch follows the list):
- Edge presence. Is a cloud WAF or CDN visible in front of this asset, and which vendor.
- Protection mode. Is that vendor running in monitor, challenge, or enforce mode (inferred from observed block behavior, not self-report).
- Origin isolation. Can the origin be reached directly, bypassing the edge.
- Application discipline. Are the usual expensive endpoints rate-limited, is there cache coverage on cacheable surfaces, are sourcemaps shipping to production.
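A deliberately simplified sketch of how four component scores could combine into a 0-100 value. The weights and sub-scores are assumptions for illustration; the production model is calibrated against the Defense Intelligence DB described next:

```python
# Illustrative OPI combination. Weights and sub-scores are assumptions for
# this sketch, not the calibrated production model.
WEIGHTS = {
    "edge_presence": 0.25,
    "protection_mode": 0.30,
    "origin_isolation": 0.30,
    "application_discipline": 0.15,
}

def opi(components: dict[str, float]) -> float:
    """components: each sub-score normalized to 0..1. Returns a 0..100 score."""
    return round(100 * sum(WEIGHTS[k] * components[k] for k in WEIGHTS), 1)

# Example: edge present, but monitor-mode WAF and a directly reachable origin.
print(opi({"edge_presence": 1.0, "protection_mode": 0.3,
           "origin_isolation": 0.0, "application_discipline": 0.6}))  # 43.0
```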
6.2 The Defense Intelligence DB
OPI is backed by a hash database of observed defense fingerprints per vendor per mode, so that two scans of the same target that run in different weeks produce comparable scores. The database is populated as we scan; as of this writing, Imperva is calibrated against a known-mode reference set, and Cloudflare, Akamai, Radware, F5, and AWS Shield are populated with production observations that improve as the corpus grows.
6.3 Client-applied hardening, validated
When a client applies a hardening change, such as tightening a rate-limit rule or moving a WAF tier from monitor to enforce, DDactic validates the change with a narrow stage-reprobe against the specific endpoint or rule. The test plan is updated only after validation. This prevents the common failure mode where an organization self-reports a fix and the next incident finds that the fix never actually landed. Three adjacent capabilities feed the same validation loop:
- IR response pipeline. When a customer gets hit by a real attack, we pivot the scan engine to capture what the attack looked like and what got through, not only what was blocked.
- challenge.ddactic.net. A public attack-invite endpoint that demonstrates the IR pipeline on a controlled target.
- Mobile and desktop app labs. Isolated Windows, Android, and iOS environments that intercept traffic from real customer applications to map which endpoints the apps actually call. This is how shadow APIs surface.
7. Case Study: Dogfooding DDactic
DDactic runs the full pipeline against its own infrastructure (ddactic.net and related backend hosts) on a recurring schedule. The current dogfood pass is in progress: the discovery and L7 recon stages have run, known gaps in our own infrastructure have been logged, and the corresponding hardening changes are being staged. This section will be filled in with before and after OPI scores, the specific misconfigurations we caught on ourselves, and the time to remediate, once the hardening pass closes. We will not publish a case study we have not yet lived through.
8. Implementation Framework
8.1 Maturity model
| Level | Scope | Best for |
|---|---|---|
| Level 1 · Discovery | Attack surface mapping, asset inventory reconciliation, OPI baseline, origin exposure scoring. No load generation. | Organizations starting the DDoS resilience conversation, or those with no current external inventory |
| Level 2 · Full assessment | Level 1 plus stage 5b deep crawl, stage 5c measurement on scoped endpoints, tailored test plan, and authorized simulation from the multi-cloud fleet against a pre-agreed window. | Pre-peak season hardening, compliance requirements, post-incident review, board-level reporting |
| Level 3 · Continuous | Recurring Level 2 with drift detection between runs, stage-level reprobes after each hardening change, IR pivot on live incidents, API integration into the client's security stack. | High-availability services, regulated industries, organizations with known threat exposure |
8.2 Success metrics
| Metric | Target | Why |
|---|---|---|
| Asset coverage | >95% of owned internet-facing hostnames | You cannot harden what is not in the inventory |
| Assets with visible cloud WAF / CDN | Trending up from the 32% baseline | The DDactic-corpus baseline is 32%; the goal is asset-by-asset correction |
| Origin exposure | 0 direct-to-origin paths for tier-1 assets | Any direct path invalidates the edge contract for that asset |
| OPI score (tier-1 assets) | >75 | Below 75 usually means either monitor-mode WAF or missing rate limits |
| Time to remediate critical findings | <7 days | At typical drift rates, anything slower produces a permanently growing backlog |
| Re-scan cadence | Quarterly at minimum, monthly for regulated and high-traffic | Uncaught drift compounds between scans |
9. Conclusion and Path Forward
9.1 Key takeaways
- Protection purchased is not protection applied. Across 670+ enterprise scans, 68% of discovered assets had no visible cloud WAF. The gap is in configuration and inventory, not in vendor choice.
- The attack surface is larger than the inventory. Shadow subdomains, acquired-company assets, and developer-oriented labels are where exposure lives.
- Annual pentest does not measure DDoS. DDoS resilience is a continuous configuration-drift problem, not a point-in-time finding.
- Realistic testing is possible without breaking production. Tiered recon, opt-in measurement, and authorized simulation from a 23-24 platform multi-ASN fleet let a team characterize real defense behavior without a full attack.
- Hardening has to be validated. Self-reported fixes do not count. Stage-level reprobe of the specific rule or endpoint is the difference between a closed ticket and a closed gap.
9.2 Path forward
- Baseline with a Level 1 discovery scan to reconcile the inventory against reality
- Compute the OPI baseline and identify the tier-1 assets that are in monitor mode or exposed at origin
- Run a scoped Level 2 simulation with the three or four attack vectors most likely to land, based on the test plan
- Validate each hardening change with a narrow reprobe before closing the ticket
- Move to continuous monitoring for drift; integrate via API if the security stack supports it
10. Sources
- Cloudflare, "DDoS Threat Report for 2024 Q4" and annual Radar DDoS reports: radar.cloudflare.com/reports
- NETSCOUT, "DDoS Threat Intelligence Report", published semi-annually: netscout.com/threatreport
- Corero, "DDoS Threat Intelligence Report"
- Akamai, "State of the Internet" security reports
- Ponemon Institute, "Cost of Data Center Outages" (2016)
- ITIC, "Global Server Hardware, Server OS Reliability Survey" and "Hourly Cost of Downtime" (2022)
- Gartner, widely-cited downtime cost research (2014, updated commentary since)
- Uptime Institute, "Annual Outage Analysis 2023"
- CVE-2023-44487 (HTTP/2 Rapid Reset), NVD and CISA advisories
- DDactic internal scan corpus (670+ enterprise scans, April 2026)
- DDactic attack taxonomy reference: ddactic.net/ddos-anatomy