We Scanned Ourselves and Found 131 Findings - A DDoS Vendor's Honest Self-Assessment

April 15, 2026 | 11 min read | Stav Barak

We build a DDoS resilience testing platform. We sell the idea that every organization has blind spots in its DDoS defenses. So we pointed our own scanner at ddactic.net and ran the full pipeline. 8.2 minutes later, we had 131 findings. Some of them were critical.

This is not a marketing exercise dressed up as transparency. We genuinely did not know what the scanner would find. We had assumptions. Some held up. Others did not. This post walks through the actual results, the gaps we discovered in our own infrastructure, and what we did about them.

If a company whose entire product is DDoS resilience testing has exploitable gaps, you should assume yours does too.

131
Findings discovered in DDactic's own production infrastructure in 8.2 minutes

Why Scan Yourself?

The short answer: because we tell every prospect they have gaps they do not know about. If we are not willing to prove that claim against ourselves, the claim is empty.

The longer answer: DDactic's scanner has grown significantly since launch. We added L7 application-layer analysis, active reconnaissance modules, cross-layer fingerprint detection, and AI-powered threat modeling. Each new capability finds things the previous version missed. We had not run the full updated pipeline against our own infrastructure since the early prototype stages.

On April 6, 2026, we ran it. No exemptions, no pre-hardening, no cleanup beforehand. The same scan we offer to prospects, pointed inward.

The Setup

Our production infrastructure is relatively simple compared to the enterprise environments we typically scan: a single production domain, ddactic.net, serving a static frontend and a Flask API backend behind Cloudflare.

We expected Cloudflare to handle the L3/L4 and basic L7 protection competently. We expected our API to be reasonably hardened. We were half right.

Scan Parameters

Target: ddactic.net. Scan date: 2026-04-06. Duration: 8.2 minutes. Pipeline: full reconnaissance, L7 analysis, active probing, AI threat assessment. No credentials provided, no internal access. Pure external attacker perspective.

131 Findings in 8.2 Minutes

The scanner completed in just over 8 minutes and returned 131 discrete findings across four categories:

| Category | Count | Severity |
| --- | --- | --- |
| L7 Application-Layer Findings | 91 | Mixed (Critical to Low) |
| Active Reconnaissance Findings | 32 | Informational to Medium |
| Test Plan Entries | 5 | Actionable |
| Production Assets Discovered | 1 | Expected |

The asset count was expected. We run a lean infrastructure. The 91 L7 findings were not. That number reflects every individual attack vector, misconfiguration, and exposure point the scanner identified at the application layer. Many were informational. Several were not.

The AI Assessment: Layer-by-Layer

Our AI threat assessment module grades each protection layer independently. Here is what it returned for ddactic.net:

| Layer | Rating | Assessment |
| --- | --- | --- |
| L3/L4 (Network/Transport) | STRONG | Cloudflare absorbs volumetric attacks effectively |
| L7 (Application) | MODERATE | WAF present but with exploitable gaps |
| API | CRITICAL | No per-endpoint rate limiting, no schema validation |
| Cross-Layer | ABSENT | No fingerprint correlation between layers |

STRONG at L3/L4 was expected. Cloudflare is genuinely good at absorbing volumetric floods. That is table stakes in 2026. The MODERATE L7 rating was a mild surprise. The CRITICAL API rating and ABSENT cross-layer rating were the findings that required immediate action.

Critical Gap 1: API Endpoints With No Per-Endpoint Rate Limiting

This was the most significant finding. DDactic's backend exposes 9 API endpoints through Cloudflare. None of them had per-endpoint rate limiting configured.

9 API Endpoints, Zero Per-Endpoint Rate Limits

Every API endpoint was protected only by Cloudflare's generic DDoS mitigation. There were no per-path rate limits, no request schema validation, and no differentiation between lightweight health checks and expensive scan-triggering operations.

Why does this matter? Not all API endpoints have equal cost. A GET /api/health call is essentially free. A POST /api/scan call triggers a multi-minute reconnaissance pipeline involving DNS resolution, HTTP probing, and cloud API calls. Without per-endpoint rate limiting, an attacker can target the expensive endpoints specifically, amplifying the impact per request.

There was also no request schema validation. The API accepted malformed JSON without rejecting it at the edge. While input validation existed at the application layer (preventing injection attacks), the absence of schema validation at the API gateway level meant that garbage requests still consumed processing cycles before being rejected.

The practical attack scenario: an attacker sends 1,000 requests per second to /api/scan with valid-looking but nonsensical payloads. Each request passes Cloudflare (no rate limit), reaches the Flask backend, gets parsed, validated, and rejected. At scale, the validation processing alone exhausts the backend.
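Rejecting malformed payloads before they reach the expensive pipeline is the cheapest mitigation for this scenario. The sketch below shows the idea in Python; the payload shape (`target`, `scan_type`) and the allowed values are hypothetical, since the post does not document the real /api/scan schema.

```python
import json

# Hypothetical schema for POST /api/scan -- illustrative only; the real
# payload shape is not documented in this post.
REQUIRED_FIELDS = {"target": str, "scan_type": str}
ALLOWED_SCAN_TYPES = {"recon", "l7", "full"}

def validate_scan_request(raw_body: bytes):
    """Reject garbage cheaply, before any expensive scan work is scheduled.

    Returns (ok, reason). Everything here is O(len(body)); nothing touches
    DNS, HTTP probes, or cloud APIs.
    """
    try:
        payload = json.loads(raw_body)
    except (json.JSONDecodeError, UnicodeDecodeError):
        return False, "body is not valid JSON"
    if not isinstance(payload, dict):
        return False, "body must be a JSON object"
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(payload.get(field), ftype):
            return False, f"missing or invalid field: {field}"
    if payload["scan_type"] not in ALLOWED_SCAN_TYPES:
        return False, "unknown scan_type"
    return True, "ok"
```

The point is not the specific checks but where they run: validation this cheap can sit at the edge (a WAF rule or gateway), so nonsense payloads never consume origin cycles at all.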

Critical Gap 2: Cache Bust Attacks at 50K Deadly RPS

The scanner confirmed that cache bust attacks were viable against our infrastructure. By appending random query parameters to static asset URLs, an attacker can force Cloudflare to treat each request as a cache miss, sending every request to the origin server.

Cache Bust Attack Pattern

Request: GET /index.html?bust=random_string_12345

Cloudflare sees a unique URL per request. Cache hit rate drops to 0%. Every request hits origin.

The scanner estimated that 50,000 requests per second using this technique would be sufficient to overwhelm the origin, bypassing Cloudflare's caching layer entirely.

Impact: Origin server denial of service despite Cloudflare CDN protection. The CDN becomes a transparent proxy rather than a shield.

This is a well-known attack class. Cloudflare offers cache key normalization and query string stripping, but these features require explicit configuration. We had not configured them. The scanner found the gap in seconds.
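The normalization logic itself is simple, which is part of why the gap stings. A minimal sketch of the idea, assuming (for illustration) that static assets are identified by file extension:

```python
from urllib.parse import urlsplit, urlunsplit

# Extensions treated as static assets -- an assumption for illustration;
# the post does not enumerate ddactic.net's actual asset types.
STATIC_EXTENSIONS = (".html", ".css", ".js", ".png", ".svg", ".woff2")

def cache_key(url: str) -> str:
    """Strip query strings from static assets so ?bust=... variants all
    collapse onto a single cached object instead of forcing origin hits."""
    parts = urlsplit(url)
    if parts.path.endswith(STATIC_EXTENSIONS):
        parts = parts._replace(query="")
    return urlunsplit(parts)
```

With this in place, `/index.html?bust=a` and `/index.html?bust=b` map to the same cache key, and the attack pattern above degenerates into ordinary cache hits. In Cloudflare the equivalent is a cache rule that ignores the query string for static paths; the Python version just makes the transformation explicit.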

50,000 RPS is not a large attack. Botnets routinely generate millions of requests per second. The "deadly RPS" metric represents the threshold at which the origin fails, not the volume the attacker needs to generate. A lower deadly RPS means a cheaper, easier attack.

Critical Gap 3: Missing Cross-Layer Fingerprint Validation

This finding was the most architecturally interesting. The scanner identified that DDactic's infrastructure performed no cross-layer fingerprint correlation. Specifically: a bot presenting a Chrome TLS fingerprint (JA3/JA4) at the TLS layer but sending non-browser HTTP headers would pass through unchallenged.

What Is Cross-Layer Validation?

Cross-layer validation compares signals from different protocol layers to detect inconsistencies. A real Chrome browser has a specific TLS fingerprint, specific HTTP/2 behavior, specific header ordering, and specific JavaScript execution capability. If the TLS layer says "Chrome" but the HTTP layer says "Python requests library," something is wrong. Without cross-layer checks, each layer evaluates independently, and attackers only need to spoof one layer at a time.
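The core check can be stated in a few lines. This sketch is deliberately minimal: the JA4 hash is a placeholder (not a real Chrome fingerprint), and the header heuristic is far cruder than a production fingerprint database.

```python
# Placeholder set of "browser-like" JA4 hashes -- invented for illustration,
# not a real fingerprint database.
CHROME_LIKE_JA4 = {"t13d1516h2_8daaf6152771_b0da82dd1658"}

def layers_consistent(ja4: str, headers: dict) -> bool:
    """True when the TLS layer and the HTTP layer tell the same story."""
    claims_chrome = ja4 in CHROME_LIKE_JA4
    ua = headers.get("User-Agent", "")
    # Crude stand-in for real header-order / HTTP/2 behavior analysis.
    looks_like_browser = "Chrome" in ua and "Accept-Language" in headers
    return claims_chrome == looks_like_browser
```

A bot using a TLS-impersonation library gets the left side right and the right side wrong; a naive script gets both wrong, which is at least consistent with an API client. Either way, the signal only exists if someone compares the layers.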

Cloudflare performs some cross-layer analysis through its Bot Management product, but we were on the standard tier without Bot Management. The standard Cloudflare WAF evaluates each request in isolation. It does not correlate TLS fingerprints with HTTP behavior patterns.

There is a deeper problem here specific to API protection. Cloudflare's primary bot detection mechanism is its JavaScript challenge (the "checking your browser" interstitial). But API clients cannot execute JavaScript. They are not browsers. This means Cloudflare's most effective bot detection tool is architecturally incompatible with API traffic. The scanner flagged this as a fundamental gap, not a misconfiguration.

The 91 L7 Findings: A Closer Look

The 91 application-layer findings broke down into several sub-categories, ranging from information disclosure to protocol-level abuse.

Not all of these are critical. Information disclosure findings are low severity individually. But they feed into reconnaissance: knowing the exact server software version helps an attacker choose the right exploit. The protocol-level abuse findings were more concerning, as HTTP/2 multiplexing attacks can amplify a single TCP connection into hundreds of concurrent requests.
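The multiplexing amplification is worth putting numbers on. A back-of-envelope calculation, assuming the common `SETTINGS_MAX_CONCURRENT_STREAMS` value of 100 (real servers vary; some allow more):

```python
# HTTP/1.1: one in-flight request per TCP connection (ignoring pipelining,
# which is effectively dead). HTTP/2: max_concurrent_streams per connection.
max_streams_per_conn = 100   # common default; server-configurable
tcp_connections = 1_000      # a modest botnet's socket count

http1_in_flight = tcp_connections * 1
http2_in_flight = tcp_connections * max_streams_per_conn

print(http1_in_flight)  # 1000
print(http2_in_flight)  # 100000
```

Same number of sockets, two orders of magnitude more concurrent requests: that is the amplification the scanner flagged.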

The 32 Active Reconnaissance Findings

The active recon module probes for exposed infrastructure, DNS misconfigurations, and network-level exposures. Against ddactic.net, it found 32 items.

Most of these were informational. None revealed a direct exploitable vulnerability. But they represent the information an attacker gathers before launching a targeted attack. The more an attacker knows about your stack, the more precisely they can target its weaknesses.

What We Fixed

Within 48 hours of the scan, we deployed three categories of remediation:

1. API Gateway Rate Limiting

We implemented per-endpoint rate limits with differentiated thresholds based on endpoint cost:

| Endpoint Type | Before | After |
| --- | --- | --- |
| Health/status checks | No limit | 60 req/min per IP |
| Read operations (GET) | No limit | 30 req/min per IP |
| Write operations (POST) | No limit | 10 req/min per IP |
| Scan triggers | No limit | 3 req/min per IP |

Rate limits are enforced at two layers: Cloudflare WAF custom rules (edge enforcement) and application-level middleware (origin enforcement). The dual-layer approach ensures that even if Cloudflare's per-PoP counting allows some excess through (see our rate limit research), the origin has its own backstop.
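The origin-side backstop can be a small sliding-window counter keyed on (IP, endpoint). This is a sketch of the mechanism, not our production middleware; the endpoint paths are illustrative, and the limits mirror the table above.

```python
import time
from collections import defaultdict, deque

# Per-endpoint limits (requests per minute per IP), mirroring the table
# above. Paths are illustrative; the post does not list the real routes.
LIMITS = {"/api/health": 60, "/api/scan": 3}
DEFAULT_LIMIT = 30  # read operations

_windows = defaultdict(deque)  # (ip, path) -> timestamps of recent requests

def allow(ip, path, now=None):
    """Sliding 60-second window per (IP, endpoint). Returns True if the
    request is within budget and records it; False if it should be 429'd."""
    now = time.monotonic() if now is None else now
    window = _windows[(ip, path)]
    while window and now - window[0] >= 60:
        window.popleft()  # drop requests that have aged out of the window
    if len(window) >= LIMITS.get(path, DEFAULT_LIMIT):
        return False
    window.append(now)
    return True
```

In a Flask app this would hang off a `before_request` hook; the important property is that the expensive `/api/scan` budget (3/min) is independent of the cheap health-check budget (60/min), so an attacker cannot spend health-check allowance on scan triggers.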

2. Cross-Layer Scoring

We deployed a cross-layer fingerprint scoring system that correlates TLS, HTTP/2, and application-layer signals. Requests with inconsistent fingerprints (for example, a JA4 hash indicating Chrome but HTTP headers indicating a Python script) receive a risk score. Requests above the threshold are challenged or blocked.

This is not a simple allowlist/denylist. Legitimate API clients have non-browser fingerprints by design. The scoring system accounts for this by maintaining profiles of known API client signatures and flagging only anomalous combinations.
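A sketch of that scoring shape: profiled API clients short-circuit to zero risk, everything else accumulates points for cross-layer disagreement. The signatures, prefixes, and weights are all invented for illustration; our real profiles and thresholds are not published here.

```python
# Known API client profiles: (JA4 hash, User-Agent prefix) pairs.
# Both values below are placeholders, not real signatures.
KNOWN_API_CLIENTS = {("ja4_api_client_example", "ddactic-sdk")}

def risk_score(ja4: str, user_agent: str, ran_js_challenge: bool) -> int:
    """Higher score = more cross-layer inconsistency. Weights are illustrative."""
    for sig_ja4, ua_prefix in KNOWN_API_CLIENTS:
        if ja4 == sig_ja4 and user_agent.startswith(ua_prefix):
            return 0  # profiled API client: non-browser fingerprint is expected
    score = 0
    claims_browser = "Chrome" in user_agent or "Firefox" in user_agent
    if claims_browser and not ran_js_challenge:
        score += 50  # HTTP layer says browser, but no JavaScript ever ran
    if not claims_browser and ja4.startswith("t13d15"):  # "browser-like" TLS, invented heuristic
        score += 40  # TLS layer and HTTP layer disagree
    return score
```

Requests scoring above a threshold get challenged or blocked; profiled clients never pay the inconsistency penalty, which is what keeps legitimate non-browser traffic flowing.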

3. WAF Rule Audit and Cache Configuration

We reviewed every Cloudflare WAF managed rule and enabled additional rule groups that were previously set to "log only" or disabled entirely. We configured cache key normalization to strip query parameters from static assets, eliminating the cache bust vector. We also tightened the error page configuration to reduce information disclosure.

Time to Remediate

API rate limiting: 4 hours (including testing). Cross-layer scoring: 12 hours (new capability, required development). WAF audit and cache configuration: 2 hours. Total: approximately 18 hours of engineering time spread across 48 hours.

What the Scan Did Not Find

Transparency requires mentioning what went right, not just what went wrong.

The scanner found no origin IP exposure, no authentication weaknesses, and no input validation failures. These are not trivial results: origin IP protection, authentication, and input validation represent fundamental security hygiene, and getting those right matters. But getting them right while leaving API rate limiting and cross-layer validation absent creates a false sense of security. The strong outer wall makes the unguarded side door more dangerous, not less.

Lessons for Security Teams

Running this scan on ourselves reinforced several things we already believed, and taught us a few things we did not expect.

1. CDN Protection Creates Dangerous Assumptions

Having Cloudflare in front of your infrastructure feels like having protection. And at L3/L4, it genuinely is. But that confidence bleeds into assumptions about L7 and API protection that may not be justified. We assumed our API was "behind Cloudflare, therefore protected." It was behind Cloudflare, but not protected at the API layer in any meaningful way.

2. Simple Infrastructure Is Not Simple to Protect

DDactic has one production domain, a handful of API endpoints, and a static frontend. This is not a complex enterprise environment with hundreds of microservices. And yet the scanner found 131 findings. Complexity is not the primary driver of exposure. Assumptions and blind spots are.

3. The Gap Between "Configured" and "Tested" Is Where Attacks Land

We had Cloudflare WAF enabled. We had managed rules active. We had DDoS protection turned on. All configured. None of it had been tested from an attacker's perspective against our specific configuration. The cache bust vector existed because we had caching configured but had not configured cache key normalization. The feature was available. We simply had not enabled it.

4. Cross-Layer Validation Is the Next Frontier

Most DDoS protection operates within a single layer. The WAF looks at HTTP. The CDN looks at caching. Rate limiting looks at request counts. Nobody correlates across layers. This is where sophisticated attackers find their opening, presenting valid-looking signals at each layer individually while the combination reveals them as malicious.

5. API Protection Requires Different Thinking

The entire CDN protection model was designed for browser traffic. JavaScript challenges, CAPTCHAs, cookie validation: these all assume a browser on the other end. APIs break that assumption fundamentally. If your API sits behind the same CDN as your website, the protection mechanisms designed for your website may not apply to your API at all.

The Uncomfortable Truth

We are a DDoS resilience testing company. Our entire value proposition is finding gaps that defenders miss. We had gaps. If you have not run a similar assessment against your own infrastructure recently, the question is not whether you have gaps. It is how many, and how critical.

Why We Are Publishing This

There is a reasonable argument against publishing a self-assessment that reveals weaknesses. It could undermine confidence in our product. It could give competitors ammunition.

We are publishing it anyway for three reasons.

First, the gaps are fixed. The vulnerabilities described in this post were remediated within 48 hours of discovery. Publishing the details after remediation is standard responsible disclosure practice.

Second, it validates our product. If our scanner finds 131 findings in our own infrastructure, including critical API gaps we did not know about, it demonstrates that the tool works. The best marketing for a vulnerability scanner is showing that it finds real vulnerabilities in environments that should be well-defended.

Third, and most importantly: the security industry has a credibility problem. Vendors routinely claim their products are secure without evidence. We would rather show you that we had gaps and fixed them than pretend we never had gaps at all. That is the point. Even a DDoS resilience testing company has blind spots. Continuous testing is not optional. It is the only way to find what you do not know you are missing.

8.2 min
Time from scan start to full findings report, including AI threat assessment

Run It Against Your Infrastructure

The same scanner that found 131 findings in our infrastructure is available as a free scan. No credentials required. No agents to install. We point it at your domain and deliver a full reconnaissance report, L7 analysis, and AI-powered threat assessment.

If a DDoS vendor's own infrastructure had critical API gaps and missing cross-layer validation, your infrastructure almost certainly has findings worth knowing about.

Get Your Free DDoS Resilience Scan

131 findings in 8.2 minutes. Same scanner, same depth, pointed at your infrastructure. External reconnaissance, L7 analysis, and AI threat assessment included. No cost, no commitment.

Start Your Free Scan