No CDN Vendor Uses JA3 as a Blocking Signal. Here's What They Actually Check.

April 15, 2026 | 11 min read | DDactic Research

Security teams assume TLS fingerprinting blocks bots. It doesn't. We reverse-engineered the JS challenge logic of 25 WAF vendors and tested Cloudflare's Free tier at a sustained 219 RPS. Zero blocks. Not one. The TLS fingerprint was validated, logged, and completely ignored as a blocking signal.

JA3 and JA4 fingerprints have become a fixture of security conference talks and vendor marketing decks. The implication is clear: if a client presents a non-browser TLS fingerprint, it gets blocked. The reality is different. Every vendor we tested uses TLS fingerprints for classification and analytics, not for hard blocking decisions. The actual blocking signals are elsewhere, and understanding where they are changes how you think about bot mitigation and DDoS defense.

219 RPS
Sustained through Cloudflare Free tier with zero blocks, zero challenges

What JA3 and JA4 Actually Are

JA3 is a method for creating a fingerprint of a TLS client based on fields in the Client Hello message. Introduced by John Althouse, Jeff Atkinson, and Josh Atkins at Salesforce in 2017, it hashes the TLS version, cipher suites, extensions, elliptic curves, and elliptic curve point formats into an MD5 hash. JA4, its successor, adds more granularity by incorporating ALPN values, signature algorithms, and other TLS parameters into a structured format.
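
The construction is simple enough to sketch in a few lines. This is a minimal illustration of the hashing step, not a full implementation (real implementations also strip GREASE values before hashing, and the field values below are illustrative, not a real browser's):

```python
import hashlib

def ja3_hash(tls_version, ciphers, extensions, curves, point_formats):
    """Build the JA3 string (five comma-separated fields, values within a
    field joined with dashes) and return its MD5 hex digest."""
    fields = [
        str(tls_version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    ja3_string = ",".join(fields)
    return hashlib.md5(ja3_string.encode("ascii")).hexdigest()

# Two clients advertising identical Client Hello parameters produce the
# same hash; merely reordering the cipher list produces a different one.
a = ja3_hash(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0])
b = ja3_hash(771, [4867, 4866, 4865], [0, 23, 65281], [29, 23, 24], [0])
```

Because the hash covers only what the client chooses to advertise, any client that copies another's parameter list reproduces its hash exactly, a point that matters later for spoofing.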

The fingerprint identifies the TLS implementation, not the user. A Chrome browser on Windows produces a different JA3 hash than Chrome on macOS, which differs from curl, which differs from Python's requests library. In theory, this lets a WAF distinguish "real browser" from "scripted HTTP client" at the TLS layer, before any HTTP request is even processed.

In theory.

The Test: 219 RPS Through Cloudflare Free, Zero Blocks

We set up a test domain behind Cloudflare's Free tier, which includes their bot management and WAF capabilities. We then sent sustained traffic at 219 requests per second using a custom HTTP/2 client with a non-browser TLS fingerprint. The JA3 hash did not match any known browser. The cipher suite ordering was non-standard. The TLS extensions were minimal.

The result: every single request received a 200 OK response. No blocks. No CAPTCHA challenges. No 429 rate limits. The traffic ran for the full duration of the test without interruption.

What Cloudflare Did Log

Cloudflare's dashboard correctly identified the traffic as "automated" in its bot analytics. The JA3 fingerprint was recorded. The bot score was low (indicating bot-like behavior). But the default action for this classification is "log," not "block." The fingerprint was used for visibility, not enforcement.

This is not a Cloudflare-specific finding. We tested this pattern across multiple vendors and tiers. The result was consistent: TLS fingerprints feed into scoring and analytics systems, but no vendor we tested uses JA3 or JA4 as a standalone blocking signal in their default configuration.

Why Vendors Do Not Block on JA3

The reason is straightforward: false positives. Blocking on TLS fingerprint alone would break legitimate traffic at scale.

Shared Fingerprints Across Legitimate Clients

JA3 fingerprints are not unique to individual users or even individual applications. They are determined by the TLS library and its configuration, so every client built on the same stack presents the same hash: every Go binary using default crypto/tls settings, every Python script on the same requests/OpenSSL combination, every Chrome installation of a given version on a given OS. Blocking a single hash therefore blocks entire populations of legitimate clients at once.

The False Positive Problem

A CDN vendor serving millions of domains cannot afford to block traffic based on TLS fingerprint alone. One false positive at Cloudflare's scale affects thousands of sites simultaneously. The risk calculus is clear: logging is safe, blocking is not.

TLS Fingerprint Spoofing Is Trivial

Even if vendors wanted to block on JA3, the signal is easily spoofed. Libraries like utls (Go), tls-client (Python), and curl-impersonate allow any client to present an arbitrary TLS fingerprint. A bot can trivially present Chrome's exact JA3 hash while running headless automation. This makes JA3 a weak signal for enforcement because any attacker aware of the check can bypass it with a single library import.

The vendors know this. That is precisely why they treat JA3 as one input to a scoring model rather than a binary gate.

What Vendors Actually Check: The Five Layers

We reverse-engineered the JS challenge logic of 25 WAF vendors. The actual blocking decisions happen across five distinct layers, and JA3 is not a blocking signal in any of them.

Layer 1: JavaScript Execution Capability

The most fundamental check: can this client execute JavaScript? Every major vendor deploys some form of JS challenge as their primary bot detection gate. The challenge scripts vary in complexity, but the core question is binary. If a client cannot execute JavaScript, it is classified as a simple bot and either blocked or served a CAPTCHA.

This is why headless browsers (Puppeteer, Playwright) bypass basic bot detection. They execute JavaScript. The challenge passes. The TLS fingerprint is irrelevant because the JS execution check already cleared the client.

25 of 25
Vendors tested that use JS execution as their primary detection gate

Every vendor we examined deploys a JavaScript challenge as the first line of defense. Cloudflare uses its managed challenge (Turnstile), Akamai uses Bot Manager's client-side script, Imperva uses its Advanced Bot Protection JS, DataDome injects an inline script, and PerimeterX (now HUMAN) runs a sensor script. The specific implementations differ, but the architectural pattern is universal.
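
The mechanics of that gate can be sketched server-side. This is a deliberately minimal illustration, not any vendor's actual challenge; real challenges obfuscate the transform and rotate it constantly:

```python
import hashlib
import secrets

def issue_challenge():
    """Server side: hand the client a nonce that its JavaScript must
    transform and post back."""
    return secrets.token_hex(16)

def expected_answer(nonce):
    # The injected script computes this in the browser; a client that
    # cannot execute JavaScript never produces it.
    return hashlib.sha256(nonce.encode()).hexdigest()

def verify(nonce, answer):
    """The binary gate: pass -> set a clearance cookie; fail -> block
    or serve a CAPTCHA. TLS fingerprint plays no role here."""
    return answer == expected_answer(nonce)
```

The check is binary by design, which is exactly why any client that can run a JS engine, headless or not, clears it.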

Layer 2: Cookie and State Management

After a client passes the JS challenge, the vendor sets a validation cookie. Subsequent requests must present this cookie to avoid re-challenge. The cookie is typically signed, time-limited, and bound to the client's session. Vendors check the signature, the expiry, and whether the client presenting the cookie is consistent with the one that earned it: same IP range, same User-Agent, and in some implementations the same TLS fingerprint.

This is the closest that JA3 comes to being a blocking signal, but even here it is indirect. A JA3 mismatch does not block the request. It invalidates the session cookie, which triggers a new JS challenge. The client gets another chance to prove it can execute JavaScript. It is a consistency check, not a block.
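
A minimal sketch of that mechanism, with a hypothetical signing key and cookie format (no vendor's actual scheme):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # hypothetical

def issue_clearance(ja3, now=None):
    """Issued after a passed JS challenge: signed, time-limited, and
    bound to the TLS fingerprint observed during the challenge."""
    payload = json.dumps({"ja3": ja3, "exp": int(now or time.time()) + 1800})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

def check_clearance(cookie, current_ja3, now=None):
    """Returns 'pass' or 're-challenge' -- never 'block'. A JA3 mismatch
    only invalidates the session and sends the client back to Layer 1."""
    try:
        body, sig = cookie.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(body.encode()).decode()
    except Exception:
        return "re-challenge"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "re-challenge"
    claims = json.loads(payload)
    if claims["exp"] < (now or time.time()) or claims["ja3"] != current_ja3:
        return "re-challenge"
    return "pass"
```

Note the failure mode: every invalid state routes back to the challenge, not to a block decision.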

Layer 3: Browser Environment Fingerprinting

The JS challenge scripts collect far more than a pass/fail execution result. They probe the browser environment for signals that distinguish a real browser from an automated one:

| Signal | What It Detects | Spoofing Difficulty |
|---|---|---|
| Canvas fingerprint | GPU rendering differences across hardware | Medium - requires GPU emulation |
| WebGL renderer | Graphics card identity and driver version | Medium - must match OS/hardware profile |
| Navigator properties | Browser version, platform, language, plugins | Low - easily overridden |
| Screen dimensions | Headless browsers often have default/unusual sizes | Low - trivial to set |
| Audio context | Audio processing fingerprint unique to hardware | High - requires audio stack emulation |
| Font enumeration | Installed font list varies by OS and configuration | Medium - must match OS profile |
| Automation flags | navigator.webdriver, Chrome DevTools protocol markers | Low - well-known bypass techniques |

The key insight is that these signals are collected and evaluated inside the JS challenge. They are not available at the TLS layer. A client that never executes the JS challenge, such as a direct API call, is never fingerprinted at this layer at all.
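
Once those signals reach the backend, evaluation is straightforward scoring. A toy version, with made-up field names and weights (no vendor's real model):

```python
def score_environment(signals):
    """Toy server-side scoring of signals reported by the sensor script.
    Higher score = more bot-like. Weights are illustrative."""
    score = 0
    if signals.get("webdriver"):                       # navigator.webdriver set
        score += 40
    if signals.get("plugins", 0) == 0:                 # headless Chrome reports none
        score += 15
    if signals.get("screen") in {(800, 600), (0, 0)}:  # common headless defaults
        score += 15
    if not signals.get("canvas_hash"):                 # canvas probe failed entirely
        score += 20
    return score
```

A hardened headless browser that spoofs every one of these fields scores like a human, which is precisely why Layer 4 exists.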

Layer 4: Behavioral Analysis

The most sophisticated detection layer examines how a client interacts with the page over time: mouse movement and scroll patterns, keystroke timing, time-on-page before form submission, and navigation sequences that no human would produce, such as hitting a checkout endpoint with no preceding page views.

This layer is where advanced bot detection vendors like HUMAN (PerimeterX), DataDome, and Kasada differentiate themselves. The behavioral data is collected by the JS sensor script and sent back to the vendor's analysis backend. Again, none of this is available at the TLS layer.

Layer 5: Cross-Request Intelligence

The final layer aggregates signals across requests and sessions: IP and ASN reputation, request velocity across the vendor's entire customer network, reuse of the same session token from multiple addresses, and device consistency over time.

This last point, device consistency, is where JA3 plays its actual role. It is one input in a consistency check. If a client claims to be Chrome 124 via its User-Agent but presents a JA3 hash that matches Python's requests library, that inconsistency raises the risk score. But the JA3 mismatch alone does not trigger a block. It contributes to a composite score alongside dozens of other signals.
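
That composite role can be made concrete. In the sketch below (threshold and weights are hypothetical), a JA3/User-Agent mismatch contributes to the score but can never reach the blocking threshold on its own:

```python
BLOCK_THRESHOLD = 80  # hypothetical

def risk_score(signals):
    """JA3/UA inconsistency is one weighted input among many; alone it
    stays well below the blocking threshold. Weights are illustrative."""
    score = 0
    if signals.get("ja3_ua_mismatch"):
        score += 25
    if signals.get("ip_reputation") == "bad":
        score += 30
    if signals.get("failed_js_challenge"):
        score += 50
    if signals.get("abnormal_request_rate"):
        score += 30
    return score

def action(signals):
    s = risk_score(signals)
    if s >= BLOCK_THRESHOLD:
        return "block"
    return "challenge" if s >= 50 else "log"
```

With these weights, a JA3 mismatch by itself lands in "log", matching what we observed in every default configuration.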

The Gap: API Endpoints Skip All Five Layers

Here is the finding that matters most for DDoS defense. All five detection layers described above require one thing: a browser-like client that loads HTML, executes JavaScript, and interacts with page elements. API endpoints serve JSON. They are consumed by mobile apps, backend services, and scripts that never render HTML and never execute JavaScript.

The API Blind Spot

When a bot sends requests directly to an API endpoint, it bypasses all five detection layers simultaneously. There is no JS challenge to execute. There are no cookies to validate (or the API uses token auth). There is no browser environment to fingerprint. There is no behavior to analyze. And the TLS fingerprint, the one signal that IS available at the API layer, is not used as a blocking signal.

This creates a structural gap in DDoS protection. A bot that sends HTTP/2 requests to /api/v1/search with a valid API key or auth token faces none of the detection mechanisms that protect the website's HTML pages. The only signals available are IP reputation, request rate, and TLS fingerprint. Rate limiting is approximate (see our rate limit research). IP reputation is bypassable with residential proxies. And TLS fingerprint, as we have established, is not a blocking signal.

Cross-Layer Fingerprint Validation Is Absent

The logical defense would be cross-layer validation: comparing the TLS fingerprint against the HTTP-layer behavior to detect inconsistencies. For example, if a client presents Chrome's JA3 hash but sends HTTP/2 frames with non-Chrome SETTINGS values, that mismatch should be flagged.

In practice, this cross-layer validation is absent in most deployments. The TLS termination happens at the edge. The HTTP processing happens at a different layer (often a different process or even a different server). The JA3 hash may be passed as a header to the HTTP layer, but systematic cross-validation between TLS parameters and HTTP behavior is not implemented in default configurations.

| Validation Type | Available at TLS Layer | Available at HTTP Layer | Cross-Validated? |
|---|---|---|---|
| JA3/JA4 fingerprint | Yes | As header only | Rarely |
| HTTP/2 SETTINGS | No | Yes | No |
| Header order | No | Yes | No |
| ALPN negotiation | Yes | No | No |
| User-Agent consistency | No (but JA3 implies client) | Yes | Some vendors |
| TLS extension ordering | Yes | No | No |

GCP Cloud Armor is a notable exception. It supports JA3 fingerprint as a rate limiting key, allowing administrators to rate-limit per TLS fingerprint. But even there, the fingerprint is used for counting, not for outright blocking.
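
The keying pattern is easy to illustrate. In this sketch (a generic fixed-window limiter, not Cloud Armor's implementation), the JA3 hash selects the counter; it never decides block/allow by itself:

```python
import time
from collections import defaultdict

class FingerprintRateLimiter:
    """Fixed-window rate limiting keyed on the JA3 hash: the fingerprint
    picks the bucket, the request rate makes the decision."""

    def __init__(self, limit, window_s=60):
        self.limit = limit
        self.window_s = window_s
        self.counts = defaultdict(int)
        self.window_start = defaultdict(float)

    def allow(self, ja3, now=None):
        now = now if now is not None else time.time()
        if now - self.window_start[ja3] >= self.window_s:
            # Start a fresh window for this fingerprint.
            self.window_start[ja3] = now
            self.counts[ja3] = 0
        self.counts[ja3] += 1
        return self.counts[ja3] <= self.limit
```

The limitation follows directly: a client that rotates fingerprints (trivial, per the spoofing section above) lands in a fresh bucket on every rotation.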

The 25 Vendors: What We Found

We reverse-engineered the client-side JS challenge logic of 25 WAF and bot management vendors. Here is how each one uses TLS fingerprinting.

Vendor JA3 Used for Blocking? Primary Detection Method JS Challenge?
Cloudflare No - scoring only Managed Challenge (Turnstile) Yes
Akamai No - classification Bot Manager sensor script Yes
Imperva No - risk input Advanced Bot Protection JS Yes
HUMAN (PerimeterX) No - consistency check Behavioral sensor + device fingerprint Yes
DataDome No - scoring input Real-time ML on device signals Yes
Kasada No - telemetry Proof-of-work + behavioral Yes
AWS WAF No - not evaluated Rate rules + managed rule groups Optional (Bot Control)
Azure Front Door No - not evaluated Rate limiting + custom rules No
GCP Cloud Armor No - rate limit key only Rate rules + reCAPTCHA Enterprise Optional
Radware No - fingerprint DB Bot Manager + device fingerprint Yes
F5 (Shape) No - classification Signal collection + behavioral ML Yes
Fastly (Signal Sciences) No - logging Request inspection + rate rules No
Sucuri No - not evaluated JS challenge + IP reputation Yes

The remaining 12 vendors (Barracuda, Fortinet FortiWeb, Citrix/NetScaler, Reblaze, Edgio, Vercel, Netlify, StackPath, KeyCDN, Secucloud, ThreatX, and Wallarm) showed the same pattern. None use JA3 as a standalone blocking signal in their default deployment.

What This Means for DDoS Protection

The implications for DDoS defense are significant and specific:

1. Your API Endpoints Are Unprotected by Bot Detection

If your DDoS protection relies on JS challenges and behavioral analysis, your API endpoints receive none of that protection. API traffic is evaluated only by rate limits, IP reputation, and request pattern analysis. The TLS fingerprint is available but unused as a blocking signal. This means an attacker targeting your API with a distributed botnet faces minimal detection, especially if each bot stays below per-PoP rate limits.

2. Headless Browsers Pass Every Check

A headless browser with proper stealth configuration (Puppeteer with stealth plugin, Playwright with anti-detection patches) executes JavaScript, renders Canvas, produces mouse movements, and presents a valid browser TLS fingerprint. It passes all five detection layers. JA3 does not help here because the fingerprint matches a real browser. The only remaining signal is behavioral anomalies over extended sessions, which most attacks do not need to maintain.

3. The "Bot Score" Is Not a Block

Vendors report bot scores and classifications in their dashboards. Security teams see traffic flagged as "likely automated" and assume it was handled. In most default configurations, "handled" means "logged." Converting a bot score into a blocking action requires explicit rule configuration. Many organizations never take that step, leaving bot-scored traffic flowing freely to their origin.

The Configuration Gap

In our scans, we consistently find that organizations have bot detection enabled but have not configured blocking actions for medium-confidence bot traffic. The vendor detects the bot. The dashboard shows the detection. The traffic reaches the origin anyway. JA3 classification without a corresponding blocking rule is just an expensive logging mechanism.

4. DDoS Attacks Do Not Need to Evade Bot Detection

For application-layer DDoS, the attacker's goal is not to look human. It is to send enough expensive requests to exhaust backend resources. If each request is processed before the rate limit kicks in (or if the rate limit is per-PoP and therefore bypassable), the attack succeeds regardless of the bot score. The TLS fingerprint classification happens after the TCP handshake and TLS negotiation are already complete, consuming server resources. Even if the request is eventually flagged, the damage is done.

5. Cross-Layer Validation Would Help, but Nobody Ships It by Default

The defense that would make JA3 useful is systematic cross-layer validation: comparing the TLS fingerprint against HTTP/2 frame settings, header ordering, and claimed User-Agent. A mismatch between these layers is a strong bot signal. But implementing this requires correlating data from different processing stages, and no vendor we tested ships this as a default-on capability. It exists in some enterprise-tier configurations but requires manual rule creation.
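
What such a check could look like, sketched with hypothetical lookup tables (the hash and SETTINGS values below are placeholders, not real browser parameters):

```python
# Hypothetical lookup tables mapping known fingerprints to client families.
JA3_TO_CLIENT = {"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": "chrome"}
EXPECTED_H2_SETTINGS = {
    "chrome": {"HEADER_TABLE_SIZE": 65536, "INITIAL_WINDOW_SIZE": 6291456},
}

def cross_layer_consistent(ja3_hash, observed_h2_settings):
    """Compare the client implied by the TLS fingerprint against the
    HTTP/2 SETTINGS the client actually sent. Returns None when the
    fingerprint is unknown (no claim to validate), True/False otherwise."""
    implied = JA3_TO_CLIENT.get(ja3_hash)
    if implied is None:
        return None
    return observed_h2_settings == EXPECTED_H2_SETTINGS[implied]
```

The hard part is not this comparison; it is plumbing TLS-layer data and HTTP-layer data into the same decision point, which is exactly what default deployments do not do.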

Practical Recommendations

Based on our research across 25 vendors:

  1. Do not rely on JA3/JA4 for DDoS mitigation. It is a classification tool, not a defense mechanism. Your vendor is logging it. They are not blocking on it.
  2. Audit your bot detection configuration. Check whether "detect" actions are paired with "block" actions. Many deployments detect bots but take no enforcement action.
  3. Protect API endpoints separately. JS challenges cannot protect API endpoints. Implement token-based authentication, per-endpoint rate limiting, and request signature validation at the API layer.
  4. Test from outside the JS challenge path. Your penetration tests should include direct API calls that bypass the JS challenge entirely. If your API accepts unauthenticated requests, it is exposed regardless of your bot detection investment.
  5. Enable cross-layer validation where available. If your vendor offers HTTP/2 fingerprint matching or header-order analysis, enable it. These are stronger signals than JA3 alone.
  6. Measure actual blocking rate, not detection rate. Your vendor's dashboard shows what was detected. What matters is what was blocked. These numbers are often very different.
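
For recommendation 3, request signature validation is the piece most teams skip. A minimal HMAC-based sketch, assuming a per-client shared secret (key distribution and storage are out of scope here):

```python
import hashlib
import hmac
import time

API_SECRET = b"per-client-shared-secret"  # hypothetical

def sign_request(method, path, body, timestamp, secret=API_SECRET):
    """Client side: sign method + path + timestamp + body so forged
    requests fail before they reach expensive backend logic."""
    msg = f"{method}\n{path}\n{timestamp}\n".encode() + body
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def verify_request(method, path, body, timestamp, signature,
                   secret=API_SECRET, max_skew_s=300, now=None):
    """Server side: reject stale timestamps (replay) and bad signatures."""
    now = now if now is not None else time.time()
    if abs(now - timestamp) > max_skew_s:
        return False
    expected = sign_request(method, path, body, timestamp, secret)
    return hmac.compare_digest(expected, signature)
```

Unlike a JS challenge, this works for clients that never render HTML, which is precisely the traffic the five detection layers cannot see.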

What Does Your WAF Actually Block?

DDactic's free infrastructure scan tests your defenses from the outside, the way an attacker would. We check whether bot traffic is actually blocked or just logged, whether your API endpoints are exposed, and whether your WAF configuration enforces what your dashboard claims.

Get a Free Scan