In our introduction to the Open Protection Index, we explained why DDoS resilience needs a standardized measurement. Now it is time to go deeper: how exactly is the OPI score calculated, what patterns emerge when you apply it across industries, and what practical steps can move your score from a D to a B.
The data in this article is based on DDactic's assessment pipeline across dozens of organizations. These are observed patterns, not guaranteed benchmarks. Every environment is different. But the patterns are consistent enough to be useful for security teams trying to understand where they stand relative to peers.
What Makes Up the OPI Score
OPI is not a single measurement. It is a weighted composite of six components, each measuring a different dimension of DDoS resilience. The final score ranges from 0 to 100, with letter grades that map to real-world protection levels.
- Defense Coverage (20%): Measures deployed defenses before any attack: CDN presence, WAF deployment, origin IP protection, rate limiting, and scrubbing center availability.
- L7 Attack Resilience (25%): The heaviest-weighted component. Measures resilience against HTTP floods, Slowloris, cache bypass, and API abuse. L7 attacks represent 71% of all DDoS traffic.
- L3/L4 Resilience (15%): Network- and transport-layer protection: SYN floods, UDP amplification, protocol abuse. If your origin is properly hidden behind a CDN, you get an automatic 85 points.
- Protocol Resilience (15%): Modern protocol vulnerabilities: HTTP/2 Rapid Reset (CVE-2023-44487), CONTINUATION floods, and QUIC-specific attacks. These vulnerabilities do not exist in HTTP/1.1.
- Operational Resilience (15%): Real-world availability during attacks, measured by external validators. Includes latency degradation, false positive rate, and recovery time.
- Evasion Resistance (10%): Detection of sophisticated attacks: TLS fingerprint (JA3/JA4) rotation, slow-rate attacks below thresholds, header randomization, and IP rotation handling.
How Each Component Is Measured
Defense Coverage (20%)
This is the architectural baseline, measured before any attack simulation begins. Five weighted sub-factors contribute:
- CDN Deployment (25%): Are all public-facing assets served through a CDN? Detection uses CNAME patterns, server headers, and IP range analysis.
- WAF Deployment (25%): Is a Web Application Firewall active on all endpoints? Challenge pages, block pages, and response headers confirm presence.
- Origin Protection (20%): Is the origin server IP hidden from public discovery? Certificate transparency logs, historical DNS, and direct IP probing reveal exposure.
- Rate Limiting (15%): Are per-endpoint rate limits configured? An endpoint with no rate limiting is an open invitation for volume attacks.
- Scrubbing Center (15%): Is always-on or on-demand scrubbing available upstream? Always-on scores 100, on-demand scores 50, none scores 0.
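A minimal sketch of how these sub-factors combine, using the weights listed above. Function and key names are illustrative assumptions, not the OPI reference implementation.

```python
# Illustrative Defense Coverage sub-score. Keys and names are
# hypothetical; weights follow the list above.
SUBFACTOR_WEIGHTS = {
    "cdn": 0.25,         # CDN deployment
    "waf": 0.25,         # WAF deployment
    "origin": 0.20,      # origin IP hidden from discovery
    "rate_limit": 0.15,  # per-endpoint rate limits configured
    "scrubbing": 0.15,   # always-on = 100, on-demand = 50, none = 0
}

def defense_coverage(scores: dict[str, float]) -> float:
    """Weighted sum of the five sub-factors, each scored 0-100."""
    return sum(scores[k] * w for k, w in SUBFACTOR_WEIGHTS.items())

# Example: CDN and WAF in place, origin hidden, no rate limits,
# on-demand scrubbing only.
print(round(defense_coverage({
    "cdn": 100, "waf": 100, "origin": 100,
    "rate_limit": 0, "scrubbing": 50,
}), 1))  # 77.5
```

Note how on-demand scrubbing and missing rate limits alone pull an otherwise solid architecture down by more than 20 points.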
L7 Attack Resilience (25%)
At the Validated tier (active testing), each attack category is scored on three metrics: availability during attack (50% weight), latency degradation (30%), and error rate (20%). At the Estimated tier (passive reconnaissance), the score starts from an architecture-inferred baseline and is reduced by penalties for detected attack surface.
These penalties target DDoS-relevant findings that bypass CDN and WAF protection:
- GraphQL introspection enabled: -12 points. Complexity attacks bypass cache and exhaust origin resources.
- Large uncacheable API surface (>20 endpoints): -6 points. Each endpoint is a potential flood target that skips edge caching.
- No rate limiting on login endpoints: -8 points. Unlimited authentication volume with CPU-intensive password hashing.
- WordPress XMLRPC enabled: -6 points. Pingback amplification vector for reflected DDoS.
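The Estimated-tier mechanics above can be sketched as a baseline minus penalties. The baseline value and function names here are assumptions for illustration; the penalty values follow the list above.

```python
# Illustrative Estimated-tier L7 score: architecture-inferred baseline
# minus penalties for detected attack surface, floored at zero.
PENALTIES = {
    "graphql_introspection": 12,  # introspection enabled
    "large_uncacheable_api": 6,   # >20 uncacheable endpoints
    "no_login_rate_limit": 8,     # unlimited auth volume
    "wordpress_xmlrpc": 6,        # pingback amplification vector
}

def l7_estimated(baseline: float, findings: set[str]) -> float:
    """Baseline minus the penalty for each detected finding."""
    penalty = sum(PENALTIES[f] for f in findings)
    return max(0.0, baseline - penalty)

# Hypothetical target: introspection enabled, no login rate limiting.
print(l7_estimated(80.0, {"graphql_introspection", "no_login_rate_limit"}))  # 60.0
```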
L3/L4 Resilience (15%)
If your origin IP is properly hidden behind a CDN, you receive an automatic 85 points for this component. This reflects an architectural reality: volumetric attacks against a CDN-protected origin simply cannot reach the target. The full 100 points require a hidden origin plus upstream scrubbing and ISP-level protection.
When the origin is accessible, SYN floods (40% weight), UDP floods (30%), amplification attacks (20%), and protocol abuse (10%) are tested directly.
Protocol Resilience (15%)
HTTP/2 introduced connection multiplexing, which also introduced new attack surfaces. The Rapid Reset attack (CVE-2023-44487) carries 35% of this component's weight. CONTINUATION floods (CVE-2024-27316) carry 25%. The remaining 40% covers PING floods, SETTINGS floods, and empty frame attacks.
HTTP/3/QUIC introduces its own attack surface: initial flood, connection ID exhaustion, 0-RTT replay, and version negotiation abuse. If a protocol is not supported by the target, those attacks are excluded from scoring rather than penalized.
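The exclusion rule matters: a target that does not speak HTTP/3 is not penalized for "failing" QUIC tests; the remaining weights are renormalized instead. A sketch of that logic, with assumed category names and an assumed QUIC weight (the source gives 35/25/40 for the HTTP/2 categories only):

```python
# Illustrative renormalization: attacks against unsupported protocols
# are excluded from scoring rather than counted as failures.
def protocol_resilience(scores: dict[str, float],
                        weights: dict[str, float],
                        supported: set[str]) -> float:
    applicable = {k: w for k, w in weights.items() if k in supported}
    total = sum(applicable.values())
    if total == 0:
        return 100.0  # nothing applicable to test
    return sum(scores[k] * w for k, w in applicable.items()) / total

weights = {"rapid_reset": 0.35, "continuation": 0.25,
           "h2_floods": 0.15, "quic": 0.25}  # quic weight assumed
scores = {"rapid_reset": 90, "continuation": 80, "h2_floods": 100}

# Target speaks HTTP/2 but not HTTP/3: QUIC is excluded, not penalized.
print(round(protocol_resilience(
    scores, weights, {"rapid_reset", "continuation", "h2_floods"}), 1))  # 88.7
```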
Operational Resilience (15%)
This is where theoretical protection meets measured reality. External validators from ISP/residential IPs (35% weight), datacenter probes (25%), real browsers (20%), and third-party monitors (10%) measure what actually happens during an attack: Does the service stay up? How much does latency degrade? Are legitimate users falsely blocked? How quickly does the service recover?
Evasion Resistance (10%)
The smallest component by weight, but often the most revealing. Can the target's defenses detect TLS fingerprint rotation (JA3/JA4), which carries 40% of this score? Can they identify slow-rate attacks that stay below individual thresholds (20%)? These are the techniques that sophisticated attackers use to bypass volumetric detection.
```
// OPI Total Score Calculation
OPI = ( Defense_Coverage       × 0.20
      + L7_Attack_Resilience   × 0.25
      + L3_L4_Resilience       × 0.15
      + Protocol_Resilience    × 0.15
      + Operational_Resilience × 0.15
      + Evasion_Resistance     × 0.10 )
```
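The same composite, as a runnable sketch with an example component profile (the profile values are made up for illustration):

```python
# Weighted composite of the six OPI components, each scored 0-100.
OPI_WEIGHTS = {
    "defense_coverage": 0.20,
    "l7_attack_resilience": 0.25,
    "l3_l4_resilience": 0.15,
    "protocol_resilience": 0.15,
    "operational_resilience": 0.15,
    "evasion_resistance": 0.10,
}

def opi_total(components: dict[str, float]) -> float:
    return sum(components[k] * w for k, w in OPI_WEIGHTS.items())

# Hypothetical profile: strong edge defenses, weak evasion detection.
example = {
    "defense_coverage": 85, "l7_attack_resilience": 70,
    "l3_l4_resilience": 85, "protocol_resilience": 60,
    "operational_resilience": 75, "evasion_resistance": 30,
}
print(round(opi_total(example), 1))  # 70.5
```

A 70.5 lands in the C band: adequate, with specific component gaps to close.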
The Scoring Rubric
What does each score range actually mean for your organization? Here is the full rubric with practical implications:
| Score | Grade | What It Means | Practical Impact |
|---|---|---|---|
| 90-100 | A | Excellent. Enterprise-grade defense across all layers. CDN + WAF + scrubbing + behavioral detection. Patched against protocol CVEs. Minimal degradation under sustained attack. | Survives coordinated multi-vector attacks. Recovery in seconds. Minimal board-level risk. |
| 80-89 | B | Good. Solid defenses with minor gaps. Strong CDN and WAF, but may lack evasion detection or have unpatched protocol vulnerabilities. Origin is protected. | Handles most attack campaigns. Sophisticated attackers may find workarounds. 1-2 targeted improvements needed. |
| 70-79 | C | Adequate. Basic CDN protection is in place, but significant blind spots remain. Missing rate limiting on key endpoints, or exposed API surface without cache-bypass mitigation. | Survives unsophisticated floods. Vulnerable to targeted L7 attacks, slow-rate techniques, or protocol exploits. Moderate risk. |
| 60-69 | D | Poor. Major vulnerabilities present. Origin may be exposed, WAF coverage is incomplete, or large uncacheable API surface is unprotected. Little evasion detection. | Service will degrade or go down under moderate attack pressure. Extended recovery time. Significant DDoS risk. |
| 0-59 | F | Critical. Minimal to no DDoS protection. No CDN, no WAF, origin directly accessible. Crashes at low request volumes. | Outage within seconds of a targeted attack. Any attacker with basic tools can take the service down. Immediate action required. |
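The score-to-grade mapping in the table reduces to a simple threshold function:

```python
# Map an OPI score (0-100) to the rubric's letter grade.
def opi_grade(score: float) -> str:
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

print(opi_grade(72))  # C
print(opi_grade(40))  # F
```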
Industry Patterns: What the Data Shows
After running OPI assessments across multiple verticals, clear patterns emerge. These ranges reflect DDactic's assessment data, not guaranteed industry-wide benchmarks. But the consistency is notable: the same structural factors that drive resilience in one financial institution tend to appear across the sector, and the same gaps that weaken one healthcare organization appear across the vertical.
Financial Services
Regulatory pressure drives investment. Strong CDN + WAF deployment, scrubbing centers, and dedicated security teams. Weaknesses tend to appear in evasion resistance and protocol patching for HTTP/2 CVEs.
E-Commerce / Retail
CDN adoption is high (performance-driven), but WAF configuration gaps are common. Large API surfaces for product catalogs, search, and checkout create cache-bypass vectors. Seasonal traffic makes rate limiting harder to tune.
Gaming / Entertainment
Strong L3/L4 protection (game servers need it), but L7 resilience lags behind. Real-time APIs for matchmaking, leaderboards, and chat are difficult to cache and often lack rate limiting. WebSocket and custom protocol surfaces add complexity.
Healthcare
Legacy systems, on-premises infrastructure, and limited security budgets. Many patient portals run on older platforms without CDN protection. HIPAA compliance focuses on data privacy, not availability under attack. Origin servers are frequently exposed.
SaaS / Technology
Wide variance. Cloud-native SaaS platforms with auto-scaling score well. But large API surfaces, GraphQL endpoints, and developer-facing infrastructure often lack DDoS-specific hardening. Evasion resistance depends heavily on whether bot management is deployed.
Government / Public Sector
Procurement cycles delay security upgrades. Many government sites run on aging infrastructure with no CDN. When CDN is present, configuration tends to be basic. Frequent targets during geopolitically motivated campaigns.
Assessment Tier Matters
These ranges reflect OPI Estimated tier scores (passive reconnaissance + L7 recon). Validated tier scores (with active testing) can be higher or lower, because actual behavior under attack often differs from what architecture suggests. A CDN that is "present" may still pass malicious traffic due to misconfigured rules.
Why Financial Services Leads
Financial institutions consistently score highest, and the reasons are structural rather than accidental:
- Regulatory requirements. PCI DSS, SOX, and banking regulators mandate availability controls. Security budgets follow compliance mandates.
- Dedicated DDoS protection. Most financial institutions deploy enterprise-grade scrubbing (Akamai Prolexic, Radware DefensePro, or similar) in addition to CDN-level WAF.
- Security team depth. Larger organizations have SOC teams that actively monitor and tune WAF rules, update rate limits, and respond to emerging threats.
- Incident experience. Financial services is one of the most targeted sectors. Organizations that have survived attacks tend to invest in preventing the next one.
Their gaps are typically in evasion resistance (TLS fingerprint detection requires specialized bot management) and protocol-level patching (HTTP/2 CVEs require coordinated updates across multiple proxy layers).
Why Healthcare Struggles
Healthcare organizations consistently score lowest, and the pattern is predictable:
- Legacy infrastructure. Patient portals, EHR interfaces, and clinical systems often run on platforms that predate modern CDN integration. Migrating a hospital information system behind Cloudflare is not a weekend project.
- Budget allocation. HIPAA compliance consumes most of the security budget, and HIPAA focuses on confidentiality and integrity, not availability under attack.
- Exposed origins. Direct IP access to application servers is common. Many healthcare systems were built before origin-hiding was standard practice.
- Limited rate limiting. Clinical workflows require high availability with minimal friction. Rate limiting is seen as a risk to patient care, so it is often not deployed.
Healthcare is a High-Value Target
Ransomware groups increasingly use DDoS as a secondary pressure tactic during extortion campaigns. A healthcare organization with an OPI score below 50 faces both the DDoS itself and the reputational and regulatory consequences of downtime affecting patient access.
The Gaming Paradox: Strong L4, Weak L7
Gaming companies present an interesting split. Their L3/L4 resilience is often excellent because game servers have been DDoS targets for decades. Game hosting providers bake in volumetric protection as a baseline.
But their L7 scores lag behind. Why?
- Real-time APIs resist caching. Matchmaking, leaderboard, and chat endpoints must serve fresh data. CDN caching is irrelevant here.
- WebSocket surfaces. Persistent connections for game state complicate rate limiting. You cannot simply block a connection that has been alive for hours.
- User-generated content. In-game reporting, chat, and content creation endpoints are expensive to process and hard to rate-limit without hurting the player experience.
- API sprawl. Mobile companion apps, web dashboards, developer APIs, and internal tooling create dozens of uncacheable endpoints across multiple services.
What a Low OPI Score Means in Practice
A number on its own is abstract. Here is what low scores translate to in operational terms:
- More attack vectors available. A low Defense Coverage score means attackers have multiple paths to impact. Exposed origins, missing WAF, no rate limiting: each gap is a separate vector an attacker can exploit without sophistication.
- Longer time-to-mitigation. Without always-on scrubbing, mitigation depends on detecting the attack and manually engaging a service. On-demand scrubbing adds minutes. No scrubbing at all means you absorb the full volume.
- Higher probability of service degradation. A score below 60 means most attack categories will cause measurable user impact. Latency spikes, error rates, or complete outage within the first minutes of sustained pressure.
- Greater financial exposure. DDoS downtime costs vary by industry, but the correlation is clear: organizations with lower OPI scores experience longer outages, higher recovery costs, and more frequent incidents. We covered the financial impact in detail in "What Does DDoS Downtime Cost?"
How to Improve Your OPI Score
OPI is designed to be actionable. Each component maps to specific infrastructure and configuration changes. Here are the highest-impact improvements for each component:
Defense Coverage: Deploy a CDN with WAF on all public assets
This is the single highest-impact change. A properly configured CDN with WAF covers 50% of the Defense Coverage score (CDN 25% + WAF 25%). Ensure every public-facing domain and subdomain routes through the CDN, not just the marketing site. API subdomains, customer portals, and internal tools need coverage too.
Defense Coverage: Hide your origin IP
If your origin server IP is discoverable through historical DNS records, certificate transparency logs, or direct scanning, your CDN protection can be bypassed entirely. Firewall origin access to CDN IP ranges only. This alone moves L3/L4 Resilience to an automatic 85.
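A firewall-audit script can verify that origin traffic is restricted to CDN ranges. The ranges below are documentation placeholders (TEST-NET blocks), not any real CDN's ranges; real providers publish their IP lists.

```python
# Illustrative check that a source IP belongs to the CDN's published
# ranges; anything else hitting the origin should be blocked.
import ipaddress

CDN_RANGES = [ipaddress.ip_network(n)
              for n in ("203.0.113.0/24", "198.51.100.0/24")]  # placeholders

def allowed(source_ip: str) -> bool:
    ip = ipaddress.ip_address(source_ip)
    return any(ip in net for net in CDN_RANGES)

print(allowed("203.0.113.7"))  # True  -- CDN edge node
print(allowed("192.0.2.50"))   # False -- direct-to-origin, block it
```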
L7 Resilience: Implement per-endpoint rate limiting
Not all endpoints cost the same. Set rate limits based on the computational cost of each endpoint. Login pages with password hashing: 5 requests/minute per IP. Search with database queries: 10/minute. Static content: 1000/minute. This prevents cache-bypass floods from reaching your origin at full volume.
L7 Resilience: Close GraphQL and XMLRPC attack surface
Disable GraphQL introspection in production. Configure query complexity limits (max depth, max cost). Disable WordPress XMLRPC if you do not need pingback functionality. These changes can recover up to 26 penalty points on the L7 score at the Estimated tier.
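Introspection should be disabled in the GraphQL server itself; as defense in depth, a middleware can also reject introspection queries outright. A deliberately naive sketch:

```python
# Naive middleware-style check: block queries that touch GraphQL's
# introspection fields. Real deployments should disable introspection
# server-side; this is only a defense-in-depth illustration.
INTROSPECTION_FIELDS = ("__schema", "__type")

def reject_introspection(query: str) -> bool:
    """Return True if the query should be blocked."""
    return any(field in query for field in INTROSPECTION_FIELDS)

print(reject_introspection("{ __schema { types { name } } }"))  # True
print(reject_introspection("{ user(id: 1) { name } }"))         # False
```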
Protocol Resilience: Patch HTTP/2 CVEs and configure stream limits
Ensure your reverse proxy (Nginx, HAProxy, or CDN) is patched against Rapid Reset (CVE-2023-44487) and CONTINUATION floods (CVE-2024-27316). Configure HTTP/2 max concurrent streams, max header list size, and PING frame limits. These carry 75% of the Protocol Resilience weight.
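As a rough illustration, an nginx-flavored fragment with the limits described above. Directive availability varies by nginx version, and configuration alone is not a substitute for running a release patched against CVE-2023-44487; verify directive names against your version's documentation.

```nginx
server {
    # Cap concurrent streams per connection (Rapid Reset pressure).
    http2_max_concurrent_streams 128;

    # Cap total requests per connection, forcing periodic teardown.
    keepalive_requests 1000;

    # Bound header buffer sizes (CONTINUATION flood pressure).
    large_client_header_buffers 4 16k;
}
```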
Operational Resilience: Add auto-scaling with fast scaling speed
Scaling speed matters more than raw capacity. Serverless backends or Kubernetes HPA with pre-loaded images scale in seconds. VM auto-scaling groups take minutes, and by then the service is already degraded. The OPI specification awards up to 8 bonus points for fast scaling architectures.
Evasion Resistance: Deploy bot management with TLS fingerprinting
JA3/JA4 fingerprint detection carries 40% of the Evasion Resistance score. Without it, attackers can rotate TLS fingerprints to appear as different clients and bypass IP-based rate limiting entirely. Solutions like Cloudflare Bot Management, DataDome, or HUMAN provide this capability.
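One simple rotation signal such systems use: many distinct TLS fingerprints arriving from a single source in a short window. The threshold and names below are illustrative assumptions; commercial bot management combines far richer signals.

```python
# Illustrative rotation detector: flag a source once it has presented
# more than ROTATION_THRESHOLD distinct JA3 hashes.
from collections import defaultdict

ROTATION_THRESHOLD = 5  # assumed distinct-fingerprint budget per source

_seen: dict[str, set[str]] = defaultdict(set)

def record(source_ip: str, ja3_hash: str) -> bool:
    """Record a fingerprint; return True if rotation is suspected."""
    _seen[source_ip].add(ja3_hash)
    return len(_seen[source_ip]) > ROTATION_THRESHOLD

flags = [record("198.51.100.9", f"hash-{i}") for i in range(6)]
print(flags)  # [False, False, False, False, False, True]
```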
Why Continuous Measurement Matters
An OPI score is a snapshot. It reflects your protection posture at the time of assessment. But protection posture is not static. It drifts.
Configuration Drift Is Real
A CDN rule changed during an incident and never reverted. A new API endpoint deployed without rate limiting. An SSL certificate renewed with a different provider, exposing the origin IP in certificate transparency logs. Each of these silently reduces your OPI score without anyone noticing.
There are several common causes of OPI score drift:
- Infrastructure changes. New subdomains, API endpoints, or microservices deployed outside CDN coverage. Every unprotected endpoint is a potential flood target.
- WAF rule changes. Rules disabled to fix false positives may never be re-enabled. Over time, the WAF becomes permissive by accumulation.
- Certificate transparency. SSL certificate renewals can expose origin IPs that were previously hidden. Certificate logs are public and searchable.
- Vendor changes. Switching CDN providers, changing WAF vendors, or migrating to new infrastructure can introduce gaps during transition periods.
- New CVEs. Protocol vulnerabilities like HTTP/2 Rapid Reset did not exist as a testing category before October 2023. New attack techniques expand the scoring surface.
Regular reassessment catches these changes before attackers exploit them. DDactic recommends quarterly OPI assessments at minimum, with automated monitoring for high-impact changes like origin IP exposure or CDN coverage gaps.
Assessment Tiers: Passive, Estimated, Validated
Not all OPI scores are equally precise. The specification defines three assessment tiers, and the tier determines what the score actually tells you.
- Passive (OPI Passive): Uses only DNS records, HTTP headers, certificate transparency, and WHOIS. Detects CDN/WAF presence and origin exposure. Cannot measure actual resilience under attack. Low to medium accuracy. Useful for initial triage and free assessments.
- Estimated (OPI Estimated): Adds L7 reconnaissance: API endpoint discovery, GraphQL detection, rate limit probing, login form detection. Applies attack surface penalties. Medium to high accuracy. This is the standard for pre-engagement assessment.
- Validated (OPI Validated): Active attack simulation with measured results. Actual availability, latency, and recovery data replace estimates. High accuracy. This is the authoritative score.
A Validated score can be significantly different from an Estimated score in either direction. A CDN that appears well-configured may pass attack traffic due to rule misconfiguration (Validated score lower). Or a seemingly exposed origin may have on-premises DDoS appliances that passive detection cannot see (Validated score higher).
Read More About OPI
For the complete technical specification including all formulas, sub-component weights, and conformance requirements, visit the OPI Standard page. The specification is open source under Apache 2.0.
From Score to Action
The purpose of OPI is not to produce a number. It is to produce a prioritized list of improvements. The component breakdown tells you exactly where your resilience gaps are. The industry benchmarks tell you how you compare to peers. The scoring rubric tells you what grade you need to reach acceptable risk.
A financial institution scoring 72 (Grade C) knows they are below their industry range of 65-80 and can see from their component scores that Protocol Resilience is dragging them down. The fix is specific: patch HTTP/2 CVEs and configure stream limits.
A healthcare organization scoring 40 (Grade F) knows they need foundational changes: CDN deployment, origin protection, and basic rate limiting. Evasion resistance can wait. Defense Coverage comes first.
That is what a good metric does. It does not just measure. It directs action.
Get Your OPI Score
DDactic's free scan provides an OPI Estimated score with component-level breakdown, industry comparison, and prioritized recommendations. See where you stand before attackers test your defenses for you.