Most DDoS testing platforms stop at the attack. They generate traffic, measure how defenses respond, and produce a report. What happens next, the actual incident response, is left entirely to the customer. DDactic takes a different approach: the same engine that simulates attacks also drives automated incident response. One system, two directions.
This is not a theoretical architecture. DDactic has built attack simulations spanning 13 cloud platforms, 233 attack techniques, and 25 vendor-specific defense configurations. The data flowing through the simulation pipeline (detection latency, mitigation triggers, vendor API responses) is exactly the data an IR system needs to respond to a real attack. The decision to build IR automation was not a pivot. It was an inevitability.
The Core Insight: Simulation and Response Are Symmetric
Consider what a DDoS simulation engine does. It deploys attack traffic from a distributed fleet, monitors how the target's defenses react, measures time-to-detection and time-to-mitigation, and records vendor-specific behavior at each stage. Now consider what an IR system needs: detection of incoming attack patterns, correlation with known techniques, vendor-specific mitigation actions, and orchestration of the response timeline.
These are the same operations running in opposite directions. The simulation engine generates traffic and observes how defenses respond. The IR pipeline observes traffic and generates defensive actions. The vendor API integrations, the attack taxonomy, the detection signatures: all of it is shared infrastructure.
Why This Matters
Building IR from a simulation engine means every playbook has already been tested against real attack traffic before it is ever needed in production. The playbook does not just describe what should happen. It has been validated against what actually happens when a specific attack hits a specific vendor's defenses.
Architecture: How It Works
The system connects three layers: detection beacons that observe traffic patterns, a correlation engine that matches observations to the attack taxonomy, and vendor-specific playbooks that execute mitigation actions through provider APIs.
Attack Traffic
|
v
+--------------------------------------------+
| Detection Beacons |
| (HTTP metrics, connection rates, headers) |
+--------------------------------------------+
|
telemetry stream
|
v
+--------------------------------------------+
| Correlation Engine |
| 233 techniques x 10 vendors x 3 configs |
| Match observed pattern -> known technique |
+--------------------------------------------+
|
matched technique + confidence
|
v
+--------------------------------------------+
| Playbook Orchestrator |
| 25 playbooks across 10 vendors |
| Vendor API calls, rule updates, escalation |
+--------------------------------------------+
|
+-------------+-------------+
| | |
v v v
+-----------+ +-----------+ +-----------+
| Cloudflare| | Akamai | | Imperva |
| API | | API | | API |
+-----------+ +-----------+ +-----------+
The detection beacons are lightweight probes deployed alongside the target infrastructure. They monitor HTTP request rates, connection patterns, header anomalies, and response latency. When a beacon detects a pattern that deviates from the established baseline, it streams telemetry to the correlation engine.
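The baseline-deviation check a beacon performs can be sketched in a few lines of Python. This is a minimal illustration, not DDactic's actual beacon: the rolling window size, warm-up length, and 3x deviation factor are all assumed parameters.

```python
from collections import deque


class DetectionBeacon:
    """Sketch of a beacon that flags request-rate deviations from a
    rolling baseline. Window size and the 3x factor are illustrative
    assumptions, not DDactic's production parameters."""

    def __init__(self, window: int = 60, factor: float = 3.0):
        self.samples = deque(maxlen=window)  # recent requests/sec samples
        self.factor = factor                 # deviation multiplier

    def observe(self, requests_per_sec: float) -> bool:
        """Record a sample; return True if it deviates from the baseline
        (i.e. telemetry should be streamed to the correlation engine)."""
        anomalous = False
        if len(self.samples) >= 10:  # require a minimal warm-up baseline
            baseline = sum(self.samples) / len(self.samples)
            anomalous = requests_per_sec > baseline * self.factor
        if not anomalous:
            # only normal traffic updates the baseline, so a sustained
            # attack does not drag the baseline upward
            self.samples.append(requests_per_sec)
        return anomalous
```

A real beacon would track several metrics (connection rates, header anomalies, latency) in parallel; the single-metric version above only shows the shape of the check.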
The correlation engine maps observed patterns against DDactic's attack taxonomy of 233 techniques. Each technique has a signature profile: expected request rate, header characteristics, protocol-level behavior, and the vendor-specific detection latency measured during prior simulations. When a match exceeds the confidence threshold, the engine triggers the appropriate playbook.
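The matching step can be illustrated with a simplified scoring function. The signature fields, the 50/50 weighting between rate and header evidence, and the 0.7 threshold are all assumptions for the sketch; the real taxonomy profiles carry far more dimensions (protocol behavior, measured vendor detection latency, and so on).

```python
from dataclasses import dataclass


@dataclass
class Signature:
    """Illustrative slice of a technique's signature profile."""
    technique: str
    rate_range: tuple   # (min, max) expected requests/sec
    header_markers: set # header names characteristic of the technique


def correlate(observed_rate, observed_headers, signatures, threshold=0.7):
    """Score each signature against the observation and return the best
    match if it clears the confidence threshold. Weights (0.5 rate,
    0.5 headers) are illustrative, not DDactic's scoring model."""
    best, best_score = None, 0.0
    for sig in signatures:
        lo, hi = sig.rate_range
        rate_score = 0.5 if lo <= observed_rate <= hi else 0.0
        overlap = len(sig.header_markers & observed_headers)
        header_score = 0.5 * overlap / max(len(sig.header_markers), 1)
        score = rate_score + header_score
        if score > best_score:
            best, best_score = sig.technique, score
    return (best, best_score) if best_score >= threshold else (None, best_score)
```

The confidence value returned here is what gates the maturity-level behavior later: below the threshold nothing fires, above it a playbook is selected.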
Four Maturity Levels
Not every organization is ready for full automation on day one. DDactic's IR capability is structured as a maturity progression. Each level builds on the previous one, and organizations can operate at whichever level matches their risk tolerance and operational readiness.
Level 1: Manual Playbooks (Foundation)
Written procedures derived from simulation results. After a DDoS test engagement, DDactic delivers vendor-specific runbooks that document exactly what to do when each attack type is detected. These are static documents, but they are grounded in tested behavior rather than generic best practices. Each runbook includes the specific API calls, configuration changes, and escalation steps validated during the simulation.
Level 2: Beacon + Dashboard (Visibility)
Detection beacons are deployed and stream telemetry to a real-time dashboard. The security team can see attack patterns as they develop, correlated against the known taxonomy. The dashboard shows which technique is being used, which vendor defenses are engaged, current mitigation status, and recommended next actions. Human operators make all decisions, but they have full situational awareness rather than raw logs.
Level 3: Recommended Actions (Assisted)
The system identifies the attack technique, selects the appropriate playbook, and presents the specific mitigation steps to the operator as pre-built actions. One click to execute a rate limit change, one click to shift to "under attack" mode, one click to update WAF rules. The operator retains approval authority, but the research and preparation are automated. Response time drops from minutes of manual investigation to seconds of confirmation.
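The assisted flow reduces to two functions: one that maps a matched technique to pre-built actions, and one that executes an action only when the operator approves it. Technique names, action labels, and payloads below are hypothetical examples, not DDactic's playbook schema.

```python
def build_recommended_actions(technique: str) -> list:
    """Map a matched technique to pre-built, operator-approvable actions.
    The technique name and action payloads here are hypothetical."""
    playbook = {
        "http-flood": [
            {"label": "Enable rate limiting",
             "api_call": "set_rate_limit", "args": {"rps": 100}},
            {"label": "Switch to under-attack mode",
             "api_call": "set_security_level", "args": {"level": "under_attack"}},
        ],
    }
    return playbook.get(technique, [])


def execute_if_approved(action: dict, approved: bool) -> str:
    """Level 3: the operator retains approval authority; the system
    only prepares the call."""
    if not approved:
        return f"skipped: {action['label']}"
    return f"executed: {action['api_call']}({action['args']})"
```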
Level 4: Full Automation (Autonomous)
Playbooks execute automatically when confidence exceeds a configurable threshold. The system detects the attack, matches the technique, selects the vendor-specific playbook, and executes the mitigation actions through the vendor's API without human intervention. Operators are notified after the fact and can override or roll back any action. This level is appropriate for organizations with well-understood traffic patterns and mature change management processes.
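The difference between Level 3 and Level 4 is a single gate: the configurable confidence threshold. A minimal sketch of that gate, with `execute` and `notify` as stand-in callables for the playbook runner and the operator notification channel:

```python
def autonomous_respond(confidence: float, threshold: float, execute, notify) -> bool:
    """Level 4 sketch: run the playbook automatically only when the
    correlation confidence clears the configured threshold; otherwise
    hold the action for a human operator. `execute` and `notify` are
    hypothetical stand-ins for the real playbook runner and alerting."""
    if confidence >= threshold:
        result = execute()
        # operators are notified after the fact and can roll back
        notify(f"auto-executed (confidence={confidence:.2f}): {result}")
        return True
    notify(f"held for operator (confidence={confidence:.2f})")
    return False
```

Tuning the threshold is how an organization dials automation to its risk tolerance: a high threshold behaves like Level 3 most of the time, a lower one automates more.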
Vendor Coverage: 25 Playbooks Across 10 Vendors
Each vendor's API surface, configuration model, and mitigation capabilities are different. A playbook for Cloudflare looks nothing like a playbook for Akamai or Imperva. DDactic maintains vendor-specific playbooks that use each provider's native API to execute mitigation actions.
| Vendor | Playbook Count | Key Actions |
|---|---|---|
| Cloudflare | 4 | Under Attack mode, rate limiting rules, WAF custom rules, IP access rules |
| Akamai | 3 | Rate control policies, client reputation rules, site failover |
| Imperva | 3 | Security rule updates, bot mitigation tuning, DDoS policy escalation |
| AWS Shield/WAF | 3 | WAF rule groups, Shield Advanced protections, rate-based rules |
| Azure Front Door | 2 | WAF policy updates, custom rules, geo-filtering |
| Radware | 2 | DefensePro policy tuning, AppWall rule updates |
| F5 | 2 | iRule deployment, DoS profile adjustment |
| Fastly | 2 | VCL snippet deployment, rate limiting configuration |
| Sucuri | 2 | Firewall mode escalation, IP blacklisting |
| DataDome | 2 | Bot protection tuning, custom detection rules |
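Structurally, the library behaves like a registry keyed by vendor, with dispatch failing loudly when a vendor or action is not covered. The sketch below mirrors a few action names from the table; the actual per-vendor API clients are omitted, and this registry shape is an assumption about the design, not DDactic's implementation.

```python
# Action names mirror the vendor table above; the real playbooks wrap
# each provider's native API client, which is omitted in this sketch.
PLAYBOOKS = {
    "cloudflare": ["under_attack_mode", "rate_limiting_rules",
                   "waf_custom_rules", "ip_access_rules"],
    "akamai": ["rate_control_policies", "client_reputation_rules",
               "site_failover"],
    "imperva": ["security_rule_updates", "bot_mitigation_tuning",
                "ddos_policy_escalation"],
}


def dispatch(vendor: str, action: str) -> str:
    """Route an action to a vendor's playbook. Returns a description of
    the call that would be made; failing loudly on unknown vendors or
    actions keeps the orchestrator from silently skipping mitigation."""
    if vendor not in PLAYBOOKS:
        raise KeyError(f"no playbooks for vendor: {vendor}")
    if action not in PLAYBOOKS[vendor]:
        raise KeyError(f"{vendor} playbooks do not include: {action}")
    return f"{vendor}.{action}"
```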
The playbook library is not a static collection. Each time DDactic runs a simulation against a vendor's protection stack, the playbook for that vendor is updated with the latest observed behavior. Detection thresholds, API response times, configuration propagation delays: all of this is captured and fed back into the playbook definitions.
The Self-Demo: DDactic Attacks Itself
The most effective way to demonstrate IR automation is to run it live. DDactic maintains a self-demo environment where the attack simulation engine and the IR pipeline operate simultaneously against the same target. The simulation fleet launches an attack. The detection beacons pick it up. The correlation engine identifies the technique. The playbook executes the mitigation. The attack is contained. All of this happens in real time, visible on the dashboard.
Why Self-Demo Matters
Every vendor claims automated response. The difference is testability. DDactic can demonstrate the full loop, from attack generation through automated mitigation, on demand, because the same platform that creates the attack is the platform that responds to it. There is no separate "demo environment" with canned data. The attack is real. The response is real. The timeline is measured in seconds.
The self-demo serves a second purpose beyond sales demonstrations. It is a continuous integration test for the IR pipeline itself. Every time the attack taxonomy is updated with new techniques, the self-demo validates that the detection signatures, correlation logic, and playbook actions still work correctly. If a vendor changes their API or updates their mitigation behavior, the self-demo catches the regression before it affects a customer.
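As a continuous integration test, the self-demo is just a closed loop with a bounded budget: launch, detect, correlate, mitigate, and fail the run if containment never happens. A sketch, with the four stages passed in as stand-in callables (the names and the step budget are assumptions for illustration):

```python
def run_self_demo(launch_attack, detect, correlate, execute_playbook,
                  timeout_steps: int = 10) -> dict:
    """Closed-loop regression check: the simulated attack must be
    detected, matched to a technique, and mitigated within a bounded
    number of observation steps, or the run is reported as a failure."""
    launch_attack()
    for step in range(timeout_steps):
        observation = detect()              # beacon telemetry
        technique = correlate(observation)  # taxonomy match, or None
        if technique:
            execute_playbook(technique)     # vendor-specific mitigation
            return {"contained": True, "steps": step + 1}
    return {"contained": False, "steps": timeout_steps}
```

Run against stubbed stages in CI, a regression in any stage (a broken signature, a changed vendor API wrapper) surfaces as `contained: False` before it reaches a customer.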
From Testing to Hardening: Vendor API Integration
The IR pipeline naturally extends into proactive hardening. If the system can push emergency mitigation rules through a vendor's API during an attack, it can also push hardening configurations before an attack occurs. This is where the simulation data becomes most valuable.
After a DDoS simulation engagement, DDactic knows exactly which attack techniques succeeded, which were partially mitigated, and which were fully blocked. That knowledge translates directly into vendor-specific configuration recommendations. With the customer's API credentials, those recommendations can be applied automatically.
The progression looks like this: simulate an attack, identify gaps in vendor configuration, generate hardening templates, apply them through the vendor API, re-simulate to confirm the gaps are closed. The entire cycle can run without manual intervention at maturity level 4.
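The cycle above can be expressed as a short loop: re-simulate after each round of hardening and stop when no technique succeeds. `simulate`, `generate_template`, and `apply_template` are hypothetical stand-ins for the platform's simulation run, template generator, and vendor API writer; the round cap is an assumed safety bound.

```python
def hardening_cycle(simulate, generate_template, apply_template,
                    max_rounds: int = 3) -> dict:
    """Sketch of the simulate -> harden -> re-simulate loop. `simulate`
    returns the set of attack techniques that still succeed against the
    target; the loop ends when that set is empty or the round cap hits."""
    gaps = simulate()
    rounds = 0
    while gaps and rounds < max_rounds:
        for technique in gaps:
            # push a vendor-specific hardening config for each open gap
            apply_template(generate_template(technique))
        rounds += 1
        gaps = simulate()  # re-simulate to confirm the gaps are closed
    return {"closed": not gaps, "rounds": rounds}
```

The round cap matters: a configuration change that fails to close a gap should surface as `closed: False` for human review rather than loop forever.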
Customer Control
Automated vendor API actions require explicit customer authorization and credential delegation. DDactic never accesses a customer's vendor account without documented consent, and all API actions are logged in an immutable audit trail. Customers can revoke access or restrict the scope of automated actions at any time.
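One common way to make an audit trail tamper-evident is to hash-chain it: each entry includes the hash of its predecessor, so altering any record breaks verification of everything after it. The class below is a sketch of that idea, not DDactic's implementation.

```python
import hashlib
import json


class AuditTrail:
    """Append-only log where each entry commits to its predecessor's
    hash, making tampering with any record detectable. A sketch of the
    'immutable audit trail' idea, not DDactic's actual storage."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def record(self, vendor: str, action: str, timestamp: float) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"vendor": vendor, "action": action,
                "ts": timestamp, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-derive every hash; any edited field breaks the chain."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("vendor", "action", "ts", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

A production system would anchor the chain in write-once storage as well, since an attacker who can rewrite the whole log can rebuild the chain; the sketch only shows the tamper-evidence mechanism.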
What This Changes
Traditional DDoS testing produces a report. The report identifies gaps. The customer's team reads the report, prioritizes the findings, and manually implements fixes over weeks or months. By the time remediation is complete, the threat landscape has shifted, and the vendor's configuration may have drifted.
With simulation-driven IR, the loop closes automatically. Attack simulation generates findings. Findings generate playbooks. Playbooks execute through vendor APIs. Re-simulation validates the fix. The gap between "we found a problem" and "the problem is resolved" shrinks from weeks to hours, or at maturity level 4, to minutes.
This is not a replacement for a security operations team. It is a force multiplier. The SOC still owns the decisions at maturity levels 1 through 3. But the research, correlation, and execution are handled by a system that has already tested every scenario it recommends acting on.
See Automated IR in Action
DDactic can demonstrate the full attack-to-response loop against your protection stack. Start with a free infrastructure scan to understand your current exposure, then see how simulation-driven IR closes the gaps.
Get a Free Scan