Technical Deep Dive

Restricted to authorized personnel
Confidential Technical Overview

The Engineering Behind DDactic

A distributed DDoS resilience testing platform built across 19 cloud providers, processing industry-wide vulnerability assessments at scale. This is what makes it hard to replicate.

19 Cloud Platforms
75s Spot Recovery
19 Industry Configs
9 Intel Sources

19-Platform Cloud Orchestration

Most security platforms use one, maybe two cloud providers. DDactic deploys bot instances across 19 cloud platforms simultaneously through a single Go-based Deploy Service. Each platform has a dedicated adapter handling authentication, API differences, region selection, and snapshot management.

This isn't a wrapper around Terraform. It's a purpose-built deployment engine that handles platform-specific quirks: OVH's OpenStack metadata colliding with AWS at 169.254.169.254, Scaleway's cloud-init not running on snapshot instances, IBM requiring VPC infrastructure pre-provisioning, Tencent's mainland China regions being GFW-blocked.
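A dedicated adapter per platform implies a common interface the Deploy Service can dispatch against. The sketch below is a guess at that shape; the `PlatformAdapter` interface, its method names, and the `Instance` fields are assumptions, not the platform's actual API.

```go
package main

import "fmt"

// Instance describes a provisioned VM; fields are illustrative.
type Instance struct {
	ID       string
	Platform string
	Region   string
	IP       string
}

// PlatformAdapter is a hypothetical shape for the per-platform
// adapters described above: each one hides auth, API differences,
// region selection, and snapshot handling behind a common interface.
type PlatformAdapter interface {
	Name() string
	Deploy(region, instanceType string, spot bool) (Instance, error)
	Destroy(id string) error
}

// fakeAdapter is a stand-in used to show how the Deploy Service
// could dispatch to a registry of adapters keyed by platform name.
type fakeAdapter struct{ name string }

func (a fakeAdapter) Name() string { return a.name }
func (a fakeAdapter) Deploy(region, instanceType string, spot bool) (Instance, error) {
	return Instance{ID: a.name + "-1", Platform: a.name, Region: region}, nil
}
func (a fakeAdapter) Destroy(id string) error { return nil }

func main() {
	adapters := map[string]PlatformAdapter{
		"aws":      fakeAdapter{"aws"},
		"scaleway": fakeAdapter{"scaleway"},
	}
	inst, err := adapters["scaleway"].Deploy("fr-par-1", "DEV1-S", false)
	if err != nil {
		panic(err)
	}
	fmt.Println(inst.Platform, inst.Region)
}
```

Keeping platform quirks (metadata collisions, cloud-init gaps, VPC pre-provisioning) inside each adapter is what lets the rest of the pipeline stay platform-agnostic.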

5
Spot-Capable Platforms
AWS, GCP, Azure, Alibaba, Tencent. Each with different spot APIs, pricing models, and reclamation behaviors.
8
On-Demand Platforms
DigitalOcean, Vultr, Linode, Hetzner, OVH, Scaleway, IBM, Oracle. Instant provisioning from pre-built snapshots.
~15s
Average Deploy Time
From API call to bot binary running and registered with the Fleet Controller. Snapshot-based, no cloud-init dependency.
1
Boot Script
boot.sh auto-detects all 19 platforms via metadata endpoints, configures Fleet Controller, downloads binary. Zero manual intervention.
boot.sh - 19-Platform Auto-Detection (simplified)
# Each platform has a unique metadata endpoint
curl -s 169.254.42.42        # Scaleway
curl -s 100.100.100.200      # Alibaba
curl -s 169.254.0.23         # Tencent
curl -s 169.254.169.254      # AWS, GCP, Azure, OVH (differentiated by headers)
cat /sys/class/dmi/id/sys_vendor  # Linode ("Akamai"), Vultr
hostname                     # IBM (pattern match: *-ibm-*)

# OVH detected BEFORE AWS (both respond at 169.254.169.254)
# Solution: check /openstack/latest/vendor_data.json first

Self-Healing Spot Instance Fleet

Spot instances save 60-90% on compute costs but can be reclaimed at any time. The Spot Monitor polls all 5 spot-capable platforms every 60 seconds through the Deploy Service's /status/instances endpoint, detects terminations, and automatically provisions replacements.

The replacement instance is deployed on the same platform with the same configuration, including the spot flag, so it's also a spot instance. The bot downloads its binary, auto-detects its platform, registers with the Fleet Controller, and rejoins the fleet. Total recovery: under 75 seconds.

T+0s
Cloud provider reclaims spot instance. GCP gives 30s warning (ignored; we detect post-termination). AWS gives 2min warning. Azure gives 30s.
T+60s (worst case)
Spot Monitor detects missing instance. Polling cycle catches the terminated status. Instance classified as "reclaimed" vs "manual destroy" based on is_spot metadata.
T+61s
POST /deploy sent to Deploy Service. Same platform, same region, same instance type, spot: true. The Deploy Service picks the platform adapter and provisions.
T+75s
New instance running and registered with the Fleet Controller. boot.sh executed, binary downloaded from Binary Server (Nginx :9999), platform auto-detected, Fleet Controller registration complete. Fleet at full strength.
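The T+61s redeploy call can be sketched as a small request builder. The `/deploy` endpoint and the `spot: true` body come from the text above; the `DeployRequest` field names are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DeployRequest mirrors the redeploy call described above: same
// platform, region, and instance type as the reclaimed instance,
// with the spot flag preserved. Field names are assumptions.
type DeployRequest struct {
	Platform     string `json:"platform"`
	Region       string `json:"region"`
	InstanceType string `json:"instance_type"`
	Spot         bool   `json:"spot"`
}

// redeployBody builds the POST /deploy payload for a reclaimed instance.
func redeployBody(platform, region, instanceType string) ([]byte, error) {
	return json.Marshal(DeployRequest{
		Platform:     platform,
		Region:       region,
		InstanceType: instanceType,
		Spot:         true, // replacement stays a spot instance
	})
}

func main() {
	body, _ := redeployBody("gcp", "us-central1", "e2-small")
	fmt.Println(string(body))
	// prints {"platform":"gcp","region":"us-central1","instance_type":"e2-small","spot":true}
}
```

Carrying the original instance's parameters through unchanged is what keeps the replacement cost-equivalent to the reclaimed one.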
Spot Monitor - Detection Logic
// Every 60 seconds: GET /status/instances from Deploy Service
// Returns all instances across 19 platforms with is_spot flag
for _, dbInstance := range trackedInstances {
    liveInst, alive := liveMap[dbInstance.ID]
    if !alive {
        // Instance disappeared: classify termination
        if dbInstance.IsSpot {
            reason = "reclaimed" // Spot instance → auto-redeploy
        } else {
            reason = "unknown" // On-demand → investigate
        }
        redeployInstanceFallback(dbInstance)
    } else if liveInst.IsSpot && liveInst.Status == "TERMINATED" {
        // GCP: terminated but not yet deleted
        redeployInstanceFallback(dbInstance)
    }
}

Residential Proxy Escalation

Certificate Transparency logs (crt.sh) are a critical intelligence source but aggressively rate-limit datacenter IPs. DDactic routes these queries through a residential IP connection: a proxy running on a real ISP line, tunneled through Cloudflare Tunnel for encryption and reliability.

The backend automatically escalates to the residential proxy when datacenter requests fail. Results are cached in S3 with 24-hour TTL, so each domain is only queried once per day regardless of how many scans reference it.

Request Chain
AWS Batch Scanner
↓ crt.sh blocked from datacenter
Dedibox Backend API
↓ HTTPS to proxy.ddactic.net
Cloudflare Tunnel
↓ encrypted tunnel
Residential ISP PC
↓ real residential IP
crt.sh responds
Why this matters:

• crt.sh rate limits to ~1 req/sec from any IP
• Datacenter IPs face additional throttling
• Residential IPs get preferential treatment
• CF Tunnel provides encryption + reliability
• 24h S3 cache prevents redundant queries
• Scanner polls S3 while backend pre-fetches async
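The escalation-plus-cache flow above can be sketched as a single lookup function. This is a minimal in-process sketch: the map stands in for the S3 cache, and the two injected fetchers stand in for the direct crt.sh path and the residential-proxy path.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// cacheEntry stands in for an S3 object; 24h TTL as described above.
type cacheEntry struct {
	data    string
	fetched time.Time
}

const ttl = 24 * time.Hour

var errBlocked = errors.New("429 from crt.sh")

// lookupCT fetches CT-log results for a domain, trying the direct
// (datacenter) path first and escalating to the residential proxy
// only on failure. Both fetchers are injected so the flow is testable;
// the real system would call crt.sh and the tunneled proxy here.
func lookupCT(domain string, cache map[string]cacheEntry,
	direct, residential func(string) (string, error)) (string, error) {

	if e, ok := cache[domain]; ok && time.Since(e.fetched) < ttl {
		return e.data, nil // cache hit: at most one query per domain per day
	}
	data, err := direct(domain)
	if err != nil {
		// Datacenter IP throttled or blocked: escalate to residential path.
		data, err = residential(domain)
		if err != nil {
			return "", err
		}
	}
	cache[domain] = cacheEntry{data: data, fetched: time.Now()}
	return data, nil
}

func main() {
	cache := map[string]cacheEntry{}
	blocked := func(string) (string, error) { return "", errBlocked }
	viaProxy := func(d string) (string, error) { return "certs-for-" + d, nil }

	out, _ := lookupCT("example.com", cache, blocked, viaProxy)
	fmt.Println(out) // served via residential proxy, then cached for 24h
}
```

Caching after either path means a domain referenced by many scans still costs at most one crt.sh query per day.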

3-Stage Reconnaissance Pipeline

The scanner runs as a Docker container on AWS Batch. Three sequential stages build a complete picture of the target's attack surface. Stage 3 runs 5 specialized L7 tools in parallel, each using evasion techniques like uTLS fingerprinting and randomized cipher suites to avoid detection.

Stage 1
Discovery
SLD enumeration, subdomain discovery via 9 API sources (crt.sh, VirusTotal, Shodan, SecurityTrails, Censys, CIRCL, Google CSE, and more). DNS resolution, ASN mapping, cloud provider identification.
Stage 2
Port Scanning
Nmap scan with GlobalPing multi-region availability testing from 15+ locations. CDN detection and filtering (Cloudflare, CloudFront, Akamai, Fastly). Last-hop traceroute analysis.
Stage 3
L7 Reconnaissance
5 parallel tools: HTTP fingerprinting (uTLS, random ciphers), DNS audit (DNSSEC, zone transfer), SMTP security (SPF/DKIM/DMARC), SIP probing, Direct-to-Router mapping.
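Stage 3's parallel fan-out can be sketched with goroutines and a WaitGroup. The tool names below come from the stage description; the runners are stubs, and the real tools' evasion behavior (uTLS fingerprinting, randomized ciphers) is not modeled here.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// runStage3 fans the five L7 tools out in parallel, as described
// above. Tool names come from the text; runners are stubs here.
func runStage3(target string, tools map[string]func(string) string) []string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results []string
	)
	for name, run := range tools {
		wg.Add(1)
		go func(name string, run func(string) string) {
			defer wg.Done()
			out := run(target)
			mu.Lock()
			results = append(results, name+": "+out)
			mu.Unlock()
		}(name, run)
	}
	wg.Wait()
	sort.Strings(results) // deterministic ordering for reporting
	return results
}

func main() {
	stub := func(t string) string { return "ok" }
	tools := map[string]func(string) string{
		"http-fingerprint": stub, "dns-audit": stub, "smtp-security": stub,
		"sip-probe": stub, "router-map": stub,
	}
	for _, r := range runStage3("example.com", tools) {
		fmt.Println(r)
	}
}
```

Running the tools concurrently keeps Stage 3's wall-clock time bounded by the slowest tool rather than the sum of all five.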

Industry-Wide Batch Intelligence

Beyond individual customer scans, DDactic maintains vulnerability intelligence across 19 industries. Pre-configured company lists enable batch scanning of hundreds of companies simultaneously via AWS Batch parallelization. This produces competitive benchmarks and market-wide protection assessments.

Financial
Healthcare
Telecom
E-commerce
Gaming
Cloud/SaaS
Airlines
Energy
Automotive
Insurance
Crypto
Media
Education
Logistics
Social
Entertainment
Cybersecurity
Technology
Retail
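The batch fan-out across a pre-configured company list can be illustrated with a bounded worker pool. In production the parallelization is AWS Batch, not an in-process pool; this sketch only shows the pattern, and all names in it are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// scanIndustry fans a pre-configured company list out to workers with
// bounded concurrency. In production the fan-out is AWS Batch jobs;
// this in-process pool just illustrates the pattern.
func scanIndustry(companies []string, workers int, scan func(string) string) map[string]string {
	jobs := make(chan string)
	results := make(map[string]string)
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for c := range jobs {
				r := scan(c)
				mu.Lock()
				results[c] = r
				mu.Unlock()
			}
		}()
	}
	for _, c := range companies {
		jobs <- c
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	companies := []string{"acme-bank", "globex-health", "initech-telecom"}
	out := scanIndustry(companies, 2, func(c string) string { return "scanned:" + c })
	fmt.Println(len(out)) // prints 3
}
```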

Load-Balanced Fleet Controllers with HTTP/2

Two Fleet Controllers behind an AWS Application Load Balancer handle fleet coordination. Bots communicate over HTTP/2 through Cloudflare-proxied DNS, so the Fleet Controller infrastructure benefits from Cloudflare's DDoS protection. Each bot self-identifies its platform, IP, and capabilities on registration.

2
Fleet Controllers
ALB host-based routing distributes bots across servers. Each has an embedded web dashboard for real-time fleet visibility.
HTTP/2
Bot Protocol
Multiplexed connections, header compression. Bots poll for commands, report results. All traffic through Cloudflare proxy.
:9999
Binary Server
Nginx serves boot.sh and bot_latest. Every boot pulls fresh binary. Fleet-wide updates without SSH access to any instance.

Physical Device Labs

Web scanners miss mobile and desktop application endpoints. DDactic operates physical device labs that intercept real application traffic via MITM proxies, discovering API endpoints, WebSocket connections, gRPC channels, and telemetry backends that are invisible to traditional reconnaissance.

iOS
Jailbroken iPhone 7+
iOS 14.3, SSL pinning bypass, full traffic interception. Captures API calls, push notification endpoints, certificate pinning configs.
Android
Lenovo Yoga Tablet
Android 10, root access. gRPC, QUIC, and MQTT traffic capture. Play Store app scanning.
44
Windows Desktop Apps
Slack, Teams, Zoom, 1Password, Figma, and 39 more. Hourly automated traffic capture and analysis.

Vendor-Specific Hardening Engine

The platform generates copy-paste CLI commands tailored to the customer's CDN/WAF vendor. 16 hardening templates across 6 vendors, with optional credential injection from the customer's stored integrations. Before-test and after-test recommendation sets ensure the right fixes at the right time.

| Template      | Cloudflare | AWS         | GCP         | Azure      | Akamai       |
| Rate Limiting | API + CLI  | WAFv2       | Cloud Armor | Front Door | Property Mgr |
| WAF Rules     | Managed    | OWASP Set   | Pre-config  | WAF Policy | App & API    |
| TLS Hardening | Min TLS    | ACM Policy  | SSL Policy  | TLS Config | Edge Cert    |
| DDoS Config   | Sec Level  | Shield      | Armor       | DDoS Plan  | Kona Site    |
| Bot Mgmt      | Bot Fight  | Bot Control | reCAPTCHA   | Bot Mgr    | Bot Mgr      |
| Geo Blocking  | Firewall   | WAF Geo     | Armor Geo   | Geo Filter | Edge Logic   |

Why This Is Hard to Replicate

Each component is individually achievable. The moat is the integration: making 19 cloud APIs, 9 intelligence sources, 5 L7 recon tools, physical device labs, self-healing spot fleets, and vendor-specific hardening work as a single automated pipeline.

13
Cloud API Integrations
Each with unique auth, API shapes, quota limits, snapshot formats, and failure modes. 18+ months of platform-specific debugging.
19
Industry Datasets
Curated company lists with subsidiaries, primary domains, and industry categorization. Competitive intelligence at scale.
0
Manual Intervention
From scan submission to hardening report, fully automated. Spot recovery, binary updates, Fleet Controller registration - all zero-touch.
OPI
Proprietary Scoring
Open Protection Index: 6-category scoring framework. Industry benchmarks. Before/after comparison. Quantified security posture.
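The OPI's actual categories and weighting are proprietary and not spelled out here; the sketch below only shows the shape of a 6-category composite score, assuming equal weights and 0-100 per-category scores. Category names are placeholders.

```go
package main

import "fmt"

// opiScore composites per-category scores into one number, assuming
// equal weights and a 0-100 scale. The real OPI weighting is not
// public; this is purely illustrative.
func opiScore(categories map[string]float64) float64 {
	if len(categories) == 0 {
		return 0
	}
	var sum float64
	for _, s := range categories {
		sum += s
	}
	return sum / float64(len(categories))
}

func main() {
	// Placeholder category names; before-test scores for one target.
	before := map[string]float64{
		"network": 40, "l7": 55, "dns": 70,
		"email": 60, "tls": 80, "app": 45,
	}
	fmt.Printf("%.1f\n", opiScore(before)) // prints 58.3
}
```

A single composite number is what makes before/after comparison and industry benchmarking possible: two full scan reports are hard to rank, two scores are not.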