Short table of contents:
— DDoS without myths
— Our built-in network protection
— Why L7 attacks are different
— Cloudflare settings that actually matter
— Nginx and app hardening for survivability
— A calm incident playbook
People often describe DDoS as “too much traffic and the site goes down.” That’s sometimes true, but it’s not the full picture anymore. Many modern attacks are quieter on bandwidth and much louder on compute. They don’t need to saturate your line; they just need to hit the endpoints that cost you the most: authentication, search, filters, API calls, pages that trigger heavy database work, and anything that burns CPU per request.
That’s why a reliable defense is layered. First, you want to stop the noisy network junk before it even reaches your server. Then you want a web-aware perimeter (Cloudflare) that can challenge and rate-limit suspicious behavior. Finally, you want your own Nginx and application to be resilient enough to keep serving real users even if a portion of attack traffic slips through.
What we provide by default: network-level Anti-DDoS
Our baseline Anti-DDoS for customers operates at the datacenter network level (Hetzner-backed infrastructure). In plain terms, it means a significant portion of volumetric and transport-layer attack traffic is filtered and scrubbed upstream—before it becomes your VPS or dedicated server’s problem.
This matters because for true volumetric attacks, trying to fight on the server is often too late. If your uplink is saturated, Nginx can’t “handle” the traffic—because the traffic never becomes HTTP; it’s a flood at the packet level. Upstream scrubbing is what keeps the pipe usable so legitimate traffic can still flow.
In practice, network mitigation watches for abnormal patterns: sudden spikes in packets per second, unusual connection handshakes, amplification profiles, and protocol anomalies. Once an attack pattern is detected, the mitigation system applies targeted filters that drop junk and preserve valid traffic. That layer typically performs best against classic L3/L4 vectors: UDP floods, amplification, SYN storms, malformed packets, and other “brute force” patterns.
But there’s a boundary, and it’s important to be honest about it. If an attacker sends valid HTTPS requests that look like real browsing behavior, the network layer alone cannot always determine intent. That’s where L7 mitigation begins.
Why L7 attacks feel “mysterious”
L7 attacks are painful because the dashboards don’t always scream “attack.” Bandwidth may look reasonable. Yet the site becomes sluggish or intermittently unavailable. That’s because the real bottleneck is inside: application workers, CPU, database connections, cache misses, upstream calls, and expensive endpoints being hammered.
A single “login” request can be far more expensive than a static image fetch. A “search” request can fan out into dozens of database queries. An “API list endpoint” can trigger pagination and sorting logic on every call. Multiply that by thousands of requests per minute, and you have an outage without a dramatic bandwidth spike.
The antidote is a combination of smart edge controls (Cloudflare), server-level rate controls (Nginx), caching, and sometimes temporary feature throttling.
Cloudflare: the difference between “enabled” and “actually protective”
Many sites “use Cloudflare” but still get taken down because the origin is reachable directly. If attackers can discover your server’s IP and hit it, they bypass Cloudflare entirely. So the first rule is simple: hide and lock your origin.
Concretely:
- Proxy your DNS records through Cloudflare so visitors see Cloudflare’s edge, not your origin IP.
- On the origin firewall, allow HTTP/HTTPS only from Cloudflare’s published IP ranges (see the sketch after this list).
- Restrict administrative access (SSH, panels) to trusted IPs or a VPN.
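For the firewall point, here is a minimal ufw sketch. Cloudflare publishes its current ranges at https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6; the loop assumes ufw’s default incoming policy is deny, and YOUR_ADMIN_IP is a placeholder:

```sh
# Allow web traffic only from Cloudflare's published IPv4 ranges.
for net in $(curl -s https://www.cloudflare.com/ips-v4); do
    ufw allow proto tcp from "$net" to any port 80,443
done
# Repeat the loop with ips-v6 for IPv6.

# Everything else stays closed; keep SSH reachable only from a trusted IP.
ufw default deny incoming
ufw allow from YOUR_ADMIN_IP to any port 22 proto tcp
```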
Once the origin is properly locked, Cloudflare becomes a powerful L7 layer. The best results come from protecting “high-cost” endpoints rather than punishing the entire site. You typically want WAF rules and rate limiting on:
- login and authentication flows,
- API routes,
- search and filtering pages,
- form submissions,
- any endpoint that triggers heavy computation or database work.
Rate limiting is especially effective because humans don’t behave like bots. A real user won’t submit 40 logins per minute. A bot will. Turning that difference into a rule is one of the highest ROI improvements you can make.
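Turning the login example into an actual Cloudflare rate-limiting rule might look like the sketch below. The path, threshold, and action are assumptions; tune them against your real traffic before enforcing:

```
Match:   (http.request.uri.path eq "/login") and (http.request.method eq "POST")
Rate:    10 requests per 1 minute, counted per client IP
Action:  Managed Challenge, applied for 10 minutes
```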
For active incidents, stricter challenges (such as Cloudflare’s Under Attack mode) can be used as an emergency brake. It’s not something to keep on permanently for every visitor, but it buys you time while you refine the rules based on real traffic patterns.
Nginx and application hardening: staying responsive under pressure
Even with a well-configured edge, some traffic reaches the origin. Your job on the server side is to prevent any single client—or a swarm of clients—from consuming all connections and all workers.
The key is targeted throttling. You don’t want to rate-limit your entire homepage the same way you rate-limit /login or /api. Instead, apply request-rate limits and connection limits to the endpoints that are most commonly abused and most expensive to serve.
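A minimal Nginx sketch of that idea: the zone names, paths, rates, and the “app” upstream are assumptions, and the *_zone directives belong in the http {} context.

```nginx
limit_req_zone  $binary_remote_addr zone=login:10m rate=5r/m;
limit_req_zone  $binary_remote_addr zone=api:10m   rate=10r/s;
limit_conn_zone $binary_remote_addr zone=perip:10m;

server {
    location = /login {
        limit_req  zone=login burst=5 nodelay;  # absorb small bursts, reject floods
        limit_conn perip 10;                    # cap parallel connections per IP
        proxy_pass http://app;
    }
    location /api/ {
        limit_req  zone=api burst=20 nodelay;
        limit_conn perip 20;
        proxy_pass http://app;
    }
}
```

With nodelay, short bursts pass immediately and anything beyond the burst is rejected (503 by default) rather than queued, which keeps workers free for legitimate traffic.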
Timeouts also matter more than most people think. Under attack, slow connections can hold resources and starve legitimate users. Reasonable client timeouts, body size limits, and method restrictions reduce the chance of your workers getting “stuck.”
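A hedged example of that discipline; every value here is an assumption to adjust for your workload:

```nginx
client_header_timeout 10s;  # drop clients that trickle request headers
client_body_timeout   10s;  # ...or request bodies
send_timeout          10s;  # give up on stalled responses
keepalive_timeout     15s;  # recycle idle connections faster under load
client_max_body_size  2m;   # reject oversized uploads before the app sees them
```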
Caching can be a lifesaver. If you can cache even for 10–30 seconds on content pages or predictable API responses, you often reduce backend load dramatically. During HTTP floods, this can be the difference between “degraded but alive” and “fully down.”
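A micro-caching sketch for a proxied app; the cache path, TTL, and upstream name are assumptions, and proxy_cache_path belongs in the http {} context:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:10m
                 max_size=100m inactive=60s;

server {
    location / {
        proxy_cache           micro;
        proxy_cache_valid     200 30s;                  # a brief TTL is enough
        proxy_cache_lock      on;                       # collapse concurrent misses
        proxy_cache_use_stale updating error timeout;   # serve stale during spikes
        proxy_pass            http://app;
    }
}
```

proxy_cache_lock is the quiet hero here: during a flood, only one request per cache key reaches the backend while everyone else waits for (or is served) the cached copy.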
At the application layer, it helps to have a “storm mode”: temporarily disable the most expensive features, reduce search complexity, queue heavy operations, add progressive challenges to authentication, and prioritize essential pages. It’s not perfect, but it’s far better than an outage.
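True storm mode lives in the application, but if you need a crude switch immediately, a flag file checked by Nginx can shed the heaviest routes without a deploy. The flag path and route are assumptions:

```nginx
location /search {
    # touch /etc/nginx/storm.flag to enable storm mode; rm it to disable
    if (-f /etc/nginx/storm.flag) {
        return 503;   # shed the most expensive work while under attack
    }
    proxy_pass http://app;
}
```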
A practical incident playbook
When an attack is happening, the biggest risk is frantic random changes. A calmer sequence works better:
- Start at the edge: tighten Cloudflare rules and challenges on the targeted endpoints.
- Confirm the origin is not reachable directly.
- Apply Nginx limits to the abused routes and enable caching where possible.
- Then use logs to refine: which URLs are hit, where traffic comes from, what patterns repeat, and which rules reduce load without blocking legitimate users. The one-liners below help with the first two questions.
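For the log step, two one-liners go a long way, assuming Nginx’s default combined log format (client IP in field 1, request path in field 7):

```sh
# Which URIs dominate the traffic?
awk '{print $7}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Which clients are noisiest?
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20
```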
How different attacks feel in real life
It helps to recognize patterns, because “the site is down” can mean very different bottlenecks.
SYN floods often look like intermittent availability. CPU may not be fully pegged, yet users experience connection issues and timeouts because the server struggles to handle a massive number of connection attempts. The pain is “at the door”: legitimate users can’t establish clean sessions reliably.
UDP floods / amplification are usually more obvious on network graphs. Large spikes in inbound traffic can degrade everything, sometimes even SSH. If upstream mitigation is effective, the impact can be brief—one of the reasons network-layer scrubbing is a foundational layer.
L7 HTTP floods are the sneakiest. Bandwidth doesn’t have to be huge, but the application becomes unresponsive because workers, database connections, and CPU are exhausted by “valid-looking” requests. These attacks almost always focus on specific expensive paths: login, search, filters, API endpoints, or checkout flows. When access logs show one or two URIs dominating traffic, you’re usually dealing with L7.
Cloudflare: three practical configuration stories
WordPress.
Most bot traffic gravitates toward wp-login.php and xmlrpc.php. The best approach is not to challenge every visitor sitewide, but to protect the high-risk doors. If you don’t need XML-RPC, block it. Rate-limit login attempts and use stricter challenges only during incidents.
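A hedged Nginx sketch of that approach; the rate and the PHP-FPM socket path are assumptions, and the limit_req_zone line lives in the http {} context:

```nginx
limit_req_zone $binary_remote_addr zone=wplogin:10m rate=5r/m;

server {
    location = /xmlrpc.php {
        return 403;   # block outright if XML-RPC is unused
    }
    location = /wp-login.php {
        limit_req zone=wplogin burst=3 nodelay;
        # ...then hand off to PHP-FPM as your existing PHP location does:
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass   unix:/run/php/php-fpm.sock;
    }
}
```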
Also, modern WP sites often have “heavy” AJAX endpoints created by themes and plugins. Under attack, those endpoints can become the true bottleneck. A useful strategy is to identify which URIs are expensive and apply WAF/rate controls specifically there.
Laravel (or any session-based web app).
Attackers target /login, /register, password recovery, and expensive authenticated pages and APIs. Cloudflare should be your web-aware perimeter—stopping abusive patterns before they reach your app.
In practice: rate-limit authentication flows, add separate rules for API routes, and lock down the origin so direct IP access can’t bypass Cloudflare. Then improve resilience inside the app: progressive protection (extra checks after suspicious behavior), caching expensive responses, and avoiding heavy work until a request passes basic screening.
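Inside Laravel itself, the built-in throttle middleware is the natural backstop. A sketch with assumed thresholds and placeholder controller names:

```php
<?php
// routes/web.php -- thresholds and controller names are assumptions.

use Illuminate\Support\Facades\Route;
use App\Http\Controllers\AuthController;          // placeholder
use App\Http\Controllers\PasswordResetController; // placeholder

// At most 5 requests per minute per client on the authentication flows.
Route::middleware('throttle:5,1')->group(function () {
    Route::post('/login', [AuthController::class, 'login']);
    Route::post('/forgot-password', [PasswordResetController::class, 'sendResetLink']);
});
```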
Pure APIs (mobile/SPA/partners).
APIs fail when a single client can consume unlimited throughput. Use Cloudflare as a gateway: rate-limit by route and method, apply stricter thresholds for write operations, and reduce abusive bursts.
But don’t rely on edge controls alone. Your API must enforce its own limits: per-token quotas, payload size caps, request validation, replay protections, and separate throttles for high-cost endpoints.
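One way to approximate per-token limits at the Nginx tier is to key limit_req on the Authorization header, falling back to the client IP for anonymous calls. Zone names, rates, and the upstream are assumptions; the map and limit_req_zone directives go in the http {} context:

```nginx
map $http_authorization $rl_key {
    ""      $binary_remote_addr;   # no token: throttle per IP
    default $http_authorization;   # token present: throttle per token
}
limit_req_zone $rl_key zone=apitoken:20m rate=20r/s;

server {
    location /api/ {
        limit_req  zone=apitoken burst=40 nodelay;
        proxy_pass http://app;
    }
}
```

This only complements, never replaces, real per-token quotas inside the API itself, where you can account for plan tiers, write versus read costs, and replay protection.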
Nginx and L7 survivability without destroying UX
Nginx is most effective when it’s precise. Instead of throttling the entire site, constrain only the endpoints that are commonly abused and expensive: login, search, filters, and APIs. Keep content pages fast and cacheable where possible.
Timeout discipline matters as well. Under pressure, slow connections can pin resources and starve legitimate users. Tightening header/body timeouts and managing keepalive behavior can dramatically improve survivability during a flood.