A couple of my clients’ sites were being probed by botnets. It was the usual:
- Probes for common vulnerabilities
- Password guessing attacks
It didn’t warrant putting them behind a firewall, but I also didn’t like the idea of leaving them open to relentless automated 24×7 scanning. The sites were extranet-type sites: not really public, but not private either. I scratched my head a bit trying to figure out how I could improve security without inconveniencing users. What I came up with is this method:
- Any access to the website is blocked by a “captive portal” like page
- When the user gives the portal what it asks for (in this case the correct answer to a CAPTCHA), the firewall opens an exception for their IP and the user is redirected onward to the site they were looking for.
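The flow above can be sketched as a small state machine. This is a minimal illustration, not the post’s actual implementation (which is PHP plus iptables); the function and variable names, and the fixed expected answer, are all hypothetical:

```python
# Hypothetical in-memory allowlist standing in for the firewall state.
ALLOWED_IPS = set()

def handle_request(client_ip, captcha_answer=None, expected_answer="7"):
    """Return what the portal would do for a request from client_ip."""
    if client_ip in ALLOWED_IPS:
        return "pass-through"            # firewall exception already open
    if captcha_answer == expected_answer:
        ALLOWED_IPS.add(client_ip)       # open an exception for this IP...
        return "redirect-to-site"        # ...and send the user onward
    return "show-captcha"                # everyone else sees the portal

# First visit: the portal page is shown.
print(handle_request("203.0.113.5"))                      # show-captcha
# Correct CAPTCHA answer: IP is allowed and the user is redirected.
print(handle_request("203.0.113.5", captcha_answer="7"))  # redirect-to-site
# Subsequent requests pass straight through.
print(handle_request("203.0.113.5"))                      # pass-through
```

The key point is that the decision is keyed on the client IP, not on a cookie or session, which is what lets the firewall (rather than the application) enforce it.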
Is this secure against a determined hacker? Definitely not. Solving the CAPTCHA takes only 3 seconds and they’re through the first layer of defense. That’s why it’s critical to treat this as an extra (thin) layer of defense and not as the only one.
Is this secure against 99% of all attacks out there (worm/botnet attacks on autopilot)? Definitely.
How is it implemented?
- A PHP page shows the portal page
- If the CAPTCHA is correct, the IP is collected from $_SERVER["REMOTE_ADDR"], sanitized, and saved in a file that keeps track of allowed IPs
- The PHP page then triggers a shell script that adds the following iptables entry:

```
iptables -t nat -A PREROUTING -p tcp --dport 80 -s $AllowedIP -d $YourIP -j DNAT --to-destination $TargetIP:80
```
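The sanitization step matters: REMOTE_ADDR ends up in a shell command, so it must be validated as a literal IP before it goes anywhere near the shell. The post does this in PHP; here is an equivalent sketch in Python (the function names are mine, not from the post), which also builds the command as an argument list so no shell interpolation can occur:

```python
import ipaddress

def sanitize_ip(raw):
    """Validate the client address before it reaches the shell.
    Raises ValueError for anything that is not a literal IPv4 address."""
    return str(ipaddress.IPv4Address(raw))

def build_iptables_command(allowed_ip, your_ip, target_ip):
    """Build the DNAT rule from the post as an argv list, so no shell
    metacharacters in the input could ever be interpreted."""
    return ["iptables", "-t", "nat", "-A", "PREROUTING",
            "-p", "tcp", "--dport", "80",
            "-s", allowed_ip, "-d", your_ip,
            "-j", "DNAT", "--to-destination", target_ip + ":80"]

# An injection attempt fails validation instead of reaching iptables.
ip = sanitize_ip("203.0.113.5")            # fine
print(build_iptables_command(ip, "192.0.2.1", "10.0.0.2"))
```

In a real deployment this list would be passed to something like `subprocess.run(cmd)` under a privileged helper, since the web server user normally cannot modify iptables directly.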
How can this be improved?
Nothing says access has to be granted by a simple CAPTCHA; there could be full authentication that takes place before the firewall is opened. Furthermore, the exception could close shortly after access is given, disallowing any fresh TCP connections. Think of it as authenticated port knocking.
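The expiring-exception idea could be implemented by timestamping each allowlist entry and removing stale ones. A minimal sketch, assuming a 60-second TTL and an in-memory table (both hypothetical; a real version would also delete the matching iptables rule with `iptables -D`, and established connections would survive via conntrack):

```python
import time

TTL = 60          # seconds an exception stays open (assumed value)
_allowed = {}     # ip -> time the exception was granted

def grant(ip, now=None):
    """Record the moment access was granted to this IP."""
    _allowed[ip] = time.time() if now is None else now

def is_allowed(ip, now=None):
    """True while the exception is fresh; expired entries are dropped,
    which is where the corresponding firewall rule would be deleted."""
    now = time.time() if now is None else now
    granted = _allowed.get(ip)
    if granted is None or now - granted > TTL:
        _allowed.pop(ip, None)
        return False
    return True
```

A cron job or a loop in the shell script could perform the same cleanup against the allowed-IPs file and the live iptables rules.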
You’ve probably seen a page like this if you’ve stumbled across a CloudFlare-protected site while using Tor. I don’t know what internal mechanism they use, but I imagine it’s very similar.