On a host with many virtual sites and no centralized logging, identifying which site is being hammered can be tedious, if not impossible.
Instead of digging through the logs, why not look at the traffic in real time:
tcpdump -s 1500 -l -v -A -c 100 dst port 80 | grep "Host:"
This will show the Host headers of HTTP requests as they arrive.
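If you want a running tally of which vhost is hottest rather than a scrolling stream, you could pipe the same output through a small counter. A minimal sketch in Python – the Host: parsing is an assumption about what tcpdump’s -A output looks like, and the script name is hypothetical:

```python
import re
from collections import Counter

def count_hosts(lines):
    """Tally HTTP Host: header values from tcpdump -A output lines."""
    hosts = Counter()
    for line in lines:
        match = re.search(r"Host:\s*(\S+)", line)
        if match:
            hosts[match.group(1)] += 1
    return hosts

# Pipe tcpdump into it, reading the lines from stdin, e.g.:
#   tcpdump -s 1500 -l -A -c 100 dst port 80 | python3 count_hosts.py
```

sort | uniq -c | sort -rn after the grep gets you a similar tally with no Python at all.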
Every time I hack or crack something, I face a tough ethical dilemma. I wonder: am I hurting people’s security and privacy by doing this? When I improve code designed to simplify the cloning of RFID access cards, am I helping society? Am I helping criminals break into buildings? When I write a tutorial that explains “how to hack in”, am I helping society? Or am I helping criminals send phishing spam?
To untangle this, let’s start with definitions. A white hat hacker is defined as someone who improves security, while a black hat hacker is defined as someone who harms security. This isn’t very helpful. Whose security are we talking about? Is a hacker working for a government security organization white hat or black hat? After all, they are improving *their* organization’s security. Are our guys the white hats, while “the other” guys are black hats? And how do we define harm or benefit? Is a hacker who releases information about a 0-day exploit causing harm, or benefit? These definitions just shift the ethics to another level, avoiding the deeper philosophical questions.
Here is a more useful definition:
- White hat hacker: a hacker who shares their tools and knowledge publicly and openly, for the purpose of enabling everyone to gain privacy and control.
- Black hat hacker: a hacker who secretively guards their tools and knowledge, for the purpose of taking privacy and control away from others.
This definition lets us ask another interesting question: what would happen if the majority of hackers were white hat? What would happen if the majority were black hat?
Black Hat Majority:
Information security is in a very bleak state. Black hats have all kinds of back doors, and everyday users can only throw up their hands and say “privacy is dead”, “liberty is dead”, “I do not have control over my devices – others do”. This is the state we are in now.
White Hat Majority:
Information security is in a good state. A published exploit is a defensible exploit. Black hats still have the fringes to operate in. Overall, though, everyday users can be fairly certain that they have control over their systems and that they are not just puppets in a system controlled by others.
This makes it easy for me to say: I’m proud to be a white hat hacker. I’m also proud to be on the right side of the race between the two sides.
I hope this makes others who have been on the sidelines, wondering what the right thing to do is, jump right in.
I’ve been playing with my new Proxmark3. It works great for HID cards, but the ioProx code is still in its infancy. I made some improvements to it based on analysis by marshmellow:
- Better accuracy: you no longer have to worry about centering your fob on the antenna or scanning it repeatedly to get a “good” reading. Now you can just hold it in your fingers to scan. Before this update I was averaging 10–70% accuracy depending on how I held the fob. This version is pretty much 100% – I haven’t had a bad scan yet.
- Correct decoding of the human-readable XSF number: the previous version had a bug that displayed the wrong unique code and the wrong facility IDs.
Download the binary firmware (including a source code patch if you want to build it yourself).
There is still more work to be done. For example, there appears to be a CRC or checksum near the end – it’s still a mystery.
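One way to chase that mystery: compute a few common simple checksums over the decoded payload and see whether any of them consistently matches the trailing byte across multiple fobs. This is a speculative sketch – which bytes feed the checksum, and the candidate algorithms themselves, are assumptions, not known facts about the ioProx format:

```python
def candidate_checksums(payload):
    """Compute common 8-bit checksums over a payload (a bytes object).

    If the suspected checksum byte at the end of the frame matches the
    same candidate across several different cards, that algorithm is a
    strong suspect.
    """
    xor = 0
    total = 0
    for b in payload:
        xor ^= b
        total = (total + b) & 0xFF
    return {
        "xor": xor,                   # XOR of all bytes
        "sum": total,                 # sum of all bytes mod 256
        "neg_sum": (-total) & 0xFF,   # two's-complement of the sum
    }
```

A real CRC-8 would need the polynomial brute-forced as well, but ruling out the trivial candidates first is cheap.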
A couple of my clients’ sites were being probed by botnets. It was the usual:
- Probes for common vulnerabilities
- Password-guessing attacks
It didn’t warrant putting them behind a firewall, but I also didn’t like leaving them open to relentless automated 24×7 scanning. They were extranet-type sites – not really public, but not private either. I scratched my head a bit trying to figure out how I could improve security without inconveniencing users. What I came up with is this:
- Any access to the website is blocked by a “captive portal” style page
- When the user provides the portal with what it asks for (in this case, the correct answer to a CAPTCHA), the firewall opens an exception for their IP and the user is redirected onward to the site they were looking for.
Is this secure against a determined hacker? Definitely not. Solving the CAPTCHA takes only 3 seconds, and then they’re through the first layer of defense. That’s why it’s critical to treat this as an extra (thin) layer of defense, not as the only one.
Is this secure against 99% of all attacks out there (worm and botnet attacks on autopilot)? Definitely.
How is it implemented?
- A PHP page shows the portal
- If the CAPTCHA is correct, the IP is read from $_SERVER['REMOTE_ADDR'], sanitized, and saved to a file that keeps track of allowed IPs
- The PHP page then triggers a shell script that updates iptables with the following entry:
iptables -t nat -A PREROUTING -p tcp --dport 80 -s $AllowedIP -d $YourIP -j DNAT --to-destination $TargetIP:80
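The glue between the web page and iptables is the part that deserves the most care, since REMOTE_ADDR must never reach a shell unvalidated. Here is a sketch of that step in Python rather than PHP, with hypothetical file paths and function names:

```python
import ipaddress
import subprocess

ALLOWED_IPS_FILE = "/var/lib/portal/allowed_ips.txt"  # hypothetical path

def sanitize_ip(raw):
    """Return a validated IPv4 address string, or None if raw is not one."""
    try:
        return str(ipaddress.IPv4Address(raw))
    except ipaddress.AddressValueError:
        return None

def allow_ip(raw_ip, your_ip, target_ip):
    """Record the visitor's IP and open the firewall exception (sketch)."""
    ip = sanitize_ip(raw_ip)
    if ip is None:
        return False
    with open(ALLOWED_IPS_FILE, "a") as f:
        f.write(ip + "\n")
    # Same rule as above, but passed as an argv list so nothing is
    # interpreted by a shell.
    subprocess.run(
        ["iptables", "-t", "nat", "-A", "PREROUTING", "-p", "tcp",
         "--dport", "80", "-s", ip, "-d", your_ip,
         "-j", "DNAT", "--to-destination", target_ip + ":80"],
        check=True,
    )
    return True
```

Passing the command as an argument list rather than a shell string sidesteps injection even if the validation step were ever to regress.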
How can this be improved?
Nothing says access must be granted with a simple CAPTCHA – there could be full authentication taking place before the firewall is opened. Furthermore, the exception could close shortly after access is granted, disallowing any fresh TCP connections. Think of it as authenticated port knocking.
You’ve probably seen a page like this if you’ve stumbled across a CloudFlare-protected site while using Tor. I don’t know what internal mechanism they use, but I imagine it’s very similar.
When reversing applications it’s useful to see what’s happening under the hood. Until now I’ve either had to bring out OllyDbg and dive into assembly, or rely on a high-level tool like Sysinternals Process Monitor. I’m fond of strace on Linux, but searching for “strace for Windows” only turned up tools that were not very reliable. That was a couple of years ago.
Today I stumbled on these two API monitors that do exactly what I need on Windows:
Inspired by a very interesting TED talk by Chris Domas, I decided to make my own tool that did the same thing.
Download the binary (.NET compatible)
Download the source code
As you can tell from the source code, the mechanism is very simple:
- Split the file into bytes
- Loop through the bytes (currentByte and previousByte)
- The X axis is 0–255 (currentByte)
- The Y axis is 0–255 (previousByte)
- Plot the intersections of X and Y
The technical name for this is a digraph. Doing this in 3D or 4D would require a very similar process.
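The loop above boils down to counting (previousByte, currentByte) pairs into a 256×256 grid. My tool is .NET, but the same idea fits in a few lines of Python – names here are illustrative, not taken from the actual source:

```python
def digraph_counts(data):
    """Count (previousByte, currentByte) pairs in a bytes object.

    Returns a 256x256 grid where grid[y][x] is how often byte value x
    immediately followed byte value y.
    """
    grid = [[0] * 256 for _ in range(256)]
    for prev, cur in zip(data, data[1:]):
        grid[prev][cur] += 1
    return grid
```

To turn the counts into pixels you’d map each cell to a brightness; log-scaling helps, since a handful of pairs usually dominates the counts.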
Below are screenshots of some of the files that I visualized.
Note how everything is in the upper left corner. That’s because the bulk of plain text consists of ASCII bytes 32 (space) through 126 (~).
There are some similarities to the text file in terms of well-defined patterns, except that a binary file isn’t restricted to bytes below 127.
Notice the shades of gray.
This was about a 32 MB file. With a bigger, even more random file, I would expect the entire screen to fill white. Any pattern visible here is a telltale sign of a lack of randomness (or a small sample).