The Internet Arms Race

I’m here in sunny Philadelphia, attending NANOG 46, a conference for network operators. The conference has been excellent thus far, with some great information being disseminated. One of the talks was by long-time Internet pioneer Paul Vixie, who has had his hands in many different projects, from being the primary author of BIND for many years, to starting MAPS back in 1996, to more recent involvement with the Conficker Working Group.

Vixie’s talk, titled “Internet Superbugs and The Art of War,” was about the struggle between Internet operators and the “criminal” element that uses the Internet for spam, DDoS attacks, and the like. The crux of the talk was that it costs the bad guys next to nothing to continually evolve their attacks and use the network for their nefarious activities, while it costs network operators a great deal of time and money to try to stop them.

Years ago, attacks were generally sourced from a single location and it was relatively easy to mitigate them. In addition, tracking down the source of the attack was simple enough, so legal action could be taken. At the very least, the network provider upstream from the attacker could disable the account and stop the attack.

Fast forward to today and we have botnets used for sending spam, performing DDoS attacks, and causing other sorts of havoc. Mitigating a DDoS attack becomes next to impossible because it can be sourced from hundreds or thousands of machines simultaneously. Deploying these botnets costs the bad guys almost nothing because users are largely unaware of the importance of patching and securing their machines, leaving millions of exploitable machines on the Internet. The bad guys write viruses, worms, trojans, and the like that infect these machines and turn them into zombies for their botnets.

Fighting these attacks becomes an exercise in futility. We use blacklists to block traffic from places we know are sending spam, we use anti-virus software to prevent infection of our machines, and more. When Conficker was detected and analyzed, researchers realized it represented a new evolution in attacks: Conficker used cryptographic signatures to verify its updates, pseudo-randomly generated lists of domains to contact for updates, and more. Those domain lists are an excellent example of the costs paid by the good guys versus the bad guys.
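The signed-update trick is worth a closer look. Here’s a minimal Python sketch of the general pattern, not Conficker’s actual code; the function name, key format, and RSA/SHA-256 choices are my own illustration, built on the third-party cryptography library:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_update(payload: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    """Accept an update only if its RSA/SHA-256 signature verifies.

    Hypothetical helper: the bot ships with the author's public key baked
    in, so anyone who hijacks an update domain still can't feed it a
    forged payload without the matching private key.
    """
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```

This is part of why capturing update domains was the defenders’ main lever: serving a bogus “update” from a seized domain doesn’t work, because infected hosts simply discard anything not signed with the botmaster’s key.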

The first generation of Conficker generated a list of 250 domains per day to check for updates, making it difficult, but not impossible, to mitigate. So the people fighting the outbreak started buying up these domains in an attempt to prevent Conficker from updating. The authors of Conficker responded by upping the list to 50,000 domains per day, making it practically impossible to buy them all. Fortunately, the people working to contain the outbreak were able to work with ICANN and the various ccTLD registries to monitor and block registration of these domains, and domains that already existed were thoroughly checked to ensure they weren’t hosting the new version of Conficker.
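To make the mechanics concrete, here’s a toy domain-generation sketch in Python. It is emphatically not Conficker’s actual algorithm (which researchers had to reverse-engineer); it just shows the core idea: seed a hash with the date so that every infected machine, and the botmaster, independently derive the same daily list:

```python
import hashlib
from datetime import date

# A handful of TLDs, purely for illustration.
TLDS = [".com", ".net", ".org", ".info", ".biz"]

def daily_domains(day: date, count: int = 250) -> list[str]:
    """Derive `count` pseudo-random domain names from the date alone."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{day.isoformat()}-{i}".encode()).hexdigest()
        label = digest[:8 + int(digest[-1], 16) % 5]  # 8-12 character label
        domains.append(label + TLDS[i % len(TLDS)])
    return domains

print(daily_domains(date.today())[:5])  # the first few of today's names
```

Anyone who has the algorithm can pre-compute tomorrow’s list and register or block the names, which is feasible at 250 names per day. For the author, raising that to 50,000 is a one-line change; for the defenders, the workload grows 200-fold.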

Vixie brought up an interesting point about all of this activity, though. The authors of Conficker made a relatively simple change to make it use 50,000 domains; the people fighting Conficker spent many hours and days, not to mention a significant amount of money, mitigating that change. Smaller ccTLD registries that don’t have 24×7 abuse staff simply can’t cope: they don’t have the budget to do all of this work for free, and as the workload climbs, they’re more likely to turn a blind eye.
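To put rough numbers on that asymmetry (my back-of-envelope figures, not Vixie’s): at roughly $10 per registration, pre-registering 250 domains a day runs about $2,500 a day, which a coordinated group can absorb. At 50,000 domains a day, that balloons to roughly $500,000 a day, more than $180 million a year, before counting anyone’s time. That’s why blocking registrations through ICANN and the registries, rather than buying the domains outright, became the only workable approach.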

All of this, in turn, means that our current mode of reacting to these attacks and mitigating them does not scale; it merely produces lost revenue and frustration. Maintaining ever-growing lists of places to avoid and of bad content will never scale either. There is a breaking point somewhere, and at that point we have no recourse unless we change our way of thinking.

Along the same line of thought, I came across a pretty decent quote today, originally posted by Don Franke of (ISC)²:

“PC security is no longer about a virus that trashes your hard drive. It’s about botnets made up of millions of unpatched computers that attack banks, infrastructures, governments. Bandwidth caps will contribute to this unless the thinking of Internet providers and OS vendors change. Because we are all inter-connected now.”

The original post explains how moving to bandwidth caps will only exacerbate the security problem: users will no longer want to spend their capped bandwidth downloading updates, preferring to save it for the things they’re actually interested in.

Overall, it was a very interesting talk and a very different way of thinking. There is no definitive answer as to what direction we need to go in to resolve this, but it’s definitely something that needs to be investigated.