NANOG 46 – Final Thoughts

NANOG 46 wraps up today, and it has been an incredible experience. This particular meeting seemed to have an underlying IPv6 current to it, and if you believe the reports, IPv6 will have to become the standard within the next couple of years. We’ll be running dual-stack configurations for some time to come, but the IPv6 rollout is necessary.

To date, I haven’t had a lot to do with IPv6. A few years ago I set up one of the many IPv6 shims, just to check out connectivity, but never really went anywhere with it. It was nothing more than a tech demo at the time, with no real content out there to bother with. Content exists today, however, and will continue to grow as time moves on.

IPv6 connectivity is still spotty and problematic for some, though, and there doesn’t seem to be a definitive, workable solution. For instance, if your IPv6 connectivity is not properly configured, you may lose access to some sites: you receive DNS responses pointing you at IPv6 content that you cannot actually reach. The result is either a long delay while the client falls back to IPv4, or complete breakage. So one of the primary problems right now is whether or not to send AAAA records in response to DNS requests when the IPv6 connectivity status of the requester is unknown. Google, from what I understand, uses a whitelist system: once a provider has demonstrated sufficient IPv6 connectivity, Google adds them to the whitelist and the provider then receives AAAA records.
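
To make the failure mode concrete, here is a minimal Python sketch of the naive “try IPv6 first, then fall back to IPv4” behavior that causes the delay described above. The host name, port, and timeout are arbitrary examples; a smarter client would race both address families in parallel instead of burning a full timeout on a broken v6 path.

import socket

def connect_v6_then_v4(host, port=80, timeout=5):
    """Naive dual-stack client: try every IPv6 address before any IPv4 one.

    If the host advertises AAAA records but our IPv6 path is broken, each
    v6 attempt eats the full timeout before we ever get around to IPv4.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Sort so that IPv6 (AF_INET6) addresses are attempted first.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    for family, socktype, proto, _, sockaddr in infos:
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            return sock  # first address that answers wins
        except OSError:
            sock.close()  # broken path: we just burned `timeout` seconds
    raise OSError(f"could not reach {host}:{port} over IPv6 or IPv4")

if __name__ == "__main__":
    # www.example.com is just a placeholder dual-stacked host name.
    conn = connect_v6_then_v4("www.example.com")
    print("connected via address family", conn.family)
    conn.close()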

Those problems aside, I think rolling out IPv6 will be pretty straightforward. My general take is to run dual-stack to start, and probably for the foreseeable future, and to get the network handing out IPv6 addresses. Once that’s in place, we can start offering AAAA records for services. I’m still unsure at this point how to handle DNS responses for users with possibly poor v6 connectivity.

Another area of great interest this time around is DNSSEC. I’m still quite skeptical about DNSSEC as a technology, partly due to ignorance, partly due to seeing problems with what I do understand. Rest assured, once I have a better handle on this, I’ll finish up my How DNS Works series.

I’m all for securing the DNS infrastructure and doing something to ensure that DNS cannot be poisoned the way it can today. DNSSEC aims to add security to DNS such that you can trust the responses you receive. However, I have major concerns with what I’ve seen of DNSSEC so far. One of the bigger problems I see is that each and every domain (zone) needs to be signed. Sure, that makes sense, but my concern is the cost involved in doing so. SSL certificates are not cheap and are a recurring cost; if DNSSEC signing ends up following a similar model, smaller providers may run into major issues funding that kind of security. As a result, they will be unable to sign their domains and participate in the secure infrastructure.

Another issue I find extremely problematic is the fallback to TCP. Cryptographic signatures are big, and they get bigger as key sizes grow. As a result, signed DNS responses can exceed what fits in a UDP datagram, forcing a fallback to TCP. One reason DNS works so well today is that the server doesn’t have to worry about retransmissions, connection state, and so on. There is no handshake required; the UDP packets just fly, and it’s up to the client to retransmit if necessary. When you move to TCP, the nature of the protocol means that both the client and the server need to keep state and perform any necessary retransmissions. That takes up socket space on the server, takes time, and uses many more CPU cycles. Based on a lightning talk during today’s session, when the .ORG zone was signed, they saw roughly a 100-fold increase in TCP connections, moving from less than 1 TCP query per second to almost 100. This concerns me greatly, as the majority of the Internet has not enabled DNSSEC at this point. I can see this climbing even further, eventually overwhelming the system and bringing DNS to its knees.
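
As a rough illustration of the mechanics (not a benchmark), here is a sketch using the dnspython library: it asks for DNSSEC data over UDP and, if the answer comes back truncated, retries over TCP just as a resolver would. The zone name and the resolver address are arbitrary examples, not anything taken from the talk.

import dns.flags
import dns.message
import dns.query

RESOLVER = "8.8.8.8"   # example resolver address; substitute your own
ZONE = "org."          # a signed zone, used purely as an example

# Ask for DNSKEY records with the DO (DNSSEC OK) bit set, so signatures
# are included in the answer. Signed answers are much larger than plain ones.
query = dns.message.make_query(ZONE, "DNSKEY", want_dnssec=True)

response = dns.query.udp(query, RESOLVER, timeout=5)
if response.flags & dns.flags.TC:
    # Truncated: the signed answer did not fit in the UDP datagram,
    # so fall back to TCP exactly as a real resolver would.
    response = dns.query.tcp(query, RESOLVER, timeout=5)
    print("answer did not fit over UDP; retried over TCP")

print("response size:", len(response.to_wire()), "bytes")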

I also believe that moving in this direction makes it easier for the “bad guys” to DoS servers, as they can easily trigger TCP transactions, perform various TCP-based attacks, and generally muck up the system further.

So what’s the alternative? Well, there is DNSCurve, though I know even less about that as it’s very much a fringe technology at this point. In fact, the first workable patch against djbdns was only released in the past few weeks. It’s going to take some time to absorb what’s out there, but based on the current move to DNSSEC, my general feeling is that no matter how much better DNSCurve may or may not be, it doesn’t have much of a chance. Even so, there’s a lot more to learn in this arena.

I also participated in a Security BOF. BOFs are, essentially, less structured talks on a given subject. There is a bit more audience participation and the audience tends to be a bit smaller. The Security BOF was excellent as there were conversations about abuse, spam, and methods of dealing with each. The spam problem is, of course, widespread and it’s comforting to know that you’re not the only one without a definitive answer. Of course, the flip side of that is that it’s somewhat discouraging to know that even the big guys such as Google are still facing major problems with spam. The conversation as a whole, though, was quite enlightening and I learned a lot.

One of the more exciting parts of NANOG for me, though, was meeting some of the Internet greats. I’ve talked to some of these folks via email and on various mailing lists, but to meet them in person is a rare honor. I was able to meet and speak with Randy Bush and Paul Vixie, both giants in their fields, and to rub elbows with folks from Google, Yahoo, and more. I’ve exchanged PGP keys with several people throughout the conference, which serves as a geek’s autograph. I have met some incredible people and I look forward to talking with them in the future.

If you’re a network operator, or your interests lie in that direction, I strongly encourage you to make a trip to at least one NANOG in your lifetime. I’m hooked at this point and I’m looking forward to being able to attend more meetings in the future.

 

The Internet Arms Race

I’m here in sunny Philadelphia, attending NANOG 46, a conference for network operators. The conference, thus far, has been excellent, with some great information being disseminated. One of the talks was by long-time Internet pioneer Paul Vixie. Vixie has had his hands in a lot of different projects, ranging from being the primary author of BIND for many years, to starting MAPS back in 1996, to, more recently, his involvement with the Conficker Working Group.

Vixie’s talk was titled “Internet Superbugs and The Art of War,” and was about the struggle between Internet operators and the “criminal” element that uses the Internet for spam, DDoS attacks, and so on. The crux of the talk was that it costs the bad guys next to nothing to continually evolve their attacks and use the network for their nefarious activities. On the flip side, it costs the network operators a good deal of time and money to try to stop these attacks.

Years ago, attacks were generally sourced from a single location and it was relatively easy to mitigate them. In addition, tracking down the source of the attack was simple enough, so legal action could be taken. At the very least, the network provider upstream from the attacker could disable the account and stop the attack.

Fast forward to today and we have botnets that are used for sending spam, performing DDoS attacks, and causing other sorts of havoc. It becomes next to impossible to mitigate a DDoS attack because it can be sourced from hundreds or thousands of machines simultaneously. This costs the bad guys nothing to deploy because users are largely ignorant and don’t understand the importance of patching and securing their machines, which leaves millions of exploitable machines on the Internet. The bad guys write viruses, worms, trojans, and the like that infect these machines and turn them into zombies for their botnets.

Fighting these attacks becomes an exercise in futility. We use blacklists to block traffic from places we know are sending spam, we use anti-virus software to prevent infection of our machines, and more. When Conficker was detected and analyzed, researchers realized that this infection was a new evolution of attack: Conficker used cryptographic signatures to verify updates, pseudo-random lists of websites for updates, and more. Those website lists are an excellent example of the cost paid by the good guys versus the bad guys.
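
As a small aside on how blunt some of those defenses are, here is a minimal sketch of the classic DNS blacklist lookup many mail servers perform: reverse the octets of the connecting IP and look it up under a DNSBL zone. The zen.spamhaus.org zone and the 127.0.0.2 test address are the conventional examples; production mail software wraps far more policy around this, and queries sent through some large public resolvers may be refused.

import socket

def is_listed(ip_address, dnsbl_zone="zen.spamhaus.org"):
    """Return True if ip_address appears on the given DNS blacklist.

    DNSBLs are queried by reversing the IPv4 octets and appending the
    blacklist zone: 192.0.2.1 -> 1.2.0.192.zen.spamhaus.org. A successful
    A-record lookup means "listed"; NXDOMAIN means "not listed".
    """
    reversed_octets = ".".join(reversed(ip_address.split(".")))
    try:
        socket.gethostbyname(f"{reversed_octets}.{dnsbl_zone}")
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    # 127.0.0.2 is the standard DNSBL test entry and should always be listed.
    print(is_listed("127.0.0.2"))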

The first generation of Conficker used a generated list of websites for updates: 250 domains per day, which made it difficult, but not impossible, to mitigate. So the people fighting the outbreak started buying up these domains in an attempt to prevent Conficker from updating. The authors of Conficker responded by upping the list to 50,000 domains per day, making it nearly impossible to buy them all. Fortunately, the people working to prevent the outbreak were able to work with ICANN and the various ccTLD operators to monitor and block purchases of these domains. Domains that already existed were checked thoroughly to ensure they weren’t hosting the new version of Conficker.

Vixie brought up an interesting point about all of this activity, though. The authors of Conficker made a relatively simple change to make it use 50,000 domains. The people fighting Conficker spent many hours and days, not to mention a significant amount of money, mitigating that change. Smaller ccTLD operators that don’t have 24×7 abuse staff are unable to cope; they don’t have the budget to do all of this work for free, and as the workload climbs, they’re more likely to turn a blind eye.

All of this, in turn, means that our current mode of reacting to these attacks and mitigating them does not scale. It merely results in lost revenue and frustration. Additionally, creating lists of places to avoid, generating lists of bad content, and so on will never scale over time. There is a breaking point somewhere, and once we reach it we have no recourse unless we change our way of thinking.

Along the same line of thought, I came across a pretty decent quote today, originally posted by Don Franke of (ISC)²:

“PC security is no longer about a virus that trashes your hard drive. It’s about botnets made up of millions of unpatched computers that attack banks, infrastructures, governments. Bandwidth caps will contribute to this unless the thinking of Internet providers and OS vendors change. Because we are all inter-connected now.”

If you read the original post, it explains how moving to bandwidth caps will only exacerbate the security problem, because users will no longer want to waste their allotment downloading updates, preferring to save that bandwidth for things they’re actually interested in.

Overall, it was a very interesting talk and a very different way of thinking. There is no definitive answer as to what direction we need to go in to resolve this, but it’s definitely something that needs to be investigated.

 

Digital Armageddon

April 1, 2009. The major media outlets are all over this one. Digital Armageddon. The end of computing as we know it. Again. But is it? Should we all just “Chill Out?”

So what happens on April 1, 2009? Well, Conficker activates. Sort of. It activates the latest revision of its auto-update algorithm, switching the number of domains it can look for updates on from 250 per day to 50,000 per day. Conficker, in its current form, isn’t really malicious beyond the techniques it uses to prevent detection. To become malicious, it needs to download an update to its base code.

There are two methods by which Conficker will update its base code. The first is to download the code from one of the 50,000 domains it generates. However, it does not scan all 50,000 domains at once. Instead, it picks a random list of 500 of the 50,000 generated domains and scans them for an update. If no update is found, Conficker sleeps for 24 hours and starts over: it generates a new list of 50,000 domains, randomly picks 500, and contacts them. The overall result is that it becomes nearly impossible to block all of the generated domains, increasing the likelihood that an update will get through. On the flip side, this process would seem to result in a very slow spread of updates; it could easily take days, weeks, or months for a single machine to finally stumble upon a live domain.
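
To make the mechanism concrete, here is a heavily simplified Python sketch of a date-seeded domain generation scheme of this general shape. To be clear, this is not Conficker’s actual algorithm; the seeding, name lengths, and TLD list are invented purely for illustration.

import random
import string
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]   # illustrative TLD list only

def generate_domains(day, count=50000):
    """Generate `count` pseudo-random domains seeded only by the date.

    Because the seed depends only on the date, every infected host computes
    the same list for a given day, and so can anyone defending against it.
    """
    rng = random.Random(day.toordinal())           # deterministic per-day seed
    domains = []
    for _ in range(count):
        name = "".join(rng.choice(string.ascii_lowercase)
                       for _ in range(rng.randint(8, 11)))
        domains.append(name + rng.choice(TLDS))
    return domains

def pick_targets(day, sample_size=500):
    """Pick the 500 domains a single infected host would contact today."""
    domains = generate_domains(day)
    return random.sample(domains, sample_size)     # this sample differs per host

if __name__ == "__main__":
    print(pick_targets(date.today())[:5])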

The second method is to download the code via a peer-to-peer connection between infected hosts. As I understand it, the peer-to-peer mechanism has been active since revision C of Conficker appeared in the wild. This mechanism allows an update to spread from system to system very rapidly. Additionally, based on how the peer-to-peer mechanism works, blocking it appears to be difficult at best.

So what is the risk here? Seriously, is my computer destined to become a molten heap of slag, a spam factory, or a zombie soldier in a botnet attack against foreign governments? Is all hope lost? Oh my, are we all going to die?!

For the love of all things digital, pull it together! It’s not as bad as it looks. First of all, if you consistently update your machines and keep your anti-virus up to date, the chances of being infected are very low. If you don’t keep up to date, then perhaps you should start. At any rate, fire up a web browser and search for a Conficker scanner; most of the major anti-virus vendors have one. Make sure you’re familiar with the company you’re downloading the scanner from, though, as a large number of scam sites have popped up since Conficker hit the mainstream media.

If you’re a network admin, you have a bigger job. First, I’d recommend making sure any Windows machines you are responsible for are patched. Yes, that includes those machines on that private network that is oh-so impossible to get to; Conficker can spread via network (SMB) shares and USB keys as well. Next, try scanning your network for infections. There are a number of Conficker scanners out there now, thanks to the Honeynet Project and Dan Kaminsky. I have personally used both the proof-of-concept Python scanner and the latest version of nmap.

If you’re using nmap, the following command line works quite well and is incredibly fast (substitute your own network range for 192.168.1.0/24):

nmap -sC --script=smb-check-vulns --script-args=safe=1 -p139,445 \
-d -PN -n -T4 --min-hostgroup 256 --min-parallelism 64 \
-oA conficker_scan 192.168.1.0/24

Finally, as a network admin, you should probably have some sort of Intrusion Detection System (IDS) in place. Snort is an open source IDS that works quite well and has a large community following. IDS signatures exist to detect all known variants of Conficker.

So calm down, take a deep breath, and don’t worry. I find it extremely unlikely that April 1 will result in anything more than a blip in network activity. Instead, concentrate on detection and patching. Conficker isn’t Skynet…. Yet.

 

They’re Watching You… (Book Review: Little Brother)

My good friend Wil Wheaton (yeah, we’ve never met… or talked…) mentioned a captivating book he read a few months ago. What really caught my attention was that he handed the book off to his son, thinking it was something he could share with him. Having children myself, I decided to take a look and see what all the fuss was about. That book is called Little Brother.

Little Brother is a book about a teenager caught up in global events that forever change his life. After a terrorist attack in his neighborhood, the Department of Homeland Security swoops in to save the day. What follows is a terrifying look into the future of our own country as privacy erodes and Big Brother takes over.

Cory Doctorow weaves a tale that is not only believable, but may be an eerie foreshadowing of real events. It is a glaring reminder that we, as citizens, must ensure that the government continues to serve us rather than control us.

I heartily recommend checking this book out. Cory has released Little Brother under the Creative Commons License and has it available as a free download on his website. I strongly encourage you to support Cory and buy a copy if you like the book. And if you like Cory’s work, his website has free downloads of other stories he has written.

Steal the Net’s Identity

Imagine this: you wake up in the morning, go about your daily chores, and finally sit down to surf the web, read some news, and check your mail. At some point, you decide to log in to your bank to check your accounts. You get there, log in, and you’re greeted with a page explaining that the site is down for maintenance. Oh well, you’ll come back later. In the meantime, someone drains your account using the username and password you just graciously handed over, because the site you went to was not where you intended to go.

Sound familiar? Yeah, I guess it sounds a bit like a phishing attack, though a tad more sophisticated. I mean, you did type in the address for the bank yourself, didn’t you? It’s not like you clicked on a link in an email or something. But in the end, you arrived at the wrong site, cleverly designed, and gave them your information.

So how the hell did this happen? How could you end up at the wrong site when you personally typed in the address, and your computer has all the latest virus scanning, firewalling, and so on? You spelled it right, too! It’s almost as if someone took over the bank’s computer!

Well, they did. Sort of. But they did it without touching the bank’s computers at all. They used the DNS system to inject a false address for the bank website, effectively re-directing you to their site. How is this possible? Well, it’s a flaw in the DNS protocol itself that allows this. The Matasano Security blog posted about this on Monday, though the post was quickly removed. You may still be able to see the copy that Google has cached.

Let me start from the beginning. On July 8th, Dan Kaminsky announced that he had discovered a flaw in the DNS protocol and had been working, in secret, with vendors to release patches that fix the problem. This was a huge coordinated effort, one of the first of its kind. In the end, patches were released for BIND, Microsoft DNS, and others.

The flaw itself is interesting, to say the least. When a user requests an address for a domain, the request usually goes to a local DNS cache for resolution. If the cache doesn’t know the answer, it follows a set of rules that eventually lead it to ask a server that is authoritative for that domain. When the cache asks the authoritative server, the packet contains a Query ID (QID). Since caches usually have multiple requests pending at any given time, the QID helps match each response to its request. Years ago, there was a way to spoof DNS by guessing the QID. This was pretty simple to do because QIDs were sequential: the attacker could guess the QID and, if they could get their forged response back to the cache faster than the authoritative server could, they would effectively hijack the domain.
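
You can see the QID for yourself with a couple of lines of Python and the dnspython library; this is just a peek at the field, not an attack. The resolver address is an arbitrary example.

import dns.message
import dns.query

# Build a query and look at its 16-bit Query ID.
query = dns.message.make_query("example.com", "A")
print("QID:", query.id)                  # a value between 0 and 65535

# A resolver only accepts a response whose QID (and question) match.
response = dns.query.udp(query, "8.8.8.8", timeout=5)   # example resolver
print("response QID matches:", response.id == query.id)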

So vendors patched this flaw by randomizing the QID. Of course, with enough computing power it’s still possible to guess the QID, or to attack the random number generator itself. Difficult, but possible. However, the computing power to do this in a timely manner wasn’t readily available back then, so 16-bit random QIDs were considered secure enough.

Fast forward to 2008. We have the power, and almost everyone with a computer has it; it is now feasible to guess something like this in just a few seconds. So this little flaw rears its ugly head once again. But there’s a saving grace. When you request resolution for a domain name, the answer comes with additional data, such as a TTL. The TTL, or Time To Live, defines how long an answer should be kept in the cache before it is resolved again. This mechanism greatly reduces the amount of DNS traffic on the network because, in many cases, a domain name keeps the same IP address for weeks, months, or even years. So if the attacker is unsuccessful in the initial attack, he has to wait for the TTL to expire before he can try again.

There was another attack, back in the day, that allowed an attacker to overwrite entries in the cache regardless of the TTL. As I mentioned before, when a DNS server responds, the response can contain additional information, some of it in the form of “glue” records. These are extra records, included in the original response, that help out the requester.

Let’s say, for instance, that you’re looking for the address of google.com. You ask your local cache, which doesn’t currently know the answer. Acting as a recursive resolver, it asks the servers responsible for the .com zone. The response names the nameserver responsible for google.com, such as ns1.google.com. The cache now needs to contact ns1.google.com, but it doesn’t know the address of that server, so it would have to make additional requests to figure that out. However, the .com server already includes a glue record that gives the cache this information without the cache asking for it. In a perfect world, this is wonderful: it makes resolution faster and reduces the amount of DNS traffic required. Unfortunately, this isn’t a perfect world. Attackers could exploit this by including glue records for domains they were not authoritative for, effectively injecting records into the cache.

Again, vendors to the rescue! The concept of a bailiwick check was introduced. In short, if a cache asked for the address of google.com and the response included an address for yahoo.com, the cache would ignore the yahoo.com information, because that record is outside the bailiwick of the question it asked.
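
Here is a toy Python sketch of the idea behind a bailiwick check. Real resolvers apply considerably more nuanced rules; the zone names below are just examples.

def in_bailiwick(record_name: str, zone: str) -> bool:
    """Return True if record_name is at or below the zone we asked about.

    A cache resolving something under google.com should only accept extra
    (glue) records that fall inside google.com; anything else is discarded.
    """
    record = record_name.lower().rstrip(".").split(".")
    zone_labels = zone.lower().rstrip(".").split(".")
    return record[-len(zone_labels):] == zone_labels

# Glue for ns1.google.com is acceptable in a google.com referral...
print(in_bailiwick("ns1.google.com", "google.com"))   # True
# ...but a stray record for www.yahoo.com is out of bailiwick and ignored.
print(in_bailiwick("www.yahoo.com", "google.com"))    # False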

Ok, we’re safe now, right? Yeah, no. If we were safe, there wouldn’t be much for me to write about. No, times have changed. We now have the power to predict 16-bit random numbers, overcoming the QID protection. But TTLs save us, right? Well, yes, sort of. But what happens if we combine these two attacks? Interesting things, actually.

What happens if we look up a nonexistent domain? You get a response of NXDOMAIN, of course. But what happens in the background? The cache goes through the exact same procedure it would for a valid domain; remember, the cache has no idea the domain doesn’t exist until it asks. Once it receives that NXDOMAIN, it will cache the response for a period of time, usually defined by the zone’s owner. But because the cache goes through the same resolution process, there exists an attack vector that can be exploited.

So let’s combine the attacks. We know that we can guess the QID, given enough guesses. And we know that we can inject glue records, provided they are within the same domain the response is for. So if we trigger lookups for non-existent names under a real domain, guess the QID, and include glue records for that real domain in our forged responses, we can poison the cache and hijack the domain.

So now what? We already patched these two problems! Well, the short-term answer is another patch. The new patch adds additional randomness to the equation in the form of the source port: when a DNS server makes a request, it now randomizes both the QID and the source port, so the attacker has to guess both to be successful. That turns a 16-bit guess into roughly a 32-bit one, which takes a lot more effort on the part of the attacker. This helps, but, and this is important, it is still possible to perform this attack given enough time. This is NOT a permanent fix.
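
To get a feel for why the extra entropy helps but is not a cure, here is a back-of-the-envelope Python sketch comparing the average number of spoofed packets needed against 16 bits of randomness versus roughly 32 bits. The packets-per-second figure is an arbitrary assumption, and the model ignores the race against the legitimate answer, so treat the output as rough orders of magnitude only.

# Rough odds of winning the spoofing race: the attacker must hit one value
# out of 2**bits before the legitimate answer is accepted and cached.
SPOOFED_PER_SECOND = 50_000   # assumed attacker send rate, purely illustrative

for bits, label in [(16, "QID only"), (32, "QID + random source port")]:
    space = 2 ** bits
    expected_guesses = space / 2               # on average, half the space
    seconds = expected_guesses / SPOOFED_PER_SECOND
    print(f"{label}: ~{space:,} possibilities, "
          f"~{seconds:,.1f}s of spoofing on average at "
          f"{SPOOFED_PER_SECOND:,} packets/sec")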

That’s the new attack in a nutshell. There may be additional details I’m not aware of; Dan will be presenting them at the Black Hat conference in August. In the meantime, the message is: patch your servers! Not every server is vulnerable to this. Some, such as djbdns, have been randomizing source ports for a long time, but many others are. If in doubt, check with your vendor.

This is pretty big news, and it’s pretty important. Seriously, this is not a joke. Check your servers and patch. Proof of concept code is in the wild already.

Hide that data…

Data security is a pretty hot topic these days, especially when it comes to portable data.  In fact, recent reports put airport laptop theft in the tens of thousands a week.  Most, if not all, of these laptops have sensitive data on them, whether it be sensitive to the user, or sensitive to the user’s employer.  And to make matters worse, most of these laptops lack anything beyond basic security such as a Windows logon password.

But is security that much of an issue?  Is it that difficult to effectively secure the data on a laptop, or any other computer for that matter?  Well, it depends on the type of security we’re talking about.  There is a significant difference between securing data on a machine that is powered off and securing data on a machine that is powered on and actively processing it.  In the latter case, firewalls, anti-virus software, and good programming practices will help to shield that data from nosy intruders.

If your machine is not powered on and an attacker can gain physical access, is there any way to protect the data?  The answer is actually quite simple.  There exists a product that can encrypt the data on your machine, either in chunks or as a whole.  In fact, with the latest version, you can even choose to have it deploy a decoy operating system, just in case you’re being tortured for your password.  What is this wondrous software, and how much is it going to cost you?  It’s called TrueCrypt, and it’s FREE.

TrueCrypt is a data encryption tool that runs on Windows, Mac OS X, and Linux.  In fact, if you’re a decent programmer, you can probably get it to work on most any operating system as the source is freely available.  The TrueCrypt website highlights the following as main features:

  • Creates a virtual encrypted disk within a file and mounts it as a real disk.
  • Encrypts an entire partition or storage device such as USB flash drive or hard drive.
  • Encrypts a partition or drive where Windows is installed (pre-boot authentication).
  • Encryption is automatic, real-time (on-the-fly) and transparent.
  • Provides two levels of plausible deniability, in case an adversary forces you to reveal the password:
    1) Hidden volume (steganography) and hidden operating system.
    2) No TrueCrypt volume can be identified (volumes cannot be distinguished from random data).
  • Encryption algorithms: AES-256, Serpent, and Twofish. Mode of operation: XTS.

There is a small amount of overhead when using encryption, but for most business applications, that’s an acceptable sacrifice for the security gained.  Even without the use of hidden volumes or decoy operating systems, TrueCrypt offers a safe, secure manner in which you can protect your data.  And, if you so choose, you can move TrueCrypt volumes between computers and even operating systems, such as on a USB flash drive, while maintaining compatibility.  In fact, I use this feature on a daily basis.  I have a small 1 GB USB flash drive with a TrueCrypt partition on it where I store some personal information, such as a copy of portable Thunderbird.  Included on the USB drive, in an unencrypted area, is a copy of TrueCrypt for Windows, Mac, and Linux.  Thus, if I ever need to mount the drive on a system without a copy of TrueCrypt, I’ve brought my own.

TrueCrypt 6.0 was released over the July 4th holiday.  This latest release adds some great new features.  Parallel encryption and decryption was added, meaning TrueCrypt will use all of the processors (or cores) on a multi-processor system and run substantially faster on such machines.  Also added was the ability to create and run hidden, or decoy, operating systems.  Hopefully I’ll never find myself in a situation where such a decoy is needed, but perhaps James Bond will find this new feature useful.  A number of minor enhancements were made as well, including a number of bug fixes.  The current version history can be found here, and you can download the latest version here.

TrueCrypt is a wonderful tool, even for personal data protection.  I recommend looking into it, and even integrating it into your everyday life.  It’s a small change, barely noticeable for most, but the security benefits are staggering.  Just don’t forget your password, ok?

Instant Kernel-ification

 

Server downtime is the scourge of all administrators, sometimes to the extent of bypassing necessary security upgrades, all in the name of keeping machines online.  Thanks to an MIT graduate student, Jeffery Brian Arnold, keeping a machine online, and up to date with security patches, may be easier than ever.

Ksplice, as the project is called, is a small executable that allows an administrator to patch security holes in the Linux kernel without rebooting the system.  According to the Ksplice website:

“Ksplice allows system administrators to apply security patches to the Linux kernel without having to reboot. Ksplice takes as input a source code change in unified diff format and the kernel source code to be patched, and it applies the patch to the corresponding running kernel. The running kernel does not need to have been prepared in advance in any way.”

Of course, Ksplice is not a perfect silver bullet; some patches cannot be applied using it.  Specifically, any patch that requires “semantic changes to data structures” cannot be applied to the running kernel.  A semantic change is a change “that would require existing instances of kernel data structures to be transformed.”

But that doesn’t mean that Ksplice isn’t useful.  Jeffery looked at 32 months of kernel security patches and found that 84% of them could be applied using Ksplice.  That’s sure to increase the uptime.

I have to wonder, though, what is so important that you need that much uptime.  Sure, it’s nice to have the system run all the time, but if you have something that is absolutely mission critical, that must run 24×7, regardless, won’t you have a backup or two?  Besides which, you generally want to test patches before applying them to such sensitive systems.

There are, of course, other uses for this technology.  As noted on the Ksplice website, you can also use Ksplice to “add debugging code to the kernel or to make any other code changes that do not modify data structure semantics.”  Jeffery has posted a paper detailing how the technology works.

Pretty neat technology.  I wonder if this will lead to zero downtime kernel updates direct from Linux vendors.  As it is now, you’ll need to locate and manually apply kernel patches using this tool.

 

Ooh.. Bad day to be an IIS server….

Web-based exploits are pretty common nowadays.  It’s almost daily that we hear of sites being compromised one way or another.  Today, it’s IIS servers.  IIS is basically a web-server platform developed by Microsoft.  It runs on Windows-based servers and generally serves ASP (Active Server Pages) content, dynamic pages similar to those generated with PHP or Ruby.  There is some speculation that this is related to a recent security advisory from Microsoft, but this has not been confirmed.

Several popular blogs, including one on the Washington Post, have posted information describing the situation.  There is a bit of confusion, however, as to what exactly the attack is.  It appears that the IIS servers were infected using the aforementioned vulnerability, while other web servers are being infected using SQL injection attacks.  So it looks like there are several attack vectors being used to spread this particular beauty.
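
SQL injection itself is worth a quick illustration.  The sketch below shows the classic vulnerable pattern of pasting user input straight into a query, alongside the parameterized alternative; the table, columns, and payload are made up for the example, with SQLite standing in for whatever backend the compromised sites actually run.

import sqlite3   # stand-in database; the same idea applies to any SQL backend

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, body TEXT)")
conn.execute("INSERT INTO pages VALUES (1, 'hello world')")

# Attacker-supplied "id" parameter, e.g. pulled from a URL query string:
page_id = "1; UPDATE pages SET body = '<script src=evil.js></script>' --"

# Vulnerable pattern: user input is pasted straight into the SQL text.
# On a backend that allows stacked statements, the attacker's UPDATE runs too.
vulnerable_sql = f"SELECT body FROM pages WHERE id = {page_id}"
print(vulnerable_sql)

# Safe pattern: the input is bound as a parameter and treated purely as data.
rows = conn.execute("SELECT body FROM pages WHERE id = ?", (page_id,)).fetchall()
print(rows)   # [] -- the malicious string simply matches no integer id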

Many of the reports are using Google searches to estimate the number of infected systems.  Estimates put that figure at about 500,000, but take it with a grain of salt.  While a lot of servers are affected, using Google as the source of this particular metric is somewhat flawed: Google reports the total number of pages it found referring to a particular search string, so there may be duplicates.  It’s safe to say, however, that this is pretty widespread.

Regardless of the method of attack, and which server is infected, an unsuspecting visitor to an exploited site is exposed to a plethora of attacks.  The malware uses a number of exploits in popular software packages, such as AIM, RealPlayer, and iTunes, to gain access to the visitor’s computer.  Once the visitor is infected, the malware watches for username and password information and reports it back to a central server.  Both the ISC and ShadowServer have excellent write-ups on the server exploit as well as the end-user exploit.

Be careful out there, kids…

Scamtastic!

 

So, I’m sitting here, working away and my cell phone rings.  I look up and the caller ID shows “1108” ….  Hrm..  well, that’s odd, but I’ve seen some pretty odd stuff show up on the caller ID for my cell, so I answer the call.

“Hello, this is <unintelligible> from Domain Registry Support, and am I speaking with …”

I own a few domain names, and I had just registered one with GoDaddy a few days ago, so I thought, perhaps, that this was a call from GoDaddy.  But why call themselves “Domain Registry Support?”  As I listened, however, I discovered that it was not GoDaddy at all.  This gentleman wanted to verify my contact information, as it was listed in whois.  This particular domain is registered via Network Solutions, so I asked him if that was who he worked for.  He told me, again, that he worked for “Domain Registry Support.”

This all took place in the first 30 or so seconds of the call.  His insistence on not giving me any information made me suspicious of the call.  My wife has been contacted a number of times by a credit card scammer trying to get our information, so I’ve been leery of giving out any information to people who call me.

So, I asked the gentleman to hang on a moment and popped up a web browser.  I verified the name of the company again and started a Google search.  Surprise, surprise, I received a page of links about phishing schemes, scams, and assorted complaints.  Unfortunately, as soon as I started typing, my friendly scammer hung up..  Oh well…

Prepare yourself, Firefox 3 is on the way…

Having just released beta 4, the Mozilla Foundation is well on its way to making Firefox 3 a reality.  Firefox 3 aims to bring a host of new features, as well as speed and security enhancements.

On the front end, they updated the theme.  Yes, again.  I’m not entirely sure what the reasoning is, but I’m sure it’s some inane marketing thing.  Probably something along the lines of “we need to make it look shiny and new!”  It’s not bad, though, and only takes a few moments to re-acquaint yourself with the basic functions.

One significant change is the behavior of the back and forward buttons.  In previous versions, you could click towards the bottom of either button and get a history of the pages forward or back from where you are, depending on the button you pressed.  They have combined this into a single drop-down now, with a small dot identifying where in the history you are: back history extends toward the bottom of the list, while forward history moves up.  It’s a little hard to explain in words, but it’s not that difficult in action.

Next up is the download manager.  They revamped the entire download manager, making it look quite different.  Gone is the global “Clear History” button, in is the new “Search” box.  It seems that one of the themes of this release is that history is important, so they added features to allow you to quickly find relevant information.  But fear not, you can still clear the list by right clicking and choosing clear list.  It’s just not as apparent as it used to be.  In addition, you can continue downloads that were interrupted by network problems, or even by closing the browser.

Some of the pop-ups have been reduced as well.  For instance, when new passwords are entered, instead of getting a popup on the screen asking if you want to save the username and password, a bar appears at the top of the page.  This is a bit more fluid, not interrupting the browsing experience as it did in the past.

Many of the dialogs related to security have been revamped in an attempt to make them clearer for non-technical users.  For instance, when encountering an invalid SSL certificate, Firefox now displays a full-page error describing the problem rather than a terse popup dialog.

Other warnings have been added as well.  Firefox now attempts to protect you from malware and web forgeries.  In addition, the browser now handles Extended Validation SSL certificates, displaying the name of the company in green in the location bar.  Clicking the icon to the left of the URL brings up a small popup with additional information about your connection to the remote website.

A plugin manager has been added, allowing the user to disable individual plugins.  This is a very welcome addition to the browser.

The bookmark manager has been updated as well.  In addition to placing bookmarks in folders, users can now add tags.  Using the bookmark sidebar, users can quickly search by tag, locating bookmarks that are in multiple folders.  Smart bookmarks show the most recently used bookmarks, as well as the most recently bookmarked sites and tags.

The location bar has been updated as well.  As you type in the location bar, Firefox automatically searches through your bookmarks, tags, and history, displaying the results.  Results are sorted by both how frequently and how recently you have visited them.  Because bookmarks and tags are searched as well, the location bar remains useful even for users who clear their history on a regular basis.

Behind the scenes there have been a number of welcome changes.  The most noticeable change is speed.  Beta 4 is insanely fast compared to previous versions.  In fact, it seems to be significantly faster than Internet Explorer, Opera, and others!  And, as an added bonus, it seems to use less memory as well.  Ars Technica did some testing to this effect and came out with some surprising results.

Mozilla attributes the speed increase to improvements in the JavaScript engine as well as to profile-guided optimization; in short, they used profiling tools to identify bottlenecks in the code and fix them.  The reduction in memory use is attributed to new allocators and collectors, as well as fixes for leaky code.

Firefox 3 was built on top of the updated Gecko 1.9 engine.  The Gecko engine is responsible for the actual layout of the page on the screen, supporting the various web standards such as CSS, HTML, XHTML, JavaScript, and more.  As the Gecko engine has evolved, it has gained additional capabilities as well as performance.  In fact, using the new engine, Firefox now passes the coveted Acid2 test.

Overall, the latest beta feels quite stable and I’ve begun using it on a daily basis.  It is definitely faster than previous releases.  I definitely recommend checking it out.  On a Windows machine, it will install separately from your primary Firefox installation.  It imports all of your bookmarks and settings after you install it, so there is no danger of losing anything from your primary install.  Just be aware that there is no current guarantee that any new bookmarks, changes, add-ons, etc. will be imported into the final installation.  Many add-ons are still non-functional, though there are plenty more that work fine.

Best of luck!