DirecTV – Ugh…

I’ve written before about my dissatisfaction with DirecTV. I’ve had the service for about a year now, and while it’s worked, I’ve noticed that I’m downloading TV shows more and more often. Part of this is because I sent care packages to a friend in the Navy, and part of it is due to some of the features I lost when I moved to DirecTV. My family still uses the DVR pretty regularly, though, and there are some shows I like to watch when they air.

The DVR has been acting a little strange lately, though. Actually, for about the last week or two. Some of the recordings are inaccessible, showing only a black screen when you try to play them. Others have odd stretches where artifacts start to appear and the show suddenly jumps, skipping over portions. So I decided to call DirecTV and see if they had a resolution. What a waste of time.. Here’s the gist of my conversation:

DirecTV: Hi, how can I help you?

Me: I’m having some problems with my DVR.

DirecTV: Ok, how about you explain what problems you are having and we’ll see if we can fix them.

Me: Well, I’m having a few problems. Some of the recordings I have are showing just black screens, no audio or video. And I’m having a problem with live TV when I try to rewind or pause. On some occasions, I am unable to rewind, and on others, I’ll get a message about Live TV having saved the recording and do I want to keep it. Then it jumps me to the current point in the program, often making me lose 10-20 minutes of the show.

DirecTV: Ok, how are you trying to record the programs?

Me: Umm.. Either through the standard timers, or through hitting the record button.

At this point, the rep begins going through an explanation of how to record a program and how you can’t do it from the guide screen, etc. I interrupt and explain that I don’t have a problem recording, it’s the end result that is the problem.

Me: This all started about a week or two ago, so were there any upgrades?

DirecTV: I’m not showing any recent upgrades. I am seeing that these are known issues, however, and they have been escalated to engineering.

Me: Ok… But these issues just started. This has only been happening for a short period of time, yet you’re telling me no changes have been made. Is it possible that I have a bad hard drive?

DirecTV: Correct. I’ll let engineering know that you’re experiencing these problems as well. As I said, these are known issues and we are working on them.

Me: Ok. So how do I know if the problem has been resolved? Will I see an upgrade or something?

DirecTV: Just continue using the DVR as you normally do. If the problems go away, the issue has been resolved. Or, you can call us in the future.

Me: *sigh* Ok, thanks I guess…

Seriously.. Come on.. No troubleshooting, other than talking to me. No asking what kind of DVR I have (though I suppose they could have that info in their records), no verifying software versions, etc. She just told me it was a known issue. I’m not really convinced, and with the way she basically brushed me off, I’m not at all happy about dealing with DirecTV… Yet I’m locked into a contract… Damn…

Has anyone else seen issues like this? Any tips on how to resolve it? At the moment I’m recording everything I can to DVD. After that’s done, I’ll try re-formatting the hard drive.. That is, if I can find the option to do it. They updated a few months ago and all the stupid menus changed… Argh…

Aaargh….

Back at the beginning of August, a small game developer based in the UK asked for honest feedback on a fairly straightforward question: “Why do people pirate my games?” I can only imagine how many emails he received. He read each one and compiled his thoughts in a well-written response. And, to top it off, he’s changing the way he does business in an attempt to make some of those pirates honest.

Cliff Harris, the game developer, received the typical reasons for pirating: cost, ease of access, and DRM. A few surprising reasons included the “I don’t believe in intellectual property” response and complaints about the quality of current-generation games. And, of course, there were also responses from people who pirate simply because they can.

Cost was somewhat surprising: while there were the usual complaints about the high cost of current games, there were also complaints about the price of his games, which ran in the $19-23 range. In some ways, I can agree with this. Games you buy at a retail chain generally run $50-60 when they are first released. Over time, depending on the platform, prices will drop, but it takes years for most titles to fall into the sub-$20 range. Another argument tied to price was impulse buying. I know, for myself, that impulse buying is a big one. I spend a great deal of time deciding what console game to get next because of the high cost. On the other hand, sites like Big Fish Games make quick impulse buys easy.

Quality is another interesting reason. When a new game comes out, there’s generally a lot of hype. Unfortunately, and probably as expected, most games don’t live up to it. The major letdown in most new games seems to be the gameplay or a lack of content: the game is too short, or difficult to play due to poor control schemes. Game demos often don’t show the full game, giving false impressions. In the end, you pay a good deal of money for a game you don’t enjoy, and to top it off, there’s no way to get your money back. So, many people opt to pirate the game instead of paying for something they might not like. Of course, more often than not, they still don’t pay for the game even if they do like it.

For myself, I don’t really have any interest in pirating games these days. While I would love to have the latest and greatest games (Bioshock and Mass Effect come to mind), pirating often means that you lose some of the features. You almost always lose the online portion of the game, since most online games use some form of DRM to ensure authenticity. Growing up, getting a job, and having little time to play might be a reason too… ;)

Of course, there is one particular reason to pirate games that seems to come to the forefront of my mind these days. DRM. Let’s say I can only get a few games a year. And let’s say I put off getting something like Bioshock or Spore, opting to get it later when it hits the bargain bin. The problem is, these games may never hit the bargain bin. Or, when they do, they won’t work anymore. Why? Because in order for the games to work, the activation servers for these games must be up and running. Good business sense dictates that when a service costs more to run than the revenue it brings in, it gets discontinued. So when running the authentication servers becomes more costly than the revenue the game brings in, they’ll get turned off. Or, worst case, the company maintaining those servers dissolves and the servers get deactivated because there’s no one to run them anymore. The effect, in the end, is the same. The game I purchased is unusable. I recently saw it phrased another way: “you can’t buy new games anymore, you only rent them.”

The sole purpose of DRM, of course, is to prevent piracy. And is it working? Well, sure, to a point. It makes casual pirating, such as making a copy for your friend down the street, more difficult, often putting it out of the reach of typical users. In that way, DRM is a win. On the other hand, the Internet has made it extremely easy to find and download copies of games and other programs that have been altered to remove the need for activation. In other words, the DRM was cracked, and casual pirating becomes easy again.

And let’s face facts, DRM is not nearly as hard to crack as you may think. Let’s take a look at the latest “state-of-the-art” DRM as applied to the new game, Spore. Spore was already on the torrent sites, with a full workaround for the DRM, 3 days PRIOR to its release in the US! The game hadn’t even been released yet, and it was already pirated! Great job, DRM.

Of course, pirating is illegal, and there are no real excuses for it. But publishers should understand the reasons behind it. Why has it become so big? Is it purely because pirated versions are readily available via the Internet? What makes a pirate start pirating? Is there anything that can be done to reduce pirating without alienating the legitimate user? Surely the draconian DRM schemes in use today aren’t working well. In fact, there are some games that won’t work unless they can re-activate every few days! And if you can’t get a connection to re-activate, you can’t play! But, dammit, I bought this game!

I have to applaud Cliff’s response to all of this, though. He has decided not only to reduce the price of his games, but to give up DRM completely (even though his DRM was a one-time lookup and almost completely non-intrusive) and to lengthen his game demos. That takes a lot of courage, especially since his livelihood is riding on it. Hell, his willingness to do this has intrigued me to the point where I may just have to buy one of his games, if only to support his decision! I suppose I’ll have to check out his selection and see what’s there to play…

They’re Watching You… (Book Review: Little Brother)

My good friend Wil Wheaton (yeah, we’ve never met.. or talked…) mentioned a captivating book he read a few months ago. What really caught my attention was that he handed the book off to his son because he thought it was something he could share with him. Having children myself, I decided to take a look and see what all the fuss was about. That book is Little Brother.

Little Brother is a book about a teenager caught up in global events that forever change his life. After a terrorist attack in his neighborhood, the Department of Homeland Security swoops in to save the day. What follows is a terrifying look into the future of our own country as privacy erodes and Big Brother takes over.

Cory Doctorow weaves a tale that is not only believable, but may be an eerie foreshadowing of real events. It is a glaring reminder that we, as citizens, must ensure that the government continues to serve us rather than control us.

I heartily recommend checking this book out. Cory has released Little Brother under a Creative Commons license and has it available as a free download on his website. I strongly encourage you to support Cory and buy a copy if you like the book. And if you like Cory’s work, his website has free downloads of other stories he has written.

Play with your Wii and get Fit!

The Wii is pretty popular these days. Nintendo has done an excellent job providing entertainment for just about anyone with this one device. Funnily enough, that includes fitness buffs. About 3 months ago (91 days, actually), Nintendo launched the Wii Fit.

The Wii Fit is one cool little device. It’s essentially a flat board, about 3 inches tall, packed full of electronics. Internally, the board is broken down into four quadrants, each with its own scale. When you stand on it, four scales weigh you simultaneously, yielding both an accurate total weight and a weight distribution that can be used to gauge your balance. Thus they named it the Wii Balance Board. Yeah.. Marketing.. They’re such geniuses.
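The math behind that balance readout is simple enough to sketch out. This is just my own illustration of the idea, mind you, not Nintendo’s actual code:

```python
# My own illustration of the idea -- not Nintendo's actual code. Given the
# four corner sensor readings, compute total weight and balance offsets.
def read_board(top_left, top_right, bottom_left, bottom_right):
    total = top_left + top_right + bottom_left + bottom_right
    # Positive = leaning right, negative = leaning left (fraction of weight)
    left_right = ((top_right + bottom_right) - (top_left + bottom_left)) / total
    # Positive = leaning toward the front sensors, negative = toward the back
    front_back = ((top_left + top_right) - (bottom_left + bottom_right)) / total
    return total, left_right, front_back

# A 180 lb person leaning slightly to the right:
print(read_board(40, 50, 40, 50))  # (180, 0.111..., 0.0)
```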


So, 3 months ago, I went out and bought one of these beasts. Yes, I stood in line, at midnight, just to make sure I got one. Not a bad idea, apparently, as they have become somewhat scarce these days. At any rate, I got one, and I started using it that morning. A lot has been said about the benefits of the Wii Fit, but I will tell you, from experience, that I’m damn happy I spent the $90 or so on it. I’ve only missed weighing myself once since I bought it.

What’s so special about this thing anyway? Well, I’ve done a lot of thinking about that, because I’ve fought with weight loss in the past and always failed miserably. In fact, for the last 3 years or so, I’ve neither gained nor lost a pound. And while that may sound good, consider that I spent about a year and a half hitting the gym 2-3 times a week, I tried dieting, and I tried casual exercise at home. Nothing seemed to work, and I was getting a tad frustrated. I wasn’t massively overweight, but my doctor did categorize me as obese, which means a BMI over 30. In fact, my BMI was almost 38 when I started using the Wii Fit.

There are those who will argue that BMI is a bad measurement, and I agree, to an extent. BMI is purely a calculation based on your body weight and height. It does not take into account other factors such as muscle versus fat, or activity level. In fact, taller people tend to have an unnaturally high BMI, purely because of how it is calculated. Regardless, the intention of BMI is to give a general idea of the optimum body weight for any given person.
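For the curious, the calculation itself is trivial. Here it is in a few lines of Python (the height and weight are just example numbers, not mine):

```python
# The standard BMI formula: weight in kilograms divided by height in meters
# squared; the 703 factor converts from pounds and inches.
def bmi(weight_lbs, height_inches):
    return 703.0 * weight_lbs / (height_inches ** 2)

# Example numbers (not mine): a 5'10" person at 265 lbs has a BMI of about 38.
print(round(bmi(265, 70), 1))  # 38.0
```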

Arguments aside, I wasn’t happy with my weight, or my BMI. I’m not the most active person in the world, and I don’t really enjoy exercise that much. But I had to do something, and the Wii Fit seemed, at least to me, to be a good idea. So far, it has worked out better than I ever hoped. I’ve dropped 35 pounds in 3 months, lowering my BMI to under 33. I feel fantastic, energetic, and a hell of a lot more confident. On the downside, though, I’m going to need a new wardrobe pretty soon.. Belts will only hold me over for so long. :)

So how did I do this? How did I lose so much weight in such a relatively short time? Well, first and foremost, I have to hand a lot of the credit to the Wii Fit. No, not because I use it to exercise, although that does help, but more because it tracks my weight. Seriously! Every morning I get up and weigh myself, and I immediately know where I stand. I know if I’ve slacked off too much, or if I’m on track to lose the weight I want to. It’s incredibly satisfying to look at the graphs every so often and see the curve of the line showing the weight you’ve lost.

I do about 20-25 minutes of exercise on the Wii Fit 3-4 times a week. My schedule has changed a little recently, so one day a week I’m usually on the Wii Fit for a little over 30 minutes. That’s it for the Wii Fit! The rest of it is on my own.

I’ve reduced my food intake by a lot, which is probably the hardest thing I’ve done. I love food. I’m not keen on gorging myself, but I absolutely love my wife’s cooking. I also love pizza and Chinese. I used to be able to eat half a pizza with no problem at all, so reducing my intake was difficult. I cut out soda and candy right from the start. Occasionally I’ll have a soda, but not often. Reducing the rest was a matter of spreading it out over the day. I eat smaller meals for breakfast, lunch, and dinner, and I usually have a snack or two during the day. That snack is anything from a handful of vegetables to something like a handful of pretzels or a small bowl of pudding. I don’t watch calories or fat that closely, but I am aware, somewhat, of what I’m taking in.

Next, I spend about 30 minutes a day walking, usually during the first half of my lunch break. I’ll head out of the office and walk around town for a while, covering about 1.5 miles. It’s a casual walk, but at a somewhat brisk pace. I don’t make any lengthy stops, mostly just pausing at street corners so I don’t become road pizza. Once in a while it rains and I miss that day’s walk, but that hasn’t happened too often, so I guess I’ve been lucky. If it rains for more than a day, my plan is to do laps up and down the parking garage.

Finally, I do a little exercise before bed, about a 5-10 minute workout routine. I spend about 5 minutes lifting barbells to strengthen my upper body, then I do a series of abdominal crunches, and finally I hold the plank for about 60 seconds. Occasionally I do an exercise I learned in the military called a butterfly, though I’ve only found it referenced online as “six-inch killers.” It’s pretty simple: just lie on your back with your hands underneath your buttocks. Lift your feet about six inches above the ground and hold them there for 10-30 seconds, depending on your fitness level. Lower them slowly, rest for a moment, and then start again. A variant is to hold both feet at six inches, then move one foot up to about 12 inches. Switch feet, bringing one down and one up, sort of in a kicking motion. Each “kick” counts as half a rep. Do about 10 reps, then rest. The idea is to do a small set of exercises just before bed, enough to get the heart pumping, but not enough to work up a real sweat.

Overall, I do roughly an hour’s worth of exercise a day. I usually take the weekend off, though I’m always watching what I eat, and I try to do my nightly routine every night. It’s working so far, and I truly feel great. I want to shave off another 50 or so pounds before I’m really satisfied, but I’ve definitely started moving in the right direction!

Programming *is* Rocket Science!

John Carmack is something of an idol to me. He’s an incredible programmer who has created some of the most advanced graphical games ever seen on the PC. He also dabbles in amateur rocketry with his rocketry company, Armadillo Aerospace, which I’ve written about before.

I joined the Amateur Rocketry mailing list a couple of years ago. The aRocket list is a great place to read about what’s going on in the amateur rocketry scene. The various rocket scientists on the list openly discuss designs, fuel mixtures, and a host of other topics. There’s also a lot of advice, both for those getting into the game and for those who have been at it a while.

Recently, John posted a note about the Rocket Racing League and some advice about the programming that controls vital components of the planes. Unfortunately, the mailing list archives require you to be a member of the list to view them, but I’ll include some snippets of his post here.

The test pilot for the rocket racing league project made the suggestion that we should not allow the computer to shut down the engine during the critical five to fifteen second period where the plane is at takeoff speed, but too close to the ground to make the turn needed to get back down on a runway. We currently shut the engine down whenever a sensor goes out of expected range, and there are indeed plausible conditions where this can happen even when the engine is operating acceptably, such as a pressure transducer line cracking from vibration. On the other hand, there are plausible conditions where you really do want the computer to shut the valves immediately, such as a manifold explosion blowing the chamber off.

Disregarding the question of whether it was a good idea or not, this seems a really straightforward thing to implement. However, I cautioned everyone that this type of modification has all the earmarks of something that will almost certainly cause some problems when implemented.
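Just to illustrate why it sounds so straightforward, here’s a rough sketch of the kind of guard being described. This is purely my own guess at the shape of the logic, with made-up numbers; it’s not Armadillo’s actual flight code:

```python
# Hypothetical sketch of the change being discussed -- NOT Armadillo's real
# flight code, just my guess at the shape of the logic, with made-up numbers.
CRITICAL_WINDOW_START = 5.0   # seconds after liftoff where an abort is unsurvivable
CRITICAL_WINDOW_END = 15.0

def should_shutdown(sensor_fault, catastrophic, seconds_since_liftoff):
    """Decide whether the computer should close the engine valves."""
    if not sensor_fault:
        return False
    in_window = CRITICAL_WINDOW_START <= seconds_since_liftoff <= CRITICAL_WINDOW_END
    if in_window and not catastrophic:
        # A cracked pressure transducer line shouldn't kill the engine at 50 feet...
        return False
    return True  # ...but a manifold explosion still should, window or not.
```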

Shutting off the engines on a regular plane is bad enough, but we’re talking about a full-blown rocket with wings here. I can imagine that a sudden loss of engines is enough to cause a good deal of stress for any pilot, but losing the engines just as the plane is taking off could be devastating. Of course, the engine exploding could be pretty devastating too.

We did implement it, and guess what? It caused a problem. We caught it in a static test and fixed it, and haven’t seen another problem with it since, but it still fell into the category of “risky to implement”. If we weren’t operating at a high testing tempo, I wouldn’t have done it. I certainly wouldn’t have done it if we only got one testing opportunity a year (of course, I wouldn’t undertake a project that only got one testing opportunity a year…).

Our flight control code really isn’t all that complicated, the change was only a few lines of code, and I’m a pretty good programmer. The exact nature of why I considered it a bit risky deals with internal details, but the point is that even fairly trivial sounding changes aren’t risk free to make. There are certainly some classes of changes that I make to our codebase regularly that I don’t bat an eyelash at, but you can’t usually tell the difference without intimate knowledge of the code.

I’ve found similar situations in my own programs. There are areas of the code that I’ll change knowing it will have no real effect on anything else, and then there are those areas where changes seem trivial but cause odd problems that come back to bite you later. Testing is, of course, the best way to find these problems, but testing isn’t always possible. Then again, I’m not writing code that could mean the difference between life and death for a pilot. Now *that* has to be some serious stress.

Many software guys do not have a reasonable gut check feel for the assessment of software changes in an aerospace project. My normal coding practice has over an order of magnitude more test cycles than Armadillo does physical tests, and Armadillo does over an order of magnitude more tests than most other aerospace companies. Things are fundamentally different at different orders of magnitude.

John’s team probably runs more tests than any other team out there. He has successfully married the typical programming cycle with aerospace engineering: they constantly make incremental improvements and then run out to test them. And as surprising as it sounds, it seems to cost them less to do this. By making incremental improvements, they can control, to some degree, the impact of each change on the system. What this means in the end is that they don’t spend an inordinate amount of time building a huge, complex system only to have it explode on the first test. Not that they haven’t had their share of failures, but theirs have been a bit less spectacular than some.

John also presented some additional info from his day job at Id Software.

As another cautionary tale, I recently had the entire codebase for our current Id Software project analyzed by a high end static analysis company. I was very pleased when they reported that our discovered defect rate was well under half the average that they find on codebases of comparable size. However, there were still over a hundred things that we could look at and say, “yes, that is broken”. Sure, most of them wouldn’t really be a problem, but it illustrates the inherent danger of large software solutions. The best defense, by far, is to be as small and simple as possible.

Small and simple is definitely best. The more complexity you add, the more bugs and odd behaviors pop up. Use the KISS principle!

Switching Gears…

Ok, so I did it. I made the switch. I bought a Mac. Or, more specifically, I bought a Macbook Pro.

Why? Well, I had a few reasons. Windows is the standard for most office applications, and it’s great for gaming, but I find it a real pain to code in. I’m not talking about code for Windows applications, I’m talking about code for web applications. Most of my code is Perl and PHP, and I really have no interest in fighting with Windows to get a stable development platform for those. Sure, I can remotely access the files I need, but then I’m tethered to an Internet connection. I had gotten around this (somewhat) by running Linux on my Windows machine via VirtualBox. It worked wonderfully, but it’s slower that way, and there are still minor problems with accessibility, things not working, etc.

OSX seemed to fit the bill, though. By default, it comes with Apache and PHP, you can install MySQL easily, and it’s built on top of BSD. I can drop to a terminal prompt and interact with it the same way I interact with a Linux machine. In fact, almost every standard command I use on my Linux servers is already on my Macbook.
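Getting a local web server going took all of a minute. If I remember right, on Leopard it’s just a matter of uncommenting one line in Apache’s config and restarting:

```
# In /etc/apache2/httpd.conf on Leopard, uncomment this line:
LoadModule php5_module libexec/apache2/libphp5.so

# Then restart Apache:
#   sudo apachectl restart
```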

Installing Apple’s XCode developer tools gives me just about everything else I could need, including a free IDE! This particular IDE is more suited to C++, Java, Ruby, Python, and Cocoa, but it’s free, and that’s nothing to scoff at. I have also been using a trial of Komodo, and I’m leaning towards buying myself a copy, though $295 is steep.

What really sold me on a Mac is the move to Intel processors and the Boot Camp software. I play games, and the Mac doesn’t have the widest library of them, so having a Windows machine available is a must. Thanks to Boot Camp, I can keep playing games while keeping my development platform as well. Now I have OSX as my primary OS and a smaller Boot Camp partition for playing games. With the nVidia GeForce card in this beast, as well as a fast processor and 2GB of RAM, I’m set for a while..

There are times, though, when I’d like to have Windows apps at my fingertips while I’m in OSX. For that, I’ve tried both Parallels and VMware Fusion. Parallels is nice, and it’s been around for a while. It seems to work really well, and I had no real problems trying it out. VMware Fusion 2 is currently in beta, and I installed that as well. I’m definitely leaning towards VMware, though, because I’ve used their products in the past, and they really know virtual machines. Both programs have a nifty feature that lets you run Windows apps in such a way that they seem to be running in OSX. In Parallels it’s called Coherence, and in VMware it’s called Unity. Neat features!

So far I’ve been quite pleased with my purchase. The machine is sleek, runs fast, and allows me more flexibility than I’ve ever had in a laptop. It does run a bit hot at times, but that’s what lapdesks are for.. :)

So now I’m an Apple fan… I’m sure you’ll be seeing posts about OSX applications as I learn more about my Mac. I definitely recommend checking them out if you’ve never used one. And if you used one back in the pre-OSX days, check them out now. I hated the old Mac OS, but OSX is something completely different, definitely worth a second look.

Steal the Net’s Identity

Imagine this. You wake up in the morning, go about your daily chores, and finally sit down to surf the web, read some news, check your mail, etc. At some point, you decide to log in to your bank to check your accounts. You get there, log in, and you’re greeted with a page explaining that the site is down for maintenance. Oh well, you’ll come back later. In the meantime, someone drains your account using the username and password that you just graciously handed them, because the site you went to was not where you intended to go.

Sound familiar? Yeah, I guess it sounds a bit like a phishing attack, though a tad more sophisticated. I mean, you did type in the address for the bank yourself, didn’t you? It’s not like you clicked on a link in an email or something. But in the end, you arrived at the wrong site, cleverly designed, and gave them your information.

So how the hell did this happen? How could you end up at the wrong site when you typed in the address yourself, and your computer has all the latest in virus scanning, firewalling, etc.? You spelled it right, too! It’s almost as if someone took over the bank’s computer!

Well, they did. Sort of. But they did it without touching the bank’s computers at all. They used the DNS system to inject a false address for the bank website, effectively re-directing you to their site. How is this possible? Well, it’s a flaw in the DNS protocol itself that allows this. The Matasano Security blog posted about this on Monday, though the post was quickly removed. You may still be able to see the copy that Google has cached.

Let me start from the beginning. On July 8th, Dan Kaminsky announced that he had discovered a flaw in the DNS protocol and had been working, in secret, with vendors to release patches for the problem. This was a huge coordinated effort, one of the first of its kind. In the end, patches were released for BIND, Microsoft DNS, and others.

The flaw itself is interesting, to say the least. When a user requests an address for a domain, the request usually goes to a local DNS cache for resolution. If the cache doesn’t know the answer, it follows a set of rules that eventually lead it to ask a server that is authoritative for that domain. When the cache asks the authoritative server, the packet contains a Query ID (QID). Since caches usually have multiple requests pending at any given time, the QID distinguishes which response matches which request. Years ago, there was a way to spoof DNS by guessing the QID. This was pretty simple to do because QIDs were sequential: the attacker could guess the QID and, if they could get their response back to the cache faster than the authoritative server could, effectively hijack the domain.
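To make that concrete, here’s roughly what a forged response would look like, sketched with the scapy packet library. The addresses, ports, and IDs are all made up for illustration; this is a conceptual sketch, not a recipe:

```python
# Conceptual illustration only -- addresses, ports, and IDs are made up.
# The attacker races the real authoritative server: if this forged packet
# arrives first and the Query ID matches, the cache accepts the bogus answer.
from scapy.all import IP, UDP, DNS, DNSQR, DNSRR, send

forged = (
    IP(src="192.0.2.1", dst="192.0.2.53")    # spoofed "authoritative" source -> cache
    / UDP(sport=53, dport=33000)             # port the cache's query came from
    / DNS(
        id=0x1234,                           # the guessed Query ID
        qr=1, aa=1,                          # flags: "this is an authoritative answer"
        qd=DNSQR(qname="bank.example."),
        an=DNSRR(rrname="bank.example.", type="A", ttl=86400, rdata="203.0.113.66"),
    )
)
send(forged)  # requires root privileges and scapy installed
```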

So, vendors patched this flaw by randomizing the QID. Of course, if you have enough computing power, it’s still possible to guess the QID by cracking the random number generator. Difficult, but possible. However, the computing power to do this in a timely manner wasn’t readily available back then. So, 16-bit random QIDs were considered secure enough.

Fast forward to 2008. We have the power, and almost everyone with a computer has it. It is now possible to crack something like this in just a few seconds. So, this little flaw rears its ugly head once again. But there’s a saving grace here. When you request resolution for a domain name, you also receive additional data such as a TTL. The TTL, or Time To Live, defines how long an answer should be kept in the cache before asking for resolution again. This mechanism greatly reduces the amount of DNS traffic on the network because, in many cases, domain names tend to use the same IP address for weeks, months, or even years. So, if the attacker is unsuccessful in his initial attack, he has to wait for the TTL to expire before he can try again.

There was another attack, back in the day, that allowed an attacker to overwrite entries in the cache regardless of the TTL. As I mentioned before, a DNS response can contain additional information, some of it in the form of “glue” records. These are extra answers, included in the original response, that help out the requester.

Let’s say, for instance, that you’re looking for the address of google.com. You ask your local cache, which doesn’t currently know the answer. Through a process known as recursion, the cache works its way down to the servers responsible for the .com domains. Those servers respond with the nameserver responsible for google.com, such as ns1.google.com. The cache now needs to contact ns1.google.com, but it doesn’t know the address of that server either, so it would have to make additional requests to determine it. However, the response already includes a glue record that gives the cache this information, without the cache asking for it. In a perfect world, this is wonderful: it makes the resolution process faster and reduces the amount of DNS traffic required. Unfortunately, this isn’t a perfect world. Attackers could exploit this by including glue records for domains they were not authoritative for, effectively injecting records into the cache.
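So a referral response from the .com servers looks something like this (layout simplified, glue address made up):

```
;; QUESTION SECTION:
;google.com.                  IN  A

;; AUTHORITY SECTION (the referral):
google.com.        172800    IN  NS  ns1.google.com.

;; ADDITIONAL SECTION (the glue -- unasked-for, but helpful):
ns1.google.com.    172800    IN  A   192.0.2.10
```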

Again, vendors to the rescue! The concept of a bailiwick was introduced. In short, if a cache was looking for the address of google.com, and the response included the address for yahoo.com, it would ignore the yahoo.com information. This was known as a bailiwick check.

Ok, we’re safe now, right? Yeah, no. If we were safe, there wouldn’t be much for me to write about. No, times have changed… We now have the power to predict 16-bit random numbers, overcoming the QID fix. But TTLs save us, right? Well, yes, sort of. But what happens if we combine these two attacks? Interesting things, actually.

What happens if we look up a nonexistent domain? Well, you get a response of NXDOMAIN, of course. Sure, but what happens in the background? The cache goes through the exact same procedure it would for a valid domain. Remember, the cache has no idea the domain doesn’t exist until it asks. Once it receives that NXDOMAIN, it will cache the response for a period of time, usually defined by the owner of the zone itself. But since the cache does go through the same resolution process, there exists an attack vector that can be exploited.

So let’s combine the attacks. We know we can guess the QID, given enough tries. And we know we can inject glue records for other hosts, provided they are within the same domain the response is for. So the attacker asks the cache for a stream of made-up subdomains of the target, say random1234.google.com. None of them are in the cache, so each one triggers a fresh query, and a fresh race, with no TTL to wait out. Win just one race with a correctly guessed QID, and the forged response’s glue records poison the cache: the domain is hijacked.

So now what? We already patched these two problems! Well, the short-term answer is another patch. The new patch adds more randomness to the equation in the form of the source port. Now, when a DNS server makes a request, it randomizes both the QID and the source port, and the attacker needs to guess both in order to succeed. That makes it roughly a 32-bit number that needs to be guessed, rather than a 16-bit one, so it takes a lot more effort on the part of the attacker. This helps, but, and this is important, the attack is still possible given enough time. This is NOT a permanent fix.
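The difference is easy to see with a toy simulation of blind guessing (ignoring, for simplicity, that a real attacker fires off many forged packets per race):

```python
# Toy model: how many blind guesses, on average, before an off-path attacker
# matches the resolver's secret values?
import random

def guesses_needed(bits):
    secret = random.getrandbits(bits)
    guesses = 1
    while random.getrandbits(bits) != secret:
        guesses += 1
    return guesses

print(guesses_needed(16))    # QID only: ~65,536 tries on average
# print(guesses_needed(32))  # QID + source port: ~4.3 billion -- don't wait up
```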

That’s the new attack in a nutshell. There may be additional details I’m not aware of; Dan will be presenting the full story at the Black Hat conference in August. In the meantime, the message is: patch your server! Not every server is vulnerable; some, such as djbdns, have randomized source ports for a long time. But many are. If in doubt, check with your vendor.

This is pretty big news, and it’s pretty important. Seriously, this is not a joke. Check your servers and patch. Proof of concept code is in the wild already.

Hide that data…

Data security is a pretty hot topic these days, especially when it comes to portable data. In fact, recent reports put airport laptop theft in the tens of thousands per week. Most, if not all, of these laptops have sensitive data on them, whether sensitive to the user or to the user’s employer. And to make matters worse, most of these laptops lack anything beyond basic security, such as a Windows logon password.

But is security that much of an issue? Is it that difficult to effectively secure the data on a laptop, or any other computer for that matter? Well, it depends on the type of security we’re talking about. There are significant differences between securing data on a machine that is powered off and securing it on a machine that is powered on and processing that data. In the latter case, firewalls, anti-virus software, and good programming practices help shield the data from nosy intruders.

If your machine is powered off and an attacker can gain physical access, is there any way to protect the data? The answer is actually quite simple. There exists a product that can encrypt the data on your machine, either in chunks or as a whole. In fact, with the latest version, you can even have it deploy a decoy operating system, just in case you’re being tortured for your password.. What is this wondrous software, and how much will it cost you? It’s called TrueCrypt, and it’s FREE.

TrueCrypt is a data encryption tool that runs on Windows, Mac OS X, and Linux.  In fact, if you’re a decent programmer, you can probably get it to work on most any operating system as the source is freely available.  The TrueCrypt website highlights the following as main features:

  • Creates a virtual encrypted disk within a file and mounts it as a real disk.
  • Encrypts an entire partition or storage device such as USB flash drive or hard drive.
  • Encrypts a partition or drive where Windows is installed (pre-boot authentication).
  • Encryption is automatic, real-time (on-the-fly) and transparent.
  • Provides two levels of plausible deniability, in case an adversary forces you to reveal the password:
    1) Hidden volume (steganography) and hidden operating system.
    2) No TrueCrypt volume can be identified (volumes cannot be distinguished from random data).
  • Encryption algorithms: AES-256, Serpent, and Twofish. Mode of operation: XTS.

There is a small amount of overhead when using encryption, but for most business applications, that’s an acceptable sacrifice for the security gained. Even without the use of hidden volumes or decoy operating systems, TrueCrypt offers a safe, secure way to protect your data. And, if you so choose, you can move TrueCrypt volumes between computers and even operating systems, such as on a USB flash drive, while maintaining compatibility. In fact, I use this feature on a daily basis. I have a small 1 Gig USB flash drive with a TrueCrypt partition on it where I store some personal information, such as a copy of portable Thunderbird. Included on the USB drive, in an unencrypted area, is a copy of TrueCrypt for Windows, Mac, and Linux. Thus, if I ever need to mount the drive on a system without a copy of TrueCrypt, I’ve brought my own.

TrueCrypt 6.0 was released over the July 4th holiday. This latest release adds some great new features. Parallel encryption and decryption was added, meaning it will use all of the processors (or cores) on a multi-processor system, allowing TrueCrypt to run substantially faster on such machines. Also added was the ability to create and run hidden, or decoy, operating systems. Hopefully I’ll never find myself in a situation where such a decoy is needed, but perhaps James Bond will find the feature useful. A number of minor enhancements and bug fixes were made as well. The current version history can be found here, and you can download the latest version here.

TrueCrypt is a wonderful tool, even for personal data protection.  I recommend looking into it, and even integrating it into your everyday life.  It’s a small change, barely noticeable for most, but the security benefits are staggering.  Just don’t forget your password, ok?