Windows .ANI Vulnerability

Another day, another vulnerability… This time it’s animated cursors. You know, those crazy animated cursors Microsoft included in one of their Plus! packs back in the day?

Well, it seems that there’s a stack buffer overflow in the way they’re handled by the OS. In a nutshell, when Windows copies the animated cursor data into memory, it doesn’t properly validate the size of the data being copied. The result is that adjacent stack memory gets overwritten, which an attacker can use to run arbitrary code.

The Zero-day Emergency Response Team (ZERT) has a pretty good writeup on their site about the exploit, as well as a patch to resolve the problem. This is a pretty big security issue, so I recommend at least checking out the info on their site.

This vulnerability affects Windows 98, 2000, XP, Server 2003, and Vista. The Internet Storm Center warns that older, unsupported versions of Windows, such as Windows 95 and ME, are likely affected as well. Neither ZERT nor Microsoft is likely to release a patch for Windows 95 or ME. The Internet Storm Center also has a nice matrix showing which mail clients are vulnerable.

Microsoft has released an out-of-cycle patch for this vulnerability. You can find the relevant files on their advisory page, bulletin MS07-017. Patches for Windows 2000, XP, Server 2003, and Vista are available. If you still use Windows 98, the ZERT patch is your only option.

Update : eEye released a patch for this vulnerability back on March 30th. However, that patch only ensures that .ANI files are loaded from the SystemRoot and nowhere else. While this blocks most exploits, the system is still vulnerable if an attacker can somehow write files to the SystemRoot.

Please take special note : This vulnerability is being actively exploited in the wild. It is a serious remote code execution flaw that can lead to your computer being compromised. Please make sure you have an anti-virus program installed and up-to-date. And remember, your first line of defense is you. Be responsible, know the risks, install the patches, and keep yourself safe.

Review – EA Replay (PSP)

Anticipation : 7
Expectation : 7
Initial Reaction : 3
Overall : 5
Genre : Various

 

EA Replay is a collection of old “classic” games, including Wing Commander, Syndicate, Budokan, Road Rash, B.O.B., Jungle Strike, Desert Strike, Mutant League Football, Haunting Starring Polterguy, Ultima, and Virtual Pinball, all of which I cover below.

In addition to these classic games, EA decided to add some extra content such as multiplayer, collectible cards, and mid-game saving.

Unfortunately, this collection falls well short of being fun and entertaining.  My primary interest was the Wing Commander and Syndicate games.  I remember playing these on my PC and thoroughly enjoying them.  In fact, the Wing Commander series is still one of my all time favorites.

It’s not that the games don’t live up to present day expectations.  I’m realistic, I know that these aren’t next-gen multi-million dollar megahit games.  I realize we’re not talking about the latest in graphics and gameplay.  But I do expect them to play the way they did back when they were new games.

Wing Commander falls way short of this goal.  The WC games included are apparently the SNES versions.  The controls are just too quick!  It’s extremely hard to identify and target incoming ships and the controls are confusing.  Unfortunately, this killed the entire experience for me as I was very much looking forward to playing WC again.

Budokan and Syndicate are a little better.  For the most part, they’re what I remember from years past, although the Syndicate they included was the SNES version.  The gameplay seems to be identical to the originals and while not the best games in the collection, they’re not the worst.

The rest of the collection is actually pretty new to me.  I’ve heard of Road Rash, but never truly played it.  After taking a look, it reminds me of Pole Position, but with a bike.  The controls are responsive and the games seem to play pretty well.

B.O.B. is pretty fun to play.  I vaguely remember hearing about this game, but never played it.  B.O.B. is a side-scrolling platformer game.  It’s pretty neat, actually, and I had some fun playing it.  Worth checking out.

Jungle Strike and Desert Strike are pretty fair games.  I’m not a huge fan of games like this, so I don’t have much to say.  They’re worth playing if you’re a fan of helicopter shooters, but if not, avoid them.

Mutant League Football is actually pretty fun.  Apparently this was a play on the Madden series of the day and they did a pretty good job with it.  Definitely worth a look.

Haunting Starring Polterguy is a very odd game.  The idea is to scare a family out of their home by screaming, making noise, and haunting items.  It’s a fun game to try out, but I don’t think it really stands the test of time.

Ultima is just plain horrible.  Again, this is not the original Ultima series from the PC, but a port of the SNES version.  Just avoid it.

And finally, Virtual Pinball.  Not much to say here, it’s a pinball game.  Fun for a little bit, not much beyond that.

 

Overall I was extremely disappointed with the collection.  If I had known that most of these were SNES ports, I would have passed on the collection altogether.  While I have had some fun playing Road Rash, B.O.B., and Mutant League Football, the game has mostly gathered dust.

 

If you really need that classic-gaming fix, however, pick up the Sega Genesis Collection instead.  I’ll be reviewing that in the near future.  Definitely worth looking into.

phpTodo 0.8 Beta Released

A new version of phpTodo, version 0.8 Beta, was released today.  It’s been almost six months since the last release, mostly due to lack of time.  My primary goals for this release were to add Atom feed support and to get all the known bugs fixed, and I feel I accomplished both.

I think an official 1.0 release is imminent, assuming I have time to work on the program.  I have a few features I’d like to add before 1.0 if I can.  If they do get added, a 0.9 gamma version will be released before 1.0 becomes official.

After the 1.0 release, I’d like to get group support added.  In addition, I’m thinking about switching from single-category tasks to tags, which would allow a single todo item to be placed into several categories at the same time.  Feed support will be updated as well, keeping in line with the current feature set.

 

Overall, I’m quite happy with this project.  It’s helped me out in numerous ways, organizing my personal todo lists as well as giving me the opportunity to work on an open-source project.  I’d love to hear some feedback concerning this project, especially if you’re using it on a daily basis.  I’m definitely open to suggestions for improvements and I’d like to get some additional CSS layouts to include with the distribution.  You can leave any comments you may have right here on this blog entry.

Thanks to everyone who has already sent me suggestions and bug reports.  I hope to hear from more of you soon!  If you’re interested in trying out phpTodo, check out the demo site.

Hard drive failure reports

FAST ’07, the File and Storage Technologies conference, was held from February 13th through the 16th. During the conference, a number of interesting papers were presented, two of which I want to highlight. I learned of these papers through posts on Slashdot rather than actually attending the conference. Honestly, I’m not a storage expert, but I find these studies interesting.

The first study, “Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?” was written by a Carnegie Mellon University professor, Garth Gibson, and a recent PhD graduate, Bianca Schroeder.

This study compared the manufacturers’ specified MTTF (mean time to failure) and AFR (annual failure rate) figures with real-world hard drive replacement rates. The paper is heavy on statistical analysis, which makes it a rough read for some. However, if you can wade through all of the statistics, there is some good information here.

Manufacturers generally list MTTF ratings of 1,000,000 to 1,500,000 hours. AFR is calculated by dividing the number of hours in a year (8,760) by the MTTF, which puts the AFR between roughly 0.58% and 0.88%. In a nutshell, this means you have roughly a 0.6 to 0.9% chance of your hard drive failing in any given year.
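
If you want to double-check that arithmetic, a quick back-of-the-envelope run through bc (my own sketch, not something from the paper) gives the same numbers:

echo "scale=6; 8760 / 1000000 * 100" | bc    # roughly 0.88% per year at a 1,000,000 hour MTTF
echo "scale=6; 8760 / 1500000 * 100" | bc    # roughly 0.58% per year at a 1,500,000 hour MTTF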

As explained in the study, determining whether a hard drive has failed or not is problematic at best. Manufacturers report that up to 40% of drives returned as bad are found to have no defects.

The study concludes that real world usage shows a much higher failure rate than that of the published MTTF values. Also, the failure rates between different types of drives such as SCSI, SATA, and FC, are similar. The authors go on to recommend some changes to the standards based on their findings.

The second study, “Failure Trends in a Large Disk Drive Population”, was presented by a number of Google researchers: Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz André Barroso. This paper is geared toward finding trends in drive failures. Essentially, the goal is to create a reliable model to predict a drive failure so that the drive can be replaced before essential data is lost.

The researchers used an extensive database of hard drive statistics gathered from the 100,000+ hard drives deployed throughout their infrastructure. Statistics such as utilization, temperature, and a variety of SMART (Self-Monitoring, Analysis and Reporting Technology) signals were collected over a five-year period.

This study is well written and can be easily understood by non-academics and those without statistical training. The data is laid out clearly and each parameter studied is explained well.

Traditionally, temperature and utilization have been pinpointed as the root causes of most failures. However, this study shows only a very small correlation between failure rates and these two parameters. In fact, failure rates due to high utilization seemed to be highest for drives under one year old, and stayed within 1% of the rates for low-utilization drives. It was only near the end of a drive’s expected lifetime that the failure rate due to high utilization jumped up again. Temperature was even more of a surprise: low-temperature drives failed more often than high-temperature drives until about the third year of life.

The report basically concludes that a reliable model of failure prediction is not possible at this time, because no single parameter reliably indicates an imminent failure. SMART signals were useful in indicating impending failures, and most drives failed within 60 days of the first reported errors. However, 36% of their failed drives reported no errors at all, making SMART a poor overall predictor.
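
If you’re curious what SMART is reporting on your own drives, the smartmontools package (my suggestion, not something either paper covers) can pull the same kinds of signals the Google study analyzed:

smartctl -H /dev/sda    # overall health assessment
smartctl -A /dev/sda    # vendor SMART attributes (reallocated sectors, scan errors, etc.)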

Unfortunately, neither of these studies elaborated on the manufacturers or models of the drives used. This is likely due to professional courtesy, and a lack of interest in being sued for defamation. While these studies will doubtless be useful to those designing large-scale storage networks, manufacturer-specific information would be a great help.

For me, I mostly rely on Seagate hard drives. I’ve had very good luck with them, having had only a handful fail on me over the past few years. Maxtor used to be my second choice for drives, but they were acquired by Seagate at the end of 2005. I tend to stay away from Western Digital drives having had several bad experiences with them in the past. In fact, my brother had one of their drives literally catch fire and destroy his computer. IBM has also had some issues in the past, especially with their Deskstar line of drives which many people nicknamed the “Death Star” drive.

With the amount of information stored on hard drives today, and the even larger amounts to come, hard drive reliability is a concern for many vendors. It should be a concern for end-users as well, although end-users are not likely to take it seriously. Overall, these two reports are an excellent overview of the current state of drive reliability and the trends seen today. Hopefully drive manufacturers can use them to design changes that increase reliability and facilitate earlier detection of impending failures.

Review – Metal Gear Solid : Portable Ops (PSP)

Anticipation : 9
Expectation : 9
Initial Reaction : 10
Overall : 9
Genre : Third-Person Action/Adventure

I was first introduced to Metal Gear on the PlayStation 2 console. The gameplay and story were incredibly engaging and I thoroughly enjoyed the experience. Based on that experience, I purchased Metal Gear Solid for the Game Boy. The graphics were horrible compared to the PlayStation, but I expected that. The game itself was pretty good.

Fast forward to the PSP launch and Metal Gear Acid. While I was caught a little by surprise by the card-based gameplay, I was pretty satisfied overall with the experience. In fact, I plan on getting Metal Gear Acid 2 at some point in the future.

I picked up a copy of Metal Gear Solid: Portable Ops after reading up on all the hype. I was pretty excited about the game prior to its release and couldn’t wait to get my hands on it. My enthusiasm was not in vain; MGS:PO is an incredible game.

The game opens with Snake being captured by members of his old unit, FOX. After rescuing another prisoner and escaping from the prison, Snake starts on a mission to save the world. Again. Think Jack Bauer, but cooler.

General gameplay is similar to what previous MGS games provided. Sneaking around, attacking from hidden positions, sneaking up on unsuspecting enemies… It’s all there. It seems that Konami spared nothing when preparing this game for the PSP. The graphics are simply incredible, the controls are almost perfect, and the gameplay is amazing.

But wait, there’s more! You can recruit additional troops by capturing them. Each recruit comes with unique skills that assist you in accomplishing your goals. You can place each recruit into special units that give you additional abilities within the game. The spy unit gathers intelligence about locations you visit in the game. The tech unit manufactures new technology for combating the enemy. The medical unit heals your injured troops and sometimes produces useful items.

Multiplayer has a number of modes that you can take part in. Cyber Survival pits your team against other teams around the world. It’s mostly hands-off, with outcomes determined by a central server, though loading up your troops with advanced gear can help make your team a winner. During these missions, teams can encounter unique characters or capture prisoners of war, which are brought back to your game.

There are also other multiplayer modes such as deathmatch, team deathmatch, and capture. These games can be played in either Real or Virtual mode. The difference between these modes is rather simple: in Real mode, if your character is killed, he’s permanently lost from your game, while Virtual mode allows you to play to your heart’s content without the chance of losing a character forever.

MGS:PO is the first game I’ve played that has Game Sharing. Game Sharing is a method by which the game can be played with other PSP owners who don’t have their own copy. They download a client from your PSP and then join in the multiplayer fun.

Overall, MGS:PO is an incredible game. The gameplay, story, and controls are all top notch. Definitely check this one out, it’s worth it.

Book Review : 19 Deadly Sins of Software Security

Security is a pretty hot topic these days. Problems range from zombie computers created through viral attacks to targeted intrusions against high-visibility systems. In many cases, insecure software is to blame for the breach. With the increasing complexity of today’s software and the increased presence of criminals online, security is of the utmost importance.

19 Deadly Sins of Software Security was written by a trio of security researchers and software developers. The original list of 19 sins was developed by John Viega at the behest of Amit Yoran, who was then the Director of the Department of Homeland Security’s National Cyber Security Division. The list details 19 of the most common security flaws found in computer software.

The book details each flaw and the potential security risks posed when the flaw exists in your code. Examples of flawed software are presented to provide insight into the seriousness of these flaws. The authors also detail ways to find these flaws in your code, and steps to prevent the problem in the future.

Overall, the book covers most of the commonly known security flaws, including SQL Injection, Cross-Site Scripting, and Buffer Overruns. There are also a few lesser-known flaws such as Integer Overflows and Format String problems.

The authors recognize that software flaws can also consist of conceptual and usability errors. For instance, one of the sins covered is the failure to protect network traffic. While the book goes into greater detail, this flaw generally means that the designer did not take into account the open network and failed to encrypt important data.

The last chapter covers usability. The authors detail how many applications leave too many options open for the user while making dialogs cryptic in nature. Default settings are either set too loose for proper security, or the fallback mechanisms used in the event of a failure cause more harm than good. As the Microsoft Security Response Center put it, “Security only works if the secure way also happens to be the easy way.”

This book is great for both novice and seasoned developers. As with most security books, it covers much of the same material, but is presented in new ways. Continual reminders about security can only help developers produce more secure code.

[Other References]

10 Immutable Laws of Security Administration

10 Immutable Laws of Security

Michael Howard’s Weblog

John Viega’s HomePage

Linux Software Raid

I had to replace a bad hard drive in a Linux box recently and I thought perhaps I’d detail the procedure I used.  This particular box uses software raid, so there are a few extra steps to getting the drive up and running.

Normally when a hard drive fails, you lose any data on it.  This is, of course, why we back things up.  In my case, I have two drives in a raid level 1 configuration.  There are a number of raid levels that provide varying degrees of redundancy (or none at all, in the case of level 0).  The raid levels are as follows (copied from Wikipedia):

  • RAID 0: Striped Set
  • RAID 1: Mirrored Set
  • RAID 3/4: Striped with Dedicated Parity
  • RAID 5: Striped Set with Distributed Parity
  • RAID 6: Striped Set with Dual Distributed Parity

There are additional raid levels for nested raid as well as some non-standard raid levels.  For more information on those, see the Wikipedia article referenced above.

 

The hard drive in my case failed in kind of a weird way.  Only one of the partitions on the drive was malfunctioning.  Upon booting the server, however, the BIOS complained about the drive being bad.  So, better safe than sorry, I replaced the drive.

Raid level 1 is a mirrored raid.  As with most raid levels, the hard drives being raided should be identical.  It is possible to use different models and sizes in the same raid, but there are drawbacks such as a reduction in speed, possible increased failure rates, wasted space, etc.  Replacing a drive in a mirrored raid is pretty straightforward.  After identifying the problem drive, I physically removed the faulty drive and replaced it with a new one.

The secondary drive was the failed drive, so this replacement was pretty easy.  In the case of a primary drive failure, it’s easiest to move the secondary drive into the primary slot before replacing the failed drive.

Once the new drive has been installed, boot the system up and it should load up your favorite Linux distro.  The system should boot normally with a few errors regarding the degraded raid state.
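
If you want to see exactly which arrays are running degraded, /proc/mdstat will show you (just a sanity check I like to do, not a required step) :

cat /proc/mdstat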

After the system has booted, log in and use fdisk to partition the new drive.  Make sure you set the type IDs on the raid partitions to Linux raid autodetect (fd).  When finished, the partition table will look something like this :

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1   *           1          26      208813+  fd  Linux raid autodetect
/dev/hdb2              27        3850    30716280   fd  Linux raid autodetect
/dev/hdb3            3851        5125    10241437+  fd  Linux raid autodetect
/dev/hdb4            5126       19457   115121790    f  W95 Ext'd (LBA)
/dev/hdb5            5126        6400    10241406   fd  Linux raid autodetect
/dev/hdb6            6401        7037     5116671   fd  Linux raid autodetect
/dev/hdb7            7038        7164     1020096   82  Linux swap
/dev/hdb8            7165       19457    98743491   fd  Linux raid autodetect
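
As an alternative to partitioning by hand, you can copy the partition table straight from the surviving drive with sfdisk.  I’m assuming here that the good drive is /dev/hda, so double-check the device names before running it :

sfdisk -d /dev/hda | sfdisk /dev/hdb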

Once the partitions have been set up, you need to put a filesystem on each of the data partitions.  This is a pretty painless process, depending on your filesystem of choice.  I happen to be using ext3, so I use the mke2fs program to format the partitions.  To format an ext3 partition, use the following command (this command, as well as the commands that follow, needs to be run as root, so be sure to use sudo) :

mke2fs -j /dev/hdb1
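
The same command works for the rest of the data partitions.  A small loop like this covers them all (the partition numbers come from my table above, so adjust them to match your own layout) :

for part in 2 3 5 6 8; do mke2fs -j /dev/hdb${part}; done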

Once all of the partitions have been formatted, you can move on to creating the swap space.  This is done using the mkswap program as follows :

mkswap /dev/hdb7

Once the swap partition has been formatted, activate it so the system can use it.  The swapon command achieves this goal :

swapon /dev/hdb7
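
To verify that the new swap space is actually in use, swapon can print a summary :

swapon -s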

And finally, you can add the new partitions to their raid arrays using mdadm.  mdadm is a single command with a plethora of uses.  It builds, monitors, and alters raid arrays.  To add a partition to an array, use the following :

mdadm /dev/md1 --add /dev/hdb1
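
The same --add needs to be repeated for each degraded array, pairing each new partition with its md device.  If you want to confirm that an array accepted its new member and has started rebuilding, mdadm can show you the details (adjust the md device to match your setup) :

mdadm --detail /dev/md1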

And that’s all there is to it.  If you’d like to watch the array rebuild itself, about as much fun as watching paint dry, you can do the following :

watch cat /proc/mdstat

Software raid has come a long way and it’s quite stable these days.  I’ve been happily running it on my Linux machines for several years now.  It works well when hardware raid is not available, or as a cheaper alternative.  I’m quite happy with the performance and reliability of software raid and I definitely recommend it.

Godshell Toaster Wiki Open

I’m pleased to announce that the Godshell Toaster Wiki is now open for editing.

This wiki is intended to be a complete source of information for the qmail toaster I put together several years ago. This particular toaster uses Pawel Foremski’s excellent qmail-spp patch to allow on-the-fly modifications of the qmail server. With this toaster, a server administrator can write small shell scripts to alter the behavior of the server with minimal programming knowledge.

I have spent a considerable amount of time compiling the information that currently exists in the wiki and will continue to add and edit data in the future. Please feel free to take a look at the site and contribute!

Carmack on the PS3 and 360

John Carmack, the 3D game engine guru from id Software and a game developer I hold in very high regard, and Todd Hollenshead, id’s CEO, were recently interviewed by Game Informer. Carmack recently received a Technology Emmy for his work and innovation on 3D engines, a well-deserved award.

I was a bit surprised while reading the interview. Carmack seems to be a pretty big believer in DirectX these days, and thinks highly of the Xbox 360. On the flip side, he’s not a fan of the PS3’s asymmetric CPU and thinks Sony has dropped the ball when it comes to tools. I never realized that Carmack was such a fan of DirectX; he used to tout OpenGL so highly.

Todd and Carmack also talked about episodic gaming. Their general consensus seems to be that episodic gaming just isn’t there yet: by the time you get the first episode out, you’ve essentially completed all of the development and spent the capital to make the game in the first place, so shipping it in episodes doesn’t buy you much.

Episodic games seem like a great idea from the outside, but perhaps they’re right. Traditionally, the initial games have sold well, but expansion packs don’t. Episodic gaming may be similar in nature with respect to sales. If the content is right, however, perhaps episodes will work. But then there’s the issue of release times. If you release a 5-10 hour episode, when is the optimal time to release the next one? You’ll have gamers who play the entire episode on the day it’s released and then get bored waiting for more. And then there are the gamers who take their time and finish the episode in a week or two. If you release too early, you upset the people who don’t want to pay for content constantly, while waiting too long may cause the bored customers to lose interest.

The interview covered a few more areas such as DirectX, Quakecon, and Hollywood. I encourage you to check it out, it makes for good reading!

iPhone… A revolution?

So the cat’s out of the bag. Apple is already in the computer business, the music business, the video/TV business, and now they’re joining the cell phone business. Wow, they’ve come pretty far in the last 7 years alone.

So what is this iPhone thing anyway? Steve says it’s going to revolutionize phones, and that it’s 5 years ahead of the current generation. So does it really stack up? Well, since it’s only a prototype at this point, that’s a little hard to say. The feature set is impressive, as was the demonstration given at Macworld. Most of the reviews I’ve read have been pretty positive too.

So let’s break this down a little bit and see what we have. The most noticeable difference is the complete and total lack of a keypad or keyboard. In fact, there are a grand total of four buttons on this thing, five if you count up/down volume as two. And only one of them is on the actual face of the device. This may seem odd at first, but the beauty here is that any application developed for the iPhone can arbitrarily create its own buttons. How? Why?

Well, the entire face of the phone is one giant touchscreen. In fact, it’s a multi-touch screen meaning that you can touch multiple points on the screen at the same time for some special effects such as zooming in on a picture. This means that developers are not tied to a pre-defined keypad and can create what they need as the application is run. So, for instance, the phone itself has a large keypad for dialing a telephone number. In SMS and email mode, the keypad is shrunk slightly and becomes a full keyboard.

As Steve pointed out in his keynote, this is very similar to what happens on a PC today. A PC application can display buttons and controls in any configuration it needs, allowing the user to interact with it through use of a mouse. Now imagine the iPhone taking the place of the PC and your finger taking the place of the mouse. Your finger is a superb pointing device and it’s pretty handy too.

The iPhone runs an embedded version of OS X, giving it access to a full array of rich applications. It should also give developers access to a familiar API for programming. While no mention of third-party development has been made yet, you can bet that Apple will release some sort of SDK. The full touchscreen capabilities of this device will definitely make for some innovative applications.

It supports WiFi, EDGE, and Bluetooth 2.0 in addition to Quad-Band GSM for telephony. WiFi access is touted as “automatic” and requires no user intervention. While this is likely true in situations where there is no WiFi security in place, the experience when in a secure environment is unknown. More details will likely be released over the coming months.

Cingular is the provider of choice right now. Apple signed an exclusivity contract with Cingular, so you’re tied to their network for the time being. Being a Cingular customer myself, this isn’t such a bad thing. I like Cingular’s network as I’ve had better luck with it than the other networks I’ve been on.

In addition to phone capabilities, the iPhone is a fully functional iPod. It syncs with iTunes as you would expect, has an iPod docking connector, and supports audio and video playback. One of the cooler features is the ability to tip the iPhone on its side to enable landscape mode; the iPhone automatically switches to landscape mode when it detects the change in orientation. Video must be viewed in landscape mode.

So it looks like the iPhone has all of the current smartphone capabilities and then some. But how will it do in the market? The two models announced at Macworld are priced pretty high: the 4 Gig model will run you $499, and the 8 Gig model $599. This makes the iPhone one of the more expensive phones on the market. However, it seems that Apple is betting that a unified device, phone/iPod/camera/Internet, will be worth the premium price. They may be right, but only time will tell.

UPDATE : According to an article in the New York Times, Jobs is looking to restrict third-party applications on the iPhone. From the article :

“These are devices that need to work, and you can’t do that if you load any software on them,” he said. “That doesn’t mean there’s not going to be software to buy that you can load on them coming from us. It doesn’t mean we have to write it all, but it means it has to be more of a controlled environment.”

So it sounds like Apple is interested in third-party apps, but in a controlled manner. This means extra hoops that third-party developers need to jump through. This may also entail additional costs for the official Apple stamp of approval, meaning that smaller developers may be locked out of the system. Given the price point of the phone, I hope Apple realizes the importance of third-party apps and the impact they have. Without additional applications, Apple just has a fancy phone with little or no draw.