Internet Toll Booths

Net Neutrality has been a hot topic for some time.  At the heart of the issue is a struggle by providers to increase revenues by charging for content as well as access.  The term “Net Neutrality” refers to the idea that the network should be neutral, or free: a “free” network places no restrictions on the content a user can access.  Andrew Odlyzko, Director of the Digital Technology Center at the University of Minnesota, recently published a paper (PDF) on Net Neutrality.  He highlights the struggle between big business and market fairness, a struggle that has existed for a very long time.

Think back to the early days of the Internet, when providers charged for access by either a transfer or a time limit.  That practice gradually gave way to unlimited access for a flat monthly fee.  More recently, reports have surfaced about providers limiting the total amount of traffic a user can transfer per month.  While providers aren’t coming out and saying it, they have seemingly reverted to the pay-per-meg days of old.

More concerning, perhaps, is the new practice of throttling specific traffic.  While this seems to be centered on BitTorrent and peer-to-peer traffic at the moment, what’s to prevent a provider from throttling site-specific traffic?  In fact, what’s to prevent the provider from creating “Walled Gardens” and charging the end user for access to “extra” content not included in the garden?

Apparently nothing, as some companies have already been doing this, and others have announced plans to.  More recently, the FCC has decided to step in and look into the allegations of data tampering.  Of course, the FCC seems to have problems of its own at the moment.

So what is the ultimate answer to this question?  Should the ISP have the right to block and even tamper with data?  Should the end-user have the right to free access to the Internet?  These are tough questions, ones that have been heavily debated for some time, and will likely be debated far into the future.

My own opinion is shaped by being both a subscriber and an engineer for a service provider.  The provider has built the infrastructure used to access the Internet.  Granted, the funds used to build that infrastructure were provided by the subscribers, but the end result is the same: the infrastructure is owned by the provider.  As with most property, the owner is generally free to do what they want with it, though this can be a pretty hotly debated topic as well, and perhaps a discussion for a later date.

For now, let’s assume that the owner has the right to modify and use what they own, with the only limits being those laws that protect safety.  In other words, I am free to dictate the rules in my own hotel.  Kids can only play in the play room, drinks and food are only allowed in the dining room, and no one walks through the hall without shoes on.  I will only provide cable TV with CNN and The Weather Channel, and the pool is only open from 1pm to 5pm on weekdays.  As a hotel owner, I can set these rules and enforce them by having any guest who violates them removed.  That is my right as a hotel owner.  Of course, if the guests don’t like my rules, they are free to stay at another hotel.

How is this different from an ISP?  An ISP can set the rules however they want, and the subscriber can vote on those rules through the use of their wallet.  Don’t like the rules?  Cancel your subscription and go elsewhere.

Of course, this brings up a major problem with the current state of Internet access.  Unfortunately, there are many areas, even heavily populated ones, where there is no other provider to go to.  In some cases there is a telephone company to provide access, but no alternative such as cable.  In others, cable exists, but the phone company doesn’t have high-speed access yet.  And, in the grand tradition of greed and power, the providers in those areas are able to charge whatever rates they want (with some limitations, as set by the government), and allow or block access in any manner they wish.  And since there are no alternatives, the subscriber is stuck with service they don’t want at a rate they don’t want to pay.

So, my view is somewhat complicated by the fact that competition between providers is non-existent in some areas.  Many subscribers are stuck with the local carrier and have no choice.  And while I believe that the provider should be able to run their network as they choose, it muddies the waters somewhat because the subscriber cannot vote with their wallet unless they are willing to go without access.

I don’t find the idea of a “walled garden” that much of a problem, per se.  Look at AOL, for instance.  They flourished for a long time, and in the beginning they were a perfect example of a walled garden.  More recently they have allowed full Internet access, but the core AOL client still exists and lets them feed specific content to the customer.  If providers were willing to lower rates and provide interfaces like AOL’s, I can easily see some users jumping at the opportunity.  Likewise, I see users, such as myself, who are willing to pay a premium for unadulterated access to the Internet.

My hope is that the Internet remains unmolested and open to those who want access.  We can only wait and see what will happen in the end.

A new hairpiece for Mozilla?

Back in October I wrote about a new technology from Mozilla Labs called Prism.  Since then, the team at Mozilla has been working on some newer technology.

First up is something called Personas.  Personas is a neat little extension that lets you modify the Firefox theme on the fly.  You are presented with a small menu, accessible via an icon on the status bar.  From the menu, you can choose from a number of different “themes” that will change the design of the default Firefox theme.

Overall, Personas is just a neat little extension with no real purpose other than breaking up the monotony.  You can set it to randomly select a persona, which changes the persona for each instance of the browser.  More options are definitely needed, such as a custom list of personas to choose from, but it’s a decent start.

More interesting, however, is the second technology I’d like to present.  Dubbed Weave, this technology is much more in line with what I’ve been looking forward to for years.  Weave gives the user a way to record their individual settings, store them on a remote server, and sync them with any other installation of Firefox.  In fact, Weave aims to let the user sync their preferences with other third-party applications, such as social networks and browsers.

To be honest, I have no real interest whatsoever in social networks.  I avoid MySpace like the plague, and I haven’t bothered to look into Facebook at all.  My online collaboration, thus far, has been mostly through traditional means: instant messaging, e-mail, and the web.  In fact, I’m not sure any of my online activities fall into the so-called “social” category.  So, my interest here lies merely in the distribution of my personal metadata between the applications I access.  I would love to be able to “log in” to any computer and immediately download my browser settings, bookmarks, and maybe even my browsing history.  Having all of that information in one central location that can be accessed whenever I need it is a wonderful thought.

I currently use the Bookmark Sync and Sort extension which allows me to upload my bookmarks to my own personal server and synchronize them with other installations of Firefox.  Other such extensions exist to allow you to sync with Google, Foxmarks, and more, but I prefer to have complete control over my data, rather than placing it on a third-party server.

Weave promises to be an open framework for metadata handling, or services integration.  They offer a diagram of the process, which works roughly as follows:

In essence, you access your metadata via a web browser, phone, or some other third-party application.  That application, being Weave-aware, allows you to view and manipulate your metadata.  You can choose to make some of your data available to outside users, such as friends and family, or even make it completely open to the world.  At the same time, any new metadata you create is automatically synchronized with the central servers, updating it instantly wherever you access it.

Weave looks to be a pretty exciting project, one I plan on keeping an eye on.

HERO Returns!

Greetings and welcome to a new year.  Same as the old year, but incremented by one.  Exciting, eh?

I stumbled across an article the other day about an old friend of mine.  I worked on him all through high school, learning quite a bit about robotics along the way.  His name?  HERO 2000.

HERO had all sorts of cool gadgets, including a full robotic arm, speech synthesis, and a bunch of sensors to detect light, sound, heat, and more.  You could even write programs, in BASIC, that automated the robot to do different tasks.  I spent quite a bit of time programming him for a variety of tasks, getting him set up for shows, and just playing around with all of the different sensors and other features.  Like I said, I learned a lot.

So, back to the article I mentioned.  Apparently Heathkit, the original maker of the HERO robots, is at it again.  The HERO robot is coming back this year!  The new HE-RObot is supposedly available now, according to an article on DeviceGuru, with educational kits coming in January and February.

According to the specifications, the new HERO runs Windows XP Pro on an Intel Core 2 Duo processor.  I’m not impressed with Windows, but I’m sure that can be replaced easily enough.  In fact, with the large OSS crowd out there, I’ll bet there’s a full Linux install for HERO before the end of the year.

At any rate, the robot comes with a webcam, a CD-ROM/CD-RW drive (for on-the-go burning, of course), a bunch of sensors, speakers, and more.  The only thing I see missing is the arm.  And, unfortunately, based on the pictures available, it doesn’t look like the arm will ever be available.  There’s just not enough room for it.

So, how about price?  Well, it appears that White Box Robots is the manufacturer of this particular machine.  According to their website, the Series 9 PC-Bot, which the HE-RObot is based on, runs a cool $7,995.  Ugh.  At that price, I can research and build my own.  There are less expensive models, including a few that run Linux (which means that drivers already exist), so let’s hope Heathkit sells them for a lower price.  I would love to buy one of these as a kit and build it with my sons, but even $5,000 is way out of my price range…  Anyone want to donate one to me?  :)  Pretty please?

“Today is officially a bad day…”

The X Prize Cup was held this past weekend, and among the contestants was Armadillo Aerospace, headed by none other than 3D game programming guru, John Carmack. John and his intrepid crew have been working for about seven years on their rocketry project. Currently, their goal is to enter, and win, the Northrop Grumman Lunar Lander Challenge.

The challenge is described as follows:

The Competition is divided into two levels. Level 1 requires a rocket to take off from a designated launch area, rocket up to 150 feet (50 meters) altitude, then hover for 90 seconds while landing precisely on a landing pad 100 meters away. The flight must then be repeated in reverse and both flights, along with all of the necessary preparation for each, must take place within a two and a half hour period.

The more difficult course, Level 2, requires the rocket to hover for twice as long before landing precisely on a simulated lunar surface, packed with craters and boulders to mimic actual lunar terrain. The hover times are calculated so that the Level 2 mission closely simulates the power needed to perform the real lunar mission.

It sounds simple, but, as with most rocketry projects, it’s not. John and his team competed in 2006, but were not successful. They had another chance this past weekend to take another shot at the prize, a cool $350,000. Six flight attempts later, however, they walked away empty handed.

That’s not to say, however, that they didn’t accomplish anything at all. Even among the failures, Armadillo accomplished a lot. Two flights were quite successful, though they only completed the first part of the Level 1 flight. A third flight was able to hover for a full 83 seconds, despite a crack in the combustion chamber. Overall, these were great accomplishments.

John Demar, a regular on the ARocket mailing list, was kind enough to post a bunch of photos from the event. John Carmack, just prior to leaving for the cup, posted a video of the AST qualifying flight at the Oklahoma space port. I’m sure John, or one of the crew, will post a complete run-down of the event on the Armadillo site when they get back home.

While they were unsuccessful this year, they were the only team to enter the competition. I can’t wait to see what they do next!

AIR, and a Prism

Web 2.0 is upon us, and with it comes new technologies determined to integrate it with our daily activities.  Thus far, interacting with the web has been through the use of a web browser such as Firefox, Opera, or Internet Explorer.  But, times are changing…

 

Let’s take a peek at two new technologies that are poised to change the web world and truly integrate web-based applications with traditional desktop applications.

 

First up is Adobe AIR, formerly known as Apollo.  According to Adobe:

Adobe® AIR lets developers use their existing web development skills in HTML, AJAX, Flash and Flex to build and deploy rich Internet applications to the desktop.

In a nutshell, it brings web-based content directly into standalone applications.  In other words, developers can write a complete web-based application and distribute it as a downloadable application.  It’s a pretty neat concept, as it gives you access to the standard UI elements of a desktop application while letting you use standard web technologies such as HTML and JavaScript.

It’s cross-platform, and, like Java, the runtime is installed separately, so you can ship a small application without bundling the framework.  It also supports offline use.  In other words, you can interact with web-based applications while not connected to the Internet.  There are limitations, of course, but all of your interactions are queued up and synchronized with the online portion of the application the next time you connect.
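To illustrate that queue-and-sync idea, here is a minimal Python sketch of the general pattern.  It is not AIR’s actual API; the class, method, and file names are made up for the example.

```python
import json

class OfflineQueue:
    """Rough sketch of the queue-and-sync pattern: actions performed while
    offline are saved locally, then replayed against the server on reconnect.
    (Illustrative only; not Adobe AIR's actual API.)"""

    def __init__(self, path="pending_actions.json"):
        self.path = path
        self.pending = []

    def record(self, action):
        # Store the action locally while there is no connection.
        self.pending.append(action)
        self._save()

    def sync(self, send):
        # On reconnect, replay every queued action; 'send' would be an
        # HTTP call (or similar) in a real application.
        while self.pending:
            send(self.pending.pop(0))
        self._save()

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.pending, f)
```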

It looks like a pretty cool technology.  Time will tell if it takes off or not.  One drawback, depending on who you are, is that this is not an open-source solution.  This is an Adobe product and with that comes all of the Adobe licensing.

 

The other new technology is Mozilla Labs’ Prism.  Prism is similar to AIR in that it strives to create desktop-based applications using web technologies, but so far it’s going about it in the opposite manner from AIR.  Prism allows you to encapsulate online content in a simple desktop application, minus any of the fancy UI elements associated with the Firefox web browser.  The result is a fast web-based application running in a normal desktop window.

It doesn’t sound like much now, but it has potential.  Mozilla has plans to add new functionality to the web to allow for offline data storage, 3D graphics, and more.  So, instead of extending the capabilities of Prism, Mozilla wants to extend the capabilities of the web.

So why the different approach?  Well, with AIR, if you are away from your computer for some reason, you may not be able to access the same content you normally would.  AIR may not be installed on the new machine, and you may not have permission to install it.  You can likely access the web-based version of the application you were using, but you may end up with limited functionality.

Prism, on the other hand, allows you to use web applications as if they were desktop applications.  But, at the end of the day, it’s still a web application.  So, if you find yourself on someone else’s machine, without Prism, a simple web browser will do.

 

Both technologies clearly have advantages, and only time will tell if either, or both, survive.  It’s a strange, new world, and I’m excited…

Stop Car Thief!

Technology is wonderful, and we are advancing by leaps and bounds every day.  Everything is becoming more connected, and in some cases, smarter.  For instance, are you aware that almost every modern vehicle is microprocessor controlled?  Computers inside the engine control everything from the fuel mixture to the airflow through the engine.

Other computer-based systems in your car can add features such as GPS navigation, or even connect you to a central monitoring station.  GM seems to have cornered the market on mobile assistance with its OnStar service.  OnStar is an in-vehicle system that allows owners to report emergencies, get directions, make phone calls, and even remotely unlock their car doors.

Well, OnStar recently announced plans to add another feature to its service.  Dubbed “Stolen Vehicle Slowdown,” the new service allows police to remotely stop a stolen vehicle.  The service is supposed to start in 2009 with about 20 models.  Basically, when the police identify a stolen vehicle, they can have an OnStar technician remotely disable the vehicle, leaving the thief with control over only the steering and brakes.  OnStar also mentions that they may issue a verbal warning to the would-be thief prior to disabling the car.

 

But is this too much power?  What are the implications here?  OnStar is already a wireless system that allows remote unlocking of your car doors.  It reports vehicle information back to OnStar, who can then warn you about impending vehicle problems.  Remote diagnostics can be run to determine the cause of a malfunction.  And now the vehicle can be disabled remotely?

As history has shown us, nothing is unhackable.  How long will it be until hackers find a way around OnStar’s security and a way to disable vehicles at will?  It will likely start as a prank: disabling vehicles on the highway, causing havoc with traffic, but in a relatively harmless manner.  How about disabling the boss’s car?  But then someone will use this new power for evil.  Carjackers will start disabling cars to make them easier to steal.  Criminals will disable vehicles so they can rob or harm the driver.

So how far is too far?  Is there any way to make services such as this safe?

Whoa! Slow down! Or else…

There have been rumblings over the past few years about companies that are throttling customer bandwidth and, in some instances, canceling their service. I can confirm one of the rumors, having worked for the company involved, and I would tend to believe the others. The problem with most of these situations is that none of these companies ever solidly defines what will result in throttling or loss of service. In fact, most of them merely put clauses in their Terms of Service stating that the bandwidth the customer is purchasing is not sustained, not guaranteed, etc.

One particular company has been in the news as of late, having cut customers off time and time again. In fact, they appear to have a super-secret internal group of customer support representatives that deals with the “offenders.” Really, I’m not making this up. Check out this blog entry. This is pretty typical of companies that enact these types of policies. What I find interesting here is how Comcast determines whom to disable. According to the blog entry by Rocketman, Comcast is essentially determining who the top 1% of users are for each month and giving them a high-usage warning. The interesting bit is that this is almost exactly how my previous employer was handling it.

Well, apparently Comcast has come out with a statement to clarify what excessive usage is. According to Comcast, excessive usage is defined as “a user who downloads the equivalent of 30,000 songs, 250,000 pictures, or 13 million emails.” So let’s pull this apart a little. The terms they use are rather interesting. Songs? Pictures? How is this even close to descriptive enough to use? A song can vary wildly in size depending on the encoding method, bitrate, and so on, so the same song can range from 1 MB to 100 MB. How about pictures, then? Well, what kind of pictures? After all, thumbnails are pictures too. So, again, a picture can vary from 10 KB to 10 MB, depending on its size and detail. And, of course, let’s not forget emails. An average email is about 10 KB or so, but emails can also range up to several MB in size.

So let’s try some simple math on this. Email seems to be the easiest to deal with, so we’ll use that. 13 million emails in one month, assuming a 10 KB average size for each email, works out to approximately 130 GB of data. That’s an average of only about 50 KB per second over the course of 30 days. If we assume a user is only on the computer for 8 hours a day, that’s an average of about 150 KB per second for the entire 8 hours each day. Of course, we don’t normally download at such a consistent rate; real traffic is much burstier.
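For the curious, here is that back-of-the-envelope math worked out as a quick Python sketch, using the same 10 KB average email size assumed above.

```python
# 13 million 10 KB emails, spread over a 30-day month.
emails = 13_000_000
avg_email_kb = 10

total_kb = emails * avg_email_kb                  # 130,000,000 KB, i.e. ~130 GB
secs_full_month = 30 * 24 * 3600                  # downloading around the clock
secs_8h_days = 30 * 8 * 3600                      # downloading 8 hours a day

print(f"Total: {total_kb / 1_000_000:.0f} GB")                     # ~130 GB
print(f"Around the clock: {total_kb / secs_full_month:.0f} KB/s")  # ~50 KB/s
print(f"8 hours a day:    {total_kb / secs_8h_days:.0f} KB/s")     # ~150 KB/s
```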

Now, I don’t believe the average user is going to download this much data, but there are business professionals who could easily exceed this rate. The bigger issue here is how these companies are handling it. They advertise and sell access rates ranging anywhere from 3 Meg to 10 Meg and then get upset when the customers actually use that bandwidth. A 3M profile works out to something in the range of 972 GB of data in one month. 10M is even more fun, allowing a maximum of about 3.2 TB. Think about that for a minute. It means you can only use about 13% of a 3M profile, or 4% of a 10M profile, before they’ll terminate your service.
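The same sort of sketch covers the line-rate side, treating 1 Mbps as one million bits per second and using the ~130 GB figure from above as the cutoff.

```python
# Maximum transfer for a 3 Mbps and a 10 Mbps profile over a 30-day month,
# compared against the ~130 GB "excessive use" estimate above.
secs_per_month = 30 * 24 * 3600
cutoff_gb = 130

for mbps in (3, 10):
    gb_per_month = mbps * 1_000_000 / 8 * secs_per_month / 1e9   # bits -> bytes -> GB
    print(f"{mbps} Mbps: ~{gb_per_month:,.0f} GB/month; "
          f"the cutoff is {cutoff_gb / gb_per_month:.0%} of that")
# 3 Mbps: ~972 GB/month; the cutoff is 13% of that
# 10 Mbps: ~3,240 GB/month; the cutoff is 4% of that
```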

While I understand that providers need to ensure that everyone receives a consistent, reliable service, I don’t believe they can treat customers like this. We’ll see how this turns out over time, but I expect that as video becomes more popular, you’ll see customers that exceed this rate on a much more consistent basis. I wonder how providers will handle that…

Satellite TV Woes

Back in the day, I had Analog and then Digital cable.  Having been employed by a sister company of the local cable company, I enjoyed free cable.  There were a lot of digital artifacts, and the picture wasn’t always that great, but it was free and I learned to live with it.

After I left that company, I had to pay full price for the digital cable I had installed.  Of course, I was used to a larger package and the price just outright shocked me.  With the cable modem included, it was somewhere in the $150 a month range.  Between the signal issues on the cable TV, and the constant cable modem outages, I happily decided to drop both the cable and the cable modem and move on to DSL and Satellite TV.

My first foray into Satellite TV was with Dish Networks.  The choice to do so was mostly guided by my brother’s employment by Dish.  So, we checked it out and had it installed.  At the time, they were running a free DVR promotion, so we grabbed that as well.

Dish is great.  The DVR was a dual tuner, so we were able to hook two TVs up to it.  We could record two shows at once, and watch two recorded shows at the same time, one on each TV.  It was pure TV bliss, and it got better.  Dish started adding little features here and there that I started noticing more and more.  First, the on-screen guide started showing better summaries of the shows.  Then it would show the year the show was produced in.  And finally, it started showing actual episode number information.  Little things, but it made all the difference.

Dish, however, had its problems.  My family and I only watch a few channels.  The kids like the cartoon channels: Cartoon Network, Nickelodeon, Noggin, and Boomerang.  My wife enjoys the local channels for current shows such as CSI and Law and Order, and also the educational channels such as The History Channel, The Science Channel, and Discovery.  As for me, I’m into stuff like SciFi, FX, and, occasionally, G4.  CSI and Law and Order are on my menu as well.  The problem is, in order to get all of the channels we wanted, we needed to subscribe to the largest Dish package.  It’s still cheaper than cable, but more money than we wanted to pay to pick up one or two extra channels.

Enter DirecTV.  DirecTV offered all the channels we wanted in their basic package.  So, we ordered it.  As it turns out, they’ve partnered with Verizon, so we can get our phone, DSL, and dish all on the same bill.  Personally, I couldn’t care less about that, but I guess it is convenient.

At any rate, we got DirecTV about a month or so ago.  Again, we got the DVR, but there’s a problem there.  DirecTV doesn’t offer a dual-TV DVR.  It’s dual-tuner, so we can record two shows simultaneously, but you can only hook a single TV up to it.  Our other TV has a normal DirecTV receiver on it.  Strike one against DirecTV, and we didn’t even have it hooked up yet.

So the guy comes and installs all the new stuff.  They used the same mount that the Dish Networks dish was mounted on, as well as the same cables, so that was convenient.  Dish Networks used some really high quality cables, so I was pleased that we were able to keep them.  Everything was installed, and the installer was pretty cool.  He explained everything and then went on his way.

I started messing around with the DVR and immediately noticed some very annoying problems.  The remote is a universal remote; Dish Networks used them too.  The problem with the DirecTV remote, however, is that when you interact with the TV, VCR, or DVD player, it apparently needs to send the signal to the DirecTV receiver before it will pass it on to the other equipment.  This means merely pressing the volume control does nothing; you need to hold the volume button for about a second before it will change the volume on the TV.  Very, very annoying.  I also noticed a considerable pause between pressing buttons on the remote and having the DVR respond.  The standalone receiver is much quicker, but there is definitely a noticeable lag.  Strike two.

Continuing to mess around with the DVR, I started checking out how to set up the record timers and whatnot.  DirecTV has a nice guide feature that automatically breaks the channels down into sub-groups such as movie channels, family channels, etc.  They also have a nicer search feature than Dish does.  As you type in what you’re searching for, it automatically refreshes the list of found items, giving you a quick shortcut to jump over and choose what you’re looking for.  Dish allows you to put in arbitrary words and record based on title matches, but I’m not sure if DirecTV does.  I never used that feature anyway.  So the subgroups and the search features are a score for DirecTV.

Once in the guide, however, it gets annoying.  Dish will automatically mask out any unsubscribed channels for you, whereas DirecTV does not.  Or, rather, if it does, the option is buried somewhere in the settings and I can’t find it.  Because of this, I find all sorts of movies and programs that look cool, but give me a “you’re not subscribed to this channel” message when I try to watch them.  Quite annoying.

I set up a bunch of timers for shows my family and I like to watch.  It was pretty easy and worked well.  A few days later, I checked the shows that had recorded.  DirecTV groups episodes of a show together, which is a really nice feature.  However, I noticed that one or two shows never recorded.  Once in a while, Dish had a problem recording new episodes that weren’t flagged as “new,” and it would skip them.  Thinking this was the problem with DirecTV, I just switched the timer to record all shows.  I’d have to delete a bunch of shows I’d already seen, but that’s no big deal.

Another week goes by, still no shows.  Apparently DirecTV doesn’t want me to watch my shows.  Now I’m completely frustrated.  Strike three.

Unfortunately, I’m in a two-year contract, so I just have to live with this.  I’m definitely looking to get my Dish Networks setup back at the end, though.  The extra few bucks we spent on Dish were well worth it.

 

DirecTV definitely has some features that Dish doesn’t, but the lack of a dual-TV DVR, the lag time between the remote and the receiver, and the refusal to record some shows is just too much.  The latter two I can live with, but the dual-TV DVR was just awesome and I really miss it.  Since I only have the DVR on the main TV in the house, I need to wait until the kids go to bed before I can watch my shows in peace.  Of course, I need to go to bed too, since I get up early for work.  This leaves virtually no time for the few shows I watch, and as a result, I have a bunch of stuff recorded that I haven’t been able to watch yet.  And, since it’s that time of year when most of my shows aren’t being aired, I know it’s only going to get worse.

I’m just annoyed at this point.  If you have a choice between Dish and DirecTV, I definitely suggest Dish.  It’s much better in the long run and definitely worth the extra few dollars.

Backups? Where?

It’s been a bit hectic; sorry for the long gap between posts.

 

So, backups.  Backups are important; we all know that.  So how many people actually follow their own advice and back their data up?  Yeah, it’s a sad situation on the desktop.  The server world is a little different, though, with literally tens, possibly hundreds, of different backup utilities available.

 

My preferred backup tool is the Advanced Maryland Automatic Network Disk Archiver, or AMANDA for short.  AMANDA has been around since before 1997 and has evolved into a pretty decent backup system.  Initially intended for single-tape backups, it has recently gained options for tape spanning and disk-based backups as well.

Getting started with AMANDA can be a bit of a chore.  The hardest part, at least for me, was getting the tape backup machine running.  Once that was out of the way, the rest of it was pretty easy.  The config can be a little overwhelming if you don’t understand the options, but there are a lot of guides on the Internet to explain it.  In fact, the “tutorial” I originally used is located here.

Once it’s up and running, you’ll receive a daily email from AMANDA letting you know how the previous night’s backup went.  All of the various AMANDA utilities are command-line based; there is no official GUI at all.  Of course, this causes a lot of people to shy away from the system, but overall, once you get the hang of it, it’s pretty easy to use.

Recovery from backup is a pretty simple process.  On the machine you’re recovering, run the amrecover program.  You then use regular filesystem-style commands to locate the files you want to restore and add them to the restore list.  When you’ve added all the files, issue the extract command and it will restore everything you’ve chosen.  It works quite well; I’ve had to use it once or twice…  Lemme tell ya, the first time I had to restore from backups I was sweating bullets.  After the first one worked flawlessly, subsequent restores were completed with a much lower stress level.  It’s great to know that backups are available in case of an emergency.

AMANDA is a great tool for backing up servers, but what about clients?  There is a Windows client as well that runs using Cygwin, a free open-source Linux-like environment for Windows.  Instructions for setting something like this up are located in the AMANDA documentation.  I haven’t tried this, but it doesn’t look too hard.  Other client backup options include remote NFS and SAMBA shares.

Overall, AMANDA is a great backup tool that has saved me a few times.  I definitely recommend checking it out.

Hard drive failure reports

FAST ’07, the File and Storage Technologies conference, was held from February 13th through the 16th. During the conference, a number of interesting papers were presented, two of which I want to highlight. I learned of these papers through posts on Slashdot rather than by actually attending the conference. Honestly, I’m not a storage expert, but I find these studies interesting.

The first study, “Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?” was written by a Carnegie Mellon University professor, Garth Gibson, and a recent PhD graduate, Bianca Schroeder.

This study compared the manufacturers’ specifications for MTTF (mean time to failure) and AFR (annual failure rate) against real-world hard drive replacement rates. The paper is heavily laden with statistical analysis, making it a rough read for some. However, if you can wade through all of the statistics, there is some good information here.

Manufacturers generally list MTTF values of 1,000,000 to 1,500,000 hours. AFR is calculated by dividing the number of hours in a year (8,760) by the MTTF, which means the AFR for these drives ranges from about 0.58% to 0.88%. In a nutshell, you have roughly a 0.6 to 0.9% chance of your hard drive failing each year.
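The arithmetic is simple enough to show directly:

```python
# AFR implied by a published MTTF: hours in a year divided by MTTF.
hours_per_year = 365 * 24          # 8,760 hours
for mttf_hours in (1_000_000, 1_500_000):
    afr = hours_per_year / mttf_hours
    print(f"MTTF {mttf_hours:,} hours -> AFR {afr:.2%}")
# MTTF 1,000,000 hours -> AFR 0.88%
# MTTF 1,500,000 hours -> AFR 0.58%
```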

As explained in the study, determining whether a hard drive has failed or not is problematic at best. Manufacturers report that up to 40% of drives returned as bad are found to have no defects.

The study concludes that real world usage shows a much higher failure rate than that of the published MTTF values. Also, the failure rates between different types of drives such as SCSI, SATA, and FC, are similar. The authors go on to recommend some changes to the standards based on their findings.

The second study, “Failure Trends in a Large Disk Drive Population,” was presented by a number of Google researchers: Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz André Barroso. This paper is geared towards finding trends in the failures. Essentially, the goal is to create a reliable model to predict a drive failure so that the drive can be replaced before essential data is lost.

The researchers used an extensive database of hard drive statistics gathered from the 100,000+ hard drives deployed throughout their infrastructure. Statistics such as utilization, temperature, and a variety of SMART (Self-Monitoring, Analysis and Reporting Technology) signals were collected over a five-year period.

This study is well written and can be easily understood by non-academicians and those without statistical analysis training. The data is clearly laid out and each parameter studied is clearly explained.

Traditionally, temperature and utilization have been pinpointed as the root cause of most failures. However, this study shows only a very small correlation between failure rates and these two parameters. In fact, failure rates attributed to high utilization were highest for drives under one year old, and stayed within 1% of the rates for low-utilization drives. It was only at the end of a given drive’s expected lifetime that the failure rate due to high utilization jumped up again. Temperature was even more of a surprise, with low-temperature drives failing more often than high-temperature drives until about the third year of life.

The report basically concludes that a reliable model of failure prediction is mostly impossible at this time, because no single parameter reliably indicates imminent failure. SMART signals were useful in indicating impending failures, and most drives fail within 60 days of the first reported errors. However, 36% of their failed drives reported no errors at all, making SMART a poor overall predictor.

Unfortunately, neither of these studies elaborated on the manufacturers or models of the drives used. This is likely due to professional courtesy and a lack of interest in being sued for defamation. While these studies will doubtless be useful to those designing large-scale storage networks, manufacturer-specific information would be of great help.

Personally, I mostly rely on Seagate hard drives. I’ve had very good luck with them, having had only a handful fail on me over the past few years. Maxtor used to be my second choice for drives, but they were acquired by Seagate at the end of 2005. I tend to stay away from Western Digital drives, having had several bad experiences with them in the past. In fact, my brother had one of their drives literally catch fire and destroy his computer. IBM has also had some issues in the past, especially with their Deskstar line of drives, which many people nicknamed the “Death Star” drives.

With the amount of information stored on hard drives today, and the massive amount that will be stored in the future, hard drive reliability is a concern for many vendors. It should be a concern for end-users as well, although end-users are not likely to take it seriously. Overall, these two reports are excellent overviews of the current state of reliability and the trends seen today. Hopefully drive manufacturers can use these reports to design changes that increase reliability, and to facilitate earlier detection of impending failures.