Get it while it’s hot….

Firefox 3.0, out now. Get it, it’s definitely worth it.

Oh, are you still here? Guess you need some incentive then. Well, let’s take a quick look at the new features.

Probably the most talked about feature in the new release is the “Awesome Bar.” Yeah, the name is kind of lame, but the functionality is quite cool. The new bar combines the old auto-complete history feature with your bookmarks. In short, when you start typing in the Address Bar, Firefox auto-completes based on history, bookmarks, and tags. A drop-down appears below the location bar, showing you the results that best match what you’re typing. The results include the name of the page, the address, and the tags you’ve assigned (if it’s a bookmark).

While I find this particular feature of the new Firefox to be the most helpful, many people do not. The most common complaint I’ve heard is that it forces the user into something new, breaking the “simplicity” of Firefox. I can somewhat agree with that, though I don’t think it’s that big a deal; still, the developers should have included a switch to revert to the old behavior. I did stumble upon a new extension and a few configuration options that can switch you back, though. The extension, called oldbar, modifies the presentation of the results so it resembles the old Firefox 2.0 results. The author of the extension is quick to point out that the underlying algorithm is still the Firefox 3.0 version.

You can also check out these two configuration options in the about:config screen:

  • browser.urlbar.matchOnlyTyped (default: False)
  • browser.urlbar.maxRichResults (default: 12)

Setting the matchOnlyTyped option to True makes Firefox display only entries that you have previously typed. The maxRichResults option determines the maximum number of entries that can appear in the drop-down. Unfortunately, there is currently no way to revert to the previous search algorithm, which has left a number of people quite upset.
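
Incidentally, if you’d rather not click through about:config on every machine, the same preferences can be set in a user.js file in your Firefox profile directory, which Firefox applies at startup. A minimal example (the values here are purely illustrative):

// user.js, placed in the Firefox profile directory
user_pref("browser.urlbar.matchOnlyTyped", true);  // only suggest addresses you've actually typed
user_pref("browser.urlbar.maxRichResults", 6);     // show fewer results than the default of 12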

Regardless, I do like the new “Awesome Bar,” though it did take a period of adjustment. One thing I never really liked was poring through my bookmarks looking for something specific. Even though I meticulously labeled each one, placed it in a special folder, and synchronized them so they were the same on all of my machines, I always had a hard time finding what I needed. The new “Awesome Bar” allows me to search history and bookmarks simultaneously, helping me quickly find what I need.

And to make it even better, Firefox 3.0 adds support for tags. What is a tag, you ask? Well, it’s essentially a keyword you attach to a bookmark. Instead of filing bookmarks away in a tree of folders (which you can still do), you assign one or more tags to a bookmark. Using tags, you can quickly search your bookmarks for a specific theme, helping you find that elusive bookmark efficiently. Gone are the days of trying to figure out which folder best matches a page you’re trying to bookmark, only to change your mind later on and desperately search for it in that other folder. Now, just add tags that describe it and file it away in any folder. Just recall one of the tags you used, and you’ll find that bookmark in no time. Of course, I still recommend using folders, for sanity’s sake.

Those are probably two of the most noticeable changes in the new Firefox. The rest is a little more subtle. For instance, speed has increased dramatically, both in rendering and in JavaScript execution. Memory usage seems better as well, with Firefox consuming noticeably less memory than previous versions.

On the security side of things, Firefox 3 adds support for the new EV-SSL certificates, displaying the owner of the site in green next to the favicon in the URL bar.

Firefox also tries to warn the user about potential virus and malware sites by checking them against the Google Safe Browsing blacklist. When you encounter a potentially harmful page, a full-page warning appears.

Similarly, if the page you are visiting appears to be a forgery, likely an attempt at phishing, you get a similar warning.

Finally, the SSL error page is a little clearer, trying to explain why a particular page isn’t working.

There are other security additions including add-on protection, anti-virus integration, parental controls on Windows Vista, and more. Overall, it appears they have put quite a lot of work into making Firefox 3.0 more secure.

There are plenty of other new features described in the release notes. Check them out, and then give Firefox 3.0 a shot. Download it, it’s worth it.

Headless Linux Testing Clients

As part of my day-to-day job, I’ve been working on a headless Linux client that can be transported from site to site to automate some network testing.  I can’t really go into detail on what’s being tested, or why, but I did want to write up a (hopefully) useful entry about headless clients and some of the changes I made to the basic CentOS install to get everything to work.

First up was the issue of headless operation.  We’re using Cappuccino SlimPRO SP-625 units with the triple Gigabit Ethernet option.  They’re not bad little machines, though I do have a gripe with the back cover on them.  It doesn’t properly cover all of the ports on the back, leaving rather large holes where dust and dirt can get in.  What’s worse is that the power plug is not surrounded and held in place by the case, so I can foresee the board cracking at some point from the stress of the power cord…  But, for a sub-$800 machine, it’s not all that bad.

Anyway, on to the fun.  These machines will be transported to various locations where testing is to be performed.  On-site, there will be no keyboard, no mouse, and no monitor for them.  However, sometimes things go wrong and subtle adjustments may need to be made.  This means we need a way to get into the machine, locally, just in case there’s a problem with the network connection.  Luckily, there’s a pretty simple means of accessing a headless Linux machine without the need to lug around an extra monitor, keyboard, and mouse.  If you’ve ever worked on a switch or router, you’ll know where I’m going with this.

Most technicians have access to a laptop, especially if they have to configure routers or switches.  Why not access a Linux box the same way?  Using the agetty command, you can.  A getty is a program that manages terminals within Unix.  Those terminals can be physical, like the local keyboard, or virtual, like a telnet or ssh session.  The agetty program is an alternative getty that has some non-standard features such as baud rate detection, adaptive tty, and more.  In short, it’s perfect for direct serial, or even dial-in, connections.

Setting this all up is a snap, too.  By default, CentOS (and most Linux distros) set up six gettys for virtual terminals.  These virtual terminals use yet another getty, mingetty, which is a minimalized getty program with only enough features for virtual terminals.  In order to provide serial access, we need to add a few lines to enable agettys on the serial ports.

But wait, what serial ports do we have?  Well, assuming they are enabled in the BIOS, we can see them using the dmesg and setserial commands.  The dmesg command prints out the current kernel message buffer to the screen.  This is usually the output from the boot sequence, but if your system has been up a while, it may contain more recent messages.  We can use dmesg to determine the serial interfaces like this:

[friz@test ~]$ dmesg | grep serial
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A

As you can see from the output above, we have both a ttyS0 and a ttyS1 port available on this particular machine.  Now, we use setserial to make sure the system recognizes the ports:

[friz@test ~]$ sudo setserial -g /dev/ttyS[0-1]
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: 16550A, Port: 0x02f8, IRQ: 3

The output is similar to dmesg, but setserial actually polls the port to get the necessary information, thereby ensuring that it’s active.  Note that you will likely need to run this command as root (hence the sudo above).

Now that we know what serial ports we have, we just need to add them to the inittab and reload the init daemon.  Adding these to the inittab is pretty simple.  Your inittab will look something like this:

#
# inittab       This file describes how the INIT process should set up
#               the system in a certain run-level.
#
# Author:       Miquel van Smoorenburg, <miquels@drinkel.nl.mugnet.org>
#               Modified for RHS Linux by Marc Ewing and Donnie Barnes
#

# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)
#
id:3:initdefault:

# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6

# Trap CTRL-ALT-DELETE
ca::ctrlaltdel:/sbin/shutdown -t3 -r now

# When our UPS tells us power has failed, assume we have a few minutes
# of power left.  Schedule a shutdown for 2 minutes from now.
# This does, of course, assume you have powerd installed and your
# UPS connected and working correctly.
pf::powerfail:/sbin/shutdown -f -h +2 "Power Failure; System Shutting Down"

# If power was restored before the shutdown kicked in, cancel it.
pr:12345:powerokwait:/sbin/shutdown -c "Power Restored; Shutdown Cancelled"

# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon

Just add the following after the original mingetty lines:

# Run agettys in standard runlevels
s0:2345:respawn:/sbin/agetty -L -f /etc/issue.serial 9600 ttyS0 vt100
s1:2345:respawn:/sbin/agetty -L -f /etc/issue.serial 9600 ttyS1 vt100

Let me explain quickly what the above means.  Each line is broken into multiple fields, separated by colons.  At the very beginning of the line is an identifier, s0 and s1 in our case.  Next comes a list of the runlevels in which this program should be spawned, then the action (respawn, which tells init to restart the getty whenever it exits), and finally the command to run.

The agetty command takes a number of arguments:

    • The -L switch disables carrier detect for the getty.
    • The next switch, -f, tells agetty to display the contents of a file before the login prompt, /etc/issue.serial in our case (a sample file is shown after this list).
    • Next is the baud rate to use.  9600 bps is a safe default.  You can specify speeds up to 115,200 bps, but higher rates may not work with all terminal programs.
    • Next up is the serial port, ttyS0 and ttyS1 in our example.
    • Finally, the terminal emulation to use.  VT100 is probably the most common, but you can use others.
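
The /etc/issue.serial file doesn’t exist by default; it’s just a banner that agetty prints before the login prompt, so create it with whatever text suits you.  Something along these lines is plenty (the wording is only an example):

Test client serial console (9600 8N1)
Authorized use only.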

Now that you’ve added the necessary lines, reload the init daemon via this command:

[friz@test ~]$ sudo /sbin/init q

At this point, you should be able to connect to your Linux machine with a null-modem serial cable and access it via a program such as minicom, PuTTY, or HyperTerminal.  And that’s all there is to it.
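
From the laptop side, the settings to use are 9600 bps, 8 data bits, no parity, 1 stop bit.  On a Linux laptop, for instance, something like the following should do the trick (adjust the device name to whatever your laptop’s serial port shows up as):

[friz@laptop ~]$ minicom -b 9600 -D /dev/ttyS0

One other note: if you need to log in as root over the serial line, CentOS only allows root logins on terminals listed in /etc/securetty, so you may have to add ttyS0 and ttyS1 to that file.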

You can also redirect all kernel console messages to the serial port.  This is accomplished by adding a switch to the kernel line in your /etc/grub.conf file like this:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/hdc3
#          initrd /initrd-version.img
#boot=/dev/hdc
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-53.1.21.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-53.1.21.el5 ro root=LABEL=/ console=ttyS0,9600
initrd /initrd-2.6.18-53.1.21.el5.img

The necessary change is the console option added to the end of the kernel line above.  The console switch tells the kernel that you want to redirect console output.  The first option is the serial port to redirect to, and the second is the baud rate to use.

And now you have a headless Linux system!  These come in handy when you need a Linux machine for remote access, but you don’t want to deal with having a mouse, keyboard, and monitor handy to access the machine locally.

DIVX: Return of the useless

In the late 1990s, Circuit City partnered with an entertainment law firm, Ziffren, Brittenham, Branca and Fischer, to create a new video format called DIVX.  DIVX was intended to be an alternative to movie rentals.  In short, you’d purchase a DIVX disc, reminiscent of a DVD, and you had the right to watch that disc as many times as you wanted within 48 hours of your first viewing.  After 48 hours passed, you had to pay an additional fee to continue viewing the disc.

This all sounds fine and dandy until you realize a few things.  This was a new format, incompatible with DVD players, which had come onto the market a few years earlier.  As a result, expensive DIVX or DIVX/DVD combo players had to be purchased.  These players had to be connected to a phone line so they could verify that the owner could play the disc.

The DIVX format quickly died out, leaving early adopters stranded with unusable discs and useless players.  Another fine example of the usefulness of DRM schemes.

Fast forward to 2008 and to Flexplay Entertainment.  Flexplay is a new twist on the old DIVX format.  This time, however, consumers only have to pay once.  Sort of.

Flexplay is a fully compatible DVD disc, with a twist.  You purchase the disc, and after you open the package, you have 48 hours to watch it before it “self-destructs.”  According to How Stuff Works, a standard DVD is a dual-layer disc that starts life as two separate pieces.  After the data is written on each piece, they are glued together using a resin adhesive.  The adhesive is clear, allowing laser light to pass through the first layer when necessary and read the second layer.

Flexplay works by replacing the resin adhesive with a special chemical compound that changes when exposed to oxygen.  Over time, the compound changes color and becomes opaque, rendering the DVD useless.  Once the disc has become opaque, it gets thrown away.

Before you begin fearing for the environment, Flexplay has a recycling program!  Flexplay offers two recycling options, local recycling and mail-in.  They claim that the discs are “no different in their environmental impact than regular DVDs” and that they comply with EPA standards.  Of course, they don’t point out that regular DVDs tend to be kept rather than thrown away.  They also offer this shining gem of wisdom, just before mentioning their mail-in recycling option:

“And of course, a Flexplay No-Return DVD Rental completely eliminates the energy usage and emissions associated with a return trip to the video rental store.”

It’s a good thing mailing the disc back to Flexplay is different than mailing a DVD back to NetFlix or Blockbuster…  Oh..  wait..

And this brings up another good point.  The purpose of Flexplay is to offer an alternative to rental services.  With both Netflix and Blockbuster, I can request the movies I want online, pay a minimal fee, and have them delivered directly to my house.  At worst, I may drive to a local rental store and rent a movie, which is no different from driving to a store selling Flexplay discs.  With Netflix and Blockbuster, I can keep those movies and watch them as many times as I want, well beyond the 48-hour window I would have with a Flexplay disc.  And, for the environmentally conscious, I then return the disc so it can be sent to another renter, removing the local landfill from the equation.

In short, this is yet another horrible idea.  The environmental impact this would have is astounding, if it ever took off.  Hopefully the public is smart enough to ignore it.

Windows 7… Take Two… Or Maybe Three?

Well, it looks like the early information on Windows 7 might be wrong.  According to an interview with Steven Sinofsky, Senior Vice President of Windows and Windows Live Engineering at Microsoft, a few details you may have heard may not be entirely true.  But then again, it seems that Mr Sinofsky tap-danced around a lot of the questions asked.

First and foremost, the new kernel.  There has been a lot of buzz about the new MinWin kernel, which many believe to be integral to the next release of Windows.  However, according to the interview, that may not be entirely true.  When asked about the MinWin kernel, Mr Sinofsky replied that they are building Windows 7 on top of the Windows Server 2008 and Windows Vista foundation.  There will be no new driver compatibility issues with the new release.  When asked specifically about the minimum kernel, he dodged the question, trying to focus on how Microsoft communicates, rather than new features of Windows.

So does this mean the MinWin kernel has been cut?  Well, not necessarily, but I do think it means that we won’t see the MinWin kernel in the form it has been talked about.  That is, very lightweight, and very efficient.  In order to provide 100% backwards compatibility with Vista, they likely had to add a lot more to the kernel, moving it from a lightweight, back into the heavyweight category.  This blog post by Chris Flores, a director at Microsoft, seems to confirm this as well.

The release date has also been pushed back to the 2010 timeframe that was originally stated.  At a meeting before the Inter-American Development Bank, Bill Gates had said that a new release of Windows would be ready sometime in the next year or so.  Mr Sinofsky, however, stated firmly that Windows 7 would be released three years after Vista, putting it in 2010.

Yesterday evening, at the All Things Digital conference, a few more details leaked out.  It was stated again that Windows 7 would be released in late 2009.  Interestingly enough, it seems that Windows 7 has “inherited” a few features from its chief competitor, Mac OS X.  According to the All Things Digital site, there’s a Mac OS X-style dock, though I have not been able to find a screenshot showing it.  There are these “leaked” screenshots, though their authenticity (and possibly the information provided with them) is questionable at best.

The biggest feature change, at this point, appears to be the addition of multi-touch to the operating system.  According to Julie Larson-Green, Corporate Vice President of Windows Experience Program Management, multi-touch has been built throughout the OS.  So far it seems to support the basic feature-set that any iPhone or iPod Touch supports.  Touch is the future, according to Bill Gates.  He went on to say:

“We’re at an interesting junction.  In the next few years, the roles of speech, gesture, vision, ink, all of those will become huge. For the person at home and the person at work, that interaction will change dramatically.”

All in all, it looks like Windows 7 will just be more of the same.  With all of the problems they’ve encountered with Vista, I’ll be surprised if Windows 7 becomes the big seller they’re hoping for.  To be honest, I think they would have been better off re-designing everything from scratch with Vista, rather than trying to shovel in new features to an already bloated kernel.

Useful Windows Utilities? Really?

Every once in a while, I get an error that I can’t disconnect my USB drive because a file handle is open in another program.  Unfortunately, Windows doesn’t help much beyond that, and it’s left up to the user to figure out which application is responsible and shut it down.  In some cases, the problem persists even after shutting down all of the open apps, and you have to resort to looking through the process list in Task Manager.  Of course, you can always log off or restart the computer, but there has to be an easier way.

In Linux, there’s a nifty little utility called lsof.  The name of the utility, lsof, is short for List Open Files, and it does just that.  It displays a current list of open files, including details such as the name of the program using the file, it’s process ID, the user running the process, and more.  The output can be a bit daunting for an inexperienced user, but it’s a very useful tool.  Combined with the power of grep, a user can quickly identify what files a process has open, or what process has a particular file open.  Very handy for dealing with misbehaving programs.

Similar tools exist for Windows, but most of them are commercial tools, not available for free use.  There are free utilities out there, but I hadn’t found any that gave me the power I wanted.  That is, until today.

I stumbled across a nifty tool called Process Explorer.  Funnily enough, it’s actually a Microsoft tool, though they seem to have acquired it by purchasing SysInternals.  Regardless, it’s a very powerful utility, and came in quite handy for solving this particular problem.

In short, I had opened a link in Firefox by clicking on it in Thunderbird.  After closing Thunderbird, I tried to un-mount my USB drive, where I have Portable Thunderbird installed, but I received an error that a file was still open.  Apparently Firefox was the culprit, and closing it released the handle.

The SysInternals page on Microsoft’s TechNet site lists a whole host of utilities for debugging and monitoring Windows systems.  These can be fairly dangerous in the hands of the inexperienced, but for those of us who know what we’re doing, these tools can be invaluable.  I’m quite happy I stumbled across them.  The closed nature of Windows can be extremely frustrating at times when I cannot figure out what’s going on.  I’m definitely still a Linux user at heart, but these tools make using Windows a tad more bearable.

H.R. 5994

What a title, eh?  Well, that little title up there may impact how you use the Internet in the future..  H.R. 5994, known as the “Internet Freedom and Non-Discrimination Act of 2008,” is the latest attempt by the US Congress to get a handle on Internet access.  In short, this is another play in the Net Neutrality battle.  I’m no lawyer, but it seems that this is a pretty straightforward document.

H.R. 5994 is intended to be an extension of the Clayton Anti-Trust Act of 1914.  Its stated purpose is to “promote competition, to facilitate trade, and to ensure competitive and nondiscriminatory access to the Internet.”  The main theme, as I see it, is that providers can’t discriminate against content providers.  In other words, if they prioritize web traffic on the network, then all web traffic, regardless of origin, should be prioritized.

At first glance, this seems to be a positive thing; however, there may be a few loopholes.  For instance, take a look at the following from Section 28(a):

“(3)(A) to block, to impair, to discriminate against, or to interfere with the ability of any person to use a broadband network service to access, to use, to send, to receive, or to offer lawful content, applications or services over the Internet;”

From the looks of it, it sounds like you can’t prevent known “bad users” from getting an account, provided they are using the account for legal purposes.  As an example, you couldn’t prevent a known spammer from getting an account, provided, of course, that they obey the CAN-SPAM Act.

And what about blocklists?  Spam blocklists are almost a necessity for mail servers these days, otherwise you have to process every single mail that comes in.  3(A) specifically dictates that you can’t block lawful content…  Unfortunately, it’s not always possible to determine if the mail is lawful until it’s processed.  So this may turn into a loophole for spammers.

The act goes on with the following:

“(4) to prohibit a user from attaching or using a device on the provider’s network that does not physically damage or materially degrade other users’ utilization of the network;”

This one is kind of scary because it does not dictate the type of device, or put any limitations on the capabilities of the device, provided it “does not physically damage or materially degrade other users’ utilization of the network.”  So does that mean I can use any type of DSL or Cable modem that I choose?  Am I considered to be damaging the network if I use a device that doesn’t allow the provider local access?  Seems to me that quite a few providers wouldn’t be happy with this particular clause…

Here’s the real meat of the Net Neutrality argument, though.  Section 28(b) states this:

“(b) If a broadband network provider prioritizes or offers enhanced quality of service to data of a particular type, it must prioritize or offer enhanced quality of service to all data of that type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or enhanced quality of service.”

Wham!  Take that!  Basically, you can’t prioritize your own traffic at the expense of others.  So a local provider who offers a VoIP service can’t prioritize their own and not prioritize (or block) Skype, Vonage, or others.  But, there’s a problem here..  Does the service have to use established standards to be prioritized?  For instance, Skype uses a proprietary VoIP model.  So does that mean that providers do not have to prioritize it?

Providers do, however, get some rights as well.  For instance, Section 28(c) specifically states:

    `(c) Nothing in this section shall be construed to prevent a broadband network provider from taking reasonable and nondiscriminatory measures–
    • `(1) to manage the functioning of its network, on a systemwide basis, provided that any such management function does not result in discrimination between content, applications, or services offered by the provider and unaffiliated provider;
    • `(2) to give priority to emergency communications;
    • `(3) to prevent a violation of a Federal or State law, or to comply with an order of a court to enforce such law;
    • `(4) to offer consumer protection services (such as parental controls), provided that a user may refuse or disable such services;
    • `(5) to offer special promotional pricing or other marketing initiatives; or
    • `(6) to prioritize or offer enhanced quality of service to all data of a particular type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or quality of service.

So providers are allowed to protect the network, protect consumers, and still make a profit.  Of course, assuming this becomes law, only time will tell what the courts will allow a provider to consider “protection” to be…

It looks like this is, at the very least, a good start to tackling this issue.  That is, if you believe that the government should be involved with this.  At the same time, this doesn’t appear to be something most providers would be interested in.  From a consumer standpoint, I want to be able to get the content I want without being blocked because it comes from Google and not Yahoo, who the provider has an agreement with.  Since most consumers are in an area with only one or two providers, this can be a good thing, though.  It prevents a monopoly-type situation where the consumer has no choice but to take the less-than-desirable deal.

This is one of those areas where there may be no solution.  While I side with the providers in that they should be able to manage their network as they see fit, I can definitely see how something needs to be done to ensure that providers don’t take unfair advantage.  Should this become law, I think it will be a win for content providers rather than Internet providers and consumers.

Data Reliance

As we become a more technologically evolved society, our reliance on data increases.  E-Mail, web access, electronic documents, bank accounts, you name it.  The loss of any one of these can have devastating consequences, from loss of productivity, to loss of home, health, or even, in extreme cases, life.

Unfortunately, I got to experience this first hand.  At the beginning of the week, there was a failure on the shared system I access at work.  Initially, it seemed this was merely a permissions issue; we had just lost access to the files for a short time.  However, as time passed, we learned that the reality of the situation was much worse.

Like most companies, we rely heavily on shared drive access for collaboration and storage.  Of course, this means that the majority of our daily work exists on those shared drives, making them pretty important.  Someone noticed this at some point and decided that it was a really good idea to back them up on a regular basis.  Awesome, so we’re covered, right?  Well, yeah..  sort of, but not really.

Backups are a wonderful invention.  They ensure that you don’t lose any data in the event of a critical failure.  Or, at the very least, they minimize the amount of data you lose..  Backups don’t run on a constant basis, so there’s always some lag time in there…  But, regardless, they do keep fairly up-to-date records of what was on the drive.

To make matters even better, we have a procedure for backups which includes keeping them off-site.  Off-site storage ensures that we have backups in the event of something like a fire or a flood.  This usually means there’s a bit of time between a failure and a restore because someone has to go get those backups, but that’s ok, it’s all in the name of disaster recovery.

So here we are with a physical drive failure on our shared drive.  Well, that’s not so bad, you’d think, it’s a RAID array, right?  Well, no.  Apparently not.  Why don’t we use RAID arrays?  Not a clue, but it doesn’t much matter right now; all my work from the past year is inaccessible.  What am I supposed to do for today?

No big deal, I’ll work on some little projects that don’t need shared drive access, and they’ll fix the drive and restore our files.  Should only take a few hours, it’ll be finished by tomorrow.  Boy, was I wrong…

Tomorrow comes and goes, as does the next day, and the next.  Little details leak out as time goes on.  First we have a snafu with the wrong backup tapes being retrieved.  Easily fixed, they go get the correct ones.  Next, we receive reports of intermittent corruption of files, but it’s nothing to worry about, it’s only a few files here and there.  Of course, we still have no access to anything, so we can’t verify any of these reports.  Finally, they determine that the access permissions were corrupted and they need to fix them.  Once completed, we re-gain access to our files.

A full work week passes before we finally have drive access back.  Things should go back to normal now, we’ll just get on with our day-to-day business.  *click*  Hrm..  Can’t open the file, it’s corrupt.  Oh well, I’ll just have to re-write that one..  It’s ok though, the corruption was limited.  *click*  That’s interesting..  all the files in this directory are missing..  Maybe they forgot to restore that directory..  I’ll have to let them know…  *click*  Another corrupt file…  Man, my work is piling up…

Dozens of clicks later, the full reality hits me…  I have lost hundreds of hours of work.  Poof, gone.  Maybe, just maybe, they can do something to restore it, but I don’t hold out much hope…  How could something like this happen?  How could I just lose all of that work?  We had backups!  We stored them off-site!

So, let this be a lesson to you.  Backups are not the perfect solution.  I don’t know all the details, but I can guess what happened.  Tape backup is pretty reliable; I’ve used it myself for years.  I’ve since graduated to hard drive backup, but I still use tapes as a secondary backup solution.  There are problems with tape, though.  Tapes tend to stretch over time, making them unreliable.  Granted, they do last a while, but it can be difficult to determine when a tape has gone bad.  Couple that with a lack of RAID on the server and you have a recipe for disaster.

In addition to all of this, I would be willing to bet that they did not test the backups on a regular basis.  Random checks of data from backups are an integral part of the backup process.  Sure, it seems pointless now, but imagine discovering, after hours of restoring files, that they’re all corrupt.  Random checks aren’t so bad when you think of it that way…
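
A spot check doesn’t have to be elaborate, either.  Something like the following sketch, which pulls a few random files out of a tar backup and compares them against the live copies, covers the basics (the archive name and paths are examples only, and shuf requires a reasonably recent GNU coreutils):

#!/bin/bash
# Pull a handful of random files out of last night's backup and compare
# them to the live copies.  Assumes the archive was created from / with
# relative paths; adjust to match your own backup layout.
ARCHIVE=/backups/shared-$(date +%F).tar.gz
mkdir -p /tmp/verify
tar -tzf "$ARCHIVE" | grep -v '/$' | shuf -n 5 | while IFS= read -r f; do
    tar -xzf "$ARCHIVE" -C /tmp/verify "$f"
    if cmp -s "/tmp/verify/$f" "/$f"; then
        echo "OK:       $f"
    else
        echo "MISMATCH: $f"
    fi
done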

So I’ve lost a ton of data, and a ton of time.  Sometimes, life just sucks.  Moving forward, I’ll make my own personal backup of files I deem important, and I’ll check them on a regular basis too…

Instant Kernel-ification

Server downtime is the scourge of all administrators, sometimes to the extent of bypassing necessary security upgrades, all in the name of keeping machines online.  Thanks to an MIT graduate student, Jeffery Brian Arnold, keeping a machine online, and up to date with security patches, may be easier than ever.

Ksplice, as the project is called, is a small executable that allows an administrator to patch security holes in the Linux kernel without rebooting the system.  According to the Ksplice website:

“Ksplice allows system administrators to apply security patches to the Linux kernel without having to reboot. Ksplice takes as input a source code change in unified diff format and the kernel source code to be patched, and it applies the patch to the corresponding running kernel. The running kernel does not need to have been prepared in advance in any way.”
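
For reference, the “unified diff format” mentioned there is simply what diff -u produces; a fix touching a single kernel source file might be captured like this (the file names are only an example):

[friz@test ~]$ diff -u linux-2.6.18/net/ipv4/tcp_input.c linux-2.6.18.fixed/net/ipv4/tcp_input.c > security-fix.patch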

Of course, Ksplice is not a perfect silver bullet; some patches cannot be applied using Ksplice.  Specifically, any patch that requires “semantic changes to data structures” cannot be applied to the running kernel.  A semantic change is a change “that would require existing instances of kernel data structures to be transformed.”

But that doesn’t mean that Ksplice isn’t useful.  Jeffery looked at 32 months of kernel security patches and found that 84% of them could be applied using Ksplice.  That’s sure to increase the uptime.

I have to wonder, though, what is so important that you need that much uptime.  Sure, it’s nice to have the system run all the time, but if you have something that is absolutely mission critical, something that must run 24×7 regardless, won’t you have a backup or two?  Besides which, you generally want to test patches before applying them to such sensitive systems.

There are, of course, other uses for this technology.  As noted on the Ksplice website, you can also use Ksplice to “add debugging code to the kernel or to make any other code changes that do not modify data structure semantics.”  Jeffery has posted a paper detailing how the technology works.

Pretty neat technology.  I wonder if this will lead to zero downtime kernel updates direct from Linux vendors.  As it is now, you’ll need to locate and manually apply kernel patches using this tool.

Ooh.. Bad day to be an IIS server….

Web-based exploits are pretty common nowadays.  It’s almost daily that we hear of sites being compromised one way or another.  Today, it’s IIS servers.  IIS is Microsoft’s web-server platform.  It runs on Windows-based servers and generally serves ASP (Active Server Pages) content, dynamic pages similar to those generated by PHP or Ruby.  There is some speculation that this is related to a recent security advisory from Microsoft, but this has not been confirmed.

Several popular blogs, including one on the Washington Post, have posted information describing the situation.  There is a bit of confusion, however, as to what exactly the attack is.  It appears that the IIS servers were infected using the aforementioned vulnerability, while other web servers are being infected using SQL injection attacks.  So it looks like there are several attack vectors being used to spread this particular beauty.

Many of the reports are using Google searches to estimate the number of infected systems.  Estimates put that figure at about 500,000, but take it with a grain of salt.  While a lot of sites are affected, using Google as the source of this particular metric is somewhat flawed: Google reports the total number of links found referring to a particular search string, so there may be duplicated information.  It’s safe to say, however, that this is pretty widespread.

Regardless of the method of attack, and which server is infected, an unsuspecting visitor to an exploited site is exposed to a plethora of attacks.  The malware uses a number of exploits in popular software packages, such as AIM, RealPlayer, and iTunes, to gain access to the visitor’s computer.  Once the visitor is infected, the malware watches for username and password information, reporting it back to a central server.  Both ISC and ShadowServer have excellent write-ups on the server exploit as well as the end-user exploit.

Be careful out there, kids…

Virtuality, Part Deux

I had the chance to install VirtualBox and I must say, I’m pretty impressed.  To start, VirtualBox is only a 15 meg download.  That’s pretty small when compared to Virtual PC, and downright puny when compared to VMWare Workstation.  There seems to be a lot packed in there, however.  As with VMWare, VirtualBox has special extensions that can be installed into the guest OS for better integration with the host.

Installation was a snap, similar to that of VMWare, posing no real problem.  The first problem I encountered was after rebooting the guest and logging in.  Apparently, I ran out of memory on the host OS, so VirtualBox gracefully paused the guest OS and alerted me.  After closing some open programs, I was able to resume the guest OS with no problem.  These low memory errors remain the only real problem I have with VirtualBox at this point.

Networking in VirtualBox is a little different from that of VMWare, and took me a few tries before I figured it out.  By default, the system is installed with no virtual adapters, making NAT the only means by which the guest OS can speak to the world.  By installing a virtual interface on the host, through the use of Host Interface Networking (HIF), you can allow the guest OS direct access to the network.  After the interface is created, it is bridged, through the use of a Windows Network Bridge interface, with the interface you want the traffic to flow out of.  Adding and removing an interface in the bridge sometimes takes a minute or two.  I honestly have no idea what Windows is doing during this time, but until the interface is added/removed, networking ceases to function.  I have also noticed that if VirtualBox is running, regardless of the state of the guest OS, modifying the bridge will fail.

Installation of the guest extensions, which required GCC and the kernel headers on the guest OS to be installed, was relatively painless.  After making sure the necessary packages were installed in CentOS, VirtualBox compiled and installed the extensions.  This allowed me to extend my desktop resolution to 1024×768, as well as enabling auto-capture of the mouse pointer when it enters the virtual machine window.  According to the documentation, the extensions also add support for a synchronized clock, shared folders and clipboard, as well as automated Windows logins (assuming you are running a Windows guest OS).
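
For reference, the prep work on a CentOS guest boils down to a few commands like these (the Guest Additions installer name varies between VirtualBox versions, so treat this as a sketch):

[friz@guest ~]$ sudo yum install gcc kernel-devel
[friz@guest ~]$ sudo mount /dev/cdrom /mnt
[friz@guest ~]$ sudo sh /mnt/VBoxLinuxAdditions.run

The first line pulls in the compiler and kernel headers, the second mounts the Guest Additions CD image after you select it from the VirtualBox Devices menu, and the third runs the installer itself.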

VirtualBox is quite impressive, and I’ve started using it full time.  It’s not quite as polished as VMWare is, but it definitely wins price-wise.  I’m sure VMWare has many more features that I am not using, that may actually justify the price.  For now, I’ll stick with VirtualBox until something forces me to switch.

In related news, I’ve been informed by LonerVamp that VMWare Server, which is free, would also satisfy my needs.  I am a bit confused, though, that a server product would be released for free while a workstation product would not.  I was initially under the impression that the server product merely hosted the OS, allowing another VMWare product to remotely attach to it.  That doesn’t appear to be correct, however.  Can someone explain the major differences to me?  Why would I want to use Workstation as opposed to Server?