DIVX: Return of the useless

In the late 1990s, Circuit City partnered with an entertainment law firm, Ziffren, Brittenham, Branca and Fischer, to create a new type of video standard called DIVX.  DIVX was intended to be an alternative to movie rentals.  In short, you purchased a DIVX disc, reminiscent of a DVD, and you had the right to watch that disc as many times as you wanted within 48 hours of your first viewing.  After those 48 hours passed, you had to pay an additional fee to continue viewing the disc.

This all sounds fine and dandy until you realize a few things.  This was a new format, incompatible with DVD players, which had come onto the market a few years earlier.  As a result, expensive DIVX or DIVX/DVD combo players had to be purchased.  These players had to be connected to a phone line so they could verify that the owner could play the disc.

The DIVX format quickly died out, leaving early adopters stranded with unusable discs and useless players.  Another fine example of the usefulness of DRM schemes.

Fast forward to 2008 and to Flexplay Entertainment.  Flexplay is a new twist on the old DIVX format.  This time, however, consumers only have to pay once.  Sort of.

Flexplay is a fully compatible DVD disc, with a twist.  You purchase the disc, and after you open the package, you have 48 hours to watch it before it “self-destructs.”  According to How Stuff Works, a standard DVD is a dual-layer disc that starts life as two separate pieces.  After the data is written on each piece, they are glued together using a resin adhesive.  The adhesive is clear, allowing laser light to pass through the first layer when necessary and read the second layer.

Flexplay works by replacing the resin adhesive with a special chemical compound that changes when exposed to oxygen.  Over time, the compound changes color and becomes opaque, rendering the DVD useless.  Once the disc has become opaque, it gets thrown away.

Before you begin fearing for the environment, Flexplay has a recycling program!  Flexplay offers two recycling options: local recycling and mail-in.  They claim that the discs are “no different in their environmental impact than regular DVDs” and that they comply with EPA standards.  Of course, they don’t point out that regular DVDs tend to be kept rather than thrown away.  They also offer this shining gem of wisdom, just before mentioning their mail-in recycling option:

“And of course, a Flexplay No-Return DVD Rental completely eliminates the energy usage and emissions associated with a return trip to the video rental store.”

It’s a good thing mailing the disc back to Flexplay is different from mailing a DVD back to Netflix or Blockbuster…  Oh..  wait..

And this brings up another good point.  The purpose of Flexplay is to offer an alternative to rental services.  With both Netflix and Blockbuster, I can request the movies I want online, pay a minimal fee, and have them delivered directly to my house.  At worst, I may drive to a local rental store and rent a movie, much like driving to a store selling Flexplay discs.  With Netflix and Blockbuster, I can keep those movies and watch them as many times as I want, well beyond the 48-hour window I would have for a Flexplay disc.  And, for the environmentally conscious, I then return the disc so it can be sent to another renter, removing the local landfill from the equation.

In short, this is yet another horrible idea.  The environmental impact would be astounding if it ever took off.  Hopefully the public is smart enough to ignore it.

Windows 7… Take Two… Or Maybe Three?

Well, it looks like the early information on Windows 7 might be wrong.  According to an interview with Steven Sinofsky, Senior Vice President of Windows and Windows Live Engineering at Microsoft, a few details you may have heard may not be entirely true.  But then again, it seems that Mr. Sinofsky did tap-dance around a lot of the questions asked.

First and foremost, the new kernel.  There has been a lot of buzz about the new MinWin kernel, which many believe to be integral to the next release of Windows.  However, according to the interview, that may not be entirely true.  When asked about the MinWin kernel, Mr. Sinofsky replied that they are building Windows 7 on top of the Windows Server 2008 and Windows Vista foundation.  There will be no new driver compatibility issues with the new release.  When asked specifically about MinWin, he dodged the question, trying to focus on how Microsoft communicates rather than on new features of Windows.

So does this mean the MinWin kernel has been cut?  Well, not necessarily, but I do think it means that we won’t see the MinWin kernel in the form it has been talked about.  That is, very lightweight, and very efficient.  In order to provide 100% backwards compatibility with Vista, they likely had to add a lot more to the kernel, moving it from the lightweight back into the heavyweight category.  This blog post by Chris Flores, a director at Microsoft, seems to confirm this as well.

The release date has also been pushed back to the 2010 date that was originally stated.  At a meeting before the Inter-American Development Bank, Bill Gates had stated that a new release of Windows would be ready sometime in the next year or so.  Mr. Sinofsky stated firmly that Windows 7 would be released three years after Vista, putting it in the 2010 timeframe.

Yesterday evening, at the All Things Digital conference, a few more details leaked out.  It was stated again that Windows 7 would be released in late 2009.  Interestingly enough, it seems that Windows 7 has “inherited” a few features from its chief competitor, Mac OS X.  According to the All Things Digital site, there’s a Mac OS X-style dock, though I have not been able to find a screenshot showing it.  There are these “leaked” screenshots, though their authenticity (and possibly the information provided with them) is questionable at best.

The biggest feature change, at this point, appears to be the addition of multi-touch to the operating system.  According to Julie Larson-Green, Corporate Vice President of Windows Experience Program Management, multi-touch has been built throughout the OS.  So far it seems to support the basic feature-set that any iPhone or iPod Touch supports.  Touch is the future, according to Bill Gates.  He went on to say:

“We’re at an interesting junction.  In the next few years, the roles of speech, gesture, vision, ink, all of those will become huge. For the person at home and the person at work, that interaction will change dramatically.”

All in all, it looks like Windows 7 will just be more of the same.  With all of the problems they’ve encountered with Vista, I’ll be surprised if Windows 7 becomes the big seller they’re hoping for.  To be honest, I think they would have been better off re-designing everything from scratch with Vista, rather than trying to shovel new features into an already bloated kernel.

Useful Windows Utilities? Really?

Every once in a while, I get an error that I can’t disconnect my USB drive because there’s a file handle opened by another program.  Unfortunately, Windows doesn’t help much beyond that, and it’s left up to the user to figure out which app is responsible and shut it down.  In some cases, the problem persists even after shutting down all of the open apps, and you have to resort to looking through the process list in Task Manager.  Of course, you can always log off or restart the computer, but there has to be an easier way.

In Linux, there’s a nifty little utility called lsof, short for List Open Files, and it does just that.  It displays a current list of open files, including details such as the name of the program using the file, its process ID, the user running the process, and more.  The output can be a bit daunting for an inexperienced user, but it’s a very useful tool.  Combined with the power of grep, a user can quickly identify what files a process has open, or what process has a particular file open.  Very handy for dealing with misbehaving programs.
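
For the curious, here’s the sort of thing that output makes possible.  The script below is a minimal sketch of my own, not part of lsof itself: it assumes a Linux box with lsof on the PATH and the default column layout, and simply asks which processes hold files open under a given directory, which is exactly the stuck-USB-drive question above.

    #!/usr/bin/env python
    """Minimal sketch: find which processes hold files open under a path.
    Assumes lsof is on the PATH and uses its default column layout
    (COMMAND, PID, USER, FD, TYPE, DEVICE, SIZE/OFF, NODE, NAME)."""
    import subprocess
    import sys

    def open_files_under(path):
        """Return (command, pid, user, name) tuples for files open under path."""
        # "lsof +D <dir>" descends into the directory; a non-zero exit status
        # can simply mean "nothing found", so it isn't treated as fatal here.
        proc = subprocess.run(["lsof", "+D", path], capture_output=True, text=True)
        results = []
        for line in proc.stdout.splitlines()[1:]:  # skip the header row
            fields = line.split(None, 8)
            if len(fields) == 9:
                results.append((fields[0], fields[1], fields[2], fields[8]))
        return results

    if __name__ == "__main__":
        for command, pid, user, name in open_files_under(sys.argv[1]):
            print(f"{command} (pid {pid}, user {user}) has {name} open")

Pointed at the mount point of the stuck drive, it prints the offending processes; the path argument is whatever your system happens to mount the drive as.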

Similar tools exist for Windows, but most of them are commercial tools, not available for free use.  There are free utilities out there, but I hadn’t found any that gave me the power I wanted.  That is, until today.

I stumbled across a nifty tool called Process Explorer.  Funnily enough, it’s actually a Microsoft tool, though they seem to have acquired it by purchasing SysInternals.  Regardless, it’s a very powerful utility, and came in quite handy for solving this particular problem.


In short, I had opened a link in Firefox by clicking on it in Thunderbird.  After closing Thunderbird, I tried to unmount my USB drive, where I have Portable Thunderbird installed, but I received an error that a file was still open.  Apparently Firefox was the culprit, and closing it released the handle.
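
Process Explorer answers that question interactively, but the same “who has a handle under this drive?” question can be asked from a script.  The sketch below is my own illustration, not how Process Explorer works internally: it assumes the third-party psutil package is installed, and it will miss handles belonging to processes it isn’t allowed to inspect.

    """Sketch: list processes holding files open under a drive or folder.
    Illustration only; relies on the third-party psutil package rather than
    Process Explorer, and needs sufficient privileges to inspect processes."""
    import sys
    import psutil

    def holders(prefix):
        """Yield (pid, name, path) for every open file whose path starts with prefix."""
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                for handle in proc.open_files():
                    if handle.path.lower().startswith(prefix.lower()):
                        yield proc.info["pid"], proc.info["name"], handle.path
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue  # skip processes we aren't allowed to look at

    if __name__ == "__main__":
        drive = sys.argv[1] if len(sys.argv) > 1 else "E:\\"  # hypothetical drive letter
        for pid, name, path in holders(drive):
            print(f"{name} (pid {pid}) holds {path}")

In the Portable Thunderbird case above, something like this would have pointed straight at Firefox.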

The SysInternals page on Microsoft’s TechNet site lists a whole host of utilities for debugging and monitoring Windows systems.  These can be fairly dangerous in the hands of the inexperienced, but for those of us who know what we’re doing, these tools can be invaluable.  I’m quite happy I stumbled across these.  The closed nature of Windows can be extremely frustrating at times when I can’t figure out what’s going on.  I’m definitely still a Linux user at heart, but these tools make using Windows a tad more bearable.

H.R. 5994

What a title, eh?  Well, that little title up there may impact how you use the Internet in the future..  H.R. 5994, known as the “Internet Freedom and Non-Discrimination Act of 2008,” is the latest attempt by the US Congress to get a handle on Internet access.  In short, this is another play in the Net Neutrality battle.  I’m no lawyer, but it seems that this is a pretty straightforward document.

H.R. 5994 is intended to be an extension of the Clayton Anti-Trust Act of 1914.  It is intended to “promote competition, to facilitate trade, and to ensure competitive and nondiscriminatory access to the Internet.”  The main theme, as I see it, is that providers can’t discriminate against content providers.  In other words, if they prioritize web traffic on the network, then all web traffic, regardless of origin, should be prioritized.

At first glance, this seems to be a positive thing; however, there may be a few loopholes.  For instance, take a look at the following from Section 28(a):

“(3)(A) to block, to impair, to discriminate against, or to interfere with the ability of any person to use a broadband network service to access, to use, to send, to receive, or to offer lawful content, applications or services over the Internet;”

From the looks of it, it sounds like you can’t prevent known “bad users” from getting an account, provided they are using the account for legal purposes.  As an example, you couldn’t prevent a known spammer from getting an account, provided, of course, that they obey the CAN-SPAM Act.

And what about blocklists?  Spam blocklists are almost a necessity for mail servers these days, otherwise you have to process every single mail that comes in.  3(A) specifically dictates that you can’t block lawful content…  Unfortunately, it’s not always possible to determine if the mail is lawful until it’s processed.  So this may turn into a loophole for spammers.

The act goes on with the following:

“(4) to prohibit a user from attaching or using a device on the provider’s network that does not physically damage or materially degrade other users’ utilization of the network;”

This one is kind of scary because it does not dictate the type of device, or put any limitations on the capabilities of the device, provided it “does not physically damage or materially degrade other users’ utilization of the network.”  So does that mean I can use any type of DSL or Cable modem that I choose?  Am I considered to be damaging the network if I use a device that doesn’t allow the provider local access?  Seems to me that quite a few providers wouldn’t be happy with this particular clause…

Here’s the real meat of the Net Neutrality argument, though.  Section 28(b) states this:

“(b) If a broadband network provider prioritizes or offers enhanced quality of service to data of a particular type, it must prioritize or offer enhanced quality of service to all data of that type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or enhanced quality of service.”

Wham!  Take that!  Basically, you can’t prioritize your own traffic at the expense of others.  So a local provider who offers a VoIP service can’t prioritize their own and not prioritize (or block) Skype, Vonage, or others.  But, there’s a problem here..  Does the service have to use established standards to be prioritized?  For instance, Skype uses a proprietary VoIP model.  So does that mean that providers do not have to prioritize it?

Providers do, however, get some rights as well.  For instance, Section 28 (c) specifically states:

    “(c) Nothing in this section shall be construed to prevent a broadband network provider from taking reasonable and nondiscriminatory measures–
    (1) to manage the functioning of its network, on a systemwide basis, provided that any such management function does not result in discrimination between content, applications, or services offered by the provider and unaffiliated provider;
    (2) to give priority to emergency communications;
    (3) to prevent a violation of a Federal or State law, or to comply with an order of a court to enforce such law;
    (4) to offer consumer protection services (such as parental controls), provided that a user may refuse or disable such services;
    (5) to offer special promotional pricing or other marketing initiatives; or
    (6) to prioritize or offer enhanced quality of service to all data of a particular type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or quality of service.”

So providers are allowed to protect the network, protect consumers, and still make a profit.  Of course, assuming this becomes law, only time will tell what the courts will allow a provider to consider “protection” to be…

It looks like this is, at the very least, a good start to tackling this issue.  That is, if you believe that the government should be involved with this.  At the same time, this doesn’t appear to be something most providers would be interested in.  From a consumer standpoint, I want to be able to get the content I want without being blocked because it comes from Google and not Yahoo, with whom the provider has an agreement.  And since most consumers are in an area with only one or two providers, this bill could be a good thing.  It prevents a monopoly-type situation where the consumer has no choice but to take the less-than-desirable deal.

This is one of those areas where there may be no solution.  While I side with the providers in that they should be able to manage their network as they see fit, I can definitely see how something needs to be done to ensure that providers don’t take unfair advantage.  Should this become law, I think it will be a win for content providers rather than Internet providers and consumers.

Data Reliance

As we become a more technologically evolved society, our reliance on data increases.  E-Mail, web access, electronic documents, bank accounts, you name it.  The loss of any one of these can have devastating consequences, from loss of productivity, to loss of home, health, or even, in extreme cases, life.

Unfortunately, I get to experience this firsthand.  At the beginning of the week, there was a failure on the shared system I access at work.  Initially it seemed this was merely a permissions issue; we had just lost access to the files for a short time.  However, as time passed, we learned that the reality of the situation was much worse.

Like most companies, we rely heavily on shared drive access for collaboration and storage.  Of course, this means that the majority of our daily work exists on those shared drives, making them pretty important.  Someone noticed this at some point and decided that it was a really good idea to back them up on a regular basis.  Awesome, so we’re covered, right?  Well, yeah..  sort of, but not really.

Backups are a wonderful invention.  They ensure that you don’t lose any data in the event of a critical failure.  Or, at the very least, they minimize the amount of data you lose..  Backups don’t run on a constant basis, so there’s always some lag time in there…  But, regardless, they do keep fairly up-to-date records of what was on the drive.

To make matters even better, we have a procedure for backups which includes keeping them off-site.  Off-site storage ensures that we have backups in the event of something like a fire or a flood.  This usually means there’s a bit of time between a failure and a restore because someone has to go get those backups, but that’s ok, it’s all in the name of disaster recovery.

So here we are with a physical drive failure on our shared drive.  Well, that’s not so bad, you’d think, it’s a RAID array, right?  Well, no.  Apparently not.  Why don’t we use RAID arrays?  Not a clue, but it doesn’t much matter right now; all my work from the past year is inaccessible.  What am I supposed to do for today?

No big deal, I’ll work on some little projects that don’t need shared drive access, and they’ll fix the drive and restore our files.  Should only take a few hours, it’ll be finished by tomorrow.  Boy, was I wrong…

Tomorrow comes and goes, as does the next day, and the next.  Little details leak out as time goes on.  First we have a snafu with the wrong backup tapes being retrieved.  Easily fixed, they go get the correct ones.  Next, we receive reports of intermittent corruption of files, but it’s nothing to worry about, it’s only a few files here and there.  Of course, we still have no access to anything, so we can’t verify any of these reports.  Finally, they determine that the access permissions were corrupted and they need to fix them.  Once completed, we regain access to our files.

A full work week passes before we finally have drive access back.  Things should go back to normal now, we’ll just get on with our day-to-day business.  *click*  Hrm..  Can’t open the file, it’s corrupt.  Oh well, I’ll just have to re-write that one..  It’s ok though, the corruption was limited.  *click*  That’s interesting..  all the files in this directory are missing..  Maybe they forgot to restore that directory..  I’ll have to let them know…  *click*  Another corrupt file…  Man, my work is piling up…

Dozens of clicks later, the full reality hits me…  I have lost hundreds of hours of work.  Poof, gone.  Maybe, just maybe, they can do something to restore it, but I don’t hold much hope…  How could something like this happen?  How could I just lose all of that work?  We had backups!  We stored them off-site!

So, let this be a lesson to you.  Backups are not the perfect solution.  I don’t know all the details, but I can guess what happened.  Tape backup is pretty reliable, I’ve used it myself for years.  I’ve since graduated to hard drive backup, but I still use tapes as a secondary backup solution.  There are problems with tape, though.  Tapes tend to stretch over time, which ruins them and makes them unreliable.  Granted, they do last a while, but it can be difficult to determine when a tape has gone bad.  Couple that with a lack of RAID on the server and you have a recipe for disaster.

In addition to all of this, I would be willing to bet that they did not test backups on a regular basis.  Random checks of data from backups are an integral part of the backup process.  Sure, it seems pointless now, but imagine how pointless the backups themselves will feel when, after hours of restoring files, you find that they’re all corrupt.  Random checks aren’t so bad when you think of it that way…

So I’ve lost a ton of data, and a ton of time.  Sometimes, life just sucks.  Moving forward, I’ll make my own personal backup of files I deem important, and I’ll check them on a regular basis too…
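
Since I’m promising to check those personal backups regularly, here is roughly what I have in mind.  It’s a minimal sketch of the spot-check idea, not a real backup tool, and it assumes a checksum manifest (lines of “sha256-hash  relative/path”) was written when the backup was made; the file locations at the bottom are placeholders.

    """Sketch of a random backup spot-check.
    Assumes a manifest of "sha256  relative/path" lines created at backup time;
    the paths in main are illustrative placeholders."""
    import hashlib
    import os
    import random

    def sha256_of(path):
        """Hash a file in 1 MB chunks so large files don't exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def spot_check(manifest_path, backup_root, samples=20):
        """Re-hash a random sample of backed-up files and report mismatches."""
        with open(manifest_path) as f:
            entries = [line.split(None, 1) for line in f if line.strip()]
        for expected, rel_path in random.sample(entries, min(samples, len(entries))):
            full_path = os.path.join(backup_root, rel_path.strip())
            status = "OK" if sha256_of(full_path) == expected else "CORRUPT"
            print(status, rel_path.strip())

    if __name__ == "__main__":
        spot_check("backup.manifest", "/mnt/backup")  # placeholder locations

Twenty random files a week won’t catch everything, but it would have flagged a restore full of corrupt files long before a drive failure made it matter.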

Instant Kernel-ification


Server downtime is the scourge of all administrators, sometimes to the extent of bypassing necessary security upgrades, all in the name of keeping machines online.  Thanks to an MIT graduate student, Jeffery Brian Arnold, keeping a machine online, and up to date with security patches, may be easier than ever.

Ksplice, as the project is called, is a small executable that allows an administrator to patch security holes in the Linux kernel without rebooting the system.  According to the Ksplice website:

“Ksplice allows system administrators to apply security patches to the Linux kernel without having to reboot. Ksplice takes as input a source code change in unified diff format and the kernel source code to be patched, and it applies the patch to the corresponding running kernel. The running kernel does not need to have been prepared in advance in any way.”

Of course, Ksplice is not a perfect silver bullet; some patches cannot be applied using it.  Specifically, any patch that requires “semantic changes to data structures” cannot be applied to the running kernel.  A semantic change is a change “that would require existing instances of kernel data structures to be transformed.”
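
A loose userspace analogy may help here.  The snippet below is ordinary Python monkey-patching, nothing to do with Ksplice’s actual mechanism of rewriting kernel code in memory, but it shows why swapping code in a running program is tractable while changing the shape of live data is not.

    """Userspace analogy for the hot-patching limitation described above.
    This is plain Python monkey-patching, not how Ksplice works; it only
    illustrates code changes versus data-structure ("semantic") changes."""

    class Connection:
        def __init__(self, host):
            self.host = host  # existing instances only carry `host`

        def describe(self):
            return "connection to " + self.host

    conn = Connection("example.org")  # a "live" object created before any patch

    # Patch 1: a pure code change.  Existing instances pick it up immediately.
    def describe_patched(self):
        return "[patched] connection to " + self.host
    Connection.describe = describe_patched
    print(conn.describe())

    # Patch 2: new code that assumes a new field (a semantic change).
    def describe_with_port(self):
        return "connection to " + self.host + ":" + str(self.port)
    Connection.describe = describe_with_port
    try:
        print(conn.describe())
    except AttributeError as err:
        print("live object breaks:", err)  # old instances were never transformed

Ksplice hits the same wall inside the kernel: it can splice in new functions, but it cannot reach out and transform every existing instance of a changed structure.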

But that doesn’t mean that Ksplice isn’t useful.  Jeffery looked at 32 months of kernel security patches and found that 84% of them could be applied using Ksplice.  That’s sure to increase the uptime.

I have to wonder, though, what is so important that you need that much uptime.  Sure, it’s nice to have the system run all the time, but if you have something that is absolutely mission critical, that must run 24×7, regardless, won’t you have a backup or two?  Besides which, you generally want to test patches before applying them to such sensitive systems.

There are, of course, other uses for this technology.  As noted on the Ksplice website, you can also use Ksplice to “add debugging code to the kernel or to make any other code changes that do not modify data structure semantics.”  Jeffery has posted a paper detailing how the technology works.

Pretty neat technology.  I wonder if this will lead to zero downtime kernel updates direct from Linux vendors.  As it is now, you’ll need to locate and manually apply kernel patches using this tool.


Virtuality, Part Deux

I had the chance to install VirtualBox and I must say, I’m pretty impressed.  To start, VirtualBox is only a 15 meg download.  That’s pretty small when compared to Virtual PC, and downright puny when compared to VMWare Workstation.  There seems to be a lot packed in there, however.  As with VMWare, VirtualBox has special extensions that can be installed into the guest OS for compatibility.

Installation was a snap, similar to that of VMWare, posing no real problem.  The first problem I encountered was after rebooting the guest and logging in.  Apparently, I ran out of memory on the host OS, so VirtualBox gracefully paused the guest OS and alerted me.  After closing some open programs, I was able to resume the guest OS with no problem.  These low memory errors remain the only real problem I have with VirtualBox at this point.

Networking in VirtualBox is a little different from that of VMWare, and took me a few tries before I figured it out.  By default, the system is installed with no virtual adapters, making NAT the only means by which the guest OS can speak to the world.  By installing a virtual interface on the host, through the use of Host Interface Networking (HIF), you can allow the guest OS direct access to the network.  After the interface is created, it is bridged, through the use of a Windows Network Bridge interface, with the interface you want the traffic to flow out of.  Adding and removing an interface in the bridge sometimes takes a minute or two.  I honestly have no idea what Windows is doing during this time, but until the interface is added/removed, networking ceases to function.  I have also noticed that if VirtualBox is running, regardless of the state of the guest OS, modifying the bridge will fail.

Installation of the guest extensions, which required GCC and the kernel headers to be installed on the guest OS, was relatively painless.  After making sure the necessary packages were installed in CentOS, VirtualBox compiled and installed the extensions.  This allowed me to extend my desktop resolution to 1024×768, as well as enable auto-capture of the mouse pointer when it enters the virtual machine window.  According to the documentation, the extensions also add support for a synchronized clock, shared folders and clipboard, as well as automated Windows logins (assuming you are running a Windows guest OS).

VirtualBox is quite impressive, and I’ve started using it full time.  It’s not quite as polished as VMWare is, but it definitely wins price-wise.  I’m sure VMWare has many more features that I am not using, that may actually justify the price.  For now, I’ll stick with VirtualBox until something forces me to switch.

In related news, I’ve been informed by LonerVamp that VMWare Server, which is free, would also satisfy my needs.  I am a bit confused, though, that a server product would be released for free while a workstation product would not.  I was initially under the impression that the server product merely hosted the OS, allowing another VMWare product to remotely attach to it.  That doesn’t appear to be correct, however.  Can someone explain the major differences to me?  Why would I want to use Workstation as opposed to Server?

Virtuality

I’ve been playing around a bit with two of the major virtualization programs, VMWare Workstation and Microsoft Virtual PC.  My interest at this point has only been to run an alternative operating system on my primary PC.  In this particular case, I was looking to run CentOS 5.1 on a Windows XP laptop.

I started out with Virtual PC, primarily because of the price, free.  My goal here is to make my life a little easier when developing web applications.  It would be nice to run everything locally, allowing me freedom to work virtually anywhere, and limiting development code on a publicly accessible machine.  There are, of course, other options such as a private network with VPNs, but I’m really not looking to go that far yet.

Right from the start, Virtual PC is somewhat of a problem.  Microsoft’s website claims that VPC can run nearly any x86-based OS, but only Windows operating systems are listed there.  This is expected, of course.  So, knowing that this is merely an x86 emulation package, I went ahead and installed it.

The first thing I noticed was the extremely long time it took to install.  Installation of the files themselves seemed to go relatively quickly, but the tail end of the install, where it installs the networking portion of VPC, was extremely slow and resulted in a loss of connectivity on my machine for about 5-10 minutes.  I experienced a similar issue uninstalling the software.

Once VPC was installed, I used a DVD copy of CentOS 5.1 to install the guest OS.  I attempted a GUI installation, but VPC wouldn’t support the resolution and complained, resulting in a garbled virtual screen.  I resorted to installation using the TUI.  Installation itself went pretty smoothly, no real problems encountered.  Once completed, I fired up the new OS.

Because I was in text-mode within the virtual OS, the window was pretty small on my screen.  I flipped to full-screen mode to compensate.  I noticed, however, that the fonts were not very sharp, looking very washed out.  I also experienced problems cutting and pasting between the host and guest OS.  Simply put, I could not cut and paste between the two at all.

The guest OS seemed sluggish, but I attributed this to an underpowered laptop running two operating systems simultaneously.  Overall, Virtual PC seemed to work ok, but the lack of graphical support and the inability to cut and paste between the host and guest really made working with it problematic.

My second choice for virtualization was VMWare Workstation.  VMWare has been around for a very long time, and I remember using their product years ago.  VMWare is not free, running a cool $189 for a single license; however, there is a 30-day trial key you can get.  I signed up for this and proceeded to install the software.

The first major difference between VPC and Workstation is the size of the program.  VPC clocks in at a measly 30 Megs, while Workstation runs about 330 Megs.  Installation is a snap, however, and proceeded quite quickly.  I didn’t experience the same network problems that I did with VPC.

Once installed, I proceeded to load the same guest OS using the same memory and hard drive parameters as I did with VPC.  VMWare correctly configured the graphical display to handle the GUI installer, and I was able to install the Gnome desktop as well.  Installation seemed to go a bit quicker than with VPC, despite the added packages for X Window and Gnome.

After installation was complete, I booted into the new OS.  VMWare popped up a window notifying me that the Guest OS was not running the VMWare Tools package and that installing it would result in added speed and support.  I clicked OK to bypass the message and allowed the OS to continue loading.

Almost immediately I noticed that VMWare was running quite a bit faster than VPC.  The guest OS was very responsive and I was able to quickly configure MySQL and Apache.  I also noticed that VMWare made the guest OS aware of my sound card, USB controller, and just about every other device I have in the system.  Fortunately, it’s easy enough to remove those from the configuration.  I really have no need for sound effects when I’m merely writing code..  :)

Overall, VMWare has been running very well.  Well enough to make me think about getting a license for it and using it full time.  However, my overall goal is to move over to the OSX platform, so I’m not sure I want to blow $200 on something I’ll only use for a few months (fingers crossed)…  Another alternative may be VirtualBox, an open-source alternative.  I’ll be downloading and checking that out soon enough.

In the end, though, if you’re looking to run high-end applications, or even use a specific OS full-time, there’s nothing better than a full install on a real machine as opposed to a virtual one.

Prepare yourself, Firefox 3 is on the way…

Having just released beta 4, the Mozilla Foundation is well on its way to making Firefox 3 a reality.  Firefox 3 aims to bring a host of new features, as well as speed and security enhancements.

On the front end, they updated the theme.  Yes, again.  I’m not entirely sure what the reasoning is, but I’m sure it’s some inane marketing thing.  Probably something along the lines of “we need to make it look shiny and new!”  It’s not bad, though, and only takes a few moments to re-acquaint yourself with the basic functions.

One significant change is the function of the forward and back history buttons.  In previous versions you could click towards the bottom of either button and get a list of the pages forward or back in your history, depending on the button you pressed.  They have combined this into a single button now, with a small dot identifying where in the history you are.  Back history expands to the bottom of the list while forward history moves up.  It’s a little hard to explain in words, but it’s not that difficult in action.

Next up is the download manager.  They revamped the entire download manager, making it look quite different.  Gone is the global “Clear History” button, in is the new “Search” box.  It seems that one of the themes of this release is that history is important, so they added features to allow you to quickly find relevant information.  But fear not, you can still clear the list by right clicking and choosing clear list.  It’s just not as apparent as it used to be.  In addition, you can continue downloads that were interrupted by network problems, or even by closing the browser.

Some of the pop-ups have been reduced as well.  For instance, when new passwords are entered, instead of getting a popup on the screen asking if you want to save the username and password, a bar appears at the top of the page.  This is a bit more fluid, not interrupting the browsing experience as it did in the past.

Many of the dialogs related to security have been revamped in an attempt to make them clearer for non-technical users.  For instance, when encountering an invalid SSL certificate, Firefox now displays a full-page error explaining the problem instead of the old pop-up dialog.

Other warnings have been added as well.  Firefox now attempts to protect you from malware and web forgeries.  In addition, the browser now handles Extended Validation SSL certificates, displaying the name of the company in green on the location bar.  Clicking on the icon to the left of the URL provides a small popup with additional information about your connection to the remote website.

A plugin manager has been added, allowing the user to disable individual plugins.  This is a very welcome addition to the browser.

The bookmark manager has been updated as well.  In addition to placing bookmarks in folders, users can now add tags.  Using the bookmark sidebar, users can quickly search by tag, locating bookmarks that are in multiple folders.  Smart bookmarks show the most recently used bookmarks, as well as the most recently bookmarked sites and tags.

The location bar has been updated as well.  As you type in the location bar, Firefox automatically searches through your bookmarks, tags, and history, displaying the results.  Results are sorted by both how frequently and how recently you visited each page.  For users who clear their history on a regular basis, this makes the location bar much more useful.
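
Mozilla’s name for this combined frequency-and-recency ranking is “frecency.”  The snippet below is only a toy approximation of the idea, not Firefox’s actual algorithm or weights: each visit contributes a score that decays with age, so a page visited often and recently floats to the top.

    """Toy frequency-plus-recency ranking, loosely in the spirit of Firefox's
    "frecency" score.  The half-life and weighting are invented for
    illustration; Firefox's real algorithm differs."""
    import time

    HALF_LIFE_DAYS = 30.0  # assumption: a visit loses half its weight after 30 days

    def score(visit_timestamps, now=None):
        """Sum a decayed weight over all visits to a page."""
        now = now or time.time()
        total = 0.0
        for ts in visit_timestamps:
            age_days = (now - ts) / 86400.0
            total += 0.5 ** (age_days / HALF_LIFE_DAYS)
        return total

    if __name__ == "__main__":
        now = time.time()
        day = 86400
        history = {
            "daily-news-site": [now - i * day for i in range(1, 8)],        # a week of recent visits
            "old-favorite": [now - (200 + i) * day for i in range(1, 30)],  # many visits, all stale
        }
        for url, visits in sorted(history.items(), key=lambda kv: -score(kv[1])):
            print(round(score(visits), 2), url)

With these made-up weights, seven recent visits easily outrank a pile of visits from half a year ago, which matches the behavior you see as you type.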

Behind the scenes there have been a number of welcome changes.  The most noticeable change is speed.  Beta 4 is insanely fast compared to previous versions.  In fact, it seems to be significantly faster than Internet Explorer, Opera, and others!  And, as an added bonus, it seems to use less memory as well.  Ars Technica did some testing to this effect and came out with some surprising results.

Mozilla attributes the speed increase both to improvements in the JavaScript engine and to profile-guided optimizations.  In short, they used profiling tools to identify bottlenecks in the code and fix them.  The reduction in memory is attributed to new allocators and collectors, as well as a reduction in leaky code.
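
To give a flavor of that “profile first, then fix the hot spot” workflow, here is a generic illustration using Python’s built-in profiler.  It has nothing to do with Mozilla’s actual C++ toolchain or build-time profile-guided optimization; it just shows how a profile points you at the slow function.

    """Generic profile-then-fix illustration; not Mozilla code.
    cProfile's report makes the exponential recursion stand out."""
    import cProfile
    import pstats

    def slow_fib(n):
        # Deliberately wasteful: exponential-time recursion.
        return n if n < 2 else slow_fib(n - 1) + slow_fib(n - 2)

    def fast_fib(n):
        # The fix the profile points toward: iterate instead of re-computing.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    if __name__ == "__main__":
        profiler = cProfile.Profile()
        profiler.enable()
        slow_fib(27)
        fast_fib(27)
        profiler.disable()
        # The cumulative-time report makes the hot spot obvious.
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)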

Firefox 3 was built on top of the updated Gecko 1.9 engine.  The Gecko engine is responsible for the actual layout of the page on the screen.  It supports the various web standards such as CSS, HTML, XHTML, JavaScript, and more.  As the Gecko engine has evolved, it has gained additional capabilities, as well as performance.  In fact, using this new engine, Firefox now passes the coveted Acid 2 test.

Overall, the latest beta feels quite stable and I’ve begun using it on a daily basis.  It is definitely faster than previous releases.  I definitely recommend checking it out.  On a Windows machine, it will install separately from your primary Firefox installation.  It imports all of your bookmarks and settings after you install it, so there is no danger of losing anything from your primary install.  Just be aware that there is no current guarantee that any new bookmarks, changes, add-ons, etc. will be imported into the final installation.  Many add-ons are still non-functional, though there are plenty more that work fine.

Best of luck!

Bandwidth in the 21st Century

As the Internet has evolved, the one constant has been the typical Internet user, who mostly uses the Internet to browse websites, a relatively low-bandwidth activity.  Even as the capabilities of the average website evolved, bandwidth usage remained relatively low, increasing at a slow rate.

In my own experience, a typical Internet user, accessing the Internet via DSL or cable, only uses a very small portion of the available bandwidth.  Bandwidth is only consumed for the few moments it takes to load a web page, and then usage falls to zero.  The only real difference was the online gamer.  Online gamers use a consistent amount of bandwidth for long periods of time, but the total bandwidth used at any given moment is still relatively low, much lower than the available bandwidth.

Times are changing, however.  In the past few years, peer-to-peer applications such as Napster, BitTorrent, Kazaa, and others have become more mainstream, seeing widespread usage across the Internet.  Peer-to-peer applications are used to distribute files, both legal and illegal, amongst users across the Internet.  Files range in size from small music files to large video files.  Modern applications such as video games and even operating systems have incorporated peer-to-peer technology to facilitate rapid deployment of software patches and updates.

Voice and video applications are also becoming more mainstream.  Services such as Joost, Veoh, and YouTube allow video streaming over the Internet to the user’s PC.  Skype allows the user to make phone calls via their computer for little or no cost.  Each of these applications uses bandwidth at a constant rate, vastly different from the pattern of web browsing.

Hardware devices such as the Xbox 360, AppleTV, and others are helping to bring streaming Internet video to regular televisions within the home.  The average user is starting to take advantage of these capabilities, consuming larger amounts of bandwidth for extended periods of time.

The end result of all of this is increased bandwidth usage within the provider network.  Unfortunately, most providers have based their current network architectures on outdated over-subscription models, expecting users to continue their web-browsing patterns.  As a result, many providers are scrambling to keep up with the increased bandwidth demand.  At the same time, they continue releasing new access packages claiming faster and faster speeds.
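
To put rough numbers on that over-subscription model (every figure below is hypothetical, chosen only to illustrate the arithmetic): if a provider sells 6 Mbps packages and shares a 1 Gbps uplink among 5,000 subscribers, the network only works as long as average per-user demand stays tiny.  A modest fraction of users streaming video around the clock breaks the assumption.

    """Back-of-the-envelope over-subscription arithmetic.
    All numbers are hypothetical; the point is the ratio, not the values."""

    UPLINK_MBPS = 1000.0   # assumed shared uplink: 1 Gbps
    SOLD_MBPS = 6.0        # assumed advertised speed per subscriber
    SUBSCRIBERS = 5000     # assumed subscribers sharing that uplink

    ratio = (SOLD_MBPS * SUBSCRIBERS) / UPLINK_MBPS
    print(f"Over-subscription ratio: {ratio:.0f}:1")  # 30:1

    # Browsing era: average per-user demand is tiny (assume ~50 kbps).
    browsing_demand = 0.05 * SUBSCRIBERS
    print(f"Browsing-era demand: {browsing_demand:.0f} Mbps of {UPLINK_MBPS:.0f} Mbps available")

    # Streaming era: assume 20% of users each pull a constant 2 Mbps stream.
    streaming_demand = 0.2 * SUBSCRIBERS * 2.0
    verdict = "congested" if streaming_demand > UPLINK_MBPS else "fine"
    print(f"Streaming-era demand: {streaming_demand:.0f} Mbps ({verdict})")

With these made-up numbers, the uplink survives the browsing pattern with room to spare, but demand is double the uplink the moment a fifth of the users start streaming, which is exactly the scramble providers now find themselves in.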

Some providers are using questionable practices to ensure the health of their network.  For instance, Comcast is allegedly using packet sniffing techniques to identify BitTorrent traffic.  Once identified, they send a reset command to the local BitTorrent client, effectively severing the connection and canceling any file transfers.  This has caught the attention of the FCC, which has released a statement that it will step in if necessary.

Other providers, such as Time Warner, are looking into tiered pricing for Internet access.  Such plans would allow the provider to charge extra for users that exceed a pre-set limit.  In other words, Internet access becomes more than the typical 3/6/9 Mbps access advertised today.  Instead, the high speed access is offset by a total transfer limit.  Hopefully these limits will be both reasonable and clearly defined.  Ultimately, though, it becomes the responsibility of the user to avoid exceeding the limit, similar to that of exceeding the minutes on a cell phone.

Pre-set limits have problems as well, though.  For instance, Windows will check for updates at a regular interval, using Internet bandwidth to do so.  Granted, this is generally a small amount, but it adds up over time.  Another example is PPPoE and DHCP traffic.  Most DSL customers are configured using PPPoE for authentication.  PPPoE sends keep-alive packets to the BRAS to ensure that the connection stays up.  Depending on how the ISP calculates bandwidth usage, these packets will likely be included in the calculation, resulting in “lost” bandwidth.  Likewise, DHCP traffic, used mostly by cable subscribers, will send periodic requests to the DHCP server.  Again, this traffic will likely be included in any bandwidth calculations.
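
Out of curiosity, here is what that housekeeping traffic roughly amounts to.  The packet sizes and intervals are assumptions for illustration, not any particular ISP’s figures.

    """Rough monthly overhead from PPPoE/DHCP housekeeping traffic.
    Packet sizes and intervals are assumptions for illustration only."""

    SECONDS_PER_MONTH = 30 * 24 * 3600

    def monthly_overhead_mb(bytes_per_exchange, interval_seconds):
        """Megabytes per month consumed by one periodic exchange."""
        exchanges = SECONDS_PER_MONTH / interval_seconds
        return exchanges * bytes_per_exchange / (1024 * 1024)

    # Assumed: a PPPoE LCP echo request/reply pair (~64 bytes each) every 30 seconds.
    pppoe = monthly_overhead_mb(bytes_per_exchange=2 * 64, interval_seconds=30)

    # Assumed: a DHCP renewal exchange (~600 bytes total) every 12 hours.
    dhcp = monthly_overhead_mb(bytes_per_exchange=600, interval_seconds=12 * 3600)

    print(f"PPPoE keep-alives: ~{pppoe:.1f} MB per month")
    print(f"DHCP renewals:     ~{dhcp:.3f} MB per month")

Roughly ten megabytes a month is noise next to a multi-gigabyte cap, but it is not zero, and it is exactly the kind of traffic a subscriber never sees yet may still be billed for, depending on where the ISP takes its measurements.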

In the end, it seems that substantial changes to the ISP structure are coming, but it is unclear what those changes may be.  Tiered bandwidth usage may be making a comeback, though I suspect that consumers will fight against it.  Advances in transport technology make increasing bandwidth a simple matter of replacing aging hardware.  Of course, replacements cost money.  So, in the end, the cost may fall back on the consumer, whether they like it or not.