DIVX: Return of the useless

In the late 1990s, Circuit City partnered with an entertainment law firm, Ziffren, Brittenham, Branca and Fischer, to create a new type of video standard called DIVX.  DIVX was intended to be an alternative to movie rentals.  In short, you’d purchase a DIVX disc, reminiscent of a DVD, and you had the right to watch that disc as many times as you wanted within 48 hours of your first viewing.  After those 48 hours passed, you had to pay an additional fee to continue viewing the disc.

This all sounds fine and dandy until you realize a few things.  This was a new format, incompatible with DVD players, which had come onto the market a few years earlier.  As a result, expensive DIVX or DIVX/DVD combo players had to be purchased.  These players had to be connected to a phone line so they could verify that the owner could play the disc.

The DIVX format quickly died out, leaving early adopters stranded with unusable discs and useless players.  Another fine example of the usefulness of DRM schemes.

Fast forward to 2008 and to Flexplay Entertainment.  Flexplay is a new twist on the old DIVX format.  This time, however, consumers only have to pay once.  Sort of.

Flexplay is a fully compatible DVD disc, with a twist.  You purchase the disc, and after you open the package, you have 48 hours to watch it before it “self-destructs.”  According to How Stuff Works, a standard DVD is a dual-layer disc that starts life as two separate pieces.  After the data is written on each piece, they are glued together using a resin adhesive.  The adhesive is clear, allowing laser light to pass through the first layer when necessary and read the second layer.

Flexplay works by replacing the resin adhesive with a special chemical compound that changes when exposed to oxygen.  Over time, the compound changes color and becomes opaque, rendering the DVD useless.  Once the disc has become opaque, it gets thrown away.

Before you begin fearing for the environment, Flexplay has a recycling program!  Flexplay offers two recycling options: local recycling and mail-in.  They claim that the discs are “no different in their environmental impact than regular DVDs” and that they comply with EPA standards.  Of course, they don’t point out that regular DVDs tend to be kept rather than thrown away.  They also offer this shining gem of wisdom, just before mentioning their mail-in recycling option:

“And of course, a Flexplay No-Return DVD Rental completely eliminates the energy usage and emissions associated with a return trip to the video rental store.”

It’s a good thing mailing the disc back to Flexplay is different from mailing a DVD back to Netflix or Blockbuster…  Oh..  wait..

And this brings up another good point.  The purpose of Flexplay is to offer an alternative to rental services.  With both Netflix and Blockbuster, I can request the movies I want online, pay a minimal fee, and have them delivered directly to my house.  At worst, I may drive to a local rental store and rent a movie, which is no different from driving to a store that sells Flexplay discs.  With Netflix and Blockbuster, I can keep those movies and watch them as many times as I want, well beyond the 48-hour window I would have with a Flexplay disc.  And, for the environmentally conscious, I then return the disc so it can be sent to another renter, keeping it out of the local landfill.

In short, this is yet another horrible idea.  The environmental impact would be astounding if this ever took off.  Hopefully the public is smart enough to ignore it.

Windows 7… Take Two… Or Maybe Three?

Well, it looks like the early information on Windows 7 might be wrong.  According to an interview with Steven Sinofsky, Senior Vice President of Windows and Windows Live Engineering at Microsoft, a few details you may have heard may not be entirely true.  Then again, it seems that Mr Sinofsky tap-danced around a lot of the questions he was asked.

First and foremost, the new kernel.  There has been a lot of buzz about the new MinWin kernel, which many believe to be integral to the next release of Windows.  However, according to the interview, that may not be entirely true.  When asked about the MinWin kernel, Mr Sinofsky replied that they are building Windows 7 on top of the Windows Server 2008 and Windows Vista foundation, and that there will be no new driver compatibility issues with the new release.  When asked specifically about MinWin, he dodged the question, focusing on how Microsoft communicates rather than on new features of Windows.

So does this mean the MinWin kernel has been cut?  Well, not necessarily, but I do think it means we won’t see the MinWin kernel in the form it has been talked about: very lightweight and very efficient.  In order to provide 100% backwards compatibility with Vista, they likely had to add a lot more to the kernel, moving it from the lightweight back into the heavyweight category.  This blog post by Chris Flores, a director at Microsoft, seems to confirm this as well.

The release date has also been pushed back to the 2010 timeframe that was originally stated.  At a meeting before the Inter-American Development Bank, Bill Gates had stated that a new release of Windows would be ready sometime in the next year or so.  Mr Sinofsky stated firmly that Windows 7 would be released three years after Vista, putting it in 2010.

Yesterday evening, at the All Things Digital conference, a few more details leaked out.  It was stated again that Windows 7 would be released in late 2009.  Interestingly enough, it seems that Windows 7 has “inherited” a few features from its chief competitor, Mac OS X.  According to the All Things Digital site, there’s a Mac OS X-style dock, though I have not been able to find a screenshot showing it.  There are these “leaked” screenshots, though their authenticity (and possibly the information provided with them) is questionable at best.

The biggest feature change, at this point, appears to be the addition of multi-touch to the operating system.  According to Julie Larson-Green, Corporate Vice President of Windows Experience Program Management, multi-touch has been built in throughout the OS.  So far it seems to support the basic feature set that the iPhone and iPod Touch support.  Touch is the future, according to Bill Gates.  He went on to say:

“We’re at an interesting junction.  In the next few years, the roles of speech, gesture, vision, ink, all of those will become huge. For the person at home and the person at work, that interaction will change dramatically.”

All in all, it looks like Windows 7 will just be more of the same.  With all of the problems they’ve encountered with Vista, I’ll be surprised if Windows 7 becomes the big seller they’re hoping for.  To be honest, I think they would have been better off re-designing everything from scratch with Vista, rather than trying to shovel new features into an already bloated kernel.

Useful Windows Utilities? Really?

Every once in a while, I get an error that I can’t disconnect my USB drive because there’s a file handle opened by another program.  Unfortunately, Windows doesn’t help much beyond that, and it’s left up to the user to figure out which app is responsible and shut it down.  In some cases, the problem persists even after shutting down all of the open apps, and you have to resort to looking through the process list in Task Manager.  Of course, you can always log off or restart the computer, but there has to be an easier way.

In Linux, there’s a nifty little utility called lsof.  The name is short for List Open Files, and it does just that.  It displays a current list of open files, including details such as the name of the program using the file, its process ID, the user running the process, and more.  The output can be a bit daunting for an inexperienced user, but it’s a very useful tool.  Combined with the power of grep, a user can quickly identify what files a process has open, or what process has a particular file open.  Very handy for dealing with misbehaving programs.
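
If you ever want to script that kind of check, the same idea is easy to approximate in Python.  The sketch below is only an illustration of what lsof does under the hood: it assumes the third-party psutil package is installed, and it won’t see everything the real tool can.

    # A rough approximation of "lsof /path/to/file": walk the process list and
    # report which processes currently hold a given file open.
    import sys
    import psutil  # third-party package; not part of the standard library

    def who_has_open(path):
        """Yield (pid, name) for every inspectable process that has `path` open."""
        for proc in psutil.process_iter(["pid", "name"]):
            try:
                if any(f.path == path for f in proc.open_files()):
                    yield proc.info["pid"], proc.info["name"]
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                continue  # skip processes we can't inspect or that exited mid-scan

    if __name__ == "__main__":
        for pid, name in who_has_open(sys.argv[1]):
            print(f"{name} (pid {pid})")

Run it with a file path as the only argument and it prints each process holding that file open, much like lsof piped through grep would.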

Similar tools exist for Windows, but most of them are commercial tools, not available for free use.  There are free utilities out there, but I hadn’t found any that gave me the power I wanted.  That is, until today.

I stumbled across a nifty tool called Process Explorer.  Funnily enough, it’s actually a Microsoft tool, though they seem to have acquired it by purchasing SysInternals.  Regardless, it’s a very powerful utility, and came in quite handy for solving this particular problem.

In short, I had opened a link in Firefox by clicking on it in Thunderbird.  After closing Thunderbird, I tried to un-mount my USB drive, where I have Portable Thunderbird installed, but I received an error that a file was still open.  Apparently Firefox was the culprit, and closing it released the handle.

The SysInternals page on Microsoft’s TechNet site lists a whole host of utilities for debugging and monitoring Windows systems.  These can be fairly dangerous in the hands of the inexperienced, but for those of us who know what we’re doing, these tools can be invaluable.  I’m quite happy I stumbled across these.  The closed nature of Windows can be extremely frustrating at times when I can’t figure out what’s going on.  I’m definitely still a Linux user at heart, but these tools make using Windows a tad more bearable.

H.R. 5994

What a title, eh?  Well, that little title up there may impact how you use the Internet in the future.  H.R. 5994, known as the “Internet Freedom and Non-Discrimination Act of 2008,” is the latest attempt by the US Congress to get a handle on Internet access.  In short, this is another play in the Net Neutrality battle.  I’m no lawyer, but it seems to be a pretty straightforward document.

H.R. 5994 is intended to be an extension of the Clayton Antitrust Act of 1914, and its stated purpose is to “promote competition, to facilitate trade, and to ensure competitive and nondiscriminatory access to the Internet.”  The main theme, as I see it, is that providers can’t discriminate against content providers.  In other words, if they prioritize web traffic on the network, then all web traffic, regardless of origin, should be prioritized.

At first glance, this seems to be a positive thing; however, there may be a few loopholes.  For instance, take a look at the following from Section 28(a):

“(3)(A) to block, to impair, to discriminate against, or to interfere with the ability of any person to use a broadband network service to access, to use, to send, to receive, or to offer lawful content, applications or services over the Internet;”

From the looks of it, it sounds like you can’t prevent known “bad users” from getting an account, provided they are using the account for legal purposes.  As an example, you couldn’t prevent a known spammer from getting an account, provided, of course, that they obey the CAN-SPAM Act.

And what about blocklists?  Spam blocklists are almost a necessity for mail servers these days; without them, you have to process every single message that comes in.  (3)(A) specifically dictates that you can’t block lawful content…  Unfortunately, it’s not always possible to determine whether mail is lawful until it has been processed.  So this may turn into a loophole for spammers.
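
To see why, consider what a blocklist check actually looks like.  The sketch below is a minimal illustration of a DNS blocklist (DNSBL) lookup, the kind of check a mail server performs the moment a client connects, before a single byte of message content is available to judge.  The zone name is just the common Spamhaus example; the actual list and policy are up to the mail admin.

    # A minimal DNSBL lookup: the decision happens at connection time,
    # before any message content exists to be judged lawful or not.
    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        """Return True if `ip` appears on the given DNS blocklist zone."""
        reversed_ip = ".".join(reversed(ip.split(".")))    # 192.0.2.1 -> 1.2.0.192
        try:
            socket.gethostbyname(f"{reversed_ip}.{zone}")  # resolves only if listed
            return True
        except socket.gaierror:
            return False

    # The accept/reject decision is made here, with no mail in hand:
    print("reject" if is_listed("192.0.2.1") else "accept")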

The act goes on with the following:

“(4) to prohibit a user from attaching or using a device on the provider’s network that does not physically damage or materially degrade other users’ utilization of the network;”

This one is kind of scary because it does not dictate the type of device, or put any limitations on the capabilities of the device, provided it “does not physically damage or materially degrade other users’ utilization of the network.”  So does that mean I can use any type of DSL or Cable modem that I choose?  Am I considered to be damaging the network if I use a device that doesn’t allow the provider local access?  Seems to me that quite a few providers wouldn’t be happy with this particular clause…

Here’s the real meat of the Net Neutrality argument, though.  Section 28(b) states this:

“(b) If a broadband network provider prioritizes or offers enhanced quality of service to data of a particular type, it must prioritize or offer enhanced quality of service to all data of that type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or enhanced quality of service.”

Wham!  Take that!  Basically, you can’t prioritize your own traffic at the expense of others.  So a local provider who offers a VoIP service can’t prioritize its own service while refusing to prioritize (or outright blocking) Skype, Vonage, or others.  But there’s a problem here.  Does the service have to use established standards to be prioritized?  For instance, Skype uses a proprietary VoIP protocol.  So does that mean that providers do not have to prioritize it?

Providers do, however, get some rights as well.  For instance, Section 28(c) specifically states:

    “(c) Nothing in this section shall be construed to prevent a broadband network provider from taking reasonable and nondiscriminatory measures–
    (1) to manage the functioning of its network, on a systemwide basis, provided that any such management function does not result in discrimination between content, applications, or services offered by the provider and unaffiliated provider;
    (2) to give priority to emergency communications;
    (3) to prevent a violation of a Federal or State law, or to comply with an order of a court to enforce such law;
    (4) to offer consumer protection services (such as parental controls), provided that a user may refuse or disable such services;
    (5) to offer special promotional pricing or other marketing initiatives; or
    (6) to prioritize or offer enhanced quality of service to all data of a particular type (regardless of the origin or ownership of such data) without imposing a surcharge or other consideration for such prioritization or quality of service.”

So providers are allowed to protect the network, protect consumers, and still make a profit.  Of course, assuming this becomes law, only time will tell what the courts will allow a provider to consider “protection” to be…

It looks like this is, at the very least, a good start to tackling this issue.  That is, if you believe the government should be involved at all.  At the same time, this doesn’t appear to be something most providers would be interested in.  From a consumer standpoint, I want to be able to get the content I want without being blocked because it comes from Google rather than from Yahoo, with whom the provider has an agreement.  And since most consumers live in an area with only one or two providers, this can be a good thing.  It prevents a monopoly-type situation where the consumer has no choice but to take a less-than-desirable deal.

This is one of those areas where there may be no perfect solution.  While I side with the providers in that they should be able to manage their networks as they see fit, I can definitely see why something needs to be done to ensure that providers don’t take unfair advantage.  Should this become law, I think it will be a win for content providers rather than for Internet providers and consumers.

Instant Kernel-ification

Server downtime is the scourge of all administrators, sometimes to the point that necessary security upgrades are skipped, all in the name of keeping machines online.  Thanks to an MIT graduate student, Jeffrey Brian Arnold, keeping a machine online, and up to date with security patches, may be easier than ever.

Ksplice, as the project is called, is a small executable that allows an administrator to patch security holes in the Linux kernel without rebooting the system.  According to the Ksplice website:

“Ksplice allows system administrators to apply security patches to the Linux kernel without having to reboot. Ksplice takes as input a source code change in unified diff format and the kernel source code to be patched, and it applies the patch to the corresponding running kernel. The running kernel does not need to have been prepared in advance in any way.”

Of course, Ksplice is not a silver bullet; some patches cannot be applied with it.  Specifically, any patch that requires “semantic changes to data structures” cannot be applied to the running kernel.  A semantic change is a change “that would require existing instances of kernel data structures to be transformed.”
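
To get a feel for the general idea, and for why that data-structure restriction exists, here’s a toy user-space analogy in Python.  It is emphatically not how Ksplice itself works (Ksplice rewrites machine code inside a running kernel), but it shows a buggy function being swapped for a fixed one while the program keeps running, which only works as long as the data the code touches keeps the same shape.

    # Toy analogy only: swap a function in a live process without restarting it.
    import threading
    import time

    def handle_request(payload):
        return payload[0].upper() + payload[1:]      # "buggy": dies on empty input

    def handle_request_fixed(payload):
        if not payload:                              # patched: handle the empty case
            return "<empty>"
        return payload[0].upper() + payload[1:]

    def server_loop():
        # The global name is looked up on every call, so rebinding it below
        # takes effect on the very next request; no restart needed.
        for payload in ["before patch", "", "after patch", ""]:
            try:
                print(handle_request(payload))
            except IndexError as exc:
                print("request failed:", exc)
            time.sleep(0.1)

    server = threading.Thread(target=server_loop)
    server.start()
    time.sleep(0.15)                                 # let the bug bite once
    handle_request = handle_request_fixed            # the "live patch"
    server.join()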

But that doesn’t mean Ksplice isn’t useful.  Jeffrey looked at 32 months of kernel security patches and found that 84% of them could be applied using Ksplice.  That’s sure to increase uptime.

I have to wonder, though, what is so important that you need that much uptime.  Sure, it’s nice to have the system running all the time, but if you have something that is absolutely mission critical and must run 24×7, won’t you have a backup or two anyway?  Besides which, you generally want to test patches before applying them to such sensitive systems.

There are, of course, other uses for this technology.  As noted on the Ksplice website, you can also use Ksplice to “add debugging code to the kernel or to make any other code changes that do not modify data structure semantics.”  Jeffrey has posted a paper detailing how the technology works.

Pretty neat technology.  I wonder if this will lead to zero-downtime kernel updates direct from Linux vendors.  As it stands now, you need to locate kernel patches and apply them manually with this tool.

Virtuality, Part Deux

I had the chance to install VirtualBox and I must say, I’m pretty impressed.  To start, VirtualBox is only a 15 meg download.  That’s pretty small when compared to Virtual PC, and downright puny when compared to VMWare Workstation.  There seems to be a lot packed in there, however.  As with VMWare, VirtualBox has special extensions that can be installed into the guest OS for better integration.

Installation was a snap, similar to VMWare’s, and posed no real problems.  The first hiccup came after booting the guest and logging in.  Apparently, I ran out of memory on the host OS, so VirtualBox gracefully paused the guest OS and alerted me.  After closing some open programs, I was able to resume the guest OS with no problem.  These low-memory errors remain the only real issue I have with VirtualBox at this point.

Networking in VirtualBox is a little different from VMWare’s, and it took me a few tries to figure it out.  By default, the system is installed with no virtual adapters, making NAT the only way the guest OS can talk to the outside world.  By installing a virtual interface on the host, through Host Interface Networking (HIF), you can give the guest OS direct access to the network.  After the interface is created, it is bridged, using a Windows Network Bridge, with the interface you want the traffic to flow out of.  Adding or removing an interface in the bridge sometimes takes a minute or two.  I honestly have no idea what Windows is doing during that time, but until the interface is added or removed, networking ceases to function.  I have also noticed that if VirtualBox is running, regardless of the state of the guest OS, modifying the bridge will fail.

Installation of the guest extensions, which required GCC and the kernel headers to be installed on the guest OS, was relatively painless.  After making sure the necessary packages were installed in CentOS, VirtualBox compiled and installed the extensions.  This allowed me to extend my desktop resolution to 1024×768, and it enables auto-capture of the mouse pointer when it enters the virtual machine window.  According to the documentation, the extensions also add support for a synchronized clock, shared folders and clipboard, and automated Windows logins (assuming you are running a Windows guest OS).

VirtualBox is quite impressive, and I’ve started using it full time.  It’s not quite as polished as VMWare, but it definitely wins price-wise.  I’m sure VMWare has many more features that I’m not using, which may actually justify the price.  For now, I’ll stick with VirtualBox until something forces me to switch.

In related news, I’ve been informed by LonerVamp that VMWare Server, which is free, would also satisfy my needs.  I am a bit confused, though, that a server product would be released for free while a workstation product would not.  I was initially under the impression that the server product merely hosted the OS, allowing another VMWare product to remotely attach to it.  That doesn’t appear to be correct, however.  Can someone explain the major differences to me?  Why would I want to use Workstation as opposed to Server?

Virtuality

I’ve been playing around a bit with two of the major virtualization programs, VMWare Workstation and Microsoft Virtual PC.  My interest at this point has only been to run an alternative operating system on my primary PC.  In this particular case, I was looking to run CentOS 5.1 on a Windows XP laptop.

I started out with Virtual PC, primarily because of the price: free.  My goal here is to make my life a little easier when developing web applications.  It would be nice to run everything locally, giving me the freedom to work virtually anywhere and keeping development code off of a publicly accessible machine.  There are, of course, other options such as a private network with VPNs, but I’m really not looking to go that far yet.

Right from the start, Virtual PC was somewhat of a problem.  Microsoft’s website claims that VPC can run nearly any x86-based OS, but only Windows operating systems are listed.  This is expected, of course.  So, knowing that this is merely an x86 emulation package, I went ahead and installed it.

The first thing I noticed was the extremely long time it took to install.  Installation of the files themselves seemed to go relatively quickly, but the tail end of the install, where the networking portion of VPC is set up, was very slow and resulted in a loss of connectivity on my machine for about 5-10 minutes.  I experienced a similar issue uninstalling the software.

Once VPC was installed, I used a DVD copy of CentOS 5.1 to install the guest OS.  I attempted a GUI installation, but VPC wouldn’t support the resolution and complained, resulting in a garbled virtual screen.  I resorted to installing with the text-mode installer (TUI) instead.  Installation itself went pretty smoothly, with no real problems encountered.  Once it completed, I fired up the new OS.

Because I was in text mode within the virtual OS, the window was pretty small on my screen.  I flipped to full-screen mode to compensate.  I noticed, however, that the fonts were not very sharp, looking washed out.  I also could not cut and paste between the host and guest OS at all.

The guest OS seemed sluggish, but I attributed this to an underpowered laptop running two operating systems simultaneously.  Overall, Virtual PC seemed to work ok, but the lack of graphical support and the inability to cut and paste between the host and guest really made working with it problematic.

My second choice for virtualization was VMWare Workstation.  VMWare has been around for a very long time, and I remember using their product years ago.  VMWare is not free, running a cool $189 for a single license; however, there is a 30-day trial key available.  I signed up for that and proceeded to install the software.

The first major difference between VPC and Workstation is the size of the program.  VPC clocks in at a measly 30 megs, while Workstation runs about 330 megs.  Installation is a snap, however, and proceeded quite quickly.  I didn’t experience the same network problems that I did with VPC.

Once installed, I proceeded to load the same guest OS using the same memory and hard drive parameters as I did with VPC.  VMWare correctly configured the graphical display to handle the GUI installer, and I was able to install the Gnome desktop as well.  Installation seemed to go a bit quicker than with VPC, despite the added packages for X and Gnome.

After installation was complete, I booted into the new OS.  VMWare popped up a window notifying me that the Guest OS was not running the VMWare Tools package and that installing it would result in added speed and support.  I clicked OK to bypass the message and allowed the OS to continue loading.

Almost immediately I noticed that VMWare was running quite a bit faster than VPC.  The guest OS was very responsive and I was able to quickly configure MySQL and Apache.  I also noticed that VMWare made the guest OS aware of my sound card, USB controller, and just about every other device I have in the system.  Fortunately, it’s easy enough to remove those from the configuration.  I really have no need for sound effects when I’m merely writing code..  :)

Overall, VMWare has been running very well.  Well enough to make me think about getting a license and using it full time.  However, my goal is to move over to the OS X platform, so I’m not sure I want to blow $200 on something I’ll only use for a few months (fingers crossed)…  Another option is VirtualBox, an open-source alternative.  I’ll be downloading and checking that out soon enough.

In the end, though, if you’re looking to run high-end applications, or even use a specific OS full-time, there’s nothing better than a full install on a real machine as opposed to a virtual one.

Prepare yourself, Firefox 3 is on the way…

Having just released beta 4, the Mozilla Foundation is well on its way to making Firefox 3 a reality.  Firefox 3 aims to bring a host of new features, as well as speed and security enhancements.

On the front end, they updated the theme.  Yes, again.  I’m not entirely sure what the reasoning is, but I’m sure it’s some inane marketing thing.  Probably something along the lines of “we need to make it look shiny and new!”  It’s not bad, though, and only takes a few moments to re-acquaint yourself with the basic functions.

One significant change is the behavior of the forward and back history buttons.  In previous versions you could click towards the bottom of either button and get a list of the pages forward or back in your history, depending on which button you pressed.  This has been combined into a single list now, with a small dot identifying where in the history you are.  Back history expands toward the bottom of the list while forward history moves up.  It’s a little hard to explain in words, but it’s not that difficult in action.

Next up is the download manager.  They revamped the entire download manager, making it look quite different.  Gone is the global “Clear History” button; in is the new “Search” box.  It seems that one of the themes of this release is that history is important, so they added features to let you quickly find relevant information.  But fear not, you can still clear the list by right-clicking and choosing Clear List.  It’s just not as apparent as it used to be.  In addition, you can continue downloads that were interrupted by network problems, or even by closing the browser.

Some of the pop-ups have been reduced as well.  For instance, when new passwords are entered, instead of getting a popup on the screen asking if you want to save the username and password, a bar appears at the top of the page.  This is a bit more fluid, not interrupting the browsing experience as it did in the past.

Many of the dialogs related to security have been revamped in an attempt to make them clearer for non-technical users.  For instance, when encountering an invalid SSL certificate, Firefox now displays a full-page error describing the problem instead of a small pop-up dialog.

Other warnings have been added as well.  Firefox now attempts to protect you from malware and web forgeries.  In addition, the browser now handles Extended Validation SSL certificates, displaying the name of the company in green in the location bar.  Clicking the icon to the left of the URL provides a small popup with additional information about your connection to the remote website.

A plugin manager has been added, allowing the user to disable individual plugins.  This is a very welcome addition to the browser.

The bookmark manager has been updated as well.  In addition to placing bookmarks in folders, users can now add tags.  Using the bookmark sidebar, users can quickly search by tag, locating bookmarks that are in multiple folders.  Smart bookmarks show the most recently used bookmarks, as well as the most recently bookmarked sites and tags.

The location bar has been updated as well.  As you type in the location bar, Firefox automatically searches through your bookmarks, tags, and history, displaying the results.  Results are ranked by both how frequently you visit a page and how recently you last visited it.  For users who clear their history on a regular basis, this makes the location bar much more useful.
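
Mozilla calls this combined ranking “frecency.”  I won’t pretend to know their exact formula, but the general idea is easy to sketch: weight each page’s visit count by how recently it was last visited, so a page you hit every day outranks an old favorite you haven’t touched in months.  The numbers below are purely illustrative.

    # A toy frequency-plus-recency ranking; not Mozilla's actual formula.
    import time

    DAY = 86400.0

    def score(entry, half_life_days=30.0):
        """Weight the visit count by an exponential decay on the age of the last visit."""
        age_days = (time.time() - entry["last_visit"]) / DAY
        return entry["visits"] * 0.5 ** (age_days / half_life_days)

    history = [
        {"url": "http://example.com/daily",   "visits": 40, "last_visit": time.time() - 2 * DAY},
        {"url": "http://example.com/old-fav", "visits": 90, "last_visit": time.time() - 120 * DAY},
    ]

    # The frequently-and-recently visited page wins despite its lower raw count.
    for entry in sorted(history, key=score, reverse=True):
        print(round(score(entry), 1), entry["url"])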

Behind the scenes there have been a number of welcome changes.  The most noticeable change is speed.  Beta 4 is insanely fast compared to previous versions.  In fact, it seems to be significantly faster than Internet Explorer, Opera, and others!  And, as an added bonus, it seems to use less memory as well.  Ars Technica did some testing to this effect and came out with some surprising results.

Mozilla attributes the speed increase to improvements in the JavaScript engine as well as to profile-guided optimization.  In short, they used profiling tools to identify bottlenecks in the code and fix them.  The reduction in memory usage is attributed to new allocators and collectors, as well as a reduction in leaky code.

Firefox 3 is built on top of the updated Gecko 1.9 engine.  The Gecko engine is responsible for the actual layout of the page on the screen.  It supports the various web standards such as CSS, HTML, XHTML, JavaScript, and more.  As the Gecko engine has evolved, it has gained additional capabilities as well as improved performance.  In fact, using this new engine, Firefox now passes the coveted Acid2 test.

Overall, the latest beta feels quite stable and I’ve begun using it on a daily basis.  It is noticeably faster than previous releases, and I definitely recommend checking it out.  On a Windows machine, it will install separately from your primary Firefox installation.  It imports all of your bookmarks and settings after you install it, so there is no danger of losing anything from your primary install.  Just be aware that there is no guarantee that new bookmarks, changes, add-ons, etc. will carry over into the final release.  Many add-ons are still non-functional, though plenty of others work fine.

Best of luck!

Vista… Take Two.

With Windows Vista shipping, Microsoft has turned its attention to the next version of Windows.  Currently known as Windows 7, this latest iteration is still short on details.  From the available information, however, it seems that Microsoft *might* be taking a slightly different direction with this version.

Most of the current talk about the next version of Windows has centered around a smaller, more compact kernel known as MinWin.  The kernel of any operating system is the lifeblood of the entire system.  The kernel is responsible for all of the communication between the software and the hardware.

The kernel is arguably the most important part of any operating system and, as such, has been the subject of much research, as well as many arguments.  Today, there are two primary kernel designs: the monolithic kernel and the micro kernel.

With a monolithic kernel, all of the code to interface with the various hardware in the computer is built into the kernel.  It all runs in “kernel space,” a protected memory area reserved solely for the kernel.  Properly built monolithic kernels can be extremely efficient.  However, a bug in any device driver can crash the entire kernel.  Linux is a good example of a very well-built monolithic kernel.

A micro kernel, on the other hand, is a minimalist construct.  It includes only the hooks necessary to implement communication between the software and the hardware in kernel mode.  All other software runs in “user space,” a separate memory area that can be swapped out to disk when necessary.  Drivers and other essential system software must “ask permission” to interact with the kernel.  In theory, buggy device drivers cannot cause the entire system to fail.  There is a price, however: the overhead of the system calls and message passing required to reach the kernel.  As a result, micro kernels are considered slower than monolithic kernels.  MINIX is a good example of an OS with a micro kernel architecture.
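
A grossly simplified sketch of the two call paths, purely for illustration: in the monolithic case a driver is one ordinary function call away, while in the micro kernel case the request becomes a message to a separate driver “server” and the answer comes back the same way.  That extra round trip is where the performance cost comes from.

    # Monolithic vs. micro kernel call paths, caricatured in Python.
    import queue
    import threading

    # Monolithic style: the driver is just kernel code, called directly.
    def disk_driver_read(block):
        return f"data-for-block-{block}"

    def kernel_read_monolithic(block):
        return disk_driver_read(block)       # one ordinary function call

    # Micro kernel style: the driver runs as a separate server, reached via IPC.
    requests, replies = queue.Queue(), queue.Queue()

    def disk_driver_server():
        while True:
            block = requests.get()            # wait for a request message
            if block is None:                 # shutdown sentinel
                break
            replies.put(f"data-for-block-{block}")

    def kernel_read_micro(block):
        requests.put(block)                   # send a message to the driver server
        return replies.get()                  # block until the reply comes back

    threading.Thread(target=disk_driver_server, daemon=True).start()
    print(kernel_read_monolithic(7))
    print(kernel_read_micro(7))
    requests.put(None)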

The Windows NT line of operating systems, which includes XP and Vista, uses what Microsoft likes to call a “hybrid kernel.”  In theory, a hybrid kernel combines the best of both monolithic and micro kernels.  It’s supposed to have the speed of a monolithic kernel with the stability of a micro kernel.  I think the jury is still out on this, but it does seem that XP, at least, is much more stable than the Windows 9x series of releases, which used a monolithic kernel.

So what does all of this mean?  Well, Microsoft is attempting to optimize the core of the operating system, making it smaller, faster, and more efficient.  Current reports from Microsoft indicate that MinWin is functional and has a very small footprint.  The current iteration of MinWin occupies approximately 25 MB of disk space and uses about 40 MB of memory.  This is a considerable reduction in both drive and memory usage.  Keep in mind, however, that MinWin is still being developed and is missing many of the features necessary for it to be comparable with the current shipping kernel.

It seems that Microsoft is hyping this new kernel quite a bit at the moment, but watch for other features to be added as well.  It’s a pretty sure bet that the general theme will change, and that new flashy gadgets, graphical capabilities, and other such “fluff” will be added.  I’m not sure the market would respond very nicely to a new version of Windows without more flash and shine…  Windows 7 is supposedly going to ship in 2010, but other reports have it shipping sometime in 2009.  If Vista is any indication, however, I wouldn’t expect Windows 7 until 2011 or 2012.

Meanwhile, it seems that Windows XP is still more popular than Vista.  In fact, it has been reported that InfoWorld has collected over 75,000 signatures on its “Save Windows XP” petition.  This is probably nothing more than a marketing stunt, but it does highlight the fact that Vista isn’t being adopted as quickly as Microsoft would like.  So, perhaps Microsoft will fast-track Windows 7.  Only time will tell.

Video Cloning

Slashdot ran a story today about a group of graphic designers who put together a realistic World War II D-Day invasion film.  The original story is here.  When Saving Private Ryan was filmed, it took upwards of 1,000 extras to shoot a similar scene.  These guys did it with three.

They used green screens and staged pyrotechnics to create the various explosions, props, and other effects.  In the end, the completed scene is remarkably realistic and lifelike.  Incredible work for such a small crew.

So, without further ado: