Virtuality

I’ve been playing around a bit with two of the major virtualization programs, VMWare Workstation and Microsoft Virtual PC.  My interest at this point has only been to run an alternative operating system on my primary PC.  In this particular case, I was looking to run CentOS 5.1 on a Windows XP laptop.

I started out with Virtual PC, primarily because of the price: free.  My goal here is to make my life a little easier when developing web applications.  It would be nice to run everything locally, giving me the freedom to work virtually anywhere and keeping development code off of publicly accessible machines.  There are, of course, other options, such as a private network with VPNs, but I’m really not looking to go that far yet.

Right from the start, Virtual PC presented something of a problem.  Microsoft claims that VPC can run nearly any x86-based OS, but their website only lists Windows operating systems as supported.  This is expected, of course.  So, knowing that this is essentially an x86 emulation package, I went ahead and installed it.

The first thing I noticed was the extremely long time it took to install.  Installation of the files themselves seemed to go relatively quickly, but the tail end of the install, where the networking portion of VPC is set up, was extremely slow and resulted in a loss of connectivity on my machine for about 5-10 minutes.  I experienced a similar issue uninstalling the software.

Once VPC was installed, I used a DVD copy of CentOS 5.1 to install the guest OS.  I attempted a GUI installation, but VPC wouldn’t support the resolution and complained, resulting in a garbled virtual screen.  I resorted to installation using the TUI.  Installation itself went pretty smoothly, no real problems encountered.  Once completed, I fired up the new OS.

Because I was in text-mode within the virtual OS, the window was pretty small on my screen.  I flipped to full-screen mode to compensate.  I noticed, however, that the fonts were not very sharp, looking very washed out.  I also experienced problems cutting and pasting between the host and guest OS.  Simply put, I could not cut and paste between the two at all.

The guest OS seemed sluggish, but I attributed this to an underpowered laptop running two operating systems simultaneously.  Overall, Virtual PC seemed to work ok, but the lack of graphical support and the inability to cut and paste between the host and guest really made working with it problematic.

My second choice for virtualization was VMWare Workstation.  VMWare has been around for a very long time, and I remember using their product years ago.  VMWare is not free, running a cool $189 for a single license; however, there is a 30-day trial key you can get.  I signed up for this and proceeded to install the software.

The first major difference between VPC and Workstation is the size of the program.  VPC clocks in at a measly 30 Megs, while Workstation runs about 330 Megs.  Installation is a snap, however, and proceeded quite quickly.  I didn’t experience the same network problems that I did with VPC.

Once installed, I proceeded to load the same guest OS using the same memory and hard drive parameters as I did with VPC.  VMWare correctly configured the graphical display to handle the GUI installer, and I was able to install the Gnome desktop as well.  Installation seemed to go a bit quicker than with VPC, despite the added packages for the X Window System and Gnome.

After installation was complete, I booted into the new OS.  VMWare popped up a window notifying me that the Guest OS was not running the VMWare Tools package and that installing it would result in added speed and support.  I clicked OK to bypass the message and allowed the OS to continue loading.

Almost immediately I noticed that VMWare was running quite a bit faster than VPC.  The guest OS was very responsive and I was able to quickly configure MySQL and Apache.  I also noticed that VMWare made the guest OS aware of my sound card, USB controller, and just about every other device I have in the system.  Fortunately, it’s easy enough to remove those from the configuration.  I really have no need for sound effects when I’m merely writing code..  :)

Overall, VMWare has been running very well.  Well enough to make me think about getting a license for it and using it full time.  However, my overall goal is to move over to the OSX platform, so I’m not sure I want to blow $200 on something I’ll only use for a few months (fingers crossed)…  Another option is VirtualBox, an open-source alternative.  I’ll be downloading and checking that out soon enough.

In the end, though, if you’re looking to run high-end applications, or even use a specific OS full-time, there’s nothing better than a full install on a real machine as opposed to a virtual one.

Scamtastic!

 

So, I’m sitting here, working away and my cell phone rings.  I look up and the caller ID shows “1108” ….  Hrm..  well, that’s odd, but I’ve seen some pretty odd stuff show up on the caller ID for my cell, so I answer the call.

“Hello, this is <unintelligible> from Domain Registry Support, and am I speaking with …”

I own a few domain names, and I had just registered one with GoDaddy a few days ago, so I thought, perhaps, that this was a call from GoDaddy.  But why call themselves “Domain Registry Support?”  As I listened, however, I discovered that it was not GoDaddy at all.  This gentleman wanted to verify my contact information, as it was listed in whois.  This particular domain is registered via Network Solutions, so I asked him if that was who he worked for.  He told me, again, that he worked for “Domain Registry Support.”

This all took place in the first 30 or so seconds of the call.  His insistence on not giving me any information made me suspicious of the call.  My wife has been contacted a number of times by a credit card scammer trying to get our information, so I’ve been leery of giving out any information to people who call me.

So, I asked the gentleman to hang on a moment and popped up a web browser.  I verified the name of the company again and started a Google search.  Surprise, surprise, I received a page of links about phishing schemes, scams, and assorted complaints.  Unfortunately, as soon as I started typing, my friendly scammer hung up..  Oh well…

And now, a brief intermission

This is a technical blog, and as such, I have endeavored to resist posting personal, religious, and political views that do not directly relate to technology.  I feel that, up to this point, I’ve done a pretty good job with this.  But, occasionally, there is something that I really want to share that makes me re-assess this decision and weigh it against the intended purpose of this blog.

I started this blog on a whim, as a way of getting information out there.  A way of offering my own view on technology, and maybe even helping someone out.  In the end, however, it is my blog, and it’s a place I can post my own thoughts.  And so, I’ve decided to share this with you.  Feel free to skip over it, it’s not technical in nature.  But it did get me thinking, and it has made an impact on me.

 

On March 18th, Barack Hussein Obama, currently running for the Democratic presidential nomination, made a speech in Philadelphia, PA.  In it, he addresses the issue of race in America, but not in a way many people have heard it addressed.  He addresses both sides of the issue.  And then he brings them together and explains, in simple terms, the reason race is still such an issue today.

Never have I heard this explained in such a way as to make me feel that someone else truly understands my own frustrations with the state of this country.  I’m not racist, and I never have been.  But like so many others, I still find myself scared when walking in a neighborhood not dominated by others of my own color.  I find myself frustrated when jobs, benefits, and more are given to people based solely on their race, and not on their qualifications.  I find myself outraged when simple issues are blown out of proportion, simply because they involve a minority or possibly offended someone.

In this speech, Barack pinpoints and explains these issues, and brings them into the open for everyone to see.  He explains not only how, and what, but why.  I think he truly understands, and truly feels that he can make a change for the better.  And that is why I plan on voting for him.

This speech is incredibly inspiring.  It was written by him, not by an aide, or a staff writer.  These words are his own, and he says them with conviction.  So, without further ado, Mr. Barack Obama.

Prepare yourself, Firefox 3 is on the way…

Having just released beta 4, the Mozilla Foundation is well on its way to making Firefox 3 a reality.  Firefox 3 aims to bring a host of new features, as well as speed and security enhancements.

On the front end, they updated the theme.  Yes, again.  I’m not entirely sure what the reasoning is, but I’m sure it’s some inane marketing thing.  Probably something along the lines of “we need to make it look shiny and new!”  It’s not bad, though, and only takes a few moments to re-acquaint yourself with the basic functions.

One significant change is the function of the forward and back history buttons.  In previous versions you could click towards the bottom of either button and get a list of the pages forward or back in your history, relative to the button you pressed.  They have combined this into a single button now, with a small dot identifying where in the history you are.  Back history expands toward the bottom of the list while forward history moves up.  It’s a little hard to explain in words, but it’s not that difficult in action.

Next up is the download manager.  They revamped the entire download manager, making it look quite different.  Gone is the global “Clear History” button, in is the new “Search” box.  It seems that one of the themes of this release is that history is important, so they added features to allow you to quickly find relevant information.  But fear not, you can still clear the list by right clicking and choosing clear list.  It’s just not as apparent as it used to be.  In addition, you can continue downloads that were interrupted by network problems, or even by closing the browser.
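
Under the hood, resuming a download like this generally comes down to HTTP range requests: the client asks the server for the rest of the file starting at the byte where the transfer stopped.  Firefox’s own download code is far more involved, but here’s a minimal Python sketch of the idea (the URL and file name are placeholders):

    import os
    import urllib.request

    def resume_download(url, dest):
        """Continue a partial download using an HTTP Range request."""
        start = os.path.getsize(dest) if os.path.exists(dest) else 0
        req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % start})
        with urllib.request.urlopen(req) as resp:
            # 206 means the server honored the Range header; anything else means
            # it sent the whole file, so start over rather than appending.
            mode = "ab" if resp.status == 206 else "wb"
            with open(dest, mode) as out:
                while True:
                    chunk = resp.read(64 * 1024)
                    if not chunk:
                        break
                    out.write(chunk)

    # Example (placeholder URL):
    # resume_download("http://example.com/big-file.iso", "big-file.iso")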

Some of the pop-ups have been reduced as well.  For instance, when new passwords are entered, instead of getting a popup on the screen asking if you want to save the username and password, a bar appears at the top of the page.  This is a bit more fluid, not interrupting the browsing experience as it did in the past.

Many of the dialogs related to security have been revamped in an attempt to make them clearer for non-technical users.  For instance, when encountering an invalid SSL certificate, Firefox now displays a full-page error explaining the problem in plain language, rather than a cryptic dialog box.

Other warnings have been added as well.  Firefox now attempts to protect you from malware and web forgeries.  In addition, the browser now handles Extended Validation SSL certificates, displaying the name of the company in green on the location bar.  Clicking on the icon to the left of the URL provides a small popup with additional information about your connection to the remote website.
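
As a side note, the company name shown for EV certificates comes straight out of the certificate’s subject field.  The sketch below is not how Firefox does it internally, and it skips the EV policy checks a browser would perform, but it shows where the information lives, using Python’s standard ssl module:

    import socket
    import ssl

    def peer_organization(hostname, port=443):
        """Return the organizationName from a server's certificate, if present."""
        ctx = ssl.create_default_context()
        with socket.create_connection((hostname, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # 'subject' is a sequence of RDNs, each a sequence of (name, value) pairs.
        subject = dict(pair for rdn in cert["subject"] for pair in rdn)
        return subject.get("organizationName")

    # Example (any HTTPS site will do):
    # print(peer_organization("www.example.com"))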

A plugin manager has been added, allowing the user to disable individual plugins.  This is a very welcome addition to the browser.

The bookmark manager has been updated as well.  In addition to placing bookmarks in folders, users can now add tags.  Using the bookmark sidebar, users can quickly search by tag, locating bookmarks that are in multiple folders.  Smart bookmarks show the most recently used bookmarks, as well as the most recently bookmarked sites and tags.

The location bar has been updated as well.  As you type in the location bar, Firefox automatically searches through your bookmarks, tags, and history, displaying the results.  Results are ranked by both how frequently and how recently you have visited each page.  For users who clear their history on a regular basis, this makes the location bar much more useful.
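
Mozilla calls this combined ranking “frecency.”  The real formula has tunable weights and buckets, but a toy version gives the flavor: score each page by visit count, decayed by how long ago you last visited it (the half-life below is made up for illustration):

    import time

    def frecency_score(visit_count, last_visit_ts, now=None, half_life_days=30.0):
        """Toy ranking: frequent visits score higher, but the score decays with age."""
        now = now or time.time()
        age_days = max(0.0, (now - last_visit_ts) / 86400.0)
        return visit_count * 0.5 ** (age_days / half_life_days)

    # A page visited 50 times two months ago vs. one visited 10 times yesterday.
    pages = {
        "old favorite": frecency_score(50, time.time() - 60 * 86400),
        "new find": frecency_score(10, time.time() - 1 * 86400),
    }
    print(sorted(pages.items(), key=lambda item: item[1], reverse=True))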

Behind the scenes there have been a number of welcome changes.  The most noticeable change is speed.  Beta 4 is insanely fast compared to previous versions.  In fact, it seems to be significantly faster than Internet Explorer, Opera, and others!  And, as an added bonus, it seems to use less memory as well.  Ars Technica did some testing to this effect and came out with some surprising results.

Mozilla attributes the speed increase both to improvements in the JavaScript engine and to profile-guided optimizations.  In short, they used profiling tools to identify bottlenecks in the code and fix them.  The reduction in memory usage is attributed to new allocators and collectors, as well as a reduction in leaky code.

Firefox 3 was built on top of the updated Gecko 1.9 engine.  The Gecko engine is responsible for the actual layout of the page on the screen.  It supports the various web standards such as CSS, HTML, XHTML, JavaScript, and more.  As the Gecko engine has evolved, it has gained additional capabilities as well as improved performance.  In fact, using this new engine, Firefox now passes the coveted Acid 2 test.

Overall, the latest beta feels quite stable and I’ve begun using it on a daily basis.  It is definitely faster than previous releases.  I definitely recommend checking it out.  On a Windows machine, it will install separately from your primary Firefox installation.  It imports all of your bookmarks and settings after you install it, so there is no danger of losing anything from your primary install.  Just be aware that there is no current guarantee that any new bookmarks, changes, add-ons, etc. will be imported into the final installation.  Many add-ons are still non-functional, though there are plenty more that work fine.

Best of luck!

Bandwidth in the 21st Century

As the Internet has evolved, the one constant has been the typical Internet user, who mainly used it to browse websites, a relatively low-bandwidth activity.  Even as the capabilities of the average website evolved, bandwidth usage remained relatively low, increasing at a slow rate.

In my own experience, a typical Internet user, accessing the Internet via DSL or cable, only uses a very small portion of the available bandwidth.  Bandwidth is only consumed for the few moments it takes to load a web page, and then usage falls to zero.  The only real exception was the online gamer.  Online gamers use a consistent amount of bandwidth for long periods of time, but the total bandwidth used at any given moment is still relatively low, much lower than what the connection can provide.

Times are changing, however.  In the past few years, peer-to-peer applications such as Napster, BitTorrent, Kazaa, and others have become more mainstream, seeing widespread usage across the Internet.  Peer-to-peer applications are used to distribute files, both legal and illegal, amongst users across the Internet.  Files range in size from small music files to large video files.  Modern applications such as video games and even operating systems have incorporated peer-to-peer technology to facilitate rapid deployment of software patches and updates.

Voice and video applications are also becoming more mainstream.  Services such as Joost, Veoh, and YouTube stream video over the Internet to the user’s PC.  Skype allows the user to make phone calls via their computer for little or no cost.  Each of these applications uses bandwidth at a constant rate, vastly different from the pattern of web browsing.

Hardware devices such as the Xbox 360, AppleTV, and others are helping to bring streaming Internet video to regular televisions within the home.  The average user is starting to take advantage of these capabilities, consuming larger amounts of bandwidth for extended periods of time.

The end result of all of this is increased bandwidth usage within the provider network.  Unfortunately, most providers have based their current network architectures on outdated over-subscription models, expecting users to continue their web-browsing patterns.  As a result, many providers are scrambling to keep up with the increased bandwidth demand.  At the same time, they continue releasing new access packages claiming faster and faster speeds.
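
A rough back-of-the-envelope calculation shows why.  The numbers below are invented for illustration, but the pattern is typical: the provider sells far more downstream capacity than its upstream links can carry, betting that only a small fraction of it is in use at any moment.  That bet works for web browsing; it falls apart once customers stream and seed around the clock:

    def required_uplink_mbps(subscribers, plan_mbps, avg_utilization):
        """Capacity needed if each subscriber averages a given fraction of their plan."""
        return subscribers * plan_mbps * avg_utilization

    subs, plan, uplink = 1000, 6.0, 1000.0   # 1000 subscribers, 6 Mbps plans, 1 Gbps uplink

    for util in (0.02, 0.10, 0.30):          # web browsing vs. streaming/P2P usage patterns
        need = required_uplink_mbps(subs, plan, util)
        status = "fits" if need <= uplink else "OVERSUBSCRIBED"
        print("avg utilization %3.0f%%: need %6.0f Mbps -> %s" % (util * 100, need, status))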

Some providers are using questionable practices to ensure the health of their network.  For instance, Comcast is allegedly using packet sniffing techniques to identify BitTorrent traffic.  Once identified, they send a reset command to the local BitTorrent client, effectively severing the connection and canceling any file transfers.  This has caught the attention of the FCC, which has stated that it will step in if necessary.
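
Spotting BitTorrent traffic on the wire is not terribly difficult, which is part of why this works.  Every unencrypted BitTorrent connection opens with the same handshake: a length byte of 19 followed by the string “BitTorrent protocol.”  A sniffer only needs to look for that signature at the start of a TCP stream.  A minimal sketch of the detection side (and only the detection side) in Python:

    BT_SIGNATURE = b"\x13BitTorrent protocol"  # 0x13 = 19, the length of the protocol string

    def looks_like_bittorrent(first_payload):
        """Flag a TCP stream whose first payload bytes match the BitTorrent handshake."""
        return first_payload.startswith(BT_SIGNATURE)

    print(looks_like_bittorrent(b"\x13BitTorrent protocol" + b"\x00" * 8))  # True
    print(looks_like_bittorrent(b"GET / HTTP/1.1\r\n"))                     # False

It’s worth noting that the encrypted/obfuscated BitTorrent variants exist largely to slip past exactly this kind of pattern matching.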

Other providers, such as Time Warner, are looking into tiered pricing for Internet access.  Such plans would allow the provider to charge extra for users that exceed a pre-set limit.  In other words, Internet access becomes more than the typical 3/6/9 Mbps speed advertised today; the high-speed access is paired with a total transfer limit.  Hopefully these limits will be both reasonable and clearly defined.  Ultimately, though, it becomes the responsibility of the user to avoid exceeding the limit, similar to avoiding going over the minutes on a cell phone plan.

Pre-set limits have problems as well, though.  For instance, Windows will check for updates at a regular interval, using Internet bandwidth to do so.  Granted, this is generally a small amount, but it adds up over time.  Another example is PPPoE and DHCP traffic.  Most DSL customers are configured to use PPPoE for authentication, and PPPoE sends keep-alive packets to the BRAS to ensure that the connection stays up.  Depending on how the ISP calculates bandwidth usage, these packets will likely be included in the calculation, resulting in “lost” bandwidth.  Likewise, cable subscribers’ equipment sends periodic renewal requests to the DHCP server.  Again, this traffic will likely be included in any bandwidth calculations.
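
Just how much does that background chatter add up to?  Here’s a back-of-the-envelope estimate using assumed packet sizes and intervals (real values vary by ISP and equipment).  The answer is “not much, but not zero,” which is exactly the kind of accounting detail a pre-set limit drags in:

    def monthly_overhead_mb(packet_bytes, interval_seconds, days=30):
        """Total traffic from one small packet sent at a fixed interval for a month."""
        packets = days * 86400 / interval_seconds
        return packets * packet_bytes / (1024.0 * 1024.0)

    # Assumed values, for illustration only.
    pppoe_keepalives = monthly_overhead_mb(packet_bytes=64, interval_seconds=30)
    dhcp_renewals    = monthly_overhead_mb(packet_bytes=350, interval_seconds=3600)

    print("PPPoE keep-alives: %.1f MB/month" % pppoe_keepalives)  # ~5.3 MB
    print("DHCP renewals:     %.1f MB/month" % dhcp_renewals)     # ~0.2 MB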

In the end, it seems that substantial changes to the ISP structure are coming, but it is unclear what those changes may be.  Tiered bandwidth usage may be making a comeback, though I suspect that consumers will fight against it.  Advances in transport technology make increasing bandwidth a simple matter of replacing aging hardware.  Of course, replacements cost money.  So, in the end, the cost may fall back on the consumer, whether they like it or not.

Microsoft wants to infect your computer?!?

There’s an article over at New Scientist about a “new” technique Microsoft is looking at for delivering patches.  Researchers are looking into distributing patches through a network similar to that of a worm.  These ‘friendly’ worms would use advanced strategies to identify and ‘infect’ computers on a network, and then install the appropriate patches into that system.

On one hand, this looks like it may be a good idea.  In theory, it reduces load on update servers, and it may help to patch computers that would otherwise go un-patched.  Microsoft claims that this technique would spread patches faster and reduce overall network load.

Back in 2003, the now infamous Blaster worm was released.  Blaster took advantage of a buffer overflow in Microsoft’s implementation of RPC.  Once infected, the computer was set to perform a SYN flood attack against Microsoft’s update site, windowsupdate.com.

Shortly after the release of Blaster, a different sort of worm was released, Welchia.  Welchia, like Blaster, took advantage of the RPC bug.  Unlike Blaster, however, Welchia attempted to patch the host computer with a series of Microsoft patches.  It would also attempt to remove the Blaster worm, if it existed.  Finally, the worm removed itself after 120 days, or on January 1, 2004, whichever came first.

Unfortunately, the overall effect of Welchia was negative.  It created a large amount of network traffic by spreading to other machines, and downloading the patches from Microsoft.

The Welchia worm is a good example of what can happen, even when the creator has good intentions.  So, will Microsoft’s attempts be more successful?  Can Microsoft build a bullet-proof worm-like mechanism for spreading patches?  And what about the legality aspect?

In order to spread patches this way, there needs to be some entry point into the remote computer system.  This means a server of some sort must be running on the remote computer.  Is this something we want every Windows machine on the planet running?  A single exploit puts us back into the same boat we’ve been in for a long time.  And Microsoft doesn’t have the best security track record.

Assuming for a moment, however, that Microsoft can develop some sort of secure server, how are the patches delivered?  Obviously a patch-worm is released, likely from Microsoft’s own servers, and spreads to other machines on the Internet.  But many users have firewalls or NAT devices between themselves and the Internet.  Unless those devices are specifically configured to allow the traffic, the patch-worm will be stopped in its tracks.  Corporate firewalls would block this as well.  And what about the bandwidth required to download these patches?  Especially when we’re talking about big patches like service packs.

If the patch-worm somehow makes it to a remote computer, what validation is done to ensure its authenticity?  Certificates are useful, but they have been taken advantage of in the past.  If someone with malicious intent can hijack a valid session, there’s no telling what kind of damage can be done.
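
At a minimum, a patch delivered this way would have to be checked against something the machine already trusts before it is installed.  Real patch systems use code-signing certificates chained to a trusted root; the sketch below only covers the simpler integrity half of the problem, comparing a downloaded patch against a known-good SHA-256 digest obtained out of band (the file name and digest are placeholders):

    import hashlib
    import hmac

    def patch_is_intact(path, expected_sha256_hex):
        """Verify a downloaded patch against a digest from a trusted source."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                digest.update(chunk)
        # Constant-time comparison, mostly out of habit here.
        return hmac.compare_digest(digest.hexdigest(), expected_sha256_hex)

    # Example (hypothetical file and digest):
    # if not patch_is_intact("kb999999-patch.bin", "0123...abcd"):
    #     raise SystemExit("Patch failed verification; refusing to install.")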

How will the user be notified about the patch?  Are we talking about auto-install?  Will warning boxes pop up?  What happens when the system needs to be rebooted?

And finally, what about the legal aspects of this?  Releasing worms on the Internet is illegal, and punishable with jail time.  But if that worm is “helpful”, then do the same rules apply?  Network traffic still increases, computer resources are used, and interruptions in service may occur as a result.

 

All I can say is this: This is *my* computer, keep your grubby mitts off it.

Vista… Take Two.

With Windows Vista shipping, Microsoft has turned its attention to the next version of Windows.  Currently known as Windows 7, there isn’t a lot of information about this latest iteration.  From the available information, however, it seems that Microsoft *might* be taking a slightly different direction with this version.

Most of the current talk about the next version of Windows has centered around a smaller, more compact kernel known as MinWin.  The kernel of any operating system is the lifeblood of the entire system.  The kernel is responsible for all of the communication between the software and the hardware.

The kernel is arguably the most important part of any operating system and, as such, has resulted in much research, as well as many arguments.  Today, there are two primary kernel types, the monolithic kernel, and the micro kernel.

With a monolithic kernel, all of the code to interface with the various hardware in the computer is built into the kernel.  It all runs in “kernel space,” a protected memory area designated solely to the kernel.  Properly built monolithic kernels can be extremely efficient.  However, bugs in any of the device drivers can cause the entire kernel to crash.  Linux is a good example of a very well built monolithic kernel.

A micro kernel, on the other hand, is a minimalist construct.  It includes only the necessary hooks to implement communication between the software and the hardware in kernel mode.  All other software runs in “user space,” a separate memory area that can be swapped out to disk when necessary.  Drivers and other essential system software must “ask permission” to interact with the kernel.  In theory, buggy device drivers cannot cause the entire system to fail.  There is a price, however: the overhead of the system calls and message passing required to communicate with the kernel.  As a result, micro kernels are considered slower than monolithic kernels.  MINIX is a good example of an OS with a micro kernel architecture.
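
To make the trade-off concrete, here’s a deliberately silly Python analogy (real kernels are written in C and assembly, and the mechanics are far more involved).  In the “monolithic” version the driver is just a function call away; in the “micro” version every request is a message sent to a driver running separately, with the reply coming back the same way.  The second arrangement isolates a misbehaving driver, but every single request pays for the round trip:

    import queue
    import threading

    def read_block_direct(block):
        """'Monolithic' style: the driver is an ordinary in-kernel function call."""
        return b"data-for-block-%d" % block

    class UserSpaceDriver:
        """'Microkernel' style: the driver runs separately and talks via messages."""
        def __init__(self):
            self.requests = queue.Queue()
            threading.Thread(target=self._serve, daemon=True).start()

        def _serve(self):
            while True:
                block, reply = self.requests.get()
                reply.put(b"data-for-block-%d" % block)

        def read_block(self, block):
            reply = queue.Queue()
            self.requests.put((block, reply))   # the "system call" / message send
            return reply.get()                  # block until the driver replies

    print(read_block_direct(7))
    print(UserSpaceDriver().read_block(7))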

The Windows NT line of operating systems, which includes XP and Vista, uses what Microsoft likes to call a “hybrid kernel.”  In theory, a hybrid kernel combines the best of both monolithic and micro kernels.  It’s supposed to have the speed of a monolithic kernel with the stability of a micro kernel.  I think the jury is still out on this, but it does seem that XP, at least, is much more stable than the Windows 9x series of releases, which used a monolithic kernel.

So what does all of this mean?  Well, Microsoft is attempting to optimize the core of the operating system, making it smaller, faster, and more efficient.  Current reports from Microsoft indicate that MinWin is functional and has a very small footprint.  The current iteration of MinWin occupies approximately 25 MB of disk space and memory usage of about 40 MB.  This is a considerable reduction in both drive and memory usage.  Keep in mind, however, that MinWin is still being developed and is missing many of the features necessary for it to be comparable with the current shipping kernel.

It seems that Microsoft is hyping this new kernel quite a bit at the moment, but watch for other features to be added as well.  It’s a pretty sure bet that the general theme will change, and that new flashy gadgets, graphical capabilities, and other such “fluff” will be added.  I’m not sure the market would respond very nicely to a new version of Windows without more flash and shiny…  Windows 7 is supposedly going to ship in 2010, but other reports have it shipping sometime in 2009.  If Vista is any indication, however, I wouldn’t expect Windows 7 until 2011 or 2012.

Meanwhile, it seems that Windows XP is still more popular than Vista.  In fact, it has been reported that InfoWorld has collected over 75,000 signatures on its “Save Windows XP” petition.  This is probably nothing more than a marketing stunt, but it does highlight the fact that Vista isn’t being adopted as quickly as Microsoft would like.  So, perhaps Microsoft will fast-track Windows 7.  Only time will tell.

A Sweet Breeze

At Macworld this week, Steve Jobs announced a number of new products for Apple.  While most built on existing product lines, one stood out from the crowd as both unique and, quite possibly, daring.  Betting on the ubiquitous presence of wireless access, Jobs announced a new member of the MacBook family, the MacBook Air.

The MacBook Air is Apple’s entry into the so-called Ultra-Light notebook category.  Sporting a 1.6 or 1.8 GHz Intel Core 2 Duo processor, 2 GB of SDRAM, and 802.11n wireless access, this tiny notebook is nothing to scoff at.  Internal storage comes in two flavors: a 4200 RPM 80 GB hard drive, or a 64 GB solid-state drive.

Conspicuously missing from the Air is an optical drive and an Ethernet jack, though external versions of these are available.  A notebook designed for a wireless world, the Air comes with special software that allows your desktop computer to act as a file server of sorts, so you can install CD- and DVD-based software over the air.  With the enhanced speed of 802.11n, even large installs should take a relatively short amount of time.

The Air has a few other innovations as well.  The keyboard is backlit, containing an ambient light sensor that automatically lights the keyboard in low-light conditions.  The touchpad now has multi-touch technology, allowing touchpad gestures to rotate, resize, etc.  A micro-DVI port, hidden behind a small hatch on the side, allows the user to connect to a number of different types of external displays ranging from DVI to VGA and even S-Video.  The 13.3″ widescreen display is backlit, reducing power consumption while providing brilliant graphics.  And finally, for the eco-conscious, the entire MacBook Air is built to be environmentally friendly.

But can Apple pull this off?  Will the MacBook Air become as popular as they believe it will?  Has the time come for such a wireless device?  Remember, Palm tried to get into this game with the Foleo.  The problem with the Foleo, of course, was that it was nothing more than a glorified phone accessory, depending heavily on the mobile phone for network access.  And while typing email on a full keyboard with a larger display was nice, there was no real “killer app” for the Foleo.

Critics of the MacBook Air point to problems such as the sealed enclosure, or the lack of an Ethernet port.  Because the enclosure is completely sealed, users cannot replace the battery or swap in a larger hard drive.  In fact, though not announced, it appears that the Air will suffer the same battery replacement problems that the iPod does.  Is this necessarily a killer, though?  I don’t believe so.  In fact, I think it might be time for a fully wireless device such as this, and I’m eager to see where it leads.