Bandwidth in the 21st Century

As the Internet has evolved, the one constant has been the typical user.  Typical users browse websites, a relatively low-bandwidth activity.  Even as the average website grew more capable, bandwidth usage remained relatively low and increased only slowly.

In my own experience, a typical user accessing the Internet via DSL or cable uses only a very small portion of the available bandwidth.  Bandwidth is consumed for the few moments it takes to load a web page, and then usage falls back to zero.  The only real exception is the online gamer.  Online gamers use bandwidth consistently for long periods of time, but the amount used at any given moment is still relatively low, much lower than what their connection can deliver.

Times are changing, however.  In the past few years, peer-to-peer applications such as Napster, BitTorrent, and Kazaa have gone mainstream, seeing widespread usage across the Internet.  Peer-to-peer applications are used to distribute files, both legal and illegal, ranging in size from small music files to large video files.  Modern applications such as video games, and even operating systems, have incorporated peer-to-peer technology to speed the deployment of software patches and updates.

Voice and video applications are also becoming mainstream.  Services such as Joost, Veoh, and YouTube stream video over the Internet to the user's PC.  Skype lets users make phone calls from their computers for little or no cost.  Each of these applications uses bandwidth at a constant rate, a pattern vastly different from that of web browsing.
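
To put rough numbers on that contrast, here is a back-of-the-envelope comparison; the page size, click rate, and stream bitrate below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope comparison of average bandwidth use.
# All figures below are illustrative assumptions, not measurements.

MBIT = 1_000_000  # bits

# Web browsing: assume one 500 KB page load every 30 seconds of a session.
page_bits = 500 * 1000 * 8
browse_avg_bps = page_bits / 30

# Streaming video: assume a constant 1.5 Mbps stream for the whole session.
stream_avg_bps = 1.5 * MBIT

print(f"Browsing average : {browse_avg_bps / MBIT:.2f} Mbps")
print(f"Streaming average: {stream_avg_bps / MBIT:.2f} Mbps")
print(f"Streaming sustains roughly {stream_avg_bps / browse_avg_bps:.0f}x the bandwidth")
```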

Hardware devices such as the Xbox 360, Apple TV, and others are helping to bring streaming Internet video to regular televisions within the home.  The average user is starting to take advantage of these capabilities, consuming larger amounts of bandwidth for extended periods of time.

The end result of all of this is increased bandwidth demand within the provider network.  Unfortunately, most providers have based their current network architectures on outdated over-subscription models that assume users will continue their bursty web-browsing patterns.  As a result, many providers are scrambling to keep up with the increased demand.  At the same time, they continue releasing new access packages claiming faster and faster speeds.
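
As a sketch of why those models break down, here is the sort of arithmetic an over-subscription design relies on; the subscriber count, uplink size, and utilization figures are hypothetical:

```python
# Hypothetical over-subscription math for a single aggregation uplink.
subscribers = 2000
access_rate_mbps = 6.0      # advertised per-subscriber speed
uplink_mbps = 1000.0        # capacity shared by all of them

ratio = subscribers * access_rate_mbps / uplink_mbps
print(f"Over-subscription ratio: {ratio:.0f}:1")              # 12:1

# Bursty web browsers averaging ~5% of their line rate fit comfortably...
browsing_demand = subscribers * access_rate_mbps * 0.05
print(f"Demand, browsing only: {browsing_demand:.0f} Mbps")   # 600 Mbps

# ...but if a quarter of them hold a constant 2.5 Mbps video stream instead:
streamers = subscribers // 4
mixed_demand = streamers * 2.5 + (subscribers - streamers) * access_rate_mbps * 0.05
print(f"Demand with 25% streaming: {mixed_demand:.0f} Mbps")  # 1700 Mbps - saturated
```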

Some providers are using questionable practices to protect the health of their networks.  For instance, Comcast is allegedly using packet-sniffing techniques to identify BitTorrent traffic.  Once a transfer is identified, forged TCP reset packets are injected into the connection, effectively severing it and canceling any file transfers in progress.  This has caught the attention of the FCC, which has stated that it will step in if necessary.
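
For illustration only, payload-based identification of this sort can be as simple as watching for the BitTorrent handshake prefix.  This is a minimal sketch using Scapy, not a description of Comcast's actual system (which, according to the reports, goes further and injects the forged resets):

```python
# Minimal sketch of payload-based BitTorrent detection (requires Scapy; run as root).
# It only flags flows; the systems described in the reports go further and
# inject forged TCP RST packets to tear the connection down.
from scapy.all import IP, Raw, TCP, sniff

# A standard BitTorrent peer handshake starts with 0x13 followed by this string.
HANDSHAKE = b"\x13BitTorrent protocol"

def inspect(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if bytes(pkt[Raw].load).startswith(HANDSHAKE):
            print(f"BitTorrent handshake: {pkt[IP].src} -> {pkt[IP].dst}")

sniff(filter="tcp", prn=inspect, store=False)
```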

Other providers, such as Time Warner, are looking into tiered pricing for Internet access.  Such plans would allow the provider to charge extra for users who exceed a pre-set limit.  In other words, Internet access would no longer be defined solely by the 3/6/9 Mbps speeds advertised today; instead, the high-speed access would be offset by a total transfer limit.  Hopefully these limits will be both reasonable and clearly defined.  Ultimately, though, it becomes the responsibility of the user to avoid exceeding the limit, much like staying within the minutes on a cell phone plan.
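
For a sense of scale, here is how quickly a hypothetical cap could be reached at today's advertised speeds; the 40 GB figure is made up for illustration:

```python
# How long does it take to hit a hypothetical monthly cap at full line rate?
cap_gb = 40.0        # hypothetical monthly transfer limit
speed_mbps = 6.0     # advertised access speed

hours_to_cap = cap_gb * 8 * 1000 / speed_mbps / 3600
print(f"{cap_gb:.0f} GB at {speed_mbps:.0f} Mbps is about {hours_to_cap:.0f} hours of full-rate use")
# ~15 hours - a few evenings of heavy streaming or P2P
```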

Pre-set limits have problems of their own, though.  For instance, Windows checks for updates at a regular interval, using Internet bandwidth to do so.  Granted, this is generally a small amount, but it adds up over time.  Another example is PPPoE and DHCP traffic.  Most DSL customers are configured to use PPPoE for authentication, and PPPoE sends periodic keep-alive packets to the BRAS (Broadband Remote Access Server) to ensure that the connection stays up.  Depending on how the ISP calculates bandwidth usage, these packets will likely be included in the calculation, resulting in "lost" bandwidth.  Likewise, DHCP clients, used mostly by cable subscribers, send periodic renewal requests to the DHCP server.  Again, this traffic will likely be included in any bandwidth calculations.
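
The individual packets are tiny, but here is a rough estimate of how that background traffic adds up over a month; the intervals and packet sizes are assumptions, and real values vary by ISP and configuration:

```python
# Rough monthly estimate of "invisible" background traffic.
# Intervals and packet sizes are assumptions; real values vary by ISP and setup.
SECONDS_PER_MONTH = 30 * 24 * 3600

# PPPoE LCP echo (keep-alive): assume a ~64-byte frame every 30 seconds, each direction.
pppoe_bytes = SECONDS_PER_MONTH / 30 * 64 * 2

# DHCP renewal: assume a ~350-byte exchange every 12 hours.
dhcp_bytes = SECONDS_PER_MONTH / (12 * 3600) * 350

# Windows Update check: assume ~100 KB of traffic once a day.
update_bytes = 30 * 100 * 1024

total_mb = (pppoe_bytes + dhcp_bytes + update_bytes) / (1024 * 1024)
print(f"Roughly {total_mb:.1f} MB per month before the user even opens a browser")
```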

In the end, it seems that substantial changes to the ISP structure are coming, but it is unclear what those changes may be.  Tiered bandwidth usage may be making a comeback, though I suspect that consumers will fight against it.  Advances in transport technology make increasing bandwidth a simple matter of replacing aging hardware.  Of course, replacements cost money.  So, in the end, the cost may fall back on the consumer, whether they like it or not.

Microsoft wants to infect your computer?!?

There’s an article over at New Scientist about a “new” technique Microsoft is looking at for delivering patches.  Researchers are looking into distributing patches the same way a worm spreads.  These ‘friendly’ worms would use advanced strategies to identify and ‘infect’ computers on a network, and then install the appropriate patches on those systems.

On one hand, this looks like it may be a good idea.  In theory, it reduces load on update servers, and it may help to patch computers that would otherwise go un-patched.  Microsoft claims that this technique would spread patches faster and reduce overall network load.

Back in 2003, the now infamous Blaster worm was released.  Blaster took advantage of a buffer overflow in Microsoft’s implementation of RPC.  Once infected, the computer was set to perform a SYN flood attack against Microsoft’s update site, windowsupdate.com.

Shortly after the release of Blaster, a different sort of worm was released: Welchia.  Welchia, like Blaster, took advantage of the RPC bug.  Unlike Blaster, however, Welchia attempted to patch the host computer with a series of Microsoft patches.  It would also attempt to remove the Blaster worm, if present.  Finally, the worm removed itself after 120 days, or on January 1, 2004, whichever came first.

Unfortunately, the overall effect of Welchia was negative.  It created a large amount of network traffic by spreading to other machines and by downloading the patches from Microsoft.

The Welchia worm is a good example of what can happen, even when the creator has good intentions.  So, will Microsoft’s attempts be more successful?  Can Microsoft build a bullet-proof worm-like mechanism for spreading patches?  And what about the legality aspect?

In order to spread patches this way, there needs to be some entry point into the remote computer system.  This means a server of some sort must be running on the remote computer.  Is this something we want every Windows machine on the planet running?  A single exploit puts us back into the same boat we’ve been in for a long time.  And Microsoft doesn’t have the best security track record.

Assuming for a moment, however, that Microsoft can develop some sort of secure server, how are the patches delivered?  Presumably a patch-worm is released, likely from Microsoft’s own servers, and spreads to other machines on the Internet.  But many users have firewalls or NAT devices between themselves and the Internet.  Unless those devices are specifically configured to allow the traffic, the patch-worm will be stopped in its tracks.  Corporate firewalls would block it as well.  And what about the bandwidth required to download these patches, especially when we’re talking about big ones like service packs?

If the patch-worm somehow makes it to a remote computer, what validation is done to ensure its authenticity?  Certificates are useful, but they have been exploited in the past.  If someone with malicious intent can hijack a valid session, there’s no telling what kind of damage could be done.
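
For context, authenticity checking would presumably look something like the following: the patch ships with a digital signature, and the client verifies it against a trusted public key before installing anything.  This is a minimal sketch using Python's cryptography package with hypothetical file names, not Microsoft's actual update mechanism:

```python
# Minimal sketch of verifying a patch's digital signature before installing it.
# File names are hypothetical; a real updater would also have to protect and
# validate the public key itself.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("vendor_public_key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

with open("patch.msu", "rb") as f:
    patch = f.read()
with open("patch.msu.sig", "rb") as f:
    signature = f.read()

try:
    public_key.verify(signature, patch, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: the patch came from the key holder and was not altered.")
except InvalidSignature:
    print("Signature invalid: refuse to install.")
```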

How will the user be notified about the patch?  Are we talking about auto-install?  Will warning boxes pop up?  What happens when the system needs to be rebooted?

And finally, what about the legal aspects of this?  Releasing worms on the Internet is illegal and punishable by jail time.  But if that worm is “helpful,” do the same rules apply?  Network traffic still increases, computer resources are used, and interruptions in service may occur as a result.

All I can say is this: This is *my* computer, keep your grubby mitts off it.

Vista… Take Two.

With Windows Vista shipping, Microsoft has turned its attention to the next version of Windows.  There isn’t a lot of information about this latest iteration, currently known as Windows 7.  From the available information, however, it seems that Microsoft *might* be taking a slightly different direction with this version.

Most of the current talk about the next version of Windows has centered on a smaller, more compact kernel known as MinWin.  The kernel is the lifeblood of any operating system; it is responsible for all of the communication between the software and the hardware.

The kernel is arguably the most important part of any operating system and, as such, has been the subject of much research, as well as many arguments.  Today, there are two primary kernel types: the monolithic kernel and the micro kernel.

With a monolithic kernel, all of the code to interface with the various hardware in the computer is built into the kernel.  It all runs in “kernel space,” a protected memory area reserved for the kernel.  Properly built monolithic kernels can be extremely efficient.  However, a bug in any device driver can crash the entire kernel.  Linux is a good example of a well-built monolithic kernel.

A micro kernel, on the other hand, is a minimalist construct.  It includes only the hooks necessary to implement communication between the software and the hardware in kernel mode.  All other software runs in “user space,” a separate memory area that can be swapped out to disk when necessary.  Drivers and other essential system software must “ask permission” to interact with the kernel.  In theory, a buggy device driver cannot bring down the entire system.  There is a price, however: the system calls required to communicate with the kernel.  As a result, micro kernels are generally considered slower than monolithic kernels.  MINIX is a good example of an OS with a micro kernel architecture.
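
That system-call cost is easy to observe from user space.  This crude micro-benchmark compares an ordinary in-process call with a call that crosses into the kernel; it only illustrates the relative overhead, not any particular kernel design:

```python
# Crude illustration of kernel-crossing overhead: an in-process call vs. a system call.
import os
import timeit

def in_process():
    return 42  # never leaves user space

N = 100_000
plain = timeit.timeit(in_process, number=N)
syscall = timeit.timeit(lambda: os.stat("/"), number=N)  # each call enters the kernel

print(f"in-process call: {plain / N * 1e9:.0f} ns")
print(f"stat() syscall : {syscall / N * 1e9:.0f} ns")
```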

The Windows NT line of operating systems, which includes XP and Vista, uses what Microsoft likes to call a “hybrid kernel.”  In theory, a hybrid kernel combines the best of both monolithic and micro kernels: the speed of a monolithic kernel with the stability of a micro kernel.  I think the jury is still out on this, but it does seem that XP, at least, is much more stable than the Windows 9x series of releases, which used a monolithic kernel.

So what does all of this mean?  Well, Microsoft is attempting to optimize the core of the operating system, making it smaller, faster, and more efficient.  Current reports from Microsoft indicate that MinWin is functional and has a very small footprint.  The current iteration of MinWin occupies approximately 25 MB of disk space and uses about 40 MB of memory, a considerable reduction in both.  Keep in mind, however, that MinWin is still being developed and is missing many of the features necessary for it to be comparable to the current shipping kernel.

It seems that Microsoft is hyping this new kernel quite a bit at the moment, but watch for other features to be added as well.  It’s a pretty sure bet that the general theme will change and that new flashy gadgets, graphical capabilities, and other such “fluff” will be added.  I’m not sure the market would respond very well to a new version of Windows without more flash and shiny…  Windows 7 is supposedly going to ship in 2010, though other reports have it shipping sometime in 2009.  If Vista is any indication, however, I wouldn’t expect Windows 7 until 2011 or 2012.

Meanwhile, it seems that Windows XP is still more popular than Vista.  In fact, it has been reported that InfoWorld has collected over 75,000 signatures on its “Save Windows XP” petition.  This is probably nothing more than a marketing stunt, but it does highlight the fact that Vista isn’t being adopted as quickly as Microsoft would like.  So, perhaps Microsoft will fast-track Windows 7.  Only time will tell.

A Sweet Breeze

At Macworld this week, Steve Jobs announced a number of new products for Apple.  While most built on existing product lines, one stood out from the crowd as both unique and, quite possibly, daring.  Betting on the ubiquitous presence of wireless access, Jobs announced a new member of the MacBook family: the MacBook Air.

The MacBook Air is Apple’s entry into the so-called ultra-light notebook category.  Sporting a 1.6 or 1.8 GHz Intel Core 2 Duo processor, 2 GB of SDRAM, and 802.11n wireless, this tiny notebook is nothing to scoff at.  Internal storage comes in two flavors: a 4800 RPM 80 GB hard drive or a 64 GB solid-state drive.

Conspicuously missing from the Air are an optical drive and an Ethernet jack, though external versions of both are available.  Designed for a wireless world, the Air comes with special software that lets your desktop computer act as a file server of sorts, so you can install CD- and DVD-based software over the air.  With the enhanced speed of 802.11n, even large installs should take a relatively short amount of time.
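
As a rough sanity check on that claim, assume 802.11n delivers something like 100 Mbps of real-world throughput (an optimistic assumption); a DVD-sized install then transfers in a few minutes:

```python
# Rough transfer-time estimate for an over-the-air install.
install_gb = 4.0           # a DVD-sized install image
throughput_mbps = 100.0    # assumed real-world 802.11n throughput

minutes = install_gb * 8 * 1000 / throughput_mbps / 60
print(f"{install_gb:.0f} GB over {throughput_mbps:.0f} Mbps takes about {minutes:.1f} minutes")
# ~5.3 minutes
```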

The Air has a few other innovations as well.  The keyboard is backlit and contains an ambient light sensor that automatically lights the keys in low-light conditions.  The touchpad supports multi-touch, allowing gestures to rotate, resize, and more.  A micro-DVI port, hidden behind a small hatch on the side, lets the user connect to a number of different external displays, from DVI to VGA and even S-Video.  The 13.3″ widescreen display is backlit, reducing power consumption while providing brilliant graphics.  And finally, for the eco-conscious, the entire MacBook Air is built to be environmentally friendly.

But can Apple pull this off?  Will the MacBook Air become as popular as they believe it will?  Has the time come for such a wireless device?  Remember, Palm tried to get into this game with the Foleo.  The problem with the Foleo, of course, was that it was nothing more than a glorified phone accessory, depending heavily on the mobile phone for network access.  And while typing email on a full keyboard with a larger display was nice, there was no real “killer app” for the Foleo.

Critics of the MacBook Air point to problems such as the sealed enclosure and the lack of an Ethernet port.  Because the case is sealed, users cannot replace the battery or swap in a larger hard drive.  In fact, though not announced, it appears that the Air will suffer the same battery-replacement problems that the iPod does.  Is this necessarily a killer, though?  I don’t believe so.  In fact, I think it might be time for a fully wireless device such as this, and I’m eager to see where it leads.

Video Cloning

Slashdot ran a story today about a group of graphic designers who put together a realistic World War II D-Day invasion film.  The original story is here.  When Saving Private Ryan was filmed, it took upwards of 1,000 extras to shoot the D-Day scene.  These guys did it with three.

They used green screens and staged pyrotechnics to create the various explosions, props, and other effects.  In the end, the completed scene is incredibly realistic and lifelike.  Incredible work for a small crew.

So, without further ado:

Internet Toll Booths

Net Neutrality has been a hot topic for some time.  At the heart of the issue is a struggle to increase revenues by charging for content as well as access.  The term “Net Neutrality” refers to the idea that the network should be neutral, or free: a “free” network imposes no access restrictions preventing a user from reaching the content they want.  Andrew Odlyzko, Director of the Digital Technology Center at the University of Minnesota, recently published a paper (PDF) on Net Neutrality.  He highlights the struggle between big business and market fairness, a struggle that has existed for a very long time.

Think back to the early days of the Internet, when providers charged for access by transfer volume or by time.  That practice gradually gave way to unlimited access for a flat monthly fee.  More recently, reports have surfaced of providers limiting the total amount of traffic a user can transfer per month.  While providers aren’t coming out and saying it, they have seemingly reverted to the pay-per-megabyte days of old.

More concerning, perhaps, is the newer practice of throttling specific traffic.  While this seems to be centered on BitTorrent and peer-to-peer traffic at the moment, what’s to prevent a provider from throttling traffic to specific sites?  In fact, what’s to prevent the provider from creating “walled gardens” and charging the end user for access to “extra” content not included in the garden?

Apparently nothing, as some companies are already doing this and others have announced plans to.  More recently, the FCC has decided to step in and look into the allegations of data tampering.  Of course, the FCC seems to have problems of its own at the moment.

So what is the ultimate answer to this question?  Should the ISP have the right to block and even tamper with data?  Should the end-user have the right to free access to the Internet?  These are tough questions, ones that have been heavily debated for some time, and will likely be debated far into the future.

My own opinion is shaped by being both a subscriber and an engineer for a service provider.  The provider has built the infrastructure used to access the Internet.  Granted, the funds used to build that infrastructure were provided by the subscribers, but the end result is the same: the infrastructure is owned by the provider.  As with most property, the owner is generally free to do what they want with it, though that can be a pretty hotly debated topic as well, and perhaps a discussion for a later date.

For now, let’s assume that the owner has the right to modify and use what they own, with the only limits being laws that protect safety.  In other words, I am free to dictate the rules in my own hotel.  Kids can only play in the play room, drinks and food are only allowed in the dining room, and no one walks through the hall without shoes on.  I will only provide cable TV with CNN and the Weather Channel, and the pool is only open from 1 pm to 5 pm on weekdays.  As a hotel owner, I can set these rules and enforce them by having any guest who violates them removed.  That is my right as a hotel owner.  Of course, if the guests don’t like my rules, they are free to stay at another hotel.

How is this different from an ISP?  An ISP can set the rules however they want, and the subscriber can vote on those rules through the use of their wallet.  Don’t like the rules?  Cancel your subscription and go elsewhere.

Of course, this brings up a major problem with the current state of Internet access.  Unfortunately, there are many areas, even heavily populated ones, where there is no other provider to go to.  In some cases there is a telephone company to provide access, but no alternative such as cable.  In others, cable exists, but the phone company doesn’t have high-speed access yet.  And, in the grand tradition of greed and power, the providers in those areas are able to charge whatever rates they want (with some limitations, as set by the government), and allow or block access in any manner they wish.  And since there are no alternatives, the subscriber is stuck with service they don’t want at a rate they don’t want to pay.

So, my view is complicated by the fact that competition between providers is non-existent in some areas.  Many subscribers are stuck with the local carrier and have no choice.  And while I believe that the provider should be able to run their network as they choose, it muddies the waters somewhat because the subscriber cannot vote with their wallet unless they are willing to go without access.

I don’t find the idea of a “walled garden” to be that much of a problem, per se.  Look at AOL, for instance.  It flourished for a long time and was a perfect example of a walled garden at the beginning.  More recently AOL has allowed full Internet access, but the core AOL client still exists and lets the company feed specific content to the customer.  If providers were willing to lower rates and provide interfaces such as AOL’s, I can easily see some users jumping at the opportunity.  Likewise, I see users, such as myself, who are willing to pay a premium for unadulterated access to the Internet.

My hope is that the Internet remains unmolested and open to those who want access.  We can only wait and see what will happen in the end.

A new hairpiece for Mozilla?

Back in October I wrote about a new technology from Mozilla Labs called Prism.  Since then, the team at Mozilla has been working on some newer technology.

First up is something called Personas.  Personas is a neat little extension that lets you modify the Firefox theme on the fly.  You are presented with a small menu, accessible via an icon on the status bar.  From the menu, you can choose from a number of designs that re-skin the default Firefox theme.

Overall, Personas is just a neat little extension with no real purpose other than breaking up the monotony.  You can set it to randomly select a persona, which causes the persona to change with each instance of the browser.  More options are definitely needed, such as a custom list of personas to choose from, but it’s a decent start.

More interesting, however, is the second technology I’d like to present.  Dubbed Weave, it is much more in line with what I’ve been looking forward to for years.  Weave gives the user a way to record their individual settings, store them on a remote server, and sync them with any other installation of Firefox.  In fact, Weave aims to let the user sync their preferences with third-party applications as well, such as social networks and other browsers.

To be honest, I have no real interest whatsoever in social networks.  I avoid MySpace like the plague, and I haven’t bothered to look into Facebook at all.  My online collaboration, thus far, has been mostly through traditional means: instant messaging, e-mail, and the Web.  In fact, I’m not sure any of my online activities fall into the so-called “social” category.  So, my interest here lies merely in the distribution of my personal metadata between the applications that I access.  I would love to be able to “log in” to any computer and immediately download my browser settings, bookmarks, and maybe even my browsing history.  Having all of that information in one central location that can be accessed whenever I need it is a wonderful thought.

I currently use the Bookmark Sync and Sort extension which allows me to upload my bookmarks to my own personal server and synchronize them with other installations of Firefox.  Other such extensions exist to allow you to sync with Google, Foxmarks, and more, but I prefer to have complete control over my data, rather than placing it on a third-party server.

Weave promises to be an open framework for metadata handling and services integration.  They offer the following view of the process:

In essence, you access your metadata via a web browser, phone, or some other third-party application.  That application, being Weave-aware, allows you to view and manipulate your metadata.  You can choose to make some of your data available to outside users, such as friends and family, or even make it completely open to the world.  At the same time, any new metadata you create is automatically synchronized with the central servers, updating it instantly wherever you access it.
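
The basic shape of that sync loop is easy to sketch: push your metadata to a server you control and pull it back down from any other machine.  The endpoint, token, and payload layout below are invented for illustration and have nothing to do with Weave's actual protocol; a real implementation would also encrypt the payload before uploading it.

```python
# Hypothetical sketch of a Weave-style sync loop. The endpoint, token, and payload
# layout are invented for illustration; this is not Weave's actual protocol, and a
# real implementation would also encrypt the payload before uploading it.
import json
import urllib.request

SERVER = "https://sync.example.com/metadata"   # hypothetical endpoint
TOKEN = "secret-api-token"                     # hypothetical credential
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

def push(metadata):
    """Upload the local metadata blob to the central server."""
    body = json.dumps(metadata).encode("utf-8")
    req = urllib.request.Request(SERVER, data=body, headers=HEADERS, method="PUT")
    urllib.request.urlopen(req)

def pull():
    """Fetch the latest metadata blob from the central server."""
    req = urllib.request.Request(SERVER, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# On one machine:
push({"bookmarks": [{"title": "Example", "url": "http://example.com"}],
      "preferences": {"theme": "default"}})

# On any other machine, the same data comes back down:
print(pull()["bookmarks"])
```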

Weave looks to be a pretty exciting project, one I plan on keeping an eye on.

HERO Returns!

Greetings and welcome to a new year.  Same as the old year, but incremented by one.  Exciting, eh?

I stumbled across an article the other day about an old friend of mine.  I worked on him all through high school, learning quite a bit about robotics along the way.  His name?  HERO 2000.

HERO had all sorts of cool gadgets, including a full robotic arm, speech synthesis, and a bunch of sensors to detect light, sound, heat, and more.  You could even write programs, in BASIC, that automated the robot to perform different tasks.  I spent quite a bit of time programming him for a variety of tasks, getting him set up for shows, and just playing around with all of the different sensors and other features.  Like I said, I learned a lot.

So, back to the article I mentioned.  Apparently Heathkit, the original maker of the HERO robots, is at it again.  The HERO robot is coming back this year!  The new HE-RObot is supposedly available now, according to an article on DeviceGuru, with educational kits coming in January and February.

According to the specifications, the new HERO runs Windows XP Pro on an Intel Core 2 Duo processor.  I’m not impressed with Windows, but I’m sure that can be replaced easily enough.  In fact, with the large OSS crowd out there, I’ll bet there’s a full Linux install for HERO before the end of the year.

At any rate, the robot comes with a webcam, a CD-ROM/CD-RW drive (for on-the-go burning, of course), a bunch of sensors, speakers, and more.  The only thing I see missing is the arm.  And, unfortunately, based on the available pictures, it doesn’t look like the arm will ever be an option.  There’s just not enough room for it.

So, how about the price?  Well, it appears that White Box Robots is the manufacturer of this particular machine.  According to their website, the Series 9 PC-Bot, which the HE-RObot is based on, runs a cool $7,995.  Ugh.  At that price, I can research and build my own.  There are less expensive models, including a few that run Linux (which means that drivers already exist), so let’s hope Heathkit sells them for a lower price.  I would love to buy one of these as a kit and build it with my sons, but even $5,000 is way out of my price range…  Anyone want to donate one to me?  :)  Pretty please?

Vista

It’s been a while since Microsoft released its newest OS, Vista, and yet the complaints just haven’t stopped.  I just ran across this humorous piece about “upgrading” to Windows XP and decided it was time to write a little bit about Vista.

I can’t say I’m an expert by any means, as I’ve only had limited experience with Vista at this point.  What experience I did have, however, was quite annoying and really turned me away from the thought of installing it.  Overall, Vista has an interesting look.  It’s not that bad, in reality, though it does seem to be a bit of overkill in the eye-candy department.  It feels like Microsoft tried to make everything shiny and attractive, but ended up with a gaudy look instead.

My first experience with Vista involved setting up a Vista machine for network access.  Since setting up networking involves changing system settings, I was logged in as an administrator.  I popped open the control panel to set up the network adapter and spent the next 15 minutes messing around with the settings, prompted time and again to allow the changes I was making.  It was a frustrating experience, to say the least.  Something that takes me less than a minute to accomplish on a Windows XP machine, or even on a Linux machine, takes significantly longer on a Vista machine.

I also noticed frequent, obvious pauses as I manipulated files.  This happened on more than one machine, making me think there’s something wrong with Vista’s file subsystem.  I’ve heard it explained as a DRM mechanism, checking for various DRM schemes in an attempt to enforce them.  Either way, it’s slow, and simple copy-and-paste tasks take forever.

One of my more recent experiences was an attempt to get Vista to recognize a RAZR phone.  I never did get that working, even with Motorola’s Vista-compatible software.  I tried installing, uninstalling, and re-installing the software several times, rebooting in between and enduring the stupid security dialogs all the while.  Vista seemed to recognize the phone but would not let me interact with it.

They say that first impressions are the most important and, up to this point, Vista has not made a good impression on me at all.  If and when I do move to Vista, it will be with me kicking and screaming the entire way…

Review – Portal (PC)

Anticipation: 10
Expectation: 8
Initial Reaction: 10
Overall: 10
Genre: First Person

Way back in 1995, 3D Realms announced that they were creating a game called Prey.  Key to Prey’s gameplay was the use of portal technology: a way to create “rips” in space that can be moved around in real time.  Portals allow the player to move from area to area by creating artificial doorways between them.  Unfortunately, Prey wouldn’t come out until 11 years later.

In 2005, students from the DigiPen Institute of Technology wrote a game, Narbacular Drop, for their senior game project.  Narbacular Drop revolved around a princess named “No-Knees” who is captured by a demon.  She is placed in a dungeon which turns out to be an intelligent being named “Wally.”  Wally can create portals, which the princess uses to escape the dungeon and defeat the demon.

Valve Software hired the Narbacular Drop programmers in mid-2005, and the team set to work on Portal.  Portal, built on the Source engine, is essentially the spiritual successor to Narbacular Drop.  In Portal, the player, Chell, is placed within the Aperture Science test facility and informed that she must complete a series of tests using the new “Aperture Science Handheld Portal Device.”

I won’t go any further into the plot because you really need to experience this game for yourself.  The commentary from GLaDOS (Genetic Lifeform and Disk Operating System), the computer controlling the facility, is definitely worth checking out.  The computer informs, taunts, cajoles, reassures, and lies to you.  And all with the promise of cake, when you finish!

The game is excellent, exquisitely polished from the environments to the controls.  The game mechanic itself is quite simple and very easy to learn.  Gameplay consists of completing a series of puzzles to find the exit, using portals along the way to move from place to place, move boxes, disarm weapons, and more.  A series of advanced puzzles and challenges open up once you have beaten the main game.

This is definitely a game worth checking out.  Go..  Now..

But remember: The cake is a lie.