Whose Problem Is It Anyway?

This week, Adobe released a security patch for their CS5 product line. Adobe releasing security patches isn’t really that surprising given their track record with vulnerable products, but the circumstances surrounding this patch are: Adobe released it reluctantly.

Sometime in May, possibly earlier, Adobe was made aware of a fairly severe security vulnerability in their CS5 product line. A specially crafted image file was enough to compromise the victim’s computer. Obviously this is a pretty severe flaw and should be fixed ASAP, right? Well, Adobe didn’t really see it that way. Their initial response to the problem was that users who wanted a fixed version would have to pay to upgrade to the CS6 product line, in which the flaw was patched. Eventually they decided to backport the patch to the CS5 version.

Adobe’s initial response and their eventual capitulation lead to a broader discussion. Given any security problem, or even any bug in general, who is responsible for fixing it? The vendor, of course, right? Well… Maybe?

In a perfect world, there would be no bugs, security or otherwise. In a slightly less perfect world, all bugs would be resolved before a product is retired. But neither world exists and bugs seem to prevail. So, given that, whose problem is it anyway?

Vendors offer a lot of justifications as to when they’ll patch and how they’ll support something, and, of course, a lot of excuses. It’s not an easy problem for vendors, though, and some put a lot of thought into their policies. They don’t always get them right, and there’s never a way to make everyone happy.

Patching generally follows a product lifecycle. While the product is supported, patching happens as a normal course of business. When a product is retired, some companies put together a formal support plan. For instance, when Cisco announces that a product has entered the End-of-Life cycle, they lay out a multi-year plan for support. Typically this involves regular software maintenance for a year, security releases for 2-3 years, and then hardware maintenance for the remainder. This gives businesses ample time to find a suitable replacement.

Unfortunately, not all vendors act responsibly and often customers are left high and dry when a product is suddenly obsoleted. Depending on the vendor, this sometimes leads to discussions about the possibility of legislation forcing vendors to support products, or to at least address security vulnerabilities. If something like this were to pass, where does it end? Are vendors forced to support products forever? Should they only have to fix severe security problems? And what constitutes a severe security problem?

There are a multitude of reasons that bugs, security or otherwise, are not dealt with. Some are justifiable, others not. Working in networking, the primary excuse I’ve heard from hardware vendors over the years is that the management interface of their product is not intended to be on a public network where it can be attacked, or that the management interface should be put behind a firewall where it can’t be attacked. These excuses are garbage, of course, but some vendors just continue to give them. And, unfortunately, you’re not always in a position to drop a vendor and move elsewhere. So we do what we can to secure the systems and move on.

And sometimes the problem isn’t the vendor, but the customer. How long has it been since Microsoft phased out older versions of its Windows operating system? Windows XP is relatively recent, but it’s been a number of years since Windows 2000 was phased out. Or how about Windows 98, 95, and even Windows NT? And customers still have these deployed in their networks. Hell, I know of at least one OS/2 Warp system that’s still deployed in a Telco Central Office!

There is a basis for some regulation, however, and it may affect vendors. When the security of a particular product can significantly impact the public, it can be argued that regulation is necessary. The poster children for this argument are SCADA systems, which seem to be perpetually riddled with security holes, mostly due to outdated operating systems.

SCADA systems are what typically control things like the electrical grid and nuclear power plants. For obvious reasons, security flaws in these systems are deadly serious. I often hear that these systems should be air gapped from the Internet, but the lure of easy access and control often pushes operators to ignore this advice.

So should SCADA systems be regulated? The regulations already in place for the industries that use them clearly aren’t working, so what makes us think that more regulation will help? And if we regulate and force vendors to provide patches for security problems, what makes us think that those industries will install them?

This is a complex problem and there are no easy answers. The best we can hope for is a competent administrator who knows how to handle security and deal with threats properly. Until then, let’s hope for incompetent criminals.

Protecting Sources in the 21st Century

Trust is key in many situations. This can be especially true for journalists interested in reporting on sensitive matters. If journalists couldn’t be trusted to protect the identity of their confidential sources, many news items we take for granted would never have been written, or perhaps they wouldn’t have included some of the crucial information they revealed. For instance, much of the critical information about the Watergate scandal was given to reporters by a confidential source who went by the name of Deep Throat.

Until recently, reporters made contact with their sources via anonymous phone calls, often from pay phones, secret meetings, and dead drops. The identity of sources could be kept secret fairly easily, especially if the meetings were carefully conducted in such a manner as to leave little or no trail for anyone to follow. This meant avoiding the use of phones, as they were traceable. Additionally, many journalists were willing to risk jail time instead of revealing their sources.

With the advent of the Internet, it became possible to contact sources, both local and distant, quickly and conveniently via email or some form of instant messaging. The ability to reach out to a source and get an almost immediate answer means journalists can quickly deal with rapidly evolving stories. The anonymity of the Internet means that sources stay anonymous. It’s a win-win situation.

Or is it…

I was listening to an On The Media podcast recently and they featured a story about how reporters using the Internet are, in some cases, exposing their contacts without meaning to, often without even knowing it. You can listen to the story below or read the transcript.

Before the Internet, phone conversations were sometimes considered an acceptable risk for contacting sources. After all, tracing a phone call generally took a court order to accomplish. The Internet, however, is a completely different beast. Depending on the communications software used, tracing the owner of an account can be accomplished very easily by just about anyone. Software such as Netglub or Maltego can be used to quickly gather intel on someone, starting with something as small and simple as a single email address.

Email accounts are generally accessible from anywhere in the world, protected by only a username and password. Brute forcing software can be used to crack a password in a relatively short time, allowing someone direct access to the mail stored in the account. And if the mail is sent in clear text, someone trying to identify the source can easily read email sent between the reporter and their source without anyone being the wiser.

Other accounts can be similarly attacked. The end result of identifying the source can be mere embarrassment, or perhaps the source losing their job. Or, as is often the case when foreign news sources are involved, the source can be hunted down and killed.

For a reporter, protecting a source has always been important, but in some cases it’s a matter of life and death. In the past few years, unrest overseas in places such as Iran, Egypt, Syria, and others has shown that secure communication methods are necessary to help save the lives of those fighting for change. Governments have been ruthless in hunting down and eliminating those who would oppose them. Secure methods of communication have become lifelines for opposition forces. Likewise, reporters and anyone else who interacts with these sorts of contacts should be using whatever security methods they can to ensure that their sources are protected.
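
To make that concrete, here’s a minimal sketch of the idea behind tools like PGP: encrypt to the recipient’s public key so that only they can read the message, rather than sending clear text. It’s written in PHP using the OpenSSL extension, and the key file path and message are placeholders. In practice you’d reach for a vetted tool (GnuPG, OTR, and the like) instead of rolling your own, so treat this purely as an illustration.

    <?php
    // Illustrative sketch only: encrypt a short note to a source's public key
    // so that only the holder of the matching private key can read it.
    // "source_pubkey.pem" is a placeholder path, not a real file.

    $message = 'Meet at the usual place, 9pm.';

    $pubkeyPem = file_get_contents('source_pubkey.pem');
    $pubkey    = openssl_pkey_get_public($pubkeyPem);
    if ($pubkey === false) {
        die("Could not load public key\n");
    }

    // RSA public-key encryption; fine for a short note like this one.
    if (!openssl_public_encrypt($message, $ciphertext, $pubkey)) {
        die("Encryption failed\n");
    }

    // Base64-encode the result so it can travel safely in an email body.
    echo base64_encode($ciphertext) . "\n";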

Towards Building More Secure Networks

It is no surprise that security is at the forefront of everyone’s minds these days. From high profile breaches to script kiddies wreaking havoc across the Internet, it is obvious that there are some weaknesses that need to be addressed.

In most cases, a complete network redesign is out of the question; it would be extremely invasive and costly. However, it may be possible to augment the existing network in such a manner as to add additional layers of security. It may also open the door to even more changes down the road.

So what do I mean by this? Allow me to explain…

Many networks are fairly simple, with only a few subnets, typically a user subnet and a server subnet. Sometimes there’s a bit of complexity on the user side, with subnets per department or per building. Often this has more to do with the manageability of users than with security. Regardless, it’s a good practice that can be used to make a network more secure in the long run.

What is often neglected is the server side of things. Typically there are one, maybe two subnets. Outside users are granted access to the standard web ports. Sometimes more ports, such as ssh and ftp, are opened for a variety of reasons. What administrators don’t realize, or don’t intend, is that they’re allowing outsiders direct access to their core servers without any sort of security in front of them. Sure, sure, there might be a firewall, but a firewall is there to ensure you only come in on the proper ports, right? If your traffic is destined for port 80, it doesn’t matter whether it’s malicious or not; the firewall lets it through anyway.

But what’s the alternative? What can be done instead? Well, what about sending outside traffic to a separate network where the systems being accessed are less critical, and designed to verify traffic before passing it on to your core servers? What I’m talking about is creating a DMZ network and forcing all users through a proxy. Even a simple proxy can help to prevent many attacks by merely dropping illegal traffic and not letting it through to the core server. Proxies can also be heavily fortified with HIDS and other security software designed to look for suspicious traffic and block it.
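
As a rough illustration of the validate-then-forward idea, here’s a toy front end of the kind that could sit on a DMZ host, written in PHP with the cURL extension. The internal hostname and path whitelist are made up for the example; a real deployment would use a purpose-built proxy with HIDS and friends layered on top, not a hand-rolled script.

    <?php
    // Toy DMZ front end: accept only known-good paths and methods,
    // then forward the request to the core web server. Purely illustrative.

    $internalHost = 'http://app-core.internal';   // assumed internal server name
    $allowedPaths = array('/', '/index.php', '/search.php');

    $path   = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
    $method = $_SERVER['REQUEST_METHOD'];

    // Drop anything that isn't an expected request outright.
    if ($method !== 'GET' || !in_array($path, $allowedPaths, true)) {
        http_response_code(403);
        exit("Request refused\n");
    }

    // Forward the vetted request to the core server and relay the response.
    $ch = curl_init($internalHost . $_SERVER['REQUEST_URI']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    http_response_code($code ?: 502);
    echo $body;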

By adding in this DMZ layer, you’ve put a barrier between your server core and the outside world. This is known as layered defense. You can add additional layers as time and resources allow. For instance, I recommend segmenting away database servers as well as identity management servers. Adding this additional segmentation can be done over time as new servers come online and old servers are retired. The end goal is to add this additional security without disrupting the network as a whole.

If you have the luxury of building a new network from the ground up, however, make sure you build this in from the start. There is, of course, a breaking point. It makes sense to create networks to segregate servers by security level, but it doesn’t make sense to segregate purely to segregate. For instance, you may segregate database and identity management servers away from the rest of the servers, but segregating Oracle servers away from MySQL servers may not add much additional security. There are exceptions, but I suggest you think long and hard before you make such an exception. Are you sure that the additional management overhead is worth the security? There’s always a cost/benefit analysis to perform.

Segregating networks is just the beginning. The purpose here is to enhance security. By segregating networks, you can significantly reduce the number of clients that need to access a particular server. The whole world may need to access your proxy servers, but only your proxy servers need to access the actual web application servers. Likewise, only your web application servers need access to your database servers. Using this information, you can tighten down your firewall. But remember, a firewall is just a wall with holes in it. The purpose is to deflect random attacks, but it does little to nothing to prevent attacks on ports you’ve opened. For that, there are other tools.

At the very edge, simplistic firewalling and generally loose HIDS can be used to deflect most attacks. As you move further within the network, additional security can be used. For instance, deploying an IPS at the very edge of the network can result in the IPS being quickly overwhelmed. Of course, you can buy a bigger, better IPS, but to what end? Instead, you can move the IPS further into the network, placing it where it will be more effective. If you place it between the proxy and the web server, you’ve already ensured that the only traffic hitting the IPS is loosely validated HTTP traffic. With this knowledge, you can reduce the number of signatures the IPS needs, concentrating on high quality HTTP signatures. Likewise, an IPS between the web servers and database servers can be configured with high quality database signatures. You can, in general, direct the IPS to block any and all traffic that falls outside of those parameters.

As the adage goes, there is no silver bullet for security. Instead, you need to use every weapon in your arsenal and put together a solid defense. By combining all of these techniques, you can defend against many attacks. But remember, there’s always a way in. You will not be able to stop the most determined attacker; you can only hope to slow him down enough to limit his access. And remember, securing your network is only one aspect of security. Don’t forget about other low hanging fruit such as SQL injection, cross site scripting, and other common application holes. You may have the most secure network in existence, but a simple SQL injection attack can still result in a massive data breach.

Monitoring as a Lifestyle

A few years ago, I wrote a blog entry about losing weight using the Wii Fit. This worked really well for me and I was quite happy with the weight I lost. But I found, over time, that I put at least some of the weight back on. Most of this, I believe, was due to not having a full understanding of how much I was eating.

I’ve since switched from using the Wii Fit to using the XBox Kinect for fitness. I also go to fitness classes outside of home, but that’s a more recent change. But this blog entry isn’t really about fitness alone. It’s about monitoring your lifestyle, keeping track of the data you generate on a daily basis. Right now, I track a lot of personal data about my weight, what I eat, how often I work out, how I sleep, etc.

Allow me to lay out some of the tools I use on a daily basis. First off, my phone. I happen to be an iPhone user at the moment, though any modern smartphone has somewhat similar capabilities. Using my phone, I can view and edit my data whenever I need to, wherever I am. There are literally thousands of applications that can be used to track data about yourself. I’m hoping to be able to aggregate all or most of this data in a single location at some point, but for now, it’s spread across a few different services.

I’m typically fairly private about my data and I tend to avoid most cloud services. However, I have found that it’s virtually impossible to do the type of tracking I want without having to build every single tool myself. So, instead, I use a few online services and provide them with virtually no personal information about myself beyond what is required to make the service work.

So what am I using, anyway? Let’s start with how I track my diet. I’m using a service called My Fitness Pal to track my daily caloric intake. This has significantly helped me redefine my dietary habits and realize how much I should be eating. Previously, I would try to reduce my intake by spreading out meals over the course of the day. While this is a great habit, I believe I was still eating more than I should have been, despite my intent. Using the MyFitnessPal application, I get a clear view of where I stand at any point during the day. I’ve been able to significantly reduce my intake without having to shun the foods I love.

On the fitness side of things, I work out every morning before work using XBox Kinect and Your Shape Fitness. I switched over to this when the original Your Shape game came out and I’ve been quite happy. The Wii Fit is a great tool to start with, and it has the benefit of checking your weight every time you play, something I do miss with Your Shape. But the Wii Fit exercises became far too easy to complete. Your Shape pushes a bit harder, bringing a higher level of exercise to my daily routine. And now with the new version, they’ve raised the bar a bit, allowing me to push even harder. There are a few areas I’d like to see improvements in, but overall, I don’t have many complaints.

Using the Your Shape app on my phone, I get a readout of my exercise for the day, as well as an estimate of the calories I burned. I take this information and enter it into the My Fitness Pal application. Doing this allows me to increase my allotment of calories for the day based on how active I have been. In a way, I guess it works like a reward system, granting me the ability to enjoy a little more each day I spend time to work out.

I also wear a Jawbone Up. The Up is a pretty cool little device that tracks your movement during the day and your sleep patterns at night. It can also be used to track your food, though the interface for this is a bit lacking, which is why I use MyFitnessPal. The Up gives me a great view of how active I am during the day, as well as a view of how well I’m sleeping at night. Jawbone has had a bit of a hard time with this particular product, but my personal experience has been pretty positive thus far.

I have a few applications on my phone for tracking runs, though I use them for walking instead; I’m not much of a runner. These applications are a dime a dozen, and I don’t really have a preference at this point. As long as the application gives feedback on distance and route, it’s typically good enough. The application for the Up has this capability as well, though I haven’t had a chance to try it out yet.

And finally, I use an application to track my weight on a daily basis. One of the first things I do in the morning is weigh myself. I’m currently using an application called TargetWeight by Tactio. Basically, this application tracks your weight over time, offering up a few features to help along the way. If you enter a target weight, the application will show you the weight left to lose as part of the icon on your phone. Additionally, it will attempt to predict when you’ll hit your target weight based on the historical data it has collected. There’s a nice graphical view of your weight over time as well. Entering your weight is a quick process each morning and is one of the biggest motivators for me. There’s also an option to use a WiFi enabled Withings scale to enter your data wirelessly.

All together, these various applications and tools give me much better insight into my daily health. This approach is obviously not for everyone, but it has worked wonders for me: I’ve lost about 30 pounds or so in the past two months, and I’m getting quite close to my current target weight. To each his own, but I’m quite happy with the results.

MAKE : Mass Monitor Rebuild

A few years ago, I came across a Mass EDI 4-monitor display. The computer system I had just happened to have two dual-display video cards, so it was a perfect match. Last year, one of the displays burned out and had to be replaced. Unfortunately, Mass wanted upwards of $500 for a new display. I did have a number of Dell displays available, though, and decided to look into adding one of those to the mix.

My initial attempt at adding a Dell to the mix was fairly crude, but it worked. I decided to rebuild the entire array this past week and remove the remaining three Mass monitors. There were two main reasons for this. First, the crude setup I had with the first Dell monitor wasn’t an ideal situation. The way the new monitor was mounted, it pressed up against the others and was difficult to adjust. The second reason was that I have a new video card, a Galaxy nVidia GeForce 210, that requires DVI and not VGA. The version of the Mass display I had didn’t support DVI.

And so I started to look at how to better mount a Dell display on a Mass multi-monitor array. The Dell monitor I used initially was a 1907FP. The general size was about right; it just needed to be lifted up away from the lower monitor a bit. The main problem with the existing mount was that coupling the Mass mounting bracket to the Dell mounting bracket left really only one location where it could be placed without adding additional hardware. The Dell monitor has a small button on the back to release it from its mounting, and the Mass has a lever of sorts that does the same, so the coupling had to take both of these removal mechanisms into consideration. I spoke with a colleague about the problem and we came up with a small coupling plate that would raise the Dell monitor up, keep both removal mechanisms clear, and allow for much better adjustment of the resulting monitor array.

Assembly was pretty straightforward. To attach the coupling plate to the Dell monitor, the Dell mount had to be removed from the original stand and lined up with the coupling plate, and matching holes were drilled.

Once the Dell side was finished, the Mass mount was removed from the original monitor and paired up with the augmented Dell mount.

And finally, the new augmented mounting brackets were attached to both the Dell monitor and the Mass monitor array. The dangling VGA cable was for testing prior to the installation of the new video card.

All that remains now is general adjustment of the new monitors. There’s a single Hex screw on the Mass array behind each monitor that can be used to adjust the monitors up and down, as well as some angled movement. This should allow me to adjust the display to exactly what I need. And it now works with the new video card, which was a breeze to install and get running in Fedora.

I love it when a plan comes together.

Contemplating the Future

In 2005 I obtained a job at a regional ILEC as a Data Operations Technician. As part of this job, I took over development of one of the tools we used to diagnose customer DSL connections. Problem was, this tool was written in PHP, a programming language I was, as yet, unfamiliar with.

At the same time, I was also looking for a web-based tool I could use to keep track of various tasks. While there were a few open-source tools I could use, none had the features I was looking for. So I decided to write one myself, and to write it in PHP so I could learn the language better. In the end, I’m glad I did as PHP has become indispensable for writing web-based tools.

The tool I wrote was a web-based todo manager called phpTodo. Since the alpha release in 2005, I have released 7 more versions. Work on phpTodo has ebbed and flowed with time, often interrupted by work and life in general. In fact, the last formal release was made almost 5 years ago, bringing the current version up to 0.8.1. In 2009, I found out that phpTodo was being packaged and released with Fedora as well.

After releasing 0.8.1, I decided to switch from using categories to using tags, similar to how the blogging system I use, Serendipity, uses them. This required rewriting a good deal of the back end of the system, as well as making extensive changes to the front end. I also started using the Prototype and Scriptaculous Javascript frameworks, and then later switched to jQuery. In all, a great deal of code has been rewritten.

I’m quite happy with the general feel of the new version I’ve been working on. While there is a good deal more code to be written, I’m confident there will be a code release soon enough.

I’ve been thinking a lot about the future of phpTodo and where I want to take it. When I originally started, I wrote the system such that I could see my todo list items via an RSS feed. At the time, I had a Blackberry phone and this worked brilliantly. Of course, this was purely a one-way feed with no way to update any todo items on the go. Since that time, I started working on a mobile view for the system, but stopped quickly after I realized how horrible working with WAP was. Fortunately, technology has progressed quickly since that time and WAP is no longer necessary. So, I’m considering working on a mobile version again.

A mobile version brings new challenges, however. It should be trivial to develop a mobile view that can be used while online, but my hope was to have an offline version as well that can be synchronized with the online version. One possibility is to develop an app that can be loaded onto a phone. That, of course, severely limits the platforms it can be run on. Another possibility is an HTML5 version, though that brings challenges of its own.

Another thought was to build a web service into phpTodo. The basic premise is an XML generator that, given a set of parameters, can supply an XML feed for external systems to use as input, plus an XML parser that can receive data from external systems in order to update phpTodo data. I believe this can serve as the interface for the mobile view.
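
As a rough sketch of what the generator half might look like, here’s a minimal PHP endpoint that turns a hypothetical items table into an XML feed. The table layout, column names, and query parameters are invented for illustration and don’t reflect phpTodo’s actual schema.

    <?php
    // Hypothetical XML feed endpoint; the schema and parameters are invented
    // for illustration and do not reflect phpTodo's real internals.
    header('Content-Type: application/xml; charset=utf-8');

    $tag   = isset($_GET['tag']) ? $_GET['tag'] : null;
    $limit = isset($_GET['limit']) ? (int)$_GET['limit'] : 25;

    $db     = new PDO('mysql:host=localhost;dbname=phptodo', 'user', 'pass');
    $sql    = 'SELECT i.id, i.title, i.due, i.done FROM items i';
    $params = array();
    if ($tag !== null) {
        $sql .= ' JOIN item_tags t ON t.item_id = i.id WHERE t.tag = ?';
        $params[] = $tag;
    }
    $sql .= ' ORDER BY i.due LIMIT ' . (int)$limit;

    $stmt = $db->prepare($sql);
    $stmt->execute($params);

    // Build the <todos> document one <item> at a time.
    $xml = new SimpleXMLElement('<todos/>');
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        $item = $xml->addChild('item');
        $item->addChild('id', (string)$row['id']);
        $item->addChild('title', htmlspecialchars($row['title']));
        $item->addChild('due', $row['due']);
        $item->addChild('done', $row['done'] ? 'true' : 'false');
    }
    echo $xml->asXML();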

A web service could also power another idea I had. I stumbled across the website of Brett Terpstra a while back and found a treasure trove of interesting ideas and useful code snippets. Among these is an obsession with recording notes to keep track of projects, interesting ideas, and helpful code snippets. Brett uses a number of custom scripts and software packages, most of which are exclusive to his platform of choice, OS X. To be honest, I find this incredibly intriguing, and potentially useful. So I’ve been thinking about developing a command-line tool I can use to interact with phpTodo, and a web service could make that a great deal easier.
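
A trivial command-line consumer of a feed like the one sketched above shows how little code such a tool might need. The URL and element names here are placeholders, not anything phpTodo actually exposes yet.

    #!/usr/bin/env php
    <?php
    // Rough CLI sketch: fetch the (hypothetical) XML feed and print open items.
    // The URL and element names match the illustrative feed above, not real code.

    $feedUrl = 'https://example.com/phptodo/feed.php?tag=work';

    $xml = @simplexml_load_file($feedUrl);
    if ($xml === false) {
        fwrite(STDERR, "Could not fetch or parse the feed\n");
        exit(1);
    }

    foreach ($xml->item as $item) {
        if ((string)$item->done === 'false') {
            printf("[%s] %s (due %s)\n", $item->id, $item->title, $item->due);
        }
    }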

I have no plans to stop working on the project, and, in fact, I’m eager to keep moving forward. Since I rely on phpTodo itself for my daily work, I benefit directly from every improvement I make to the system. So overall, the future of phpTodo is bright.

Mega Fail

So this happened :

Popular file-sharing website Megaupload shut down
Megaupload shut down by feds, seven charged, four arrested
Megaupload assembles worldwide criminal defense
Department of Justice shutdown of rogue site MegaUpload shows SOPA is unnecessary
And then.. This happened :

Megaupload Anonymous hacker retaliation, nobody wins

And, of course, the day before all of this happened was the SOPA/PIPA protest.

Wow.. The government, right? SOPA/PIPA isn’t even on the books, people are up in arms over it, and then they go and seize one of the largest file sharing websites on the planet! We should all band together and immediately protest this illegal seizure!

But wait.. hang on.. Since when does jumping to conclusions help? Let’s take a look and see what exactly is going on here.. According to the indictment, this case went before a grand jury before any takedown was performed. Additionally, this wasn’t an all-of-a-sudden thing. Megaupload had been contacted in the past about copyright violations and failed to deal with them as per established law.

There are a lot of people who are against this action. In fact, the hacktivist group Anonymous decided to display their displeasure by performing DDoS attacks against high profile sites such as the US DoJ, MPAA, and RIAA. This doesn’t help things and may actually hurt the SOPA/PIPA protest in the long run.

Now, I’m not going to say that the takedown was right and just; there’s simply not enough information yet, and it may turn out that the government was dead wrong with this action. But at the moment, I have to disagree with those who point to this as an example of an illegal takedown. As a friend of mine put it, if the corner market is selling illegal bootleg videos, when they finally get raided, the store gets closed. Yes, there were legal uses of the services on the site, but the corner store sold milk too.

There are still many, many copyright and piracy issues to deal with. And it’s going to take a long time to deal with them. We need to be vigilant, and protesting when necessary does work. But jumping to conclusions like this, and then attacking sites such as the DoJ are not going to help the cause. There’s a time and a place for that, and I don’t believe we’re there yet.

Who turned the lights out?

You may have noticed that a number of websites across the Internet today have modified their look a bit. In many cases, the normal content of that site is unreachable. Why would they do such a thing, you may ask? Well, there are two proposed laws, SOPA and PIPA, that threaten what we, today, enjoy as the Internet. The short version of these laws is that, basically, if you’re found to have any material on your website that infringes copyright, you face having your website shut down, without due process, all of your advertising pulled, being stricken from search engines, and possible jail time. Pretty draconian. There are a number of places that can explain, in more detail, what the full text of the legislation says. If you’re interested, check out americancensorship.org or eff.org.

Or, you can check out this video, from ted.com, that explains the legislation and why it’s so bad.


If you’re coming here after the 18th of January, here are some images of the protests.

Google


Wikipedia


Wired.com

Blacklisted!

Back in October of 2011, a bill was introduced in the House of Representatives called HR.3261, or the “Stop Online Piracy Act (SOPA).” Go take a look, I’ll wait. It’s a relatively straightforward bill, especially compared to others I’ve looked at. Hell, it’s only 15 pages long! And it’s going to kill the Internet.

Ok, ok.. It won’t *KILL* the Internet, but it could ruin what we consider to be the Internet. Personally, I believe that if this passes, it has the potential to turn the Internet into nothing more than a collection of business websites, at least in the US.

So how does this thing work? Well, it’s actually pretty straightforward. If your website is suspected of infringing on copyrighted material, your website is taken down, any advertising you have on your site is cut, and you are removed from search engines. But so what, you deserve it! You were breaking copyright law!

Not so fast. This applies to *any* content on your website. So if someone comments on a blog entry, or you innocently link to a website that infringes copyright, or other situations out of your control, you’re responsible. Basically, you have to police every single comment, link, etc. that appears on your website.

It’s even worse for service providers since they have to do the blocking. Every infringing site is blocked via DNS. And since the US doesn’t have control of all of DNS, and some infringing sites are not located in the US, this means we move into the realm of having DNS blacklist files. The ISP becomes the responsible party if they fail to block these sites, which in turn means more overhead for the ISP. Think you pay a lot for Internet access now?

So what can you do? Well, for one, you can contact your representative and tell them how insane this whole idea is. And you can protest SOPA itself by putting up a protest overlay on your site. There’s a github project with all of the source code you need to add an overlay to your website. Or, if you have a Serendipity web blog, you can download the Stop SOPA plugin I’ve written.

Get out there and protest!

Bringing Social To The Kernel

Imagine a world where you can login to your computer once and have full access to all of its functionality, plus seamless access to all of the web sites you visit on a daily basis. No more logging into each site individually; your computer’s operating system takes care of that for you.

That world may be coming quicker than you realize. I was listening to a recent episode of the PaulDotCom security podcast today. In this episode, they interviewed Jason Fossen, a SANS Security Faculty Fellow and instructor for SEC 505: Securing Windows. During the conversation, Jason mentioned some of the changes coming to the next version of Microsoft’s flagship operating system, Windows 8. What he described was, in a word, horrifying…

Not much information is out there about these changes yet, but it’s possible to piece together some of it. Jason mentioned that Windows 8 will have a broker system for passwords. Basically, Windows will store all of the passwords necessary to access all of the various services you interact with. Think something along the lines of 1Password or LastPass. The main difference being, this happens in the background with minimal interaction with the user. In other words, you never have to explicitly login to anything beyond your local Windows workstation.

Initially, Microsoft won’t have support for all of the various login systems out there. They seem to be focusing on their own service, Windows Live, and possibly Facebook. But the API is open, allowing third-parties to provide the necessary hooks to their own systems.

I’ve spent some time searching for more information and what I’m finding seems to indicate that what Jason was talking about is, in fact, the plan moving forward. TechRadar has a story about the Windows 8 Credential Vault, where website passwords are stored. The credential vault appears to be a direct competitor to 1Password and LastPass. As with other technologies that Microsoft has integrated in the past, this may be the death knell for password managers.

ReadWriteWeb has a story about the Windows Azure Access Control Service that is being used for Windows 8. Interestingly, this article seems to indicate that passwords won’t be stored on the Windows 8 system itself, but in a centralized “cloud” system. A system called the Access Control Service, or ACS, will store all of the actual login information, and the Windows 8 Password Broker will obtain tokens that are used for logins. This allows users to access their data from different systems, including tablets and phones, and retain full access to all of their login information.

Microsoft is positioning Azure ACS as a complete claims-based identity system. In short, this allows ACS to become a one-stop shop for single sign-on. I log into Windows and immediately have access to all of my accounts across the Internet.
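
To get a feel for what “obtaining a token” means in a claims-based system, here’s a deliberately generic sketch in PHP: an identity provider signs a small set of claims, and a relying service verifies the signature instead of ever seeing a password. This is not Microsoft’s actual ACS protocol or API, just the general shape of the idea.

    <?php
    // Generic claims-token illustration; NOT Microsoft's actual ACS format or API.
    // An identity provider signs a small set of claims with a shared key, and the
    // relying service verifies the signature instead of ever handling a password.

    function issue_token(array $claims, $secret) {
        $payload = base64_encode(json_encode($claims));
        $sig     = hash_hmac('sha256', $payload, $secret);
        return $payload . '.' . $sig;
    }

    function verify_token($token, $secret) {
        $parts = explode('.', $token);
        if (count($parts) !== 2) {
            return false;
        }
        list($payload, $sig) = $parts;
        if (!hash_equals(hash_hmac('sha256', $payload, $secret), $sig)) {
            return false;   // tampered with or forged
        }
        // For brevity this skips the expiry checks a real system would perform.
        return json_decode(base64_decode($payload), true);
    }

    $secret = 'shared-secret-for-illustration';
    $token  = issue_token(array('user' => 'alice', 'exp' => time() + 3600), $secret);

    // The relying service only ever sees the token, never the user's password.
    var_dump(verify_token($token, $secret));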

Sounds great, right? In one respect, it is. But if you think about it, you’re making things REALLY easy for attackers. Now they can, with a single login and password, access every system you have access to. It doesn’t matter that you’ve used different usernames and passwords for your bank accounts. It doesn’t matter that you’ve used longer, more secure passwords for those sensitive sites. Once an attacker gains a foothold on your machine, it’s game over.

Jason also mentioned another chilling detail. You’ll be able to login to your local system using your Windows Live ID. So, apparently, if you forget your password for your local user, just login with your Windows Live ID. It’s all tied together. According to the TechRadar story, “if you forget your Windows password you can reset it from another PC using your Windows Live ID, so you don’t need to make a password restore USB stick any more.” They go on to say the following :

You’ll also have to prove your identity before you can ‘trust’ the PC you sync them to, by giving Windows Live a second email address or a mobile number it can text a security code to, so anyone who gets your Live ID password doesn’t get all your other passwords too – Windows 8 will make you set that up the first time you use your Live ID on a PC.

You can always sign in to your Windows account, even if you can’t get online – or if there’s a problem with your Live ID – because Windows 8 remembers the last password you signed in with successfully (again, that’s encrypted in the Password Vault).

With this additional tidbit of information, it would appear that an especially crafty attacker could even go as far as compromising your entire system, without actually touching your local machine. It may not be easy, but it looks like it’ll be significantly easier than it was before.

Federated identity is an interesting concept. And it definitely has its place. But, I don’t think tying everything together in this manner is a good move for security. Sure, you can use your Facebook ID (or Twitter, Google, OpenID, etc) already as a single login for many disparate sites. In fact, these companies are betting on you to do so. This ties all of your activity back to one central place where the data can be mined for useful and lucrative bits. And perhaps in the realm of a social network, that’s what you want. But I think there’s a limit to how wide a net you want to cast. But if what Jason says is true, Microsoft may be building the equivalent of the One Ring. ACS will store them all, ACS will verify them, ACS will authenticate them all, and to the ether supply them.