Hacker is not a dirty word

Have you ever had to fix a broken item when you didn’t have the right parts? Instead of just giving up, you looked around and found something that would work for the time being. Occasionally, you come back later and fix it the right way, but more often than not, that fix stays in place indefinitely. Or perhaps you’ve found a novel use for a device. It wasn’t built for that purpose, but you figured out that it fit the exact use you had in mind.

Those are the actions of a hacker. No, really. If you look up the definition of a hacker, you get all sorts of responses. Wikipedia has three separate entries for the word hacker in relation to technology:

Hacker – someone who seeks and exploits weaknesses in a computer system or computer network

Hacker – (someone) who makes innovative customizations or combinations of retail electronic and computer equipment

Hacker – (someone) who combines excellence, playfulness, cleverness and exploration in performed activities

Google defines it as follows:

1. a person who uses computers to gain unauthorized access to data.

(informal) an enthusiastic and skillful computer programmer or user.

2. a person or thing that hacks or cuts roughly.

And there are more. What’s interesting here is that depending on where you look, the word hacker means different things. It has become a pretty contentious word, mostly because the media has, over time, used it to describe the actions of a particular type of person. Specifically, hacker is often used to describe the criminal actions of a person who gains unauthorized access to computer systems. But make no mistake, the media has this completely wrong and is using the word improperly.

Sure, the person who broke into that computer system and stole all of that data is most likely a hacker. But, first and foremost, that person is a criminal. Being a hacker is a lifestyle and, in many cases, a career choice. Much like being a lawyer or a doctor is a career choice. Why then is hacker used as a negative term to identify criminal activity and not doctor or lawyer? There are plenty of instances where doctors, lawyers, and people from a wide variety of professions have indulged in criminal activity.

Keren Elazari spoke in 2014 at TED about hackers and their importance in our society. During her talk she discusses the role of hackers in our society, noting that there are hackers who use their skills for criminal activity, but many more who use their skills to better the world. From hacktivist groups like Anonymous to hackers like Barnaby Jack, these people have changed the world in positive ways, helping to identify weaknesses in everything from computer systems to governments and laws. In her own words:

My years in the hacker world have made me realize both the problem and the beauty about hackers: They just can’t see something broken in the world and leave it be. They are compelled to either exploit it or try and change it, and so they find the vulnerable aspects in our rapidly changing world. They make us, they force us to fix things or demand something better, and I think we need them to do just that, because after all, it is not information that wants to be free, it’s us.

It’s time to stop letting the media use this word improperly. It’s time to take back what is ours. Hacker has long been a term used to describe those we look up to, those we seek to emulate. It is a term we hold dear, a term we seek to defend. When Loyd Blankenship was arrested in 1986, he wrote what has become known as the Hacker’s Manifesto. This document, often misunderstood, describes the struggle many of us went through, and the joy of discovering something we could call our own. Yes, we’re often misunderstood. Yes, we’ve been marginalized for a long time. But times have changed since then and our culture is strong and growing.

Network Enhanced Telepathy

I’ve recently been reading Wired for War by P.W. Singer, and one of the concepts he mentions in the book is Network Enhanced Telepathy. This struck me as not only something that sounds incredibly interesting, but something that we’ll probably see hit the mainstream in the next 5-10 years.

According to Wikipedia, telepathy is “the purported transmission of information from one person to another without using any of our known sensory channels or physical interaction.” In other words, you can think *at* someone and communicate. The concept that Singer talks about in the book isn’t quite as “mystical” since it uses technology to perform the heavy lifting. In this case, technology brings fantasy into reality.

Scientists have already developed methods to “read” thoughts from the human mind. These methods are by no means perfect, but they are a start. As we’ve seen with technology across the board from computers to robotics, electric cars to rockets, technological jumps may ramp up slowly, but then they rocket forward at a deafening pace. What seems like a trivial breakthrough at the moment may well lead to the next step in human evolution.

What Singer describes in the book is one step further. If we can read the human mind, and presumably write back to it, then adding a network in-between, allowing communication between minds, is obvious. Thus we have Network Enhanced Telepathy. And, of course, with that comes all of the baggage we associate with networks today. Everything from connectivity issues and lag to security problems.

The security issues associated with something like this range from inconvenient to downright horrifying. If you thought social engineering was bad, wait until we have a direct line straight into someone’s brain. Today, security issues can result in stolen data, denial of service issues, and, in some rare instances, destruction of property. These same issues may exist with this new technology as well.

Stolen data is pretty straightforward. Could an exploit allow an attacker to arbitrarily read data from someone’s mind? How would this work? Could they pinpoint the exact data they want, or would they only have access to the current “thoughts” being transmitted? While access to current thoughts might not be as bad as exact data, it’s still possible this could be used to steal important data such as passwords, secret information, etc. Pinpointing exact data could be absolutely devastating. Imagine, for a moment, what would happen if an attacker was able to pluck your innermost secrets straight out of your mind. Everyone has something to hide, whether that’s a deep dark secret, or maybe just the image of themselves in the bathroom mirror.

I’ve seen social engineering talks wherein the presenter talks about a technique to interrupt a person, mid-thought, and effectively create a buffer overflow of sorts, allowing the social engineer to insert their own directions. Taken to the next level, could an attacker perform a similar attack via a direct link to a person’s mind? If so, what access would the attacker then attain? Could we be looking at the next big thing in brainwashing? Merely insert the new programming, directly into the user.

How about Denial of Service attacks or physical destruction? Could an attacker cause physical damage to their target? Is a connection to the mind enough access to directly modify the cognitive functions of the target? Could an attacker induce something like locked-in syndrome in a user? What about blocking specific functions, preventing the user from being able to move limbs or speak? Since the brain performs regulatory control over the body, could an attacker modify the temperature or heart rate, or even induce sensations, in their target? These are truly scary scenarios and warrant serious thought and discussion.

Technology is racing ahead at breakneck speeds and the future is an exciting one. These technologies could allow humans to take that next evolutionary step. But as with all technology, we should be looking at it with a critical eye. As technology and biology become more and more intertwined, it is essential that we tread carefully and be sure to address potential problems long before they become a reality.

SSL MitM Appliance

SSL has been used for years to protect against Man in the Middle attacks. It has worked quite well and kept our secret transactions secure. However, that sense of security is starting to crumble.

At Black Hat USA 2009, security researcher Dan Kaminsky presented a talk outlining flaws in X.509 SSL certificates. In short, it is possible to trick a certificate authority into certifying a site as legitimate when the site may, in fact, be malicious. It’s not the easiest hack to pull off, but it’s there.

Once you have a legitimate certificate, pulling off a MitM attack is as simple as proxying the traffic through your own system. If you can trick the user into hitting your server instead of the legitimate server, *cough*DNSPOISONING*cough*, you can impersonate the legitimate server via proxy, and log everything the user does. And the only way the user can tell is if they actually look at the IP they’re hitting. How many people do you know that keep track of the IP of the server they’re trying to get to?
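The proxy half of the attack is, conceptually, almost trivial. Below is a toy TCP relay sketch in Python to illustrate the idea; the hostname is hypothetical, and the TLS handling (presenting the fraudulently obtained certificate to the victim and re-encrypting toward the real server) is omitted entirely:

import socket
import threading

def relay(src, dst):
    # Copy bytes one way, logging everything that passes through.
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(data[:80])  # "log everything the user does"
        dst.sendall(data)

# Victim traffic lands here after being redirected (DNS poisoning, etc.).
listener = socket.socket()
listener.bind(("0.0.0.0", 443))
listener.listen(5)

while True:
    client, _ = listener.accept()
    upstream = socket.socket()
    upstream.connect(("www.realbank.example", 443))  # hypothetical legitimate server
    threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
    threading.Thread(target=relay, args=(upstream, client), daemon=True).start()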

Surely there’s something that will prevent this, right? I mean, the fingerprint of the certificate has changed, so the browser will tell me that something is amiss, right? Well, actually, no. In fact, if you replace a valid certificate from one CA with a valid certificate from another CA, the end user typically sees no change at all. There may be options that can be set to alter this behavior, but I know of no browsers that will detect this by default. Ultimately, this means that if an attacker can obtain a valid certificate and redirect your traffic, he will own everything you do without you being any the wiser.
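Nothing stops a client from performing this check itself, though. Here is a minimal sketch of certificate fingerprint pinning in Python; the host and the stored fingerprint are hypothetical placeholders:

import hashlib
import ssl

def certificate_fingerprint(host, port=443):
    # Fetch the server's certificate and return its SHA-256 fingerprint.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

# Hypothetical fingerprint recorded during an earlier, trusted session.
KNOWN_FINGERPRINT = "0123456789abcdef..."

if certificate_fingerprint("www.example.com") != KNOWN_FINGERPRINT:
    print("Certificate changed: possible MitM, or just a routine re-issue.")

The catch is that last comment: a legitimate certificate rotation looks exactly like an attack, which is part of why tools like the CertLock plugin mentioned below are trickier to build than they look.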

And now, just to make things more interesting, we have this little beauty.

This is an SSL interception device sold by Packet Forensics. In short, you provide the fake certificate and redirect the user traffic and the box will take care of the rest. According to Packet Forensics, this box is sold exclusively to law enforcement agencies, though I’m sure there are ways to get a unit. For “testing,” of course.

The legality of using this device is actually murky. In order to use it, a Law Enforcement Organization (LEO) will need to obtain legitimate certificates to impersonate the remote website, as well as obtain access to insert the device into a network. If the device is not placed directly in-line with the user, then potentially illegal hacking has to take place to redirect the traffic. Regardless, once these are obtained, the LEO has full access to the user’s traffic to and from the remote server.

The existence of this device merely drives home the ease with which MitM attacks are possible. In fact, a paper published by two EFF researchers suggests this may already be happening. To date, there are no readily available tools to prevent this sort of abuse. However, the authors of the aforementioned paper are planning to release a Firefox plugin, dubbed CertLock, that will track SSL certificate information and inform the user when it changes. Ultimately, however, it would be great if browser manufacturers would incorporate these checks into the main browser logic.

So remember kiddies, just because you see the pretty lock icon or the browser bar turns green doesn’t mean you’re not being watched. Be careful out there, cyberspace is dangerous.


Scanning the crowd

RFID, or Radio Frequency Identification, chips are used throughout the commercial world to handle inventory, identify employees, and more. Ever heard of EZPass? EZPass is an RFID tag you attach to your car window. When you pass near an EZPass receiver, the receiver records your ID and decrements your EZPass account by an amount equal to the toll of the road or bridge you have just used.

One of the key concepts here is that the RFID tag can be read remotely. Depending on the type of RFID tag, “remotely” can mean anywhere from a few feet to several yards. In fact, under certain conditions, it may be possible to read RFID tags at extreme distances of hundreds and possibly thousands of feet. It is this “feature” of RFID that attracts the attention of security researchers, and others with more… nefarious intentions.

Of course, if we want to talk security and hacking, we need to talk about Defcon. Defcon, one of the largest, and most electronically hostile, hacking conventions in the US, and possibly the world, took place this past weekend, July 30th – August 2nd. Events range from presentations about security and hacking, to lock picking contests, a jeopardy-style event, and more. One of the more interesting panels this year was “Meet The Feds,” where attendees were given the opportunity to ask questions of the various federal representatives.

Of course, the “official” representatives on the panel weren’t the only feds attending Defcon. There are always others in the crowd, both out in the open and undercover. And while there is typically a “Spot the Fed” event at Defcon, some of the feds in attendance have good reason to keep their identities unknown. Unfortunately, the federal government is one of the many organizations that have jumped on the RFID bandwagon, embedding tags into their ID cards.

So, what do you get when you combine a supposedly secure technology with a massive gathering of technology-oriented individuals who like to tinker with just about everything? In the case of Defcon 17, you get an RFID scanner, complete with camera.

According to Threat Level, a blog at Wired, a group of researchers set up a table at Defcon where they captured and displayed credentials from RFID chips. When the data was captured, they snapped a picture of the person the data was retrieved from, essentially tying a face to the data stream.

So what’s the big deal? Well, the short story is that this data can be used to impersonate the individual it was captured from. In some cases, it may be possible to alter the data and gain increased privileges at the target company. In other cases, the data can be used to identify the individual, which was the fear at Defcon. Threat Level goes into detail about how it works, but to summarize their report, it is sometimes possible to identify where a person works by the identification strings sent from an RFID chip. Thus, it might be possible to identify undercover federal agents, purely based on the data captured in a passive scan.

The Defcon organizers asked the researchers to destroy the captured data, but this incident just goes to prove how insecure this information truly is. Even with encryption and other security measures in place, identification of an individual is still possible. And to make matters worse, RFID chips are popping up everywhere. They’re attached to items you buy at the store, embedded into identification cards for companies, and even embedded in passports. And while the US government has gone so far as to provide special passport covers that block the RFID signal, there are still countless reasons to open your passport, even for a few moments. Remember, it only takes a few seconds to capture all the data.

Beware… You never know who’s watching …


Digital Armageddon

April 1, 2009. The major media outlets are all over this one. Digital Armageddon. The end of computing as we know it. Again. But is it? Should we all just “Chill Out?”

So what happens April 1, 2009? Well, Conficker activates. Well, sort of. It activates the latest revision of its auto-update algorithm, switching the number of domains it can find updates on from 250 per day to 50,000 per day. Conficker, in its current form, isn’t really malicious beyond techniques to prevent detection. In order to become malicious, it will need to download an update to the base code.

There are two methods by which Conficker will update its base code. The first method is to download the code via a connection to one of the 50,000 domains it generates. However, it does not scan all 50,000 domains at once. Instead, it creates a random list of 500 of the 50,000 generated domains and scans them for an update. If no update is found, Conficker sleeps for 24 hours and starts over by generating a new list of 50,000 domains, randomly picking 500, and contacting them for an update. The overall result of this is that it becomes nearly impossible to block all of the generated domains, increasing the likelihood that an update will get through. On the flip side, this process would appear to result in a very slow spread of updates. It can easily take days, weeks, or months for a single machine to finally stumble upon a live domain.
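Conficker’s actual algorithm is considerably more involved, but the general pattern (a date-seeded domain list that every infected host can derive independently, sampled and polled once a day) looks roughly like this sketch; the TLD list and name lengths here are illustrative, not Conficker’s real values:

import datetime
import random

TLDS = [".com", ".net", ".org", ".info", ".biz"]  # illustrative only

def generate_domains(date, count=50000):
    # Seed from the date so every infected host derives the same list.
    rng = random.Random(date.toordinal())
    domains = []
    for _ in range(count):
        name = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz")
                       for _ in range(rng.randint(8, 12)))
        domains.append(name + rng.choice(TLDS))
    return domains

def check_for_update():
    candidates = random.sample(generate_domains(datetime.date.today()), 500)
    for domain in candidates:
        pass  # a real bot would attempt to fetch an update from each domain here
    # no update found: sleep 24 hours, then repeat with the next day's list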

The second method is to download the code via a peer-to-peer connection between infected hosts. As I understand it, the peer-to-peer mechanism has been active since revision C of Conficker appeared in the wild. This mechanism allows an update to spread from system to system in a very rapid manner. Additionally, based on how the peer-to-peer mechanism works, it appears that blocking it is difficult, at best.

So what is the risk here? Seriously, is my computer destined to become a molten heap of slag, a spam factory, or possibly a zombie soldier in a botnet attack against foreign governments? Is all hope lost? Oh my, are we all going to die?!

For the love of all things digital, pull it together! It’s not as bad as it looks! First of all, if you consistently update your machines and keep your anti-virus up to date, the chances of you being infected are very low. If you don’t keep up to date, then perhaps you should start. At any rate, fire up a web browser and search for a Conficker scanner. Most of the major anti-virus vendors have one. Make sure you’re familiar with the company you’re downloading the scanner from, though; a large number of scam sites have popped up since Conficker hit the mainstream media.

If you’re a network admin, you have a bigger job. First, make sure any Windows machines you are responsible for are patched. Yes, that includes those machines on that private network that is oh-so impossible to get to. Conficker can spread via SMB shares and USB keys as well. Next, try scanning your network for infections. There are a number of Conficker scanners out there now thanks to the Honeynet Project and Dan Kaminsky. I have personally used both the proof-of-concept Python scanner, as well as the latest version of nmap.

If you’re using nmap, the following command line works quite well and is incredibly fast:

nmap -sC --script=smb-check-vulns --script-args=safe=1 -p139,445 \
-d -PN -n -T4 --min-hostgroup 256 --min-parallelism 64 \
-oA conficker_scan

Finally, as a network admin, you should probably have some sort of Intrusion Detection System (IDS) in place. Snort is an open source IDS that works quite well and has a large community following. IDS signatures exist to detect all known variants of Conficker.

So calm down, take a deep breath, and don’t worry. I find it extremely unlikely that April 1 will result in anything more than a blip in network activity. Instead, concentrate on detection and patching. Conficker isn’t Skynet…. Yet.


Detecting DNS cache poisoning

I spoke with a good friend of mine last week about his recent trip to NANOG. While he was there, he listened to a talk about detecting DNS cache poisoning. However, this was detection at the authoritative server, not at the cache itself. That makes it a different problem, because most cache poisoning happens outside of your domain.

I initially wrote about the Kaminsky DNS bug a while back, and this builds somewhat on that discussion. When a cache poisoning attack is underway, the attacker must spoof the source IP of the DNS response. From what I can tell, this is because the resolver is told by the root servers who the authoritative server is for the domain. Thus, if a response comes back from a non-authoritative IP, it won’t be accepted.

So let’s look at the attack briefly. The attacker starts requesting a large number of addresses, something to the tune of a.example.com, b.example.com, etc. While those requests are in flight, the attacker sends out forged responses with spoofed headers. Since the attacker is now guessing both the QID *and* the source port, most of the forgeries miss because the port is incorrect.

When a server receives a packet on a port where it is not expecting data, it responds with an ICMP message, “Destination Port Unreachable.” That ICMP message is sent to the source IP of the packet, which is the spoofed authoritative IP. This is known as ICMP backscatter.

Administrators of authoritative name servers can monitor for ICMP backscatter and identify possible cache poisoning attacks. In most cases, there is nothing that can be done directly to mitigate these attacks, but it is possible to identify the cache being attacked and notify the admin. Cooperation between administrators can lead to a complete mitigation of the attack and protection of clients who may be harmed.
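As a rough sketch of what that monitoring might look like on the authoritative server, here is a passive ICMP watcher using the scapy library (assuming scapy is installed and the script runs with capture privileges); the alert threshold is arbitrary:

from collections import Counter
from scapy.all import ICMP, IP, sniff

backscatter = Counter()

def track(pkt):
    # ICMP type 3, code 3 is "Destination Port Unreachable"
    if pkt.haslayer(ICMP) and pkt[ICMP].type == 3 and pkt[ICMP].code == 3:
        cache_ip = pkt[IP].src  # the resolver that bounced a spoofed response
        backscatter[cache_ip] += 1
        if backscatter[cache_ip] > 100:  # arbitrary threshold for illustration
            print("Heavy backscatter from %s -- possible poisoning attempt" % cache_ip)

sniff(filter="icmp", prn=track, store=False)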

This is an excellent example of the type of data you can identify simply through passive monitoring on your local network.

Steal the Net’s Identity

Imagine this. You wake up in the morning, go about your daily chores, and finally sit down to surf the web, read some news, check your mail, etc. At some point, you decide to log in to your bank to check your accounts. You get there, log in, and you’re greeted with a page explaining that the site is down for maintenance. Oh well, you’ll come back later. In the meantime, someone drains your account using the username and password you just graciously handed them; the site you went to was not where you intended to go.

Sound familiar? Yeah, I guess it sounds a bit like a phishing attack, though a tad more sophisticated. I mean, you did type in the address for the bank yourself, didn’t you? It’s not like you clicked on a link in an email or something. But in the end, you arrived at the wrong site, cleverly designed, and gave them your information.

So how the hell did this happen? How could you end up at the wrong site when you typed in the address yourself and your computer has all the latest virus scanning, firewalling, etc.? You spelled it right, too! It’s almost as if someone took over the bank’s computer!

Well, they did. Sort of. But they did it without touching the bank’s computers at all. They used the DNS system to inject a false address for the bank website, effectively re-directing you to their site. How is this possible? Well, it’s a flaw in the DNS protocol itself that allows this. The Matasano Security blog posted about this on Monday, though the post was quickly removed. You may still be able to see the copy that Google has cached.

Let me start from the beginning. On July 8th, Dan Kaminsky announced that he had discovered a flaw in the DNS protocol and had been working, in secret, with vendors to release patches to fix this problem. This was a huge coordinated effort, one of the very first of its kind. In the end, patches were released for BIND, Microsoft DNS, and others.

The flaw itself is interesting, to say the least. When a user requests an address for a domain, it usually goes to a local DNS cache for resolution. If the cache doesn’t know the answer, it follows a set of rules that eventually allow it to ask a server that is authoritative for that domain. When the cache asks the authoritative server, the packet contains a Query ID (QID). Since caches usually have multiple requests pending at any given time, the QID helps distinguish which response matches which request. Years ago, there was a way to spoof DNS by guessing the QID. This was pretty simple to do because the QID was sequential. So, the attacker could guess the QID and, if they could get their response back to the server faster than the authoritative server could, they would effectively hijack the domain.

So, vendors patched this flaw by randomizing the QID. Of course, if you have enough computing power, it’s still possible to guess the QID by cracking the random number generator. Difficult, but possible. However, the computing power to do this in a timely manner wasn’t readily available back then. So, 16-bit random QIDs were considered secure enough.

Fast forward to 2008. We have the power, and almost everyone with a computer has it. It is now possible to crack something like this in just a few seconds. So, this little flaw rears its ugly head once again. But there’s a saving grace here. When you request resolution for a domain name, you also receive additional data such as a TTL. The TTL, or Time To Live, defines how long an answer should be kept in the cache before asking for resolution again. This mechanism greatly reduces the amount of DNS traffic on the network because, in many cases, domain names tend to use the same IP address for weeks, months, and, in many cases, years. So, if the attacker is unsuccessful in his initial attack, he has to wait for the TTL to expire until he can try again.
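In sketch form, the caching logic amounts to something like the following; query_authoritative is a hypothetical helper standing in for the real lookup process:

import time

cache = {}  # name -> (address, expiry timestamp)

def resolve(name):
    entry = cache.get(name)
    if entry and time.time() < entry[1]:
        return entry[0]  # answer still fresh: no query leaves the network
    address, ttl = query_authoritative(name)    # hypothetical helper
    cache[name] = (address, time.time() + ttl)  # keep the answer until the TTL expires
    return address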

There was another attack, back in the day, that allowed an attacker to overwrite entries in the cache, regardless of the TTL. As I mentioned before, when a DNS server responds, it can contain additional information. Some of this information is in the form of “glue” records. These are additional responses, included in the original response, that help out the requester.

Let’s say, for instance, that you’re looking for the address for google.com. You ask your local cache, which doesn’t currently know the answer. It forwards that request on to the TLD servers responsible for .com domains using a process known as recursion. When the TLD server responds, the response will be the nameserver responsible for google.com, such as ns1.google.com. The cache now needs to contact ns1.google.com, but it does not know the address for that server, so it would have to make additional requests to determine this. However, the TLD server already includes a glue record that gives the cache this information, without the cache asking for it. In a perfect world, this is wonderful because it makes the resolution process faster and reduces the amount of DNS traffic required. Unfortunately, this isn’t a perfect world. Attackers could exploit this by including glue records for domains that they were not authoritative for, effectively injecting records into the cache.

Again, vendors to the rescue! The concept of a bailiwick was introduced. In short, if a cache was looking for the address of google.com, and the response included the address for yahoo.com, it would ignore the yahoo.com information. This was known as a bailiwick check.
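The check itself is conceptually simple; a minimal sketch:

def in_bailiwick(record_name, zone):
    # True if record_name falls at or below the zone the response is for.
    record = record_name.lower().rstrip(".")
    zone = zone.lower().rstrip(".")
    return record == zone or record.endswith("." + zone)

# While resolving something under google.com: keep the first, ignore the second.
assert in_bailiwick("ns1.google.com", "google.com")
assert not in_bailiwick("www.yahoo.com", "google.com")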

Ok, we’re safe now, right? Yeah, no. If we were safe, there wouldn’t be much for me to write about. No, times have changed… We now have the power to predict 16-bit random numbers, overcoming the QID problem. But TTLs save us, right? Well, yes, sort of. But what happens if we combine these two attacks? Well, interesting things happen, actually.

What happens if we look up a nonexistent domain? Well, you get a response of NXDOMAIN, of course. Well yeah, but what happens in the background? Well, the cache goes through the exact same procedure it would normally go through for a valid domain. Remember, the cache has no idea that the domain doesn’t exist until it asks. Once it receives that NXDOMAIN, though, it will cache that response for a period of time, usually defined by the owner of the domain itself. However, since it does go through the same process of resolving, there exists an attack vector that can be exploited.

So let’s combine the attacks. We know that we can guess the QID given enough attempts. And, we know that we can inject glue records for domains, provided they are within the same domain the response is for. So, if we can guess the QID, respond to a request for a non-existent domain, and include glue records for a real domain, we can poison the cache and hijack the domain.

So now what? We already patched these two problems! Well, the short-term answer is another patch. The new patch adds additional randomness to the equation in the form of the source port. So, when a DNS server makes a request, it randomizes the QID and the source port. Now the attacker needs to guess both in order to be successful. This basically makes it a 32-bit number that needs to be guessed, rather than a 16-bit number. So, it takes a lot more effort on the part of the attacker. This helps, but, and this is important, it is still possible to perform this attack given enough time. This is NOT a permanent fix.
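The back-of-the-envelope arithmetic on why that helps:

qid_space = 2 ** 16    # 65,536 possible query IDs
port_space = 2 ** 16   # roughly 65,536 source ports (slightly fewer usable in practice)

print("QID alone:         1 in %d" % qid_space)                # 1 in 65,536
print("QID + source port: 1 in %d" % (qid_space * port_space)) # 1 in 4,294,967,296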

That’s the new attack in a nutshell. There may be additional details I’m not aware of, and Dan will be presenting them at the Blackhat conference in August. In the meantime, the message is to patch your server! Not every server is vulnerable to this; some, such as DJBDNS, have been randomizing source ports for a long time, but many others are vulnerable. If in doubt, check with your vendor.

This is pretty big news, and it’s pretty important. Seriously, this is not a joke. Check your servers and patch. Proof of concept code is in the wild already.

Ooh.. Bad day to be an IIS server….

Web-based exploits are pretty common nowadays.  It’s almost daily that we hear of sites being compromised one way or another.  Today, it’s IIS servers.  IIS is basically a web-server platform developed by Microsoft.  It runs on Windows-based servers and generally serves ASP (Active Server Pages), dynamic content similar to that of PHP or Ruby.  There is some speculation that this is related to a recent security advisory from Microsoft, but this has not been confirmed.

Several popular blogs, including one on the Washington Post, have posted information describing the situation.  There is a bit of confusion, however, as to what exactly the attack is.  It appears that the IIS servers were infected using the aforementioned vulnerability.  Other web servers are being infected using SQL injection attacks.  So it looks like there are several attack vectors being used to spread this particular beauty.

Many of the reports are using Google searches to estimate the number of infected systems.  Estimates put that figure at about 500,000, but take that figure with a grain of salt.  While a lot of servers are affected, using Google as the source of this particular metric is somewhat flawed.  Google reports the total number of links found referring to a particular search string, so there may be duplicated information.  It’s safe to say, however, that this is pretty widespread.

Regardless of the method of attack, and which server is infected, an unsuspecting visitor to the exploited site is exposed to a plethora of attacks.  The malware uses a number of exploits in popular software packages, such as AIM, RealPlayer, and iTunes, to gain access to the visitor’s computer.  Once the visitor is infected, the malware watches for username and password information, reporting that information back to a central server.  Both ISC and ShadowServer have excellent write-ups on both the server exploit as well as the end-user exploit.

Be careful out there, kids…

Stop Car Thief!

Technology is wonderful, and we are advancing by leaps and bounds every day.  Everything is becoming more connected, and in some cases, smarter.  For instance, are you aware that almost every modern vehicle is microprocessor controlled?  Computers inside the engine control everything from the fuel mixture to the airflow through the engine.

Other computer-based systems in your car can add features such as GPS navigation, or even connect you to a central monitoring station.  GM seems to have cornered the market on mobile assistance with its OnStar service.  OnStar is an in-vehicle system that allows the owner to report emergencies, get directions, make phone calls, and even remotely unlock the car doors.

Well, OnStar recently announced plans to add another feature to its service.  Dubbed “Stolen Vehicle Slowdown,” this new service allows police to remotely stop a stolen vehicle.  The service is supposed to start in 2009 with about 20 models.  Basically, when the police identify a stolen vehicle, they can have the OnStar technician remotely disable the vehicle, leaving the thief with control over the steering and brakes only.  OnStar also mentions that they may issue a verbal warning to the would-be thief prior to disabling the car.


But is this too much power?  What are the implications here?  OnStar is already a wireless system that allows remote unlocking of your car doors.  It reports back vehicle information to OnStar who can then warn you about impending vehicle problems.  Remote diagnostics can be run to determine the cause of a malfunction.  And now, remotely, the vehicle can be disabled?

As history has shown us, nothing is unhackable.  How long will it be until hackers identify a way around the OnStar security and find a way to disable vehicles at-will?  It will likely start as a joke, disabling vehicles on the highway, causing havoc with traffic, but in a relatively safe manner.  How about disabling the boss’s car?  But then someone will use this new power for evil.  Car jackers will start disabling cars to make it easier to steal them.  Criminals will disable vehicles so they can rob or harm the driver.

So how far is too far?  Is there any way to make services such as this safe?

Busted…

Imagine this. You turn on your computer and, unbeknownst to you, someone starts changing your files. Ok, so maybe it’s not so tough to imagine these days with all of the viruses, trojans, and hackers out there. But what if the files were being changed by someone you trusted? Well, maybe not someone you trust, but someone that should know better.

On August 24th, this exact scenario played out. All across the globe, files in Windows XP and Vista installations were modified with no notice and no permission. But this can easily be explained by the Windows Automatic Update mechanism, right? Wrong. The problem here is that these updates were installed regardless of the Automatic Update setting. Yep, you heard that right. These files were updated even if you did not have Automatic Updates set to download or install updates.

This story was first broken by Windows Secrets on September 13th. The update seems to center around the Automatic Update feature itself. Nate Clinton, Program Manager for Microsoft’s Windows Update group wrote a blog entry about how and why Windows Update updates itself. Basically, the claim is that these updates are installed automatically because without them, Automatic Updates would cease to work, leaving the user with a false sense of security. He goes on to say that this type of stealth updating has been occurring since Automatic Updates was introduced. Finally, he mentions that these files are not updated if Automatic Updates are disabled.

This type of stealth updating is very disconcerting as it means that Microsoft is willing to update files without notifying the user. And while they state that Windows Update is the only thing being updated in this fashion, how can we believe them? What’s to prevent them from updating other files? Are we going to find in the future that our computers are automatically updated with new forms of DRM?

While I applaud Microsoft for wanting to keep our computers safe, and trying to ensure that the user doesn’t have a false sense of security, I disagree strongly with the way they are going about it. This is a very slippery slope, and can lead quickly into legally questionable territory. Should Microsoft have the right to change files on my computer without permission? Have they received permission already because I am using the update software? Unfortunately, there are no clear-cut answers to these questions.

It’ll be interesting to see what happens from here as this has become somewhat of a public issue. Will Microsoft become more forthcoming with these updates, or will they proceed with stealth installations? Regardless, I don’t expect to see much of a reprisal because of this issue. It’s unfortunate, but for the most part, I don’t think most users actually care about issues such as this. In fact, most of them probably aren’t aware. Thankfully for those of us that do care, there are people out there keeping an eye out for issues like this.