The Zero-Day Conundrum

Last week, another “zero-day” vulnerability was reported, this time in Adobe’s Acrobat PDF reader. Anti-virus company Symantec reports that this vulnerability is being used as an attack vector against defense contractors, chemical companies, and others. Obviously, this is a big deal for all those being targeted, but is it really something you need to worry about? Are “zero-days” really something worth defending against?

What is a zero-day anyway? Wikipedia has this to say:

A zero-day (or zero-hour or day zero) attack or threat is a computer threat that tries to exploit computer application vulnerabilities that are unknown to others or the software developer. Zero-day exploits (actual software that uses a security hole to carry out an attack) are used or shared by attackers before the developer of the target software knows about the vulnerability.

So, in short, a zero-day is an unknown vulnerability in a piece of software. Now, how do we defend against this? We have all sorts of tools on our side; surely there’s one that will catch these before they become a problem, right? IDS/IPS systems have heuristic filters for detecting anomalous activity. Of course, you wouldn’t want your IPS blocking arbitrary traffic, so that might not be a good idea. Anti-virus software also has heuristic filters, so that should help, right? Well… when’s the last time your heuristic filter caught something that wasn’t a false positive? So yeah, that’s probably not going to work either. So what’s a security engineer to do?

My advice? Don’t sweat it. Don’t get me wrong, zero-days are dangerous and can cause all sorts of problems, but unless you have an unlimited budget and an unlimited amount of time, trying to defend against an unknown attack is an exercise in futility. But don’t despair, there is hope.

Turns out, if you spend your time securing your network properly, you’ll defend against most attacks out there. Take this latest attack, for instance. Assume you’ve spent millions and have the latest and greatest hardware with all the cutting-edge signatures and software. Someone sends the CEO’s secretary an innocuous-looking PDF, which she promptly opens, and all that hard work goes out the window.

On the other hand, assume you spent the small budget you have defending the critical data you store, and spent the time you saved (by not decoding those advanced heuristics manuals) on training the staff. This time the CEO’s secretary looks twice, realizes this is an unsolicited email, and doesn’t open the PDF. No breach; the world is saved.

Seriously, though, spending your time and effort safe-guarding your data and training your staff will get you much further than worrying about every zero-day that comes along. Of course, you should be watching for these sorts of reports. In this case, for instance, you can alert your staff that there’s a critical flaw in this particular software and that they need to be extra careful. Or, if the flaw is in a web application, you can add the necessary signatures to look for it. But in the end, it’s very difficult, if not impossible, to defend against something you’re not aware of. Network and system security is complex and difficult enough without having to worry about the unknown.

In Memoriam – Steve Jobs – 1955-2011

Sometime in the early 1980s, my father took me to a bookstore in Manhattan. I don’t remember exactly why we were there, but it was a defining moment in my life. On display was a new wonder, a Macintosh computer.

Being young, I wasn’t aware of social protocol. I was supposed to be awed by this machine, afraid to touch it. Instead, as my father says, I pushed my way over, grabbed the mouse, and went to town. While all of the adults around me looked on in horror, I quickly figured out the interface and was able to make the machine do what I wanted.

It would be over 20 years before I really became a Mac user, but that first experience helped define my love of computers and technology.

Thank you, Steve.

Audit Insanity


It’s amazing, but the deeper I dive into security, the more garbage security theater I uncover. Sure, there’s insanity everywhere, but I didn’t expect to come across some of this craziness…

One of the most recent activities I’ve been party to has been the response to an independent audit. When I inquired as to the reasoning behind the audit, the answer I received was that this is a recommended yearly activity. It’s possible that this information is incorrect, but I suspect it’s truer than I’d like to believe.

Security audits like this are standard practice all over the US and possibly the world. Businesses are led to believe that getting audited is a good thing and that audits should be repeated often. My main gripe here is that while audits can be good, they need to be done for the right reasons, not just because someone tells you they’re needed. Worse still are the audits forced on a company by its insurance company or its payment processor. These sorts of audits are there to pass the blame if something bad happens.

Let’s look a little deeper. The audit I participated in was a typical security audit. An auditor contacts you with a spreadsheet full of questions for you to answer. You will, of course, answer them truthfully. Questions included inquiries about the password policy, how security policies are distributed, and how logins are handled. They delve into areas such as logging, application timeouts, IDS/IPS use, and more. It’s fairly in-depth, but ultimately just a checklist. The auditor goes through their list, interpreting your answers, and applying checkmarks where appropriate. The auditor then generates a list of items you “failed” to comply with and you have a chance to respond. This is all incorporated into a final report which is presented to whoever requested the audit.

Some audits will include a scanning piece as well. The one I’m most familiar with in this respect is the SecurityMetrics PCI scan. Basically, you fill out a simplified yes/no questionnaire about your security and then they run a Nessus scan against whatever IP(s) you provide to them. It’s a completely brain-dead scan, too. Here’s a perfect example. I worked for a company that processed credit cards. The system they used to do this was on a private network using outbound NAT. There were both IDS and firewall systems in place. For the size of the business and the frequency of credit card transactions, this was considerable security. But, because there was a payment card processor in the mix, they were required to perform a quarterly PCI scan. The vendor of choice: SecurityMetrics.

So, the security vendor went through their checklist and requested the IP of the server. I explained that it was behind a one-way NAT and inaccessible from the outside world. They wanted the IP of the machine, which I provided to them. Did I mention that the host in question was behind a NAT? These “security professionals” then loaded that IP into their automated scanning system. And it failed to contact the host. Go figure. Again, we went around and around until they finally said that they needed the IP of the device doing the NAT. I explained that this was a router and wouldn’t provide them with any relevant information. The answer? We don’t care, we just need something to scan. So, they scanned a router. For years. Hell, they could still be doing it for all I know. Like I said, brain-dead security.

What’s wrong with a checklist, though? The problem is, it’s a list of “common” security practices not tailored to any specific company. So, for instance, the audit may require that a company uses hardware-based authentication devices in addition to standard passwords. The problem here is that this doesn’t account for non-hardware solutions. The premise here is that two-factor authentication is more secure than just a username and password. Sure, I whole-heartedly agree. But, I would argue that public key authentication provides similar security. It satisfies the “What You Have” and “What You Know” portions of two-factor authentication. But it’s not hardware! Fine, put your key on a USB stick. (No, really, don’t. That’s not very secure.)

Other examples include the standard “Password Policy” crap that I’ve been hearing for years. Basically, you should expire passwords every 90 days or so, passwords should be “strong”, and you should prevent password reuse by remembering a history of passwords. So let’s look at this a bit. Forcing password changes every 90 days results in bad password habits. The reasoning is quite simple, and there have been studies that show this. This paper (pdf) from the University of North Carolina is a good example. Another decent write-up is this article from Cryptosmith. Allow me to summarize: forcing password expiration results in people making simpler passwords, writing passwords down, or using simplistic algorithms to generate “complex” passwords. In short, cracking these “fresh” passwords is often easier than cracking well-thought-out ones.

The so-called “strong” password problem can be summarized by a rather clever XKCD comic. The long and short here is that truly complex passwords that cannot be easily cracked are either horribly complex mishmashes of numbers, letters, and symbols, or they’re long strings of generic words. Seriously, “correct horse battery staple” is significantly stronger than a completely random 11-digit string.
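The arithmetic behind that comparison is easy to sketch. Entropy in bits is length × log2(alphabet size); the 2048-word list size and the digits-only alphabet below are assumptions for illustration, not figures from the comic.

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for `length` independent random picks from `alphabet_size` choices."""
    return length * math.log2(alphabet_size)

# A completely random 11-digit string (digits 0-9 only)
digit_bits = entropy_bits(10, 11)

# "correct horse battery staple": four words from a hypothetical 2048-word list
word_bits = entropy_bits(2048, 4)

print(f"11 random digits: {digit_bits:.1f} bits")   # ~36.5 bits
print(f"four random words: {word_bits:.1f} bits")   # 44.0 bits
```

Note that the comparison only holds if the words are chosen randomly from the list; a phrase you invent yourself carries far less entropy than the math suggests.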

And, of course, password history. This sort of goes hand-in-hand with password expiration, but not always. If it’s used in conjunction with password expiration, then it generally results in single-character variation in passwords. Your super-secure “complex” password of “Password1” (seriously, it meets the criteria: uppercase, lowercase, number) becomes a series of passwords where the 1 is changed to a 2, then 3, then 4, etc. until the history is exceeded and the user can return to 1 again. It’s easier to remember that way and the user doesn’t have to do much extra work.
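As a sketch of why this defeats the purpose: an attacker who sees one password of this shape can enumerate the user’s entire rotation cycle for free. The base word and history depth here are hypothetical.

```python
# Hypothetical history-cycling scheme: "Password1", "Password2", ...
# against a policy that remembers the last 10 passwords.
base = "Password"
history_depth = 10

# The complete set of passwords this user will ever rotate through
candidates = [f"{base}{n}" for n in range(1, history_depth + 1)]

# Ten guesses cover this account forever, no matter how often the password "changes"
print(len(candidates))                 # 10
print(candidates[0], candidates[-1])   # Password1 Password10
```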

So even the standard security practices on the checklist can be questioned. The real answer here is to tweak each audit to the needs of the requestor of the audit, and to properly evaluate the responses based on the security posture of the responder. There do need to be baselines, but they should be sane baselines. If you don’t get all of the checkmarks on an audit, it may not mean you’re not secure, it may just mean you’re securing your network in a way the auditor didn’t think of. There’s more to security than fancy passwords and firewalls. A lot more.


Much Ado About Lion

Apple released the latest version of its OS X operating system, Lion, on July 20th. With this release came a myriad of changes in both the UI and back-end systems. Critics have denounced many of these changes as Apple slowly killing off OS X in favor of iOS. After spending some time with Lion, I have to disagree.

Many of the new UI features are very iOS-like, but I’m convinced that this is not a move to dumb down OS X. I believe this is a move by Apple to make the OS work better with the hardware it sells. Hear me out before you declare me a fanboy and move on.

Since the advent of the unibody Macbook, Apple has been shipping buttonless input devices. The Macbook itself has a large touchpad, sans button. Later, they released the Magic Mouse, sort of a transition device between mice and trackpads. I’m not a fan of that particular device. And finally, they’re shipping the trackpad today. No buttons, lots of room for gestures. Just check out the copy direct from their website.

If you look at a lot of the changes made in Lion, they go hand-in-hand with new gestures. Natural scrolling allows you to move the screen in the same direction your fingers are moving. Swipe three fingers to the left or right and the desktop you’re on moves along with them. Explode your fingers outwards and Launchpad appears, a quick, simple way to access your Applications folder. Similar gestures are available for the Magic Mouse as well.

These gestures allow for quick and simple access to many of the more advanced features of Lion. Sure, iOS had some of these features first, but just because they’ve moved to another platform doesn’t mean that the platforms are merging.

Another really interesting feature in Lion is one that has been around for a while in iOS. When Apple first designed iOS, they likely realized that standard scrollbars chew up a significant amount of screen real estate. Sure, on a regular computer it may be a relatively small percentage, but on a small screen like a phone, it’s significant. So, they designed a thinner scrollbar, minus the arrows normally seen at the top and bottom, and made it auto-hide when the screen isn’t being scrolled. This saved a lot of room on the screen.

Apple has taken the scrollbar feature and integrated it into the desktop OS. And the effect is pretty significant. The amount of room saved on-screen is quite noticeable. I have seen a few complaints about this new feature, however, mostly complaining that it’s difficult to grab the scrollbar with the mouse pointer, or that the arrow buttons are gone. I think the former is just a general “they changed something” complaint, while the latter is truly legitimate. There have been a few situations where I’ve looked for the arrow buttons and their absence was noticeable. I wonder, however, whether this is a function of habit, or whether their use is truly necessary. I’ve been able to work around this pretty easily on my Macbook, but after I install Lion on my Mac Pro, I expect that I’ll have a slightly harder time. Unless, that is, I buy a trackpad. As I said, I believe Apple has built this new OS with their newer input devices in mind.

On the back end, Lion is, from what I can tell, completely 64-bit. They have removed Java and Flash, and, interestingly, banned both from their online App Store. No apps that require Java or Flash can be sold there. Interesting move. Additionally, Rosetta, the emulation software that allows older PowerPC software to run, has been removed as well.

Overall, I’m enjoying my Lion experience. I still have the power of a Unix-based system with the simplicity of a well-thought-out GUI. I can still do all of the programming I’m used to, as well as watch videos, listen to music, and play games. I think I’ll keep a traditional multi-button mouse around for gaming, though.

Fixing the Serendipity XMLRPC plugin

A while ago I purchased a copy of BlogPress for my iDevices. It’s pretty full-featured and seems to work pretty well. Problem was, I couldn’t get it to work with my Serendipity-based blog. Oh well, a wasted purchase.

But not so fast! Every once in a while I go back and search for a possible solution. This past week I finally hit paydirt. I came across this post on the s9y forums.

This explained why BlogPress was crashing when I used it. In short, it was expecting to see a categoryName tag in the resulting XML from the Serendipity XMLRPC plugin. Serendipity, however, used description instead, likely because Serendipity has better support for the MetaWeblog API.

Fortunately, fixing this problem is very straightforward. All you really need to do is implement both APIs and return the data each expects at the same time. To fix this particular problem, it’s a single-line addition to the file located in $S9YHOME/plugins/serendipity_event_xmlrpc. That addition is as follows:

if ($cat['categoryid']) $xml_entries_vals[] = new XML_RPC_Value(
    array(
        'description'   => new XML_RPC_Value($cat['category_name'], 'string'),
        // XenoPhage: Add 'categoryName' to support mobile publishing (Thanks PigsLipstick)
        'categoryName'  => new XML_RPC_Value($cat['category_name'], 'string'),
        'htmlUrl'       => new XML_RPC_Value(serendipity_categoryURL($cat, 'serendipityHTTPPath'), 'string'),
        'rssUrl'        => new XML_RPC_Value(serendipity_feedCategoryURL($cat, 'serendipityHTTPPath'), 'string')
    ),
    'struct'
);

And poof, you now have proper category support for the Movable Type API.

Evaluating a Blogging Platform

I’ve been pondering my choices lately, determining if I should stay with my current blogging platform or move to another one. There’s nothing immediate forcing me to change, nor is there anything overly compelling to the platform I’m currently using. This is an exercise I seem to go through from time to time. It’s probably for the better as it keeps me abreast of what else is out there and allows me to re-evaluate choices I’ve made in the past.

So, what is out there? Well, Serendipity has grown quite a bit as a blogging platform and is quite well supported. That, in its own right, makes it a worthy choice. The plugin support is quite vast and the API is simple enough that creating new plugins when the need arises is a quick task.

There are some drawbacks, however. Since it’s not quite as popular as some other platforms, interoperability with some things is difficult. For instance, the offline blogging tool I’m using right now, BlogPress, doesn’t work quite right with Serendipity. I believe this might be due to missing features and/or bugs in the Serendipity XMLRPC interface. Fortunately, someone in the community had already debugged the problem and provided a fix.

WordPress is probably one of the more popular platforms right now. Starting a WordPress blog can be as simple as creating a new account at WordPress.com. There’s also the option of downloading the WordPress distribution and hosting it on your own. As with Serendipity, WordPress has a vibrant community and a significant plugin collection. From what I understand, WordPress also has the ability to be used as a static website, though that’s less of an interest for me. WordPress has wide support in a number of offline blogging tools, including custom applications for iPad and iPhone devices.

There are a number of “cloud” platforms as well. Examples include Tumblr, LiveJournal, and Blogger. These platforms offer a wide variety of interoperability with services such as Twitter and Flickr, but you sacrifice control. You are at the complete mercy of the platform provider with very little alternative. For instance, if a provider disagrees with you, they can easily block or delete your content. Or the provider can go out of business, leaving you without access to your blog at all. These, in my book, are significant drawbacks.

Another possible choice is Drupal. I’ve been playing around with Drupal quite a bit, especially since it’s the platform of choice for a lot of projects I’ve been involved with lately. It seems to fit the bill pretty well and is incredibly extensible. In fact, it’s probably the closest I’ve come to actually making a switch up to this point. The one major hurdle I have at the moment is lack of API support for blogging tools. Yes, I’m aware of the BlogAPI module, but according to the project page for it, it’s incomplete, unsupported, and the author isn’t working on it anymore. While I was able to install it and initially connect to the Drupal site, it doesn’t seem that any of the posting functionality works at this time. Drupal remains the strongest competitor at this point and has a real chance of becoming my new platform of choice.

For the time being, however, I’m content with Serendipity. The community remains strong, there’s a new release on the horizon, and, most important, it just works.

Technology in the here and now

I’m writing this while several thousand feet up in the air, on a flight from here to there. I won’t be able to publish it until I land, but that seems to be the exception these days rather than the norm.

And yet, while preparing for takeoff, the same old announcements are made. Turn off cell phones and pagers, disable wireless communications on electronic devices. And listening around me, hurried conversations between passengers as they ensure that all of their devices are disabled. As if a stray radio signal will cause the airplane to suddenly drop from the sky, or prevent it from taking off to begin with.

Why is it that we, as a society, cannot get over these simple hurdles? Plenty of studies have shown that these devices don’t interfere with planes. In fact, some airlines are offering in-flight wireless access. Many airlines have offered in-flight telephone calls. Unless my understanding of flight is severely limited, I’m fairly certain that all of these functions use radio signals to operate. And yet we are still told that stray signals may cause planes to crash, may cause interference with the pilots’ instrumentation.

We need to get over this hurdle. We need to start spending our time looking to the future, advancing our technology, forging new paths. We need to stop clinging to outdated ideas. Learning from our past mistakes is one thing, and there’s merit in understanding history. But let’s spend our energy wisely and make the simple things we take for granted even better.

Hey KVM, you’ve got your bridge in my netfilter…

It’s always interesting to see how new technologies alter the way we do things.  Recently, I worked on firewalling for a KVM-based virtualization platform.  From the outset it seems pretty straightforward.  Set up iptables on the host and guest and move on.  But it’s not that simple, and my google-fu initially failed me when searching for an answer.

The primary issue was that when iptables was enabled on the host, the guests became unavailable.  If you enable logging, you can see the traffic being blocked by the host, thus never making it to the guest.  So how do we do this?  Well, if we start with a generic iptables setup, we have something that looks like this:

# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

Adding logging to identify what’s going on is pretty straightforward.  Add two logging lines, one for the INPUT chain and one for the FORWARD chain.  Make sure these are added as the first rules in the chain, otherwise you’ll jump to the RH-Firewall-1-INPUT chain and never make it to the log.

-A INPUT -j LOG --log-prefix "Firewall INPUT: "
-A FORWARD -j LOG --log-prefix "Firewall FORWARD: "


Now, with this in place you can try sending traffic to the guest.  If you tail /var/log/messages, you’ll see the blocking done by netfilter.  It should look something like this:

Apr 18 12:00:00 example kernel: Firewall FORWARD: IN=br123 OUT=br123 PHYSIN=vnet0 PHYSOUT=eth1.123 SRC= DST= LEN=56 TOS=0x00 PREC=0x00 TTL=64 ID=18137 DF PROTO=UDP SPT=56712 DPT=53 LEN=36

There are a few things of note here.  First, this occurs on the FORWARD chain only.  The INPUT chain is bypassed completely.  Second, the system recognizes that this is a bridged connection.  This makes things a bit easier to fix.

My attempt at resolving this was to put in a rule that allowed traffic to pass for the bridged interface.  I added the following:

-A FORWARD -i br123 -o br123 -j ACCEPT

This worked as expected and allowed the traffic through the FORWARD chain, making it to the guest unmolested.  However, this method means I have to add a rule for every bridge interface I create.  While explicitly adding rules for each interface should make this more secure, it means I may need to change iptables while the system is in production, not something I want to do.

A bit more googling led me to this post about KVM and iptables.  In short it provides two additional methods for handling this situation.  The first is a more generalized rule for bridged interfaces:

-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Essentially, this rule tells netfilter to accept any traffic for bridged interfaces.  This removes the need to add a new rule for each bridged interface you create making management a bit simpler.  The second method is to completely remove bridged interfaces from netfilter.  Set the following sysctl variables:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

I’m a little worried about this method as it completely bypasses iptables on the host.  However, it appears that this is actually a more secure way of handling bridged interfaces.  According to this bugzilla report and this post, allowing bridged traffic to pass through netfilter on the host can result in a possible security vulnerability.  I believe this is somewhat similar to a cryptographic hash collision.  Attackers can take advantage of netfilter entries with similar IP/port combinations and possibly modify traffic or access systems.  By using the sysctl method above, the traffic completely bypasses netfilter on the host and these attacks are no longer possible.
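For reference, here is a sketch of how those settings might be applied and persisted. Treat the details as assumptions: the exact file (/etc/sysctl.conf vs. a file under /etc/sysctl.d/) and whether the bridge module must be loaded first vary by distribution.

```shell
# Apply immediately (requires root; the bridge module must already be loaded,
# or the net.bridge.* keys won't exist yet)
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-arptables=0

# Persist across reboots
cat >> /etc/sysctl.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
```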

More testing is required, but I believe the latter method of using sysctl is the way to go.  In addition to the security considerations, bypassing netfilter has a positive impact on throughput.  It seems like a win-win from all angles.


Back when the Chernobyl nuclear reactor in the Ukraine melted down, I was in grade school. That disaster absolutely fascinated me and I spent a bit of time researching nuclear power, drawing diagrams of reactor designs, and dreaming about being a nuclear scientist.

One thing that stuck with me about that disaster was the sheer power involved. I remember hearing about the roof of the reactor, a massive slab of concrete, having been blown off the building. From what I remember it was tossed many miles away, though I’m having trouble actually confirming that now. No doubt there was a lot of misreporting done at the time.

The reasons behind the meltdown at Chernobyl are still a point of contention ranging from operator error to design flaws in the reactor. Chances are it is more a combination of both. There’s a really detailed report about what happened here. Additional supporting material can be found on Wikipedia.


Today we have the disaster at the Fukushima power plants in Japan. Of course the primary difference from the get-go is that this situation was caused by a natural disaster rather than design flaws or operator error. Honestly, when you get hit with a massive earthquake immediately followed by a devastating tsunami, you’re pretty much starting at screwed.

From what I understand, there are 5 reactors at two plants that are listed as critical. In two instances, the containment structure has suffered an explosion. Whoa! An explosion? Yes, yes, calm down. It’s not a nuclear explosion as most people know it. Most people equate a nuclear explosion with images of mushroom clouds, thoughts of nuclear fallout, and radiation sickness. The explosion we’re talking about in this instance is a hydrogen explosion resulting from venting the inner containment chamber. Yes, it’s entirely possible that radiation was released, but nothing near the high dosages most people equate with a nuclear bomb.

And herein lies a major problem with nuclear power. Not many people understand it, and a large majority are afraid of the consequences. Yes, we have had a massive meltdown, as was the case at Chernobyl. We have also had a partial meltdown, as was the case at Three Mile Island. Currently, the disaster in Japan is closer to Three Mile Island than it is to Chernobyl. That, of course, is subject to change. It’s entirely possible that the reactors in Japan will go into a full core meltdown.

But if you look at the overall effects of nuclear power, I believe you can argue that it is cleaner and safer than many other types of power generation. Coal power massively pollutes the atmosphere and leaves behind some rather nasty byproducts that we just don’t have a method of dealing with. Oil and gas also pollute both the atmosphere and the areas where they are extracted. Water, wind, and solar power are, generally speaking, clean, but you need massive installations of each to generate sufficient power.

Nuclear power has had such a negative stigma for such a long period of time that research dollars are not being spent on improving the technology. There are severe restrictions on what scientists can research with respect to nuclear power. As a result, we haven’t advanced very far as compared to other technologies. If we were to open up research we would be able to develop reactors that are significantly safer.

Unfortunately, I think this disaster will make things worse for the nuclear power industry. Despite the fact that this disaster was caused by neither design flaws nor operator error, the population at large will question the validity of a technology they know nothing about. Personally, I believe we could make the earth a much cleaner, safer place to live if we were to switch to nuclear power and spend time and effort on making it safer and more efficient.

And finally, a brief note. I’m not a nuclear physicist or engineer, but I have done some background research. I strongly encourage you to do your own research if you’re in doubt about anything I’ve stated. And if I’m wrong about something, please, let me know! I’ll happily make edits to fix incorrect facts.

Computers as Ethical Machines

It’s amazing how busy life gets sometimes… Here’s the third and final paper. You can find the first here, and the second here. Enjoy!

Throughout recent history, we have grown ever more dependent on computers as they have become an integral part of everyday life. Since their successful use in World War II, computers have been constantly improved, making them capable of a variety of tasks. Computers are used to automate menial and sometimes dangerous tasks, control high tech weaponry such as robots and rockets, and provide entertainment through games and movies. As computer technology improves, computers are even being used to teach moral and ethical lessons. In the hands of the nefarious, computers can be used to cause mischief and destruction. Computers are blamed for the loss of jobs, dehumanization of society, and even negatively influencing children. Computers can be used to help or harm, directed purely by the whim of the user. Despite these shortcomings, this paper will show that computers have had an advantageous effect on society.

When computers came on the scene in the 1940s, they were mostly limited to scientific and mathematical functions. Early computers were used to help break ciphers during World War II. In the 1950s, computers found their way into colleges across the United States, destined to be used as research tools. However, students at MIT had other plans. [1] Members of the Tech Model Railroad Club were fascinated by these new devices and aimed to learn all they could about them. Over time, they helped transform computers from simple research tools into general purpose devices that could be used for a myriad of tasks. But despite these breakthroughs, society still held a negative view of computers and computer technology.

Resistance to technological advancement is not a new phenomenon. It is not uncommon for new laws to be crafted specifically to limit the use of new technologies. For instance, after the invention of the car, a law was passed that required “any motorist who sighted a team of horses coming toward them to pull well off the road, cover their car with a blanket or canvas that blended with the countryside, and let the horses pass.” [2] While ridiculous by today’s standards, this law was passed in order to make owning and driving a car difficult. Over time, cars became an accepted and beneficial part of society and laws impeding their use were slowly rescinded.

Computers have faced similar resistance throughout their history. While computers were initially used as nothing more than fancy calculation devices, visionaries saw a myriad of potential uses. Combining computers with mechanical devices, researchers were able to create automated machinery capable of completing menial tasks. The first such robotic device, designed by the Unimation company and called the Unimate, was installed in 1961. [3] The Unimate was a robotic arm used by automotive manufacturers in a die casting machine. It automated what was generally considered to be a dangerous task, that of moving die castings into position and welding them to the body of a vehicle. Human workers were at risk of inhaling deadly exhaust fumes or losing limbs if there were an accident. But despite being a capable device, adoption was slow due to a general resistance to change within the manufacturing industry.

Perception of automated machinery was different in Japan, however. After the introduction of the Unimate, Japanese interest in robotics blossomed. By 1968, Kawasaki Heavy Industries, a Japanese company, had licensed all of Unimation's technology. Japan's keen interest in robotics may be one of the reasons that Japanese manufacturing advanced so far ahead of the rest of the world and continues to remain there. One reason for this interest may have to do with the exacting standards that most Japanese businesses subscribe to. In Japanese culture, failure is frowned upon to such a degree that suicide is often chosen over shame. [4]

Japan's interest in robotics sparked a general interest throughout the rest of the industrialized world. Robotic machinery began appearing in businesses throughout the United States. With this came outrage that machinery was replacing human workers. Over time, however, resistance to robotics subsided as the potential benefits of robotic workers were realized. Workers were encouraged to learn new skills such as maintaining and operating their robotic replacements. Overall, while some jobs were lost, it was not nearly the catastrophic loss that many predicted.

In the years since the introduction of the Unimate, the robotics industry has blossomed. Robots can be found in many industrial plants handling dangerous or labor intensive jobs. Jobs lost to robotic replacement have morphed into other positions, often with the same company. Robots have helped to both increase output and reduce loss due to mistakes and injuries.

Robots have also found a place in our everyday lives. iRobot, one of the first successful commercial manufacturers of household robots, created the Roomba line of household robots. [5] The Roomba is a small circular robot with two drive wheels and three brushes. The Roomba’s primary purpose is to drive itself around a room and vacuum up dirt and debris. It contains a sophisticated computer system that maps the room as it moves, ensuring that every part of the room is vacuumed. It has a host of sensors used to prevent collisions and even avoid stairways. Currently, iRobot has a complete line of household robots including robots that mop floors, clean gutters, and even clean pools.
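The behavior described above can be thought of as reactive control: the robot reads its sensors on every cycle and lets "danger" readings override normal driving. The sketch below is a hedged illustration of that idea only; the sensor names, actions, and priorities are assumptions for the example and do not reflect iRobot's actual control software.

```python
# Illustrative sketch of reactive control, as used (in far more
# sophisticated form) by robots like the Roomba. Sensor names and
# behaviors are hypothetical, chosen only to show the priority idea.

def next_action(bump, cliff):
    """Pick an action from two boolean sensor readings: a cliff
    (stair edge) always wins, then a bump, else keep driving."""
    if cliff:
        return "back_up"        # never drive over an edge
    if bump:
        return "turn"           # obstacle ahead: pick a new heading
    return "drive_forward"      # clear path: keep vacuuming

print(next_action(bump=False, cliff=False))  # drive_forward
print(next_action(bump=True,  cliff=False))  # turn
print(next_action(bump=True,  cliff=True))   # back_up
```

The key design point is the fixed priority ordering: safety-critical sensors (the cliff detector) are checked before convenience behaviors, so conflicting readings always resolve toward the safer action.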

After the 9/11 attacks, iRobot, and competitor Foster-Miller, used their robots to search for survivors. The recovery effort served as a kind of testing ground, and the robots' success during the 9/11 tragedy provided the military with the incentive it needed to offer both companies military contracts. [6] Since that time, both iRobot and Foster-Miller have provided the military with thousands of robots. These robots serve purposes ranging from disarming IEDs to full-on attack vehicles complete with weaponry.

Robotic weaponry brings with it a number of ethical and moral dilemmas. For starters, ethicists worry that robots cannot be trusted to make proper ethical decisions. Robots are notorious for misinterpreting sensory data and making improper decisions based on faulty input. On the other hand, if a robot has the correct data, it has no problem quickly making a decision. Unfortunately, there aren't always clear-cut right and wrong answers. It remains to be seen whether roboticists will be able to create an autonomous system capable of adapting to any given situation and making ethically supportable decisions.

The manufacturing industry has not been the only realm to benefit from computer innovation and creativity. Computers also found a place within the entertainment industry. Steve Russell, a hacker at the MIT computer lab, created the first video game, Spacewar, in 1962. During the same time period, Ivan Sutherland, another MIT hacker, developed a graphics program called Sketchpad which allowed the user to draw shapes on the computer screen using a light pen. Sutherland went on to become a professor of computer graphics at the University of Utah, and is widely considered the creator of computer graphics.

The University of Utah quickly made a name for itself as the premier school for computer graphics research. Many of the techniques currently used in computer graphics were invented by students studying there. For instance, Ed Catmull developed texture mapping, a method for applying a graphical image to a 3D object. Texture mapping allowed computer scientists to add a new layer of realism to their creations. Catmull went on to become president of Walt Disney Animation Studios.
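The core idea of texture mapping can be sketched in a few lines: each point on a 3D surface carries (u, v) coordinates into a 2D image, and shading looks up the color stored there. The tiny checkerboard texture and nearest-neighbor lookup below are illustrative assumptions for the example, not Catmull's original algorithm.

```python
# Minimal sketch of texture mapping's central operation: sampling a
# 2D image at surface coordinates (u, v). Real renderers add
# filtering, perspective correction, and mipmapping on top of this.

def sample_texture(texture, u, v):
    """Nearest-neighbor lookup of the color at texture coordinates
    (u, v), each in [0, 1]. `texture` is a list of rows of colors."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)   # column index, clamped at the edge
    y = min(int(v * h), h - 1)   # row index, clamped at the edge
    return texture[y][x]

# A 2x2 checkerboard texture: 'B' = black, 'W' = white.
tex = [['B', 'W'],
       ['W', 'B']]
print(sample_texture(tex, 0.1, 0.1))  # 'B' (top-left)
print(sample_texture(tex, 0.9, 0.1))  # 'W' (top-right)
```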

Computer graphics capabilities have increased over the years. In the 1970s, the state of the art was wireframe imagery. While the 1973 film Westworld included scenes that were post-processed by computers, it wasn't until 1976 that 3D wireframe images were used for the first time in a movie, Futureworld. [7] In the coming years, computer-aided movie design became more and more common.

Using computer-generated imagery, or CGI, in movies allows filmmakers to take their vision further than they could otherwise. For example, prior to using computers, superimposing an actor onto an artificial background, a technique known as bluescreening, was a painstaking process. Using computers, this process can be done with relative ease. The use of computers saves filmmakers both time and money in addition to enabling realistic scenes such as people flying, or the exploration of alien landscapes.
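The digital version of bluescreening, often called chroma keying, reduces to a simple per-pixel rule: any foreground pixel close enough to the key color is replaced with the corresponding background pixel. The sketch below illustrates that rule; the pixel values, key color, and threshold are assumptions for the example, not drawn from any real film pipeline.

```python
# Illustrative sketch of chroma-key (bluescreen) compositing: pixels
# near the key color become transparent and show the background.

def chroma_key(foreground, background, key=(0, 0, 255), threshold=120):
    """Composite foreground over background, treating pixels near
    `key` (here pure blue) as transparent. Images are lists of rows
    of (r, g, b) tuples with identical dimensions."""
    def distance(a, b):
        # Euclidean distance between two colors in RGB space.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    return [
        [bg if distance(fg, key) < threshold else fg
         for fg, bg in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(foreground, background)
    ]

# A 1x2 "frame": one blue (keyed-out) pixel, one skin-tone pixel.
fg = [[(0, 0, 255), (200, 150, 120)]]
bg = [[(10, 80, 10), (10, 80, 10)]]
print(chroma_key(fg, bg))  # blue pixel replaced by the background
```

Production systems refine this with soft edges, spill suppression, and lighting matching, but the replace-pixels-near-the-key-color idea is the same.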

With the advent of cheaper, faster computers, CGI sequences are becoming more commonplace in both movies and television. CGI sequences can be used in place of elaborate sets and backdrops, often removing the need to travel to exotic locations. Filmmakers can make changes to the sequences, even after filming has been completed, adding or removing minute details that would have otherwise had to remain. This allows filmmakers a large amount of flexibility when bringing their vision to life.

In addition to CGI, computers are also used for post-processing. Post-processing allows a filmmaker to add and remove elements of a scene, even non-CGI scenes, and adjust various details. For instance, lighting can be adjusted and special effects such as the glow of a lightsaber can be added. Through the use of computers, almost any image adjustment is possible, even those of a questionable nature. Take, for instance, the following two examples.

In July of 2008, Iran was set to meet with the United Nations about its nuclear program. Prior to the meeting, Iran, in what was considered a show of power, announced that it had successfully launched a number of medium-range missiles. The Iranian Revolutionary Guard Corps posted a photo of the launch on their website. At the same time, photos were released to various news organizations around the world. After the photos were released, however, several experts began to have doubts about the authenticity of the photos. In fact, it appeared that the photos had been altered, possibly to cover up a malfunction in one of the missile systems. [8]

In November of 2008, North Korea released a photo of their leader, Kim Jong Il, standing with a company of soldiers. There were scattered reports that Kim Jong Il had suffered a stroke in previous months and was not in good health. It was believed that this image was released by the North Korean government as proof of their leader's health. Upon closer inspection, however, experts came to believe that the image of Kim Jong Il had been added into the photo using photo editing software. As with the Iranian incident, this photo was believed to be a political maneuver. [9]

In both of these situations, photo manipulation was used for political purposes. Both Iran and North Korea are countries with questionable governments, often at odds with many of the members of the United Nations. Each heavily censors the media, seeking to control what its population knows. Despite this, technology is often used by the populace to fight against the government.

During the 2009 elections in Iran, Iranians used online services such as Twitter and YouTube to post information about demonstrations and protests being held against the government. In fact, Iranian use of Twitter was considered so important that the US State Department urged Twitter to postpone scheduled maintenance so Iranians would have access during one of the demonstrations. [10] Iranian Twitter users posted first-hand accounts of protests, thoughts and feelings about the election, and, in some cases, links to videos showing alleged violence by government agents. One video showed a young woman, Neda Agha-Soltan, dying on the road after being shot. [11] This video quickly went "viral," becoming one of the most viewed videos of the moment, despite showing a rather graphic scene.

Technology helped the Iranian people show the world what was happening to their country. Some supporters helped by setting up Internet proxy sites, places where Iranians could gain Internet access outside of their country. Other supporters helped by attacking Iranian governmental websites. Both are examples of cyber-warfare, a relatively new way of fighting against opposing forces. The ethics behind such attacks are often muddied by the circumstances of the situation. In the case of Iran, hackers justified their actions by pointing out the real violence occurring in the country as well as the censorship being used to prevent Iranians from speaking to the outside world. Regardless of such justifications, hacking for the purposes of denying access or defacing property is generally frowned upon and is often illegal.

Hacking was not originally a negative activity. In the early days of computing, and even somewhat before, hacking was viewed as a positive activity by an eclectic group of individuals. To hack something was to modify it in a useful way. Hacking was often seen as a way to learn about a new device or process while simultaneously improving upon it. Early hackers went on to develop technologies such as those that run the Internet today.

As computers became more mainstream and began to appear in homes, a new breed of hacker was born. Computers were seen as both business and entertainment devices, and companies had formed to offer software for both categories. Computer game companies quickly grew into large corporations, which then fell into the routine of offering up the same old game in a shiny new wrapper. Seeking something new, some young hackers began learning how to build their own games.

John Carmack was one of those hackers. Carmack started out hacking on an Apple II computer, creating simple games before moving on to work for a small software publishing company. After releasing a few simple games, he helped form his own company, Id Software, which went on to release Doom, a 3D first-person shooter, in 1993. Doom was a breakthrough in computer gaming, offering one of the first immersive 3D experiences ever seen on a personal computer. It was also an extremely violent game, pitting the player against a host of enemies depicted as creatures from hell. [12]

The violent nature of Doom and other games was blamed for the 1999 Columbine high school shootings. Opponents of violent games argue that video games desensitize children. They also argue that games such as Doom train them to use weapons and teach them that killing is OK. While lawsuits against game companies were filed, they were ultimately dismissed. [13] Despite this, debates continue today as to the relative merit of games, especially those with violent content.

Violence in games seems to be taking on a new role, however. Some games are beginning to include deep storylines, including moral choices that the player must make. One simple example of this is a game called Passage. [14] Passage is a low-tech game with simple graphics, written as an entry in a game programming contest. What sets Passage apart is that while it is simple, it seems to contain a powerful message. The game consists of wandering around in a small world. As you move about the world, you encounter obstacles which you must maneuver around. If you encounter the female character in the game, the two of you join into a larger pair, which limits your movement and effectively blocks off some areas of the world. Finally, the game lasts only five minutes, during which your character ages and eventually dies. According to the developer, Passage was written to be a game about life.

Passage is a low-budget, simple game, however, and relatively few people ever get a chance to see or play it. For better or worse, it's the high-budget, mainstream games that get the most attention. But even here, things are beginning to change. In 2009, leaked footage from a high-profile game, Modern Warfare 2, was released. In the footage, the player's character moved through a highly detailed airport, complete with hundreds of people going through the motions of coming and going. The player held a fully automatic weapon and was traveling through the airport with a number of other companions, all dressed in military garb. Most shocking of all, the player and his companions were shooting into the crowds, tossing grenades, and wreaking havoc. [15]

This footage caused an immediate uproar from the public. The developers defended their position saying that the scenario made sense within the universe of the game. Within the storyline, the player is an undercover agent who has been placed within a terrorist group. The airport scene is played out as an act of terrorism perpetrated by that group. Players are faced with a moral dilemma, having to decide whether the end mission is worth turning a blind eye, or whether they should break cover and attack the terrorists. In the end, the decision is ultimately with the player. It forces the player to think about the situation, often making them feel uncomfortable.

That a set of colored pixels on a screen can make a player feel uncomfortable about a fictional moral dilemma is truly interesting. Technology is being used to provide an ethical situation for someone to solve. If the player makes a “wrong” decision, the computer can help play out that scenario, providing instant feedback for the player without actually harming anyone. Computers can be used, effectively, to teach a player about ethics.

Computers continue to have a wide-ranging effect on daily life. They help to make our lives easier in more ways than the average person realizes. And while there are instances where computers and technology in general can be used in negative ways, computers remain an important part of society. Ultimately, computers have provided us with the convenience and comfort we have grown used to having. They have had an overwhelmingly positive effect on society, making them a true asset.

[1] S. Levy, Hackers: Heroes of the Computer Revolution. London: Penguin, 1994.
[2] (2010, April 29) Dumb Laws in Pennsylvania. [Online]. Available:
[3] S. Nof, Handbook of Industrial Robotics. New York: Wiley, 1999.
[4] E. Petrun. (2010, April 29) Suicide in Japan. [Online]. Available:
[5] L. Kahney. (2010, May 3) Forget a Maid, This Robot Vacuums. [Online]. Available:
[6] P. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century. New York: Penguin Press, 2009.
[7] C. Machover, “Springing into the Fifth Decade of Computer Graphics – Where We’ve Been and Where We’re Going!” Siggraph, 1996.
[8] A. Kamen. (2010, May 3) Iran Apparently in Possession of Photoshop. [Online]. Available:
[9] N. Hines. (2010, May 3) Photoshop Failure in Kim Jong Il Image? [Online]. Available:
[10] L. Grossman. (2010, May 4) Iran Protests: Twitter, the Medium of the Movement. [Online]. Available:
[11] (2010, May 4) ‘Neda’ Becomes Rallying Cry for Iranian Protests. [Online]. Available:
[12] L. Grossman. (2010, May 4) The Age of Doom. [Online]. Available:
[13] M. Ward. (2010, May 4) Columbine Families Sue Computer Game Makers. [Online]. Available:
[14] C. Thompson. (2010, May 4) Poetic Passage Provokes Heavy Thoughts on Life, Death. [Online]. Available:
[15] T. Kim. (2010, May 4) Modern Warfare 2: Examining the Airport Level. [Online]. Available: