Computers as Ethical Machines

It’s amazing how busy life gets sometimes… Here’s the third and final paper. You can find the first here, and the second here. Enjoy!

Throughout recent history, we have grown ever more dependent on computers as they have become an integral part of everyday life. Since their successful use in World War II, computers have been constantly improved, making them capable of a wide variety of tasks. Computers are used to automate menial and sometimes dangerous tasks, control high-tech weaponry such as robots and rockets, and provide entertainment through games and movies. As computer technology improves, computers are even being used to teach moral and ethical lessons. In the hands of the nefarious, computers can be used to cause mischief and destruction. Computers are blamed for the loss of jobs, the dehumanization of society, and even a negative influence on children. Computers can be used to help or harm, directed purely by the whim of the user. Despite these shortcomings, this paper will show that computers have had an advantageous effect on society.

When computers came on the scene in the 1940’s, they were mostly limited to scientific and mathematical functions. Early computers were used to help break ciphers during World War II. In the 1950’s, computers found their way into colleges across the United States, destined to be used as research tools. However, students at MIT had other plans. [1] Members of the Tech Model Railroad Club were fascinated by these new devices and aimed to learn all they could about them. Over time, they helped transform computers from simple research tools into general purpose devices that could be used for a myriad of tasks. But despite these breakthroughs, society still held a negative view of computers and computer technology.

Resistance to technological advancement is not a new phenomenon. It is not uncommon for new laws to be crafted specifically to limit the use of new technologies. For instance, after the invention of the car, a law was passed that required “any motorist who sighted a team of horses coming toward them to pull well off the road, cover their car with a blanket or canvas that blended with the countryside, and let the horses pass.” [2] While ridiculous by today’s standards, this law was passed in order to make owning and driving a car difficult. Over time, cars became an accepted and beneficial part of society and laws impeding their use were slowly rescinded.

Computers have faced similar resistance throughout their history. While computers were initially used as nothing more than fancy calculation devices, visionaries saw a myriad of potential uses. Combining computers with mechanical devices, researchers were able to create automated machinery capable of completing menial tasks. The first such robotic device, designed by the Unimation company and called the Unimate, was installed in 1961. [3] The Unimate was a robotic arm used by automotive manufacturers in a die casting machine. It automated what was generally considered to be a dangerous task, that of moving die castings into position and welding them to the body of a vehicle. Human workers were at risk of inhaling deadly exhaust fumes or losing limbs if there were an accident. But despite being a capable device, adoption was slow due to a general resistance to change within the manufacturing industry.

Perception of automated machinery was different in Japan, however. After the introduction of the Unimate, Japanese interest in robotics blossomed. By 1968, Kawasaki Heavy Industries, a Japanese company, licensed all of Unimation’s technology. Japan’s keen interest in robotics may be one of the reasons that Japanese manufacturing advanced so far ahead of the rest of the world and continues to remain there. One reason for this interest may have to do with the exacting standards that most Japanese businesses subscribe to. In the Japanese culture, failure is frowned upon to such a degree that suicide is often chosen over shame. [4]

Japan’s interest in robotics sparked a general interest throughout the rest of the industrialized world. Robotic machinery began appearing in businesses throughout the United States. With this came outrage that machinery was replacing human workers. Over time, however, resistance to robotics subsided as the potential benefits of robotic workers were realized. Workers were encouraged to learn new skills such as maintaining and operating their robotic replacements. Overall, while some jobs were lost, it was not nearly the catastrophic loss that many predicted.

In the years since the introduction of the Unimate, the robotics industry has blossomed. Robots can be found in many industrial plants handling dangerous or labor intensive jobs. Jobs lost to robotic replacement have morphed into other positions, often with the same company. Robots have helped to both increase output and reduce loss due to mistakes and injuries.

Robots have also found a place in our everyday lives. iRobot, one of the first successful commercial manufacturers of household robots, created the Roomba line of household robots. [5] The Roomba is a small circular robot with two drive wheels and three brushes. The Roomba’s primary purpose is to drive itself around a room and vacuum up dirt and debris. It contains a sophisticated computer system that maps the room as it moves, ensuring that every part of the room is vacuumed. It has a host of sensors used to prevent collisions and even avoid stairways. Currently, iRobot has a complete line of household robots including robots that mop floors, clean gutters, and even clean pools.

After the 9/11 attacks, iRobot, and competitor Foster-Miller, used their robots to search for survivors. Serving as a sort-of test ground, the success of these robots during the 9/11 tragedy provided the military with the incentive they needed to offer both companies military contracts. [6] Since that time, both iRobot and Foster-Miller have provided the military with thousands of robots. These robots serve purposes ranging from disarming IEDs to full-on attack vehicles complete with weaponry.

Robotic weaponry brings with it a number of ethical and moral dilemmas. For starters, ethicists worry that robots cannot be trusted to make proper ethical decisions. Robots are notorious for misinterpreting sensory data and making improper decisions based on faulty input. On the other hand, if a robot has the correct data, it has no problem quickly making a decision. Unfortunately, there aren’t always clear-cut right and wrong answers. It remains to be seen whether roboticists will be able to create an autonomous system capable of adapting to any given situation and making ethically supportable decisions.

The manufacturing industry has not been the only realm to benefit from computer innovation and creativity. Computers also found a place within the entertainment industry. Steve Russell, a hacker at the MIT computer lab, created the first video game, Spacewar, in 1962. During the same time period, Ivan Sutherland, another MIT hacker, developed a graphics program called Sketchpad which allowed the user to draw shapes on the computer screen using a light pen. Sutherland went on to become a professor of computer graphics at the University of Utah, and is considered to be the creator of computer graphics.

The University of Utah quickly made a name for itself as the premier school for computer graphics research. Many of the techniques currently used in computer graphics were invented by students studying there. For instance, Ed Catmull invented texture mapping, a method for applying a graphical image to a 3D object. Texture mapping allowed computer scientists to add a new layer of realism to their creations. Catmull went on to become president of Walt Disney Animation Studios.

Computer graphics capabilities have increased over the years. In the 1970’s, the state of the art was wireframe imagery. While the 1973 film Westworld included scenes that were post-processed by computers, it wasn’t until 1976 that 3D wireframe images were used for the first time in a movie, Futureworld. [7] In the years that followed, computer-aided movie design became more and more common.

Using computer-generated imagery, or CGI, in movies allows filmmakers to take their vision further than they otherwise could. For example, before computers, superimposing an actor onto an artificial background, a technique known as bluescreening, was a painstaking process. Using computers, the same work can be done with relative ease. Computers save filmmakers both time and money while making it possible to create realistic scenes such as people flying or the exploration of alien landscapes.

With the advent of cheaper, faster computers, CGI sequences are becoming more commonplace in both movies and television. CGI sequences can be used in place of elaborate sets and backdrops, often removing the need to travel to exotic locations. Filmmakers can make changes to the sequences, even after filming has been completed, adding or removing minute details that would have otherwise had to remain. This allows filmmakers a large amount of flexibility when bringing their vision to life.

In addition to CGI, computers are also used for post-processing. Post processing allows a filmmaker to add and remove elements of a scene, even non-CGI scenes, and adjust various details. For instance, lighting can be adjusted and special effects such as the glow of a lightsaber can be added. Through the use of computers, almost any image adjustment is possible, even those of a questionable nature. Take, for instance, the following two examples.

In July of 2008, Iran was set to meet with the United Nations about its nuclear program. Prior to the meeting, Iran, in what was considered a show of power, announced that it had successfully launched a number of medium range missiles. The Iranian Revolutionary Guard Corps posted a photo of the launch on their website. At the same time, photos were released to various news organizations around the world. After the photos were released, however, several experts began to have doubts about the authenticity of the photos. In fact, it appeared that the photos were altered, possibly to cover up a malfunction in one of the missile systems. [8]

In November of 2008, North Korea released a photo of their leader, Kim Jong Il, standing with a company of soldiers. There were scattered reports that Kim Jong Il had suffered a stroke in previous months and was not in good health. It was believed that this image was released by the North Korean government as proof of their leader’s health. Upon closer inspection, however, experts believe that the image of Kim Jong Il was added into the photo using photo editing software. As with the Iranian incident, this photo was believed to be a political maneuver. [9]

In both of these situations, photo manipulation was used for political purposes. Both Iran and North Korea are countries with questionable governments, often at odds with many of the members of the United Nations. Each heavily censors the media, seeking to control what its population knows. Despite this, technology is often used by the populace to fight against the government.

During the 2009 elections in Iran, Iranians used online services such as Twitter and YouTube to post information about demonstrations and protests being held against the government. In fact, Iranian use of Twitter was considered so important that the US State Department urged Twitter to reschedule a maintenance window so Iranians would have access during one of the demonstrations. [10] Iranian Twitter users posted first-hand accounts of protests, thoughts and feelings about the election, and, in some cases, links to videos showing alleged violence by government agents. One video showed a young woman, Neda Agha-Soltan, die on the road after being shot. [11] This video quickly went “viral,” becoming one of the most viewed videos of the moment, despite showing a rather graphic scene.

Technology helped the Iranian people show the world what was happening to their country. Some supporters helped by setting up Internet proxy sites, places where Iranians could gain Internet access outside of their country. Other supporters helped by attacking Iranian governmental websites. Both are examples of cyber-warfare, a relatively new way of fighting against opposing forces. The ethics behind such attacks are often muddied by the circumstances of the situation. In the case of Iran, hackers justified their actions by pointing out the real violence occurring in the country as well as the censorship being used to prevent Iranians from speaking to the outside world. Regardless of such justifications, hacking for the purposes of denying access or defacing property is generally frowned upon and is often illegal.

Hacking was not originally a negative activity. In the early days of computing, and even somewhat before, hacking was viewed as a positive activity by an eclectic group of individuals. To hack something was to modify it in a useful way. Hacking was often seen as a way to learn about a new device or process while simultaneously improving upon it. Early hackers went on to develop technologies such as those that run the Internet today.

As computers became more mainstream and began to appear in homes, a new breed of hacker was born. Computers were seen as both business and entertainment devices, and companies had formed to offer software for both categories. Computer game companies quickly grew into large corporations that soon fell into the routine of offering up the same old game in a shiny new wrapper. Seeking something new, some young hackers began learning how to build their own games.

John Carmack was one of those hackers. Carmack started out hacking on an Apple II computer, creating simple games before moving on to work for a small software publishing company. He later helped form his own company, Id Software, which went on to release Doom, a 3D first-person shooter. Doom was a breakthrough in computer gaming, offering one of the first 3D experiences ever seen on a personal computer. It was also an extremely violent game, pitting the player against a host of enemies depicted as creatures from hell. [12]

The violent nature of Doom and other games was blamed for the 1999 Columbine high school shootings. Opponents of violent games argue that video games desensitize children. They also argue that games such as Doom train them to use weapons and teach them that killing is OK. While lawsuits against game companies were filed, they were ultimately dismissed. [13] Despite this, debates continue today as to the relative merit of games, especially those with violent content.

Violence in games seems to be taking on a new role, however. Some games are beginning to include deep storylines, including moral choices that the player must make. One simple example of this is a game called Passage. [14] Passage is a low-tech game with very simple graphics, written as an entry in a game programming contest. What sets Passage apart is that while it is simple, it seems to contain a powerful message. The game consists of wandering around in a small world, maneuvering around obstacles as you go. If you encounter the game’s female character, the two of you travel as a pair, which limits your movement and effectively blocks off some areas of the world. Finally, the game lasts only five minutes, during which your character ages and eventually dies. According to the developer, Passage was written to be a game about life.

Passage is a very low budget, very simplistic game, however, and not many people get a chance to see or play it. For better or worse, it’s the high-budget, mainstream games that get the most attention. But even here, things are beginning to change. In 2009, leaked footage from a high-profile game, Modern Warfare 2, was released. In the footage, the player’s character moved through a highly detailed airport, complete with hundreds of people going through the motions of coming and going. The player held a fully automatic weapon and was traveling through the airport with a number of other companions, all dressed in military garb. Most shocking of all, the player and his companions were shooting into the crowds, tossing grenades, and wreaking havoc. [15]

This footage caused an immediate uproar from the public. The developers defended their position, saying that the scenario made sense within the universe of the game. Within the storyline, the player is an undercover agent who has been placed within a terrorist group. The airport scene is played out as an act of terrorism perpetrated by that group. Players are faced with a moral dilemma, having to decide whether the end mission is worth turning a blind eye, or whether they should break cover and attack the terrorists. Ultimately, the decision rests with the player. It forces the player to think about the situation, often making them feel uncomfortable.

That a set of colored pixels on a screen can make a player feel uncomfortable about a fictional moral dilemma is truly interesting. Technology is being used to provide an ethical situation for someone to solve. If the player makes a “wrong” decision, the computer can help play out that scenario, providing instant feedback for the player without actually harming anyone. Computers can be used, effectively, to teach a player about ethics.

Computers continue to have a wide-ranging effect on daily life. They help to make our lives easier in more ways than the average person realizes. And while there are instances where computers and technology in general can be used in negative ways, computers remain an important part of society. Ultimately, computers have provided us with the convenience and comfort we have grown used to having. They have had an overwhelmingly positive effect on society, making them a true asset.
References

[1] S. Levy, Hackers: Heroes of the Computer Revolution. London: Penguin, 1994.
[2] (2010, April 29) Dumb Laws in Pennsylvania. [Online]. Available: http://www.dumblaws.com/laws/united-states/pennsylvania
[3] S. Nof, Handbook of Industrial Robotics. New York: Wiley, 1999.
[4] E. Petrun. (2010, April 29) Suicide in Japan. [Online]. Available: http://www.cbsnews.com/stories/2007/07/12/asia_letter/main3054259.shtml
[5] L. Kahney. (2010, May 3) Forget a Maid, This Robot Vacuums. [Online]. Available: http://www.wired.com/gadgets/miscellaneous/news/2002/12/56962
[6] P. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-first Century. New York: Penguin Press, 2009.
[7] C. Machover, “Springing into the Fifth Decade of Computer Graphics – Where We’ve Been and Where We’re Going!” Siggraph, 1996.
[8] A. Kamen. (2010, May 3) Iran Apparently in Possession of Photoshop. [Online]. Available: http://www.washingtonpost.com/wp-dyn/content/article/2008/07/10/AR2008071002709.html
[9] N. Hines. (2010, May 3) Photoshop Failure in Kim Jong Il Image? [Online]. Available: http://www.timesonline.co.uk/tol/news/world/asia/article5099581.ece
[10] L. Grossman. (2010, May 4) Iran Protests: Twitter, the Medium of the Movement. [Online]. Available: http://www.time.com/time/world/article/0,8599,1905125,00.html
[11] (2010, May 4) ‘Neda’ Becomes Rallying Cry for Iranian Protests. [Online]. Available: http://www.cnn.com/2009/WORLD/meast/06/21/iran.woman.twitter/
[12] L. Grossman. (2010, May 4) The Age of Doom. [Online]. Available: http://www.time.com/time/magazine/article/0,9171,1101040809-674778,00.html
[13] M. Ward. (2010, May 4) Columbine Families Sue Computer Game Makers. [Online]. Available: http://news.bbc.co.uk/2/hi/science/nature/1295920.stm
[14] C. Thompson. (2010, May 4) Poetic Passage Provokes Heavy Thoughts on Life, Death. [Online]. Available: http://www.wired.com/gaming/gamingreviews/commentary/games/2008/04/gamesfrontiers_421
[15] T. Kim. (2010, May 4) Modern Warfare 2: Examining the Airport Level. [Online]. Available: http://www.gamepro.com/article/features/212923/modern-warfare-2-examining-the-airport-level/

 

Visions of the Future

The second of three papers written for a computer science class I took recently. You can find the first here. For this second paper, we were directed to project our chosen technology into the future and explain our predictions. I think I was a bit apprehensive about going too far with this, so it is probably a bit tamer than it could be. Overall, though, I think these predictions are at least reasonable and possibly something I may even see within my lifetime.

 

Today’s entertainment shows a marked progression towards more immersion and realism. As the technology used to provide and enhance entertainment evolves, the ability to provide accurate depictions of previously unattainable events becomes possible. In the past, books and movies relied on the observer’s imagination to fill in any gaps in the story. Newer technology allows artists the ability to realistically generate these scenes, more fully depicting their overall artistic vision. There are numerous benefits to these evolving technologies for many different aspects of daily life.

In the 1999 hit movie, The Matrix, the protagonist, Neo, eventually realizes his full potential and gains the ability to perform superhuman feats. All throughout the movie, seemingly impossible feats are performed with almost no perceptible break in reality. Characters jump from building to building, dodge bullets, and fight with strength unheard of in normal humans. New technology provided the tools used to merge human actors with virtual constructs in a realistic manner. New techniques were used to provide unique viewing angles and sequences, such as the famous bullet-time effect.

The bullet-time technique was subsequently used by CBS television during the 2001 Super Bowl XXXV broadcast. CBS worked with Takeo Kanade, a computer vision expert from Carnegie Mellon University, to develop the technology. [1] Using this technology, CBS was able to provide the viewer a unique look at the game, as the camera’s vantage point could be moved, on the fly, at any point during the game. In fact, this new technique allowed referees to correctly uphold a replay challenge, identifying whether or not a player fumbled the ball after crossing the goal line. [2] Bullet-time provides what may be a key technology in moving towards real-time 3D broadcasting. Previously, meticulous work was required to generate realistic 3D sequences in movies, and doing this on live television was unheard of.

3D and holographic television has long been a lofty dream of technology enthusiasts. Visions range from standard television-sized displays, capable of displaying three-dimensional movies and sports events, to massive room-sized units capable of completely immersing a person in a new world. But televisions capable of 3D imagery are only just starting to appear on the market. At the 2010 International Consumer Electronics Show, a large number of 3D-capable high-definition televisions were announced. [3] These units require the use of 3D glasses to view the images presented. 3D televisions may well prove to be the next “big thing” for tech-savvy consumers, but there is a distinct lack of 3D content available. Additionally, requiring users to wear a set of 3D glasses during every viewing is likely to grow tiresome quickly. Until more immersive and accessible technologies are available, widespread adoption of 3D will likely be slow.

The Cave Automatic Virtual Environment, or CAVE, is a tentative step in the general direction of more fully immersive 3D technology. Developed at the University of Illinois at Chicago, the CAVE is a large cube with several screens surrounding the viewer. The system automatically adjusts the perspective displayed by the screens based on the location of the viewer. [4] Images on each screen are projected in 2D, and special glasses are required to properly view the 3D imagery. CAVE systems are still very experimental and are most often used by colleges to help students bring their creations to life. Mainstream CAVE use has been slow, but some industries, such as the auto industry, use CAVE systems to model new car designs. This technology, while immersive, still suffers from inaccessibility. CAVE systems are large, complex systems designed for very specific tasks.

The true “Holy Grail” of immersive projection technologies is holographic projection, which provides the ability to project and view three-dimensional images in real-time without the need for augmentation devices such as glasses. This dream has been on every nerd’s wishlist since it was described by authors such as Ray Bradbury. The most common example of a full-sized holographic unit is the Holodeck from Star Trek: The Next Generation.

The holodeck is a futuristic device capable of creating realistic environments with which a person can interact. It is capable of more than mere image projection, however. According to Star Trek lore [5], the holodeck can create holographic matter that takes on the texture and other characteristics of real matter. Users can then interact with this matter as they would a “real” object.

Research into Holodeck-style environments is ongoing. A paper from researchers at the University of Colorado details a method for bringing such an environment to life. [6] Instead of using holographic matter, their system uses a deformable environment that a computer molds around the user in real time. Still, these systems are big and bulky, not something the average consumer is likely to add to their home entertainment system.

Looking further into the future, current trends suggest that more accessible technologies are on their way. Within ten to twenty years, holographic displays will be commonplace in consumer homes. These displays won’t necessarily be what we expect, either. Based on current technology, it would appear that a holographic display would be a large, walled unit with a myriad of cameras and other gear to project the 3D images. What is more likely, however, is that something as simple as a coffee table will be the surface used to bring 3D to life.

As holography becomes more mainstream, it will begin to pop up in more places. Many sci-fi authors envision holographic advertisements as commonplace in futuristic worlds. Combining holographic projection with other technologies leads to some interesting scenarios.

Using image recognition techniques, computers can identify when a person is looking at a specific location. Using information about the person such as height, weight, relative age, skin color, and more, the computer can compile a user profile, placing them into a category of consumer. Additional information such as facial features and gait detection may lead to positive identification of an individual, helping to tailor the categorization even more. With this information, the computer can then determine what that person will most likely be interested in and identify potential advertisements to transmit to the target. Holographic laser emitters can be used to “beam” an advertisement directly into the viewer’s eye.

Advertising in this manner can provide the target with a highly personalized advertisement, as well as relative privacy. It also prevents popular thoroughfares from becoming a disorganized mass of disjointed holographic projections. Complete industries will rise up around preventing such advertisements from making it to the target, circumventing those technologies, and so on.

Another use of holographic technology is akin to the personal digital assistant, or PDA. As computers become more powerful, their relative size is diminishing. In the future, a small wearable device will potentially contain the equivalent power of a supercomputer. This power can be used to “augment” reality in various ways. Heads-up displays can be displayed inside of glasses, or even projected directly onto the user’s cornea. Displays can provide navigation information while traveling, both on foot and in a vehicle. Users can interact with real-time data such as stock quotes or news. Movies can be displayed, providing the user their own private movie theatre.

Augmented reality devices can also be used to overlay information on the real world. Future businesses will be able to overlay their real-world stores with dynamic, digital information. Imagine walking up to a store and having digitized versions of famous people personally inviting you in. Perhaps a personal assistant will escort you around, providing reviews, alternatives, and pricing. Walk into a fast-food restaurant and you can access a menu overlay, personally tailored for you. The applications for such technology are almost limitless.

Artists can use these same technologies to provide a unique experience for viewers. Instead of sitting down in a theatre, watching the latest blockbuster movie, artists can bring the movie to the viewer. Holographic overlays can be used outside of theaters, inviting viewers to join in the action. More immersive movies can dynamically change the flow of the movie based on viewer actions. Imagine changing the outcome of a movie, purely based on your personal choices.

The future of entertainment technology is bright and full of potential. Artists will be able to use new and exciting tools to bring their visions to life. Movie viewers will be able to interact with the performance, even changing aspects of the story as they see fit. Using these technologies in the consumer space provides similar enhancements to daily life. Information such as navigation and news can be provided directly to the user. And augmented reality can provide new views of the world. Computers are definitely shaping how we see the future.
References:

[1] (2010, March 20). [Online]. Available: http://www.ri.cmu.edu/events/sb35/tksuperbowl.html
[2] (2010, March 20). [Online]. Available: http://sportsillustrated.cnn.com/football/nfl/2001/playoffs/news/2001/01/28/superbowl_tv_ap/
[3] (2010, April 6). [Online]. Available: http://ces.cnet.com/8301-31045_1-10431350-269.html
[4] C. Cruz-Neira, D. J. Sandin, T. A. DeFanti, R. V. Kenyon, and J. C. Hart, “The CAVE: Audio Visual Experience Automatic Virtual Environment,” Communications of the ACM, 1992.
[5] (2010, April 6). [Online]. Available: http://memory-alpha.org/en/wiki/Holodeck
[6] V. Krunic and R. Han, “Towards Cyber-Physical Holodeck Systems via Physically Rendered Environments (PRE’s),” in Proc. 28th International Conference on Distributed Computing Systems Workshops (ICDCS ’08), June 2008, pp. 507-512.

 

Aperture Science Updates

E3 is in full swing and among the myriad of incredible announcements and demos, the fine folks over at Aperture Science demonstrated some of their new technology. Below are some absolutely incredible videos showing off all that is Portal 2. I am so incredibly excited about this game and cannot wait to get my hands on it.

Just look at the beauty of the environment they’ve designed for Portal 2… The bright white of the original Portal lab is marred by rust and wear as well as encroachment from the outside.

The new game mechanics are simply brilliant. I can’t wait to see how creative you can get with the various mechanics. I’m sure the achievements available will reflect this as well.

According to what I’ve read, Valve brought on the team from DigiPen that came up with Tag and added that technology to Portal. The result is the gels you see being used to provide additional bounce or speed boosts.

2011 cannot get here fast enough… Let’s just hope I have enough time to play before the world ends in 2012!

How Did We Get Here

I’ve been taking some courses in Computer Science lately and had the opportunity to take a more ethics-based class this last semester. As part of that class, I had to write a series of papers delving into where computer technology started and where I see it ending up. Ultimately, we had to have a general theme as computer technology can be rather broad. I chose entertainment for my theme, partially as a bit of a challenge to myself, and partially because it can be an interesting field.

Below is the first of the three papers I wrote.

In the beginning, before formal written languages, man told stories. Stories provided news, knowledge, and entertainment. Storytelling was often a group event, with well-known storytellers providing the entertainment through both spoken word and, often, music accompaniment. As time passed, storytelling became more elaborate. Stories were performed in front of audiences, and eventually written down after a formal writing language was developed.

In the late 1800’s, radio was developed. While initially used as a tool for disseminating important information, radio was quickly adapted to provide entertainment for the masses. Both music and stories were broadcast to mass audiences. By the 1920’s, it was not uncommon for families to gather around their radio to listen to the latest broadcast of their favorite program.

Television was commercialized in the 1930’s, and over the following decades it replaced radio as the primary source of home entertainment. As with radio, families gathered around the television to watch their favorite program, immersing themselves in their entertainment. With this new medium, entertainers were determined to push the envelope, seeking the very limits of the technology available.

Alongside the development of both radio and television, scientists and mathematicians were progressing towards development of mechanical and, later, electronic computers. Initially, computers were used primarily for calculation. During World War II, computers such as the Colossus were used to break enemy ciphers.

By the late 1950’s, computers were being used at businesses and colleges across the country, primarily for financial calculations. Colleges made computers available to graduate students who used them for research and course work. In many instances, tinkerers and hackers gained access to these computers as well. Their goal was not to use the computers as they were intended, but to push the limits of the system and learn as much as they could in the process. Inevitably, the use of computers turned to entertainment as well as utilitarian functions. In 1959, a professor at MIT, John McCarthy, was working on a program for the IBM 704 that would play chess. Some of the grad students working with him devised a program that used a row of lights on the 704 to play a primitive game of Ping Pong. [1]

As computers advanced and moved from rows of lights on a console to integration with video devices, graphical capabilities increased as well. In the early 1960’s, MIT students created interactive graphical programs on machines such as the TX-0. Ivan Sutherland created a program called Sketchpad which would allow a user to draw shapes on a computer screen using a light pen. Steve Russell created one of the first video games, Spacewar. These programs marked early attempts at using computers for entertainment purposes. [1]

In 1966, Ralph Baer designed a game console called the Brown Box. Magnavox licensed the system and marketed it to the general public in 1972 as the Odyssey. The Odyssey connected to a user’s television and manipulated points of light on the screen. Plastic overlays were used as backgrounds for the games as advanced graphics manipulation was not yet available. [2]

Around the same time that video games were being invented, other computer scientists were working on generating more advanced graphical capabilities for computers. At Cornell in 1965, Professor Donald Greenberg worked with a number of architecture students to develop a computer animated movie about how Cornell was built. Greenberg went on to start the Program of Computer Graphics at Cornell and work on photorealistic rendering. He is considered to be one of the forerunners in the field. [3]

At the University of Utah, Ivan Sutherland, who previously created Sketchpad, joined the Computer Science department and began teaching computer graphics. One of his students, Ed Catmull, would go on to become a pioneer in computer graphics, developing some of the most common graphical techniques used today.

In the early 1970’s, a number of animation studios were formed. Among these were Information International Inc. (Triple I) and Lucasfilm. One of the primary purposes of these new studios was to use computers along with traditional motion picture film. While most of these new studios quickly went out of business, a few, such as Lucasfilm, were quite successful and continue to be innovative today. [4]

In 1973, the movie Westworld was released. This movie marked the first use of Computer Generated Imagery, CGI, in a major motion picture. Technicians at Triple I used digital processing techniques to pixelate a portion of the movie, providing the movie watcher a unique view of one of the main characters, an android. This movie was to be the first of a wave of movies employing computer generated imagery. [5]

Futureworld, the sequel to Westworld, was released in 1976. A scene in Futureworld used a 3D model of a human hand, a model designed and built by Dr. Edwin Catmull while he was a graduate student at the University of Utah. [6] After graduation, he joined the New York Institute of Technology Computer Graphics Lab. Catmull and other researchers at the CGL helped to develop many of the advanced graphics techniques used in today’s movies. In 1979, the group started working on the first feature length computer animated movie, The Works. The group worked for 3 years before releasing the first trailer in 1982 at SIGGRAPH, the Association for Computing Machinery’s Special Interest Group on Computer Graphics conference. Unfortunately, due to both technical and financial limitations, work on the movie was halted in 1986 and the film was never finished. [7]

George Lucas, a film director and producer, created a new computer graphics division at Lucasfilm in 1979. Dr. Catmull, along with other researchers from NYIT, were among the initial hires. The computer graphics group concentrated on 3D graphics, eventually developing a computer system for Disney and Industrial Light and Magic (ILM) called the Pixar Image Computer. In 1986, Steve Jobs, co-founder of Apple, purchased the computer graphics group from Lucasfilm. Pixar used their computer to develop a number of movie shorts to show off the capabilities of the system. Ultimately, however, Pixar stopped selling the computer due to slow sales.

Despite problems selling their Image Computer, Pixar was able to generate revenue by creating animated commercials for various companies. Pixar decided that animation was their strong suit and began pursuing an avenue for producing full-length animated films. Their earlier business dealings with Disney allowed them to sign a deal wherein Pixar would create a full-length film and Disney would market and distribute it. Pixar and Disney released the world’s first full-length computer animated movie, Toy Story, in 1995. [8]

While Pixar was developing technology for cartoon rendering, other companies such as Triple I and ILM were developing technologies that could be used in traditional live-action movies. Perhaps one of the most famous “computer” movies, Tron, was released in 1982. Triple I helped to create approximately 15 minutes of computer animation that was used in the movie. [9] In the same year, ILM used fractals, a mathematical technique, to generate a planetary landscape sequence for the movie Star Trek II: The Wrath of Khan. [10]

ILM created the digital effects for Terminator 2 in 1991. Several of the sequences in the movie featured a liquid metal humanoid form transforming into several different characters. ILM had to create new techniques for creating realistic humanoid actions such as walking and running. [11]

By the turn of the century, computer graphics had reached a point where so-called hyper-realism was achievable. In 2001, Square Pictures, the computer-animated film division of the Square entertainment company, released Final Fantasy: The Spirits Within. The film featured a lead character, Aki Ross, who was entirely computer generated. Some of the special effects in the film included realistic modeling and animation of hair and facial features. [12]

Computer generated actors and models have been used in recent years for movies, commercials, and even print ads. These realistic characters are used in place of traditional actors for a variety of reasons. While it can take a tremendous amount of time to create a new “actor,” the benefits can easily outweigh the work. CGI actors are predictable and don’t throw tantrums or have trouble remembering lines. Once the major design work has been completed, using a CGI actor is arguably as easy as posing an action figure. [13]

As technology progresses, it is inevitable that we will be able to create even more realistic characters, completely blurring the lines between real and imaginary. One can argue that we have already hit that point with movies such as Avatar, which feature entirely new species and civilizations created entirely out of pixels. But as brilliant as Avatar is, it still relies on human actors to serve as motion capture targets. Even the facial expressions used in Avatar are based on motion captured data from live actors. [14]

It seems, however, that we are quickly approaching a time when even real actors won’t be necessary to create the latest movies and television shows. A time when technology will edge out highly paid actors, replacing them with a hard drive full of bits. Bits that can be molded to any role, instantly, without the need to eat or sleep. It means we will have actors who can do all of their own stunts without fear of getting injured or requiring body doubles. In short, it means we can fill roles we have never been able to fill before, with relatively inexpensive labor.

Does this mean we will see a shift in the industry as actors move to fill new roles as voices, or even as writers or directors? Or will we see a battle between the real and the imaginary? When robots took over human jobs in the automotive industry, fear was everywhere. Will the movie industry see this as a negative move, or will it take a cue from the workers who shifted from manual labor to technical jobs, in charge of the very robots that threatened to make them obsolete? Either way, technology is changing the way movies are made.

References:
[1] S. Levy, Hackers: Heroes of the Computer Revolution. London: Penguin, 1994.
[2] (2010, February 24). [Online]. Available: http://www.pong-story.com/odyssey.htm
[3] J. Ringen, “Visions of Light,” Metropolis, June, 2002.
[4] D. Sevo. (2010, February 24) History of Computer Graphics. [Online]. Available: http://www.danielsevo.com/hocg/hocg_1970.htm
[5] “Behind the Scenes of Westworld,” American Cinematographer, November, 1973.
[6] C. Machover, “Springing into the Fifth Decade of Computer Graphics – Where We’ve Been and Where We’re Going!” Siggraph, 1996.
[7] J. C. Panettieri, “Out of This World,” NYIT Magazine, Winter, 2003/2004.
[8] A. Deutschman, The Second Coming of Steve Jobs. New York: Broadway Books, 2000.
[9] R. Patterson, “The Making of Tron,” American Cinematographer, August, 1982.
[10] J. Veilleux, “Special Effects for ‘Star Trek II’: Warp Speed and Beyond,” American Cinematographer, October, 1982.
[11] L. Hu, “Computer Graphics in Visual Effects,” Compcon, 1992.
[12] H. Sakaguchi, Final Fantasy: The Spirits Within, Columbia Pictures.
[13] R. La Ferla, “Perfect Model: Gorgeous, No Complaints, Made of Pixels,” New York Times, May 6, 2001.
[14] B. Robertson, “CG In Another World,” Computer Graphics World, December, 2009.

 

Privacy … Or so you think

Ah, the Internet. What an incredible utility. I can be totally anonymous here, saying whatever I want and no one will be the wiser. I can open up a Facebook, MySpace, or Twitter account, abuse it by posting whatever I want about whomever I want, and no one can do anything about it. I’m completely anonymous! Ha! Try to track me down!

I can post comments on news items, send emails through “free” email services like HotMail, Yahoo, and Gmail. I can post pictures on Flickr and Tumblr. I can chat using AIM, ICQ, Skype, or GTalk! The limits are endless, and you can’t find me! You have no idea who I am!

Wait, what’s that? You have my IP address? You have the email address I signed up with? You have my username and you’ve used that to link me to other sites? … And now you’re planning on suing me? I .. uhh… Oh boy…

Online anonymity is mostly a myth. There are ways to remain completely anonymous, but they are, at best, extremely cumbersome and difficult. With enough time and dedication, your identity can be tracked down. Don’t be too afraid, though. Typically, no one really cares who you are. There may be a few who take offense at what you have to say, but most don’t have the knowledge or access to obtain the information necessary to start their search.

There are those out there with the means and the access to figure out who you are, though. Take, for instance, the case of Judge Shirley Saffold. According to a newspaper in Cuyahoga County, Ohio, Judge Saffold commented on a number of local articles, including articles about cases she had presided over. These comments ranged from simple, innocuous comments to commentary about ongoing cases and those participating in them.

The Judge, of course, denies any involvement. Her daughter has stepped forward claiming that she is the one that made all of the posts. According to the newspaper, they traced activity back to the Judge’s computer at the courthouse, which they believe to be definitive proof that the Judge is the actual poster.

This is an excellent example of the lack of anonymity on the Internet. There are ways to track you down, and ways to identify who you are. In the case of Judge Saffold, an editor for the paper was able to link an online identity to an email address. While I’m not entirely sure he should have had such access, and apparently that access has since been removed, the fact remains that he did. This simple piece of information has sparked a massive debate about online privacy.

You, as a user of the Internet, need to understand that you don’t necessarily have anonymity. By merely coming to read this post, you have left digital footprints. The logs for this website have captured a good deal of information about you: what browser you’re using, what IP address you’ve accessed the site from, and sometimes the address of the last site you visited. It is even possible, though this site doesn’t do it, to send little bits of information back to you that can track your online presence, reporting back where you go from here and how long you stay there.
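To make that concrete, here is a hypothetical log line in the common “combined” format that most web servers keep by default. The IP address, page, and browser string are made up, but every page view generates something like it:

203.0.113.45 - - [04/May/2010:13:37:02 -0400] "GET /privacy-or-so-you-think/ HTTP/1.1" 200 8192 "http://www.google.com/search?q=online+anonymity" "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20100401 Firefox/3.6.3"

That single line records where you came from, what you asked for, and what software you used to ask for it.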

Believing you are truly anonymous on the Internet can be dangerous. While it may feel liberating to speak your mind, be cognizant that your identity can be obtained if necessary. Don’t go completely crazy; think before you post.

 

Games as saviors?

I watched a video yesterday about using video games as a means to help solve world problems. It sounds outrageous at first, until you really think about the problem. But first, how about watching the video:

Ok, now that you have some background, let’s think about this for a bit. Technology is amazing, and has brought us many advancements. Gaming is one of those advancements. We have the capability of creating entire universes, purely for our own amusement. People spend hours each day exploring these worlds. Players are typically working toward completing goals set forth by the game designers. When a player completes a goal, they are rewarded. Sometimes rewards are new items, monetary in nature, or perhaps clues to other goals. Each goal is within the reach of the player, though some goals may require more work to attain.

Miss McGonigal argues that the devotion that players show to games can be harnessed and used to help solve real-world problems. Players feel empowered by games, finding within them a way to control what happens to them. Games teach players that they can accomplish the goals set before them, bringing with it an excitement to continue.

I had the opportunity to participate in a discussion about this topic with a group of college students. Opinions ranged from a general distaste of gaming, seeing it as a waste of time, to an embrace of the ideas presented in the video. For myself, I believe that many of the ideas Miss McGonigal presents have a lot of merit. Some of the students argued that such realistic games would be complicated and uninteresting. However, I would argue that such realistic games have already proven to be big hits.

Take, for example, The Sims. The Sims was a huge hit, with players spending hours in the game adjusting various aspects of their character’s lives. I found the entire phenomenon to be absolutely fascinating. I honestly don’t know what the draw of the game was. Regardless, it did extremely well, proving that such a game could succeed.

Imagine taking a real-world problem and creating a game to represent that problem. At the very least, such a game can foster conversation about the problem. It can also lead to unique ideas about how to solve the problem, even though those playing the game may not be well-versed on the topic.

It’s definitely an avenue worth exploring, especially as future generations spend more time online. If we can find a way to harness the energy and excitement that gaming generates, we may be able to find solutions to many of the world’s most perplexing problems.

 

SSL MitM Appliance

SSL has been used for years to protect against man-in-the-middle (MitM) attacks. It has worked quite well and kept our secret transactions secure. However, that sense of security is starting to crumble.

At Black Hat USA 2009, security researcher Dan Kaminsky presented a talk outlining flaws in X.509 SSL certificates. In short, it is possible to trick a certificate authority into certifying a site as legitimate when the site may, in fact, be malicious. It’s not the easiest hack to pull off, but it’s there.

Once you have a legitimate certificate, pulling off a MitM attack is as simple as proxying the traffic through your own system. If you can trick the user into hitting your server instead of the legitimate server, *cough*DNSPOISONING*cough*, you can impersonate the legitimate server via proxy, and log everything the user does. And the only way the user can tell is if they actually look at the IP they’re hitting. How many people do you know that keep track of the IP of the server they’re trying to get to?

Surely there’s something that will prevent this, right? I mean, the fingerprint of the certificate has changed, so the browser will tell me that something is amiss, right? Well, actually, no. In fact, if you replace a valid certificate from one CA with a valid certificate from another CA, the end user typically sees no change at all. There may be options that can be set to alter this behavior, but I know of no browsers that will detect this by default. Ultimately, this means that if an attacker can obtain a valid certificate and redirect your traffic, he will own everything you do without you being the wiser.
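If you want to keep an eye on this yourself, one low-tech option is to record a site’s certificate fingerprint and compare it again later. Here’s a rough sketch using the stock openssl command line tools; swap in the host you actually care about:

# grab the certificate the server is currently presenting and print its fingerprint
echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1

If the fingerprint changes when you weren’t expecting the site to change certificates, it’s worth a closer look at what is sitting between you and the server.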

And now, just to make things more interesting, we have this little beauty.

This is an SSL interception device sold by Packet Forensics. In short, you provide the fake certificate and redirect the user traffic and the box will take care of the rest. According to Packet Forensics, this box is sold exclusively to law enforcement agencies, though I’m sure there are ways to get a unit. For “testing,” of course.

The legal use of this device is actually unknown. In order to use it, a law enforcement organization (LEO) will need to obtain legitimate certificates to impersonate the remote website, as well as obtain access to insert the device into a network. If the device is not placed directly in-line with the user, then potentially illegal hacking has to take place in order to redirect the traffic instead. Regardless, once these are obtained, the LEO has full access to the user’s traffic to and from the remote server.

The existence of this device merely drives home the ease with which MitM attacks are possible. In fact, in a paper published by two researchers from the EFF, this may already be happening. To date, there are no readily available tools to prevent this sort of abuse. However, the authors of the aforementioned paper are planning on releasing a Firefox plugin, dubbed CertLock, that will track SSL certificate information and inform the user when it changes. Ultimately, however, it would be great if browser manufacturers would incorporate these checks into the main browser logic.

So remember kiddies, just because you see the pretty lock icon, or the browser bar turns green, there is no guarantee you’re not being watched. Be careful out there, cyberspace is dangerous.

 

Really Awesome New Cisco confIg Differ

Configuration management is pretty important, but often overlooked. It’s typically easy enough to handle configurations for servers since you have access to standard scripting tools as well as cron. Hardware devices such as switches and routers are a bit harder to handle, though, as automating backups of their configs can be daunting, at best.

Several years ago, I took the time to write a fairly comprehensive configuration backup system for the company I was working for. It handled Cisco routers and switches, Fore Systems/Marconi ASX ATM switches, Redback SMS aggregators, and a few other odds and ends. Unfortunately, it was written specifically for that company and not something easily converted for general use.

Fortunately, there’s a robust open source alternative called RANCID. The Really Awesome New Cisco confIg Differ, RANCID, is a set of perl scripts designed to automate configuration retrieval from a host of devices including Cisco, Juniper, Redback, ADC, HP, and more. Additionally, since most of the framework is already there, you can extend it as needed to support additional devices.

RANCID has a few interesting features which make life much easier as a network admin. First, when it retrieves the configuration from a device, it checks it in to either a CVS or SVN repository. This gives you the ability to see changes between revisions, as well as the ability to retrieve an old revision of a config from just about any point in time. Additionally, RANCID emails a list of the changes between the current and last revision of a configuration to you. This way you can keep an eye on your equipment, seeing alerts when things change. Very, very useful for detecting errors made by you and others.
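As an example of what that buys you, and assuming SVN as the backend with the RPM’s default home directory of /var/rancid, pulling up a device’s change history or an older copy of its config is just a couple of svn commands (the device name here is a placeholder):

# show the revision history for one device's config
svn log /var/rancid/routers/configs/router.example.com
# pull an older revision of that config out of the repository
svn cat -r 12 /var/rancid/routers/configs/router.example.com > router.example.com.r12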

Note: RANCID handles text-based configurations. Binary configurations are a whole different story. While binary configs can be placed in an SVN repository, getting emailed about changes becomes a problem. It’s possible to handle binary configs, though I do not believe RANCID has this capability.

Setup of RANCID is pretty straightforward. You can either install straight from source, or use a pre-packaged RPM. For this short tutorial, I’ll be using an RPM-based installation. The source RPM I’m using can be found here. It is assumed that you can either rebuild the RPM via the rpmbuild utility, or you can install the software from source.

After the software is installed, there are a few steps required to set up the software. First, I would recommend editing the rancid.conf file. I find making the following modifications to be a good first step:

RCSSYS=svn; export RCSSYS
* Change RCSSYS from cvs to svn. I find SVN to be a superior revisioning system. Your mileage may vary, but I’m going to assume you’re using SVN for this tutorial.

FILTER_PWDS=ALL; export FILTER_PWDS
NOCOMMSTR=YES; export NOCOMMSTR
* Uncommenting these and turning them on ensures that passwords are not stored on your server. This is a security consideration as these files are stored in cleartext format.

OLDTIME=4; export OLDTIME
* This setting tells RANCID how long a device can be unreachable before alerting you to the problem. The default is 24 hours. Depending on how often you run RANCID, you may want to change this option.

LIST_OF_GROUPS="routers switches firewalls"
* This is a list of names you’ll use to identify groups of devices. The names are arbitrary, so Fred, Bob, and George are OK. However, I would encourage you to use something meaningful.

The next step is to create the CVS/SVN repositories you’ll be using. This can’t possibly be easier. Switch to the rancid user, then run rancid-cvs. You’ll see output similar to the following:

-bash-3.2$ rancid-cvs
Committed revision 1.
Checked out revision 1.
A configs
Adding configs
Committed revision 2.
A router.db
Adding router.db
Transmitting file data .
Committed revision 3.
Committed revision 4.
Checked out revision 4.
A configs
Adding configs
Committed revision 5.
A router.db
Adding router.db
Transmitting file data .
Committed revision 6.
-bash-3.2$

That’s it, your repositories are created. All that’s left is to set up the user credentials that rancid will use to access the devices, tell rancid which devices to contact, and finally, where to send email. Again, this is quite straightforward.

User credentials are stored in the .cloginrc file located in the rancid home directory. This file is quite detailed, with explanations of the various configuration options. In short, for most Cisco devices, you’ll want something like this:

add user * <username>
add password * <login password> <enable password>
add method * ssh

This tells the system to use the given username and passwords for accessing all devices in rancid via ssh. You can specify overrides for individual devices by adding additional lines above these, replacing the * with the device name, as in the example below.
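For instance, a device-specific override might look like this, placed above the wildcard entries (the hostname and passwords are placeholders):

add user router1.example.com admin
add password router1.example.com l0ginpass enablepass
add method router1.example.com ssh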

Next, tell rancid what devices to contact. As the rancid user, switch to the appropriate repository directory. For instance, if we’re adding a router, switch to ~rancid/routers and edit the router.db file. Note: This file is always called router.db, regardless of the repository you are in. Each line of this file consists of three fields, separated by colons. Field 1 is the hostname of the device, field 2 is the type of device, and field 3 is either up or down depending on whether the device is up or not. If you remove a device from this file, the configuration is removed from the repository, so be careful.

router.example.com:cisco:up
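If you need to take a device out of the rotation temporarily, mark it as down rather than deleting its line, so its config history stays in the repository:

router2.example.com:cisco:down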

Finally, set up the mailer addresses for receiving rancid mail. These consist of aliases on the local machine. If you’re using sendmail, edit the /etc/aliases file and add the following:

rancid-<group>: <email target>
rancid-admin-<group>: <email target>

There are two different aliases needed for each group, where the groups are the names used for the repositories. In our previous example we have three groups: routers, switches, and firewalls, so we set up two aliases for each, sending the results to the appropriate parties. The standard rancid-<group> alias is used for sending config diffs, while the rancid-admin-<group> alias is used to send alerts about program problems such as not being able to contact a device.
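Using the routers group as an example, the aliases might look something like this (swap in whatever address should receive the mail):

rancid-routers: netadmin@example.com
rancid-admin-routers: netadmin@example.com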

Make sure you run newaliases when you’re done editing the aliases file.

Once these are all set up, we can run a test of rancid. As the rancid user, run rancid-run. This will run through all of the devices you have identified and begin retrieving configurations. Assuming all went well, you should receive notifications via email about the new configurations identified.
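If the email doesn’t show up right away, you can also verify the run by poking around the rancid home directory directly (assuming the RPM default of /var/rancid):

# one log file per group, per run
ls /var/rancid/logs
# the device configurations that were retrieved
ls /var/rancid/routers/configs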

If you have successfully run rancid and retrieved configurations, it’s time to set up the cron job to have this run automagically. Merely edit the crontab file for rancid and add something similar to the following:

# run config differ 11 minutes after midnight, 2am, 4am, etc.
11 0-23/2 * * * /usr/bin/rancid-run
# clean out config differ logs
50 23 * * * /usr/bin/find /var/rancid/logs -type f -mtime +2 -exec rm {} \;

Offsetting the times a bit is a good practice, just to ensure everything doesn’t run at once and bog down the system. The second entry cleans up the rancid log files, removing anything older than 2 days.

And that’s it! You’re well on your way to being a better admin. Now to finish those other million or so “great ideas” ….