Aperture Science Updates

E3 is in full swing, and among the myriad of incredible announcements and demos, the fine folks over at Aperture Science demonstrated some of their new technology. Below are some absolutely stunning videos showing off all that is Portal 2. I am so excited about this game and cannot wait to get my hands on it.

Just look at the beauty of the environment they’ve designed for Portal 2… The bright white of the original Portal lab is marred by rust and wear as well as encroachment from the outside.

The new game mechanics are simply brilliant. I can’t wait to see how creative you can get with the various mechanics. I’m sure the achievements available will reflect this as well.

According to what I’ve read, Valve brought on the team from DigiPen that came up with Tag and folded that technology into Portal 2. The result is the gels you see being used to provide additional bounce or speed boosts.

2011 cannot get here fast enough. Let’s just hope I have enough time to play before the world ends in 2012!

How Did We Get Here

I’ve been taking some courses in Computer Science lately and had the opportunity to take a more ethics-based class this last semester. As part of that class, I had to write a series of papers delving into where computer technology started and where I see it ending up. Ultimately, we had to have a general theme as computer technology can be rather broad. I chose entertainment for my theme, partially as a bit of a challenge to myself, and partially because it can be an interesting field.

Below is the first of the three papers I wrote.

In the beginning, before formal written languages, man told stories. Stories provided news, knowledge, and entertainment. Storytelling was often a group event, with well-known storytellers providing the entertainment through spoken word and, often, musical accompaniment. As time passed, storytelling became more elaborate. Stories were performed in front of audiences and, once formal writing systems were developed, eventually written down.

In the late 1800’s, radio was developed. While initially used as a tool for disseminating important information, radio was quickly adapted to provide entertainment for the masses. Both music and stories were broadcast to mass audiences. By the 1920’s, it was not uncommon for families to gather around their radio to listen to the latest broadcast of their favorite program.

In the early 1930’s, television was commercialized, and it soon began to replace radio as the primary source of home entertainment. As with radio, families gathered around the television to watch their favorite programs, immersing themselves in their entertainment. With this new medium, entertainers were determined to push the envelope, seeking the very limits of the technology available.

Alongside the development of both radio and television, scientists and mathematicians were progressing toward the development of mechanical and, later, electronic computers. Initially, computers were used primarily for calculation. During World War II, computers such as the Colossus were used to break enemy ciphers.

By the late 1950’s, computers were being used at businesses and colleges across the country, primarily for financial calculations. Colleges made computers available to graduate students who used them for research and course work. In many instances, tinkerers and hackers gained access to these computers as well. Their goal was not to use the computers as they were intended, but to push the limits of the system and learn as much as they could in the process. Inevitably, the use of computers turned to entertainment as well as utilitarian functions. In 1959, a professor at MIT, John McCarthy, was working on a program for the IBM 704 that would play chess. Some of the grad students working with him devised a program that used a row of lights on the 704 to play a primitive game of Ping Pong. [1]

As computers advanced and moved from rows of lights on a console to integration with video displays, graphical capabilities increased as well. In the early 1960’s, MIT students created interactive graphical programs on machines such as the TX-0. Ivan Sutherland created a program called Sketchpad, which allowed a user to draw shapes on a computer screen using a light pen. Steve Russell created one of the first video games, Spacewar. These programs marked early attempts at using computers for entertainment purposes. [1]

By 1966, Ralph Baer designed a game console called the Brown Box. Magnavox licensed the system and marketed it to the general public in 1972 as The Odyssey. The Odyssey connected to a user’s television and manipulated points of light on the screen. Plastic overlays were used as backgrounds for the games as advanced graphics manipulation was not yet available. [2]

Around the same time that video games were being invented, other computer scientists were working on generating more advanced graphical capabilities for computers. At Cornell in 1965, Professor Donald Greenberg worked with a number of architecture students to develop a computer-animated movie about how Cornell was built. Greenberg went on to start the Program of Computer Graphics at Cornell and work on photorealistic rendering. He is considered one of the pioneers of the field. [3]

At the University of Utah, Ivan Sutherland, who had previously created Sketchpad, joined the Computer Science department and began teaching computer graphics. One of his students, Ed Catmull, would go on to become a pioneer in computer graphics, developing some of the most common graphical techniques used today.

In the early 1970’s, a number of animation studios were formed. Among these were Information International Inc. (Triple I) and Lucasfilm. One of the primary purposes of these new studios was to use computers along with traditional motion picture film. While most of these new studios quickly went out of business, a few, such as Lucasfilm, were quite successful and continue to be innovative today. [4]

In 1973, the movie Westworld was released. This movie marked the first use of computer-generated imagery (CGI) in a major motion picture. Technicians at Triple I used digital processing techniques to pixelate a portion of the movie, giving the viewer a unique view of one of the main characters, an android. This movie was to be the first of a wave of movies employing computer-generated imagery. [5]

Futureworld, the sequel to Westworld, was released in 1976. A scene in Futureworld used a 3D model of a human hand, a model designed and built by Dr. Edwin Catmull while he was a graduate student at the University of Utah. [6] After graduation, he joined the New York Institute of Technology Computer Graphics Lab. Catmull and other researchers at the CGL helped to develop many of the advanced graphics techniques used in today’s movies. In 1979, the group started working on the first feature-length computer animated movie, The Works. The group worked for three years before releasing the first trailer at SIGGRAPH, the Association for Computing Machinery’s Special Interest Group on Computer Graphics conference, in 1982. Unfortunately, due to both technical and financial limitations, work on the movie was halted in 1986 and the film was never finished. [7]

George Lucas, a film director and producer, created a new computer graphics division at Lucasfilm in 1979. Dr. Catmull, along with other researchers from NYIT, was among the initial hires. The computer graphics group concentrated on 3D graphics, eventually developing a computer system for Disney and Industrial Light and Magic (ILM) called the Pixar Image Computer. In 1986, Steve Jobs, co-founder of Apple, purchased the computer graphics division from Lucasfilm, and it became Pixar. Pixar used the computer to develop a number of movie shorts to show off the capabilities of the system. Ultimately, however, Pixar stopped selling the computer due to slow sales.

Despite problems selling their Image Computer, Pixar was able to generate revenue by creating animated commercials for various companies. Pixar decided that animation was their strong suit and began pursuing an avenue for producing full-length animated films. Their earlier business dealings with Disney allowed them to sign a deal wherein Pixar would create a full-length film and Disney would market and distribute it. Pixar and Disney released the world’s first full-length computer animated movie, Toy Story, in 1995. [8]

While Pixar was developing technology for cartoon rendering, other companies such as Triple I and ILM were developing technologies that could be used in traditional live-action movies. Perhaps one of the most famous “computer” movies, Tron, was released in 1982. Triple I helped to create approximately 15 minutes of computer animation that was used in the movie. [9] In the same year, ILM used fractals, a mathematical technique, to generate a landscape-creation sequence for the movie Star Trek II: The Wrath of Khan. [10]

ILM created the digital effects for Terminator 2 in 1991. A number of sequences in the movie featured a liquid-metal humanoid transforming into several different characters. ILM had to develop new techniques for creating realistic humanoid actions such as walking and running. [11]

By the turn of the century, computer graphics had reached a point where so-called hyper-realism was achievable. In 2001, Square Pictures, the computer-animated film division of the Square entertainment company, released Final Fantasy: The Spirits Within. The film featured a lead character, Aki Ross, who was entirely computer generated. Some of the special effects in the film included realistic modeling and animation of hair and facial features. [12]

Computer generated actors and models have been used in recent years for movies, commercials, and even print ads. These realistic characters are used in place of traditional actors for a variety of reasons. While it can take a tremendous amount of time to create a new “actor,” the benefits can easily outweigh the work. CGI actors are predictable and don’t throw tantrums or have trouble remembering lines. Once the major design work has been completed, using a CGI actor is arguably as easy as posing an action figure. [13]

As technology progresses, it is inevitable that we will be able to create even more realistic characters, completely blurring the line between real and imaginary. One can argue that we have already hit that point with movies such as Avatar, which features entirely new species and civilizations created out of pixels. But as brilliant as Avatar is, it still relies on human actors to serve as motion-capture targets. Even the facial expressions used in Avatar are based on motion-capture data from live actors. [14]

It seems, however, that we are quickly approaching a time when even real actors won’t be necessary to create the latest movies and television shows. A time when technology will edge out highly paid actors, replacing them with a hard drive full of bits. Bits that can be molded to any role, instantly, without the need to eat or sleep. It means we will have actors who can do all of their own stunts without fear of getting injured or requiring body doubles. In short, it means we can fill roles we have never been able to fill before, with relatively inexpensive labor.

Does this mean we will see a shift in the industry as actors move to fill new roles as voices, or even as writers or directors? Or will we see a battle between the real and the imaginary? When robots took over human jobs in the automotive industry, fear was everywhere. Will the movie industry see this as a negative move, or will it take a cue from workers who shifted from manual labor to technical jobs, in charge of the very robots that threatened to make them obsolete? Either way, technology is changing the way movies are made.

References:
[1] S. Levy, Hackers : Heroes of the Computer Revolution. London: Penguin, 1994.
[2] (2010, February 24). [Online]. Available: http://www.pong-story.com/odyssey.htm
[3] J. Ringen, “Visions of Light,” Metropolis, June, 2002.
[4] D. Sevo. (2010, February 24) History of Computer Graphics. [Online]. Available: http://www.danielsevo.com/hocg/hocg_1970.htm
[5] “Behind the Scenes of Westworld,” American Cinematographer, November, 1973.
[6] C. Machover, “Springing into the Fifth Decade of Computer Graphics – Where We’ve Been and Where We’re Going!” Siggraph, 1996.
[7] J. C. Panettieri, “Out of This World,” NYIT Magazine, Winter, 2003/2004.
[8] A. Deutschman, The Second Coming of Steve Jobs. New York: Broadway Books, 2000.
[9] R. Patterson, “The Making of Tron,” American Cinematographer, August, 1982.
[10] J. Veilleux, “Special Effects for ‘Star Trek II’: Warp Speed and Beyond,” American Cinematographer, October, 1982.
[11] L. Hu, “Computer Graphics in Visual Effects,” Compcon, 1992.
[12] H. Sakaguchi, Final Fantasy: The Spirits Within, Columbia Pictures.
[13] R. La Ferla, “Perfect Model: Gorgeous, No Complaints, Made of Pixels,” New York Times, May 6, 2001.
[14] B. Robertson, “CG In Another World,” Computer Graphics World, December, 2009.


Lego Star Wars

The last few weeks have been pretty hectic, and while I have a lot I’d like to write about, I just haven’t had the time. The logjam should be easing a bit this week, so I may be able to get to some of the topics I want to present later in the week. In the meantime, here’s another video you can check out. Quite a few spoilers, though, just in case you’re the only person to have never watched the original Star Wars trilogy. :P

Privacy … Or so you think

Ah, the Internet. What an incredible utility. I can be totally anonymous here, saying whatever I want and no one will be the wiser. I can open up a Facebook, MySpace, or Twitter account, abuse it by posting whatever I want about whomever I want, and no one can do anything about it. I’m completely anonymous! Ha! Try to track me down!

I can post comments on news items and send emails through “free” email services like Hotmail, Yahoo, and Gmail. I can post pictures on Flickr and Tumblr. I can chat using AIM, ICQ, Skype, or GTalk! The possibilities are endless, and you can’t find me! You have no idea who I am!

Wait, what’s that? You have my IP address? You have the email address I signed up with? You have my username and you’ve used that to link me to other sites? … And now you’re planning on suing me? I .. uhh… Oh boy…

Online anonymity is mostly a myth. There are ways to remain completely anonymous, but they are, at best, extremely cumbersome and difficult. With enough time and dedication, your identity can be tracked down. Don’t be too afraid, though. Typically, no one really cares who you are. There may be a few who take offense at what you have to say, but most don’t have the knowledge or access to obtain the information necessary to start their search.

There are those out there with the means and the access to figure out who you are, though. Take, for instance, the case of Judge Shirley Saffold. According to a newspaper in Cuyahoga County, Ohio, Judge Saffold commented on a number of local articles, including articles about cases she had presided over. These comments ranged from simple, innocuous remarks to commentary about ongoing cases and the people participating in them.

The Judge, of course, denies any involvement. Her daughter has stepped forward claiming that she is the one that made all of the posts. According to the newspaper, they traced activity back to the Judge’s computer at the courthouse, which they believe to be definitive proof that the Judge is the actual poster.

This is an excellent example of the lack of anonymity on the Internet. There are ways to track you down, and ways to identify who you are. In the case of Judge Saffold, an editor for the paper was able to link an online identity to an email address. While I’m not entirely sure he should have had such access, and apparently that access has since been removed, the fact remains that he did. This simple piece of information has sparked a massive debate about online privacy.

You, as a user of the Internet, need to understand that you don’t necessarily have anonymity. By merely coming to read this post, you have left digital footprints. The logs for this website have captured a good deal of information about you: what browser you’re using, what IP address you’ve accessed the site from, and sometimes the address of the last site you visited. It is even possible, though this site doesn’t do it, to send little bits of information back to you that can track your online presence, reporting back where you go from here and how long you stay there.
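
Just to make that concrete, below is a minimal sketch of the kind of request metadata any web application sees, written as a tiny Python WSGI app purely for illustration. The port, messages, and choice of fields are my own; a real site would typically just rely on its web server’s access logs, which record the same things.

```python
# A minimal sketch: what a web application can see about every visitor.
# The fields shown here (IP, user agent, referrer) are the same ones a
# normal access log records. Port and messages are arbitrary examples.
def app(environ, start_response):
    visitor = {
        "ip": environ.get("REMOTE_ADDR"),           # where the request came from
        "browser": environ.get("HTTP_USER_AGENT"),  # browser and OS identification
        "referrer": environ.get("HTTP_REFERER"),    # the page you came from, if sent
        "path": environ.get("PATH_INFO"),           # what you asked for
    }
    print("visitor:", visitor)                      # the "digital footprint"

    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello! Your visit has been noted.\n"]

if __name__ == "__main__":
    from wsgiref.simple_server import make_server
    make_server("", 8000, app).serve_forever()      # http://localhost:8000
```

Every one of those fields arrives with the request whether or not you ever log in or fill out a form.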

Believing you are truly anonymous on the Internet can be dangerous. While it may feel liberating to speak your mind, be cognizant that your identity can be obtained if necessary. Don’t go completely crazy; think before you post.


Games as saviors?

I watched a video yesterday about using video games as a means to help solve world problems. It sounds outrageous at first, until you really think about it. But first, how about watching the video:

Ok, now that you have some background, let’s think about this for a bit. Technology is amazing and has brought us many advancements. Gaming is one of those advancements. We have the capability of creating entire universes, purely for our own amusement. People spend hours each day exploring these worlds. Players are typically working toward completing goals set forth by the game designers. When a player completes a goal, they are rewarded. Sometimes the rewards are new items, in-game money, or clues to other goals. Each goal is within the reach of the player, though some goals may require more work to attain.

Ms. McGonigal argues that the devotion players show to games can be harnessed and used to help solve real-world problems. Players feel empowered by games, finding within them a way to control what happens to them. Games teach players that they can accomplish the goals set before them, bringing with it an excitement to continue.

I had the opportunity to participate in a discussion about this topic with a group of college students. Opinions ranged from a general distaste for gaming, seeing it as a waste of time, to an embrace of the ideas presented in the video. For myself, I believe that many of the ideas Ms. McGonigal presents have a lot of merit. Some of the students argued that such realistic games would be complicated and uninteresting. However, I would argue that such realistic games have already proven to be big hits.

Take, for example, The Sims. The Sims was a huge hit, with players spending hours in the game adjusting various aspects of their character’s lives. I found the entire phenomenon to be absolutely fascinating. I honestly don’t know what the draw of the game was. Regardless, it did extremely well, proving that such a game could succeed.

Imagine taking a real-world problem and creating a game to represent that problem. At the very least, such a game can foster conversation about the problem. It can also lead to unique ideas about how to solve the problem, even though those playing the game may not be well-versed on the topic.

It’s definitely an avenue worth exploring, especially as future generations spend more time online. If we can find a way to harness the energy and excitement that gaming generates, we may be able to find solutions to many of the world’s most perplexing problems.


SSL MitM Appliance

SSL has been used for years to protect against man-in-the-middle (MitM) attacks. It has worked quite well and kept our sensitive transactions secure. However, that sense of security is starting to crumble.

At Black Hat USA 2009, security researcher Dan Kaminsky presented a talk outlining flaws in X.509 SSL certificates. In short, it is possible to trick a certificate authority into certifying a site as legitimate when the site may, in fact, be malicious. It’s not the easiest hack to pull off, but it’s there.

Once you have a legitimate certificate, pulling off a MitM attack is as simple as proxying the traffic through your own system. If you can trick the user into hitting your server instead of the legitimate one, *cough*DNSPOISONING*cough*, you can impersonate the legitimate server via proxy and log everything the user does. And the only way the user can tell is by actually looking at the IP address they’re hitting. How many people do you know who keep track of the IP of the server they’re trying to get to?
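
For the curious, here is a minimal sketch of that manual check in Python: resolve the host yourself, note the IP address you are actually connecting to, and look at the certificate the server presents. The hostname below is just a placeholder, not a site singled out for any reason.

```python
# A quick, manual sanity check: which IP am I really hitting, and what
# certificate is it handing me? The hostname is only an example.
import socket
import ssl

host = "www.example.com"
ip = socket.gethostbyname(host)
print("Connecting to", host, "at", ip)

ctx = ssl.create_default_context()  # verifies the chain against system CAs
with socket.create_connection((ip, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Subject:   ", cert.get("subject"))
        print("Issued by: ", cert.get("issuer"))
        print("Expires:   ", cert.get("notAfter"))
```

Of course, almost nobody does this by hand, which is exactly the problem.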

Surely there’s something that will prevent this, right? I mean, the fingerprint of the certificate has changed, so the browser will tell me that something is amiss, right? Well, actually, no. In fact, if you replace a valid certificate from one CA with a valid certificate from another CA, the end user typically sees no change at all. There may be options that can be set to alter this behavior, but I know of no browser that will detect this by default. Ultimately, this means that if an attacker can obtain a valid certificate and redirect your traffic, he will own everything you do without you being any the wiser.

And now, just to make things more interesting, we have this little beauty.

This is an SSL interception device sold by Packet Forensics. In short, you provide the fake certificate and redirect the user traffic and the box will take care of the rest. According to Packet Forensics, this box is sold exclusively to law enforcement agencies, though I’m sure there are ways to get a unit. For “testing,” of course.

The legality of using this device is actually unclear. In order to use it, a law enforcement organization (LEO) will need to obtain legitimate certificates to impersonate the remote website, as well as gain access to insert the device into a network. If the device is not placed directly in-line with the user, then potentially illegal hacking has to take place in order to redirect the traffic instead. Regardless, once these are obtained, the LEO has full access to the user’s traffic to and from the remote server.

The existence of this device merely drives home the ease with which MitM attacks are possible. In fact, according to a paper published by two researchers from the EFF, this may already be happening. To date, there are no readily available tools to prevent this sort of abuse. However, the authors of the aforementioned paper are planning on releasing a Firefox plugin, dubbed CertLock, that will track SSL certificate information and inform the user when it changes. Ultimately, however, it would be great if browser manufacturers would incorporate these checks into the main browser logic.
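
To illustrate the idea behind that kind of check, here is a rough certificate-pinning sketch in Python. To be clear, this is not CertLock’s actual code; the hostname and the stored fingerprint are made-up placeholders, and a real tool would persist fingerprints from earlier visits rather than hard-coding them.

```python
# A rough sketch of certificate pinning: remember the fingerprint a site
# presented before and warn loudly if it ever changes. The stored value
# below is a placeholder, not a real fingerprint.
import hashlib
import socket
import ssl

def cert_fingerprint(host, port=443):
    """Fetch the server's certificate and return its SHA-256 fingerprint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der_cert).hexdigest()

# Fingerprint recorded on a previous, presumably safe, visit (placeholder).
KNOWN_FINGERPRINTS = {
    "www.example.com": "0" * 64,
}

host = "www.example.com"
current = cert_fingerprint(host)
expected = KNOWN_FINGERPRINTS.get(host)

if expected is None:
    print("No stored fingerprint for", host, "- recording", current)
elif current != expected:
    print("WARNING: certificate for", host, "has changed!")
    print("  expected:", expected)
    print("  got:     ", current)
else:
    print("Certificate for", host, "matches the stored fingerprint.")
```

The obvious weakness is the first visit: if you record the fingerprint while already being intercepted, the pin is worthless, and legitimate certificate renewals will also trip the warning.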

So remember, kiddies: just because you see the pretty lock icon, or the browser bar turns green, there is no guarantee you’re not being watched. Be careful out there; cyberspace is dangerous.