The latest version of OS X, Mountain Lion, has been out for a few months, and the next release of Windows, Windows 8, will be out very soon. These operating systems continue the trend of adding radical new features to the desktop, features we’ve previously seen only in mobile interfaces. For instance, OS X has Launchpad, an icon-based menu for launching applications similar to the home screen on the iPhone and iPad. Windows 8 has the new Metro interface, a tile-based design first seen on Windows Phone.
As operating systems evolve and mature, we’ll likely see more of this. But what will the interface of the future look like? How will we be expected to interact with the computer, both desktop and mobile, in the future? There’s a lot out there already about how computers will continue to become an integral part of daily life, how they’ll become so ubiquitous that we won’t know we’re actually using them, etc. It’s fairly easy to argue that this has already happened, though. But putting that aside, I’m going to ramble on a bit about what I think the future may hold. This isn’t a prediction, per se, but more of what I’m thinking we’ll see moving forward.
So let’s start with today. Touch-based devices running iOS and Android have become the standard for mobile phones and tablets. In fact, the Android operating system is being used for much more than this, appearing in game consoles such as the OUYA, as the operating system behind Google’s Project Glass initiative, and more. It’s not much of a surprise, of course, as Linux has been making these inroads for years, and Android is, at its core, an enhanced distribution of Linux designed for mobile and embedded applications.
The near future looks like it will be filled with more touch-based interfaces as developers iterate and enhance the current state of the art. I’m sure we’ll see streamlined multi-touch interfaces, novel ways of launching and interacting with applications, and new uses for touch-based computing.
For desktop and laptop systems, the traditional input methods of keyboard and mouse will be enhanced with touch. We see this happening already with Apple’s Magic Mouse and Magic Trackpad. Keyboards will follow suit with touchpads integrated into them, reducing the need to reach for the mouse. And while some keyboards with attached touchpads exist today, I believe we’ll start seeing tighter integration of multi-touch capabilities.
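To make that a bit more concrete, here’s a minimal sketch of the kind of logic that sits behind multi-touch input: classifying a two-finger motion as a swipe, a scroll, or plain cursor movement. The TouchPoint structure, coordinates, and thresholds are all hypothetical; real touchpad drivers expose their own event formats.

```python
# A minimal sketch of multi-touch gesture classification.
# TouchPoint and the thresholds are illustrative, not any real driver API.
from dataclasses import dataclass

@dataclass
class TouchPoint:
    finger_id: int  # stable ID for each finger while it stays down
    x: float        # normalized 0..1 across the touch surface
    y: float

def classify(start: list[TouchPoint], end: list[TouchPoint],
             swipe_threshold: float = 0.15) -> str:
    """Classify a two-finger motion as a horizontal swipe, a vertical
    scroll, or plain movement, based on average finger displacement."""
    if len(start) != 2 or len(end) != 2:
        return "not a two-finger gesture"
    # Average displacement across both fingers.
    dx = sum(e.x - s.x for s, e in zip(start, end)) / 2
    dy = sum(e.y - s.y for s, e in zip(start, end)) / 2
    if abs(dx) > swipe_threshold and abs(dx) > abs(dy):
        return "swipe right" if dx > 0 else "swipe left"
    if abs(dy) > swipe_threshold:
        return "scroll down" if dy > 0 else "scroll up"
    return "move"

# Example: both fingers travel right by ~0.3 of the pad width.
before = [TouchPoint(0, 0.2, 0.5), TouchPoint(1, 0.3, 0.5)]
after  = [TouchPoint(0, 0.5, 0.5), TouchPoint(1, 0.6, 0.5)]
print(classify(before, after))  # -> "swipe right"
```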
We’re also starting to see the beginnings of gesture-based devices such as Microsoft’s Kinect. Microsoft bet a lot on the Kinect as the next big thing in gaming, a direct response to Nintendo’s Wii and Sony’s Move controllers. And since the launch of the Kinect, hobbyists have been hacking away, adding Kinect support to “traditional” computer operating systems. Microsoft has responded, releasing a development kit for Windows and designing a version of the Kinect intended for use with desktop operating systems.
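To give a flavor of what those hobbyist hacks build on, here’s a minimal sketch using the OpenKinect community’s libfreenect Python bindings (the unofficial driver, not Microsoft’s SDK) to grab a single depth frame and locate the nearest object in view, the raw material for hand and gesture tracking. It assumes a Kinect is attached and the freenect module is installed; the invalid-pixel handling is an assumption about the sensor’s raw output.

```python
# Sketch: grab one depth frame from a Kinect via libfreenect's Python
# bindings and find the nearest point in view (e.g., an outstretched hand).
# Assumes a Kinect is plugged in and the `freenect` module is installed.
import freenect
import numpy as np

def nearest_point():
    # sync_get_depth() returns a 480x640 array of raw 11-bit depth
    # readings plus a timestamp; smaller values are closer.
    depth, _timestamp = freenect.sync_get_depth()
    # Assumption: the sensor reports 2047 for pixels it couldn't measure,
    # so anything at the limit is treated as "no reading".
    masked = np.where(depth < 2047, depth, 2047)
    row, col = np.unravel_index(np.argmin(masked), masked.shape)
    return row, col, masked[row, col]

if __name__ == "__main__":
    r, c, d = nearest_point()
    print("closest point at pixel (%d, %d), raw depth %d" % (r, c, d))
```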
Gesture-based interfaces have long been perceived as the ultimate in computer interaction. Movies such as Minority Report and Iron Man have shown the world what such interfaces might look like. But real life is far different from a movie. Humans aren’t built to hold their arms out in a horizontal position for long periods of time; the resulting fatigue is known as “gorilla arm.” Designers will have to adapt the technology in ways that work around these physical limitations.
Tablet computers work well at the moment because most interaction with them happens on a horizontal rather than a vertical plane, so users don’t have to strain to use them. Limited applications, such as ATMs, are more tolerant of vertical touchscreens because each session is short.
Right now we’re limited to 2D interfaces for applications. How will technology adapt when true 3D displays exist? It stands to reason that some sort of gesture interface will come into play, but in what form? Will we have interfaces like those seen in Iron Man? For designers, such an interface could open up entirely new ways of working. Perhaps a merging of 2D and 3D interfaces will allow for this. We already have 3D renderings in modern design software, but software that renders in true 3D, where the designer moves their head instead of their screen to interact with a model? That would be a real breakthrough.
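As a sketch of what “move your head instead of your screen” could mean in practice, here’s the core of head-coupled perspective, the technique behind Johnny Lee’s well-known Wii head-tracking demo: build an asymmetric (off-axis) view frustum from a tracked head position so the monitor behaves like a window into the scene. The head position, screen dimensions, and units below are hypothetical; a real system would feed in live data from a head tracker such as a depth camera.

```python
# A minimal sketch of head-coupled perspective: compute an off-axis
# (asymmetric) projection matrix from a tracked head position, so the
# monitor acts like a window into the 3D scene.
import numpy as np

def off_axis_projection(head, screen_w, screen_h, near=0.1, far=100.0):
    """head = (x, y, z) of the viewer's eye relative to the screen
    center, with z > 0 in front of the screen. screen_w/screen_h are
    the physical screen dimensions in the same units (say, meters)."""
    hx, hy, hz = head
    # Frustum edges at the near plane, found by projecting the screen
    # edges as seen from the eye (similar triangles: near / hz).
    scale = near / hz
    left   = (-screen_w / 2 - hx) * scale
    right  = ( screen_w / 2 - hx) * scale
    bottom = (-screen_h / 2 - hy) * scale
    top    = ( screen_h / 2 - hy) * scale
    # Standard OpenGL-style frustum matrix (same math as glFrustum).
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Viewer 60 cm in front of a 52 cm x 32 cm monitor, head shifted 10 cm left:
P = off_axis_projection(head=(-0.10, 0.0, 0.60), screen_w=0.52, screen_h=0.32)
print(P.round(3))
```

As the head moves, the frustum skews to match, which is what makes objects appear fixed in space behind the glass.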
What about mobile life? Will touch-based interfaces continue to dominate? Or will wearable computing with HUD-style displays become the new norm? I’m quite excited at the prospect of using something like Google’s Project Glass in the near future. The cost is still prohibitive for the average user, but it’s far below the cost of similar cutting-edge technology a mere five years ago. And prices will continue to drop.
Perhaps in the far future, 20+ years from now, the input device will be our own bodies, à la the Kinect, with a display small enough to be embedded in our eyes or worn as a contact lens. Maybe in that timeframe we’ll truly become one with the computer and transform from mere humans into cyborgs. There will always be those who won’t follow suit, but for those of us with the interest and the drive, those will be interesting times, won’t they?