Network Enhanced Telepathy

I’ve recently been reading Wired for War by P.W. Singer, and one of the concepts he mentions in the book is Network Enhanced Telepathy. This struck me not only as incredibly interesting, but as something we’ll probably see hit the mainstream in the next 5-10 years.

According to Wikipedia, telepathy is “the purported transmission of information from one person to another without using any of our known sensory channels or physical interaction.” In other words, you can think *at* someone and communicate. The concept that Singer talks about in the book isn’t quite as “mystical” since it uses technology to perform the heavy lifting. In this case, technology brings fantasy into reality.

Scientists have already developed methods to “read” thoughts from the human mind. These methods are by no means perfect, but they are a start. As we’ve seen with technology across the board, from computers to robotics, electric cars to rockets, technological jumps may ramp up slowly, but then they rocket forward at a blistering pace. What seems like a trivial breakthrough at the moment may well lead to the next step in human evolution.

What Singer describes in the book is one step further. If we can read the human mind, and presumably write back to it, then adding a network in between, allowing communication between minds, is the obvious next step. Thus we have Network Enhanced Telepathy. And, of course, with that comes all of the baggage we associate with networks today, everything from connectivity issues and lag to security problems.

The security issues associated with something like this range from inconvenient to downright horrifying. If you thought social engineering was bad, wait until we have a direct line straight into someone’s brain. Today, security issues can result in stolen data, denial of service issues, and, in some rare instances, destruction of property. These same issues may exist with this new technology as well.

Stolen data is pretty straightforward. Could an exploit allow an attacker to arbitrarily read data from someone’s mind? How would this work? Could they pinpoint the exact data they want, or would they only have access to the current “thoughts” being transmitted? While access to current thoughts might not be as bad as pinpointed data, it’s still possible this could be used to steal important information such as passwords and secrets. Pinpointing exact data could be absolutely devastating. Imagine, for a moment, what would happen if an attacker were able to pluck your innermost secrets straight out of your mind. Everyone has something to hide, whether that’s a deep dark secret, or maybe just the image of themselves in the bathroom mirror.

I’ve seen social engineering talks in which the presenter describes a technique for interrupting a person mid-thought, effectively creating a buffer overflow of sorts that allows the social engineer to insert their own directions. Taken to the next level, could an attacker perform a similar attack via a direct link to a person’s mind? If so, what access would the attacker then attain? Could we be looking at the next big thing in brainwashing? Merely insert the new programming directly into the target.

How about denial of service attacks or physical destruction? Could an attacker cause physical damage to their target? Is a connection to the mind enough access to directly modify the cognitive functions of the target? Could an attacker induce something like locked-in syndrome in a user? What about blocking specific functions, preventing the user from being able to move limbs or speak? Since the brain performs regulatory control over the body, could an attacker modify the temperature or heart rate, or even induce sensations in their target? These are truly scary scenarios and warrant serious thought and discussion.

Technology is racing ahead at breakneck speeds and the future is an exciting one. These technologies could allow humans to take that next evolutionary step. But as with all technology, we should be looking at it with a critical eye. As technology and biology become more and more intertwined, it is essential that we tread carefully and be sure to address potential problems long before they become a reality.

The Future of Personal Computers

The latest version of OS X, Mountain Lion, has been out for a few months, and the next release of Windows, Windows 8, will be out very soon. These operating systems continue the trend of bringing radical new features to the desktop that we’ve previously seen only in mobile interfaces. For instance, OS X has Launchpad, an icon-based menu for launching applications similar to the interface used on the iPhone and iPad. Windows 8 has the new Metro interface, a tile-based design first seen on Microsoft’s Windows Phone operating system.

As operating systems evolve and mature, we’ll likely see more of this. But what will the interface of the future look like? How will we be expected to interact with the computer, both desktop and mobile, in the future? There’s a lot out there already about how computers will continue to become an integral part of daily life, how they’ll become so ubiquitous that we won’t know we’re actually using them, etc. It’s fairly easy to argue that this has already happened, though. But putting that aside, I’m going to ramble on a bit about what I think the future may hold. This isn’t a prediction, per se, but more of what I’m thinking we’ll see moving forward.

So let’s start with today. Touch-based devices running iOS and Android have become the standard for mobile phones and tablets. In fact, the Android operating system is being used for much more than this, appearing in game consoles such as the OUYA, as the operating system behind Google’s Project Glass initiative, and more. It’s not much of a surprise, of course, as Linux has been making these inroads for years, and Android is, at its core, an enhanced distribution of Linux designed for mobile and embedded applications.

The near future looks like it will be filled with more touch-based interfaces as developers iterate and enhance the current state of the art. I’m sure we’ll see streamlined multi-touch interfaces, novel ways of launching and interacting with applications, and new uses for touch-based computing.

For desktop and laptop systems, the traditional input methods of keyboards and mice will be enhanced with touch. We see this happening already with Apple’s Magic Mouse and Magic Trackpad. Keyboards will follow suit, with touch pads integrated into them reducing the need to reach for the mouse. And while some keyboards with attached touchpads exist today, I believe we’ll start seeing tighter integration with multi-touch capabilities.

We’re also starting to see the beginnings of gesture-based devices such as Microsoft’s Kinect. Microsoft bet a lot on Kinect as the next big thing in gaming, a direct response to Nintendo’s Wii and Sony’s Move controllers. And since the launch of Kinect, hobbyists have been hacking away, adding Kinect support to “traditional” computer operating systems. Microsoft has responded, releasing a development kit for Windows and designing a version of Kinect intended for use with desktop operating systems.

Gesture-based interfaces have long been perceived as the ultimate in computer interaction. Movies such as Minority Report and Iron Man have shown the world what such interfaces might look like. But life is far different from a movie. Humans were not designed to hold their arms in a horizontal position for long periods of time; the fatigue that results is known as “gorilla arm.” Designers will have to adapt the technology in ways that work around these physical limitations.

Tablet computers work well at the moment because most interactions with them happen on a horizontal rather than a vertical plane, so users do not need to strain themselves. Limited applications, such as ATMs, are more tolerant of these limitations since the duration of use is very short.

Right now we’re limited to 2D interfaces for applications. How will technology adapt when true 3D displays exist? It stands to reason that some sort of gesture interface will come into play, but in what form? Will we have interfaces like those seen in Iron Man? For designers, such an interface could open up entirely new ways of working. Perhaps a merging of 2D and 3D interfaces will allow for this. We already have 3D renderings in modern design software, but imagine allowing such software to render in true 3D, where the designer moves their head instead of their screen to interact. That would be a true breakthrough.

What about mobile life? Will touch-based interfaces continue to dominate? Or will wearable computing with HUD-style displays become the new norm? I’m quite excited at the prospect of using something like Google’s Project Glass in the near future. The cost is still prohibitive for the average user, but it’s far below the cost of similar cutting-edge technology a mere 5 years ago. And prices will continue to drop.

Perhaps in the far future, 20+ years from now, the input device will be our own bodies, à la Kinect, with a display small enough to be embedded in our eyes or worn as a contact lens. Maybe in that timeframe we truly become one with the computer and transform from mere humans into cyborgs. There will always be those who won’t follow suit, but for those of us with the interest and the drive, those will be interesting times, won’t they?