The Authentication Problem

Authentication is a tricky problem. The goal of authentication is to verify the identity of the person, device, or machine attempting to gain access to the protected system. There are many factors to consider when designing an authentication system. Here is a brief sampling:

  • How much security is necessary?
  • Do we require a username?
  • How strong should the password be?
  • Do we need multi-factor authentication?

The need for authentication typically means that the data being accessed is sensitive in some way. This can be something as simple as a todo list or a user’s email, or as important as banking records or top secret information. It can also mean that the data being accessed is valuable in some way, such as a site that requires a subscription. In short, the security necessary depends on the data being protected.

Usually, authentication systems require a username and some form of a password. For more secure systems, multi-factor authentication is used. Multi-factor authentication means that multiple distinct types of information are used to authenticate the user. These vary depending on the security required. In the United States, federal regulators recognize the following factors:

  • Something the user knows (e.g., password, PIN)
  • Something the user has (e.g., ATM card, smart card)
  • Something the user is (e.g., biometric characteristic such as a fingerprint)

A username and a password is an example of a single-factor authentication mechanism. When you use an ATM, you supply it with an ATM card and then enter a PIN. This is an example of two-factor authentication.

The U.S. Federal Financial Institutions Examination Council (FFIEC) recommends the use of multi-factor authentication for financial institutions. Unfortunately, most of the authentication systems currently in place are still single-factor systems, despite asking for several pieces of information. For example, when you log into your bank’s system, you use a username and password. Once the username and password check out, you are often asked for additional information such as answers to challenge questions. These are all examples of things the user knows, and thus constitute only a single factor.

Some institutions have begun using additional factors to identify the user such as a one-time password sent to an email address or cell phone. This can be cumbersome, however, as it can often take additional time to receive this information. To combat this, browser cookies are used after the first successful authentication. After the user logs in for the first time, they are offered a chance to have the system place a “secure token” on their system. Subsequent logins use this secure token in addition to the username and password to authenticate the user. This is arguably a second factor as it’s something the user has, as opposed to something they know. On the other hand, it is extremely easy to duplicate or steal cookies.
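Here is a minimal sketch of how such a token scheme might look on the server side. The helper names and the in-memory store are hypothetical; a real system would persist hashed tokens in a database:

    # Sketch of a "secure token" device cookie as a weak second factor.
    # On first successful login, issue a random token for the browser to
    # store as a cookie; on later logins, require it alongside the password.
    import hashlib
    import secrets

    device_tokens = {}  # username -> set of hashed tokens (stand-in for a database)

    def issue_device_token(username):
        token = secrets.token_urlsafe(32)  # value placed in the browser cookie
        digest = hashlib.sha256(token.encode()).hexdigest()
        device_tokens.setdefault(username, set()).add(digest)
        return token  # only the hash is kept server-side

    def verify_device_token(username, token):
        digest = hashlib.sha256(token.encode()).hexdigest()
        return digest in device_tokens.get(username, set())

Note that the token is only as secure as the cookie jar it lives in: anything that can read or copy the cookie defeats this factor entirely.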

There are other ways that two-factor authentication can be circumvented as well. Since most institutions only use a single communication mechanism, hijacking that communication medium can result in a security breach.

Man-in-the-middle attacks use fake websites to lure users in and steal their authentication credentials. This can happen transparently to the user: the attacker forwards the information to the actual institution and lets the user continue to access the system. More sophisticated attacks have the user “fail” authentication on the first try and let them in on subsequent tries. The attacker can then use the credentials from the first attempt to gain access themselves.

Another method is the use of Trojans. If a user can be tricked into installing malicious software on their system, an attacker can ride along on the user’s session, injecting their own transactions into the communications channel.

Defending against these attacks is not easy and may be impossible in many situations. For instance, requiring a second method of communication for authentication may help to authenticate the user, but if an attacker can hijack the main communication path, they can still obtain access to the user’s current session. Use of encryption and proper training of users can help mitigate these types of attacks, but ultimately, any system using a public communication mechanism is susceptible to hijacking.

Session Security

Once authentication is complete, session security comes into play. Why go through all the trouble of authenticating the user if you’re not protecting the data they’re accessing? Assuming that the data itself is protected, we need to focus on protecting the data being transferred to and from the user. Additionally, we need to protect the user’s session itself.

Session hijacking refers to stealing a user’s session information in order to gain access to whatever the user is accessing. There are four primary methods of session hijacking:

  • Session Fixation
  • Session Sidejacking
  • Physical Access
  • Cross-site Scripting

Physical access is pretty straightforward. This involves an attacker directly accessing the user’s computer terminal and copying the session data. Session data can be something as simple as an alphanumeric token displayed right in the URL of the site being accessed. Or, it can be a piece of data on the machine such as a browser cookie.

Session fixation refers to a method by which an attacker tricks a user into using a pre-determined session ID. Once the user authenticates, the attacker gains access by presenting the same session ID. The system recognizes the session ID as belonging to an authenticated session and lets the attacker in without further verification.
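The standard defense is to never honor a session ID the client arrived with. Here is a minimal sketch (the in-memory session store and function names are hypothetical):

    import secrets

    sessions = {}  # session_id -> username (stand-in for a real session store)

    def login(supplied_session_id, username, password_ok):
        if not password_ok:
            return None
        # Discard whatever ID the client showed up with (an attacker may
        # have planted it) and issue a fresh, unguessable ID instead.
        sessions.pop(supplied_session_id, None)
        new_id = secrets.token_urlsafe(32)
        sessions[new_id] = username
        return new_id  # sent back to the browser as the new session cookie

Because the attacker’s pre-determined ID is thrown away at login, knowing it gains them nothing.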

Session Sidejacking involves an attacker intercepting the traffic between a user and the system. If a session is not encrypted, the attacker can obtain the session ID or cookie used to identify the user’s session. Once this information is obtained, the attacker can use the same information to gain access to the user’s session.

Finally, cross-site scripting is when an attacker tricks the user’s browser into sending session information to the attacker. This can happen when a user accesses a website that contains malicious code. For instance, an attacker can create a website with a special link to a well-known site such as a bank. The link contains additional code that, when run, sends the user’s authentication or session information to the attacker.
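The usual mitigation is to never echo untrusted input into a page unescaped, so injected markup is rendered as inert text. A tiny illustration using Python’s standard html module (the input is hypothetical):

    import html

    # Untrusted input, e.g. a query parameter the attacker controls.
    user_input = '<script>steal(document.cookie)</script>'

    # Echoed verbatim, this would execute the attacker's script.
    unsafe = '<p>Hello, ' + user_input + '</p>'

    # Escaped, the markup becomes harmless text.
    safe = '<p>Hello, ' + html.escape(user_input) + '</p>'
    print(safe)  # <p>Hello, &lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>

Marking session cookies HttpOnly also helps, since page scripts can then no longer read them at all.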

Encryption of the communications channel can mitigate some of these attack scenarios, but not all of them. Programmers should ensure that additional information is used to verify a user’s session. For instance, something as simple as verifying the user’s source IP address in addition to a session cookie is often enough to mitigate both physical access and session sidejacking. Not allowing a pre-defined session ID can prevent session fixation. And finally, proper coding can prevent cross-site scripting.
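A sketch of the IP-binding check just described, assuming each session record stores the address seen at login (names are hypothetical):

    sessions = {}  # session_id -> {"user": ..., "ip": ...} (stand-in store)

    def validate_session(session_id, request_ip):
        record = sessions.get(session_id)
        if record is None:
            return False
        # A stolen cookie presented from a different address is rejected.
        return record["ip"] == request_ip

The trade-off is that users whose addresses change mid-session (mobile networks, some proxies) get logged out, which is one reason the check is not universal.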

Additionally, any session information stored on the remote system being accessed should be properly secured as well. Merely securing the data accessed isn’t enough if an attacker can access the remote system and steal session information.

Unauthentication

Finally, how and when should a user be unauthenticated? Unauthentication is often overlooked when designing a secure system. If the user fails to log out, then attacks such as session hijacking become easier. Unauthentication can be tricky, however. There are a number of factors to consider, such as:

  • How and when should a user’s session be closed?
  • Should a user’s session time out?
  • How long should the timer be?

Most unauthentication currently consists of a user’s session timing out. After a pre-determined period of inactivity, the system will log a user out, deleting their session. Depending on the situation, this can be incredibly disruptive. For example, if a user’s email system has a short time out, they run the risk of losing a long email they’ve been working on. Some systems can mitigate this by recording the user’s data prior to logging them out, making it available again upon login so the user doesn’t lose it. Regardless, the longer the time out, the less secure a session can be.
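An idle-timeout check might look something like this sketch (the fifteen-minute value is arbitrary, and the store is hypothetical):

    import time

    IDLE_TIMEOUT = 15 * 60  # seconds of inactivity before the session is dropped

    sessions = {}  # session_id -> {"user": ..., "last_seen": ...}

    def touch(session_id):
        record = sessions.get(session_id)
        if record is None:
            return False
        if time.time() - record["last_seen"] > IDLE_TIMEOUT:
            del sessions[session_id]  # unauthenticate: expired by inactivity
            return False
        record["last_seen"] = time.time()  # activity resets the timer
        return True

Every request would run a check like this, so the timeout value directly trades security against convenience.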

Other unauthentication mechanisms have been discussed as well. When a physical token such as a USB key is used, the user can be unauthenticated if the key is removed from the system. Or, a device with some sort of radio in it, such as Bluetooth, can unauthenticate the user if it is removed from the proximity of the system. Unfortunately, users will likely end up leaving these devices behind, significantly reducing their effectiveness.

As with authentication, unauthentication methods can depend on the sensitivity of the data being protected. Ultimately, though, every system should have some form of automatic unauthentication.

Data security in general can be a difficult nut to crack. System designers are typically either very lax in their security design, often overlooking session security and unauthentication, or very draconian, making the system secure at the expense of the user. Designing a user-friendly, but secure, system is difficult, at best.

 

The Third Category

“Is there room for a third category of device in the middle, something that’s between a laptop and smartphone?”

And with that, Steve Jobs, CEO of Apple, ushered in the iPad.

So what is the iPad, exactly? I’ve been seeing it referred to as merely a gigantic iPod Touch. But is there more to it than that? Is this thing just a glorified iPod, or could it be something more?

On the surface, it truly is an oversized iPod Touch. It has the same basic layout as an iPod Touch with the home button at the bottom. It has a thick border around the screen where the user can hold the unit without interfering with the multitouch display.

The screen itself is an LCD display using IPS technology. According to Wikipedia, IPS (In-Plane Switching) is a technology designed by Hitachi. It offers a wide viewing angle and accurate color reproduction. The screen is backlit using LEDs, offering longer battery life, uniform backlighting, and a longer lifespan.

Apple is introducing a total of six units, varying only in the size of the built-in flash storage and the presence of 3G connectivity. Storage comes in 16, 32, or 64 GB varieties. 3G access requires a data plan from a participating 3G provider, AT&T to start, and will entail a monthly fee. 3G access will also require the use of a micro-SIM card; AT&T is currently the only US provider using these cards. The base 16GB model will go for $499, while the 64GB 3G model will run you $829, plus a monthly data plan. As it stands now, however, the data plan is on a month-by-month basis, no contract required.

Ok, so with the standard descriptive details out of the way, what is this thing? Is it worth the money? What is the “killer feature,” if there is one?

On the surface, the iPad seems to be just a big iPod Touch, nothing more. In fact, the iPad runs an enhanced version of the iPhone OS, the same OS the iPod Touch runs. Apple claims that most of the existing apps in the iTunes App Store will run on the iPad, both at their original size and in an enhanced mode that lets an app take up the entire screen.

Based on the demonstration that Steve Jobs gave, as well as various other reports, there’s more to this enhanced OS, though. For starters, it looks like there will be pop-out or drop-down menus, something the current iPhone OS does not have. Additionally, apps will be able to take advantage of file sharing, split screen views, custom fonts, and external displays.

One of the more touted features of the iPad was the inclusion of the iBook store. It seems that Apple wants a piece of the burgeoning eBook market and has decided to approach it just like they approached the music market. The problem here is that the iPad is still a backlit LCD screen at its core. Staring at a backlit display for long periods of time generally leads to headaches and/or eye strain. This is why eInk-based units such as the Kindle or the Sony Reader do so well. It’s not the aesthetics of the Kindle that people like; it’s the comfort of using the unit.

It would be nice to see the eBook market opened up the way the music market has been. In fact, I look forward to the day that the majority of eBooks are available without DRM. Apple’s choice of the ePub format for books is an auspicious one. The ePub format is fast becoming the standard of choice for eBooks and supports both DRM and non-DRM content. Additionally, the format uses standard open formats as a base.

But what else does the iPad offer? Is it just a fancy book reader with some extra multimedia functionality? Or is there something more?

There has been some speculation that the iPad represents more than just an entry into the tablet market. That it, instead, represents an entry into the mobile processor market. After all, Apple put together their own processor, the Apple A4, specifically for this product. So is Apple merely using this as a platform for a launch into the mobile processor market? If so, early reports indicate that they may have something spectacular. Those able to get hands-on time with the iPad report that the unit is very responsive and incredibly fast.

But for all of the design and power behind the iPad, there is one glaring hole: Flash support. And Apple isn’t hiding it, either. On stage, during the announcement of the iPad, Steve Jobs demonstrated web browsing by heading to the New York Times homepage. If you’ve ever been to their homepage, it’s dotted with various Flash objects containing video, slideshows, and more. On the iPad, these show up as big white boxes with the Safari plugin icon showing.

So what is Apple playing at? Flash is pretty prevalent on the web, so not supporting it will result in a lot of missing content, as one Adobe employee demonstrated. Of course, the iPhone and iPod Touch have the same problem. Or do they? If a device is popular, developers adapt. This can easily be seen by the number of websites that have adapted to the iPhone. But even more than that, look at the number of sites that adapt to the various web browsers, creating special markup to work with each one. This is nothing new for developers; it happens today.

Flash is unique, though, in that it gives developers capabilities that don’t otherwise exist in HTML, right? Well, not exactly. HTML5 gives developers a standardized way to deploy video, handle offline storage, draw, and more. Couple this with CSS and you can replicate much of what Flash already does. There are already lots of examples of what HTML5 can do.

So what does the iPad truly mean to computing? Will it be as revolutionary as Apple wants us to believe it will be? I’m still not 100% sold on it, but it’s definitely something to watch. Microsoft has tried tablets in the past and failed, will Apple succeed?

 

Web Security

People use the web today for just about anything. We get our news from news sites and blogs, we play games, we view pictures, etc. Most of these activities are fairly innocuous and don’t require much in the way of security, beyond typical anti-virus and anti-spyware measures. However, there are activities we engage in on the web where we want to keep our information private and secure. For instance, when we interact with our bank, we’d like to keep those transactions private. The same goes for other activities such as stock transfers and shopping.

And it’s not enough to merely keep it private, we also want to ensure that no one can inject anything into our sessions. Online banking wouldn’t be very useful if someone could inject phantom transfers into your session, draining your bank account. Likewise, having someone inject additional items into your order, or changing the delivery address, wouldn’t be very helpful.

Fortunately, Netscape developed a protocol to handle browser-to-server security called Secure Sockets Layer, or SSL. SSL was first released to the public in 1995 and updated a year later after several security flaws were uncovered. In 1999, SSL became TLS, Transport Layer Security. TLS has been updated twice since its inception and currently stands at version 1.2.

The purpose of SSL/TLS is pretty simple and straightforward, though the implementation details are enough to give anyone a headache. In short, when you connect to a remote site with your browser, the browser and web server negotiate a secure connection. Once established, everything you send back and forth is first encrypted locally and decrypted on the server end. Only the endpoints have the information required to both encrypt and decrypt, so the communication remains secure.
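You can watch this negotiation happen with a few lines of Python’s standard ssl module (a sketch; example.com stands in for any TLS-enabled site):

    import socket
    import ssl

    context = ssl.create_default_context()  # trusts the system's root certificates

    with socket.create_connection(("example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            # By this point the handshake is done and traffic is encrypted.
            print(tls.version())  # negotiated protocol, e.g. TLSv1.3
            print(tls.cipher())   # negotiated cipher suite
            print(tls.getpeercert()["subject"])  # who the server claims to be

Everything written to the wrapped socket afterward is encrypted locally and only decrypted at the other endpoint, exactly as described above.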

What about man-in-the-middle attacks? What if you were able to insert yourself between the browser and the server and then pass the messages back and forth? The browser would negotiate with you, and then you’d negotiate with the server. This way, you would have unencrypted access to the bits before you passed them on. That would work, wouldn’t it? Well, yes. Sort of. If the end user allowed it or was tricked into allowing it.

When a secure connection is negotiated between a browser and server, the server presents the browser with a certificate. The certificate identifies the server to the browser. While anyone can create a certificate, certificates can be signed by others to “prove” their authenticity. When the server is set up, the administrator requests a certificate from a well-known third party and uses that certificate to identify the server. When the browser receives the certificate, it can verify that the certificate is authentic by contacting the certificate signer and asking. If the certificate is not authentic, has expired, or was not signed by a well-known third party, the user is presented with an error dialog explaining the problem.

Unfortunately, the dialog presented isn’t always helpful and typically requires some knowledge of SSL/TLS to understand. Most browser vendors have “corrected” this by placing lots of red text, exclamation marks, and other graphics to indicate that something bad has happened. The problem here is that these messages are intended to be warnings. There are instances where certificates not signed by third parties are completely acceptable. In fact, it’s possible for you, as a user, to put together a valid certificate signing system that will provide users the exact same protections a third-party certificate provides. I’ll post a how-to a little later on the exact process. You can also use a self-signed certificate, one that has no root, and still provide the same level of encryption.
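For instance, a client can be told to trust your own signing certificate, after which privately signed certificates validate just like commercial ones. A Python sketch (the CA file and hostname are hypothetical):

    import socket
    import ssl

    # Trust our own certificate authority instead of the built-in roots.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_verify_locations(cafile="my-own-ca.pem")  # hypothetical CA bundle

    with socket.create_connection(("intranet.example", 443)) as sock:
        # Verification now succeeds for any certificate signed by that CA,
        # with the same encryption guarantees as a commercially signed one.
        with context.wrap_socket(sock, server_hostname="intranet.example") as tls:
            print(tls.getpeercert()["issuer"])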

So if we can provide the same protection using our own signed or self-signed certificates, then why pay a third party to sign certificates for us? Well, there are a couple of reasons, though they’ve somewhat faded with time. First and foremost, the major third-party signers have their root certificates, used for validation, added to all of the major web browsers. In other words, you don’t need to install these certificates, they’re already there. And since most users don’t know how SSL works, let alone how to install a certificate, this makes third-party certificates very appealing. This is the one feature of third-party certificates that still makes sense.

Another reason is that your information is validated by the third-party provider. Or, at least, that’s how it used to be. Perhaps some providers still do this, but since there is no standard across the board, SSL certificates as a de facto identity check are broken. Some providers offer differing levels of validation for normal certificates, but there are no indicators within the browser to identify the level of validation. As a result, determining whether to trust a site falls completely on the shoulders of the user.

In response to this, an organization called the Certificate Authority/Browser Forum was created. This forum developed a set of guidelines that providers must adhere to in order to issue a new type of certificate, the Extended Validation, or EV, certificate. Audits are performed on an annual basis to ensure that providers continue to adhere to the guidelines. The end result is a certificate with special properties. When a browser visits a site that uses an EV certificate, the URL bar, or part of the URL bar, turns green and displays the name of the company that owns the certificate. The purpose is to give users a quick, at-a-glance way to validate a site.

To a certain degree, I agree that these certificates provide a slight enhancement of security. However, I think this is more security theater than actual security. At its core, an EV certificate offers no better security than a self-signed certificate. The “value” lies in the vetting process a site has to go through in order to obtain such a certificate. It also relies on users being trained to recognize the green bar. Unfortunately, most of the training I’ve seen in this regard seems to teach users that a green URL bar instantly means they can trust the site with no further checking. I feel this is absolutely the wrong message to send. Users should be taught to verify website addresses as well as SSL credentials.

Keeping our information private and secure goes way beyond the conversation between the browser and the server, however. Both before information is sent and after it is received, it is available in plain-text form. If an attacker can infiltrate either end of the conversation, they can potentially retrieve this information. At the user’s end, security software such as anti-virus, anti-spyware, and a firewall can be installed to protect the user. However, the user has absolutely no control over the server end.

To the user, the server is a mystery. The user trusts the administrator to keep their information safe and secure, but has no way of determining whether or not it is. Servers are protected in much the same way a user’s computer is. Firewalls are typically the main defense against intruders, though server firewalls are typically more advanced than those used on end-user computers. Data on the server can be stored using encryption, so even if a server is compromised, the data cannot be accessed.
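Encrypting stored data can be as simple as this sketch, which uses the third-party cryptography package (an assumption; any authenticated cipher would do):

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    key = Fernet.generate_key()  # in practice, kept well away from the data store
    box = Fernet(key)

    ciphertext = box.encrypt(b"account 12345: balance 678.90")
    # An attacker who copies the stored ciphertext learns nothing without the key.
    print(box.decrypt(ciphertext))  # b'account 12345: balance 678.90'

The catch, of course, is key management: if the key sits next to the data, the encryption buys little.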

Security on the Internet is a full-time job, both for the end user as well as the server administrator. Properly done, however, our data can be kept secure and private. All it takes is some due diligence and a little education.

 

Tis The Season…

…to be charitable.

Christmas is right around the corner, only a few weeks away! Time really flies. So, if you’re wondering what to get me for Christmas, look no further! I’ll tell you.

Child’s Play.

That’s all. Seriously! That’s it.

Child’s Play is a charity started by the guys from Penny Arcade. Not content with the bad rap that gamers tend to get, they set out to prove that not all gamers are bad. To that end, they have created a charity that has been growing every year. Money donated to Child’s Play is used to purchase games, toys, movies, and more for sick children located at hospitals in the US, Canada, and Europe. Christmas for these kids can be a bit light given the cost of medical care and the strain on their families.

Here, Gabe from Penny Arcade can explain it better:

If you are like me, every time you see an article like this one, where the author claims that video games are training our nation’s youth to kill, you get angry. The media seems intent on perpetuating the myth that gamers are ticking time bombs just waiting to go off. I know for a fact that gamers are good people. I have had the opportunity on multiple occasions to meet hundreds of you at conventions all over the country. We are just regular people who happen to love video games. With that in mind we have put together a little something we like to call “Child’s Play”. Penny Arcade is working with the Seattle Children’s Hospital and Amazon.com to make this Christmas really special for a lot of very sick kids. With the help of the Children’s Hospital we have created an Amazon Wish List for the kids. It’s full of video games, movies and toys. Some of these kids are in pretty bad shape and just having a Game Boy would really raise their spirits.

Please take some time to browse the Wish List. Maybe all you can afford is a package of batteries or maybe you want to go in with your entire office and get the kids a GameCube. Every single contribution will help out the Children’s Hospital and the 190,000 kids they treat each year.

All the toys and games will be delivered to us and we will in turn deliver them to the Children’s Hospital. As soon as the toys start arriving I’ll set up a web site and post as many pictures as I can. We will be making a trip over to one of the hospitals next week and we’ll bring you back stories from some of the kids along with more pictures.

Penny Arcade has a readership of something like 4.5 million gamers across the world. We are arguably the largest community of gamers on the internet. The important word there being community. This isn’t IGN, this isn’t Gamespy, we are not a faceless corporation, you are not just a number tracked by a database and then relayed to hungry advertisers. You guys have proven yourselves to be a powerful force when stirred into action. Here is your opportunity to use that power to do some real good.

Let’s give these kids the Christmas that they deserve and let’s give the newspapers a different kind of story to write about gamers.
-Gabe out

That post originally appeared back in 2003 and more information about the start of Child’s Play can be found on their About page.

So that’s it. That’s all I want. Show these kids that even in the darkest of times, there is a ray of hope. Give them the gift of fun and distraction. You’ll be happy you did.

 

“Educate to Innovate”

About two weeks ago, the President gave a speech about a new program called “Educate to Innovate.” The program aims to improve education in Science, Technology, Engineering, and Mathematics, or STEM. At the end of his speech, students from Oakton High School demonstrated their “Cougar Cannon,” a robot designed to scoop up and throw “moon rocks.” A video of the speech, and the demonstration, is below.

“As President, I believe that robotics can inspire young people to pursue science and engineering. And I also want to keep an eye on those robots in case they try anything.”


As a lover of technology, I find it wonderful that the President is moving in this direction. I wrote, not too long ago, about my disappointment with our current educational system. When I was in school, there were always extra subjects we could engage in to expand our knowledge. In fact, the high school I attended was set up similarly to a college, requiring that a number of extra credits, beyond the core classes, be taken. Often these were foreign languages or some form of a shop class. Fortunately for me, the school also offered classes in programming and electronics.

I was invited back to the school by my former electronics teacher a few years after I graduated. The electronics program had expanded somewhat and they were involved in a program called FIRST Robotics, developed by Dean Kamen. Unfortunately, I had moved out of the area, so my involvement was extremely limited, but I did enjoy working with the students. The FIRST program is an excellent way to engage competitiveness along with education. Adults get to assist the students with the building and programming of the robot, guiding them along the process. Some of the design work was simply outstanding, and solutions to problems were truly intuitive.

One of the first “Educate to Innovate” projects is called “National Lab Day.” National Lab Day is a program designed to bring students, educators, and volunteers together to learn and have fun. Local communities, called “hubs,” are encouraged to meet regularly throughout the year. Each year, communities will gather to show off what they have learned and created. Labs range from computer science to biology, geology to physics, and more. In short, this sounds like an exciting project, one that I have signed up for as a volunteer.

I’m excited to see education become a priority once again. Seeing what my children learn in school is very disappointing at times. Sure, they’re younger and I know that basic skills are necessary, but it seems they are learning at a much slower pace than when I was in school. I don’t want to see them struggle later in life because they didn’t get the education they need and deserve. I encourage you to help out where you can, volunteer for National Lab Day, or find another educational program you can participate in. Never stop learning!

 

Gaming Legend

I ran across an article on Gamasutra a few months ago, and I’ve had it in my list of things to write about since then. I decided to finally get to writing about it today.

Scott Miller is the founder of Apogee Software. Apogee and its sister company, 3D Realms, are the makers of some of the greatest games I’ve played. I grew up with these guys!

If we travel back a few years, back to the BBS days, there was a rather well-known BBS called Software Creations. I fondly remember dialing in weekly to check on the latest Apogee releases. Of course, I also remember, less fondly, getting in a helluva lot of trouble for running up the phone bill too. But, in the end, I think it was worth it. Apogee made some of the best games of that time, and being the first on the virtual block with their latest creation was the stuff of legend.

But Apogee was more than just a game company. They helped spawn a PC gaming revolution. Before Apogee, game makers either sold their games commercially, or released them as shareware, hoping users who downloaded their games would send them a few bucks. Commercial games relied solely on marketing and flashy ads while shareware authors relied solely on faith.

Apogee can be credited with bringing shareware to the masses and kickstarting the PC gaming revolution. They broke their games into multiple parts and released the first part for free, radically changing the well-established shareware model. The free episode served as a fully functional demo, enough to get you hooked, and the rest of the game was sold as a commercial product. And so the episodic model was born. They were also responsible for helping kickstart one of the most well-known game development companies, id Software.

Apogee started in 1986 with ASCII-based games such as Beyond the Titanic and the Kroz series. From there they moved into 2D CGA/EGA games such as Crystal Caves, Bio Menace, and Duke Nukem, which would go on to become one of their most popular properties. Shortly after Apogee started doing business as 3D Realms in 1996, they released Duke Nukem 3D, arguably their greatest hit.

In the 20+ year history of Apogee and 3D Realms, they have released more than 70 games. Unfortunately, most of these releases came before Apogee entered the 3D age and formed 3D Realms, though most publishers have slowed their output considerably since then due to the big-budget games they create. More recently, 3D Realms has been working with external development teams.

3D Realms announced in May that it would be closing its doors, though they have since made announcements regarding an overhaul of their online store, as well as the release of a Prey-based iPhone game. Both of these announcements came roughly a month after the announcement of their imminent closing. According to Scott Miller, however, only the internal development team was released, and 3D Realms will continue to do business. Miller claims there are still several titles in development by external teams.

Even today, Apogee continues to move in new directions. Scott Miller helped form a new game company, the Radar Group, which aims to take new ideas and shape them into marketable properties for games, television, and movies, taking gaming in a whole new direction.

The Apogee name has been licensed to a new group of developers who aim to revive the label. According to Scott Miller, the new Apogee group is working on a Duke Nukem Trilogy and an up-to-date version of Rise of the Triad. RotT was originally intended as a Wolfenstein 3D sequel until id Software pulled the plug.

While most of the gaming world has moved on to bigger titles, and while Apogee’s role seems to have diminished somewhat, it’s good to remember where it all started. Apogee helped make PC gaming what it is today. And who knows, perhaps they have something else up their sleeve.

 

Reign of the Fallen

Fan videos tend to be low-budget and that usually shows through in the end product. Don’t get me wrong, there are some incredible fan-made creations out there. Every once in a while, a fan-film comes along that just fills you with awe. This is one of those films.

Star Wars – Reign of the Fallen from Darth Anonymous on Vimeo.

Unfortunately, there doesn’t seem to be an HD version available, so you’ll have to deal with Vimeo scaling it for you, or just watch the smaller version. Thanks to John Simpson (he with the famous beard), I stand corrected. You can get an HD version of the video, and even a DVD, from their official site. Best part is, it’s all free. Though, it doesn’t hurt to donate if you’d like to see more from these guys.

There’s also an article about the shooting of this film. Interestingly enough, this was shot in Central New Jersey, though you’d never be able to tell from the visuals.