Web Security

People use the web today for just about anything. We get our news from news sites and blogs, we play games, we view pictures, and more. Most of these activities are fairly innocuous and don’t require much in the way of security beyond typical anti-virus and anti-spyware protection. However, there are activities we engage in on the web where we want to keep our information private and secure. For instance, when we interact with our bank, we’d like to keep those transactions private. The same goes for other activities such as stock transfers and shopping.

And it’s not enough to merely keep it private; we also want to ensure that no one can inject anything into our sessions. Online banking wouldn’t be very useful if someone could inject phantom transfers into your session, draining your bank account. Likewise, having someone add items to your order, or change the delivery address, wouldn’t be very welcome.

Fortunately, Netscape developed a protocol to handle browser-to-server security called Secure Sockets Layer, or SSL. SSL was first released to the public in 1995 and updated a year later after several security flaws were uncovered. In 1999, SSL became TLS, or Transport Layer Security. TLS has been updated twice since its inception and currently stands at version 1.2.

The purpose of SSL/TLS is pretty simple and straightforward, though the implementation details are enough to give anyone a headache. In short, when you connect to a remote site with your browser, the browser and web server negotiate a secure connection. Once established, everything you send back and forth is encrypted before it leaves one end and decrypted only at the other. Only the endpoints have the information required to encrypt and decrypt, so the communication remains secure.
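If you’re curious what that negotiation looks like in practice, here’s a minimal sketch using Python’s built-in socket and ssl modules. The host name is just a placeholder, not a specific site I’m recommending:

```python
import socket
import ssl

# A client-side TLS context with sane defaults: certificate
# verification and host name checking are both enabled.
context = ssl.create_default_context()

hostname = "www.example.com"  # placeholder host

with socket.create_connection((hostname, 443)) as raw_sock:
    # wrap_socket performs the TLS handshake: both ends agree on a
    # protocol version and cipher suite and exchange key material.
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print("Negotiated:", tls_sock.version(), tls_sock.cipher())
        # From here on, anything written to tls_sock is encrypted before
        # it leaves this machine and decrypted only at the server.
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: " + hostname.encode() + b"\r\n\r\n")
        print(tls_sock.recv(256))
```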

What about man-in-the-middle attacks? What if you were able to insert yourself between the browser and the server and then pass the messages back and forth? The browser would negotiate with you, and then you’d negotiate with the server. That way, you would have unencrypted access to the bits before you passed them on. That would work, wouldn’t it? Well, yes. Sort of. If the end user allowed it, or was tricked into allowing it.

When a secure connection is negotiated between a browser and a server, the server presents the browser with a certificate. The certificate identifies the server to the browser. While anyone can create a certificate, certificates can be signed by others to “prove” their authenticity. When the server is set up, the administrator requests a certificate from a well-known third party and uses that certificate to identify the server. When the browser receives the certificate, it can verify that the certificate is authentic by contacting the signer and asking. If the certificate is not authentic, has expired, or was not signed by a well-known third party, the user is presented with an error dialog explaining the problem.
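As a rough illustration of the same idea, again in Python with a placeholder host name, you can pull the certificate a server presents and see who signed it and when it expires. The verification itself happens automatically during the handshake when the default settings are used:

```python
import socket
import ssl

hostname = "www.example.com"  # placeholder host

# The default context checks the certificate chain against the trusted
# root certificates installed on this system and confirms the
# certificate actually matches the host name we asked for.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        cert = tls_sock.getpeercert()
        print("Subject:", cert["subject"])
        print("Issuer: ", cert["issuer"])    # the third party that signed it
        print("Expires:", cert["notAfter"])
        # An invalid, expired, or untrusted certificate would have raised
        # ssl.SSLCertVerificationError before we ever got this far.
```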

Unfortunately, the dialog presented isn’t always helpful and typically requires some knowledge of SSL/TLS to understand. Most browser vendors have “corrected” this by adding lots of red text, exclamation marks, and other graphics to indicate that something bad has happened. The problem here is that these messages are intended to be warnings. There are instances where certificates not signed by third parties are completely acceptable. In fact, it’s possible for you, as a user, to put together a valid certificate signing system that will provide users the exact same protections a third-party certificate provides. I’ll post a how-to a little later on the exact process. You can also use a self-signed certificate, one that has no root, and still provide the same level of encryption.
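That how-to will cover generating the certificates themselves. As a quick, hedged sketch of the client side, all it takes to trust your own root (or a self-signed certificate) is to point the client at it instead of the bundled third-party roots; the file name below is just a placeholder for wherever your root certificate lives:

```python
import ssl

# Start from the same secure defaults as before, but trust only our own
# root certificate instead of the system's bundled third-party roots.
# "my-root-ca.pem" is a placeholder path to your private CA certificate
# (or to the self-signed certificate itself).
context = ssl.create_default_context(cafile="my-root-ca.pem")

# Connections wrapped with this context are encrypted exactly the same
# way as ones validated by a commercial third-party certificate; the
# only difference is whose signature the client has been told to trust.
```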

So if we can provide the same protection using our own signed or self-signed certificates, why pay a third party to sign certificates for us? Well, there are a couple of reasons, though they’ve somewhat faded with time. First and foremost, the major third-party signers have their root certificates, used for validation, included in all of the major web browsers. In other words, you don’t need to install these certificates; they’re already there. And since most users don’t know how SSL works, let alone how to install a certificate, this makes third-party certificates very appealing. This is the one feature of third-party certificates that still makes sense.

Another reason is that your information is validated by the third-party provider. Or, at least, that’s how it used to be. Perhaps some providers still do this, but since there is no standard across the board, SSL certificates as a de facto identity check are broken. Some providers offer differing levels of validation for normal certificates, but there are no indicators within the browser to identify the level of validation. As a result, determining whether or not to trust a site falls completely on the shoulders of the user.

In response to this, an organization called the Certificate Authority/Browser Forum was created. This forum developed a set of guidelines that providers must adhere to in order to issue a new type of certificate, the Extended Validation, or EV, certificate. Audits are performed annually to ensure that providers continue to adhere to the guidelines. The end result is a certificate with special properties. When a browser visits a site that uses an EV certificate, the URL bar, or part of the URL bar, turns green and displays the name of the company that owns the certificate. The purpose is to give users a quick, at-a-glance way to validate a site.

To a certain degree, I agree that these certificates provide a slight enhancement of security. However, I think this is more security theater than actual security. At its core, an EV certificate offers no better security than a self-signed certificate. The “value” lies in the vetting process a site has to go through in order to obtain such a certificate. It also relies on users being trained to recognize the green bar. Unfortunately, most of the training I’ve seen in this regard seems to teach users that a green URL bar instantly means they can trust the site with no further checking. I feel this is absolutely the wrong message to send. Users should be taught to verify website addresses as well as SSL credentials.

Keeping our information private and secure goes way beyond the conversation between the browser and the server, however. Both before information is sent and after it is received, it exists in some plain-text form. If an attacker can infiltrate either end of the conversation, they can potentially retrieve this information. At the user’s end, security software such as anti-virus, anti-spyware, and a firewall can be installed as protection. However, the user has absolutely no control over the server end.

To the user, the server is a mystery. The user trusts the administrator to keep their information safe and secure, but has no way of determining whether or not it actually is. Servers are protected in much the same way a user’s computer is. Firewalls are typically the main defense against intruders, though server firewalls are usually more advanced than those used on end-user computers. Data on the server can also be stored encrypted, so even if a server is compromised, the data cannot be accessed.
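As one hedged illustration of that last point, and glossing over the genuinely hard part of where the key actually lives, a server-side application can encrypt sensitive values before they ever reach storage. This sketch uses the Fernet recipe from the third-party Python cryptography package; the data is obviously made up:

```python
from cryptography.fernet import Fernet

# In real life the key would come from a key-management system or a
# protected file, never from the source code or the database itself.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt a sensitive value before writing it to storage...
token = f.encrypt(b"account number 12345678")

# ...and decrypt it only when an authorized part of the application needs it.
print(f.decrypt(token))
```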

Security on the Internet is a full-time job, both for the end user and for the server administrator. Properly done, however, our data can be kept secure and private. All it takes is some due diligence and a little education.
