Privacy Redux

I wrote a short piece on privacy about 2 weeks ago. A few things were pointed out to me about that piece that I want to address. My thanks to Lauren Weinstein of the People for Internet Responsibility and the Network Neutrality Squad for his comments and direction.

Lauren pointed out that the Constitution has no explicit provision for privacy. Instead, it provides a number of guarantees of personal security. Specifically, it offers protections such as the guarantee against unreasonable search and seizure, and the guarantee that a person cannot be compelled to be a witness against themselves.

The Supreme Court has, over time, upheld these guarantees and extended them to provide additional privacy protections for all. The Ninth Amendment, specifically, has been used to affirm that the enumeration of rights in the Constitution does not deny or disparage other rights retained by the people.

To be sure, the issue of privacy is a tangled one, and opinions abound. Interpretations of the Constitution will change over time. It is up to each and every one of us to ensure that our rights stay intact and to fight when necessary to uphold those rights.

 

Really Awesome New Cisco confIg Differ

Configuration management is pretty important, but often overlooked. It’s typically easy enough to handle configurations for servers, since you have access to standard scripting tools as well as cron. Hardware devices such as switches and routers are a bit harder to handle, though, as automating backups of their configs can be daunting, at best.

Several years ago, I took the time to write a fairly comprehensive configuration backup system for the company I was working for. It handled Cisco routers and switches, Fore Systems/Marconi ASX ATM switches, Redback SMS aggregators, and a few other odds and ends. Unfortunately, it was written specifically for that company and not something easily converted for general use.

Fortunately, there’s a robust open source alternative called RANCID. The Really Awesome New Cisco confIg Differ, RANCID, is a set of Perl scripts designed to automate configuration retrieval from a host of devices, including Cisco, Juniper, Redback, ADC, HP, and more. Additionally, since most of the framework is already there, you can extend it as needed to support additional devices.

RANCID has a few interesting features which make life much easier as a network admin. First, when it retrieves the configuration from a device, it checks it in to either a CVS or SVN repository. This gives you the ability to see changes between revisions, as well as the ability to retrieve an old revision of a config from just about any point in time. Additionally, RANCID emails you a list of the changes between the current and previous revision of a configuration. This way you can keep an eye on your equipment and see an alert whenever something changes, which is very useful for catching mistakes, whether yours or someone else’s.

Note: RANCID handles text-based configurations. Binary configurations are a whole different story. While binary configs can be placed in an SVN repository, getting emailed about changes becomes a problem. It’s possible to handle binary configs, though I do not believe RANCID has this capability.

Setup of RANCID is pretty straightforward. You can either install straight from source, or use a pre-packaged RPM. For this short tutorial, I’ll be using an RPM-based installation. The source RPM I’m using can be found here. It is assumed that you can either rebuild the RPM via the rpmbuild utility or install the software from source.

After the software is installed, there are a few steps required to set up the software. First, I would recommend editing the rancid.conf file. I find making the following modifications to be a good first step:

RCSSYS=svn; export RCSSYS
* Change RCSSYS from cvs to svn. I find SVN to be the superior revision control system. Your mileage may vary, but I’m going to assume you’re using SVN for this tutorial.

FILTER_PWDS=ALL; export FILTER_PWDS
NOCOMMSTR=YES; export NOCOMMSTR
* Uncommenting these and turning them on ensures that passwords are not stored on your server. This is a security consideration, as these files are stored in cleartext.

OLDTIME=4; export OLDTIME
* This setting tells RANCID how long a device can be unreachable before alerting you to the problem. The default is 24 hours. Depending on how often you run RANCID, you may want to change this option.

LIST_OF_GROUPS="routers switches firewalls"; export LIST_OF_GROUPS
* This is a list of names you’ll use to identify groups of devices. These names are arbitrary, so Fred, Bob, and George are fine. However, I would encourage you to use something meaningful.

The next step is to create the CVS/SVN repositories you’ll be using. This couldn’t be easier: switch to the rancid user, then run rancid-cvs. You’ll see output similar to the following:

-bash-3.2$ rancid-cvs
Committed revision 1.
Checked out revision 1.
A configs
Adding configs
Committed revision 2.
A router.db
Adding router.db
Transmitting file data .
Committed revision 3.
Committed revision 4.
Checked out revision 4.
A configs
Adding configs
Committed revision 5.
A router.db
Adding router.db
Transmitting file data .
Committed revision 6.
-bash-3.2$

That’s it; your repositories are created. All that’s left is to set up the user credentials rancid will use to access the devices, tell rancid which devices to contact, and finally, tell it where to send email. Again, this is quite straightforward.

User credentials are stored in the .cloginrc file located in the rancid home directory. This file is quite detailed, with explanations of the various configuration options. In short, for most Cisco devices, you’ll want something like this:

add user * <username>
add password * <login password> <enable password>
add method * ssh

This tells the system to use the given username and passwords for accessing all devices in rancid via ssh. You can specify overrides by adding additional lines above these, replacing the * with the device name.
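For example, a hypothetical per-device override (the hostname and username here are placeholders, not taken from the RANCID docs) would sit above the wildcard lines:

```
# hypothetical override for a single device; must appear before the * lines
add user router1.example.com backupuser
add password router1.example.com <login password> <enable password>
```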

Next, tell rancid which devices to contact. As the rancid user, switch to the appropriate repository directory. For instance, if we’re adding a router, switch to ~rancid/routers and edit the router.db file. Note: this file is always called router.db, regardless of the repository you are in. Each line of the file consists of three colon-separated fields: the hostname of the device, the type of device, and either up or down, depending on the device’s status. If you remove a device from this file, its configuration is removed from the repository, so be careful.

router.example.com:cisco:up
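A router.db covering several devices might look like the following (hostnames are placeholders). Marking a device down tells rancid to skip polling it without removing its stored configuration:

```
router1.example.com:cisco:up
router2.example.com:cisco:up
old-router.example.com:cisco:down
```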

Finally, set up the mailer addresses for receiving rancid mail. These consist of aliases on the local machine. If you’re using sendmail, edit the /etc/aliases file and add the following:

rancid-<group>: <email target>
rancid-admin-<group>: <email target>

Two aliases are needed for each group, where the groups are the names used for the repositories. In our previous example, we have three groups: routers, switches, and firewalls. So we set up two aliases for each, sending the results to the appropriate parties. The standard rancid-<group> alias is used for sending config diffs, while the rancid-admin-<group> alias is used for alerts about program problems, such as being unable to contact a device.
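Putting it together for the three example groups, a hypothetical /etc/aliases block (the address is a placeholder) would look like this:

```
rancid-routers: netadmin@example.com
rancid-admin-routers: netadmin@example.com
rancid-switches: netadmin@example.com
rancid-admin-switches: netadmin@example.com
rancid-firewalls: netadmin@example.com
rancid-admin-firewalls: netadmin@example.com
```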

Make sure you run newaliases when you’re done editing the aliases file.

Once these are all set up, we can run a test of rancid. As the rancid user, run rancid-run. This will run through all of the devices you have identified and begin retrieving configurations. Assuming all went well, you should receive notifications via email about the new configurations identified.

If you have successfully run rancid and retrieved configurations, it’s time to set up the cron job to have this run automagically. Merely edit the crontab file for rancid and add something similar to the following:

# run config differ 11 minutes after midnight, 2am, 4am, etc.
11 0-23/2 * * * /usr/bin/rancid-run
# clean out config differ logs
50 23 * * * /usr/bin/find /var/rancid/logs -type f -mtime +2 -exec rm {} \;

Offsetting the times a bit is a good practice, just to ensure everything doesn’t run at once and bog down the system. The second entry cleans up the rancid log files, removing anything older than 2 days.

And that’s it! You’re well on your way to being a better admin. Now to finish those other million or so “great ideas” ….

 

The Privacy Problem

Live free or die: Death is not the worst of evils.
General John Stark

Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God! I know not what course others may take; but as for me, give me liberty or give me death!
Patrick Henry

Privacy, n.
1. The state or condition of being alone, undisturbed, or free from public attention, as a matter of choice or right; seclusion; freedom from interference or intrusion.
2. The state of being privy to some act
3. a. Absence or avoidance of publicity or display; secrecy, concealment, discretion; protection from public knowledge or availability.
3. b. The keeping of a secret; reticence.
Oxford English Dictionary

Privacy is often taken for granted. When the US Constitution was drafted, the founding fathers made sure to put in provisions to guarantee the privacy of the citizens they would govern. Most scholars agree that their intention was to prevent government intrusion in private lives and activities. They were very forward-thinking, trying to ensure this protection would continue indefinitely into the future. Unfortunately, even the most forward-thinking, well-intentioned individual can’t cover every scenario the future holds.

Since that fateful day in 1787, a war has raged between those advocating absolute privacy and those advocating reasonable intrusion for the sake of security. At the extreme edge of the argument are the non-consequentialists who believe that privacy should be absolute. They believe that privacy is non-negotiable and that the loss of privacy is akin to slavery. A common argument is that giving up privacy merely encourages additional loss. In other words, if you allow your privacy to be compromised once, then those that violate it will expect to be able to violate it again.

At the other edge are those that believe that privacy is irrelevant in the face of potential evil. This is also a non-consequentialist view. Individuals with this view tend to argue that if you have something to hide, then you are obviously guilty of something.

Somewhere in the middle are the consequentialists who believe that privacy is essential to a point. Violation of privacy should be allowed when the benefit of doing so outweighs the benefit of keeping something private. In other words, if disclosing a secret may save a life, or prevent an innocent person from going to jail, then a violation of privacy should be allowed.

The right to privacy has been fought over for years. In more recent years, technological advances have brought to light many of the problems with absolute privacy, and at the same time, have highlighted the need for some transparency. Technology has benefits for both the innocent and the criminal; it makes no distinction between the two, offering the same access to information to both.

New technologies have allowed communication over long distances, allowing criminals to coordinate criminal activities without the need to gather. Technology has brought devastating weaponry to the average citizen. Terrorists can use an Internet search engine to learn how to build bombs, plan attacks, and communicate with relative privacy. Common tools can be used to replicate identification papers, allowing criminals access to secure areas. The Internet can be used to obtain access to remote systems without permission.

Technology can also be used in positive ways. Mapping data can be used to optimize travel, find new places, and get you home when you’re lost. Online stores can be used to conveniently shop from your home, or find products you normally wouldn’t have access to. Social networking can be used to keep in touch with friends and relatives, and to form new friendships with strangers you may never have come in contact with otherwise. Wikipedia can be used for research and updated by complete strangers to spread knowledge. Companies can stay in contact with customers, alerting them of new products, updates to existing ones, or even alert them to potential problems with something they previously purchased.

In the last ten or so years, privacy in the US has been “under attack.” These so-called attacks come from many different sources. Governmental agencies seek access to more and more private information in order to combat terrorism and other criminal activities. Private organizations seek to obtain private information to identify new customers, customize advertisements, prevent fraud, etc. Technology has enabled these organizations to obtain this data in a variety of ways, often unbeknownst to the average user.

When was the last time you went to the airport and waited for someone to arrive at the gate? How about escorting someone to the gate before their flight? As recently as 20 years ago, it was possible to do both. Since then, however, security measures have been put in place to prevent non-ticketed individuals from going beyond security checkpoints. Since the 9/11 terrorist attacks, security has been enhanced to include random searches, bomb sniffing, pat downs, full-body scanners, and more. In fact, the Transportation Security Administration (TSA) started random screening at the gate in 2008. Even more recently, the TSA has authorized random swabbing of passenger hands to detect explosive residue. While these measures arguably enhance security, they do so at the expense of the private individual. Many travelers feel violated by the process, even arguing that they are presumed guilty and must prove their innocence every time they fly.

Traditionally, any criminal proceeding is conducted with a presumption of innocence: a defendant is considered innocent of a crime unless and until proven guilty. In the airport example above, passengers are screened with what can be considered a presumption of guilt. If you refuse to be screened, you are barred from flying if you’re lucky, or taken in for additional questioning and potentially jailed if you’re not. Of course, individuals are not granted the right to fly, but rather offered the opportunity at the expense of giving up some privacy. It’s when these restrictions are applied to daily life, without the consent of the individual, that more serious problems arise.

Each and every day, the government gathers information about its citizens. This information is generally available to the public, although access is not necessarily easy. How this information is used, however, is often a source of criticism by privacy advocates. Massive databases of information have been built with algorithms digging through the data looking for patterns. If these patterns match, the individuals to whom the data belongs can be subject to additional scrutiny. This “fishing” for wrongdoing is often at the crux of the privacy argument. Generally speaking, if you look hard enough, and you gather enough data, you can find wrongdoing. More often, however, false positives pop up and individuals are subjected to additional scrutiny without warrant. In some cases, individuals can be wrongly detained.

Many privacy opponents argue that an innocent person has nothing to hide. This argument, however, is a fallacy. Professor Daniel Solove wrote an essay explaining why: he argues that the “nothing to hide” argument is essentially hollow. Privacy is an inherently individual preference. Without knowing the full extent of how information will be used, it is impossible to say that revealing everything will have no ill effects, even assuming the individual is innocent of wrongdoing. For instance, data collected by the government may not identify you as a criminal, but it may result in embarrassment or feelings of exposure. What one person may consider a non-issue, others may see as evil or wrong.

These arguments extend beyond government surveillance and into the private sector as well. Companies collect information about consumers at an alarming rate. Information entered into surveys, statistics collected from websites, travel information collected from toll booths, and more can be used to profile individuals. This information is made available, usually at a cost, to other companies or even individuals. This information isn’t always kept secure, either. Criminals often access remote systems, obtaining credit card and social security numbers. Stalkers and pedophiles use social networking sites to follow their victims. Personal information posted on public sites can find its way into credit reports and is even used by some businesses to justify firing employees.

Privacy laws have been put in place to prevent such abuses, but much information is already out there. Have you put your name into a search engine lately? Give it a try; you may be surprised by what you can find out about yourself. These are public records that can be accessed by anyone. Financial and real estate information is commonly available to the public, accessible to those who know how to look for it. Criminal records and court proceedings are now published on the web, allowing anyone a chance to access them.

Whenever you access a website, check out a book from the library, or chat with a friend in email, you run the risk of making that information available to people you don’t want to have it. In recent years, it has been common for potential employers to use the Internet to obtain background information on a potential employee. In some cases, embarrassing information can be uncovered, casting a negative light on an individual. Teachers have been fired because of pictures they posted, innocently, on their profile pages. Are you aware of how the information you publish on the Internet can be used against you?

There is no clear answer on what should and should not be kept private. Likewise, there is no clear answer on what private data the government and private companies should have access to. It is up to you, as an individual, to make a conscious choice as to what you make public. In an ever-evolving world, the decisions you make today can and will have an impact on what may happen in the future. What you may think of as an innocent act today can potentially be used against you in the future. It’s up to you to fight for your privacy, both from the government and from the companies you interact with. Privacy and private data are being used in new, interesting, and potentially harmful ways every day. Be sure you’re aware of how your data can be used before you provide it.

 

The Authentication Problem

Authentication is a tricky problem. The goal of authentication is to verify the identity of the person, device, or machine attempting to gain access to a protected system. There are many factors to consider when designing an authentication system. Here is a brief sampling:

  • How much security is necessary?
  • Do we require a username?
  • How strong should the password be?
  • Do we need multi-factor authentication?

The need for authentication typically means that the data being accessed is sensitive in some way. This can be something as simple as a todo list or a user’s email, or as important as banking or top secret information. It can also mean that the data being accessed is valuable in some way such as a site that requires a subscription. So, the security necessary is dependent on the data being protected.

Usually, authentication systems require a username and some form of a password. For more secure systems, multi-factor authentication is used. Multi-factor authentication means that multiple pieces of information are used to authenticate the user. These vary depending on the security required. In the United States, federal regulators recognize the following factors:

  • Something the user knows (e.g., password, PIN)
  • Something the user has (e.g., ATM card, smart card)
  • Something the user is (e.g., biometric characteristic such as a fingerprint)

A username and a password is an example of a single-factor authentication mechanism. When you use an ATM, you supply it with a card and then enter a PIN. This is an example of two-factor authentication.
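The factor arithmetic can be sketched in a few lines of Python (the category table and function names are my own, purely for illustration):

```python
# Count the distinct factor categories a login scheme uses. A scheme is
# multi-factor only if it spans more than one category.
FACTOR_CATEGORIES = {
    "password": "knowledge",            # something the user knows
    "pin": "knowledge",
    "challenge_question": "knowledge",
    "atm_card": "possession",           # something the user has
    "smart_card": "possession",
    "fingerprint": "inherence",         # something the user is
}

def factor_count(credentials):
    """Return the number of distinct factor categories used."""
    return len({FACTOR_CATEGORIES[c] for c in credentials})

def is_multi_factor(credentials):
    return factor_count(credentials) >= 2

# A password plus challenge questions: two pieces of information, one factor.
print(is_multi_factor(["password", "challenge_question"]))  # False
# An ATM: card (possession) plus PIN (knowledge) -- two factors.
print(is_multi_factor(["atm_card", "pin"]))                 # True
```

Two pieces of information only count as two factors when they come from two different categories.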

The U.S. Federal Financial Institutions Examination Council (FFIEC) recommends the use of multi-factor authentication for financial institutions. Unfortunately, most of the authentication systems currently in place are still single-factor systems, despite asking for several pieces of information. For example, when you log into your bank’s system, you use a username and password. Once those are accepted, you are often asked for additional information, such as answers to challenge questions. These are all examples of things the user knows, and thus count as only a single factor.

Some institutions have begun using additional factors to identify the user such as a one-time password sent to an email address or cell phone. This can be cumbersome, however, as it can often take additional time to receive this information. To combat this, browser cookies are used after the first successful authentication. After the user logs in for the first time, they are offered a chance to have the system place a “secure token” on their system. Subsequent logins use this secure token in addition to the username and password to authenticate the user. This is arguably a second factor as it’s something the user has, as opposed to something they know. On the other hand, it is extremely easy to duplicate or steal cookies.
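The “secure token” approach can be sketched with an HMAC-signed cookie value. This is a rough illustration under my own assumptions (key handling, token layout); it is not how any particular institution implements it:

```python
import hmac
import hashlib

# Server-side secret; never sent to the client. A real deployment would
# manage this key carefully rather than hard-coding it.
SERVER_KEY = b"example server-side secret"

def issue_device_token(username: str) -> str:
    """Create the token stored in a browser cookie after the first login."""
    sig = hmac.new(SERVER_KEY, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def verify_device_token(token: str, username: str) -> bool:
    """Check the cookie presented on a later login attempt."""
    try:
        name, sig = token.split(":", 1)
    except ValueError:
        return False  # malformed cookie
    expected = hmac.new(SERVER_KEY, username.encode(), hashlib.sha256).hexdigest()
    return name == username and hmac.compare_digest(sig, expected)

token = issue_device_token("alice")
print(verify_device_token(token, "alice"))    # True
print(verify_device_token(token, "mallory"))  # False
```

The weakness noted above still applies: the signature stops forgery, but anyone who copies the cookie copies the factor.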

There are other ways that two-factor authentication can be circumvented as well. Since most institutions only use a single communication mechanism, hijacking that communication medium can result in a security breach.

Man-in-the-middle attacks use fake websites to lure users in and steal their authentication information. This can happen transparently to the user, with the attacker forwarding the information to the actual institution and letting the user continue to access the system. More sophisticated attacks have the user “fail” authentication on the first attempt and let them in on subsequent tries. The attacker can then use the credentials from the first attempt to gain access themselves.

Another method is the use of Trojans. If a user can be tricked into installing malicious software on their system, an attacker can ride along on the user’s session, injecting their own transactions into the communications channel.

Defending against these attacks is not easy and may be impossible in many situations. For instance, requiring a second method of communication for authentication may help to authenticate the user, but if an attacker can hijack the main communication path, they can still obtain access to the user’s current session. Use of encryption and proper training of users can help mitigate these types of attacks, but ultimately, any system using a public communication mechanism is susceptible to hijacking.

Session Security

Once authentication is complete, session security comes into play. Why go through all the trouble of authenticating the user if you’re not protecting the data they’re accessing? Assuming that the data itself is protected, we need to focus on protecting the data being transferred to and from the user. Additionally, we need to protect the user’s session itself.

Session hijacking is the term for stealing a user’s session information in order to gain access to the information the user is accessing. There are four primary methods of session hijacking:

  • Session Fixation
  • Session Sidejacking
  • Physical Access
  • Cross-site Scripting

Physical access is pretty straightforward. This involves an attacker directly accessing the user’s computer terminal and copying the session data. Session data can be something as simple as an alphanumeric token displayed right in the URL of the site being accessed. Or, it can be a piece of data on the machine such as a browser cookie.

Session fixation refers to a method by which an attacker tricks a user into using a predetermined session ID. Once the user authenticates, the attacker gains access by using the same session ID. The system recognizes the session ID as an authenticated session and lets the attacker in without further verification.

Session Sidejacking involves an attacker intercepting the traffic between a user and the system. If a session is not encrypted, the attacker can obtain the session ID or cookie used to identify the user’s session. Once this information is obtained, the attacker can use the same information to gain access to the user’s session.

Finally, cross-site scripting is when an attacker tricks the user’s browser into sending session information to the attacker. This can happen when a user accesses a website containing malicious code. For instance, an attacker can create a website with a special link to a well-known site such as a bank. The link contains additional code that, when run, sends the user’s authentication or session information to the attacker.

Encryption of the communications channel can mitigate some of these attack scenarios, but not all of them. Programmers should ensure that additional information is used to verify a user’s session. For instance, something as simple as verifying the user’s source IP address in addition to a session cookie is often enough to mitigate both physical access and session sidejacking. Not allowing a pre-defined session ID can prevent session fixation. And finally, proper coding can prevent cross-site scripting.
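Two of those mitigations, minting a fresh session ID at authentication time (which defeats fixation) and binding the session to the client’s source IP (which blunts sidejacking and copied cookies), can be sketched like this; all names here are illustrative:

```python
import secrets

sessions = {}  # session_id -> {"user": ..., "ip": ...}

def login(user: str, client_ip: str, presented_session_id=None) -> str:
    # Ignore any pre-existing (possibly attacker-chosen) session ID:
    # always mint a fresh, unguessable one at authentication time.
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = {"user": user, "ip": client_ip}
    return session_id

def authorize(session_id: str, client_ip: str):
    sess = sessions.get(session_id)
    if sess is None or sess["ip"] != client_ip:
        return None  # unknown session, or presented from a different host
    return sess["user"]

sid = login("alice", "198.51.100.7", presented_session_id="attacker-chosen")
print(sid != "attacker-chosen")        # True: the fixed ID was ignored
print(authorize(sid, "198.51.100.7"))  # alice
print(authorize(sid, "203.0.113.9"))   # None: stolen cookie, wrong source IP
```

IP binding is a blunt instrument (users behind changing proxies will be logged out), which is why it works best combined with the other checks above.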

Additionally, any session information stored on the remote system being accessed should be properly secured as well. Merely securing the data accessed isn’t enough if an attacker can access the remote system and steal session information.

Unauthentication

Finally, how and when should a user be unauthenticated? Unauthentication is often overlooked when designing a secure system. If the user fails to log out, then attacks such as session hijacking become easier. Unauthentication can be tricky, however. There are a number of factors to consider, such as:

  • How and when should a user’s session be closed?
  • Should a user’s session time out?
  • How long should the timer be?

Most unauthentication currently consists of a user’s session timing out. After a pre-determined period of inactivity, the system will log a user out, deleting their session. Depending on the situation, this can be incredibly disruptive. For example, if a user’s email system has a short time out, they run the risk of losing a long email they’ve been working on. Some systems can mitigate this by recording the user’s data prior to logging them out, making it available again upon login so the user doesn’t lose it. Regardless, the longer the time out, the less secure a session can be.
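The timing-out behavior itself is simple to model; here is a minimal sketch with an arbitrary 15-minute window (the names are mine, not from any framework):

```python
import time

TIMEOUT_SECONDS = 15 * 60      # arbitrary example value
last_activity = {}             # session_id -> timestamp of last request

def touch(session_id, now=None):
    """Record activity; call this on every request the user makes."""
    last_activity[session_id] = time.time() if now is None else now

def is_active(session_id, now=None):
    """A session idle longer than the timeout is treated as logged out."""
    now = time.time() if now is None else now
    seen = last_activity.get(session_id)
    return seen is not None and (now - seen) <= TIMEOUT_SECONDS

touch("abc", now=1000.0)
print(is_active("abc", now=1060.0))  # True: one minute idle
print(is_active("abc", now=4600.0))  # False: an hour idle
```

The trade-off from the paragraph above lives entirely in TIMEOUT_SECONDS: a longer window is friendlier to the user and kinder to an attacker.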

Other unauthentication mechanisms have been discussed as well. When a physical token such as a USB key is used, the user can be unauthenticated if the key is removed from the system. Or, a device with some sort of radio in it, such as bluetooth, can unauthenticate the user if it is removed from the proximity of the system. Unfortunately, users will likely end up leaving these devices behind, significantly reducing their effectiveness.

As with authentication, unauthentication methods can depend on the sensitivity of the data being protected. Ultimately, though, every system should have some form of automatic unauthentication.

Data security in general can be a difficult nut to crack. System designers are typically either very lax in their security design, often overlooking session security and unauthentication, or they can be very draconian, opting to make the system very secure at the expense of the user. Designing a user-friendly, but secure, system is difficult, at best.

 

The Third Category

“Is there room for a third category of device in the middle, something that’s between a laptop and smartphone?”

And with that, Steve Jobs, CEO of Apple, ushered in the iPad.

So what is the iPad, exactly? I’ve been seeing it referred to as merely a gigantic iPod Touch. But is there more to it than that? Is this thing just a glorified iPod, or is there something more?

On the surface, it truly is an oversized iPod Touch. It has the same basic layout as an iPod Touch with the home button at the bottom. It has a thick border around the screen where the user can hold the unit without interfering with the multitouch display.

The screen itself is an LCD using IPS technology. According to Wikipedia, IPS (In-Plane Switching) is a technology developed by Hitachi. It offers wide viewing angles and accurate color reproduction. The screen is backlit using LEDs, which offer longer battery life, more uniform backlighting, and a longer display lifespan.

Apple is introducing a total of six models, varying only in the size of the built-in flash storage and the presence of 3G connectivity. Storage comes in 16, 32, or 64 GB varieties. 3G access requires a data plan from a participating 3G provider, AT&T to start, and will entail a monthly fee. 3G access will also require the use of a micro-SIM card; AT&T is currently the only US provider using these cards. The base 16GB model will go for $499, while the 64GB 3G model will run you $829, plus a monthly data plan. As it stands now, however, the data plan is on a month-by-month basis, no contract required.

Ok, so with the standard descriptive details out of the way, what is this thing? Is it worth the money? What is the “killer feature,” if there is one?

On the surface, the iPad seems to be just a big iPod Touch, nothing more. In fact, the iPad runs an enhanced version of the iPhone OS, the same OS the iPod Touch runs. Apple claims that most of the existing apps in the iTunes App Store will run on the iPad, both in original size, as well as an enhanced mode that will allow the app to take up the entire screen.

Based on the demonstration that Steve Jobs gave, as well as various other reports, there’s more to this enhanced OS, though. For starters, it looks like there will be pop-out or drop-down menus, something the current iPhone OS does not have. Additionally, apps will be able to take advantage of file sharing, split screen views, custom fonts, and external displays.

One of the more touted features of the iPad was the inclusion of the iBook store. It seems that Apple wants a piece of the burgeoning eBook market and has decided to approach it just like they approached the music market. The problem here is that the iPad is still a backlit LCD screen at its core. Staring at a backlit display for long periods of time generally leads to headaches and/or eye strain. This is why eInk based units such as the Kindle or the Sony Reader do so well. It’s not the aesthetics of the Kindle that people like, it’s the comfort of using the unit.

It would be nice to see the eBook market opened up the way the music market has been. In fact, I look forward to the day that the majority of eBooks are available without DRM. Apple’s choice of the ePub format for books is an auspicious one. The ePub format is fast becoming the standard of choice for eBooks and supports both DRM and DRM-free content. Additionally, the format is built on standard open formats.

But what else does the iPad offer? Is it just a fancy book reader with some extra multimedia functionality? Or is there something more?

There has been some speculation that the iPad represents more than just an entry into the tablet market. That it, instead, represents an entry into the mobile processor market. After all, Apple put together their own processor, the Apple A4, specifically for this product. So is Apple merely using this as a platform for a launch into the mobile processor market? If so, early reports indicate that they may have something spectacular. Those who got hands-on time with the iPad report that the unit is very responsive and incredibly fast.

But for all of the design and power behind the iPad, there is one glaring hole: Flash support. And Apple isn’t hiding it, either. On stage, during the announcement of the iPad, Steve Jobs demonstrated web browsing by heading to the New York Times homepage. If you’ve ever been to their homepage, it’s dotted with various Flash objects containing video, slideshows, and more. On the iPad, these show up as big white boxes with the Safari plugin icon showing.

So what is Apple playing at? Flash is pretty prevalent on the web, so not supporting it will result in a lot of missing content, as one Adobe employee demonstrated. Of course, the iPhone and iPod Touch have the same problem. Or, do they? If a device is popular, developers adapt. This can easily be seen by the number of websites that have adapted to the iPhone. But even more than that, look at the number of sites that adapt to the various web browsers, creating special markup to work with each one. This is nothing new for developers, it happens today.

Flash is unique, though, in that it gives the developers capabilities that don’t otherwise exist in HTML, right? Well, not exactly. HTML5 gives developers a standardized way to deploy video, handle offline storage, draw, and more. Couple this with CSS and you can replicate much of what Flash already does. There are lots of examples already of what HTML5 can do.

So what does the iPad truly mean to computing? Will it be as revolutionary as Apple wants us to believe it will be? I’m still not 100% sold on it, but it’s definitely something to watch. Microsoft has tried tablets in the past and failed, will Apple succeed?

 

Web Security

People use the web today for just about anything. We get our news from news sites and blogs, we play games, we view pictures, etc. Most of these activities are fairly innocuous and don’t require much in the way of security, beyond typical anti-virus and anti-spyware protection. However, there are activities we engage in on the web where we want to keep our information private and secure. For instance, when we interact with our bank, we’d like to keep those transactions private. The same goes for other activities such as stock transfers and shopping.

And it’s not enough to merely keep it private, we also want to ensure that no one can inject anything into our sessions. Online banking wouldn’t be very useful if someone could inject phantom transfers into your session, draining your bank account. Likewise, having someone inject additional items into your order, or changing the delivery address, wouldn’t be very helpful.

Fortunately, Netscape developed a protocol to handle browser-to-server security called Secure Sockets Layer, or SSL. SSL was first released to the public in 1995 and updated a year later after several security flaws were uncovered. In 1999, SSL became TLS, Transport Layer Security. TLS has been updated twice since its inception and currently stands at version 1.2.

The purpose of SSL/TLS is pretty simple and straightforward, though the implementation details are enough to give anyone a headache. In short, when you connect to a remote site with your browser, the browser and web server negotiate a secure connection. Once established, everything you send back and forth is first encrypted locally and decrypted on the server end. Only the endpoints have the information required to both encrypt and decrypt, so the communication remains secure.
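The negotiation described above can be sketched with Python’s standard ssl module. The context object below does what a browser does by default: it demands a trusted certificate and a matching hostname before any application data flows. The host name and raw HTTP request are illustrative; this is a sketch of the handshake-then-encrypt flow, not a full HTTP client.

```python
import socket
import ssl

# Build a client-side TLS context. By default it requires a valid,
# trusted certificate and checks the hostname -- mirroring what a
# browser does during negotiation.
context = ssl.create_default_context()

def fetch_page(host: str, port: int = 443) -> bytes:
    # Plain TCP connect first, then negotiate TLS on top of the socket.
    with socket.create_connection((host, port)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            # Everything written here is encrypted before it leaves this
            # machine and decrypted only at the server, and vice versa.
            tls.sendall(b"GET / HTTP/1.0\r\nHost: " +
                        host.encode() + b"\r\n\r\n")
            chunks = []
            while data := tls.recv(4096):
                chunks.append(data)
    return b"".join(chunks)
```

Only the two endpoints hold the session keys negotiated during `wrap_socket`, which is what keeps the conversation private on the wire.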

What about man-in-the-middle attacks? What if you were able to insert yourself between the browser and the server and pass the messages back and forth? The browser would negotiate with you, and then you’d negotiate with the server. This way, you would have unencrypted access to the bits before you passed them on. That would work, wouldn’t it? Well, yes. Sort of. If the end-user allowed it or was tricked into allowing it.

When a secure connection is negotiated between a browser and server, the server presents the browser with a certificate. The certificate identifies the server to the browser. While anyone can create a certificate, certificates can be signed by others to “prove” their authenticity. When the server is set up, the administrator requests a certificate from a well-known third party and uses that certificate to identify the server. When the browser receives the certificate, it can verify its authenticity by checking the signature against the signer’s root certificate. If the certificate is not authentic, has expired, or was not signed by a well-known third party, the user is presented with an error dialog explaining the problem.
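One of the checks the browser performs, expiry, is easy to illustrate with the standard library. Python’s ssl module hands certificates to applications as dictionaries of fields; the values below are hypothetical, standing in for what a real server would send. The signature and chain checks mentioned above require real cryptography and are out of scope for this sketch.

```python
import ssl
import time

# A certificate as Python's ssl module presents it: a dict of the
# fields the browser inspects. These values are made up for the example.
cert = {
    "subject": ((("commonName", "www.example.com"),),),
    "issuer": ((("commonName", "Example CA"),),),
    "notAfter": "Jan 16 00:00:00 2030 GMT",
}

def is_expired(cert: dict) -> bool:
    # Convert the certificate's expiry timestamp to seconds since the
    # epoch and compare against the current time.
    return ssl.cert_time_to_seconds(cert["notAfter"]) < time.time()
```

A browser runs this same comparison (among many others) before deciding whether to show you an error dialog.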

Unfortunately, the dialog presented isn’t always helpful and typically requires some knowledge of SSL/TLS to understand. Most browser vendors have “corrected” this by placing lots of red text, exclamation marks, and other graphics to indicate that something bad has happened. The problem here is that these messages are intended to be warnings. There are instances where certificates not signed by third parties are completely acceptable. In fact, it’s possible for you, as a user, to put together a valid certificate signing system that will provide users the exact same protections a third-party certificate provides. I’ll post a how-to a little later on the exact process. You can also use a self-signed certificate, one that has no root, and still provide the same level of encryption.

So if we can provide the same protection using our own signed or self-signed certificates, then why pay a third party to sign certificates for us? Well, there are a couple of reasons, though they’ve somewhat faded with time. First and foremost, the major third-party signers have their root certificates, used for validation, added to all of the major web browsers. In other words, you don’t need to install these certificates, they’re already there. And since most users don’t know how SSL works, let alone how to install a certificate, this makes third-party certificates very appealing. This is the one feature of third-party certificates that still makes sense.
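You can see these pre-installed roots from Python as well. The default SSL context loads the platform’s bundled trust store, which is the same idea as the roots shipped inside every major browser; the exact counts below depend entirely on the machine it runs on.

```python
import ssl

# create_default_context() loads the trusted root certificates that
# ship with the platform -- the same roots-in-the-box idea that makes
# third-party certificates "just work" in a browser.
context = ssl.create_default_context()
stats = context.cert_store_stats()
# 'x509_ca' counts the CA (root/intermediate) certificates loaded;
# the numbers vary from system to system.
print(stats)
```

Any certificate chaining up to one of those loaded CAs validates without the user installing anything, which is precisely the convenience being paid for.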

Another reason is that your information is validated by the third-party provider. Or, at least, that’s how it used to be. Perhaps some providers still do, but since there is no standard across the board, SSL certificates as a de-facto identity check are broken. Some providers offer differing levels of validation for normal certificates, but there are no indicators within the browser to identify the level of validation. As a result, determining whether to trust a site or not falls completely on the shoulders of the user.

In response to this, an organization called the Certificate Authority/Browser Forum was created. This forum developed a set of guidelines that providers must adhere to in order to issue a new type of certificate, the Extended Validation, or EV, certificate. Audits are performed on an annual basis to ensure that providers continue to adhere to the guidelines. The end result is a certificate with special properties. When a browser visits a site that uses an EV certificate, the URL bar, or part of it, turns green and displays the name of the company that owns the certificate. The purpose is to give users a quick, at-a-glance way to validate a site.

To a certain degree, I agree that these certificates provide a slight enhancement of security. However, I think this is more security theater than actual security. At its core, an EV certificate offers no better security than that of a self-signed certificate. The “value” lies in the vetting process a site has to go through in order to obtain such a certificate. It also relies on users being trained to recognize the green bar. Unfortunately, most of the training I’ve seen in this regard seems to teach the user that a green URL bar instantly means they can trust the site with no further checking. I feel this is absolutely the wrong message to send. Users should be taught to verify website addresses as well as SSL credentials.

Keeping our information private and secure goes way beyond the conversation between the browser and the server, however. Both before information is sent and after it is received, it exists in plain text. If an attacker can infiltrate either end of the conversation, they can potentially retrieve this information. At the user’s end, security software such as anti-virus, anti-spyware, and a firewall can be installed to protect the user. However, the user has absolutely no control over the server end.

To the user, the server is a mystery. The user trusts the administrator to keep their information safe and secure, but has no way of determining whether or not that is the case. Servers are protected in much the same way a user’s computer is. Firewalls are typically the main defense against intruders, though server firewalls are typically more advanced than those used on end-user computers. Data on the server can be stored using encryption, so even if a server is compromised, the data cannot be accessed.
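Full encryption at rest requires a cipher library, but a close cousin of the idea can be sketched with the standard library alone: storing only salted password hashes, so that even a stolen database doesn’t reveal users’ passwords. This is an illustration of the “compromise doesn’t expose the data” principle, not the author’s setup or a complete server hardening recipe.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    # A random salt makes identical passwords hash to different values,
    # defeating precomputed lookup tables.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The server never stores the password itself; an attacker who copies the table still has to brute-force each slow, salted hash.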

Security on the Internet is a full-time job, both for the end user as well as the server administrator. Properly done, however, our data can be kept secure and private. All it takes is some due diligence and a little education.