Posts Tagged ‘security’

It’s docker, it’s a container, it’s… a process?

Thursday, November 29th, 2018

In a previous post I discussed Docker from a high level. In this post, we’ll take a closer look at how processes run in a container and how it differs from the common view of the architecture that is used to explain Docker. Remember this?

Common Docker Architecture Overview

The problem with this image, however, is that while it helps conceptualize what we’re talking about, it doesn’t reflect reality. If you listed the processes outside of the container, you might expect to see the docker daemon running and a bunch of additional processes that represent the containers themselves:

[root@dockerhost ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Oct15 ?        00:02:40 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root         2     0  0 Oct15 ?        00:00:03 [kthreadd]
root         3     2  0 Oct15 ?        00:03:44 [ksoftirqd/0]
...
root      4000     1  0 Oct15 ?        00:03:44 dockerd
root      4353  4000  0 Oct15 ?        00:03:44 myawesomecontainer1
root      4354  4000  0 Oct15 ?        00:03:44 myawesomecontainer2
root      4355  4000  0 Oct15 ?        00:03:44 myawesomecontainer3

And while this might be what you’d expect based on the image above, it does not represent reality. What you’ll actually see is the docker daemon running with a number of additional helper daemons to handle things like networking, and the processes that are running “inside” of the containers like this:

[root@dockerhost ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Oct15 ?        00:02:40 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root         2     0  0 Oct15 ?        00:00:03 [kthreadd]
root         3     2  0 Oct15 ?        00:03:44 [ksoftirqd/0]
...
root      1514     1  0 Oct15 ?        04:28:40 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt nat
root      1673  1514  0 Oct15 ?        01:27:08 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout
root      4035  1673  0 Oct31 ?        00:00:07 /usr/bin/docker-containerd-shim-current d548c5b83fa61d8e3bd86ad42a7ffea9b7c86e3f9d8095c1577d3e1270bb9420 /var/run/docker/libcontainerd/
root      4054  4035  0 Oct31 ?        00:01:24 apache2 -DFOREGROUND
33        6281  4054  0 Nov13 ?        00:00:07 apache2 -DFOREGROUND
33        8526  4054  0 Nov16 ?        00:00:03 apache2 -DFOREGROUND
33       24333  4054  0 04:13 ?        00:00:00 apache2 -DFOREGROUND
root     28489  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.22.0.3 -container-port 443
root     28502  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.22.0.3 -container-port 80
33       19216  4054  0 Nov13 ?        00:00:08 apache2 -DFOREGROUND

Without diving too deep into this, the docker processes you see above serve a few purposes. There’s the main dockerd process, which is responsible for the management of docker containers on this host. The containerd processes handle all of the lower level management tasks for the containers themselves. And finally, the docker-proxy processes handle the networking layer, forwarding traffic for published ports between the host and the containers.

You’ll also see a number of apache2 processes mixed in here as well. Those are the processes running within the container, and they look just like regular processes running on a linux system. The key difference is that a number of kernel features are being used to wall these processes off from the rest of the system. On the docker host you can see all of them, but when viewing the world from the context of a container, you can only see the container’s own processes.
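
If you’d rather not pick the container’s processes out of the host’s ps output by hand, docker can show the mapping directly. A quick sketch, reusing the container name from the earlier example and assuming ps is available inside the image:

# List the container's processes as the host sees them (host PIDs):
docker top myawesomecontainer1

# List the same processes from inside the container (namespaced PIDs):
docker exec myawesomecontainer1 ps -ef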

What is this black magic, you ask? Well, it’s primarily two kernel features called Namespaces and cgroups. Let’s take a look at how these work.

Namespaces are essentially internal mapping mechanisms that allow processes to have their own partitioned collections of resources. For instance, a process can be given its own PID namespace, allowing it to start a number of additional processes that can only see each other and not anything outside of the namespace owned by the main process. So let’s take a look at our earlier process list example. Inside of a given container you may see this:

[root@dockercontainer ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Nov27 ?        00:00:12 apache2 -DFOREGROUND
www-data    18     1  0 Nov27 ?        00:00:56 apache2 -DFOREGROUND
www-data    20     1  0 Nov27 ?        00:00:24 apache2 -DFOREGROUND
www-data    21     1  0 Nov27 ?        00:00:22 apache2 -DFOREGROUND
root       559     0  0 14:30 ?        00:00:00 ps -ef

While outside of the container, you’ll see this:

[root@dockerhost ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Oct15 ?        00:02:40 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root         2     0  0 Oct15 ?        00:00:03 [kthreadd]
root         3     2  0 Oct15 ?        00:03:44 [ksoftirqd/0]
...
root      1514     1  0 Oct15 ?        04:28:40 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt nat
root      1673  1514  0 Oct15 ?        01:27:08 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout
root      4035  1673  0 Oct31 ?        00:00:07 /usr/bin/docker-containerd-shim-current d548c5b83fa61d8e3bd86ad42a7ffea9b7c86e3f9d8095c1577d3e1270bb9420 /var/run/docker/libcontainerd/
root      4054  4035  0 Oct31 ?        00:01:24 apache2 -DFOREGROUND
33        6281  4054  0 Nov13 ?        00:00:07 apache2 -DFOREGROUND
33        8526  4054  0 Nov16 ?        00:00:03 apache2 -DFOREGROUND
33       24333  4054  0 04:13 ?        00:00:00 apache2 -DFOREGROUND
root     28489  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.22.0.3 -container-port 443
root     28502  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.22.0.3 -container-port 80
33       19216  4054  0 Nov13 ?        00:00:08 apache2 -DFOREGROUND

There are two things to note here. First, within the container, you only see the processes that the container runs. No systemd, no docker daemons, etc. Only the apache2 and ps processes. From outside of the container, however, you see all of the processes running on the system, including those within the container. Second, the PIDs listed inside of the container are different from those outside of the container. In this example, PID 4054 outside of the container maps to PID 1 inside of the container. This provides a layer of security such that a process running inside of a container can only interact with other processes running in that container. And if you kill process 1 inside of a container, the entire container comes to a screeching halt, much as if you killed process 1 on a linux host.
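
You don’t need Docker to experiment with PID namespaces. On most modern Linux systems the unshare utility from util-linux can create one directly; a minimal sketch:

# Create a new PID namespace, fork into it, and mount a fresh /proc so
# tools like ps only see processes inside the namespace.
sudo unshare --fork --pid --mount-proc bash

# Inside the new shell, bash is PID 1 and ps shows only bash and ps itself:
ps -ef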

PID namespaces are only one of the namespaces that Docker makes use of. There are also NET, IPC, MNT, UTS, and User namespaces, though User namespaces are disabled by default. Briefly, these namespaces provide the following:

  • NET
    • Isolates the network stack used within the container. A network stack can also be shared between containers when desired.
  • IPC
    • Provides isolated inter-process communication within a container, allowing the container to use features such as shared memory without that communication leaking outside of the container.
  • MNT
    • Allows mount points to be isolated, preventing mounts made within the container from appearing on, or affecting, the host system.
  • UTS
    • Allows a different host and domain name to be presented to each container.
  • User
    • Allows users and groups within a container to be mapped to different users and groups on the host system, thereby preventing a root user within a container from also being root (uid 0) outside of the container.
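
You can see which namespaces a given process belongs to by looking in /proc. Comparing a containerized process to a host process shows different namespace IDs for most entries; a rough sketch using PID 4054, the containerized apache2 from the listings above:

# Namespace membership of the containerized apache2 process:
ls -l /proc/4054/ns

# Compare with PID 1 (systemd) on the host; the pid, net, mnt, ipc, and
# uts entries point at different namespace IDs:
ls -l /proc/1/ns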

The second piece of black magic used is Control Groups, or cgroups. Cgroups limit resource usage for a process. Where namespaces create a localized view of resources for a process, cgroups create a limited pool of resources for it. For instance, you can assign specific CPU, memory, and disk I/O limits to a container. Once a cgroup is assigned, the process cannot exceed the limits put on it, thereby preventing processes from “running away” and exhausting system resources. Instead, the process either deals with the lower resource limits, or crashes.
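
Docker exposes these cgroup controls as flags on docker run. A hedged sketch; the image name and limits are arbitrary examples, and the cgroup path shown assumes a cgroup v1 host such as CentOS 7:

# Cap the container at 256 MB of RAM and give it a reduced share of CPU
# time relative to other containers (the default --cpu-shares is 1024):
docker run -d --name limited-apache --memory=256m --cpu-shares=512 httpd

# The resulting limits are visible under the container's cgroup:
cat /sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' limited-apache)/memory.limit_in_bytes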

By themselves, these features can be a bit daunting to set up for each process or group of processes. Docker conveniently packages this up, making deployment as simple as a docker run command. Combined with the packaging of a Docker container (which I’ll cover in a future post), Docker becomes a great way to deploy software in a reproducible, secure manner.

Yes, your iToaster needs security

Tuesday, March 24th, 2015

Let’s talk about your house for a moment. For the sake of argument, we’ll assume that you live in a nice house with doors, windows, the works. All of the various entries have the requisite locking devices. As with most homes, these help prevent unwanted entry, though a determined attacker can surely bypass them. For the moment, let’s ignore the determined attacker and just talk about casual attempts.

Throughout your time living in your home, casual attempts at illegal entry have been rebuffed. You may or may not even know about these attempts. They happen pretty randomly, but there’s typically not much in the way of evidence after the attacker gives up and leaves. So you’re pretty happy with how secure things are.

Recently, you’ve heard about this great new garage from a friend who has one. It’s really nice, low cost, and you have room for it on your property, so you decide to purchase one. You place the order and, after a few days, your new garage arrives. It’s everything you could have imagined. Plenty of room to store all the junk you have in the house, plus you can fit the car in there too!

You use the garage every day, moving boxes in and out of the garage as needed until one day you return home and, for some inexplicable reason, your car won’t fit all the way in. Well, that’s pretty weird, you think. You decide that maybe you stored too much in the garage, so you spend the rest of the day cleaning out the garage. You make some tough decisions and eventually you make enough room to put the car back in the garage.

Time passes and this happens a few more times. After a while you start to get a bit frustrated and decide that maybe you need to buy a bigger garage. You pull out your trusty measuring tape to verify the dimensions of the garage and, to your amazement, the garage is smaller than you remember. You do some more checking and discover that the garage is bigger on the outside than it is on the inside. So you call an expert to figure out what’s going on.

When the expert arrives, she takes one look at the situation and tells you she knows exactly what has happened. You watch with awe as she walks up to the closed garage, places her hand on the door, and the door opens by itself! Curious, you ask how she performed that little magic trick. She explains that this particular model of garage has a little known problem that allows the door to be opened by putting pressure on just the right place. Next, she heads into the garage and starts poking around at the walls. After a few moments, one of the walls slides open revealing another room full of stuff you don’t recognize.

Your expert explains that obviously someone else knows about this weakness and has set up a false wall in your garage to hide their own stuff in. This is the source of the shrinking space and your frustration. She helps you clean up the mess and tear down the false wall. After everything is back to normal, she recommends you contact the manufacturer and see if they have a fix for the faulty door.

While this story may sound pretty far-fetched when we’re talking about houses and garages, it’s an all too common story for consumer-grade appliances. And as we move further into this new age of connected devices, commonly called the Internet of Things (IoT), it’s going to become an even bigger issue.

Network access itself is the first challenge. Many of the major home router vendors have already experienced problems with security. So right out of the gate, home networks are potentially vulnerable. This is a major problem, especially given the potentially sensitive nature of data being transmitted by a variety of new IoT devices.

Today’s devices are incredibly data-centric. From fitness trackers to environmental sensors, our devices are tracking everything. This data is collected and then transmitted to an internet-connected service where it is made available to the user in a variety of ways. Some users may find this data to be sensitive, hoping to keep it relatively private, available only to the user and, anonymously, to the service they subscribe to. Others may make this data public. But in the world of IoT, a security problem with a device compromises that choice.

Or maybe the attacker isn’t after your data at all. Perhaps, like our garage example, they’re looking for resources they can use. Maybe they want to store files, or maybe they’re looking to use your device to process their own data. Years ago, attackers would gain access to a remote system so they could take advantage of the space available on the system, typically storing data and setting up a warez site. That is, illegal copies of software available to those who know where to look. These days, however, storage is everywhere and there are many superior ways to transmit files between users. As a result, the old-school practice of setting up a warez site has mostly fallen by the wayside.

In today’s world, attackers want access to your devices for a variety of reasons. Some attackers use these devices as zombie systems for sending massive amounts of spam. Typically this just results in a slow Internet connection and possibly gets your IP banned from sending mail. Not a big deal for you, but it can be a real headache for those of us dealing with the influx of spam.

More and more, however, attackers are taking over machines to use them for their processing power, or for their connection to the Internet. For instance, some attackers compromise machines just so they can use them to mine bitcoins. It seems harmless enough, but it can be an inconvenience to the owner of the device when it doesn’t respond the way it should because it’s too busy working on something else.

Attackers are also using the Internet connections for nefarious purposes such as setting up denial of service hosts. They use your connection, and the connections of other systems they have compromised, to send massive amounts of data to a remote system. The entire purpose of this activity is to prevent the remote system from being accessible. It was widely reported that this sort of activity is what caused connectivity problems to both Microsoft’s Xbox Live service as well as the Playstation Network during Christmas of 2014.

So what can we do about this? Users clearly want this technology, so we need to do something to make it more secure. And to be clear, this problem goes beyond the vendors; it includes the users as well. Software has and will always have bugs. Some of these bugs can be exploited and result in a security problem. So the first step is ensuring that vendors are patching those bugs when they’re found. And, perhaps, vendors can be convinced to bolster their internal security teams such that secure coding practices are followed.

But vendors patching bugs isn’t the only problem, and in most cases, it’s the easy part of the problem. Once a patch exists, users have to apply that patch to their system. As we’ve seen over the years, patching isn’t something that users are very good at. Thus, automatic update systems, such as those used by Microsoft and Apple, are commonplace. But this practice hasn’t carried over to devices yet. Vendors need to work on this and build these features into their hardware. Until they do, these security issues will remain a widespread problem.

So yes, your iToaster needs security. And we need vendors to take the next step and bake in automatic updating so security becomes the default. End users want devices that work without having to worry about how and when to update them. Not all manufacturers have the marketing savvy that Apple uses to make updating sexy. Maybe they can take a page out of the book Microsoft used with the Xbox One. Silent updates, automatically, overnight.

Hacker is not a dirty word

Saturday, March 21st, 2015

Have you ever had to fix a broken item and you didn’t have the right parts? Instead of just giving up, you looked around and found something that would work for the time being. Occasionally, you come back later and fix it the right way, but more often than not, that fix stays in place indefinitely. Or, perhaps you’ve found a novel new use for a device. It wasn’t built for that purpose, but you figured out that it fit the exact use you had in mind.

Those are the actions of a hacker. No, really. If you look up the definition of a hacker, you get all sorts of responses. Wikipedia has three separate entries for the word hacker in relation to technology:

Hacker – someone who seeks and exploits weaknesses in a computer system or computer network

Hacker – (someone) who makes innovative customizations or combinations of retail electronic and computer equipment

Hacker – (someone) who combines excellence, playfulness, cleverness and exploration in performed activities

Google defines it as follows:

1. a person who uses computers to gain unauthorized access to data.

(informal) an enthusiastic and skillful computer programmer or user.

2. a person or thing that hacks or cuts roughly.

And there are more. What’s interesting here is that depending on where you look, the word hacker means different things. It has become a pretty contentious word, mostly because the media has, over time, used it to describe the actions of a particular type of person. Specifically, hacker is often used to describe the criminal actions of a person who gains unauthorized access to computer systems. But make no mistake, the media is completely wrong on this and they’re using the word improperly.

Sure, the person who broke into that computer system and stole all of that data is most likely a hacker. But, first and foremost, that person is a criminal. Being a hacker is a lifestyle and, in many cases, a career choice. Much like being a lawyer or a doctor is a career choice. Why then is hacker used as a negative term to identify criminal activity and not doctor or lawyer? There are plenty of instances where doctors, lawyers, and people from a wide variety of professions have indulged in criminal activity.

Keren Elazari spoke at TED in 2014 about hackers and their importance in our society. During her talk she discusses the role of hackers, noting that there are hackers who use their skills for criminal activity, but many more who use their skills to better the world. From hacktivist groups like Anonymous to hackers like Barnaby Jack, these people have changed the world in positive ways, helping to identify weaknesses in everything from computer systems to governments and laws. In her own words:

My years in the hacker world have made me realize both the problem and the beauty about hackers: They just can’t see something broken in the world and leave it be. They are compelled to either exploit it or try and change it, and so they find the vulnerable aspects in our rapidly changing world. They make us, they force us to fix things or demand something better, and I think we need them to do just that, because after all, it is not information that wants to be free, it’s us.

It’s time to stop letting the media use this word improperly. It’s time to take back what is ours. Hacker has long been a term used to describe those we look up to, those we seek to emulate. It is a term we hold dear, a term we seek to defend. When Loyd Blankenship was arrested in 1986, he wrote what has become known as the Hacker’s Manifesto. This document, often misunderstood, describes the struggle many of us went through, and the joy of discovering something we could call our own. Yes, we’re often misunderstood. Yes, we’ve been marginalized for a long time. But times have changed since then and our culture is strong and growing.

Will online retailers be the next major breach target?

Tuesday, October 14th, 2014

In the past year we have seen several high-profile breaches of brick and mortar retailers. Estimates put the number of credit cards stolen in each case in the tens of millions. For the most part, these retailers have weathered the storm with virtually no ill effects. In fact, it seems the same increase in stock price that TJ Maxx saw after their breach still rings true today. A sad fact indeed.

Regardless, the recent slew of breaches has finally prompted the credit card industry to act. They have declared that 2015 will be the year that chip and pin becomes the standard for all card-present transactions. And while chip and pin isn’t a silver bullet, and attackers will eventually find new and innovative ways to circumvent it, it has proven to be quite effective in Europe where it has been the standard for years.

Chip and pin changes how the credit card information is transmitted to the processor. Instead of the credit card number being read, in plain text, off of the magnetic strip, the card reader initiates an encrypted communication between the chip on the card and the card reader. The card details are encrypted and sent, along with the user’s PIN, to the card processor for verification. It is this encrypted communication between the card and, ultimately, the card processor that results in increased security. In short, the attack vectors used in recent breaches are difficult, if not impossible, to pull off with these new readers. Since the information is not decrypted until it hits the card processor, attackers can’t simply skim the information at the card reader. There are, of course, other attacks, though these have not yet proven widespread.

At its heart, though, chip and pin only “fixes” one type of credit card transaction: card-present transactions. That is, transactions in which the card holder physically scans their card via a card reader. The other type of transaction, the card-not-present transaction, is unaffected by chip and pin. In fact, the move to chip and pin may result in putting online transactions at greater risk. With brick and mortar attacks gone, attackers will move to online retailers. Despite the standard SSL encryption used between shoppers and online retailers, there are plenty of ways to steal credit card data. In fact, one might argue that a single attack could net more card numbers in a shorter time since online retailers often store credit card data as a convenience for the user.

It seems that online fraud, though expected, is being largely ignored for the moment. After all, how are we going to protect that data without supplying card readers to every online shopper? Online solutions such as PayPal, Amazon Payments, and others mitigate this problem slightly, but we still have to rely on the security they’ve put in place to protect cardholder data. Other solutions such as Apple Pay and Google Wallet seemingly combine on and offline protections, but the central data warehouse remains. The problem seems to be the security of the card number itself. And losing this data can be a huge burden for many users as they have to systematically update payment information as the result of a possible breach. This can often lead to late payments, penalties, and more.

One possible alternative is to reduce the impact a single breach can cause. What if the data that retailers stored was of little or no value to an attacker while still allowing the retailer a way to simplify payments for the shopper? What if a breach at a retailer only affected that retailer and resulted in virtually no impact on the user? A solution like this may be just what we need.

Instead of providing a retailer with your credit card number and CVV, you provide the retailer with a simple token. That token, coupled with a private retailer-specific token, should be all that is needed to verify a transaction. Tokens can and should be different for each retailer. If a retailer is compromised, new tokens can be generated, reducing the impact on the user significantly. Attackers who successfully breach a retailer can only submit transactions if they can obtain both the private retailer token as well as the user token. And if processors put simple access-control lists in place, it increases the difficulty an attacker encounters when trying to push through a fraudulent transaction.
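
To make the idea concrete, here is a rough sketch of what such a scheme could look like using nothing but openssl on the command line. The token sizes, field names, and HMAC construction are all assumptions for illustration, not an existing payment API:

# A per-retailer token for the user, stored by the retailer in place of the
# card number, plus a private retailer-specific secret:
USER_TOKEN=$(openssl rand -hex 16)
RETAILER_SECRET=$(openssl rand -hex 32)

# A transaction is only verifiable when both pieces are combined; here that
# is modeled as an HMAC over the transaction details. Leaking USER_TOKEN
# alone gives an attacker nothing they can submit on its own.
echo -n "order=1234&amount=19.99&token=${USER_TOKEN}" \
    | openssl dgst -sha256 -hmac "${RETAILER_SECRET}"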

Obtaining tokens can be handled by redirecting a user to a payment gateway for their initial transaction. The payment gateway verifies the user and their credit card data, and then passes the generated token back to the retailer. This is similar to how retailers using existing online payment processors such as Paypal and Amazon Payments already handle payments. The credit card data never passes through the retailer network. The number of locations credit card data is stored reduces significantly as well. This, in turn, means that attackers have fewer targets and while this increases the risk a payment processor network incurs, one can argue that these networks should already have significant defenses in place.

This is only one possible solution for online payments. There are many other solutions out there being presented by both security and non-security folks. But there seems to be no significant movement on an online solution. Will it take several high-profile online breaches to convince credit card companies that a solution is needed? Or will credit card companies move to protect retailers and card holders ahead of attackers redirecting their efforts? If history is any indication, get used to having your card re-issued several times a year for the foreseeable future.

Bleeding Heart Security

Friday, April 11th, 2014

Unless you’ve been living under a rock the past few days, you’ve probably heard about the Heartbleed vulnerability in OpenSSL that was disclosed on Monday, April 7th. Systems and network administrators across the globe have spent the last few days testing for this vulnerability, patching systems, and probably rocking in the corner while crying. Yes, it’s that bad. What’s more, there are a number of reports that intelligence agencies may have known about this vulnerability for some time now.

The quick and dirty is that a missing bounds check in the code (a buffer over-read) allows an attacker to remotely read memory of an affected system in 64k chunks. The only memory accessible to an attacker would be memory used by the process being connected to, but, depending on the process, there may be a LOT of useful data in there. For instance, Yahoo was leaking usernames and passwords until late Tuesday evening.

The fabulous web comic, xkcd, explains how the attack works in layman’s terms. If you’re interested in the real nitty gritty of this vulnerability, though, there’s an excellent write-up on the IOActive Labs blog. If you’re the type that likes to play, you can find proof-of-concept code here. And let’s not forget about the client side, there’s PoC code for that as well.

OpenSSL versions 1.0.1 through 1.0.1f as well as the 1.0.2 beta code are affected. The folks at OpenSSL released version 1.0.1g on Monday which fixed the problem. Or, at least, the current problem. There’s a bit of chatter about other issues that may be lurking in the OpenSSL codebase.

Now that a few days have passed, however, what remains to be done? After all, everyone has patched their servers, right? Merely patching doesn’t make the problem disappear, though. Vulnerable code is out there and mistakes can be made. For the foreseeable future, you should be regularly scanning your network for vulnerable systems with something like Nmap. The Nmap NSE for Heartbleed scanning is already available. Alternatively, you can use something like Nagios to regularly check your existing servers.
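
For example, a sweep of your own network with the ssl-heartbleed NSE script looks something like this (the address range is just a placeholder):

# Scan for HTTPS services still vulnerable to Heartbleed. Requires a
# reasonably recent Nmap that ships the ssl-heartbleed script.
nmap -p 443 --script ssl-heartbleed 192.168.1.0/24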

Patching immediately may not have prevented a breach, either. Since Heartbleed doesn’t leave much of a trace beyond some oddities that your IDS may have seen, there’s virtually no way to know if anything has been taken. The best way to deal with this is to just go ahead and assume that your private keys are compromised and start replacing them. New keys, new certs. It’s painful, it’s slow, but it’s necessary.

For end users, the best thing you can do is change your passwords. I’m not aware of any “big” websites that have not patched by now, so changing passwords should be relatively safe. However, that said, Wired and Engadget have some of the best advice I’ve seen about this. In short, change your passwords today, then change them again in a few weeks. If you’re really paranoid, change them a third time in about a month. By that time, any site that is going to patch will have already patched.

Unfortunately, I think the fun is just beginning. I expect we’ll start seeing a number of related attacks. Phishing attacks are the most likely in the beginning. If private keys were compromised, then attackers can potentially impersonate websites, including their SSL certificates. This would likely involve a DNS poisoning attack, but could also be accomplished by compromising a user’s local system and setting a hosts file entry. Certificate revocation is a potential defense against this, but since many browsers have CRL checks disabled by default, it probably won’t help. Users will have to watch what they click, where they go, and what software they run. Not much different from the advice given already.

Another possible source of threats is consumer devices. As Bruce Schneier put it, “An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.” What he’s referring to are the many embedded devices we use on a daily basis that may never receive updates to protect the end user. In other words, that router you purchased from the discount store? That may be affected and unless you replace it, you’ll continue to be vulnerable. Fortunately, most of these devices aren’t configured, by default, to face the Internet, so there may yet be hope.

The Heartbleed vulnerability is a serious contender for the worst security vulnerability ever released. I’m not sure of another vulnerability that exposes so many systems to such a degree as this one. Network and systems administrators will be cleaning up after this one for a while.

Becoming your own CA

Thursday, February 13th, 2014

SSL, as I mentioned in a previous blog entry, has some issues when it comes to trust. But regardless of the problems with SSL, it is a necessary part of the security toolchain. In certain situations, however, it is possible to overcome these trust issues.

Commercial providers are not the only entities that are capable of being a Certificate Authority. In fact, anyone can become a CA and the tools to do so are available for free. Becoming your own CA is a fairly painless process, though you might want to brush up on your OpenSSL skills. And lest you think you can just start signing certificates and selling them to third parties, it’s not quite that simple. The well-known certificate authorities have worked with browser vendors to have their root certificates added as part of the browser installation process. You’ll have to convince the browser vendors that they need to add your root certificate as well. Good luck.

Having your own CA provides you the means to import your own root certificate into your browser and use it to validate certificates you use within your network. You can use these SSL certificates for more than just websites as well. LDAP, RADIUS, SMTP, and other common applications use standard SSL certificates for encrypting traffic and validating remote connections. But as mentioned above, be aware that unless a remote user has a copy of your root certificate, they will be unable to validate the authenticity of your signed certificates.

Using certificates signed by your own CA can provide you that extra trust level you may be seeking. Perhaps you configured your mail server to use your certificate for the POP and IMAP protocols. This makes it more difficult for an attacker to masquerade as either of those services without obtaining your signing certificate so they can create their own. This is especially true if you configure your mail client such that your root certificate is the only certificate that can be used for validation.

Using your own signed certificates for internal, non-public facing services provides an even better use-case. Attacks such as DNS cache poisoning make it possible for attackers to trick devices into using the wrong address for an intended destination. If these services are configured to only use your certificates and reject connection attempts from peers with invalid certificates, then attackers will only be able to impersonate the destination if they can somehow obtain a valid certificate signed by your signing certificate.

Sound good? Well, how do we go about creating our own root certificate and all the various machinery necessary to make this work? Fortunately, all of the necessary tools are open-source and part of most Linux distributions. For the purposes of this blog post, I will be explaining how this is accomplished using the CentOS 6.x Linux distribution. I will also endeavor to break down each command and explain what each parameter does. Much of this information can be found in the man pages for the various commands.

OpenSSL is installed as part of a base CentOS install. Included in the install is a directory structure in /etc/pki. All of the necessary tools and configuration files are located in this directory structure, so instead of reinventing the wheel, we’ll use the existing setup.

To get started, edit the default openssl.cnf configuration file. You can find this file in /etc/pki/tls. There are a few options you want to change from their defaults. Search for the following headers and change the options listed within.

[CA_default]
default_md = sha256

[req]
default_bits = 4096
default_md = sha256
  • default_md : This option defines the default message digest to use. Switching this to sha256 results in a stronger message digest being used.
  • default_bits : This option defines the default key size. 2048 is generally considered a minimum these days. I recommend setting this to 4096.

Once the openssl.cnf file is set up, the rest of the process is painless. First, switch into the correct directory.

cd /etc/pki/tls/misc

Next, use the CA command to create a new CA.

[root@localhost misc]# ./CA -newca
CA certificate filename (or enter to create)

Making CA certificate ...
Generating a 4096 bit RSA private key
 ...................................................................................................................................................................................................................................................++
.......................................................................++
writing new private key to '/etc/pki/CA/private/./cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:cert.example.com
Email Address []:certadmin@example.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/./cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798347 (0xf837fc8d719b304b)
        Validity
            Not Before: Feb 13 18:37:14 2014 GMT
            Not After : Feb 12 18:37:14 2017 GMT
        Subject:
            countryName               = US
            stateOrProvinceName       = MyState
            organizationName          = My Company Inc.
            commonName                = cert.example.com
            emailAddress              = certadmin@example.com
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

            X509v3 Basic Constraints:
                CA:TRUE
Certificate is to be certified until Feb 12 18:37:14 2017 GMT (1095 days)

Write out database with 1 new entries
Data Base Updated

And that’s about it. The root certificate is located in /etc/pki/CA/cacert.pem. This file can be made public without compromising the security of your system. This is the same certificate you’ll want to import into your browser, email client, etc. in order to validate any certificates you sign.
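
Before distributing it, you can double-check the contents of the new root certificate with openssl:

# Show the subject and validity dates of the CA certificate we just created:
openssl x509 -in /etc/pki/CA/cacert.pem -noout -subject -dates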

Now you can start signing certificates. First you’ll need to create a CSR on the server you want to install it on. The following command creates both the private key and the CSR for you. I recommend using the server name as the name of the CSR and the key.

openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
  • openssl : The OpenSSL command itself
  • req : This option tells OpenSSL that we are performing a certificate signing request (CSR) operation.
  • -newkey : This option creates a new certificate request and a new private key. It will prompt the user for the relevant field values. The rsa:4096 argument indicates that we want to use the RSA algorithm with a key size of 4096 bits.
  • -keyout : This gives the filename to write the newly created private key to.
  • -out : This specifies the output filename to write to.
[root@localhost misc]# openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
Generating a 4096 bit RSA private key
.....................................................................................................................++
..........................................................................................................................................................................................................................................................................................................................................................................................................++
writing new private key to 'www.example.com.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:www.example.com
Email Address []:hostmaster@example.com

Once you have the CSR, copy it over to the server you’re using to sign certificates. Unfortunately, the existing tools don’t make it easy to merely name the CSR you’re trying to sign, so we need to create our own tool. First, create a new directory to put the CSRs in.

mkdir /etc/pki/tls/csr

Next, create the sign_cert.sh script in the directory we just created. This file needs to be executable.

#!/bin/sh

# Revoke last year's certificate first :
# openssl ca -revoke cert.crt

DOMAIN=$1                          # base name of the CSR, e.g. www.example.com
YEAR=`date +%Y`                    # year is appended to the signed cert's name
rm -f newreq.pem                   # the CA script expects the CSR as newreq.pem
ln -s $DOMAIN.csr newreq.pem
/etc/pki/tls/misc/CA -sign         # sign newreq.pem with our CA key
mv newcert.pem $DOMAIN.$YEAR.crt   # rename the signed certificate

That’s all you need to start signing certificates. Place the CSR you transferred from the other server into the csr directory and use the script we just created to sign it.

[root@localhost csr]# ./sign_cert.sh www.example.com
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
        Subject:
            countryName = US
            stateOrProvinceName = MyState
            localityName = MyCity
            organizationName = My Company Inc.
            commonName = www.example.com
            emailAddress = hostmaster@example.com
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Comment:
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier:
                3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Certificate is to be certified until Feb 13 18:48:55 2015 GMT (365 days)
Sign the certificate? [y/n]:y

1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=MyState, O=My Company Inc., CN=cert.example.com/emailAddress=certadmin@example.com
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
    Subject: C=US, ST=MyState, L=MyCity, O=My Company Inc., CN=www.example.com/emailAddress=hostmaster@example.com
    Subject Public Key Info:
        Public Key Algorithm: rsaEncryption
            Public-Key: (4096 bit)
            Modulus:
                00:d9:5a:cc:87:f0:e5:1e:6f:a0:25:cd:fe:36:64:
                6c:68:ae:2f:3e:7e:93:93:a4:69:6f:f1:28:c1:c2:
                4d:5f:3c:3a:61:2e:4e:f0:90:89:54:48:d6:03:83:
                fb:ac:1e:7c:9a:e8:be:cf:c9:8f:93:41:27:3e:1b:
                66:63:db:a1:54:cb:f7:1d:0b:71:bc:5f:80:e1:30:
                e4:28:14:68:1c:09:ba:d0:aa:d3:e6:2b:24:cd:21:
                67:99:dc:8b:7a:2c:94:d0:ed:8e:02:5f:2f:52:06:
                09:0e:8a:b7:bf:64:e8:d7:bf:94:94:ad:80:34:57:
                32:89:51:00:fe:fd:8c:7d:17:35:4c:c7:5f:5b:58:
                f4:97:9b:21:42:9e:a9:6c:86:5f:f4:35:98:a5:81:
                62:9d:fa:15:07:9d:29:25:38:2b:5d:22:74:58:f8:
                58:56:1c:e9:65:a3:62:b5:a7:66:17:95:12:21:ca:
                82:12:90:b6:8a:8d:1f:79:e8:5c:f4:f9:6c:3a:44:
                f9:3a:3f:29:0d:2e:bf:51:98:9f:58:21:e5:d9:ee:
                78:54:ad:5a:a2:6f:d1:85:9a:bc:b9:21:92:e8:76:
                80:b8:0f:96:77:9a:99:5e:3b:06:bb:6f:da:1c:6e:
                f2:10:16:69:ba:2b:57:c8:1a:cc:b6:e4:0c:1d:b2:
                a6:b7:b9:6c:37:2e:80:13:46:a1:46:c3:ca:d6:2b:
                cd:f7:ba:38:98:74:15:7f:f1:67:03:8e:24:89:96:
                55:31:eb:d8:44:54:a5:11:04:59:e6:73:59:42:ed:
                aa:a3:37:13:ab:63:ab:ef:61:65:0a:af:2f:71:91:
                23:40:7d:f8:e8:a1:9d:cf:3f:e5:33:d9:5f:d2:4d:
                06:d0:2c:70:59:63:06:0f:2a:59:ae:ae:12:8d:f4:
                6c:fd:b2:33:76:e8:34:0f:1f:24:91:2a:a8:aa:1b:
                11:8a:0b:86:f3:67:b8:be:b7:a0:06:02:4a:76:ef:
                dd:ed:c4:a9:03:a1:8c:b0:39:9d:35:98:7f:04:1c:
                24:8a:1c:7c:6f:35:56:71:ee:b5:36:b7:3f:14:04:
                eb:48:a1:4f:6f:8e:43:7c:8b:36:4a:bf:ba:e9:8b:
                d9:38:0c:76:24:e9:a3:38:bf:4e:86:fd:31:4d:c3:
                6f:16:07:09:dd:d8:6b:0b:9d:4d:97:eb:1f:92:21:
                b2:a5:f9:d8:55:61:85:d2:99:97:bc:27:12:be:eb:
                55:86:ee:1f:f5:6f:a7:c5:64:2f:4e:c2:67:a3:52:
                97:7a:d9:66:89:05:6a:59:ed:69:7b:22:10:2b:a1:
                14:4e:5d:b8:f0:21:e9:11:d0:25:ae:bc:05:2b:c3:
                db:ad:cf
            Exponent: 65537 (0x10001)
    X509v3 extensions:
        X509v3 Basic Constraints:
            CA:FALSE
        Netscape Comment:
            OpenSSL Generated Certificate
        X509v3 Subject Key Identifier:
            3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
        X509v3 Authority Key Identifier:
            keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Signature Algorithm: sha256WithRSAEncryption
     ca:66:b2:55:64:e6:40:a5:85:19:11:66:0d:63:89:fb:0d:3a:
     0c:ec:fd:cb:5c:93:44:1e:3f:1b:ca:f5:3d:85:ab:0a:0b:dc:
     f3:18:1d:1f:ec:85:ff:f3:82:52:9e:c7:12:19:07:e9:6a:82:
     bd:32:f6:d1:19:b2:b7:09:1c:34:d7:89:45:7e:51:4d:42:d6:
     4e:78:b6:39:b3:76:58:f8:20:57:b3:d8:7b:e0:b3:2f:ce:9f:
     a2:59:de:f6:31:f2:09:1c:91:3b:7f:97:61:cb:11:a4:b4:73:
     ab:47:64:e8:93:07:98:d5:47:75:8d:9a:8f:a3:8f:e8:f4:42:
     7e:b8:1b:e8:36:72:13:93:f9:a8:cc:6d:b4:85:a7:af:94:fe:
     f3:6e:76:c2:4d:78:c3:c2:0b:a4:48:27:d3:eb:52:c3:46:14:
     c1:26:03:28:a0:53:c7:db:59:c9:95:b8:d9:f0:d9:a8:19:4a:
     a7:0f:81:ad:3c:e1:ec:f2:21:51:0d:bc:f9:f9:f6:b6:75:02:
     9f:43:de:e6:2f:9b:77:d3:c3:72:6f:f6:18:d7:a3:43:91:d2:
     04:2a:c8:bf:67:23:35:b7:41:3f:d1:63:fe:dc:53:a7:26:e9:
     f4:ee:3b:96:d5:2a:9c:6d:05:3d:27:6e:57:2f:c9:dc:12:06:
     2c:cf:0c:1b:09:62:5c:50:82:77:6b:5c:89:32:86:6b:26:30:
     d2:6e:33:20:fc:a6:be:5a:f0:16:1a:9d:b7:e0:d5:d7:bb:d8:
     35:57:d2:be:d5:07:98:b7:3c:18:38:f9:94:4c:26:3a:fe:f2:
     ad:40:e6:95:ef:4b:a9:df:b0:06:87:a2:6c:f2:6a:03:85:3b:
     97:a7:ef:e6:e5:d9:c3:57:87:09:06:ae:8a:5a:63:26:b9:35:
     29:a5:87:4b:7b:08:b9:63:1c:c3:65:7e:97:ae:79:79:ed:c3:
     a3:36:c3:87:1f:54:fe:0a:f1:1a:c1:71:3d:bc:9e:36:fc:da:
     03:2b:61:b5:19:0c:d7:4d:19:37:61:45:91:4c:c9:7a:5b:00:
     cd:c2:2d:36:f9:1f:c2:b1:97:2b:78:86:aa:75:0f:0a:7f:04:
     85:81:c5:8b:be:af:a6:a7:7a:d2:17:26:7a:86:0d:f8:fe:c0:
     27:a8:66:c7:92:cd:c5:34:99:c9:8e:c1:25:f3:98:df:4e:48:
     37:4a:ee:76:4a:fa:e4:66:b4:1f:cd:d8:e0:25:fd:c7:0b:b3:
     12:af:bb:b7:29:98:5e:86:f2:12:8e:20:c6:a9:40:6f:39:14:
     8b:71:9f:98:22:a0:5b:57:d1:f1:88:7d:86:ad:19:04:7b:7d:
     ee:f2:c9:87:f4:ca:06:07
-----BEGIN CERTIFICATE-----
MIIGBDCCA+ygAwIBAgIJAPg3/I1xmzBMMA0GCSqGSIb3DQEBCwUAMHoxCzAJBgNV
BAYTAlVTMRAwDgYDVQQIDAdNeVN0YXRlMRgwFgYDVQQKDA9NeSBDb21wYW55IElu
Yy4xGTAXBgNVBAMMEGNlcnQuZXhhbXBsZS5jb20xJDAiBgkqhkiG9w0BCQEWFWNl
cnRhZG1pbkBleGFtcGxlLmNvbTAeFw0xNDAyMTMxODQ4NTVaFw0xNTAyMTMxODQ4
NTVaMIGLMQswCQYDVQQGEwJVUzEQMA4GA1UECAwHTXlTdGF0ZTEPMA0GA1UEBwwG
TXlDaXR5MRgwFgYDVQQKDA9NeSBDb21wYW55IEluYy4xGDAWBgNVBAMMD3d3dy5l
eGFtcGxlLmNvbTElMCMGCSqGSIb3DQEJARYWaG9zdG1hc3RlckBleGFtcGxlLmNv
bTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANlazIfw5R5voCXN/jZk
bGiuLz5+k5OkaW/xKMHCTV88OmEuTvCQiVRI1gOD+6wefJrovs/Jj5NBJz4bZmPb
oVTL9x0LcbxfgOEw5CgUaBwJutCq0+YrJM0hZ5nci3oslNDtjgJfL1IGCQ6Kt79k
6Ne/lJStgDRXMolRAP79jH0XNUzHX1tY9JebIUKeqWyGX/Q1mKWBYp36FQedKSU4
K10idFj4WFYc6WWjYrWnZheVEiHKghKQtoqNH3noXPT5bDpE+To/KQ0uv1GYn1gh
5dnueFStWqJv0YWavLkhkuh2gLgPlneamV47Brtv2hxu8hAWaborV8gazLbkDB2y
pre5bDcugBNGoUbDytYrzfe6OJh0FX/xZwOOJImWVTHr2ERUpREEWeZzWULtqqM3
E6tjq+9hZQqvL3GRI0B9+Oihnc8/5TPZX9JNBtAscFljBg8qWa6uEo30bP2yM3bo
NA8fJJEqqKobEYoLhvNnuL63oAYCSnbv3e3EqQOhjLA5nTWYfwQcJIocfG81VnHu
tTa3PxQE60ihT2+OQ3yLNkq/uumL2TgMdiTpozi/Tob9MU3DbxYHCd3YawudTZfr
H5IhsqX52FVhhdKZl7wnEr7rVYbuH/Vvp8VkL07CZ6NSl3rZZokFalntaXsiECuh
FE5duPAh6RHQJa68BSvD263PAgMBAAGjezB5MAkGA1UdEwQCMAAwLAYJYIZIAYb4
QgENBB8WHU9wZW5TU0wgR2VuZXJhdGVkIENlcnRpZmljYXRlMB0GA1UdDgQWBBQ6
7is6c6bDXDmQ6oU/2nEze5FNfzAfBgNVHSMEGDAWgBQU/BS89KU+awxYO987JjVG
oL7s8TANBgkqhkiG9w0BAQsFAAOCAgEAymayVWTmQKWFGRFmDWOJ+w06DOz9y1yT
RB4/G8r1PYWrCgvc8xgdH+yF//OCUp7HEhkH6WqCvTL20RmytwkcNNeJRX5RTULW
Tni2ObN2WPggV7PYe+CzL86folne9jHyCRyRO3+XYcsRpLRzq0dk6JMHmNVHdY2a
j6OP6PRCfrgb6DZyE5P5qMxttIWnr5T+8252wk14w8ILpEgn0+tSw0YUwSYDKKBT
x9tZyZW42fDZqBlKpw+BrTzh7PIhUQ28+fn2tnUCn0Pe5i+bd9PDcm/2GNejQ5HS
BCrIv2cjNbdBP9Fj/txTpybp9O47ltUqnG0FPSduVy/J3BIGLM8MGwliXFCCd2tc
iTKGayYw0m4zIPymvlrwFhqdt+DV17vYNVfSvtUHmLc8GDj5lEwmOv7yrUDmle9L
qd+wBoeibPJqA4U7l6fv5uXZw1eHCQauilpjJrk1KaWHS3sIuWMcw2V+l655ee3D
ozbDhx9U/grxGsFxPbyeNvzaAythtRkM100ZN2FFkUzJelsAzcItNvkfwrGXK3iG
qnUPCn8EhYHFi76vpqd60hcmeoYN+P7AJ6hmx5LNxTSZyY7BJfOY305IN0rudkr6
5Ga0H83Y4CX9xwuzEq+7tymYXobyEo4gxqlAbzkUi3GfmCKgW1fR8Yh9hq0ZBHt9
7vLJh/TKBgc=
-----END CERTIFICATE-----
Signed certificate is in newcert.pem

The script automatically renamed the newly signed certificate. In the above example, the signed certificate is in www.example.com.2014.crt. Transfer this file back to the server it belongs on and you’re all set to start using it.
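
If you want to confirm that the signed certificate chains back to your CA before deploying it, openssl can verify it against your root certificate:

# Verify the freshly signed certificate against our own root certificate:
openssl verify -CAfile /etc/pki/CA/cacert.pem www.example.com.2014.crt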

That’s it! You’re now a certificate authority with the power to sign your own certificates. Don’t let all that power go to your head!

SSL “Security”

Friday, February 7th, 2014

SSL, a cryptographically secure protocol, was created by Netscape in the mid-1990s. Today, SSL and its replacement, TLS, are used by web browsers and other programs to create secure connections between devices across the Internet.

SSL provides the means to cryptographically secure a tunnel between endpoints, but there is another aspect of security that is missing. Trust. While a user may be confident that the data received from the other end of the SSL tunnel was sent by the remote system, the user can not be confident that the remote system is the system it claims to be. This problem was partially solved through the use of a Public Key Infrastructure, or PKI.

PKI, in a nutshell, provides the trust structure needed to make SSL secure. Certificates are issued by a certificate authority or CA. The CA cryptographically signs the certificate, enabling anyone to verify that the certificate was issued by the CA. Other PKI constructs offer validation of the registrant, indexing of the public keys, and a key revocation system. It is within these other constructs that the problems begin.

When SSL certificates were first offered for sale, the CAs spent a great deal of time and energy verifying the identity of the registrant. Often, paper copies of the proof had to be sent to the CA before a certificate would be issued. The process could take several days. More recently, the bar for entry has been lowered significantly. Certificates are now issued through an automated process requiring only that the registrant click on a link sent to one of the email addresses listed in the Whois information. This lack of thorough verification has significantly eroded the trust a user can place in the authenticity of a certificate.

CAs have responded to this problem by offering different levels of SSL certificates. Entry level certificates are verified automatically via the click of a link. Higher level SSL certificates have additional identity verification steps. And at the highest level, the Extended Validation, or EV, certificate requires a thorough verification of the registrant’s identity. Often, these different levels of SSL certificates are marketed as stronger levels of encryption. The reality, however, is that the level of encryption for each of these certificates is exactly the same. The only difference is the amount of verification performed by the CA.

Despite the extra level of verification, these certificates are almost indistinguishable from one another. With the exception of EV certificates, the only noticeable difference between the differing levels of SSL certificates is the identity details obtained before the certificate is issued. An EV certificate, on the other hand, can only be obtained from certain vendors, and shows up in a web browser with a special green overlay. The intent here seems to be that websites with EV certificates can be trusted more because the identity of the organization running the website was more thoroughly validated.
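
You can see this difference for yourself by pulling a site’s certificate and looking at the subject: a domain-validated certificate typically carries little more than the common name, while an EV certificate includes full organization details. A quick sketch (the hostname is a placeholder):

# Fetch the certificate presented by a site and print who it was issued to:
openssl s_client -connect www.example.com:443 < /dev/null 2>/dev/null \
    | openssl x509 -noout -subject -issuer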

In the end, though, trust is the ultimate issue. Users have been trained to simply trust a website with an SSL certificate, and to trust sites with EV certificates even more. In fact, there have been a number of marketing campaigns targeted at convincing users that the “Green Address Bar” means that the website is completely trustworthy. And they’ve been pretty effective. But, as with most marketing, they didn’t quite tell the truth. Sure, the EV certificate may mean that the site is more trustworthy, but it’s still possible that the certificate is fake.

There have been a number of well known CAs that have been compromised in recent years. Diginotar and Comodo being two of the more high profile ones. In both cases, it became possible for rogue certificates to be created for any website the attacker wanted to hijack. That certificate plus some creative DNS poisoning and the attacker suddenly looks like your bank, or google, or whatever site the attacker wants to be. And, they’ll have a nice shiny green EV certificate.

So how do we fix this? Well, one way would be to use the certificate revocation system that already exists within the PKI infrastructure. If a certificate is stolen, or a false certificate is created, the CA has the ability to put the signature for that certificate into the revocation system. When a user tries to load a site with a bad certificate, a warning is displayed telling the user that the certificate is not to be trusted.
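
Revocation status can also be checked by hand with openssl, which is a useful way to see what a browser would have to do on every connection. A sketch: the hostname and responder URL are placeholders (the real responder URL is listed in the certificate’s Authority Information Access extension), and ca-chain.pem is assumed to hold the issuing CA’s certificate:

# Save the certificate presented by the site:
openssl s_client -connect www.example.com:443 < /dev/null 2>/dev/null \
    | openssl x509 -outform PEM > site.pem

# Ask the CA's OCSP responder whether that certificate has been revoked:
openssl ocsp -issuer ca-chain.pem -cert site.pem \
    -url http://ocsp.example-ca.com -CAfile ca-chain.pem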

Checking revocation of a certificate takes time, and what happens if the revocation server is down? Should the browser let the user go to the site anyway? Or should it block by default? The more secure option is to block, of course, but most users won’t understand what’s going on. So most browser manufacturers have either disabled revocation checking completely, or they default to allowing a user to access the site when the revocation site is slow or unavailable.

Without the ability to verify if a certificate is valid or not, there can be no real trust in the security of the connection, and that’s a problem. Perhaps one way to fix this problem is to disconnect the revocation process from the process of loading the webpage. If the revocation check happened in parallel to the page loading, it shouldn’t interfere with the speed of the page load. Additional controls can be put into place to prevent any data from being sent to the remote site without a warning until the revocation check completes. In this manner, the revocation check can take a few seconds to complete without impeding the use of the site. And after the first page load, the revocation information is cached anyway, so subsequent page loads are unaffected.

Another option, floated by the browser builders themselves, is to have the browser vendors host the revocation information. This information is then passed on to the browsers when they’re loaded. This way the revocation process can be handled outside of the CAs, handling situations such as those caused by a CA being compromised. Another idea would be to use short term certificates that expire quickly, dropping the need for revocation checks entirely.

It’s unclear as to what direction the market will move with this issue. It has been over two years since the attacks on Diginotar and Comodo and the immediacy of this problem seems to have passed. At the moment, the only real fix for this is user education. But with the marketing departments for SSL vendors working to convince users of the security of SSL, this seems unlikely.

BSides Delaware 2013

Tuesday, November 12th, 2013

The annual BSides Delaware conference took place this past weekend, November 8th and 9th. BSides Delaware is a free, community-driven security event held at the Wilmington University New Castle campus. The community is quite open, welcoming seasoned professionals, newcomers, curious individuals, and even children. A number of families attended, bringing their children with them to learn and have fun.

I was fortunate enough to speak at last year’s BSides and was part of the staff for this year’s event. There were two tracks of talks, many of which were recorded and are already online thanks to Adrian Crenshaw, the IronGeek. Adrian has honed his video skills and had every recording online by the closing ceremonies on Saturday evening.

In all, there were more than 25 talks over the course of two days covering a wide variety of topics: logging, Bitcoin, forensics, and more. While most speakers were established security professionals, there were a few new speakers striving to make a name for themselves.

This year also included a FREE wireless essentials training class. The class was taught by a team of world-class instructors including Mike Kershaw (drag0rn), author of the immensely popular Kismet wireless tool, Russell Handorf from the FBI Cyber Squad, and Rick Farina, lead developer for Pentoo. The class covered everything from wireless basics to software-defined radio hacking. An absolutely amazing class.

In addition to the talks, BSides also featured not one, but two lockpick villages; both Digital Trust and Toool were present. The lockpick villages were a big hit with seasoned professionals as well as the very young. It’s amazing to see how adept a young child can be with a lockpick.

Hackers for Charity was present as well with a table of goodies for sale. They also held a silent (and not so silent) auction where all proceeds went to the charity. Hackers for Charity raises money to support a variety of projects they engage in across the world. From their website:

We employ volunteer hackers and technologists through our Volunteer Network and engage their skills in short projects designed to help charities that can not afford traditional technical resources.

We’ve personally witnessed how one person can have a profound impact on the world. By giving of their skills, time and talent our volunteers are profoundly impacting the world, one “hacker” at a time.

BSides Delaware 2013 was an amazing experience. This was my second year at the conference, and it’s remarkable how much it has grown. The dates for BSidesDE 2014 have already been announced: November 14th and 15th. Mark your calendars and make an effort to come join in the fun. It’s worth it.

Derbycon 2012

Saturday, October 6th, 2012

I spent this past weekend in Louisville, KY attending a relatively new security conference called Derbycon. This year was the second year they held the conference and the first year I spoke there. It was amazing, to say the least.

I haven’t been to many conventions, and this is the only security-oriented convention I’ve attended. When I first attended last year, it was with some trepidation. I knew that some of the attendees I’d be seeing were true rockstars in the security world. And, unfortunately, one of the people who was supposed to come with us was unable to attend. Of course, that person was the one person in our group who was connected within the security world, and we were depending on them to introduce us to everyone.

It went well, nonetheless, and we were able to meet a lot of amazing people while we were there. Going back this year, we were able to rekindle friendships that started last year, and even make a few new ones. Derbycon has an absolutely amazing sense of family. Even the true rockstars of the con are down to earth enough to hang out with the newcomers.

And this year, I had the opportunity to speak. I submitted my CFP earlier in the year, not really expecting it to be chosen. Much to my surprise, though, it was. And so I spent some time putting together my talk and prepared to stand in front of the very people I looked up to. It was nerve-wracking to say the least. You can watch the video over on the Irongeek site, and you can find the slides in my presentation archive.

But I powered through it. I delivered my talk, and while it may not have been the most amazing talk, it was an accomplishment. I think it’s given me a bit more confidence in my own abilities, and I’m looking forward to giving another. In fact, I’ve since submitted a talk to BSides Delaware at the behest of the organizers. I haven’t heard back yet, but here’s hoping.

I’m already making plans to attend Derbycon 2013 and I hope to be a permanent fixture there for many years to come. Derbycon is an amazing place to go and something truly magnificent to experience. I may not be in the security industry, but they made me feel truly welcome despite my often dumb questions and inane comments. Rel1k, IronGeek, and Purehate have put together something special and I was proud to be a part of it again.

Jumping The Gap

Sunday, August 26th, 2012

I listened to a news story on NPR’s On The Media recently about “Cyber Warfare” and assessing its true threat. On the one hand, it seemed like another misguided report from a clueless news media. On the other hand, though, it did make me think a bit.

Much of the talk about Cyber Warfare revolves around attacking the various SCADA systems used to control the nation’s physical infrastructure. By today’s standards, many of these systems are quite primitive. They are designed for a very specific purpose, rarely upgraded to run on modern operating systems, and very rarely, if ever, designed to be secure. The state of the art in security for many of these systems is simply to not allow outside access.

Unfortunately, if numerous reports are to be believed, a good portion of the world’s infrastructure is connected to the Internet in one manner or another. The number of institutions that truly air gap their critical networks is alarmingly low. A researcher from IO Active, who provided some of the information for the aforementioned NPR article, used SHODAN to scour the Internet for SCADA systems. Why use SHODAN? Turns out, the simple act of scanning the Internet for these systems often resulted in the target systems crashing and going offline. If a simple network scan can kill one of these systems, then what hope do we have?
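For context, that kind of search is entirely passive on the researcher’s side. As a minimal sketch using the official shodan Python library (the API key and the query are assumptions for illustration), you can ask SHODAN for hosts it has already indexed speaking Modbus, a protocol commonly used by SCADA equipment, without ever touching the devices yourself:

import shodan

api = shodan.Shodan("YOUR_API_KEY")      # hypothetical API key
results = api.search("port:502")         # Modbus/TCP, a common SCADA protocol

print("Exposed hosts indexed:", results["total"])
for match in results["matches"][:10]:    # peek at the first few results
    print(match["ip_str"], match.get("org") or "unknown org")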

But air gapping is by no means a guarantee against attacks, since users of these systems may regularly switch between connected and non-connected systems and use some form of removable media to transfer files back and forth. There is precedent for this with Stuxnet. According to reports, the Iranian nuclear facility was, in fact, air gapped. However, Stuxnet was designed to replicate onto USB drives and other media. Plug an infected USB drive into a targeted SCADA system and poof, instant infection across an air gapped system.

So what can be done here? How do we keep our infrastructure safe from attackers? Yes, even aging attackers…

Personally, I believe this comes down, again, to Defense in Depth. Short of not building these systems in the first place, I don’t believe there is a way to prevent attacks entirely, and any determined attacker will eventually get in, given time. So the only way to defend is to build a layered defense grid with a full monitoring back end. Expect that attackers will make it through one or two layers before being detected; determined attackers may make it even further. But if you build your defenses with this in mind, you will stand a better chance of detecting and repelling these attacks.
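As a toy illustration of the “expect some layers to fail” idea (the layer names and events are entirely made up), a monitoring back end can be thought of as something that correlates sensor reports from each layer and escalates as soon as activity shows up deeper than the perimeter:

# Illustrative layer names, outermost first.
LAYERS = ["perimeter firewall", "internal segmentation", "application host", "SCADA network"]

def review_events(events):
    """events: list of (layer_index, description) tuples reported by sensors."""
    for idx, desc in sorted(events):
        print(f"[{LAYERS[idx]}] {desc}")
    deepest = max((idx for idx, _ in events), default=-1)
    if deepest >= 1:  # anything past the perimeter means a layer has already failed
        remaining = len(LAYERS) - deepest - 1
        print(f"ALERT: activity detected at '{LAYERS[deepest]}', {remaining} layer(s) left -- respond now")

review_events([
    (0, "port scan from external address"),
    (1, "unexpected Modbus traffic crossing the segmentation boundary"),
])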

I don’t believe that air gapping systems is a viable security strategy. If anything, it can create a false sense of security for users and administrators. After all, if the system isn’t connected, how can it possibly be infected? Instead, build security in from the start and deploy your defenses in monitored layers. It works.