Becoming your own CA

SSL, as I mentioned in a previous blog entry, has some issues when it comes to trust. But regardless of the problems with SSL, it is a necessary part of the security toolchain. In certain situations, however, it is possible to overcome these trust issues.

Commercial providers are not the only entities that are capable of being a Certificate Authority. In fact, anyone can become a CA and the tools to do so are available for free. Becoming your own CA is a fairly painless process, though you might want to brush up on your OpenSSL skills. And lest you think you can just start signing certificates and selling them to third parties, it’s not quite that simple. The well-known certificate authorities have worked with browser vendors to have their root certificates added as part of the browser installation process. You’ll have to convince the browser vendors that they need to add your root certificate as well. Good luck.

Having your own CA means you can import your root certificate into your browser and use it to validate certificates deployed within your network. These certificates are good for more than just websites: LDAP, RADIUS, SMTP, and other common applications use standard SSL certificates for encrypting traffic and validating remote connections. But as mentioned above, be aware that unless a remote user has a copy of your root certificate, they will be unable to validate the authenticity of your signed certificates.

Using certificates signed by your own CA can provide the extra level of trust you may be seeking. Perhaps you configure your mail server to use your certificate for the POP and IMAP protocols. This makes it much harder for an attacker to masquerade as either of those services, since creating a certificate your clients will accept requires access to your CA’s private key. This is especially true if you configure your mail client so that your root certificate is the only certificate used for validation.

Using your own signed certificates for internal, non-public-facing services is an even better use case. Attacks such as DNS cache poisoning make it possible for attackers to trick devices into using the wrong address for an intended destination. If these services are configured to use only your certificates and to reject connection attempts from peers with invalid certificates, then attackers can only impersonate the destination if they can somehow obtain a certificate signed by your CA.

Sound good? Well, how do we go about creating our own root certificate and all the various machinery necessary to make this work? Fortunately, all of the necessary tools are open-source and part of most Linux distributions. For the purposes of this blog post, I will be explaining how this is accomplished using the CentOS 6.x Linux distribution. I will also endeavor to break down each command and explain what each parameter does. Much of this information can be found in the man pages for the various commands.

OpenSSL is installed as part of a base CentOS install. Included in the install is a directory structure in /etc/pki. All of the necessary tools and configuration files are located in this directory structure, so instead of reinventing the wheel, we’ll use the existing setup.

To get started, edit the default openssl.cnf configuration file. You can find this file in /etc/pki/tls. There are a few options you want to change from their defaults. Search for the following headers and change the options listed within.

[CA_default]
default_md = sha256

[req]
default_bits = 4096
default_md = sha256
  • default_md : This option defines the default message digest to use. Switching this to sha256 results in a stronger message digest being used.
  • default_bits : This option defines the default key size. 2048 is generally considered a minimum these days. I recommend setting this to 4096.

Once the openssl.cnf file is set up, the rest of the process is painless. First, switch into the correct directory.

cd /etc/pki/tls/misc

Next, use the CA command to create a new CA.

[root@localhost misc]# ./CA -newca
CA certificate filename (or enter to create)

Making CA certificate ...
Generating a 4096 bit RSA private key
 ...................................................................................................................................................................................................................................................++
.......................................................................++
writing new private key to '/etc/pki/CA/private/./cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:cert.example.com
Email Address []:certadmin@example.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/./cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798347 (0xf837fc8d719b304b)
        Validity
            Not Before: Feb 13 18:37:14 2014 GMT
            Not After : Feb 12 18:37:14 2017 GMT
        Subject:
            countryName               = US
            stateOrProvinceName       = MyState
            organizationName          = My Company Inc.
            commonName                = cert.example.com
            emailAddress              = certadmin@example.com
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

            X509v3 Basic Constraints:
                CA:TRUE
Certificate is to be certified until Feb 12 18:37:14 2017 GMT (1095 days)

Write out database with 1 new entries
Data Base Updated

And that’s about it. The root certificate is located in /etc/pki/CA/cacert.pem. This file can be made public without compromising the security of your system. This is the same certificate you’ll want to import into your browser, email client, etc. in order to validate any certificates you sign.
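Before distributing the root certificate, it’s worth a quick look at exactly what you’re shipping. The commands below are a sketch: they generate a throwaway self-signed root so the example stands alone; on your CA host you would point the inspection command at /etc/pki/CA/cacert.pem instead.

```shell
# Sketch: create a throwaway self-signed root so the inspection command
# can be demonstrated standalone (on a real CA host, inspect
# /etc/pki/CA/cacert.pem instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=Demo Root CA" -keyout demo.key -out demo-cacert.pem

# Show the subject and validity window of the root certificate
openssl x509 -in demo-cacert.pem -noout -subject -dates
```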

Now you can start signing certificates. First, you’ll need to create a CSR on the server where the certificate will be installed. The following command creates both the private key and the CSR for you. I recommend using the server name as the filename for both the CSR and the key.

openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
  • openssl : The OpenSSL command itself.
  • req : This option tells OpenSSL that we are performing a certificate signing request (CSR) operation.
  • -newkey : This option creates a new certificate request and a new private key. It will prompt the user for the relevant field values. The rsa:4096 argument indicates that we want to use the RSA algorithm with a key size of 4096 bits.
  • -keyout : This gives the filename to write the newly created private key to.
  • -out : This specifies the output filename to write to.
[root@localhost misc]# openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
Generating a 4096 bit RSA private key
.....................................................................................................................++
..........................................................................................................................................................................................................................................................................................................................................................................................................++
writing new private key to 'www.example.com.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:www.example.com
Email Address []:hostmaster@example.com
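If you’re scripting this, the same request can be made non-interactively. This is a sketch: -subj supplies the DN fields the prompts above would ask for, and -nodes leaves the private key unencrypted, so protect the resulting file accordingly.

```shell
# Non-interactive CSR generation: -subj replaces the DN prompts and
# -nodes skips the passphrase (the key file is then unencrypted)
openssl req -newkey rsa:4096 -nodes \
    -subj "/C=US/ST=MyState/L=MyCity/O=My Company Inc./CN=www.example.com" \
    -keyout www.example.com.key -out www.example.com.csr

# Confirm the CSR is well-formed and self-consistent
openssl req -in www.example.com.csr -noout -verify
```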

Once you have the CSR, copy it over to the server you’re using to sign certificates. Unfortunately, the existing tools don’t make it easy to merely name the CSR you’re trying to sign, so we need to create our own tool. First, create a new directory to put the CSRs in.

mkdir /etc/pki/tls/csr

Next, create the sign_cert.sh script in the directory we just created. The file needs to be executable (chmod +x sign_cert.sh).

#!/bin/sh

# Usage: ./sign_cert.sh <domain>
# Revoke last year's certificate first:
# openssl ca -revoke cert.crt

DOMAIN="$1"
YEAR=$(date +%Y)
rm -f newreq.pem
ln -s "$DOMAIN.csr" newreq.pem
/etc/pki/tls/misc/CA -sign
mv newcert.pem "$DOMAIN.$YEAR.crt"

That’s all you need to start signing certificates. Place the CSR you transferred from the other server into the csr directory and use the script we just created to sign it.

[root@localhost csr]# ./sign_cert.sh www.example.com
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
        Subject:
            countryName = US
            stateOrProvinceName = MyState
            localityName = MyCity
            organizationName = My Company Inc.
            commonName = www.example.com
            emailAddress = hostmaster@example.com
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Comment:
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier:
                3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Certificate is to be certified until Feb 13 18:48:55 2015 GMT (365 days)
Sign the certificate? [y/n]:y

1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=MyState, O=My Company Inc., CN=cert.example.com/emailAddress=certadmin@example.com
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
    Subject: C=US, ST=MyState, L=MyCity, O=My Company Inc., CN=www.example.com/emailAddress=hostmaster@example.com
    Subject Public Key Info:
        Public Key Algorithm: rsaEncryption
            Public-Key: (4096 bit)
            Modulus:
                00:d9:5a:cc:87:f0:e5:1e:6f:a0:25:cd:fe:36:64:
                6c:68:ae:2f:3e:7e:93:93:a4:69:6f:f1:28:c1:c2:
                4d:5f:3c:3a:61:2e:4e:f0:90:89:54:48:d6:03:83:
                fb:ac:1e:7c:9a:e8:be:cf:c9:8f:93:41:27:3e:1b:
                66:63:db:a1:54:cb:f7:1d:0b:71:bc:5f:80:e1:30:
                e4:28:14:68:1c:09:ba:d0:aa:d3:e6:2b:24:cd:21:
                67:99:dc:8b:7a:2c:94:d0:ed:8e:02:5f:2f:52:06:
                09:0e:8a:b7:bf:64:e8:d7:bf:94:94:ad:80:34:57:
                32:89:51:00:fe:fd:8c:7d:17:35:4c:c7:5f:5b:58:
                f4:97:9b:21:42:9e:a9:6c:86:5f:f4:35:98:a5:81:
                62:9d:fa:15:07:9d:29:25:38:2b:5d:22:74:58:f8:
                58:56:1c:e9:65:a3:62:b5:a7:66:17:95:12:21:ca:
                82:12:90:b6:8a:8d:1f:79:e8:5c:f4:f9:6c:3a:44:
                f9:3a:3f:29:0d:2e:bf:51:98:9f:58:21:e5:d9:ee:
                78:54:ad:5a:a2:6f:d1:85:9a:bc:b9:21:92:e8:76:
                80:b8:0f:96:77:9a:99:5e:3b:06:bb:6f:da:1c:6e:
                f2:10:16:69:ba:2b:57:c8:1a:cc:b6:e4:0c:1d:b2:
                a6:b7:b9:6c:37:2e:80:13:46:a1:46:c3:ca:d6:2b:
                cd:f7:ba:38:98:74:15:7f:f1:67:03:8e:24:89:96:
                55:31:eb:d8:44:54:a5:11:04:59:e6:73:59:42:ed:
                aa:a3:37:13:ab:63:ab:ef:61:65:0a:af:2f:71:91:
                23:40:7d:f8:e8:a1:9d:cf:3f:e5:33:d9:5f:d2:4d:
                06:d0:2c:70:59:63:06:0f:2a:59:ae:ae:12:8d:f4:
                6c:fd:b2:33:76:e8:34:0f:1f:24:91:2a:a8:aa:1b:
                11:8a:0b:86:f3:67:b8:be:b7:a0:06:02:4a:76:ef:
                dd:ed:c4:a9:03:a1:8c:b0:39:9d:35:98:7f:04:1c:
                24:8a:1c:7c:6f:35:56:71:ee:b5:36:b7:3f:14:04:
                eb:48:a1:4f:6f:8e:43:7c:8b:36:4a:bf:ba:e9:8b:
                d9:38:0c:76:24:e9:a3:38:bf:4e:86:fd:31:4d:c3:
                6f:16:07:09:dd:d8:6b:0b:9d:4d:97:eb:1f:92:21:
                b2:a5:f9:d8:55:61:85:d2:99:97:bc:27:12:be:eb:
                55:86:ee:1f:f5:6f:a7:c5:64:2f:4e:c2:67:a3:52:
                97:7a:d9:66:89:05:6a:59:ed:69:7b:22:10:2b:a1:
                14:4e:5d:b8:f0:21:e9:11:d0:25:ae:bc:05:2b:c3:
                db:ad:cf
            Exponent: 65537 (0x10001)
    X509v3 extensions:
        X509v3 Basic Constraints:
            CA:FALSE
        Netscape Comment:
            OpenSSL Generated Certificate
        X509v3 Subject Key Identifier:
            3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
        X509v3 Authority Key Identifier:
            keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Signature Algorithm: sha256WithRSAEncryption
     ca:66:b2:55:64:e6:40:a5:85:19:11:66:0d:63:89:fb:0d:3a:
     0c:ec:fd:cb:5c:93:44:1e:3f:1b:ca:f5:3d:85:ab:0a:0b:dc:
     f3:18:1d:1f:ec:85:ff:f3:82:52:9e:c7:12:19:07:e9:6a:82:
     bd:32:f6:d1:19:b2:b7:09:1c:34:d7:89:45:7e:51:4d:42:d6:
     4e:78:b6:39:b3:76:58:f8:20:57:b3:d8:7b:e0:b3:2f:ce:9f:
     a2:59:de:f6:31:f2:09:1c:91:3b:7f:97:61:cb:11:a4:b4:73:
     ab:47:64:e8:93:07:98:d5:47:75:8d:9a:8f:a3:8f:e8:f4:42:
     7e:b8:1b:e8:36:72:13:93:f9:a8:cc:6d:b4:85:a7:af:94:fe:
     f3:6e:76:c2:4d:78:c3:c2:0b:a4:48:27:d3:eb:52:c3:46:14:
     c1:26:03:28:a0:53:c7:db:59:c9:95:b8:d9:f0:d9:a8:19:4a:
     a7:0f:81:ad:3c:e1:ec:f2:21:51:0d:bc:f9:f9:f6:b6:75:02:
     9f:43:de:e6:2f:9b:77:d3:c3:72:6f:f6:18:d7:a3:43:91:d2:
     04:2a:c8:bf:67:23:35:b7:41:3f:d1:63:fe:dc:53:a7:26:e9:
     f4:ee:3b:96:d5:2a:9c:6d:05:3d:27:6e:57:2f:c9:dc:12:06:
     2c:cf:0c:1b:09:62:5c:50:82:77:6b:5c:89:32:86:6b:26:30:
     d2:6e:33:20:fc:a6:be:5a:f0:16:1a:9d:b7:e0:d5:d7:bb:d8:
     35:57:d2:be:d5:07:98:b7:3c:18:38:f9:94:4c:26:3a:fe:f2:
     ad:40:e6:95:ef:4b:a9:df:b0:06:87:a2:6c:f2:6a:03:85:3b:
     97:a7:ef:e6:e5:d9:c3:57:87:09:06:ae:8a:5a:63:26:b9:35:
     29:a5:87:4b:7b:08:b9:63:1c:c3:65:7e:97:ae:79:79:ed:c3:
     a3:36:c3:87:1f:54:fe:0a:f1:1a:c1:71:3d:bc:9e:36:fc:da:
     03:2b:61:b5:19:0c:d7:4d:19:37:61:45:91:4c:c9:7a:5b:00:
     cd:c2:2d:36:f9:1f:c2:b1:97:2b:78:86:aa:75:0f:0a:7f:04:
     85:81:c5:8b:be:af:a6:a7:7a:d2:17:26:7a:86:0d:f8:fe:c0:
     27:a8:66:c7:92:cd:c5:34:99:c9:8e:c1:25:f3:98:df:4e:48:
     37:4a:ee:76:4a:fa:e4:66:b4:1f:cd:d8:e0:25:fd:c7:0b:b3:
     12:af:bb:b7:29:98:5e:86:f2:12:8e:20:c6:a9:40:6f:39:14:
     8b:71:9f:98:22:a0:5b:57:d1:f1:88:7d:86:ad:19:04:7b:7d:
     ee:f2:c9:87:f4:ca:06:07
-----BEGIN CERTIFICATE-----
MIIGBDCCA+ygAwIBAgIJAPg3/I1xmzBMMA0GCSqGSIb3DQEBCwUAMHoxCzAJBgNV
BAYTAlVTMRAwDgYDVQQIDAdNeVN0YXRlMRgwFgYDVQQKDA9NeSBDb21wYW55IElu
Yy4xGTAXBgNVBAMMEGNlcnQuZXhhbXBsZS5jb20xJDAiBgkqhkiG9w0BCQEWFWNl
cnRhZG1pbkBleGFtcGxlLmNvbTAeFw0xNDAyMTMxODQ4NTVaFw0xNTAyMTMxODQ4
NTVaMIGLMQswCQYDVQQGEwJVUzEQMA4GA1UECAwHTXlTdGF0ZTEPMA0GA1UEBwwG
TXlDaXR5MRgwFgYDVQQKDA9NeSBDb21wYW55IEluYy4xGDAWBgNVBAMMD3d3dy5l
eGFtcGxlLmNvbTElMCMGCSqGSIb3DQEJARYWaG9zdG1hc3RlckBleGFtcGxlLmNv
bTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANlazIfw5R5voCXN/jZk
bGiuLz5+k5OkaW/xKMHCTV88OmEuTvCQiVRI1gOD+6wefJrovs/Jj5NBJz4bZmPb
oVTL9x0LcbxfgOEw5CgUaBwJutCq0+YrJM0hZ5nci3oslNDtjgJfL1IGCQ6Kt79k
6Ne/lJStgDRXMolRAP79jH0XNUzHX1tY9JebIUKeqWyGX/Q1mKWBYp36FQedKSU4
K10idFj4WFYc6WWjYrWnZheVEiHKghKQtoqNH3noXPT5bDpE+To/KQ0uv1GYn1gh
5dnueFStWqJv0YWavLkhkuh2gLgPlneamV47Brtv2hxu8hAWaborV8gazLbkDB2y
pre5bDcugBNGoUbDytYrzfe6OJh0FX/xZwOOJImWVTHr2ERUpREEWeZzWULtqqM3
E6tjq+9hZQqvL3GRI0B9+Oihnc8/5TPZX9JNBtAscFljBg8qWa6uEo30bP2yM3bo
NA8fJJEqqKobEYoLhvNnuL63oAYCSnbv3e3EqQOhjLA5nTWYfwQcJIocfG81VnHu
tTa3PxQE60ihT2+OQ3yLNkq/uumL2TgMdiTpozi/Tob9MU3DbxYHCd3YawudTZfr
H5IhsqX52FVhhdKZl7wnEr7rVYbuH/Vvp8VkL07CZ6NSl3rZZokFalntaXsiECuh
FE5duPAh6RHQJa68BSvD263PAgMBAAGjezB5MAkGA1UdEwQCMAAwLAYJYIZIAYb4
QgENBB8WHU9wZW5TU0wgR2VuZXJhdGVkIENlcnRpZmljYXRlMB0GA1UdDgQWBBQ6
7is6c6bDXDmQ6oU/2nEze5FNfzAfBgNVHSMEGDAWgBQU/BS89KU+awxYO987JjVG
oL7s8TANBgkqhkiG9w0BAQsFAAOCAgEAymayVWTmQKWFGRFmDWOJ+w06DOz9y1yT
RB4/G8r1PYWrCgvc8xgdH+yF//OCUp7HEhkH6WqCvTL20RmytwkcNNeJRX5RTULW
Tni2ObN2WPggV7PYe+CzL86folne9jHyCRyRO3+XYcsRpLRzq0dk6JMHmNVHdY2a
j6OP6PRCfrgb6DZyE5P5qMxttIWnr5T+8252wk14w8ILpEgn0+tSw0YUwSYDKKBT
x9tZyZW42fDZqBlKpw+BrTzh7PIhUQ28+fn2tnUCn0Pe5i+bd9PDcm/2GNejQ5HS
BCrIv2cjNbdBP9Fj/txTpybp9O47ltUqnG0FPSduVy/J3BIGLM8MGwliXFCCd2tc
iTKGayYw0m4zIPymvlrwFhqdt+DV17vYNVfSvtUHmLc8GDj5lEwmOv7yrUDmle9L
qd+wBoeibPJqA4U7l6fv5uXZw1eHCQauilpjJrk1KaWHS3sIuWMcw2V+l655ee3D
ozbDhx9U/grxGsFxPbyeNvzaAythtRkM100ZN2FFkUzJelsAzcItNvkfwrGXK3iG
qnUPCn8EhYHFi76vpqd60hcmeoYN+P7AJ6hmx5LNxTSZyY7BJfOY305IN0rudkr6
5Ga0H83Y4CX9xwuzEq+7tymYXobyEo4gxqlAbzkUi3GfmCKgW1fR8Yh9hq0ZBHt9
7vLJh/TKBgc=
-----END CERTIFICATE-----
Signed certificate is in newcert.pem

The script automatically renamed the newly signed certificate. In the above example, the signed certificate is in www.example.com.2014.crt. Transfer this file back to the server it belongs on and you’re all set to start using it.
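Before deploying, it’s worth confirming the new certificate actually chains to your root. The sketch below builds a throwaway CA and leaf so the verify step can be shown end to end; in practice you would run the final command against /etc/pki/CA/cacert.pem and www.example.com.2014.crt.

```shell
# Sketch: throwaway root CA and server certificate, so the verify step
# can be demonstrated standalone
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Demo Root CA" -keyout ca.key -out ca.pem
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=www.example.com" -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -days 1 -out server.crt

# The actual check: does the signed certificate chain to the root?
openssl verify -CAfile ca.pem server.crt
```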

That’s it! You’re now a certificate authority with the power to sign your own certificates. Don’t let all that power go to your head!

SSL “Security”

SSL, a cryptographically secure protocol, was created by Netscape in the mid-1990s. Today SSL, and its replacement, TLS, are used by web browsers and other programs to create secure connections between devices across the Internet.

SSL provides the means to cryptographically secure a tunnel between endpoints, but there is another aspect of security that is missing. Trust. While a user may be confident that the data received from the other end of the SSL tunnel was sent by the remote system, the user cannot be confident that the remote system is the system it claims to be. This problem was partially solved through the use of a Public Key Infrastructure, or PKI.

PKI, in a nutshell, provides the trust structure needed to make SSL secure. Certificates are issued by a certificate authority or CA. The CA cryptographically signs the certificate, enabling anyone to verify that the certificate was issued by the CA. Other PKI constructs offer validation of the registrant, indexing of the public keys, and a key revocation system. It is within these other constructs that the problems begin.

When SSL certificates were first offered for sale, the CAs spent a great deal of time and energy verifying the identity of the registrant. Often, paper copies of the proof had to be sent to the CA before a certificate would be issued, and the process could take several days. More recently, the bar for entry has been lowered significantly. Certificates are now issued via an automated process requiring only that the registrant click a link sent to one of the email addresses listed in the Whois information. This lack of thorough verification has significantly eroded the trust a user can place in the authenticity of a certificate.

CAs have responded to this problem by offering different levels of SSL certificates. Entry-level certificates are verified automatically via the click of a link. Higher-level SSL certificates have additional identity verification steps. And at the highest level, the Extended Validation, or EV, certificate requires a thorough verification of the registrant’s identity. Often, these different levels of SSL certificates are marketed as stronger levels of encryption. The reality, however, is that the level of encryption for each of these certificates is exactly the same. The only difference is the amount of verification performed by the CA.

Despite the extra verification, these certificates are almost indistinguishable from one another. With the exception of EV certificates, the only noticeable difference between the various levels of SSL certificates is the identity details obtained before the certificate is issued. An EV certificate, on the other hand, can only be obtained from certain vendors, and shows up in a web browser with a special green overlay. The intent seems to be that websites with EV certificates can be trusted more because the identity of the organization running the website was more thoroughly validated.

In the end, though, trust is the ultimate issue. Users have been trained to simply trust a website with an SSL certificate, and to trust sites with EV certificates even more. In fact, there have been a number of marketing campaigns targeted at convincing users that the “Green Address Bar” means that the website is completely trustworthy. And they’ve been pretty effective. But, as with most marketing, they didn’t quite tell the truth. Sure, the EV certificate may mean that the site is more trustworthy, but it’s still possible that the certificate is fake.

A number of well-known CAs have been compromised in recent years, DigiNotar and Comodo being two of the more high-profile examples. In both cases, it became possible to create rogue certificates for any website the attacker wanted to hijack. That certificate plus some creative DNS poisoning, and the attacker suddenly looks like your bank, or Google, or whatever site they want to be. And they’ll have a nice shiny green EV certificate.

So how do we fix this? Well, one way would be to use the certificate revocation system that already exists within PKI. If a certificate is stolen, or a false certificate is created, the CA has the ability to add that certificate’s serial number to the revocation system. When a user tries to load a site with a bad certificate, a warning is displayed telling the user that the certificate is not to be trusted.

Checking revocation of a certificate takes time, and what happens if the revocation server is down? Should the browser let the user go to the site anyway? Or should it block by default? The more secure option is to block, of course, but most users won’t understand what’s going on. So most browser manufacturers have either disabled revocation checking completely, or they default to allowing a user to access the site when the revocation site is slow or unavailable.

Without the ability to verify if a certificate is valid or not, there can be no real trust in the security of the connection, and that’s a problem. Perhaps one way to fix this problem is to disconnect the revocation process from the process of loading the webpage. If the revocation check happened in parallel to the page loading, it shouldn’t interfere with the speed of the page load. Additional controls can be put into place to prevent any data from being sent to the remote site without a warning until the revocation check completes. In this manner, the revocation check can take a few seconds to complete without impeding the use of the site. And after the first page load, the revocation information is cached anyway, so subsequent page loads are unaffected.

Another option, floated by the browser vendors themselves, is to have the vendors host the revocation information, which is then passed on to the browsers when they load. This way the revocation process can be handled outside of the CAs, covering situations such as those caused by a CA being compromised. Another idea would be to use short-term certificates that expire quickly, dropping the need for revocation checks entirely.

It’s unclear what direction the market will move on this issue. It has been over two years since the attacks on DigiNotar and Comodo, and the sense of immediacy seems to have passed. At the moment, the only real fix is user education. But with the marketing departments of SSL vendors working to convince users of the security of SSL, that seems unlikely.

BSides Delaware 2013

The annual BSides Delaware conference took place this past weekend, November 8th and 9th. BSides Delaware is a free, community-driven security event held at the Wilmington University New Castle campus. The community is quite open, welcoming seasoned professionals, newcomers, curious individuals, and even children. A number of families attended, bringing their children along to learn and have fun.

I was fortunate enough to speak at last year’s BSides and was part of the staff for this year’s event. There were two tracks of talks, many of which were recorded and are already online thanks to Adrian Crenshaw, the IronGeek. Adrian has honed his video skills and had every recording online by the closing ceremonies on Saturday evening.

In all, there were more than 25 talks over the course of two days, covering a wide variety of topics: logging, Bitcoin, forensics, and more. While most speakers were established security professionals, there were a few new speakers striving to make a name for themselves.

This year also included a FREE wireless essentials training class. The class was taught by a team of world-class instructors including Mike Kershaw (drag0rn), author of the immensely popular Kismet wireless tool, Russell Handorf from the FBI Cyber Squad, and Rick Farina, lead developer for Pentoo. The class covered everything from wireless basics to software-defined radio hacking. An absolutely amazing class.

In addition to the talks, BSides featured not one but two lockpick villages. Both Digital Trust and Toool were present. The lockpick villages were a big hit with seasoned professionals as well as the very young. It’s amazing to see how adept a young child can be with a lockpick.

Hackers for Charity was present as well with a table of goodies for sale. They also held a silent (and not so silent) auction where all proceeds went to the charity. Hackers for Charity raises money to help with a variety of projects they engage in across the world. From their website:

We employ volunteer hackers and technologists through our Volunteer Network and engage their skills in short projects designed to help charities that can not afford traditional technical resources.

We’ve personally witnessed how one person can have a profound impact on the world. By giving of their skills, time and talent our volunteers are profoundly impacting the world, one “hacker” at a time.

BSides 2013 was an amazing experience. This was my second year at the conference, and it’s remarkable how much it has grown. The dates for BSidesDE 2014 have already been announced: November 14th and 15th. Mark your calendars and make an effort to come join in the fun. It’s worth it.

Derbycon 2012

I spent this past weekend in Louisville, KY attending a relatively new security conference called Derbycon. This year was the second year they held the conference and the first year I spoke there. It was amazing, to say the least.

I haven’t been to many conventions, and this is the only security-oriented convention I’ve attended. When I first attended last year, it was with some trepidation. I knew that some of the attendees I’d be seeing were true rockstars of the security world. And, unfortunately, one of the people who was supposed to come with us was unable to attend. Of course, that person was the one member of our group who was well connected within the security world, and we were depending on them to introduce us to everyone.

It went well, nonetheless, and we were able to meet a lot of amazing people while we were there. Going back this year, we were able to rekindle friendships that started last year, and even make a few new ones. Derbycon has an absolutely amazing sense of family. Even the true rockstars of the con are down to earth enough to hang out with the newcomers.

And this year, I had the opportunity to speak. I submitted my CFP earlier in the year, not really expecting it to be chosen. Much to my surprise, though, it was. And so I spent some time putting together my talk and prepared to stand in front of the very people I looked up to. It was nerve-wracking to say the least. You can watch the video over on the Irongeek site, and you can find the slides in my presentation archive.

But I powered through it. I delivered my talk, and while it may not have been the most amazing talk, it was an accomplishment. I think it’s given me a bit more confidence in my own abilities, and I’m looking forward to giving another. In fact, I’ve since submitted a talk to BSides Delaware at the behest of the organizers. I haven’t heard back yet, but here’s hoping.

I’m already making plans to attend Derbycon 2013 and I hope to be a permanent fixture there for many years to come. Derbycon is an amazing place to go and something truly magnificent to experience. I may not be in the security industry, but they made me feel truly welcome despite my often dumb questions and inane comments. Rel1k, IronGeek, and Purehate have put together something special and I was proud to be a part of it again.

Jumping The Gap

I listened to a news story on NPR’s On The Media recently about “Cyber Warfare” and assessing its true threat. On the one hand, it seemed like another misguided report from a clueless news media. On the other hand, it did make me think a bit.

Much of the talk about Cyber Warfare revolves around attacking the various SCADA systems used to control the nation’s physical infrastructure. By today’s standards, many of these systems are quite primitive. Many of these systems are designed for a very specific purpose, rarely upgraded to run on modern operating systems, and very rarely, if ever, designed to be secure. The state of the art in security for many of these systems is to not allow outside access to the system.

Unfortunately, if numerous reports are to be believed, a good portion of the world’s infrastructure is connected to the Internet in one manner or another. The number of institutions that truly air gap their critical networks is alarmingly low. A researcher from IO Active, who provided some of the information for the aforementioned NPR article, used SHODAN to scour the Internet for SCADA systems. Why use SHODAN? Turns out, the simple act of scanning the Internet for these systems often resulted in the target systems crashing and going offline. If a simple network scan can kill one of these systems, then what hope do we have?

But air gapping is by no means a guarantee against attacks, since users of these systems may regularly switch between connected and non-connected systems and use some form of removable media to transfer files back and forth. There is precedent for this with the Stuxnet virus. According to reports, the Iranian nuclear facility was, in fact, air gapped. However, Stuxnet was designed to replicate onto USB drives and other media. Plug an infected USB drive into a targeted SCADA system and poof, instant infection across the air gap.

So what can be done here? How do we keep our infrastructure safe from attackers? Yes, even aging attackers…

Personally, I believe this comes down, again, to Defense in Depth. With the exception of not building it in the first place, I don’t believe that there is a way to prevent attacks. And any determined attacker will eventually get in, given time. So the only way to defend against this is to build a layered defense grid with a full monitoring back end. Expect that attackers will make it through one or two layers before being detected. Determined attackers may make it even further. But if you build your defenses with this in mind, you will stand a better chance at detecting and repelling these attacks.

I don’t believe that air gapping systems is a viable security strategy. If anything, it can result in a false sense of security for users and administrators. After all, if the system isn’t connected, how can it possibly be infected? Instead, start building in security from the start and deploy your defense in monitored layers. It works.

Whose Problem Is It Anyway?

This week, Adobe released a security patch for their CS5 product line. While Adobe releasing security patches isn’t really that surprising given their track record with vulnerable products, what is somewhat surprising are the circumstances surrounding the patch. Adobe released the patch somewhat reluctantly.

Sometime in May, possibly earlier, Adobe was made aware of a fairly severe security vulnerability in their CS5 product line. A specially crafted image file was enough to compromise the victim’s computer. Obviously this is a pretty severe flaw and should be fixed ASAP, right? Well, Adobe didn’t really see it that way. Their initial response to the problem was that users who wanted a fixed version would have to pay to upgrade to the CS6 product line, in which the flaw was patched. Eventually they decided to backport the patch to the CS5 version.

Adobe’s initial response and their eventual capitulation leads to a broader discussion. Given any security problem, or even any bug in general, who is responsible for fixing it? The vendor, of course, right? Well… Maybe?

In a perfect world, there would be no bugs, security or otherwise. In a slightly less perfect world, all bugs would be resolved before a product is retired. But neither world exists and bugs seem to prevail. So, given that, whose problem is it anyway?

Vendors offer a lot of justifications as to when they’ll patch and how long they’ll support something, and, of course, plenty of excuses. It’s not an easy problem for vendors, though, and some vendors put a lot of thought into their policies. They don’t always get them right, and there’s never a way to make everyone happy.

Patching generally follows a product lifecycle. While the product is supported, patching happens as a normal course of business. When a product is retired, some companies put together a structured support plan. For instance, when Cisco announces that a product has entered the End-of-Life cycle, they lay out a multi-year plan for support. Typically this involves regular software maintenance for a year, security releases for 2-3 years, and then hardware maintenance for the remainder. This gives businesses ample time to deal with finding a suitable replacement.

Unfortunately, not all vendors act responsibly and often customers are left high and dry when a product is suddenly obsoleted. Depending on the vendor, this sometimes leads to discussions about the possibility of legislation forcing vendors to support products, or to at least address security vulnerabilities. If something like this were to pass, where does it end? Are vendors forced to support products forever? Should they only have to fix severe security problems? And what constitutes a severe security problem?

There are a multitude of reasons that bugs, security or otherwise, are not dealt with. Some justifiable, others not. Working in networking, the primary excuse I’ve heard from hardware vendors over the years is that the management interface of their product is not intended to be on a public network where it can be attacked. Or that the management interfaces should be put behind a firewall where they can’t be attacked. These excuses are garbage, of course, but some vendors just continue to give them. And, unfortunately, you’re not always in a position to drop a vendor and move elsewhere. So, we do what we can to secure the systems and move on.

And sometimes the problem isn’t the vendor, but the customer. How long has it been since Microsoft phased out older versions of its Windows operating system? Windows XP is relatively recent, but it’s been a number of years since Windows 2000 was phased out. Or how about Windows 98, 95, and even Windows NT? And customers still have these deployed in their networks. Hell, I know of at least one OS/2 Warp system that’s still deployed in a Telco Central Office!

There is a basis for some regulation, however, and it may affect vendors. When the security of a particular product can significantly impact the public, it can be argued that regulation is necessary. The poster children for this argument are SCADA systems, which seem to be perpetually riddled with security holes, mostly due to outdated operating systems.

SCADA systems are what typically control the electrical grid or nuclear power plants. For obvious reasons, security problems with these systems are a deadly serious problem. I often hear that these systems should be air gapped from the Internet, but the lure of easy access and control often pushes users to ignore this advice.

So should SCADA systems be regulated? It’s obvious that the regulations in place already for the industries they are used in aren’t working, so what makes us think that more regulation will help? And if we regulate and force vendors to provide patches for security problems, what makes us think that industries will install them?

This is a complex problem and there are no easy answers. The best we can hope for is a competent administrator who knows how to handle security and deal with threats properly. Until then, let’s hope for incompetent criminals.

Protecting Sources in the 21st Century

Trust is key in many situations. This can be especially true for journalists interested in reporting on sensitive matters. If journalists couldn’t be trusted to protect the identity of their confidential sources, many news items we take for granted would never have been written, or perhaps they wouldn’t have included some of the crucial information they revealed. For instance, much of the critical information about the Watergate scandal was given to reporters by a confidential source who went by the name of Deep Throat.

Until recently, reporters made contact with their sources via anonymous phone calls, often from pay phones, secret meetings, and dead drops. The identity of sources could be kept secret fairly easily, especially if the meetings were carefully conducted in such a manner as to leave little or no trail for anyone to follow. This meant avoiding the use of phones as they were traceable. Additionally, many journalists were willing to risk jail time instead of revealing their sources.

With the advent of the Internet, it became possible to contact sources, both local and distant, quickly and conveniently via email or some form of instant messaging. The ability to reach out to a source and get an almost immediate answer means journalists can quickly deal with rapidly evolving stories. The anonymity of the Internet means that sources stay anonymous. It’s a win-win situation.

Or is it…

I was listening to an On The Media podcast recently and they featured a story about how reporters using the Internet are, in some cases, exposing their contacts without meaning to, often without even knowing it. You can listen to the story or read the transcript.

Before the Internet, phone conversations were sometimes considered an acceptable risk for contacting sources. After all, tracing a phone call was something it generally took a court order to accomplish. The Internet, however, is a completely different beast. Depending on the communications software used, tracing the owner of an account can be accomplished very easily by just about anyone. Software such as Netglub or Maltego can be used to quickly gather intel on someone, starting with something as small and simple as a single email address.

Email accounts are generally accessible from anywhere in the world, protected by only a username and password. Brute forcing software can be used to crack a password in a relatively short time allowing someone direct access to the mail stored in the account. And if the mail is sent in clear text, someone trying to identify the source can easily read email sent between the reporter and their source without anyone being the wiser.

Other accounts can be similarly attacked. The end result of identifying the source can be mere embarrassment, or perhaps the source losing their job. Or, as is often the case when foreign news sources are involved, the source can be hunted down and killed.

For a reporter, protecting a source has always been important, but in some cases, it’s a matter of life and death. In the past few years, unrest overseas in places such as Iran, Egypt, Syria, and others has shown that secure communication methods are necessary to help save the lives of those fighting for change. Governments have been ruthless in hunting down and eliminating those who would oppose them. Using secure methods for communication have become lifelines for opposition forces. Likewise, reporters and anyone else who interacts with these sorts of contacts should also be using whatever methods of security they can to ensure that their sources are protected.

Towards Building More Secure Networks

It is no surprise that security is at the forefront of everyone’s minds these days. From high profile breaches to script kiddies wreaking havoc across the Internet, it is obvious that there are some weaknesses that need to be addressed.

In most cases, complete network redesigns are out of the question. This can be extremely invasive and costly. However, it may be possible to augment the existing network in such a manner as to add additional layers of security. It may also open the door to making even more changes down the road.

So what do I mean by this? Allow me to explain…

Many networks are fairly simple with only a few subnets, typically a user and a server subnet. Sometimes there’s a bit of complexity on the user side, creating subnets per department, or subnets per building. Often this has more to do with manageability of users rather than security. Regardless, it’s a good practice that can be used to make a network more secure in the long run.

What is often neglected is the server side of things. Typically, there are one, maybe two subnets. Outside users are granted access to the standard web ports. Sometimes more ports such as ssh and ftp are opened for a variety of reasons. What administrators don’t realize, or perhaps don’t intend, is that they’re allowing outsiders direct access to their core servers, without any sort of security in front of them. Sure, sure, there might be a firewall, but a firewall is there to ensure you only come in on the proper ports, right? If your traffic is destined for port 80, it doesn’t matter if it’s malicious or not, the firewall lets it through anyway.

But what’s the alternative? What can be done instead? Well, what about sending outside traffic to a separate network where the systems being accessed are less critical, and designed to verify traffic before passing it on to your core servers? What I’m talking about is creating a DMZ network and forcing all users through a proxy. Even a simple proxy can help to prevent many attacks by merely dropping illegal traffic and not letting it through to the core server. Proxies can also be heavily fortified with HIDS and other security software designed to look for suspicious traffic and block it.
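The kind of “drop illegal traffic” check a hardened proxy performs can be sketched in a few lines. This is an illustrative toy, not a real proxy: the limits and rules below are hypothetical examples, and a production deployment would use something like a reverse proxy with mod_security or similar in front of it.

```python
# Minimal sketch of the sanity checks a proxy might apply before
# forwarding a request to a core web server. Rules are illustrative.

MAX_URI_LEN = 2048
ALLOWED_METHODS = {"GET", "POST", "HEAD"}

def is_request_sane(method, uri):
    if method not in ALLOWED_METHODS:
        return False                      # drop uncommon/abusive methods
    if len(uri) > MAX_URI_LEN:
        return False                      # oversized URIs (buffer abuse)
    if "../" in uri or "\x00" in uri:
        return False                      # path traversal / NUL injection
    return True

print(is_request_sane("GET", "/index.html"))        # True
print(is_request_sane("TRACE", "/"))                # False
print(is_request_sane("GET", "/../../etc/passwd"))  # False
```

Even checks this crude stop a surprising amount of automated attack traffic before it ever reaches the core server, which is the whole point of the DMZ layer.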

By adding in this DMZ layer, you’ve put a barrier between your server core and the outside world. This is known as layered defense. You can add additional layers as time and resources allow. For instance, I recommend segmenting away database servers as well as identity management servers. Adding this additional segmentation can be done over time as new servers come online and old servers are retired. The end goal is to add this additional security without disrupting the network as a whole.

If you have the luxury of building a new network from the ground up, however, make sure you build this in from the start. There is, of course, a breaking point. It makes sense to create networks to segregate servers by security level, but it doesn’t make sense to segregate purely to segregate. For instance, you may segregate database and identity management servers away from the rest of the servers, but segregating Oracle servers away from MySQL servers may not add much additional security. There are exceptions, but I suggest you think long and hard before you make such an exception. Are you sure that the additional management overhead is worth the security? There’s always a cost/benefit analysis to perform.

Segregating networks is just the beginning. The purpose here is to enhance security. By segregating networks, you can significantly reduce the number of clients that need to access a particular server. The whole world may need to access your proxy servers, but only your proxy servers need to access the actual web application servers. Likewise, only your web application servers need access to your database servers. Using this information, you can tighten down your firewall. But remember, a firewall is just a wall with holes in it. The purpose is to deflect random attacks, but it does little to nothing to prevent attacks on ports you’ve opened. For that, there are other tools.
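The tiered access rules described above can be expressed as a simple allowlist policy: the world reaches the proxies, only the proxies reach the web servers, and only the web servers reach the databases. The zone names and ports below are hypothetical; in practice these would be firewall rules, but the logic is the same.

```python
# Sketch of a tiered allowlist: anything not explicitly permitted is denied.
# Zone names and ports are hypothetical examples.

POLICY = {
    ("anywhere", "proxy", 443): True,   # the world may reach the proxies
    ("proxy", "web", 8080): True,       # only proxies reach web servers
    ("web", "db", 3306): True,          # only web servers reach databases
}

def allowed(src_zone, dst_zone, port):
    """Default-deny lookup: unlisted flows are blocked."""
    return POLICY.get((src_zone, dst_zone, port), False)

print(allowed("anywhere", "proxy", 443))  # True
print(allowed("anywhere", "db", 3306))    # False: outsiders never reach the DB tier
```

The default-deny lookup is the key design choice: each segment only ever accepts traffic from the one segment in front of it, which is exactly what segmentation buys you.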

At the very edge, simplistic firewalling and generally loose HIDS can be used to deflect most attacks. As you move further within the network, additional security can be used. For instance, deploying an IPS at the very edge of the network can result in the IPS being quickly overwhelmed. Of course, you can buy a bigger, better IPS, but to what end? Instead, you can move the IPS further into the network, placing it where it will be more effective. If you place it between the proxy and the web server, you’ve already ensured that the only traffic hitting the IPS is loosely validated HTTP traffic. With this knowledge, you can reduce the number of signatures the IPS needs to have, concentrating on high quality HTTP signatures. Likewise, an IPS between the web servers and database servers can be configured with high quality database signatures. You can, in general, direct the IPS to block any and all traffic that falls outside of those parameters.
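The payoff of narrowing the rule set can be sketched as a tiny signature matcher. Real IPS signatures (Snort, Suricata) are far richer than substring matches; the two signatures below are toy examples invented for illustration, showing that an IPS sitting behind the proxy only needs HTTP-focused rules.

```python
# Toy IPS: between proxy and web tier, only HTTP-specific signatures run.
# Signature names and patterns are illustrative, not real rule content.

HTTP_SIGNATURES = [
    ("sqli-attempt", "union select"),
    ("xss-attempt", "<script>"),
]

def inspect(payload):
    """Return the name of the first matching signature, or None if clean."""
    p = payload.lower()
    for name, pattern in HTTP_SIGNATURES:
        if pattern in p:
            return name
    return None

print(inspect("GET /?q=1 UNION SELECT password FROM users"))  # sqli-attempt
print(inspect("GET /index.html"))                             # None
```

A small, targeted rule set means less traffic to inspect per packet and fewer false positives, which is why placement inside the network beats a giant IPS at the edge.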

As the adage goes, there is no silver bullet for security. Instead, you need to use every weapon in your arsenal and put together a solid defense. By combining all of these techniques together, you can defend against many attacks. But remember, there’s always a way in. You will not be able to stop the most determined attacker, you can only hope to slow him down enough to limit his access. And remember, securing your network is only one aspect of security. Don’t forget about the other low hanging fruit such as SQL injection, cross site scripting, and other common application holes. You may have the most secure network in existence, but a simple SQL injection attack can result in a massive data breach.

Mega Fail

So this happened:

Popular file-sharing website Megaupload shut down
Megaupload shut down by feds, seven charged, four arrested
Megaupload assembles worldwide criminal defense
Department of Justice shutdown of rogue site MegaUpload shows SOPA is unnecessary
And then.. This happened:

Megaupload Anonymous hacker retaliation, nobody wins

And, of course, the day before all of this happened was the SOPA/PIPA protest.

Wow.. The government, right? SOPA/PIPA isn’t even on the books, people are up in arms over it, and then they go and seize one of the largest file sharing websites on the planet! We should all band together and immediately protest this illegal seizure!

But wait.. hang on.. Since when does jumping to conclusions help? Let’s take a look and see what exactly is going on here.. According to the indictment, this case went before a grand jury before any takedown was performed. Additionally, this wasn’t an all-of-a-sudden thing. Megaupload had been contacted in the past about copyright violations and failed to deal with them as per established law.

There are a lot of people who are against this action. In fact, the hacktivist group, Anonymous, decided to express their displeasure by performing DDoS attacks against high profile sites such as the US DoJ, MPAA, and RIAA. This doesn’t help things and may actually hurt the SOPA/PIPA protest in the long run.

Now I’m not going to say that the takedown was right and just, there’s just not enough information as of yet, and it may turn out that the government was dead wrong with this action. But at the moment, I have to disagree with those that point at this as an example of an illegal takedown. As a friend of mine put it, if the corner market is selling illegal bootleg videos, when they finally get raided, the store gets closed. Yes, there were legal uses of the services on the site, but the corner store sold milk too.

There are still many, many copyright and piracy issues to deal with. And it’s going to take a long time to deal with them. We need to be vigilant, and protesting when necessary does work. But jumping to conclusions like this, and then attacking sites such as the DoJ are not going to help the cause. There’s a time and a place for that, and I don’t believe we’re there yet.

Bringing Social To The Kernel

Imagine a world where you can login to your computer once and have full access to all of the functionality in your computer, plus seamless access to all of the web sites you visit on a daily basis. No more logging into each site individually, your computer’s operating system takes care of that for you.

That world may be coming quicker than you realize. I was listening to a recent episode of the PaulDotCom security podcast today. In this episode, they interviewed Jason Fossen, a SANS Security Faculty Fellow and instructor for SEC 505: Securing Windows. During the conversation, Jason mentioned some of the changes coming to the next version of Microsoft’s flagship operating system, Windows 8. What he described was, in a word, horrifying…

Not much information is out there about these changes yet, but it’s possible to piece together some of it. Jason mentioned that Windows 8 will have a broker system for passwords. Basically, Windows will store all of the passwords necessary to access all of the various services you interact with. Think something along the lines of 1Password or LastPass. The main difference being, this happens in the background with minimal interaction with the user. In other words, you never have to explicitly login to anything beyond your local Windows workstation.

Initially, Microsoft won’t have support for all of the various login systems out there. They seem to be focusing on their own service, Windows Live, and possibly Facebook. But the API is open, allowing third-parties to provide the necessary hooks to their own systems.

I’ve spent some time searching for more information and what I’m finding seems to indicate that what Jason was talking about is, in fact, the plan moving forward. TechRadar has a story about the Windows 8 Credential Vault, where website passwords are stored. The credential vault appears to be a direct competitor to 1Password and LastPass. As with other technologies that Microsoft has integrated in the past, this may be the death knell for password managers.

ReadWriteWeb has a story about the Windows Azure Access Control Service that is being used for Windows 8. Interestingly, this article seems to indicate that passwords won’t be stored on the Windows 8 system itself, but in a centralized “cloud” system. A system called the Access Control Service, or ACS, will store all of the actual login information, and the Windows 8 Password Broker will obtain tokens that are used for logins. This allows users to access their data from different systems, including tablets and phones, and retain full access to all of their login information.

Microsoft is positioning Azure ACS as a complete claims-based identity system. In short, this allows ACS to become a one-stop shop for single sign-on. I log into Windows and immediately have access to all of my accounts across the Internet.

Sounds great, right? In one respect, it is. But if you think about it, you’re making things REALLY easy for attackers. Now they can, with a single login and password, access every system you have access to. It doesn’t matter that you’ve used different usernames and passwords for your bank accounts. It doesn’t matter that you’ve used longer, more secure passwords for those sensitive sites. Once an attacker gains a foothold on your machine, it’s game over.

Jason also mentioned another chilling detail. You’ll be able to login to your local system using your Windows Live ID. So, apparently, if you forget your password for your local user, just login with your Windows Live ID. It’s all tied together. According to the TechRadar story, “if you forget your Windows password you can reset it from another PC using your Windows Live ID, so you don’t need to make a password restore USB stick any more.” They go on to say the following :

You’ll also have to prove your identity before you can ‘trust’ the PC you sync them to, by giving Windows Live a second email address or a mobile number it can text a security code to, so anyone who gets your Live ID password doesn’t get all your other passwords too – Windows 8 will make you set that up the first time you use your Live ID on a PC.

You can always sign in to your Windows account, even if you can’t get online – or if there’s a problem with your Live ID – because Windows 8 remembers the last password you signed in with successfully (again, that’s encrypted in the Password Vault).

With this additional tidbit of information, it would appear that an especially crafty attacker could even go as far as compromising your entire system, without actually touching your local machine. It may not be easy, but it looks like it’ll be significantly easier than it was before.

Federated identity is an interesting concept. And it definitely has its place. But, I don’t think tying everything together in this manner is a good move for security. Sure, you can use your Facebook ID (or Twitter, Google, OpenID, etc) already as a single login for many disparate sites. In fact, these companies are betting on you to do so. This ties all of your activity back to one central place where the data can be mined for useful and lucrative bits. And perhaps in the realm of a social network, that’s what you want. But I think there’s a limit to how wide a net you want to cast. But if what Jason says is true, Microsoft may be building the equivalent of the One Ring. ACS will store them all, ACS will verify them, ACS will authenticate them all, and to the ether supply them.