Becoming your own CA

SSL, as I mentioned in a previous blog entry, has some issues when it comes to trust. But regardless of the problems with SSL, it is a necessary part of the security toolchain. In certain situations, however, it is possible to overcome these trust issues.

Commercial providers are not the only entities capable of acting as a Certificate Authority. In fact, anyone can become a CA, and the tools to do so are available for free. Becoming your own CA is a fairly painless process, though you might want to brush up on your OpenSSL skills. And lest you think you can just start signing certificates and selling them to third parties, it’s not quite that simple. The well-known certificate authorities have worked with browser vendors to have their root certificates added as part of the browser installation process. You’ll have to convince the browser vendors to add your root certificate as well. Good luck.

Having your own CA gives you the means to import your own root certificate into your browser and use it to validate certificates within your network. And these certificates are good for more than just websites: LDAP, RADIUS, SMTP, and other common applications use standard SSL certificates for encrypting traffic and validating remote connections. But as mentioned above, be aware that unless a remote user has a copy of your root certificate, they will be unable to validate the authenticity of your signed certificates.

Using certificates signed by your own CA can provide you that extra trust level you may be seeking. Perhaps you configured your mail server to use your certificate for the POP and IMAP protocols. This makes it more difficult for an attacker to masquerade as either of those services without obtaining your signing certificate so they can create their own. This is especially true if you configure your mail client such that your root certificate is the only certificate that can be used for validation.

Using your own signed certificates for internal, non-public facing services provides an even better use-case. Attacks such as DNS cache poisoning make it possible for attackers to trick devices into using the wrong address for an intended destination. If these services are configured to only use your certificates and reject connection attempts from peers with invalid certificates, then attackers will only be able to impersonate the destination if they can somehow obtain a valid certificate signed by your signing certificate.

Sound good? Well, how do we go about creating our own root certificate and all the various machinery necessary to make this work? Fortunately, all of the necessary tools are open-source and part of most Linux distributions. For the purposes of this blog post, I will be explaining how this is accomplished using the CentOS 6.x Linux distribution. I will also endeavor to break down each command and explain what each parameter does. Much of this information can be found in the man pages for the various commands.

OpenSSL is installed as part of a base CentOS install. Included in the install is a directory structure in /etc/pki. All of the necessary tools and configuration files are located in this directory structure, so instead of reinventing the wheel, we’ll use the existing setup.

To get started, edit the default openssl.cnf configuration file. You can find this file in /etc/pki/tls. There are a few options you want to change from their defaults. Search for the following headers and change the options listed within.

[CA_default]
default_md = sha256

[req]
default_bits = 4096
default_md = sha256
  • default_md : This option defines the default message digest to use. Switching this to sha256 results in a stronger message digest being used.
  • default_bits : This option defines the default key size. 2048 bits is generally considered the minimum these days; I recommend setting this to 4096.

Once the openssl.cnf file is set up, the rest of the process is painless. First, switch into the correct directory.

cd /etc/pki/tls/misc

Next, use the CA command to create a new CA.

[root@localhost misc]# ./CA -newca
CA certificate filename (or enter to create)

Making CA certificate ...
Generating a 4096 bit RSA private key
 ...................................................................................................................................................................................................................................................++
.......................................................................++
writing new private key to '/etc/pki/CA/private/./cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:cert.example.com
Email Address []:certadmin@example.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/./cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798347 (0xf837fc8d719b304b)
        Validity
            Not Before: Feb 13 18:37:14 2014 GMT
            Not After : Feb 12 18:37:14 2017 GMT
        Subject:
            countryName               = US
            stateOrProvinceName       = MyState
            organizationName          = My Company Inc.
            commonName                = cert.example.com
            emailAddress              = certadmin@example.com
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

            X509v3 Basic Constraints:
                CA:TRUE
Certificate is to be certified until Feb 12 18:37:14 2017 GMT (1095 days)

Write out database with 1 new entries
Data Base Updated

And that’s about it. The root certificate is located in /etc/pki/CA/cacert.pem. This file can be made public without compromising the security of your system. This is the same certificate you’ll want to import into your browser, email client, etc. in order to validate any certificates you sign.
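Before handing the root certificate out, it’s worth a quick inspection to confirm the subject, validity window, digest, and CA flag are what you expect. The sketch below builds a throwaway self-signed certificate in a temp directory so the commands are runnable anywhere; on the CentOS layout above you would point them at /etc/pki/CA/cacert.pem instead.

```shell
# Stand-in root certificate (on the real system: /etc/pki/CA/cacert.pem)
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 30 \
    -subj "/C=US/O=My Company Inc./CN=cert.example.com" \
    -keyout "$tmp/cakey.pem" -out "$tmp/cacert.pem" 2>/dev/null

# Subject and validity window
openssl x509 -in "$tmp/cacert.pem" -noout -subject -dates

# Confirm the sha256 digest took effect and the CA flag is set
openssl x509 -in "$tmp/cacert.pem" -noout -text | grep -E 'Signature Algorithm|CA:'
```

The same three x509 invocations work unchanged against any PEM certificate, including the one the CA script just created.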

Now you can start signing certificates. First, you’ll need to create a CSR on the server where the certificate will be installed. The following command creates both the private key and the CSR for you. I recommend using the server name as the name of both the CSR and the key.

openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
  • openssl : The OpenSSL command itself
  • req : This option tells OpenSSL that we are performing a certificate signing request (CSR) operation.
  • -newkey : This option creates a new certificate request and a new private key. It will prompt the user for the relevant field values. The rsa:4096 argument indicates that we want to use the RSA algorithm with a key size of 4096 bits.
  • -keyout : This gives the filename to write the newly created private key to.
  • -out : This specifies the output filename to write to.
[root@localhost misc]# openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
Generating a 4096 bit RSA private key
.....................................................................................................................++
..........................................................................................................................................................................................................................................................................................................................................................................................................++
writing new private key to 'www.example.com.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:www.example.com
Email Address []:hostmaster@example.com
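The interactive prompts can also be scripted. As a sketch (the subject string and filenames mirror the example above and are illustrative), the same key and CSR can be produced in one non-interactive command. Note that -nodes leaves the private key unencrypted, a convenience for servers that must restart unattended, at the cost of weaker protection of the key at rest.

```shell
tmp=$(mktemp -d)

# Key + CSR in one shot; -subj supplies the DN fields non-interactively
openssl req -newkey rsa:4096 -nodes \
    -subj "/C=US/ST=MyState/L=MyCity/O=My Company Inc./CN=www.example.com" \
    -keyout "$tmp/www.example.com.key" -out "$tmp/www.example.com.csr" 2>/dev/null

# Sanity-check: the CSR parses and carries the expected common name
openssl req -in "$tmp/www.example.com.csr" -noout -subject
```

This is handy when provisioning many servers, since the whole request step can be dropped into a script.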

Once you have the CSR, copy it over to the server you’re using to sign certificates. Unfortunately, the existing CA script only operates on a hard-coded filename (newreq.pem) rather than letting you name the CSR you want to sign, so we’ll wrap it in a small script of our own. First, create a new directory to hold the CSRs.

mkdir /etc/pki/tls/csr

Next, create the sign_cert.sh script in the directory we just created. This file needs to be executable.

#!/bin/sh
# Usage: ./sign_cert.sh <domain>
# If you are replacing an existing certificate, revoke last year's first:
#   openssl ca -revoke <domain>.<year>.crt

DOMAIN="$1"
YEAR=$(date +%Y)

# The bundled CA script only signs a file named newreq.pem,
# so point that name at the CSR for this domain.
rm -f newreq.pem
ln -s "$DOMAIN.csr" newreq.pem
/etc/pki/tls/misc/CA -sign
mv newcert.pem "$DOMAIN.$YEAR.crt"

That’s all you need to start signing certificates. Place the CSR you transferred from the other server into the csr directory and use the script we just created to sign it.

[root@localhost csr]# ./sign_cert.sh www.example.com
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
        Subject:
            countryName = US
            stateOrProvinceName = MyState
            localityName = MyCity
            organizationName = My Company Inc.
            commonName = www.example.com
            emailAddress = hostmaster@example.com
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Comment:
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier:
                3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Certificate is to be certified until Feb 13 18:48:55 2015 GMT (365 days)
Sign the certificate? [y/n]:y

1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=MyState, O=My Company Inc., CN=cert.example.com/emailAddress=certadmin@example.com
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
    Subject: C=US, ST=MyState, L=MyCity, O=My Company Inc., CN=www.example.com/emailAddress=hostmaster@example.com
    Subject Public Key Info:
        Public Key Algorithm: rsaEncryption
            Public-Key: (4096 bit)
            Modulus:
                00:d9:5a:cc:87:f0:e5:1e:6f:a0:25:cd:fe:36:64:
                6c:68:ae:2f:3e:7e:93:93:a4:69:6f:f1:28:c1:c2:
                4d:5f:3c:3a:61:2e:4e:f0:90:89:54:48:d6:03:83:
                fb:ac:1e:7c:9a:e8:be:cf:c9:8f:93:41:27:3e:1b:
                66:63:db:a1:54:cb:f7:1d:0b:71:bc:5f:80:e1:30:
                e4:28:14:68:1c:09:ba:d0:aa:d3:e6:2b:24:cd:21:
                67:99:dc:8b:7a:2c:94:d0:ed:8e:02:5f:2f:52:06:
                09:0e:8a:b7:bf:64:e8:d7:bf:94:94:ad:80:34:57:
                32:89:51:00:fe:fd:8c:7d:17:35:4c:c7:5f:5b:58:
                f4:97:9b:21:42:9e:a9:6c:86:5f:f4:35:98:a5:81:
                62:9d:fa:15:07:9d:29:25:38:2b:5d:22:74:58:f8:
                58:56:1c:e9:65:a3:62:b5:a7:66:17:95:12:21:ca:
                82:12:90:b6:8a:8d:1f:79:e8:5c:f4:f9:6c:3a:44:
                f9:3a:3f:29:0d:2e:bf:51:98:9f:58:21:e5:d9:ee:
                78:54:ad:5a:a2:6f:d1:85:9a:bc:b9:21:92:e8:76:
                80:b8:0f:96:77:9a:99:5e:3b:06:bb:6f:da:1c:6e:
                f2:10:16:69:ba:2b:57:c8:1a:cc:b6:e4:0c:1d:b2:
                a6:b7:b9:6c:37:2e:80:13:46:a1:46:c3:ca:d6:2b:
                cd:f7:ba:38:98:74:15:7f:f1:67:03:8e:24:89:96:
                55:31:eb:d8:44:54:a5:11:04:59:e6:73:59:42:ed:
                aa:a3:37:13:ab:63:ab:ef:61:65:0a:af:2f:71:91:
                23:40:7d:f8:e8:a1:9d:cf:3f:e5:33:d9:5f:d2:4d:
                06:d0:2c:70:59:63:06:0f:2a:59:ae:ae:12:8d:f4:
                6c:fd:b2:33:76:e8:34:0f:1f:24:91:2a:a8:aa:1b:
                11:8a:0b:86:f3:67:b8:be:b7:a0:06:02:4a:76:ef:
                dd:ed:c4:a9:03:a1:8c:b0:39:9d:35:98:7f:04:1c:
                24:8a:1c:7c:6f:35:56:71:ee:b5:36:b7:3f:14:04:
                eb:48:a1:4f:6f:8e:43:7c:8b:36:4a:bf:ba:e9:8b:
                d9:38:0c:76:24:e9:a3:38:bf:4e:86:fd:31:4d:c3:
                6f:16:07:09:dd:d8:6b:0b:9d:4d:97:eb:1f:92:21:
                b2:a5:f9:d8:55:61:85:d2:99:97:bc:27:12:be:eb:
                55:86:ee:1f:f5:6f:a7:c5:64:2f:4e:c2:67:a3:52:
                97:7a:d9:66:89:05:6a:59:ed:69:7b:22:10:2b:a1:
                14:4e:5d:b8:f0:21:e9:11:d0:25:ae:bc:05:2b:c3:
                db:ad:cf
            Exponent: 65537 (0x10001)
    X509v3 extensions:
        X509v3 Basic Constraints:
            CA:FALSE
        Netscape Comment:
            OpenSSL Generated Certificate
        X509v3 Subject Key Identifier:
            3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
        X509v3 Authority Key Identifier:
            keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Signature Algorithm: sha256WithRSAEncryption
     ca:66:b2:55:64:e6:40:a5:85:19:11:66:0d:63:89:fb:0d:3a:
     0c:ec:fd:cb:5c:93:44:1e:3f:1b:ca:f5:3d:85:ab:0a:0b:dc:
     f3:18:1d:1f:ec:85:ff:f3:82:52:9e:c7:12:19:07:e9:6a:82:
     bd:32:f6:d1:19:b2:b7:09:1c:34:d7:89:45:7e:51:4d:42:d6:
     4e:78:b6:39:b3:76:58:f8:20:57:b3:d8:7b:e0:b3:2f:ce:9f:
     a2:59:de:f6:31:f2:09:1c:91:3b:7f:97:61:cb:11:a4:b4:73:
     ab:47:64:e8:93:07:98:d5:47:75:8d:9a:8f:a3:8f:e8:f4:42:
     7e:b8:1b:e8:36:72:13:93:f9:a8:cc:6d:b4:85:a7:af:94:fe:
     f3:6e:76:c2:4d:78:c3:c2:0b:a4:48:27:d3:eb:52:c3:46:14:
     c1:26:03:28:a0:53:c7:db:59:c9:95:b8:d9:f0:d9:a8:19:4a:
     a7:0f:81:ad:3c:e1:ec:f2:21:51:0d:bc:f9:f9:f6:b6:75:02:
     9f:43:de:e6:2f:9b:77:d3:c3:72:6f:f6:18:d7:a3:43:91:d2:
     04:2a:c8:bf:67:23:35:b7:41:3f:d1:63:fe:dc:53:a7:26:e9:
     f4:ee:3b:96:d5:2a:9c:6d:05:3d:27:6e:57:2f:c9:dc:12:06:
     2c:cf:0c:1b:09:62:5c:50:82:77:6b:5c:89:32:86:6b:26:30:
     d2:6e:33:20:fc:a6:be:5a:f0:16:1a:9d:b7:e0:d5:d7:bb:d8:
     35:57:d2:be:d5:07:98:b7:3c:18:38:f9:94:4c:26:3a:fe:f2:
     ad:40:e6:95:ef:4b:a9:df:b0:06:87:a2:6c:f2:6a:03:85:3b:
     97:a7:ef:e6:e5:d9:c3:57:87:09:06:ae:8a:5a:63:26:b9:35:
     29:a5:87:4b:7b:08:b9:63:1c:c3:65:7e:97:ae:79:79:ed:c3:
     a3:36:c3:87:1f:54:fe:0a:f1:1a:c1:71:3d:bc:9e:36:fc:da:
     03:2b:61:b5:19:0c:d7:4d:19:37:61:45:91:4c:c9:7a:5b:00:
     cd:c2:2d:36:f9:1f:c2:b1:97:2b:78:86:aa:75:0f:0a:7f:04:
     85:81:c5:8b:be:af:a6:a7:7a:d2:17:26:7a:86:0d:f8:fe:c0:
     27:a8:66:c7:92:cd:c5:34:99:c9:8e:c1:25:f3:98:df:4e:48:
     37:4a:ee:76:4a:fa:e4:66:b4:1f:cd:d8:e0:25:fd:c7:0b:b3:
     12:af:bb:b7:29:98:5e:86:f2:12:8e:20:c6:a9:40:6f:39:14:
     8b:71:9f:98:22:a0:5b:57:d1:f1:88:7d:86:ad:19:04:7b:7d:
     ee:f2:c9:87:f4:ca:06:07
-----BEGIN CERTIFICATE-----
MIIGBDCCA+ygAwIBAgIJAPg3/I1xmzBMMA0GCSqGSIb3DQEBCwUAMHoxCzAJBgNV
BAYTAlVTMRAwDgYDVQQIDAdNeVN0YXRlMRgwFgYDVQQKDA9NeSBDb21wYW55IElu
Yy4xGTAXBgNVBAMMEGNlcnQuZXhhbXBsZS5jb20xJDAiBgkqhkiG9w0BCQEWFWNl
cnRhZG1pbkBleGFtcGxlLmNvbTAeFw0xNDAyMTMxODQ4NTVaFw0xNTAyMTMxODQ4
NTVaMIGLMQswCQYDVQQGEwJVUzEQMA4GA1UECAwHTXlTdGF0ZTEPMA0GA1UEBwwG
TXlDaXR5MRgwFgYDVQQKDA9NeSBDb21wYW55IEluYy4xGDAWBgNVBAMMD3d3dy5l
eGFtcGxlLmNvbTElMCMGCSqGSIb3DQEJARYWaG9zdG1hc3RlckBleGFtcGxlLmNv
bTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANlazIfw5R5voCXN/jZk
bGiuLz5+k5OkaW/xKMHCTV88OmEuTvCQiVRI1gOD+6wefJrovs/Jj5NBJz4bZmPb
oVTL9x0LcbxfgOEw5CgUaBwJutCq0+YrJM0hZ5nci3oslNDtjgJfL1IGCQ6Kt79k
6Ne/lJStgDRXMolRAP79jH0XNUzHX1tY9JebIUKeqWyGX/Q1mKWBYp36FQedKSU4
K10idFj4WFYc6WWjYrWnZheVEiHKghKQtoqNH3noXPT5bDpE+To/KQ0uv1GYn1gh
5dnueFStWqJv0YWavLkhkuh2gLgPlneamV47Brtv2hxu8hAWaborV8gazLbkDB2y
pre5bDcugBNGoUbDytYrzfe6OJh0FX/xZwOOJImWVTHr2ERUpREEWeZzWULtqqM3
E6tjq+9hZQqvL3GRI0B9+Oihnc8/5TPZX9JNBtAscFljBg8qWa6uEo30bP2yM3bo
NA8fJJEqqKobEYoLhvNnuL63oAYCSnbv3e3EqQOhjLA5nTWYfwQcJIocfG81VnHu
tTa3PxQE60ihT2+OQ3yLNkq/uumL2TgMdiTpozi/Tob9MU3DbxYHCd3YawudTZfr
H5IhsqX52FVhhdKZl7wnEr7rVYbuH/Vvp8VkL07CZ6NSl3rZZokFalntaXsiECuh
FE5duPAh6RHQJa68BSvD263PAgMBAAGjezB5MAkGA1UdEwQCMAAwLAYJYIZIAYb4
QgENBB8WHU9wZW5TU0wgR2VuZXJhdGVkIENlcnRpZmljYXRlMB0GA1UdDgQWBBQ6
7is6c6bDXDmQ6oU/2nEze5FNfzAfBgNVHSMEGDAWgBQU/BS89KU+awxYO987JjVG
oL7s8TANBgkqhkiG9w0BAQsFAAOCAgEAymayVWTmQKWFGRFmDWOJ+w06DOz9y1yT
RB4/G8r1PYWrCgvc8xgdH+yF//OCUp7HEhkH6WqCvTL20RmytwkcNNeJRX5RTULW
Tni2ObN2WPggV7PYe+CzL86folne9jHyCRyRO3+XYcsRpLRzq0dk6JMHmNVHdY2a
j6OP6PRCfrgb6DZyE5P5qMxttIWnr5T+8252wk14w8ILpEgn0+tSw0YUwSYDKKBT
x9tZyZW42fDZqBlKpw+BrTzh7PIhUQ28+fn2tnUCn0Pe5i+bd9PDcm/2GNejQ5HS
BCrIv2cjNbdBP9Fj/txTpybp9O47ltUqnG0FPSduVy/J3BIGLM8MGwliXFCCd2tc
iTKGayYw0m4zIPymvlrwFhqdt+DV17vYNVfSvtUHmLc8GDj5lEwmOv7yrUDmle9L
qd+wBoeibPJqA4U7l6fv5uXZw1eHCQauilpjJrk1KaWHS3sIuWMcw2V+l655ee3D
ozbDhx9U/grxGsFxPbyeNvzaAythtRkM100ZN2FFkUzJelsAzcItNvkfwrGXK3iG
qnUPCn8EhYHFi76vpqd60hcmeoYN+P7AJ6hmx5LNxTSZyY7BJfOY305IN0rudkr6
5Ga0H83Y4CX9xwuzEq+7tymYXobyEo4gxqlAbzkUi3GfmCKgW1fR8Yh9hq0ZBHt9
7vLJh/TKBgc=
-----END CERTIFICATE-----
Signed certificate is in newcert.pem

The script automatically renamed the newly signed certificate. In the above example, the signed certificate is in www.example.com.2014.crt. Transfer this file back to the server it belongs on and you’re all set to start using it.
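Before deploying the new certificate, it’s worth confirming that it really chains back to your root. The sketch below is self-contained (throwaway names, and it signs with a plain openssl x509 -req rather than the CA wrapper script), but the final openssl verify line is the part you would run against your real cacert.pem and the certificate you just signed.

```shell
tmp=$(mktemp -d)

# Throwaway root CA (stand-in for /etc/pki/CA/cacert.pem and its key)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 30 \
    -subj "/O=My Company Inc./CN=cert.example.com" \
    -keyout "$tmp/cakey.pem" -out "$tmp/cacert.pem" 2>/dev/null

# Server key and CSR, then sign the CSR with the CA key
openssl req -newkey rsa:2048 -nodes -sha256 \
    -subj "/O=My Company Inc./CN=www.example.com" \
    -keyout "$tmp/www.key" -out "$tmp/www.csr" 2>/dev/null
openssl x509 -req -in "$tmp/www.csr" -CA "$tmp/cacert.pem" \
    -CAkey "$tmp/cakey.pem" -CAcreateserial -days 365 -sha256 \
    -out "$tmp/www.crt" 2>/dev/null

# A good certificate prints "<file>: OK"
openssl verify -CAfile "$tmp/cacert.pem" "$tmp/www.crt"
```

If verify reports an error here, the certificate was signed by some other key and clients trusting your root will reject it.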

That’s it! You’re now a certificate authority with the power to sign your own certificates. Don’t let all that power go to your head!

SSL “Security”

SSL, a cryptographic security protocol, was created by Netscape in the mid-1990s. Today, SSL and its replacement, TLS, are used by web browsers and other programs to create secure connections between devices across the Internet.

SSL provides the means to cryptographically secure a tunnel between endpoints, but there is another aspect of security that is missing: trust. While a user may be confident that the data received from the other end of the SSL tunnel was sent by the remote system, the user cannot be confident that the remote system is the system it claims to be. This problem was partially solved through the use of a Public Key Infrastructure, or PKI.

PKI, in a nutshell, provides the trust structure needed to make SSL secure. Certificates are issued by a certificate authority or CA. The CA cryptographically signs the certificate, enabling anyone to verify that the certificate was issued by the CA. Other PKI constructs offer validation of the registrant, indexing of the public keys, and a key revocation system. It is within these other constructs that the problems begin.

When SSL certificates were first offered for sale, the CAs spent a great deal of time and energy verifying the identity of the registrant. Often, paper copies of the proof had to be sent to the CA before a certificate would be issued, and the process could take several days. More recently, the bar for entry has been lowered significantly: certificates are now issued via an automated process that requires only that the registrant click on a link sent to one of the email addresses listed in the Whois information. This lack of thorough verification has significantly eroded the trust a user can place in the authenticity of a certificate.

CAs have responded to this problem by offering different levels of SSL certificates. Entry-level certificates are verified automatically via the click of a link. Higher-level SSL certificates have additional identity verification steps. And at the highest level, an Extended Validation (EV) certificate requires a thorough verification of the registrant’s identity. Often, these different levels of SSL certificates are marketed as stronger levels of encryption. The reality, however, is that the level of encryption for each of these certificates is exactly the same. The only difference is the amount of verification performed by the CA.

Despite the extra verification, these certificates are almost indistinguishable from one another. With the exception of EV certificates, the only noticeable difference between the levels is the identity detail obtained before the certificate is issued. An EV certificate, on the other hand, can only be obtained from certain vendors, and shows up in a web browser with a special green overlay. The intent here seems to be that websites with EV certificates can be trusted more because the identity of the organization running the website was more thoroughly validated.

In the end, though, trust is the ultimate issue. Users have been trained to simply trust a website with an SSL certificate, and to trust sites with EV certificates even more. In fact, there have been a number of marketing campaigns targeted at convincing users that the “Green Address Bar” means the website is completely trustworthy, and they’ve been pretty effective. But, as with most marketing, they didn’t quite tell the truth. Sure, the EV certificate may mean that the site is more trustworthy, but it’s still possible that the certificate is fake.

A number of well-known CAs have been compromised in recent years, DigiNotar and Comodo being two of the more high-profile examples. In both cases, it became possible for rogue certificates to be created for any website the attacker wanted to hijack. That certificate plus some creative DNS poisoning, and the attacker suddenly looks like your bank, or Google, or whatever site the attacker wants to be. And they’ll have a nice shiny green EV certificate.

So how do we fix this? Well, one way would be to use the certificate revocation system that already exists within PKI. If a certificate is stolen, or a false certificate is created, the CA has the ability to put that certificate’s signature into the revocation system. When a user tries to load a site with a bad certificate, a warning is displayed telling the user that the certificate is not to be trusted.

Checking the revocation status of a certificate takes time, though, and what happens if the revocation server is down? Should the browser let the user go to the site anyway? Or should it block by default? The more secure option is to block, of course, but most users won’t understand what’s going on. So most browser manufacturers have either disabled revocation checking completely or default to allowing a user to access the site when the revocation server is slow or unavailable.

Without the ability to verify if a certificate is valid or not, there can be no real trust in the security of the connection, and that’s a problem. Perhaps one way to fix this problem is to disconnect the revocation process from the process of loading the webpage. If the revocation check happened in parallel to the page loading, it shouldn’t interfere with the speed of the page load. Additional controls can be put into place to prevent any data from being sent to the remote site without a warning until the revocation check completes. In this manner, the revocation check can take a few seconds to complete without impeding the use of the site. And after the first page load, the revocation information is cached anyway, so subsequent page loads are unaffected.

Another option, floated by the browser builders themselves, is to have the browser vendors host the revocation information. This information is then passed on to the browsers when they’re loaded. This way the revocation process can be handled outside of the CAs, handling situations such as those caused by a CA being compromised. Another idea would be to use short term certificates that expire quickly, dropping the need for revocation checks entirely.

It’s unclear what direction the market will move on this issue. It has been over two years since the attacks on DigiNotar and Comodo, and the sense of urgency seems to have passed. At the moment, the only real fix is user education, but with the marketing departments of SSL vendors working to convince users of the security of SSL, that seems unlikely.

BSides Delaware 2013

The annual BSides Delaware conference took place this past weekend, November 8th and 9th. BSides Delaware is a free, community-driven security event that takes place at the Wilmington University New Castle campus. The community is quite open, welcoming seasoned professionals, newcomers, curious individuals, and even children. There were a number of families who attended, bringing their children with them to learn and have fun.

I was fortunate enough to be able to speak at last year’s BSides and was part of the staff for this year’s event. There were two tracks for talks, many of which were recorded and are already online thanks to Adrian Crenshaw, the IronGeek. Adrian has honed his video skills and was able to have every recording online by the closing ceremonies on Saturday evening.

In all there were more than 25 talks over the course of two days, covering a wide variety of topics: logging, Bitcoin, forensics, and more. While most speakers were established security professionals, there were a few new speakers striving to make a name for themselves.

This year also included a FREE wireless essentials training class. The class was taught by a team of world-class instructors including Mike Kershaw (dragorn), author of the immensely popular Kismet wireless tool; Russell Handorf of the FBI Cyber Squad; and Rick Farina, lead developer of Pentoo. The class covered everything from wireless basics to software-defined radio hacking. An absolutely amazing class.

In addition to the talks, BSides also featured not one but two lockpick villages; both Digital Trust and TOOOL were present. The lockpick villages were a big hit with seasoned professionals as well as the very young. It’s amazing to see how adept a young child can be with a lockpick.

Hackers for Charity was present as well with a table of goodies for sale. They also held a silent (and not-so-silent) auction where all proceeds went to the charity. Hackers for Charity raises money to help with a variety of projects they engage in across the world. From their website:

We employ volunteer hackers and technologists through our Volunteer Network and engage their skills in short projects designed to help charities that can not afford traditional technical resources.

We’ve personally witnessed how one person can have a profound impact on the world. By giving of their skills, time and talent our volunteers are profoundly impacting the world, one “hacker” at a time.

BSides 2013 was an amazing experience. This was my second year at the conference and it’s amazing how it has grown. The dates for BSidesDE 2014 have already been announced, November 14th and 15th. Mark your calendars and make an effort to come join in the fun. It’s worth it.

Pebble Review

In April of 2012, a Kickstarter project was launched by a company aiming to create an electronic watch that served as a companion to your smartphone. A month later, the project exceeded its funding goal by over 100%, closing at over $10 million in pledges. Happily, I was one of the more than 68,000 people who pledged. I received my Pebble about a month ago and I’ve been wearing it ever since.

The watch itself is fairly simple: a rectangular unit with an e-paper display, four buttons, and a rubberized plastic strap. The screen resolution is 144×168, plenty of pixels for some fairly impressive detail. The watch communicates with your mobile phone (Android or iPhone only) via a Bluetooth connection. All software updates and app installation occur over the Bluetooth connection. There is a 3-axis accelerometer as well as a pretty standard vibrating motor for silent alerts.

According to the official Pebble FAQ, battery life is 7+ days on a single charge, but this depends on your overall use of the device. The more alerts you receive, the more the backlight comes on, and the more apps you use on the device, the shorter your battery life.

Pebble is still in the process of building the initial run of watches for backers. Black watches, being the majority of the orders, were built first; other colors have come online in more recent weeks. Pebble has a website where interested parties can track how many Pebbles have been built and shipped.

I’ve been pretty impressed with the watch thus far. Pebble has been fairly responsive to inquiries I’ve made, and they seem dedicated to making sure they have a top quality product. Of course, as is typical on the Internet, not everyone is happy. There seem to be a lot of complaints about communication, how long it’s taking to get watches, and about the features themselves.

It’s hard to say whether these complaints have any merit, though. For starters, I can’t imagine it’s a simple task to design and build 68,000 watches in a short period of time. And to complicate matters further, it seems that many backers of Kickstarter projects don’t understand the difference between being a backer and being a customer.

When you back a Kickstarter project, you’re pledging money to help start the project. As a “reward” for contributing, if the project is successful, you are entitled to whatever the project owners have designated for your level of contribution. The key part being: if the project is successful. Some projects take longer than others, and timelines often slip. That said, I’ve only been part of one Kickstarter that has failed, and even that one is being resurrected by other interested parties.

But there are some legitimate complaints, some that can be addressed, and others that likely won’t. For instance, I’ve noticed that with recent firmware releases, the battery life on my watch had dropped considerably. Based on communication with the developers, they are aware of this and are actively working to resolve it. I’m not sure what the problem is, exactly, but I’m confident they’ll have it fixed in the next firmware update.

The battery indicator is a source of frequent discussion. Right now, there’s no indication of battery life until the battery is running low. And that indicator doesn’t show on the watchface; it only shows when you are in other menus. This, in my opinion, is a poor UI choice. I’d much rather see a battery indicator option available for the watchface itself.

Menu layout was also a frequent source of frustration for users. In previous firmware releases, you had to actively go to the watchface you wanted. Recent releases changed this so that the watchface is the default view and other screens are chosen as needed. The behavior of the navigation buttons was also updated to reflect this new choice.

So Pebble continues to improve over time. It’s an iterative process that will take some time to get right. I’m eager to see what future releases will bring. Next week, Pebble is scheduled to release the watch SDK, allowing users, for the first time, to start adding their own customizations to the watch.

The Pebble watch has a lot of potential. As the platform matures, I’m hoping to see a number of features I’m interested in come to fruition. Interaction between Pebble and other apps on iPhone devices would be a welcome addition. I would love to see an actigraphy app that uses the Pebble for sleep monitoring. From what I’ve read, sleep monitoring is even more accurate when the monitor is placed on the sleeper’s wrist. Seems like a perfect use for the Pebble.

I’d also like to see more of an open SDK, allowing users such as myself to write code for the Pebble. While I’m aware of the closed nature of the iPhone platform itself, it is still possible to add applications to the Pebble itself. I can’t wait to see what others build for this platform. Given a bit of time, I think this can grow into something even more amazing.

Customer Dis-Service

In general, I’m a pretty loyal person. Especially when it comes to material things. I typically find a vendor I like and stick with them. Sure, if something new and flashy comes along, I’ll take a look, but unless there’s a compelling reason to change, I’ll stick with what I have.

But sometimes a change is forced upon me. Take, for instance, this last week. I’ve been a loyal Verizon customer for … wow, about 15 years or so. Not sure I realized it had been that long. Regardless, I’ve been using Verizon’s services for a long time. I’ve been relatively happy with them, no major complaints about services being down or getting the runaround on the phone. In fact, my major gripe with them had always been their online presence, which seemed to change from month to month. I’ve had repeated problems with trying to pay bills, see my services, etc. But at the end of the day, I’ve always been able to pay the bill and move on. Since that’s really the only thing I used their online service for, I was content to leave well enough alone.

In more recent months, we’ve been noticing that the 3M DSL service we have is starting to fall short. Not Verizon’s fault at all, but the fault of an increased strain on the system at our house. Apparently 3M isn’t nearly enough bandwidth to satisfy our online hunger. That, coupled with the price we were paying, had me looking around for other services. Verizon still doesn’t offer anything faster than 3M in the area and, unfortunately, the only other service in the area is from a company that I’d rather not do business with if I could avoid it.

In the end, I thought perhaps I could make some slight changes and at least reduce the monthly bill by a little until we determined a viable solution. I was considering adding a second DSL line, connected to a second wireless router, to relieve the tension a bit. This would allow me to avoid that other company and provide the bandwidth we needed. My wife and I could enjoy our own private upstream and place the rest of the house on the other line.

Ok, I thought, let’s dig into this a bit. First things first, I decided to get rid of the home phone, or at least transfer it to a cheaper solution. My cell provider offered a $10/month plan for home phones. Simple process: port the number over, install this little box in the house, and poof. Instant savings. Best part, that savings would be just about enough to get that second DSL line.

Being cautious, and not wanting to end up without a DSL connection, I contacted Verizon. Having worked for a telco in the past, I knew that some telcos required that you have a home phone line in order to have DSL service. This wasn’t a universal truth, however, and it was easy enough to verify. The first call to Verizon went a little sideways, though. I ended up in an automated system. Sure, everyone uses these automated systems nowadays, but I thought this one was particularly condescending. They added additional sound effects to the prompts so that when you answered a question, the automated voice would acknowledge your request and then type it in. TYPE IT IN. I don’t know why, but this drove me absolutely crazy. Knowing that I was talking to a recorded voice and then having that recorded voice playing sounds like they were typing on a keyboard? Infuriating. And, on top of it, I ended up in some ridiculous loop where I couldn’t get an operator unless I explicitly stated why I wanted an operator, but the automated system apparently couldn’t understand my request.

Ok, time out, walk away, try again later. The second time around, I lied. I ended up in sales, so it seems to have worked. I explained to the lady on the phone what I was looking for. I wanted to cancel my home phone and just keep the DSL. I also wanted to verify that I was not under contract so I wouldn’t end up with some crazy early termination fee. She explained that this was perfectly acceptable and that I could make these changes whenever I wanted. I verified again that I could keep the DSL without issue. She agreed, no problem.

Excellent! Off I went to the cell carrier, purchased (free with a contract) the new home phone box, and had them port the number. The representative cautioned that he saw DSL service listed when he was porting and suggested I contact Verizon to verify that the DSL service would be ok.

I called Verizon again to verify everything would work as intended. I explained what I had done, asked when the port would go through, and stressed that the DSL service was staying. The representative verified the port date and said that the DSL service would be fine.

You can guess where this is going, can’t you? On the day of the port, the phone line switched as expected. The new home phone worked perfectly and I made the necessary changes to the home wiring to ensure that the DSL connection was isolated from the rest of the wiring. DSL was still up, phone ported, everything was great. Until the next morning.

I woke up the following morning and started my normal routine. Get dressed, go exercise, etc. Except that on the way to exercise, I noticed that the router light was blinking. Odd; I wondered what was going on. Perhaps something knocked the system offline overnight? The DSL light on the modem was still on, so I had a connection to the DSLAM. No problem, reboot the router and we’ll be fine. So, I rebooted and walked away. After a few minutes I checked the system and noticed that I was still not able to get online. I walked through a mental checklist and decided that the username and password for the PPPoE connection must be failing. Time to call Verizon and see what’s wrong.

I contacted Verizon and first spoke to a sales rep who informed me that my services had been cancelled per my request. Wonderful. All that work and they screw it up anyway. I explained what I had done and she took a deeper look into the account. Turns out the account was “being migrated” and she apologized for the mixup. Since I was no longer bundled, the DSL account had to be migrated. I talked with her some more about it and she decided to send me to technical support to verify everything was ok. Off I go to technical support, fully expecting them to ask me to reset my DSL modem. No such luck, however; the technical support rep explained that I had no DSL service.

And back to sales I went. I explained, AGAIN, what was going on. The representative confirmed my story, verified that the account was being migrated, and asked me to check the service again in a few hours. All told, I spent roughly an hour on the phone with Verizon and missed out on my morning exercise.

After rushing through the remainder of my morning routine and explaining to my wife why the Internet wasn’t working, I left for work. My wife checked in a few hours later to let me know that, no, we still did not have an Internet connection. So I called Verizon again. Again I’m told I have no service and that I have cancelled them. Again I explain the problem and what I had done. And this time, the representative explains to me that they do not offer unbundled DSL service anymore, they haven’t had that service in about a year. She goes on to offer me a bundled package with a phone line and explains that I don’t have to use the phone line, I just have to pay for it.

So all of the careful planning I had done was for naught. In an effort to make sure this didn’t happen to anyone else, the rep checked back on my account to see who had informed me about the DSL service. According to the notes, however, I had never called about any such thing. Instead, I had apparently called to complain about unsolicited phone calls, been referred to their fraud and abuse office, and been told about the magical phone code I could put in to block calls. Ugh! She then went on to detail every aspect of my problem so that no one else would run into it.

This is the sort of situation that will, very rapidly, cause me to look elsewhere for service. And that’s exactly what I did. I’ve since cut all ties with Verizon and moved on to a different Internet service provider. I’m not happy with having to deal with this provider, but it’s the only alternative at the moment. Assuming I don’t have any major problems with the service, I’ll probably continue with them for a while. Of course, if I run into problems here, the decision becomes more difficult. A “lesser of two evils” situation, if you will. But for now, I’ll deal with what comes up.

Programming Note

In 2012 I posted a little over a dozen entries to this blog. I like to think that each entry was well thought out and time well spent. But only a dozen? That’s about one entry a month… I’d really like to do more.

So, new year, time to make some changes. I spent a lot of time judging whether each post was “worth the effort” and “long enough to matter.” I need to get past that. My goal is to start posting a number of smaller entries. I definitely want the quality to be there, but I want to avoid agonizing over each and every entry.

So here’s to a new year and more content!

Derbycon 2012

I spent this past weekend in Louisville, KY attending a relatively new security conference called Derbycon. This year was the second year they held the conference and the first year I spoke there. It was amazing, to say the least.

I haven’t been to many conventions, and this is the only security-oriented convention I’ve attended. When I first attended last year, it was with some trepidation. I knew that some of the attendees I’d be seeing were truly rockstars in the security world. And, unfortunately, one of the people who was supposed to come with us was unable to attend. Of course, that person was the one member of our group who was connected within the security world, and we were depending on them to introduce us to everyone.

It went well, nonetheless, and we were able to meet a lot of amazing people while we were there. Going back this year, we were able to rekindle friendships that started last year, and even make a few new ones. Derbycon has an absolutely amazing sense of family. Even the true rockstars of the con are down to earth enough to hang out with the newcomers.

And this year, I had the opportunity to speak. I submitted my CFP earlier in the year, not really expecting it to be chosen. Much to my surprise, though, it was. And so I spent some time putting together my talk and prepared to stand in front of the very people I looked up to. It was nerve-wracking to say the least. You can watch the video over on the Irongeek site, and you can find the slides in my presentation archive.

But I powered through it. I delivered my talk and while it may not have been the most amazing talk, it was an accomplishment. I think it’s given me a bit more confidence in my own abilities and I’m looking forward to giving another. In fact, I’ve since submitted a talk to BSides Delaware at the behest of the organizers. I haven’t heard back yet, but here’s hoping.

I’m already making plans to attend Derbycon 2013 and I hope to be a permanent fixture there for many years to come. Derbycon is an amazing place to go and something truly magnificent to experience. I may not be in the security industry, but they made me feel truly welcome despite my often dumb questions and inane comments. Rel1k, IronGeek, and Purehate have put together something special and I was proud to be a part of it again.

So you want to talk at a conference

Last year at this time I was attending an absolutely amazing conference known as DerbyCon. It was an amazing time where I met some absolutely amazing people and learned amazing things. Believe me, there was a lot of amazing.

I attended one talk that really got me thinking about blue-team security. That is, defensive security, basically what I’m all about these days. And I decided that I wanted to help the cause .. So, I started putting together the pieces in my head and decided I wanted to do a talk at the following DerbyCon ..

And so, when the CFP opened, I submitted my thoughts and ideas. Honestly, while I hoped it would be accepted, I didn’t think I had a chance in hell given the talent that spoke the previous year.. Boy was I wrong.. Talk accepted. And so I started putting things together, working on the talk itself, pushing forward the design I wanted for this new tool. I aimed high and came up a little short..

As luck would have it, this past summer was a beast. Just no time to work on anything in-depth .. And time went by. And before I knew it, DerbyCon was here.. I did a dry-run of my talk to get some feedback and suggestions. Total talk time? 15 minutes. Uhh.. That might be an issue.. 50 minute talk window and all..

So, back to the drawing board. Fortunately, I received some awesome feedback and expanded my talk a bit. The revised edition should be a bit longer, I would hope.. I’ll find out tomorrow. I’m talking at 2pm.

I’m terrified.

But I’m surrounded by some of the most awesome people I have ever met. I’ll be fine.. I hope..

The Future of Personal Computers

The latest version of OS X, Mountain Lion, has been out for a few months and the next release of Windows, Windows 8, will be out very soon. These operating systems continue the trend of adding new and radical features to a desktop operating system, features we’ve only seen in mobile interfaces. For instance, OS X has the launchpad, an icon-based menu used for launching applications similar to the interface used on the iPhone and iPad. Windows 8 has their new Metro interface, a tile-based interface first seen on their Windows Mobile operating system.

As operating systems evolve and mature, we’ll likely see more of this. But what will the interface of the future look like? How will we be expected to interact with the computer, both desktop and mobile, in the future? There’s a lot out there already about how computers will continue to become an integral part of daily life, how they’ll become so ubiquitous that we won’t know we’re actually using them, etc. It’s fairly easy to argue that this has already happened, though. But putting that aside, I’m going to ramble on a bit about what I think the future may hold. This isn’t a prediction, per se, but more of what I’m thinking we’ll see moving forward.

So let’s start with today. Touch-based devices such as iOS and Android devices have become the standard for mobile phones and tablets. In fact, the Android operating system is being used for much more than this, appearing in game consoles such as the OUYA, as the operating system behind Google’s Project Glass initiative, and more. It’s not much of a surprise, of course, as Linux has been making these inroads for years and Android is, at its core, an enhanced distribution of Linux designed for mobile and embedded applications.

The near future looks like it will be filled with more touch-based interfaces as developers iterate and enhance the current state of the art. I’m sure we’ll see streamlined multi-touch interfaces, novel ways of launching and interacting with applications, and new uses for touch-based computing.

For desktop and laptop systems, the traditional input methods of keyboards and mice will be enhanced with touch. We see this happening already with Apple’s Magic Mouse and Magic Trackpad. Keyboards will follow suit with enhanced touch pads integrated into them, reducing the need to reach for the mouse. And while some keyboards exist today with touchpads attached, I believe we’ll start seeing tighter integration with multi-touch capabilities.

We’re also starting to see the beginnings of gesture-based devices such as Microsoft’s Kinect. Microsoft bet a lot on Kinect as the next big thing in gaming, a direct response to Nintendo’s Wii and Sony’s Move controllers. And since the launch of Kinect, hobbyists have been hacking away, adding Kinect support to “traditional” computer operating systems. Microsoft has responded, releasing a development kit for Windows and designing a Kinect intended for use with desktop operating systems.

Gesture-based interfaces have long been perceived as the ultimate in computer interaction. Movies such as Minority Report and Iron Man have shown the world what such interfaces may look like. But life is far different from a movie. Humans were not built to hold their arms out horizontally for long periods of time; the resulting fatigue is known as “gorilla arm.” Designers will have to adapt the technology in ways that work around these physical limitations.

Tablet computers work well at the moment because most interactions with them are on a horizontal and not vertical plane, thus humans do not need to strain themselves to use them. Limited applications, such as ATMs, are more tolerant of these limitations since the duration of use is very low.

Right now we’re limited to 2D interfaces for applications. How will technology adapt when true 3D displays exist? It stands to reason that some sort of gesture interface will come into play, but in what form? Will we have interfaces like those seen in Iron Man? For designers, such an interface may provide endless insight into new designs. Perhaps a merging of 2D and 3D interfaces will allow for this. We already have 3D renderings in modern design software, but allowing such software to render in true 3D, where the designer can move their head instead of their screen to interact? That would truly be a breakthrough.

What about mobile life? Will touch-based interfaces continue to dominate? Or will wearable computing with HUD style displays become the new norm? I’m quite excited at the prospect of using something such as Google’s Project Glass in the near future. The cost is still prohibitive for the average user, but it’s still far below the cost of similar cutting edge technologies a mere 5 years ago. And prices will continue to drop.

Perhaps in the far future, 20+ years from now, the input device will be our own bodies, à la Kinect, with a display small enough that it’s embedded in our eyes, or inserted as a contact lens. Maybe in that timeframe, we truly become one with the computer and transform from mere humans into cyborgs. There will always be those who won’t follow suit, but for those of us with the interest and the drive, those will be interesting times, won’t they?

Jumping The Gap

I listened to a news story on NPR’s On The Media recently about “Cyber Warfare” and assessing its true threat. On the one hand, it seemed like another misguided report from a clueless news media. On the other hand, though, it did make me think a bit.

Much of the talk about Cyber Warfare revolves around attacking the various SCADA systems used to control the nation’s physical infrastructure. By today’s standards, many of these systems are quite primitive. Many of these systems are designed for a very specific purpose, rarely upgraded to run on modern operating systems, and very rarely, if ever, designed to be secure. The state of the art in security for many of these systems is to not allow outside access to the system.

Unfortunately, if numerous reports are to be believed, a good portion of the world’s infrastructure is connected to the Internet in one manner or another. The number of institutions that truly air gap their critical networks is alarmingly low. A researcher from IO Active, who provided some of the information for the aforementioned NPR article, used SHODAN to scour the Internet for SCADA systems. Why use SHODAN? Turns out, the simple act of scanning the Internet for these systems often resulted in the target systems crashing and going offline. If a simple network scan can kill one of these systems, then what hope do we have?
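To give a concrete sense of what that kind of discovery looks like, here’s a minimal Python sketch of the sort of probe Shodan automates, and catalogs, at Internet scale: checking whether a host accepts connections on the Modbus/TCP port, a protocol common to SCADA gear. The function name and port choice are mine for illustration, not from the article or from Shodan itself.

```python
# Minimal sketch: does a host accept connections on the Modbus/TCP port?
# Shodan goes much further, grabbing protocol banners to fingerprint the
# device behind the port; this only tests that something is listening.
import socket

MODBUS_TCP_PORT = 502  # standard Modbus/TCP port, widely used by PLCs

def accepts_tcp(host, port=MODBUS_TCP_PORT, timeout=2.0):
    """Return True if `host` accepts a TCP connection on `port`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

if __name__ == "__main__":
    # Probe loopback only -- scanning hosts you don't own is both rude
    # and, as noted above, sometimes enough to crash fragile SCADA gear.
    print(accepts_tcp("127.0.0.1"))
```

Even this trivial connection attempt is the kind of traffic that reportedly knocked some of these systems offline, which says a lot about their robustness.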

But air gapping is by no means a guarantee against attacks, since users of these systems may regularly switch between connected and non-connected systems and use some form of media to transfer files back and forth. There is precedent for this with the Stuxnet virus. According to reports, the Iranian nuclear facility was, in fact, air gapped. However, Stuxnet was designed to replicate onto USB drives and other media. Plug an infected USB drive into a targeted SCADA system and poof, instant infection across an air gapped system.

So what can be done here? How do we keep our infrastructure safe from attackers? Yes, even aging attackers…

Personally, I believe this comes down, again, to Defense in Depth. With the exception of not building it in the first place, I don’t believe that there is a way to prevent attacks. Any determined attacker will eventually get in, given time. So the only way to defend against this is to build a layered defense grid with a full monitoring back end. Expect that attackers will make it through one or two layers before being detected. Determined attackers may make it even further. But if you build your defenses with this in mind, you will stand a better chance at detecting and repelling these attacks.

I don’t believe that air gapping systems is a viable security strategy. If anything, it can result in a false sense of security for users and administrators. After all, if the system isn’t connected, how can it possibly be infected? Instead, start building in security from the start and deploy your defense in monitored layers. It works.