Bleeding Heart Security

April 11th, 2014

Unless you’ve been living under a rock the past few days, you’ve probably heard about the Heartbleed vulnerability in OpenSSL that was disclosed on Monday, April 7th. Systems and network administrators across the globe have spent the last few days testing for this vulnerability, patching systems, and probably rocking in the corner while crying. Yes, it’s that bad. What’s more, there are a number of reports that intelligence agencies may have known about this vulnerability for some time now.

The quick and dirty is that a bounds-checking bug in OpenSSL’s heartbeat code allows an attacker to remotely read the memory of an affected system in 64KB chunks. The only memory accessible to an attacker is memory used by the process being connected to, but, depending on the process, there may be a LOT of useful data in there. For instance, Yahoo was leaking usernames and passwords until late Tuesday evening.

The fabulous web comic, xkcd, explains how the attack works in layman’s terms. If you’re interested in the real nitty-gritty of this vulnerability, though, there’s an excellent write-up on the IOActive Labs blog. If you’re the type that likes to play, you can find proof-of-concept code here. And let’s not forget about the client side; there’s PoC code for that as well.

OpenSSL versions 1.0.1 through 1.0.1f as well as the 1.0.2 beta code are affected. The folks at OpenSSL released version 1.0.1g on Monday which fixed the problem. Or, at least, the current problem. There’s a bit of chatter about other issues that may be lurking in the OpenSSL codebase.
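
If you’re not sure whether a given system is affected, the installed OpenSSL version is the first thing to check. A rough sketch on a CentOS-style box (note that distributions often backport the fix without changing the upstream version string, so checking the package changelog for the CVE is more reliable than the version number alone):

# Show the OpenSSL version in use
openssl version

# On CentOS/RHEL, check whether the vendor has backported the Heartbleed fix
rpm -q --changelog openssl | grep CVE-2014-0160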

Now that a few days have passed, however, what remains to be done? After all, everyone has patched their servers, right? Merely patching doesn’t make the problem disappear, though. Vulnerable code is out there and mistakes can be made. For the foreseeable future, you should be regularly scanning your network for vulnerable systems with something like Nmap. The Nmap NSE for Heartbleed scanning is already available. Alternatively, you can use something like Nagios to regularly check your existing servers.
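
As an example, once you have a copy of the ssl-heartbleed NSE script, a sweep of a subnet might look something like this (the port list and address range are purely illustrative):

# Check common SSL ports across a subnet for the Heartbleed bug
nmap -p 443,465,993,995 --script ssl-heartbleed 192.0.2.0/24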

Patching immediately may not have prevented a breach, either. Since Heartbleed doesn’t leave much of a trace beyond some oddities that your IDS may have seen, there’s virtually no way to know if anything has been taken. The best way to deal with this is to just go ahead and assume that your private keys are compromised and start replacing them. New keys, new certs. It’s painful, it’s slow, but it’s necessary.
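
For reference, generating a replacement key and CSR with OpenSSL looks something like the following; the filenames are examples, and the exact submission process depends on your CA:

# Generate a new RSA private key
openssl genrsa -out www.example.com.key 4096

# Create a new certificate signing request to submit to your CA
openssl req -new -key www.example.com.key -out www.example.com.csr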

For end users, the best thing you can do is change your passwords. I’m not aware of any “big” websites that have not patched by now, so changing passwords should be relatively safe. That said, Wired and Engadget have some of the best advice I’ve seen about this. In short, change your passwords today, then change them again in a few weeks. If you’re really paranoid, change them a third time in about a month. By that time, any site that is going to patch will have already patched.

Unfortunately, I think the fun is just beginning. I expect we’ll start seeing a number of related attacks. Phishing attacks are the most likely in the beginning. If private keys were compromised, then attackers can potentially impersonate websites, including their SSL certificates. This would likely involve a DNS poisoning attack, but could also be accomplished by compromising a user’s local system and setting a hosts file entry. Certificate revocation is a potential defense against this, but since many browsers have CRL checks disabled by default, it probably won’t help. Users will have to watch what they click, where they go, and what software they run. Not much different from the advice given already.

Another possible source of threats are consumer devices. As Bruce Schneier put it, “An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.” What he’s referring to are the many embedded devices we use on a daily basis that may never receive updates to protect the end user. In other words, that router you purchased from the discount store? That may be affected and unless you replace it, you’ll continue to be vulnerable. Fortunately, most of these devices aren’t configured, by default, to face the Internet, so there may yet be hope.

The Heartbleed vulnerability is a serious contender for the worst security vulnerability ever disclosed. I’m not aware of another vulnerability that exposes so many systems to such a degree as this one. Network and systems administrators will be cleaning up after this one for a while.

Looking into the SociaVirtualistic Future

March 29th, 2014

Let’s get this out of the way. One of the primary reasons I’m writing this is in response to a request by John Carmack for coherent commentary about the recent acquisition of Oculus VR by Facebook. My hope is that he does, in fact, read this and maybe drop a comment in response. <fanboy>Hi John!</fanboy> I’ve been a huge Carmack fan since the early id Software days, so please excuse the fanboyism.

And I *just* saw the news that Michael Abrash has joined Oculus as well, which is also incredibly exciting. Abrash is an Assembly GOD. <Insert more fanboyism here />

Ok, on to the topic at hand. The Oculus Rift is a VR headset that got its public start with a Kickstarter campaign in September of 2012. It blew away its meager goal of $250,000 and raked in almost $2.5 Million. For a mere $275 and some patience, contributors would receive an unassembled prototype of the Oculus Rift. Toss in another $25 and you received an assembled version.

But what is the Oculus Rift? According to the Kickstarter campaign :

Oculus Rift is a new virtual reality (VR) headset designed specifically for video games that will change the way you think about gaming forever. With an incredibly wide field of view, high resolution display, and ultra-low latency head tracking, the Rift provides a truly immersive experience that allows you to step inside your favorite game and explore new worlds like never before.

In short, the Rift is the culmination of every VR lover’s dreams. Put a pair of these puppies on and magic appears before your eyes.

For myself, Rift was interesting, but probably not something I could ever use. Unfortunately, I suffer from Amblyopia, or Lazy Eye as it’s commonly called. I’m told I don’t see 3D. Going to 3D movies pretty much confirms this for me since nothing ever jumps out of the screen. So as cool as VR sounds to me, I would miss out on the 3D aspect. Though it might be possible to “tweak” the headset and adjust the angles a bit to force my eyes to see 3D. I’m not sure if that’s good for my eyes, though.

At any rate, the Rift sounds like an amazing piece of technology. In the past year I’ve watched a number of videos demonstrating the capabilities of the Rift. From the Hak5 crew to Ben Heck, the reviews have all been positive.

And then I learned that John Carmack joined Oculus. I think that was about the time I realized that Oculus was the real deal. John is a visionary in so many different ways. One can argue that modern 3D gaming exists largely because of the work he did in the field. In more recent years, his visions have aimed a bit higher with his rocket company, Armadillo Aerospace. Armadillo started winding down last year, right about the time that John joined Oculus, leaving him plenty of time to deep dive into a new venture.

For anyone paying attention, Oculus was recently acquired by Facebook for a mere $2 Billion. Since the announcement, I’ve seen a lot of hatred being tossed around on Twitter. Some of this hatred seems to be Kickstarter backers who are under some sort of delusion that makes them believe they have a say in anything they back. I see this a lot, especially when a project is taking longer than they believe it should.

I can easily write several blog posts on my personal views about this, but to sum it up quickly, if you back a project, you’re contributing to make something a reality. Sometimes that works, sometimes it doesn’t. But Kickstarter clearly states that you’re merely contributing financial backing, not gaining a stake in a potential product and/or company. Nor are you guaranteed to receive the perks you’ve contributed towards. So suck it up and get over it. You never had control to begin with.

I think Notch, of Minecraft fame, wrote a really good post about his feelings on the subject. I think he has his head right. He contributed, did his part, and though it’s not working out the way he wanted, he’s still willing to wish the venture luck. He may not want to play in that particular sandbox, but that’s his choice.

VR in a social setting is fairly interesting. In his first Oculus blog post, Michael Abrash mentioned reading Neal Stephenson’s incredible novel, Snow Crash. Snow Crash provided me with a view of what virtual reality might bring to daily life. Around the same time, the movie Lawnmower Man was released. Again, VR was brought into the forefront of my mind. But despite the promises of books and movies, VR remained elusive.

More recently, I read a novel by Ernest Cline, Ready Player One. Without giving too much away, the novel centers around a technology called the OASIS. Funnily enough, the OASIS is, effectively, a massive social network that users interact with via VR rigs. OASIS was the first thing I thought about when I heard about the Facebook / Oculus acquisition.

For myself, my concern is Facebook. Despite being a massively popular platform, I think users still distrust Facebook quite a bit. I lasted about 2 weeks on Facebook before having my account deleted. I understand their business model and I have no interest in taking part. Unfortunately, I’m starting to miss out on some aspects of Internet life since some sites are requiring Facebook accounts for access. Ah well, I guess they miss out on me as well.

I have a lot of distrust in Facebook at the moment. They wield an incredible amount of information about users and, to be honest, they’re nowhere near transparent enough for me to believe what they say. Google is slightly better, but there’s some distrust there as well. But more than just the distrust, I’m afraid that Facebook is going to take something amazing and destroy it in a backwards attempt to monetize it. I’m afraid that Facebook is the IOI of this story. (It’s a Ready Player One reference. Go read it, you can thank me later)

Ultimately, I have no stake in this particular game. At least, not yet, anyway. Maybe I’m wrong and Facebook makes all the right moves. Maybe they become a power for good and are able to bring VR to the masses. Maybe people like Carmack and Abrash can protect Oculus and fend off any fumbling attempts Facebook may make at clumsy monetization. I’m not sure how this will play out, only time will tell.

How will we know how things are going? Well, for one, watching how Facebook interacts with this new property will be pretty telling. I think if Facebook is able to sit in the shadows and watch rather than kicking in the front door and taking over, maybe Oculus will have a chance to thrive. Watching what products are ultimately released by Oculus will be another telling aspect. While I fully expect that Oculus will add some sort of Facebook integration into the SDK over time, I’m also hoping that they continue to provide an SDK for standalone applications.

I sincerely wish Carmack, Abrash, and the rest of the Oculus team the best. I think they’re in a position where they can make amazing things happen, and I’m eager to see what comes next.

Keepin’ TCP Alive

February 20th, 2014

I was recently debugging an odd network issue that turned out to have a pretty simple explanation. A client on the network was intermittently experiencing significant delays in accessing the network. Upon closer inspection, it turned out that prior to the delay, the client had been left idle for a long period of time. With this additional information it was pretty easy to identify that there was likely a connection between the client and server that was being torn down for being idle.

So in the end, the cause of the problem itself was pretty simple to identify. The fix, however, is more of a conundrum. The obvious answer is to adjust the timers and prevent the connection from being torn down. But what timers should be adjusted? There are the keepalive timers on the client, the keepalive timers on the server, and the idle teardown timers on the firewall in the middle.

TCP keepalive handling varies between operating systems. If we look at the three major operating systems, Linux, Windows, and OS X, then we can make the blanket statement that, by default, keepalives are sent after two hours of idle time. But, most firewalls seem to have a default TCP teardown timer of one hour. These defaults are not conducive to keeping idle connections alive.

The optimal scenario for timeouts is for the clients to have a keepalive timer that fires at an interval lower than that of the idle TCP timeout on the firewall. The actual values to use, as well as which devices should be changed, are up for debate. The firewall is clearly the easier point at which to make such a change. Typically there are very few firewall devices that would need to be updated as compared to the larger number of client devices. Additionally, there will likely be fewer firewalls added to the network over time, so ensuring that timers are properly set is much easier. On the other hand, the defaults that firewalls are generally configured with have been chosen specifically by the vendor for legitimate reasons. So perhaps the clients should conform to the setting on the firewall? What is the optimal solution?

And why would we want to allow idle connections anyway? After all, if a connection is idle, it’s not being used. Clearly, any application that needed a connection to remain open would send some sort of keepalive, right? Is there a valid reason to allow these sorts of connections for an extended period of time?

As it turns out, there are valid reasons for connections to remain active, but idle. For instance, database connections are often kept for longer periods of time for performance purposes. The TCP handshake can take a considerable amount of time to perform as opposed to the simple matter of retrieving data from a database. So if the database connection remains established, additional data can be retrieved without the overhead of TCP setup. But in these instances, shouldn’t the application ensure that keepalives are sent so that the connection is not prematurely terminated by an idle timer somewhere along the data path? Well, yes. Sort of. Allow me to explain.

When I first discovered the source of the network problem we were seeing, I chalked it up to lazy programming. While it shouldn’t take much to add a simple keepalive system to a networked application, it is extra work. As it turns out, however, the answer isn’t quite that simple. All three major operating systems, Windows, Linux, and OS X, have kernel-level mechanisms for TCP keepalives. Each OS has a slightly different take on how keepalive timers should work.

Linux has three parameters related to tcp keepalives :

tcp_keepalive_time
The interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further
tcp_keepalive_intvl
The interval between subsequent keepalive probes, regardless of what the connection has exchanged in the meantime
tcp_keepalive_probes
The number of unacknowledged probes to send before considering the connection dead and notifying the application layer
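
On Linux, these live under net.ipv4 and can be inspected or changed with sysctl. A minimal sketch, assuming a CentOS-style system (the values shown are examples, not recommendations, and only apply to sockets that have SO_KEEPALIVE enabled):

# View the current keepalive settings
sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes

# Change the idle timer to 30 minutes on the running kernel
sysctl -w net.ipv4.tcp_keepalive_time=1800

# Persist the change across reboots
echo "net.ipv4.tcp_keepalive_time = 1800" >> /etc/sysctl.conf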

OS X works quite similarly to Linux, which makes sense since they’re both *nix variants. OS X has four parameters that can be set.

keepidle
Amount of time, in milliseconds, that the connection must be idle before keepalive probes (if enabled) are sent. The default is 7200000 msec (2 hours).
keepintvl
The interval, in milliseconds, between keepalive probes sent to remote machines, when no response is received on a keepidle probe. The default is 75000 msec.
keepcnt
Number of probes sent, with no response, before a connection is dropped. The default is 8 packets.
always_keepalive
Assume that SO_KEEPALIVE is set on all TCP connections; the kernel will periodically send a packet to the remote host to verify the connection is still up.
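
On OS X these are sysctls under net.inet.tcp, so inspecting and adjusting them looks much like it does on Linux. A quick sketch (values are in milliseconds and shown purely as examples):

# View the current keepalive settings
sysctl net.inet.tcp.keepidle net.inet.tcp.keepintvl net.inet.tcp.keepcnt

# Drop the idle timer to 30 minutes for the running system
sysctl -w net.inet.tcp.keepidle=1800000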

Windows acts very differently from Linux and OS X. Again, there are three parameters, but they perform entirely different tasks. All three parameters are registry entries.

KeepAliveInterval
This parameter determines the interval between TCP keep-alive retransmissions until a response is received. Once a response is received, the delay until the next keep-alive transmission is again controlled by the value of KeepAliveTime. The connection is aborted after the number of retransmissions specified by TcpMaxDataRetransmissions have gone unanswered.
KeepAliveTime
The parameter controls how often TCP attempts to verify that an idle connection is still intact by sending a keep-alive packet. If the remote system is still reachable and functioning, it acknowledges the keep-alive transmission. Keep-alive packets are not sent by default. This feature may be enabled on a connection by an application.
TcpMaxDataRetransmissions
This parameter controls the number of times that TCP retransmits an individual data segment (not connection request segments) before aborting the connection. The retransmission time-out is doubled with each successive retransmission on a connection. It is reset when responses resume. The Retransmission Timeout (RTO) value is dynamically adjusted, using the historical measured round-trip time (Smoothed Round Trip Time) on each connection. The starting RTO on a new connection is controlled by the TcpInitialRtt registry value.

There’s a pretty good reference page with information on how to set these parameters that can be found here.

We still haven’t answered the question of optimal settings. Unfortunately, there doesn’t seem to be a correct answer. The defaults provided by most firewall vendors seem to have been chosen to ensure that the firewall does not run out of resources. Each connection through the firewall must be tracked. As a result, each connection uses up a portion of memory and CPU. Since both memory and CPU are finite resources, administrators must be careful not to exceed the limits of the firewall platform.

There is some good news. Firewalls have had a one hour tcp timeout timer for quite a while. As time has passed and new revisions of firewall hardware are released, the CPU has become more powerful and the amount of memory in each system has grown. The default one hour timer, however, has remained in place. This means that modern firewall platforms are much better prepared to handle an increase in the number of connections tracked. Ultimately, the firewall platform must be monitored and appropriate action taken if resource usage becomes excessive.

My recommendation would be to start by setting the firewall tcp teardown timer to a value slightly higher than that of the clients. For most networks, this would be slightly over two hours. The firewall administrator should monitor the number of connections tracked on the firewall as well as the resources used by the firewall. Adjustments should be made as necessary.

If longer lasting idle connections are unacceptable, then a slightly different tactic can be used. The firewall teardown timer can be set to a level comfortable to the administrator of the network. Problematic clients can be updated to send keepalive packets at a shorter interval. These changes will likely only be necessary on servers. Desktop systems don’t have the same need as servers for long-term establishment of idle connections.

Becoming your own CA

February 13th, 2014

SSL, as I mentioned in a previous blog entry, has some issues when it comes to trust. But regardless of the problems with SSL, it is a necessary part of the security toolchain. In certain situations, however, it is possible to overcome these trust issues.

Commercial providers are not the only entities that are capable of being a Certificate Authority. In fact, anyone can become a CA and the tools to do so are available for free. Becoming your own CA is a fairly painless process, though you might want to brush up on your OpenSSL skills. And lest you think you can just start signing certificates and selling them to third parties, it’s not quite that simple. The well-known certificate authorities have worked with browser vendors to have their root certificates added as part of the browser installation process. You’ll have to convince the browser vendors that they need to add your root certificate as well. Good luck.

Having your own CA provides you the means to import your own root certificate into your browser and use it to validate certificates you use within your network. You can use these SSL certificates for more than just websites as well. LDAP, RADIUS, SMTP, and other common applications use standard SSL certificates for encrypting traffic and validating remote connections. But as mentioned above, be aware that unless a remote user has a copy of your root certificate, they will be unable to validate the authenticity of your signed certificates.

Using certificates signed by your own CA can provide you that extra trust level you may be seeking. Perhaps you configured your mail server to use your certificate for the POP and IMAP protocols. This makes it more difficult for an attacker to masquerade as either of those services without obtaining your signing certificate so they can create their own. This is especially true if you configure your mail client such that your root certificate is the only certificate that can be used for validation.

Using your own signed certificates for internal, non-public facing services provides an even better use-case. Attacks such as DNS cache poisoning make it possible for attackers to trick devices into using the wrong address for an intended destination. If these services are configured to only use your certificates and reject connection attempts from peers with invalid certificates, then attackers will only be able to impersonate the destination if they can somehow obtain a valid certificate signed by your signing certificate.

Sound good? Well, how do we go about creating our own root certificate and all the various machinery necessary to make this work? Fortunately, all of the necessary tools are open-source and part of most Linux distributions. For the purposes of this blog post, I will be explaining how this is accomplished using the CentOS 6.x Linux distribution. I will also endeavor to break down each command and explain what each parameter does. Much of this information can be found in the man pages for the various commands.

OpenSSL is installed as part of a base CentOS install. Included in the install is a directory structure in /etc/pki. All of the necessary tools and configuration files are located in this directory structure, so instead of reinventing the wheel, we’ll use the existing setup.

To get started, edit the default openssl.cnf configuration file. You can find this file in /etc/pki/tls. There are a few options you want to change from their defaults. Search for the following headers and change the options listed within.

[CA_default]
default_md = sha256

[req]
default_bits = 4096
default_md = sha256
  • default_md : This option defines the default message digest to use. Switching it to sha256 results in a stronger message digest being used.
  • default_bits : This option defines the default key size. 2048 is generally considered a minimum these days. I recommend setting this to 4096.

Once the openssl.cnf file is set up, the rest of the process is painless. First, switch into the correct directory.

cd /etc/pki/tls/misc

Next, use the CA command to create a new CA.

[root@localhost misc]# ./CA -newca
CA certificate filename (or enter to create)

Making CA certificate ...
Generating a 4096 bit RSA private key
 ...................................................................................................................................................................................................................................................++
.......................................................................++
writing new private key to '/etc/pki/CA/private/./cakey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:cert.example.com
Email Address []:certadmin@example.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/./cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798347 (0xf837fc8d719b304b)
        Validity
            Not Before: Feb 13 18:37:14 2014 GMT
            Not After : Feb 12 18:37:14 2017 GMT
        Subject:
            countryName               = US
            stateOrProvinceName       = MyState
            organizationName          = My Company Inc.
            commonName                = cert.example.com
            emailAddress              = certadmin@example.com
        X509v3 extensions:
            X509v3 Subject Key Identifier:
                14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

            X509v3 Basic Constraints:
                CA:TRUE
Certificate is to be certified until Feb 12 18:37:14 2017 GMT (1095 days)

Write out database with 1 new entries
Data Base Updated

And that’s about it. The root certificate is located in /etc/pki/CA/cacert.pem. This file can be made public without compromising the security of your system. This is the same certificate you’ll want to import into your browser, email client, etc. in order to validate any certificates you sign.
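
If you want to double-check what you just created, OpenSSL can print the interesting bits of the new root certificate. A quick example, using the CentOS path from above:

# Display the subject, issuer, and validity dates of the new root certificate
openssl x509 -in /etc/pki/CA/cacert.pem -noout -subject -issuer -dates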

Now you can start signing certificates. First you’ll need to create a CSR on the server you want to install it on. The following command creates both the private key and the CSR for you. I recommend using the server name as the name of the CSR and the key.

openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
  • openssl : The OpenSSL command itself
  • req : This option tells OpenSSL that we are performing a certificate signing request (CSR) operation.
  • -newkey : This option creates a new certificate request and a new private key. It will prompt the user for the relevant field values. The rsa:4096 argument indicates that we want to use the RSA algorithm with a key size of 4096 bits.
  • -keyout : This gives the filename to write the newly created private key to.
  • -out : This specifies the output filename to write to.
[root@localhost misc]# openssl req -newkey rsa:4096 -keyout www.example.com.key -out www.example.com.csr
Generating a 4096 bit RSA private key
.....................................................................................................................++
..........................................................................................................................................................................................................................................................................................................................................................................................................++
writing new private key to 'www.example.com.key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:MyState
Locality Name (eg, city) [Default City]:MyCity
Organization Name (eg, company) [Default Company Ltd]:My Company Inc.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:www.example.com
Email Address []:hostmaster@example.com

Once you have the CSR, copy it over to the server you’re using to sign certificates. Unfortunately, the existing tools don’t make it easy to merely name the CSR you’re trying to sign, so we need to create our own tool. First, create a new directory to put the CSRs in.

mkdir /etc/pki/tls/csr

Next, create the sign_cert.sh script in the directory we just created. This file needs to be executable.

#!/bin/sh
# Sign a CSR named <domain>.csr and rename the result <domain>.<year>.crt
# Usage: ./sign_cert.sh www.example.com

# Revoke last year's certificate first :
# openssl ca -revoke cert.crt

DOMAIN=$1
YEAR=`date +%Y`

# The CA script expects the request to be named newreq.pem
rm -f newreq.pem
ln -s $DOMAIN.csr newreq.pem

# Sign the request and store the result as <domain>.<year>.crt
/etc/pki/tls/misc/CA -sign
mv newcert.pem $DOMAIN.$YEAR.crt

That’s all you need to start signing certificates. Place the CSR you transferred from the other server into the csr directory and use the script we just created to sign it.

[root@localhost csr]# ./sign_cert.sh www.example.com
Using configuration from /etc/pki/tls/openssl.cnf
Enter pass phrase for /etc/pki/CA/private/cakey.pem:
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
        Subject:
            countryName = US
            stateOrProvinceName = MyState
            localityName = MyCity
            organizationName = My Company Inc.
            commonName = www.example.com
            emailAddress = hostmaster@example.com
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            Netscape Comment:
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier:
                3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
            X509v3 Authority Key Identifier:
                keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Certificate is to be certified until Feb 13 18:48:55 2015 GMT (365 days)
Sign the certificate? [y/n]:y

1 out of 1 certificate requests certified, commit? [y/n]y
Write out database with 1 new entries
Data Base Updated
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 17886042129551798348 (0xf837fc8d719b304c)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=MyState, O=My Company Inc., CN=cert.example.com/emailAddress=certadmin@example.com
        Validity
            Not Before: Feb 13 18:48:55 2014 GMT
            Not After : Feb 13 18:48:55 2015 GMT
    Subject: C=US, ST=MyState, L=MyCity, O=My Company Inc., CN=www.example.com/emailAddress=hostmaster@example.com
    Subject Public Key Info:
        Public Key Algorithm: rsaEncryption
            Public-Key: (4096 bit)
            Modulus:
                00:d9:5a:cc:87:f0:e5:1e:6f:a0:25:cd:fe:36:64:
                6c:68:ae:2f:3e:7e:93:93:a4:69:6f:f1:28:c1:c2:
                4d:5f:3c:3a:61:2e:4e:f0:90:89:54:48:d6:03:83:
                fb:ac:1e:7c:9a:e8:be:cf:c9:8f:93:41:27:3e:1b:
                66:63:db:a1:54:cb:f7:1d:0b:71:bc:5f:80:e1:30:
                e4:28:14:68:1c:09:ba:d0:aa:d3:e6:2b:24:cd:21:
                67:99:dc:8b:7a:2c:94:d0:ed:8e:02:5f:2f:52:06:
                09:0e:8a:b7:bf:64:e8:d7:bf:94:94:ad:80:34:57:
                32:89:51:00:fe:fd:8c:7d:17:35:4c:c7:5f:5b:58:
                f4:97:9b:21:42:9e:a9:6c:86:5f:f4:35:98:a5:81:
                62:9d:fa:15:07:9d:29:25:38:2b:5d:22:74:58:f8:
                58:56:1c:e9:65:a3:62:b5:a7:66:17:95:12:21:ca:
                82:12:90:b6:8a:8d:1f:79:e8:5c:f4:f9:6c:3a:44:
                f9:3a:3f:29:0d:2e:bf:51:98:9f:58:21:e5:d9:ee:
                78:54:ad:5a:a2:6f:d1:85:9a:bc:b9:21:92:e8:76:
                80:b8:0f:96:77:9a:99:5e:3b:06:bb:6f:da:1c:6e:
                f2:10:16:69:ba:2b:57:c8:1a:cc:b6:e4:0c:1d:b2:
                a6:b7:b9:6c:37:2e:80:13:46:a1:46:c3:ca:d6:2b:
                cd:f7:ba:38:98:74:15:7f:f1:67:03:8e:24:89:96:
                55:31:eb:d8:44:54:a5:11:04:59:e6:73:59:42:ed:
                aa:a3:37:13:ab:63:ab:ef:61:65:0a:af:2f:71:91:
                23:40:7d:f8:e8:a1:9d:cf:3f:e5:33:d9:5f:d2:4d:
                06:d0:2c:70:59:63:06:0f:2a:59:ae:ae:12:8d:f4:
                6c:fd:b2:33:76:e8:34:0f:1f:24:91:2a:a8:aa:1b:
                11:8a:0b:86:f3:67:b8:be:b7:a0:06:02:4a:76:ef:
                dd:ed:c4:a9:03:a1:8c:b0:39:9d:35:98:7f:04:1c:
                24:8a:1c:7c:6f:35:56:71:ee:b5:36:b7:3f:14:04:
                eb:48:a1:4f:6f:8e:43:7c:8b:36:4a:bf:ba:e9:8b:
                d9:38:0c:76:24:e9:a3:38:bf:4e:86:fd:31:4d:c3:
                6f:16:07:09:dd:d8:6b:0b:9d:4d:97:eb:1f:92:21:
                b2:a5:f9:d8:55:61:85:d2:99:97:bc:27:12:be:eb:
                55:86:ee:1f:f5:6f:a7:c5:64:2f:4e:c2:67:a3:52:
                97:7a:d9:66:89:05:6a:59:ed:69:7b:22:10:2b:a1:
                14:4e:5d:b8:f0:21:e9:11:d0:25:ae:bc:05:2b:c3:
                db:ad:cf
            Exponent: 65537 (0x10001)
    X509v3 extensions:
        X509v3 Basic Constraints:
            CA:FALSE
        Netscape Comment:
            OpenSSL Generated Certificate
        X509v3 Subject Key Identifier:
            3A:EE:2B:3A:73:A6:C3:5C:39:90:EA:85:3F:DA:71:33:7B:91:4D:7F
        X509v3 Authority Key Identifier:
            keyid:14:FC:14:BC:F4:A5:3E:6B:0C:58:3B:DF:3B:26:35:46:A0:BE:EC:F1

Signature Algorithm: sha256WithRSAEncryption
     ca:66:b2:55:64:e6:40:a5:85:19:11:66:0d:63:89:fb:0d:3a:
     0c:ec:fd:cb:5c:93:44:1e:3f:1b:ca:f5:3d:85:ab:0a:0b:dc:
     f3:18:1d:1f:ec:85:ff:f3:82:52:9e:c7:12:19:07:e9:6a:82:
     bd:32:f6:d1:19:b2:b7:09:1c:34:d7:89:45:7e:51:4d:42:d6:
     4e:78:b6:39:b3:76:58:f8:20:57:b3:d8:7b:e0:b3:2f:ce:9f:
     a2:59:de:f6:31:f2:09:1c:91:3b:7f:97:61:cb:11:a4:b4:73:
     ab:47:64:e8:93:07:98:d5:47:75:8d:9a:8f:a3:8f:e8:f4:42:
     7e:b8:1b:e8:36:72:13:93:f9:a8:cc:6d:b4:85:a7:af:94:fe:
     f3:6e:76:c2:4d:78:c3:c2:0b:a4:48:27:d3:eb:52:c3:46:14:
     c1:26:03:28:a0:53:c7:db:59:c9:95:b8:d9:f0:d9:a8:19:4a:
     a7:0f:81:ad:3c:e1:ec:f2:21:51:0d:bc:f9:f9:f6:b6:75:02:
     9f:43:de:e6:2f:9b:77:d3:c3:72:6f:f6:18:d7:a3:43:91:d2:
     04:2a:c8:bf:67:23:35:b7:41:3f:d1:63:fe:dc:53:a7:26:e9:
     f4:ee:3b:96:d5:2a:9c:6d:05:3d:27:6e:57:2f:c9:dc:12:06:
     2c:cf:0c:1b:09:62:5c:50:82:77:6b:5c:89:32:86:6b:26:30:
     d2:6e:33:20:fc:a6:be:5a:f0:16:1a:9d:b7:e0:d5:d7:bb:d8:
     35:57:d2:be:d5:07:98:b7:3c:18:38:f9:94:4c:26:3a:fe:f2:
     ad:40:e6:95:ef:4b:a9:df:b0:06:87:a2:6c:f2:6a:03:85:3b:
     97:a7:ef:e6:e5:d9:c3:57:87:09:06:ae:8a:5a:63:26:b9:35:
     29:a5:87:4b:7b:08:b9:63:1c:c3:65:7e:97:ae:79:79:ed:c3:
     a3:36:c3:87:1f:54:fe:0a:f1:1a:c1:71:3d:bc:9e:36:fc:da:
     03:2b:61:b5:19:0c:d7:4d:19:37:61:45:91:4c:c9:7a:5b:00:
     cd:c2:2d:36:f9:1f:c2:b1:97:2b:78:86:aa:75:0f:0a:7f:04:
     85:81:c5:8b:be:af:a6:a7:7a:d2:17:26:7a:86:0d:f8:fe:c0:
     27:a8:66:c7:92:cd:c5:34:99:c9:8e:c1:25:f3:98:df:4e:48:
     37:4a:ee:76:4a:fa:e4:66:b4:1f:cd:d8:e0:25:fd:c7:0b:b3:
     12:af:bb:b7:29:98:5e:86:f2:12:8e:20:c6:a9:40:6f:39:14:
     8b:71:9f:98:22:a0:5b:57:d1:f1:88:7d:86:ad:19:04:7b:7d:
     ee:f2:c9:87:f4:ca:06:07
-----BEGIN CERTIFICATE-----
MIIGBDCCA+ygAwIBAgIJAPg3/I1xmzBMMA0GCSqGSIb3DQEBCwUAMHoxCzAJBgNV
BAYTAlVTMRAwDgYDVQQIDAdNeVN0YXRlMRgwFgYDVQQKDA9NeSBDb21wYW55IElu
Yy4xGTAXBgNVBAMMEGNlcnQuZXhhbXBsZS5jb20xJDAiBgkqhkiG9w0BCQEWFWNl
cnRhZG1pbkBleGFtcGxlLmNvbTAeFw0xNDAyMTMxODQ4NTVaFw0xNTAyMTMxODQ4
NTVaMIGLMQswCQYDVQQGEwJVUzEQMA4GA1UECAwHTXlTdGF0ZTEPMA0GA1UEBwwG
TXlDaXR5MRgwFgYDVQQKDA9NeSBDb21wYW55IEluYy4xGDAWBgNVBAMMD3d3dy5l
eGFtcGxlLmNvbTElMCMGCSqGSIb3DQEJARYWaG9zdG1hc3RlckBleGFtcGxlLmNv
bTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBANlazIfw5R5voCXN/jZk
bGiuLz5+k5OkaW/xKMHCTV88OmEuTvCQiVRI1gOD+6wefJrovs/Jj5NBJz4bZmPb
oVTL9x0LcbxfgOEw5CgUaBwJutCq0+YrJM0hZ5nci3oslNDtjgJfL1IGCQ6Kt79k
6Ne/lJStgDRXMolRAP79jH0XNUzHX1tY9JebIUKeqWyGX/Q1mKWBYp36FQedKSU4
K10idFj4WFYc6WWjYrWnZheVEiHKghKQtoqNH3noXPT5bDpE+To/KQ0uv1GYn1gh
5dnueFStWqJv0YWavLkhkuh2gLgPlneamV47Brtv2hxu8hAWaborV8gazLbkDB2y
pre5bDcugBNGoUbDytYrzfe6OJh0FX/xZwOOJImWVTHr2ERUpREEWeZzWULtqqM3
E6tjq+9hZQqvL3GRI0B9+Oihnc8/5TPZX9JNBtAscFljBg8qWa6uEo30bP2yM3bo
NA8fJJEqqKobEYoLhvNnuL63oAYCSnbv3e3EqQOhjLA5nTWYfwQcJIocfG81VnHu
tTa3PxQE60ihT2+OQ3yLNkq/uumL2TgMdiTpozi/Tob9MU3DbxYHCd3YawudTZfr
H5IhsqX52FVhhdKZl7wnEr7rVYbuH/Vvp8VkL07CZ6NSl3rZZokFalntaXsiECuh
FE5duPAh6RHQJa68BSvD263PAgMBAAGjezB5MAkGA1UdEwQCMAAwLAYJYIZIAYb4
QgENBB8WHU9wZW5TU0wgR2VuZXJhdGVkIENlcnRpZmljYXRlMB0GA1UdDgQWBBQ6
7is6c6bDXDmQ6oU/2nEze5FNfzAfBgNVHSMEGDAWgBQU/BS89KU+awxYO987JjVG
oL7s8TANBgkqhkiG9w0BAQsFAAOCAgEAymayVWTmQKWFGRFmDWOJ+w06DOz9y1yT
RB4/G8r1PYWrCgvc8xgdH+yF//OCUp7HEhkH6WqCvTL20RmytwkcNNeJRX5RTULW
Tni2ObN2WPggV7PYe+CzL86folne9jHyCRyRO3+XYcsRpLRzq0dk6JMHmNVHdY2a
j6OP6PRCfrgb6DZyE5P5qMxttIWnr5T+8252wk14w8ILpEgn0+tSw0YUwSYDKKBT
x9tZyZW42fDZqBlKpw+BrTzh7PIhUQ28+fn2tnUCn0Pe5i+bd9PDcm/2GNejQ5HS
BCrIv2cjNbdBP9Fj/txTpybp9O47ltUqnG0FPSduVy/J3BIGLM8MGwliXFCCd2tc
iTKGayYw0m4zIPymvlrwFhqdt+DV17vYNVfSvtUHmLc8GDj5lEwmOv7yrUDmle9L
qd+wBoeibPJqA4U7l6fv5uXZw1eHCQauilpjJrk1KaWHS3sIuWMcw2V+l655ee3D
ozbDhx9U/grxGsFxPbyeNvzaAythtRkM100ZN2FFkUzJelsAzcItNvkfwrGXK3iG
qnUPCn8EhYHFi76vpqd60hcmeoYN+P7AJ6hmx5LNxTSZyY7BJfOY305IN0rudkr6
5Ga0H83Y4CX9xwuzEq+7tymYXobyEo4gxqlAbzkUi3GfmCKgW1fR8Yh9hq0ZBHt9
7vLJh/TKBgc=
-----END CERTIFICATE-----
Signed certificate is in newcert.pem

The script automatically renamed the newly signed certificate. In the above example, the signed certificate is in www.example.com.2014.crt. Transfer this file back to the server it belongs on and you’re all set to start using it.
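
As a sanity check before deploying it, you can confirm that the new certificate actually chains back to your root. Something along these lines should report OK (filenames as in the example above):

# Verify the signed certificate against the home-grown root certificate
openssl verify -CAfile /etc/pki/CA/cacert.pem www.example.com.2014.crt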

That’s it! You’re now a certificate authority with the power to sign your own certificates. Don’t let all that power go to your head!

SSL “Security”

February 7th, 2014

SSL, a cryptographically secure protocol, was created by Netscape in the mid-1990s. Today, SSL, and its replacement, TLS, are used by web browsers and other programs to create secure connections between devices across the Internet.

SSL provides the means to cryptographically secure a tunnel between endpoints, but there is another aspect of security that is missing. Trust. While a user may be confident that the data received from the other end of the SSL tunnel was sent by the remote system, the user cannot be confident that the remote system is the system it claims to be. This problem was partially solved through the use of a Public Key Infrastructure, or PKI.

PKI, in a nutshell, provides the trust structure needed to make SSL secure. Certificates are issued by a certificate authority or CA. The CA cryptographically signs the certificate, enabling anyone to verify that the certificate was issued by the CA. Other PKI constructs offer validation of the registrant, indexing of the public keys, and a key revocation system. It is within these other constructs that the problems begin.

When SSL certificates were first offered for sale, the CAs spent a great deal of time and energy verifying the identity of the registrant. Often, paper copies of the proof had to be sent to the CA before a certificate would be issued. The process could take several days. More recently, the bar for entry has been lowered significantly. Certificates are now issued through an automated process requiring only that the registrant click on a link sent to one of the email addresses listed in the Whois information. This lack of thorough verification has significantly eroded the trust a user can place in the authenticity of a certificate.

CAs have responded to this problem by offering different levels of SSL certificates. Entry-level certificates are verified automatically via the click of a link. Higher-level SSL certificates have additional identity verification steps. And at the highest level, the Extended Validation, or EV certificate requires a thorough verification of the registrant’s identity. Often, these different levels of SSL certificates are marketed as stronger levels of encryption. The reality, however, is that the level of encryption for each of these certificates is exactly the same. The only difference is the amount of verification performed by the CA.

Despite the extra level of verification, these certificates are almost indistinguishable from one another. With the exception of EV certificates, the only noticeable difference between differing levels of SSL certificates is the identity details obtained before the certificate is issued. An EV certificate, on the other hand, can only be obtained from certain vendors, and shows up in a web browser with a special green overlay. The intent here seems to be that websites with EV certificates can be trusted more because the identity of the organization running the website was more thoroughly validated.

In the end, though, trust is the ultimate issue. Users have been trained to just trust a website with an SSL certificate. And to trust sites with EV certificates even more. In fact, there have been a number of marketing campaigns targeted at convincing users that the “Green Address Bar” means that the website is completely trustworthy. And they’ve been pretty effective. But, as with most marketing, they didn’t quite tell the truth. Sure, the EV certificate may mean that the site is more trustworthy, but it’s still possible that the certificate is fake.

There have been a number of well-known CAs that have been compromised in recent years, DigiNotar and Comodo being two of the more high-profile ones. In both cases, it became possible for rogue certificates to be created for any website the attacker wanted to hijack. That certificate plus some creative DNS poisoning and the attacker suddenly looks like your bank, or Google, or whatever site the attacker wants to be. And they’ll have a nice shiny green EV certificate.

So how do we fix this? Well, one way would be to use the certificate revocation system that already exists within the PKI infrastructure. If a certificate is stolen, or a false certificate is created, the CA has the ability to put the signature for that certificate into the revocation system. When a user tries to load a site with a bad certificate, a warning is displayed telling the user that the certificate is not to be trusted.
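
For what it’s worth, on the CA side the mechanics of revocation are only a couple of OpenSSL commands; the hard part is distributing the revocation list and getting clients to actually check it. A rough sketch, assuming an OpenSSL-based CA using the standard CentOS layout (filenames are illustrative):

# Revoke the compromised certificate using the CA database
openssl ca -revoke www.example.com.2014.crt

# Generate an updated certificate revocation list to publish
openssl ca -gencrl -out /etc/pki/CA/crl/crl.pem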

Checking revocation of a certificate takes time, and what happens if the revocation server is down? Should the browser let the user go to the site anyway? Or should it block by default? The more secure option is to block, of course, but most users won’t understand what’s going on. So most browser manufacturers have either disabled revocation checking completely, or they default to allowing a user to access the site when the revocation site is slow or unavailable.

Without the ability to verify if a certificate is valid or not, there can be no real trust in the security of the connection, and that’s a problem. Perhaps one way to fix this problem is to disconnect the revocation process from the process of loading the webpage. If the revocation check happened in parallel to the page loading, it shouldn’t interfere with the speed of the page load. Additional controls can be put into place to prevent any data from being sent to the remote site without a warning until the revocation check completes. In this manner, the revocation check can take a few seconds to complete without impeding the use of the site. And after the first page load, the revocation information is cached anyway, so subsequent page loads are unaffected.

Another option, floated by the browser builders themselves, is to have the browser vendors host the revocation information. This information is then passed on to the browsers when they’re loaded. This way the revocation process can be handled outside of the CAs, handling situations such as those caused by a CA being compromised. Another idea would be to use short term certificates that expire quickly, dropping the need for revocation checks entirely.

It’s unclear what direction the market will move with this issue. It has been over two years since the attacks on DigiNotar and Comodo and the immediacy of this problem seems to have passed. At the moment, the only real fix for this is user education. But with the marketing departments for SSL vendors working to convince users of the security of SSL, this seems unlikely.

BSides Delaware 2013

November 12th, 2013

The annual BSides Delaware conference took place this past weekend, November 8th and 9th. BSides Delaware is a free community driven security event that takes place at the Wilmington University New Castle campus. The community is quite open, welcoming seasoned professionals, newcomers, curious individuals, and even children. There were a number of families who attended, bringing their children with them to learn and have fun.

I was fortunate enough to be able to speak at last year’s BSides and was part of the staff for this year’s event. There were two tracks of talks, many of which were recorded and are already online thanks to Adrian Crenshaw, the IronGeek. Adrian has honed his video skills and was able to have every recording online by the closing ceremonies on Saturday evening.

In all, there were more than 25 talks over the course of two days, covering a wide variety of topics: logging, Bitcoin, forensics, and more. While most speakers were established security professionals, there were a few new speakers striving to make a name for themselves.

This year also included a FREE wireless essentials training class. The class was taught by a team of world-class instructors including Mike Kershaw (drag0rn), author of the immensely popular Kismet wireless tool, Russell Handorf from the FBI Cyber Squad, and Rick Farina, lead developer for Pentoo. The class covered everything from wireless basics to software-defined radio hacking. An absolutely amazing class.

In addition to the talks, BSides also featured not one, but two lockpick villages. Both Digital Trust and Toool were present. The lockpick villages were a big hit with seasoned professionals as well as the very young. It’s amazing to see how adept a young child can be with a lockpick.

Hackers for Charity was present as well with a table of goodies for sale. They also held a silent (and not so silent) auction where all proceeds went to the charity. Hackers for Charity raises money to help with a variety of projects they engage in across the world. From their website :

We employ volunteer hackers and technologists through our Volunteer Network and engage their skills in short projects designed to help charities that can not afford traditional technical resources.

We’ve personally witnessed how one person can have a profound impact on the world. By giving of their skills, time and talent our volunteers are profoundly impacting the world, one “hacker” at a time.

BSides 2013 was an amazing experience. This was my second year at the conference and it’s amazing how it has grown. The dates for BSidesDE 2014 have already been announced, November 14th and 15th. Mark your calendars and make an effort to come join in the fun. It’s worth it.

Pebble Review

April 3rd, 2013

In April of 2012, a Kickstarter project was launched by a company aiming to create an electronic watch that served as a companion to your smartphone. A month later, the project had exceeded its funding goal many times over, closing at over $10 million in pledges. Happily, I was one of the over 68,000 people who pledged. I received my Pebble about a month ago and I’ve been wearing it ever since.

The watch itself is fairly simple: a rectangular unit with an e-ink display, four buttons, and a rubberized plastic strap. The screen resolution is 144×168, plenty of pixels for some fairly impressive detail. The watch communicates with your mobile phone (Android or iPhone only) via a Bluetooth connection. All software updates and app installations occur over the Bluetooth connection. There is a 3-axis accelerometer as well as a pretty standard vibrating motor for silent alerts.

According to the official Pebble FAQ, battery life is 7+ days on a single charge, but this depends on your overall use of the device. The more alerts you receive, the more the backlight comes on, and the more apps you use on the device, the shorter your battery life.

Pebble is still in the process of building the initial run of watches for backers. Black watches, being the majority of the orders, were built first. Other colors are coming online in more recent weeks. Pebble has a website where interested parties can track how many pebbles have been built and shipped.

I’ve been pretty impressed with the watch thus far. Pebble has been fairly responsive to inquiries I’ve made, and they seem dedicated to making sure they have a top quality product. Of course, as is typical on the Internet, not everyone is happy. There seem to be a lot of complaints about communication, how long it’s taking to get watches, and about the features themselves.

It’s hard to say whether these complaints have any merit, though. For starters, I can’t imagine it’s a simple task to design and build 68,000 watches in a short period of time. And to complicate matters further, it seems that many backers of Kickstarter projects don’t understand the difference between being a backer and being a customer.

When you back a Kickstarter project, you’re pledging money to help start the project. As a “reward” for contributing, if the project is successful, you are entitled to whatever the project owners have designated for your level of contribution. The key part of this being, if the project is successful. Some projects take longer than others, and times often slip. That said, I’ve only been part of one Kickstarter that has failed, and even that one is being resurrected by other interested parties.

But there are some legitimate complaints, some that can be addressed, and others that likely won’t. For instance, I’ve noticed that with recent firmware releases, the battery life on my watch had dropped considerably. Based on communication with the developers, they are aware of this and are actively working to resolve it. I’m not sure what the problem is, exactly, but I’m confident they’ll have it fixed in the next firmware update.

The battery indicator is a source of frequent discussion. Right now, there’s no indicator of battery life until the battery is running low. And that indicator doesn’t show on the watchface, it only shows when you are in other menus. This, in my opinion, is a poor UI choice. I’d much rather see a battery indicator option available for the watchface itself.

Menu layout was also a frequent source of frustration for users. In previous firmware releases, you had to actively navigate to the watchface you wanted. Recent releases changed this so that the watchface is the default view and other screens are chosen as needed. The behavior of the navigation buttons on the watch was also updated to reflect this new choice.

So Pebble continues to improve over time. It’s an iterative process that will take some time to get right. I’m eager to see what future releases will bring. Next week, Pebble is scheduled to release the watch SDK, allowing users, for the first time, to start adding their own customizations to the watch.

The Pebble watch has a lot of potential. As the platform matures, I’m hoping to see a number of features I’m interested in come to fruition. Interaction between Pebble and other apps on iPhone devices would be a welcome addition. I would love to see an actigraphy app that uses the Pebble for sleep monitoring. From what I’ve read, sleep monitoring is even more accurate when the monitor is placed on the sleeper’s wrist. Seems like a perfect use for the Pebble.

I’d also like to see more of an open SDK, allowing users such as myself to write code for the Pebble. While I’m aware of the closed nature of the iPhone platform itself, it is still possible to add applications to the Pebble itself. I can’t wait to see what others build for this platform. Given a bit of time, I think this can grow into something even more amazing.

Customer Dis-Service

January 13th, 2013

In general, I’m a pretty loyal person. Especially when it comes to material things. I typically find a vendor I like and stick with them. Sure, if something new and flashy comes along, I’ll take a look, but unless there’s a compelling reason to change, I’ll stick with what I have.

But sometimes a change is forced upon me. Take, for instance, this last week. I’ve been a loyal Verizon customer for … wow, about 15 years or so. Not sure I realized it had been that long. Regardless, I’ve been using Verizon’s services for a long time. I’ve been relatively happy with them, no major complaints about services being down or getting the runaround on the phone. In fact, my major gripe with them had always been their online presence which seemed to change from month to month. I’ve had repeated problems with trying to pay bills, see my services, etc. But at the end of the day, I’ve always been able to pay the bill and move on. Since that’s really the only thing I used their online service for, I was content to leave well enough alone.

In more recent months, we’ve been noticing that the 3M DSL service we had was starting to fall short. Not Verizon’s fault at all, but the fault of an increased strain on the system at our house. Apparently 3M isn’t nearly enough bandwidth to satisfy our online hunger. That, coupled with the price we were paying, had me looking around for other services. Verizon still doesn’t offer anything faster than 3M in the area and, unfortunately, the only other service in the area is from a company that I’d rather not do business with if I could avoid it.

In the end, I thought perhaps I could make some slight changes and at least reduce the monthly bill by a little until we determined a viable solution. I was considering adding a second DSL line, connected to a second wireless router, to relieve the tension a bit. This would allow me to avoid that other company and provide the bandwidth we needed. My wife and I could enjoy our own private upstream and place the rest of the house on the other line.

Ok, I thought, let’s dig into this a bit. First things first, I decided to get rid of the home phone, or at least transfer it to a cheaper solution. My cell provider offered a $10/month plan for home phones. Simple process: port the number over, install this little box in the house, and poof. Instant savings. Best part, that savings would be just about enough to get that second DSL line.

Being cautious, and not wanting to end up without a DSL connection, I contacted Verizon. Having worked for a telco in the past, I knew that some telcos required that you have a home phone line in order to have DSL service. This wasn’t a universal truth, however, and it was easy enough to verify. The first call to Verizon went a little sideways, though. I ended up in an automated system. Sure, everyone uses these automated systems nowadays, but I thought this one was particularly condescending. They added additional sound effects to the prompts so that when you answered a question, the automated voice would acknowledge your request and then type it in. TYPE IT IN. I don’t know why, but this drove me absolutely crazy. Knowing that I was talking to a recorded voice and then having that recorded voice playing sounds like they were typing on a keyboard? Infuriating. And, on top of it, I ended up in some ridiculous loop where I couldn’t get an operator unless I explicitly stated why I wanted an operator, but the automated system apparently couldn’t understand my request.

Ok, time out, walk away, try again later. The second time around, I lied. I ended up in sales, so it seems to have worked. I explained to the lady on the phone what I was looking for. I wanted to cancel my home phone and just keep the DSL. I also wanted to verify that I was not under contract so I wouldn’t end up with some crazy early termination fee. She explained that this was perfectly acceptable and that I could make these changes whenever I wanted. I verified again that I could keep the DSL without issue. She agreed, no problem.

Excellent! Off I went to the cell carrier, purchased (free with a contract) the new home phone box, and had them port the number. The representative cautioned that he saw DSL service listed when he was porting and suggested I contact Verizon to verify that the DSL service would be ok.

I called Verizon again to verify everything would work as intended. I explained what I had done, asked when the port would go through, and stressed that the DSL service was staying. The representative verified the port date and said that the DSL service would be fine.

You can guess where this is going, can’t you? On the day of the port, the phone line switched as expected. The new home phone worked perfectly and I made the necessary changes to the home wiring to ensure that the DSL connection was isolated from the rest of the wiring. DSL was still up, phone ported, everything was great. Until the next morning.

I woke up the following morning and started my normal routine. Get dressed, go exercise, etc. Except that on the way to exercise, I noticed that the router light was blinking. Odd. I wondered what was going on. Perhaps something had knocked the system offline overnight? The DSL light on the modem was still on, so I had a connection to the DSLAM. No problem, reboot the router and we’ll be fine. So, I rebooted and walked away. After a few minutes I checked the system and noticed that I was still not able to get online. I walked through a mental checklist and decided that the username and password for the PPPoE connection must be failing. Time to call Verizon and see what was wrong.
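For what it’s worth, that mental checklist boils down to something like the little sketch below. This is a rough illustration only, and the 192.168.1.1 address is just a stand-in for whatever address your modem or router actually uses: if the local box answers but nothing past it does, the PPPoE session, and most likely its credentials, is the first thing to suspect.

import subprocess

def reachable(host: str) -> bool:
    """Return True if a single ping to the host succeeds (flags shown are the Linux ones)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# 192.168.1.1 is a placeholder for the modem/router’s LAN address.
if not reachable("192.168.1.1"):
    print("Can’t even reach the modem -- look at the LAN or the wiring first.")
elif not reachable("8.8.8.8"):
    print("Modem answers but the Internet doesn’t -- the PPPoE session "
          "(credentials or the account itself) is the likely culprit.")
else:
    print("Everything pings; the problem is probably elsewhere (DNS, a single site, etc.).")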

I contacted Verizon and first spoke to a sales rep who informed me that my services had been cancelled per my request. Wonderful. All that work and they screw it up anyway. I explained what I had done and she took a deeper look into the account. Turns out the account was “being migrated” and she apologized for the mixup. Since I was no longer bundled, the DSL account had to be migrated. I talked with her some more about it and she decided to send me to technical support to verify everything was ok. Off I go to technical support, fully expecting them to ask me to reset my DSL modem. No such luck, however; the technical support rep explained that I had no DSL service.

And back to sales I went. I explained, AGAIN, what was going on. The representative confirmed my story, verified that the account was being migrated, and asked me to check the service again in a few hours. All told, I spent roughly an hour on the phone with Verizon and missed out on my morning exercise.

After rushing through the remainder of my morning routine and explaining to my wife why the Internet wasn’t working, I left for work. My wife checked in a few hours later to let me know that, no, we still did not have an Internet connection. So I called Verizon again. Again I was told that I had no service and that I had cancelled it. Again I explained the problem and what I had done. And this time, the representative explained to me that they no longer offer unbundled DSL service; they haven’t had that offering in about a year. She went on to offer me a bundled package with a phone line and explained that I didn’t have to use the phone line, I just had to pay for it.

So all of the careful planning I had done was for naught. In an effort to make sure this didn’t happen to anyone else, the rep checked back on my account to see who had informed me about the DSL service. According to the notes, however, I had never called about such a thing. Instead, the notes claimed I had called to complain about unsolicited phone calls, been referred to their fraud and abuse office, and been told about the magical phone code I could enter to block such calls. Ugh! She then went on to document every aspect of my actual problem, again so someone else wouldn’t end up in the same situation.

This is the sort of situation that will, very rapidly, cause me to look elsewhere for service. And that’s exactly what I did. I’ve since cut all ties with Verizon and moved on to a different Internet service provider. I’m not happy with having to deal with this provider, but it’s the only alternative at the moment. Assuming I don’t have any major problems with the service, I’ll probably continue with them for a while. Of course, if I run into problems here, the decision becomes more difficult. A “lesser of two evils” situation, if you will. But for now, I’ll deal with what comes up.

Programming Note

January 3rd, 2013

In 2012 I posted a little over a dozen entries to this blog. I like to think that each entry was well thought out and time well spent. But only a dozen? That’s about one entry a month… I’d really like to do more.

So, new year, time to make some changes. In the past, I spent a lot of time judging whether each post was “worth the effort” and “long enough to matter.” I need to get past that. My goal is to start posting a number of smaller entries. I definitely want the quality to be there, but I want to avoid agonizing over each and every entry.

So here’s to a new year and more content!

Derbycon 2012

October 6th, 2012

I spent this past weekend in Louisville, KY attending a relatively new security conference called Derbycon. This year was the second year they held the conference and the first year I spoke there. It was amazing, to say the least.

I haven’t been to many conventions, and this is the only security-oriented convention I’ve attended. When I first attended last year, it was with some trepidation. I knew that some of the attendees I’d be seeing were true rockstars in the security world. And, unfortunately, one of the people who was supposed to come with us was unable to attend. Of course, that person was the one person in our group who was connected within the security world, and we were depending on them to introduce us to everyone.

It went well, nonetheless, and we were able to meet a lot of amazing people while we were there. Going back this year, we were able to rekindle friendships that started last year, and even make a few new ones. Derbycon has an absolutely amazing sense of family. Even the true rockstars of the con are down to earth enough to hang out with the newcomers.

And this year, I had the opportunity to speak. I submitted my CFP earlier in the year, not really expecting it to be chosen. Much to my surprise, though, it was. And so I spent some time putting together my talk and prepared to stand in front of the very people I looked up to. It was nerve-wracking to say the least. You can watch the video over on the Irongeek site, and you can find the slides in my presentation archive.

But I powered through it. I delivered my talk and, while it may not have been the most amazing talk, it was an accomplishment. I think it’s given me a bit more confidence in my own abilities and I’m looking forward to giving another. In fact, I’ve since submitted a talk to BSides Delaware at the behest of the organizers. I haven’t heard back yet, but here’s hoping.

I’m already making plans to attend Derbycon 2013 and I hope to be a permanent fixture there for many years to come. Derbycon is an amazing place to go and something truly magnificent to experience. I may not be in the security industry, but they made me feel truly welcome despite my often dumb questions and inane comments. Rel1k, IronGeek, and Purehate have put together something special and I was proud to be a part of it again.