Does it do the thing?

January 4th, 2019

Back in 2012 I gave a talk at Derbycon 2.0. This was my first infosec talk and I was a little nervous, to say the least. Anyway, I described a system I wanted to write that handled distributed baseline scanning.

After a lot of starts and stops, I finished a basic 1.0 version in 2014. It’s still quite rough and I’ve since been working, intermittently, on making the system more robust. I’ve also been working on a Python replacement for the current PHP GUI. The repository is located here, if you’re interested in taking a look.

Why am I telling you all of this? Well, as part of the updates I’m making, I wanted to do things the “right way” and make sure I have unit testing in place before I start making additional changes to the code. Problem is, while I learned about unit testing, I’ve never really implemented it in any meaningful way, so this is a bit new to me.

So why unit testing? Well, the hypothesis is that by creating tests that exercise every line of code, you ensure that the code is working as expected. Thus, if the tests pass, the code should be solid and bug free. In reality, this is rarely the case. Tests can be just as flawed as any other code. Additionally, you may fail to test certain corner cases and so miss potential bugs. In the end, the general consensus seems to be that unit testing is a complicated, almost religious, argument.

Let’s assume that we want to unit test anyway and move on to the actual testing bits, shall we? We’ll start with a contrived example to make things easier. Assume we have the following code in a file called mytestcode.py:

#!/usr/bin/env python

def add(value1, value2):
    return value1 + value2

Simple enough: just a function that returns the sum of two numbers. Let’s create some test cases, shall we?

#!/usr/bin/env python

from mytestcode import add

class TestAdd(object):
    def test_add(self):
        assert add(1,1) == 2

    def test_add_fail(self):
        assert add(1,1) != 3

What we have here are two simple test cases. First, we test that calling the add function with the values 1 and 1 returns 2. Second, we test that providing the same values as input does not return 3. Simple, right? But have we really tested all of the corner cases? What happens if we feed the function a negative number? How about a non-numeric value? Are there cases where we can cause an exception?
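
To make that concrete, here are a few additional test methods that could be added to the TestAdd class above (this assumes the test file also does an import pytest, which the snippet above doesn’t show). The expected results follow from how Python’s + operator behaves rather than from any deliberate design in add():

    def test_add_negative(self):
        assert add(-1, 1) == 0

    def test_add_concatenates_strings(self):
        # '+' happily concatenates two strings, which may surprise callers
        assert add('a', 'b') == 'ab'

    def test_add_mixed_types(self):
        # mixing a string and an integer raises a TypeError
        with pytest.raises(TypeError):
            add('a', 1)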

To be fair, the original function is poorly written and is merely being used as a simple example. This is the problem with contrived examples, of course. They miss important details, often simplify things too much, and can lead beginners to make big mistakes when using them as teaching tools. So please be aware: the above code really isn’t very good code. It’s intended to be simple to understand.

Let’s take a look at some “real” code directly from my distributed scanner project. This particular code is something I found on Stack Overflow when I was looking for a way to identify whether a process was still running or not.

#!/usr/bin/python

import errno
import os
import sys

def pid_exists(pid):
    """Check whether pid exists in the current process table.
    UNIX only.
    """
    if pid < 0:
        return False
    if pid == 0:
        # According to "man 2 kill" PID 0 refers to every process
        # in the process group of the calling process.
        # On certain systems 0 is a valid PID but we have no way
        # to know that in a portable fashion.
        raise ValueError('invalid PID 0')
    try:
        os.kill(pid, 0)
    except OSError as err:
        if err.errno == errno.ESRCH:
            # ESRCH == No such process
            return False
        elif err.errno == errno.EPERM:
            # EPERM clearly means there's a process to deny access to
            return True
        else:
            # According to "man 2 kill" possible error values are
            # (EINVAL, EPERM, ESRCH)
            raise
    else:
        return True

Testing this code should be relatively straightforward, with the exception of the os.kill call. For that, we’ll need to delve into mock objects. Let’s tackle the simple cases first:

#!/usr/bin/env python

import pytest

from libs.funcs import pid_exists

class TestFuncs(object):
    def test_pid_negative(self):
        assert pid_exists(-1) == False

    def test_pid_zero(self):
        with pytest.raises(ValueError) as e_info:
            pid_exists(0)

    def test_pid_typeerror(self):
        with pytest.raises(TypeError):
            pid_exists('foo')
        with pytest.raises(TypeError):
            pid_exists(5.0)
        with pytest.raises(TypeError):
            pid_exists(1234.4321)

That’s relatively simple. We verify that False is returned for a negative PID and that a ValueError is raised for a PID of zero. We also test that a TypeError is raised if we don’t provide an integer value. What’s left is handling a valid PID and testing that it returns True for a running process and False otherwise. To test the rest, we could go through a lot of elaborate setup to start a process, get its PID, and then test our code, but there’s a lot that can go wrong there. Additionally, we’re looking to test our logic and not the entirety of another module. So, what we really want is a way to provide an arbitrary return value for a given call. Enter the mock module.

The mock module is part of the unittest framework in Python (available as unittest.mock since Python 3.3; older versions used the separate mock package). Essentially, the mock module allows you to identify a call or an object that you want to create a fake version of, and then define the behavior you expect that mocked version to have. So, for instance, you can mock a function call and simply provide the return value you’re looking for instead of having to call the real function. This lets you precisely test your own logic rather than performing a deeper integration test.

To finish up our testing code for the pid_exists() function, we want to mock the os.kill() function and have it return specific values so we can check the various branches of code we have.

    @patch('os.kill')
    def test_pid_exists(self, oskillobj):
        oskillobj.return_value = None
        assert pid_exists(100) == True

    @patch('os.kill')
    def test_pid_does_not_exist(self, oskillobj):
        oskillobj.side_effect = OSError(errno.ESRCH, 'No such process')
        assert pid_exists(1234) == False

    @patch('os.kill')
    def test_pid_no_permissions(self, oskillobj):
        oskillobj.side_effect = OSError(errno.EPERM, 'Operation not permitted')
        assert pid_exists(1234) == True

    @patch('os.kill')
    def test_pid_invalid(self, oskillobj):
        oskillobj.side_effect = OSError(errno.EINVAL, 'Invalid argument')
        with pytest.raises(OSError):
            pid_exists(2468)

    @patch('os.kill')
    def test_pid_os_typeerror(self, oskillobj):
        oskillobj.side_effect = TypeError('an integer is required (got type str)')
        with pytest.raises(TypeError):
            pid_exists(1234)
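
One detail not shown above: for these mocked tests to run, the test file needs a couple of imports beyond the ones listed earlier. Assuming Python 3, where mock ships with the standard library, the top of the file would also contain something like:

import errno  # provides errno.ESRCH, errno.EPERM, and errno.EINVAL for the side_effect exceptions

from unittest.mock import patch  # provides the @patch decorator used above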

The above code tests all of the branching available in the rest of the code, verifying the logic we’ve written. The code should be pretty straightforward. The return_value attribute of a mock object directly defines what we want the mocked function to return, while the side_effect attribute allows us to raise an exception in response to the function call. With those two features of a mocked object, we’re able to successfully test the rest of the cases we need.

This little journey to learn how to write unit tests has been fun and informative. I just need to finish up the rest of the code, striving to hit as close to 100% coverage as I can while keeping the test cases reasonable. It’s taken a while to get going, but the more code I’ve been writing, the faster and more accurate I’m getting. As they say, “practice makes perfect,” though I’d settle for functionally complete and relatively bug-free.

One final word of caution. I’m a sole developer working on this code, so I’m the only one around to write test cases. In a larger shop, the originator of the logic should not be the one writing the test cases. The reason is that the original coder typically knows their code quite well and has expectations regarding how the code will be used. For instance, I expect anyone calling the add() function I wrote above to supply only numbers, and I haven’t added any sort of type checking or input validation. As a result, I avoided adding test cases that supply invalid inputs, knowing they would fail. Someone else writing the test cases would likely have provided a number of different inputs and found that input validation was missing. So if you’re in a larger shop, do yourself a favor and have someone else write your test cases. And to ensure they provide robust test cases, only provide the function prototypes and not the full function definitions.

It’s docker, it’s a container, it’s… a process?

November 29th, 2018

In a previous post I discussed Docker from a high level. In this post, we’ll take a closer look at how processes run in a container and how it differs from the common view of the architecture that is used to explain Docker. Remember this?

Common Docker Architecture Overview

The problem with this image, however, is that while it helps conceptualize what we’re talking about, it doesn’t reflect reality. If you listed the processes outside of the container, you might expect to see the docker daemon running along with a handful of additional processes representing the containers themselves:

[root@dockerhost ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Oct15 ?        00:02:40 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root         2     0  0 Oct15 ?        00:00:03 [kthreadd]
root         3     2  0 Oct15 ?        00:03:44 [ksoftirqd/0]
...
root      4000     1  0 Oct15 ?        00:03:44 dockerd
root      4353  4000  0 Oct15 ?        00:03:44 myawesomecontainer1
root      4354  4000  0 Oct15 ?        00:03:44 myawesomecontainer2
root      4355  4000  0 Oct15 ?        00:03:44 myawesomecontainer3

And while this might be what you’d expect based on the image above, it does not represent reality. What you’ll actually see is the docker daemon running with a number of additional helper daemons to handle things like networking, and the processes that are running “inside” of the containers like this:

[root@dockerhost ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Oct15 ?        00:02:40 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root         2     0  0 Oct15 ?        00:00:03 [kthreadd]
root         3     2  0 Oct15 ?        00:03:44 [ksoftirqd/0]
...
root      1514     1  0 Oct15 ?        04:28:40 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt nat
root      1673  1514  0 Oct15 ?        01:27:08 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout
root      4035  1673  0 Oct31 ?        00:00:07 /usr/bin/docker-containerd-shim-current d548c5b83fa61d8e3bd86ad42a7ffea9b7c86e3f9d8095c1577d3e1270bb9420 /var/run/docker/libcontainerd/
root      4054  4035  0 Oct31 ?        00:01:24 apache2 -DFOREGROUND
33        6281  4054  0 Nov13 ?        00:00:07 apache2 -DFOREGROUND
33        8526  4054  0 Nov16 ?        00:00:03 apache2 -DFOREGROUND
33       24333  4054  0 04:13 ?        00:00:00 apache2 -DFOREGROUND
root     28489  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.22.0.3 -container-port 443
root     28502  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.22.0.3 -container-port 80
33       19216  4054  0 Nov13 ?        00:00:08 apache2 -DFOREGROUND

Without diving too deep into this, the docker processes you see above serve a few purposes. The main dockerd process is responsible for the management of docker containers on this host. The containerd processes handle the lower-level management tasks for the containers themselves. And finally, the docker-proxy processes handle the networking layer, forwarding traffic between published ports on the host and the containers they belong to.

You’ll also see a number of apache2 processes mixed in here as well. Those are the processes running within the container, and they look just like regular processes running on a linux system. The key difference is that a number of kernel features are being used to isolate these processes from the rest of the system. On the docker host you can see them, but when viewing the world from the context of a container, you cannot.

What is this black magic, you ask? Well, it’s primarily two kernel features called Namespaces and cgroups. Let’s take a look at how these work.

Namespaces are essentially internal mapping mechanisms that allow processes to have their own collections of partitioned resources. So, for instance, a process can have a pid namespace, allowing that process to start a number of additional processes that can only see each other and not anything outside of the main process that owns the pid namespace. So let’s take a look at our earlier process list example. Inside of a given container you may see this:

[root@dockercontainer ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Nov27 ?        00:00:12 apache2 -DFOREGROUND
www-data    18     1  0 Nov27 ?        00:00:56 apache2 -DFOREGROUND
www-data    20     1  0 Nov27 ?        00:00:24 apache2 -DFOREGROUND
www-data    21     1  0 Nov27 ?        00:00:22 apache2 -DFOREGROUND
root       559     0  0 14:30 ?        00:00:00 ps -ef

While outside of the container, you’ll see this:

[root@dockerhost ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Oct15 ?        00:02:40 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root         2     0  0 Oct15 ?        00:00:03 [kthreadd]
root         3     2  0 Oct15 ?        00:03:44 [ksoftirqd/0]
...
root      1514     1  0 Oct15 ?        04:28:40 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt nat
root      1673  1514  0 Oct15 ?        01:27:08 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout
root      4035  1673  0 Oct31 ?        00:00:07 /usr/bin/docker-containerd-shim-current d548c5b83fa61d8e3bd86ad42a7ffea9b7c86e3f9d8095c1577d3e1270bb9420 /var/run/docker/libcontainerd/
root      4054  4035  0 Oct31 ?        00:01:24 apache2 -DFOREGROUND
33        6281  4054  0 Nov13 ?        00:00:07 apache2 -DFOREGROUND
33        8526  4054  0 Nov16 ?        00:00:03 apache2 -DFOREGROUND
33       24333  4054  0 04:13 ?        00:00:00 apache2 -DFOREGROUND
root     28489  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.22.0.3 -container-port 443
root     28502  1514  0 Oct31 ?        00:00:01 /usr/libexec/docker/docker-proxy-current -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.22.0.3 -container-port 80
33       19216  4054  0 Nov13 ?        00:00:08 apache2 -DFOREGROUND

There are two things to note here. First, within the container, you’re only seeing the processes that the container runs. No systemd, no docker daemons, etc. Only the apache2 and ps processes. From outside of the container, however, you see all of the processes running on the system, including those within the container. Second, the PIDs listed inside of the container are different from those outside of the container. In this example, PID 4054 outside of the container maps to PID 1 inside of the container. This provides a layer of security such that a process running inside of a container can only interact with other processes running in that container. And if you kill process 1 inside of a container, the entire container comes to a screeching halt, much as if you killed process 1 on a linux host.
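
If you’d rather not eyeball the full ps output to work out which host processes belong to which container, docker top lists a container’s processes as the host sees them, host PIDs included. A rough sketch, using a placeholder container name:

[root@dockerhost ~]# docker top myawesomecontainer1
UID        PID   PPID  C  STIME  TTY  TIME      CMD
root       4054  4035  0  Oct31  ?    00:01:24  apache2 -DFOREGROUND
33         6281  4054  0  Nov13  ?    00:00:07  apache2 -DFOREGROUND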

PID namespaces are only one of the namespaces that Docker makes use of. There are also NET, IPC, MNT, UTS, and User namespaces, though User namespaces are disabled by default. Briefly, these namespaces provide the following:

  • NET
    • Isolates a network stack for use within the container. Network stacks can be, and typically are, shared between containers.
  • IPC
    • Provides isolated Inter-Process Communications within a container, allowing a container to use features such as shared memory while keeping the communication isolated within the container.
  • MNT
    • Allows mount points to be isolated, preventing mount points created within the container from being added to the host system.
  • UTS
    • Allows different host and domain names to be presented to containers.
  • User
    • Allows users and groups within a container to be mapped to different users and groups on the host system, thereby preventing a root user within a container from running as uid 0 (root) outside of the container.
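
If you want to play with namespaces without Docker in the mix, the unshare utility from util-linux can create them directly. As a rough sketch (exact PIDs and prompts will vary), launching a shell in its own PID namespace looks something like this:

[root@dockerhost ~]# unshare --fork --pid --mount-proc bash
[root@dockerhost ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 14:35 pts/0    00:00:00 bash
root         8     1  0 14:35 pts/0    00:00:00 ps -ef

The bash process believes it is PID 1, just like the apache2 process in the container example above.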

The second piece of black magic used is Control Groups, or cgroups. Cgroups isolate resource usage for a process. Where namespaces create a localized view of resources for a process, cgroups create a limited pool of resources for it. For instance, you can assign specific CPU, memory, and disk I/O limits to a container. With a cgroup assigned, the process cannot exceed the limits placed on it, preventing processes from “running away” and exhausting system resources. Instead, the process either lives within its lower resource limits, or crashes.
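
Docker exposes these cgroup controls as flags on docker run. As a quick illustration (the image name is just a placeholder), the following would cap a container at 512 MB of memory and a single CPU:

[root@dockerhost ~]# docker run -d --memory=512m --cpus=1 myawesomeimage

If the processes inside that container try to exceed the memory limit, the kernel’s OOM killer steps in, which is the “crash” scenario described above.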

By themselves, these features can be a bit daunting to set up for each process or group of processes. Docker conveniently packages this up, making deployment as simple as a docker run command. Combined with the packaging of a Docker container (which I’ll cover in a future post), Docker becomes a great way to deploy software in a reproducible, secure manner.

The obligatory Docker 101 post

November 19th, 2018

Welcome to the obligatory Docker 101 post. Before I dive into more technical posts on this subject, I thought it would be worth the time to explain what docker is and what I find exciting about it. If you’re familiar with Docker already, there likely won’t be anything new here for you, but I welcome any feedback you have.

So, what is Docker? Docker is a containerization technology first released as open source in 2013. But what is containerization? Containerization, or Operating System Level Virtualization, refers to the isolation, using kernel-level features, of a set of processes such that those processes only see a localized view of the system. This differs from Platform Virtualization in that Containerization does not present a set of virtual resources to the isolated processes; it presents real resources, limited only by the configuration of that particular container.

One of the more common explanations of this architecture is shown in the following image:

Docker Layered Model

This image is a bit problematic in that it doesn’t truly represent what you actually see on a docker host, but we’ll save that for a later blog post. For now, trust that the above is a very simplified view of the docker world.

So why containerization and why Docker in particular? There are a number of benefits that containerization technology provides. Among these are immutability, portability, and security. Let’s touch briefly on each of these.

Immutability refers to the concept of something being unchangeable. In the case of containerization, a container is considered to be immutable. That is, once created, the container itself will remain unchanged for the duration of its life. But, it’s important to understand what this means in practice. The container image itself is immutable, but once running, the contents of the container can be changed within the parameters of its execution. The immutable piece of this comes into play when you destroy a running container and recreate it from the container image. That recreated container will have the exact same characteristics as the original container, assuming the same configuration is used to start the container. A notable exception to this is external volumes. Any volume external to the container is not guaranteed to be immutable as it’s not part of the original container image.

Portability refers to the ability to move containers between disparate systems and have them run exactly the same, assuming no unmet external dependencies. There are limitations to this, such as requiring the same cpu architecture across the systems, but overall, a container can be moved from system to system and be expected to behave the same. In fact, this is part of the basis of orchestration and scalability of containers. In the event of a failure, or if additional instances of a container are necessary, they can be spun up on additional systems. And provided any external dependencies are available to all of the systems that the container is spun up on, the containers will run and behave the same.

Containers provide an additional layer of security over traditional virtual or physical hosts. Because the processes are isolated within the container, an attacker is left with a very limited attack surface. In the event of a compromise, the attacker only gets a foothold on that instance of the container and is generally left with very little tooling inside of the container with which to pivot to additional resources. If an attacker is able to make changes to the running container, the admin can simply destroy the container and spin up a new one which will no longer have the compromised changes. Obviously the admin needs to identify how the attacker got in and patch the container, but this ability to destroy and recreate a container is a powerful way to stop attackers from pivoting through your systems.

Finally, the internal networking of the docker system allows containers to run with no externally accessible ports. So, for instance, if you’re running some sort of dynamic site that requires a proxy, application, and database, the system can be set up such that the proxy is the only externally accessible container. All communication between the proxy, application, and database can be performed over the internal docker networking which has no externally accessible endpoint.
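
As a rough sketch of what that looks like in practice (the network, container, and image names below are placeholders), only the proxy publishes ports on the host, while the application and database are reachable solely over the internal Docker network:

[root@dockerhost ~]# docker network create internal
[root@dockerhost ~]# docker run -d --network internal --name database mydatabaseimage
[root@dockerhost ~]# docker run -d --network internal --name application myapplicationimage
[root@dockerhost ~]# docker run -d --network internal -p 80:80 -p 443:443 --name proxy myproxyimage

From outside the host, only ports 80 and 443 on the proxy are reachable; the application and database expose no externally accessible ports at all.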

There’s a lot to be excited about here. Done correctly, the days of endlessly troubleshooting issues caused by server cruft are over. Deployment of resources becomes incredibly straightforward and rapid. Rollbacks become vastly simplified, as you can just spin up the old version of the container. Containers provide developers a means to run their code locally, exactly as it will be run in production!

I’ve been working with containers for about 3 years now and the landscape just keeps expanding. There’s so much to learn and so many new tools to play with.

Finally, I’m going to leave you with a talk by Alice Goldfuss. Alice is an engineer who currently works for GitHub. She has a ton of container experience and a lot to say about it. Definitely worth a watch.

So, new digs?

November 15th, 2018

It looks a bit different around here lately. Sure, it’s roughly the same as what it was, but something is off. A little bit here and there, so what changed?

Well, to tell the truth, I’ve switched blogging platforms. Don’t get me wrong, I love Serendipity. I’ve used it for years, love the features, love the simplicity. Unfortunately, Serendipity doesn’t have the greatest support for offline blogging, updates are relatively sparse, and it’s limited to just blogging. So I decided it’s time for a change.

Ok. Deep breath. I’ve switched to WordPress. Yes, yes, I know. I’ve decried WordPress as an insecure platform for a long time, but I’ve somewhat changed my thinking. The team at WordPress has done a great job ensuring the core platform is secure and they’re actively working to help older installations upgrade to newer releases. Plugins are where the majority of the security issues exist these days, and many of the more popular plugins are being actively scanned for security issues. So, overall, the platform has moved forward with respect to security and is more than viable.

I’ve also been leveraging Docker in recent years. We’ll definitely be talking about Docker in the coming days/weeks, so I won’t go into it here. Suffice it to say, Docker helps enhance the overall security of the system while simultaneously making it a breeze to deploy new software and keep it up to date.

So, enjoy the new digs, and hopefully more changes will be coming in the near future. WordPress is capable of doing more than just blogging and I’m planning on exploring some of those capabilities a bit more. This is very much a continuing transition, so if you see something that’s off, please leave a comment and I’ll take a look.

Rising from the ashes

November 13th, 2018

*cough* *cough*

Awfully dusty in here. Almost as if this place were abandoned. Of course, that was never the case, was it? Just a hiatus of sorts. A reprieve from the noise and the harshness of reality.

But it’s time, now. Time to whip this place back into shape. Time to put the pieces back together. Time to build something new and interesting.

I know it’s been a while, but it’s time to get back in the habit. I’ve learned a lot these past years and I want to start sharing it. Soon.

Yes, your iToaster needs security

March 24th, 2015

Let’s talk about your house for a moment. For the sake of argument, we’ll assume that you live in a nice house with doors, windows, the works. All of the various entries have the requisite locking devices. As with most homes, these help prevent unwanted entry, though a determined attacker can surely bypass them. For the moment, let’s ignore the determined attacker and just talk about casual attempts.

Throughout your time living in your home, casual attempts at illegal entry have been rebuffed. You may or may not even know about these attempts. They happen pretty randomly, but there’s typically not much in the way of evidence after the attacker gives up and leaves. So you’re pretty happy with how secure things are.

Recently, you’ve heard about this great new garage from a friend who has one. It’s really nice, low cost, and you have room for it on your property, so you decide to purchase one. You place the order and, after a few days, your new garage arrives. It’s everything you could have imagined. Plenty of room to store all the junk you have in the house, plus you can fit the car in there too!

You use the garage every day, moving boxes in and out of the garage as needed until one day you return home and, for some inexplicable reason, your car won’t fit all the way in. Well, that’s pretty weird, you think. You decide that maybe you stored too much in the garage, so you spend the rest of the day cleaning out the garage. You make some tough decisions and eventually you make enough room to put the car back in the garage.

Time passes and this happens a few more times. After a while you start to get a bit frustrated and decide that maybe you need to buy a bigger garage. You pull out your trusty measuring tape to verify the dimensions of the garage and, to your amazement, the garage is smaller than what you remember. You do some more checking and discover that the garage is actually bigger on the outside than it is on the inside. So you call an expert to figure out what’s going on.

When the expert arrives, she takes one look at the situation and tells you she knows exactly what has happened. You watch with awe as she walks up to the closed garage, places her hand on the door, and the door opens by itself! Curious, you ask how she performed that little magic trick. She explains that this particular model of garage has a little-known problem that allows the door to be opened by putting pressure on just the right place. Next, she heads into the garage and starts poking around at the walls. After a few moments, one of the walls slides open revealing another room full of stuff you don’t recognize.

Your expert explains that obviously someone else knows about this weakness and has set up a false wall in your garage to hide their own stuff in. This is the source of the shrinking space and your frustration. She helps you clean up the mess and tear down the false wall. After everything is back to normal, she recommends you contact the manufacturer and see if they have a fix for the faulty door.

While this story may sound pretty far-fetched when we’re talking about houses and garages, it’s an all too common story for consumer-grade appliances. And as we move further into this new age of connected devices, commonly called the Internet of Things (IoT), it’s going to become an even bigger issue.

Network access itself is the first challenge. Many of the major home router vendors have already experienced problems with security. So right out of the gate, home networks are potentially vulnerable. This is a major problem, especially given the potentially sensitive nature of data being transmitted by a variety of new IoT devices.

Today’s devices are incredibly data-centric. From fitness trackers to environmental sensors, our devices are tracking everything. This data is collected and then transmitted to an internet-connected service where it is made available to the user in a variety of ways. Some users may find this data to be sensitive, hoping to keep it relatively private, available only to the user and, anonymously, to the service they subscribe to. Others may make this data public. But in the world of IoT, a security problem with a device compromises that choice.

Or maybe the attacker isn’t after your data at all. Perhaps, like our garage example, they’re looking for resources they can use. Maybe they want to store files, or maybe they’re looking to use your device to process their own data. Years ago, attackers would gain access to a remote system so they could take advantage of the space available on the system, typically storing data and setting up a warez site. That is, illegal copies of software available to those who know where to look. These days, however, storage is everywhere and there are many superior ways to transmit files between users. As a result, the old-school practice of setting up a warez site has mostly fallen by the wayside.

In today’s world, attackers want access to your devices for a variety of reasons. Some attackers use these devices as zombie systems for sending massive amounts of spam. Typically this just results in a slow Internet connection and possibly gets your IP banned from sending mail. Not a big deal for you, but it can be a real headache for those of us dealing with the influx of spam.

More and more, however, attackers are taking over machines to use them for their processing power, or for their connection to the Internet. For instance, some attackers compromise machines just so they can use them to mine bitcoins. It seems harmless enough, but it can be an inconvenience to the owner of the device when it doesn’t respond the way it should because it’s too busy working on something else.

Attackers are also using the Internet connections for nefarious purposes such as setting up denial of service hosts. They use your connection, and the connections of other systems they have compromised, to send massive amounts of data to a remote system. The entire purpose of this activity is to prevent the remote system from being accessible. It was widely reported that this sort of activity is what caused connectivity problems for both Microsoft’s Xbox Live service and the Playstation Network during Christmas of 2014.

So what can we do about this? Users clearly want this technology, so we need to do something to make it more secure. And to be clear, this problem goes beyond the vendors, it includes the users as well. Software has and will always have bugs. Some of these bugs can be exploited and result in a security problem. So the first step is ensuring that vendors are patching those bugs when they’re found. And, perhaps, vendors can be convinced to bolster their internal security teams such that secure coding practices are followed.

But vendors patching bugs isn’t the only problem, and in most cases, it’s the easy part of the problem. Once a patch exists, users have to apply that patch to their system. As we’ve seen over the years, patching isn’t something that users are very good at. Thus, automatic update systems such as those used by Microsoft and Apple, are commonplace. But this practice hasn’t carried over to devices yet. Vendors need to work on this and build these features into their hardware. Until they do, these security issues will remain a widespread problem.

So yes, your iToaster needs security. And we need vendors to take the next step and bake in automatic updating so security becomes the default. End users want devices that work without having to worry about how and when to update them. Not all manufacturers have the marketing savvy that Apple uses to make updating sexy. Maybe they can take a page out of the book Microsoft used with the Xbox One. Silent updates, automatically, overnight.

Hacker is not a dirty word

March 21st, 2015

Have you ever had to fix a broken item and you didn’t have the right parts? Instead of just giving up, you looked around and found something that would work for the time being. Occasionally, you come back later and fix it the right way, but more often than not, that fix stays in place indefinitely. Or, perhaps you’ve found a novel new use for a device. It wasn’t built for that purpose, but you figured out that it fit the exact use you had in mind.

Those are the actions of a hacker. No, really. If you look up the definition of a hacker, you get all sorts of responses. Wikipedia has three separate entries for the word hacker in relation to technology:

Hacker – someone who seeks and exploits weaknesses in a computer system or computer network

Hacker – (someone) who makes innovative customizations or combinations of retail electronic and computer equipment

Hacker – (someone) who combines excellence, playfulness, cleverness and exploration in performed activities

Google defines it as follows:

1. a person who uses computers to gain unauthorized access to data.

(informal) an enthusiastic and skillful computer programmer or user.

2. a person or thing that hacks or cuts roughly.

And there are more. What’s interesting here is that depending on where you look, the word hacker means different things. It has become a pretty contentious word, mostly because the media has, over time, used it to describe the actions of a particular type of person. Specifically, hacker is often used to describe the criminal actions of a person who gains unauthorized access to computer systems. But make no mistake, the media is completely wrong on this and they’re using the word improperly.

Sure, the person who broke into that computer system and stole all of that data is most likely a hacker. But, first and foremost, that person is a criminal. Being a hacker is a lifestyle and, in many cases, a career choice. Much like being a lawyer or a doctor is a career choice. Why then is hacker used as a negative term to identify criminal activity and not doctor or lawyer? There are plenty of instances where doctors, lawyers, and people from a wide variety of professions have indulged in criminal activity.

Keren Elazari spoke at TED in 2014 about hackers and their importance in our society. During her talk she discusses the role hackers play, noting that there are hackers who use their skills for criminal activity, but many more who use their skills to better the world. From hacktivist groups like Anonymous to hackers like Barnaby Jack, these people have changed the world in positive ways, helping to identify weaknesses in everything from computer systems to governments and laws. In her own words:

My years in the hacker world have made me realize both the problem and the beauty about hackers: They just can’t see something broken in the world and leave it be. They are compelled to either exploit it or try and change it, and so they find the vulnerable aspects in our rapidly changing world. They make us, they force us to fix things or demand something better, and I think we need them to do just that, because after all, it is not information that wants to be free, it’s us.

It’s time to stop letting the media use this word improperly. It’s time to take back what is ours. Hacker has long been a term used to describe those we look up to, those we seek to emulate. It is a term we hold dear, a term we seek to defend. When Loyd Blankenship was arrested in 1986, he wrote what has become known as the Hacker’s Manifesto. This document, often misunderstood, describes the struggle many of us went through, and the joy of discovering something we could call our own. Yes, we’re often misunderstood. Yes, we’ve been marginalized for a long time. But times have changed since then and our culture is strong and growing.

Network Enhanced Telepathy

March 18th, 2015

I’ve recently been reading Wired for War by P.W. Singer and one of the concepts he mentions in the book is Network Enhanced Telepathy. This struck me as not only something that sounds incredibly interesting, but something that we’ll probably see hit mainstream in the next 5-10 years.

According to Wikipedia, telepathy is “the purported transmission of information from one person to another without using any of our known sensory channels or physical interaction.” In other words, you can think *at* someone and communicate. The concept that Singer talks about in the book isn’t quite as “mystical” since it uses technology to perform the heavy lifting. In this case, technology brings fantasy into reality.

Scientists have already developed methods to “read” thoughts from the human mind. These methods are by no means perfect, but they are a start. As we’ve seen with technology across the board from computers to robotics, electric cars to rockets, technological jumps may ramp up slowly, but then they rocket forward at a deafening pace. What seems like a trivial breakthrough at the moment may well lead to the next step in human evolution.

What Singer describes in the book is one step further. If we can read the human mind, and presumably write back to it, then adding a network in-between, allowing communication between minds, is obvious. Thus we have Network Enhanced Telepathy. And, of course, with that comes all of the baggage we associate with networks today. Everything from connectivity issues and lag to security problems.

The security issues associated with something like this range from inconvenient to downright horrifying. If you thought social engineering was bad, wait until we have a direct line straight into someone’s brain. Today, security issues can result in stolen data, denial of service issues, and, in some rare instances, destruction of property. These same issues may exist with this new technology as well.

Stolen data is pretty straightforward. Could an exploit allow an attacker to arbitrarily read data from someone’s mind? How would this work? Could they pinpoint the exact data they want, or would they only have access to the current “thoughts” being transmitted? While access to current thoughts might not be as bad as exact data, it’s still possible this could be used to steal important data such as passwords, secret information, etc. Pinpointing exact data could be absolutely devastating. Imagine, for a moment, what would happen if an attacker was able to pluck your innermost secrets straight out of your mind. Everyone has something to hide, whether that’s a deep dark secret, or maybe just the image of themselves in the bathroom mirror.

I’ve seen social engineering talks wherein the presenter talks about a technique to interrupt a person, mid-thought, and effectively create a buffer overflow of sorts, allowing the social engineer to insert their own directions. Taken to the next level, could an attacker perform a similar attack via a direct link to a person’s mind? If so, what access would the attacker then attain? Could we be looking at the next big thing in brainwashing? Merely insert the new programming, directly into the user.

How about Denial of Service attacks or physical destruction? Could an attacker cause physical damage in their target? Is a connection to the mind enough access to directly modify the cognitive functions of the target? Could an attacker induce something like Locked-In syndrome in a user? What about blocking specific functions, preventing the user from being able to move limbs, or speak? Since the brain performs regulatory control over the body, could an attacker modify the temperature, heart rate, or even induce sensations in their target? These are truly scary scenarios and warrant serious thought and discussion.

Technology is racing ahead at breakneck speeds and the future is an exciting one. These technologies could allow humans to take that next evolutionary step. But as with all technology, we should be looking at it with a critical eye. As technology and biology become more and more intertwined, it is essential that we tread carefully and be sure to address potential problems long before they become a reality.

Suspended Visible Masses of Small Frozen Water Crystals

March 13th, 2015

The Cloud, hailed as a panacea for all your IT related problems. Need storage? Put it in the Cloud. Email? Cloud. Voice? Wireless? Logging? Security? The Cloud is your answer. The Cloud can do it all.

But what does that mean? How is it that all of these problems can be solved by merely signing up for various cloud services? What is the cloud, anyway?

Unfortunately, defining what the cloud actually is remains problematic. It means many things to many people. The cloud can be something “simple” like extra storage space or email. Google, Dropbox, and others offer a service that allows you to store files on their servers, making them available to you from “anywhere” in the world. Anywhere, of course, if the local government and laws allow you to access the services there. These services are often free for a small amount of space.

Google, Microsoft, Yahoo, and many, many others offer email services, many of them “free” for personal use. In this instance, though, free can be tricky. Google, for instance, has algorithms that “read” your email and display advertisements based on the results. So while you may not exchange money for this service, you do exchange a level of privacy.

Cloud can also be pure computing power. Virtual machines running a variety of operating systems, available for the end-user to access and run whatever software they need. Companies like Amazon have turned this into big business, offering a full range of back-end services for cloud-based servers. Databases, storage, raw computing power, it’s all there. In fact, they have developed APIs allowing additional services to be spun up on-demand, augmenting existing services.

As time goes on, more and more services are being added to the cloud model. The temptation to drop self-hosted services and move to the cloud is constantly increasing. The incentives are definitely there. Cloud services are affordable, and there’s no need for additional staff for support. All the benefits with very little of the expense. End-users have access to services they may not have had access to previously, and companies can save money and time by moving services they use to the cloud.

But as with any service, self-hosted or not, there are questions you should be asking. The answers, however, are sometimes a bit hard to get. But even without direct answers, there are some inferences you can make based on what the service is and what data is being transferred.

Data being accessible virtually anywhere, at any time, is one of the major draws of cloud services. But there are downsides. What happens when the service is inaccessible? For a self-hosted service, you have control and can spend the necessary time to bring the service back up. In some cases, you may have the ability to access some or all of the data, even without the service being fully restored. When you surrender your data to the cloud, you are at the mercy of the service provider. Not all providers are created equal and you cannot expect uniform performance and availability across all providers. This means that in the event of an outage, you are essentially helpless. Keeping local backups is definitely an option, but oftentimes you’re using the cloud so that you don’t need those local backups.

Speaking of backups, is the cloud service you’re using responsible for backups? Will they guarantee that your data will remain safe? What happens if you accidentally delete a needed file or email? These are important issues that come up quite often for a typical office. What about the other side of the question? If the service is keeping backups, are those backups secure? Is there a way to delete data, permanently, from the service? Accidents happen, so if you’ve uploaded a file containing sensitive information, or sent/received an email with sensitive information, what recourse do you have? Dropbox keeps snapshots of all uploaded data for 30 days, but there doesn’t seem to be an official way to permanently delete a file. There are a number of articles out there claiming that this is possible, just follow the steps they provide, but can you be completely certain that the data is gone?

What about data security? Well, let’s think about the data you’re sending. For an email service, this is a fairly simple answer. Every email goes through that service. In fact, your email is stored on the remote server, and even deleted messages may hang around for a while. So if you’re using email for anything sensitive, the security of that information is mostly out of your control. There’s always the option of using some sort of encryption, but web-based services rarely support that. So data security is definitely an issue, and not necessarily an issue you have any control over. And remember, even the “big guys” make mistakes. Fishnet Security has an excellent list of questions you can ask cloud providers about their security stance.

Liability is an issue as well, though you may not initially realize it. Where, exactly, is your data stored? Do you know? Can you find out? This can be an important issue depending on what your industry is, or what you’re storing. If your data is being stored outside of your home country, it may be subject to the laws and regulations of the country it’s stored in.

There are a lot of aspects to deal with when thinking about cloud services. Before jumping into the fray, do your homework and make sure you’re comfortable with giving up control to a third party. Once you give up control, it may not be that easy to rein it back in.

Boldly Gone

February 27th, 2015

I have been and always shall be your friend.

It’s a sad day. We’ve lost a dear friend today, someone we grew up with, someone so iconic that he inspired generations. At the age of 83, Leonard Nimoy passed away. He will be missed.

It’s amazing to realize how much someone you’ve never met can mean to you. People larger than life, people who will live on in memory forever. I’ve been continually moved for hours at the outpouring of grief and love online for Leonard. He has meant so much for so many, and his memory will live on forever.

Of all the souls I have encountered in my travels, his was the most… human.