WoO Day 3 : Meet the agent

Now that you have a functional OSSEC installation it’s time to start configuring it. Proper configuration is really the heart and soul of OSSEC and will be the primary focus for the next few days. The primary OSSEC configuration files use an XML format. If you’re not familiar with XML, don’t worry too much. It’s a pretty easy format to wrap your head around, as you’ll see.

The primary configuration file is /var/ossec/etc/ossec.conf. The exception is when you’re using centralized agent configuration, which I highly recommend for large deployments. In a centralized setup, the server configuration lives in ossec.conf and the agent configuration lives in /var/ossec/etc/shared/agent.conf. All agents are configured using a single agent.conf file. I’ll explain how this works in a bit.

Before I get to the standard configuration options available to ossec.conf and agent.conf, let’s talk briefly about agents and centralized agent configurations. When using a centralized configuration, I recommend minimizing what you place in the agent’s ossec.conf file. The centralized agent.conf file overrides any conflicting option listed in the agent’s ossec.conf file. Besides, lingering configuration options in the agent’s ossec.conf can result in confusion when you’re not aware of those settings. To make general configuration simple and straightforward, the only option that should be present in the agent’s ossec.conf is the address of the server or servers to use.

<ossec_config>
  <client>
    <server-ip>192.168.0.1</server-ip>
  </client>
</ossec_config>

The agent.conf file contains all of the configuration for every agent in the network. An agent will receive all of the configuration information relevant to that specific agent. The agent.conf file is actually sent to every agent in the network and each agent parses the file looking for configuration settings relevant to that local agent. This means that you can define an initial “generic” configuration block that will apply to all agents, and then use successive configuration blocks to define additional configuration options. Note: Configurations are cumulative.

It’s a bit easier to look at a simple configuration as an example. For now, don’t worry too much about what these options do; instead, concentrate on how the agent builds its configuration.

<agent_config>
  <syscheck>
    <!-- Frequency that syscheck is executed - default every 2 hours -->
    <frequency>7200</frequency>

    <!-- Directories to check (perform all possible verifications) -->
    <!-- Files/directories to ignore -->
    <directories check_all="yes">/etc</directories>
    <ignore>/etc/adjtime</ignore>
  </syscheck>
</agent_config>

<agent_config name="agent1">
  <syscheck>
    <directories check_all="yes">/usr/agent1</directories>
  </syscheck>
</agent_config>

The above configuration is *very* tiny and very basic. However, it should help to illustrate how the agent config works. The first agent_config block defines a generic configuration that will be used for every agent in the network. The subsequent agent_config block defines an additional directory to check for agent1. So agent1 will run with the primary configuration from the first block, plus the configuration from the second block.

Each agent_config block can be scoped to a specific agent or OS using the name and os attributes. The name attribute is simply the name (or names, separated by a |) of the agent(s) to apply that agent_config block to. The os attribute matches against the uname of the agent operating system. Some examples include Linux, FreeBSD, Windows, and Darwin.
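For example, a block like the following (a sketch; the os value has to match the agent’s uname output, and the directory list is arbitrary) would apply only to the Linux agents:

<agent_config os="Linux">
  <syscheck>
    <directories check_all="yes">/bin,/sbin</directories>
  </syscheck>
</agent_config>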

The only other real difference between a standalone or server config and an agent config is that there are some options that belong in the server config as opposed to the agent config. For instance, when we talk about active response later in the week, the active response settings are placed in the server config. Likewise, the rules are detailed in the server config and not in the agent config. This will make more sense when we cover these topics. For now, if you’re unsure as to where a particular configuration option belongs, be sure to check the OSSEC manual.

That’s all for now. Check in tomorrow when we’ll make OSSEC do something more than just sit there!

 

WoO Day 2 : In The Beginning …

As it turns out, before you can play with OSSEC and begin learning the intricacies of host-based intrusion detection, you need to install the software. Installation itself is pretty easy and fairly straightforward, provided you know how you want to install and run the system.

What do I mean by this? Well, there are three ways to install the OSSEC software. It can be installed as standalone, as a server, or as an agent. These are pretty much what they sound like. A standalone installation is generally used when you’re administering a single machine, or when circumstances prevent you from running a centralized server. A server/agent installation is for larger installations with centralized management.

For any of these installation types you need to start with the raw source. From there you can run the install script and choose the type of installation you want. Alternatively, if you’re installing on a Linux distribution that uses RPM for package management, you can grab a copy of the source RPM for the latest version here. The RPM version only supports the server/agent type, though you can probably run the server as a standalone install. Note: I make no guarantees about the RPMs I provide; you are expected to know what you are installing and take appropriate precautions, such as verifying what you have downloaded.

If you’re going with the raw source, grab a copy here. The current release as of this writing is version 2.5. Download the source, unpack it, and run the installation script. The installation script must be run as root. You probably want to check the md5 and/or sha1 sum prior to installation, just to make sure that the code is original.

# wget http://www.ossec.net/files/ossec-hids-2.5.tar.gz
# sha1sum ossec-hids-2.5.tar.gz
3da46b493f0e50b2453c43990b46ba43e61648bf
# tar zxvf ossec-hids-2.5.tar.gz

# cd ossec-hids-2.5
# ./install.sh

Just follow the prompts and the installer will take care of the rest, including compiling the software.

The RPM install is simplified a bit as you only have to compile the code once per server architecture you want to support. In other words, if you have both 32-bit and 64-bit installations, you’ll likely want to compile the software twice. Once compiled, you have three packages, ossec-hids, ossec-hids-server, and ossec-hids-client (client = agent). The ossec-hids package is installed on every system while the server and client packages go on the appropriate systems.
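If you go the source RPM route, the rebuild is a single command per architecture. A quick sketch (the package file name is an assumption; use whatever version you actually downloaded):

# rebuild the binary packages from the source RPM
rpmbuild --rebuild ossec-hids-2.5-1.src.rpm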

In order for the agents to talk to the server, you must have port 1514 (UDP) open between the agents and the server; there’s an example firewall rule after the steps below. Additionally, you need to register the agents with the server. This is a pretty simple process, though it has to be repeated for every agent. Detailed instructions can be found here. The short and simple version is as follows:

1) Run /var/ossec/bin/manage_agents and choose to add a new agent. Follow the prompts and enter the appropriate data.
2) While still in manage_agents, select the option to extract the agent authentication key. Copy this key as you need to paste it into the agent.
3) On the agent, run /var/ossec/bin/manage_agents. Choose to import the server key. Paste in the key you copied previously.
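As for the firewall, a rule along these lines on the server opens the agent port (iptables syntax; the source subnet is an assumption, so tighten it to your environment):

# allow OSSEC agent traffic in to the server
iptables -A INPUT -p udp --dport 1514 -s 192.168.0.0/24 -j ACCEPT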

And that’s all there is to adding a new agent. There are ways to script this, but they are a bit out of scope for an introduction article. If you’re interested in scripting this process, please check the OSSEC mailing list for more details.

Finally, after installation comes configuration! Tune in tomorrow for more details!

 

WoO Day 1 : Introduction

Today marks the first day of the Week of OSSEC. What is OSSEC you ask? Well, I’m glad you asked. Allow me to explain.

According to the OSSEC home page, OSSEC is :

OSSEC is an Open Source Host-based Intrusion Detection System. It performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting and active response.

It runs on most operating systems, including Linux, MacOS, Solaris, HP-UX, AIX and Windows.

But you have an IDS already, right? One of those big fancy all-in-one units that protects your network, has tons of signatures, and spits out endless reams of data. Or maybe you’re on the Open Source track and you’re using Snort combined with some fancy OSS front end for reporting.

Well, OSSEC is a bit different. It’s a HIDS, not an IDS. In short, this means that OSSEC is designed to monitor a single host and react to events occurring on that host. While this doesn’t sound very useful from the outset, I assure you that this is significantly more useful than you think.

For instance, let’s look at a typical brute-force attack. Your average IDS will pass the traffic along as perfectly valid; nothing unusual is happening at the network layer, so the IDS is content to let it through. Once it hits the server, however, the attack makes itself clear, trying to brute-force its way in. You can ignore this and hope your password policies are strong enough to cope, but if an attacker is persistent enough, they may eventually worm their way in. Another way to deal with this is through session throttling via the server’s firewall. OSSEC deals with this by identifying the attack and blocking the attacker’s IP.

Sure, fail2ban and other packages offer similar capabilities, but can they also be used to monitor file integrity? How about rootkit detection? Can you centralize their capabilities, using a single host to push out configurations to remote systems? Can they be used in an agentless configuration where a host is monitored without having to install software on it?

OSSEC is extremely powerful and is gaining more capabilities over time. It is in active development with no signs of slowing. In fact, version 2.5 was just released on September 27th.

Over the next week I’ll be explaining some of OSSEC’s capabilities. Ultimately, I suggest you install it on a development system and start poking at it yourself. You can also join the OSSEC mailing list and join in the various on-going conversations.

 

Privacy … Or so you think

Ah, the Internet. What an incredible utility. I can be totally anonymous here, saying whatever I want and no one will be the wiser. I can open up a Facebook, MySpace, or Twitter account, abuse it by posting whatever I want about whomever I want, and no one can do anything about it. I’m completely anonymous! Ha! Try to track me down!

I can post comments on news items, send emails through “free” email services like HotMail, Yahoo, and Gmail. I can post pictures on Flickr and Tumblr. I can chat using AIM, ICQ, Skype, or GTalk! The limits are endless, and you can’t find me! You have no idea who I am!

Wait, what’s that? You have my IP address? You have the email address I signed up with? You have my username and you’ve used that to link me to other sites? … And now you’re planning on suing me? I .. uhh… Oh boy…

Online anonymity is mostly a myth. There are ways to remain completely anonymous, but they are, at best, extremely cumbersome and difficult. With enough time and dedication, your identity can be tracked down. Don’t be too afraid, though. Typically, no one really cares who you are. There may be a few who take offense at what you have to say, but most don’t have the knowledge or access to obtain the information necessary to start their search.

There are those out there with the means and the access to figure out who you are, though. Take, for instance, the case of Judge Shirley Saffold. According to a newspaper in Cuyahoga County, Ohio, Judge Saffold commented on a number of local articles, including articles about cases she had presided over. These comments ranged from simple, innocuous remarks to commentary about ongoing cases and those participating in them.

The Judge, of course, denies any involvement. Her daughter has stepped forward claiming that she is the one that made all of the posts. According to the newspaper, they traced activity back to the Judge’s computer at the courthouse, which they believe to be definitive proof that the Judge is the actual poster.

This is an excellent example of the lack of anonymity on the Internet. There are ways to track you down, and ways to identify who you are. In the case of Judge Saffold, an editor for the paper was able to link an online identity to an email address. While I’m not entirely sure he should have had such access, and apparently that access has since been removed, the fact remains that he did. This simple piece of information has sparked a massive debate about online privacy.

You, as a user of the Internet, need to understand that you don’t necessarily have anonymity. By merely coming to read this post, you have left digital footprints. The logs for this website have captured a good deal of information about you: what browser you’re using, what IP address you’ve accessed the site from, and sometimes the address of the last site you visited. It is even possible, though this site doesn’t do it, to send little bits of information back to you that can track your online presence, reporting back where you go from here and how long you stay there.

Believing you are truly anonymous on the Internet can be dangerous. While it may feel liberating to speak your mind, be cognizant that your identity can be obtained if necessary. Don’t go completely crazy; think before you post.

 

SSL MitM Appliance

SSL has been used for years to protect against Man in the Middle attacks. It has worked quite well and kept our secret transactions secure. However, that sense of security is starting to crumble.

At Black Hat USA 2009, security researcher Dan Kaminsky presented a talk outlining flaws in X.509 SSL certificates. In short, it is possible to trick a certificate authority into certifying a site as legitimate when the site may, in fact, be malicious. It’s not the easiest hack to pull off, but it’s there.

Once you have a legitimate certificate, pulling off a MitM attack is as simple as proxying the traffic through your own system. If you can trick the user into hitting your server instead of the legitimate server, *cough*DNSPOISONING*cough*, you can impersonate the legitimate server via proxy, and log everything the user does. And the only way the user can tell is if they actually look at the IP they’re hitting. How many people do you know that keep track of the IP of the server they’re trying to get to?

Surely there’s something that will prevent this, right? I mean, the fingerprint of the certificate has changed, so the browser will tell me that something is amiss, right? Well, actually, no. In fact, if you replace a valid certificate from one CA with a valid certificate from another CA, the end user typically sees no change at all. There may be options that can be set to alter this behavior, but I know of no browsers that will detect this by default. Ultimately, this means that if an attacker can obtain a valid certificate and redirect your traffic, he will own everything you do without you being the wiser.

And now, just to make things more interesting, we have this little beauty.

This is an SSL interception device sold by Packet Forensics. In short, you provide the fake certificate and redirect the user traffic and the box will take care of the rest. According to Packet Forensics, this box is sold exclusively to law enforcement agencies, though I’m sure there are ways to get a unit. For “testing,” of course.

The legality of using this device is actually unclear. In order to use it, a Law Enforcement Organization (LEO) will need to obtain legitimate certificates to impersonate the remote website, as well as access to insert the device into the network. If the device is not placed directly in-line with the user, potentially illegal hacking has to take place in order to redirect the traffic. Regardless, once these are obtained, the LEO has full access to the user’s traffic to and from the remote server.

The existence of this device merely drives home the ease with which MitM attacks are possible. In fact, a paper published by two researchers from the EFF suggests this may already be happening. To date, there are no readily available tools to prevent this sort of abuse. However, the authors of the aforementioned paper are planning on releasing a Firefox plugin, dubbed CertLock, that will track SSL certificate information and inform the user when it changes. Ultimately, however, it would be great if browser manufacturers would incorporate these checks into the main browser logic.
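In the meantime, you can perform the same sort of check by hand. A quick sketch using openssl (www.example.com is just a stand-in for the site you care about): record a server’s certificate fingerprint, then compare it on later visits.

# grab the certificate a server presents and print its SHA-1 fingerprint
echo | openssl s_client -connect www.example.com:443 2>/dev/null |
openssl x509 -noout -fingerprint -sha1

If the fingerprint changes unexpectedly, something between you and the server has changed, and it’s worth finding out what.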

So remember, kiddies: the pretty lock icon or the green browser bar is no guarantee that you’re not being watched. Be careful out there, cyberspace is dangerous.

 

The Authentication Problem

Authentication is a tricky problem. The goal of authentication is to verify the identity of the person, device, machine, etc. that is attempting to gain access to the protected system. There are many factors to consider when designing an authentication system. Here is a brief sampling:

  • How much security is necessary?
  • Do we require a username?
  • How strong should the password be?
  • Do we need multi-factor authentication?

The need for authentication typically means that the data being accessed is sensitive in some way. This can be something as simple as a todo list or a user’s email, or as important as banking or top secret information. It can also mean that the data being accessed is valuable in some way such as a site that requires a subscription. So, the security necessary is dependent on the data being protected.

Usually, authentication systems require a username and some form of a password. For more secure systems, multi-factor authentication is used. Multi-factor authentication means that multiple pieces of information are used to authenticate the user. These vary depending on the security required. In the United States, federal regulators recognize the following factors:

  • Something the user knows (e.g., password, PIN)
  • Something the user has (e.g., ATM card, smart card)
  • Something the user is (e.g., biometric characteristic such as a fingerprint)

A username and a password is an example of a single-factor authentication mechanism. When you use an ATM, you supply it with an ATM card and then enter a PIN. This is an example of two-factor authentication.

The U.S. Federal Financial Institutions Examination Council (FFIEC) recommends the use of multi-factor authentication for financial institutions. Unfortunately, most of the authentication systems currently in place are still single-factor authentication systems, despite asking for several pieces of information. For example, if you log into your bank system you use a username and password. Once the username and password pass, you are often asked for additional information such as answers to challenge questions. These are all examples of things the user knows, thus only a single factor.

Some institutions have begun using additional factors to identify the user such as a one-time password sent to an email address or cell phone. This can be cumbersome, however, as it can often take additional time to receive this information. To combat this, browser cookies are used after the first successful authentication. After the user logs in for the first time, they are offered a chance to have the system place a “secure token” on their system. Subsequent logins use this secure token in addition to the username and password to authenticate the user. This is arguably a second factor as it’s something the user has, as opposed to something they know. On the other hand, it is extremely easy to duplicate or steal cookies.

There are other ways that two-factor authentication can be circumvented as well. Since most institutions only use a single communication mechanism, hijacking that communication medium can result in a security breach.

Man-in-the-middle attacks use fake websites to lure users in and steal their authentication credentials. This can happen transparently to the user by forwarding the information to the actual institution and letting the user continue to access the system. More sophisticated attacks have the user “fail” authentication the first time and let them in on subsequent tries. The attacker can then use the first authentication attempt to gain access themselves.

Another method is the use of Trojans. If a user can be tricked into installing malicious software into their system, an attacker can ride on the user’s session, injecting their own transactions into the communications channel.

Defending against these attacks is not easy and may be impossible in many situations. For instance, requiring a second method of communication for authentication may help to authenticate the user, but if an attacker can hijack the main communication path, they can still obtain access to the user’s current session. Use of encryption and proper training of users can help mitigate these types of attacks, but ultimately, any system using a public communication mechanism is susceptible to hijacking.

Session Security

Once authentication is complete, session security comes into play. Why go through all the trouble of authenticating the user if you’re not protecting the data they’re accessing? Assuming that the data itself is protected, we need to focus on protecting the data being transferred to and from the user. Additionally, we need to protect the user’s session itself.

Session hijacking is the term used for stealing a user’s session information to gain access to the information the user is accessing. There are four primary methods of session hijacking:

  • Session Fixation
  • Session Sidejacking
  • Physical Access
  • Cross-site Scripting

Physical access is pretty straightforward. This involves an attacker directly accessing the user’s computer terminal and copying the session data. Session data can be something as simple as an alphanumeric token displayed right in the URL of the site being accessed. Or, it can be a piece of data on the machine such as a browser cookie.

Session fixation refers to a method by which an attacker can trick a user into using a pre-determined session ID. Once the user authenticates, the attacker gains access by using the same session ID. The system recognizes the session ID as an authenticated session and lets the attacker in without further verification.

Session Sidejacking involves an attacker intercepting the traffic between a user and the system. If a session is not encrypted, the attacker can obtain the session ID or cookie used to identify the user’s session. Once this information is obtained, the attacker can use the same information to gain access to the user’s session.

Finally, cross-site scripting is when an attacker tricks the user’s browser into sending session information to the attacker. This can happen when a user accesses a website that contains malicious code. For instance, an attacker can create a website with a special link to a well-known site such as a bank. The link contains additional code that, when run, sends the user’s authentication or session information to the attacker.

Encryption of the communications channel can mitigate some of these attack scenarios, but not all of them. Programmers should ensure that additional information is used to verify a user’s session. For instance, something as simple as verifying the user’s source IP address in addition to a session cookie is often enough to mitigate both physical access and session sidejacking. Not allowing a pre-defined session ID can prevent session fixation. And finally, proper coding can prevent cross-site scripting.

Additionally, any session information stored on the remote system being accessed should be properly secured as well. Merely securing the data accessed isn’t enough if an attacker can access the remote system and steal session information.

Unauthentication

Finally, how and when should a user be unauthenticated? Unauthentication is often overlooked when designing a secure system. If the user fails to log out, then attacks such as session hijacking become easier. Unauthentication can be tricky, however. There are a number of factors to consider, such as:

  • How and when should a user’s session be closed?
  • Should a user’s session time out?
  • How long should the timer be?

Most unauthentication currently consists of a user’s session timing out. After a pre-determined period of inactivity, the system will log a user out, deleting their session. Depending on the situation, this can be incredibly disruptive. For example, if a user’s email system has a short time out, they run the risk of losing a long email they’ve been working on. Some systems can mitigate this by recording the user’s data prior to logging them out, making it available again upon login so the user doesn’t lose it. Regardless, the longer the time out, the less secure a session can be.

Other unauthentication mechanisms have been discussed as well. When a physical token such as a USB key is used, the user can be unauthenticated when the key is removed from the system. Or a device with some sort of radio in it, such as Bluetooth, can unauthenticate the user when it is removed from the proximity of the system. Unfortunately, users will likely end up leaving these devices behind, significantly reducing their effectiveness.

As with authentication, unauthentication methods can depend on the sensitivity of the data being protected. Ultimately, though, every system should have some form of automatic unauthentication.

Data security in general can be a difficult nut to crack. System designers are typically either very lax in their security design, often overlooking session security and unauthentication, or they can be very draconian, opting to make the system very secure at the expense of the user. Designing a user-friendly, but secure, system is difficult, at best.

 

Web Security

People use the web today for just about anything. We get our news from news sites and blogs, we play games, we view pictures, etc. Most of these activities are fairly innocuous and don’t require much in the way of security, beyond typical anti-viral and anti-spyware security. However, there are activities we engage in on the web where we want to keep our information private and secure. For instance, when we interact with our bank, we’d like to keep those transactions private. The same goes for other activities such as stock transfers and shopping.

And it’s not enough to merely keep it private, we also want to ensure that no one can inject anything into our sessions. Online banking wouldn’t be very useful if someone could inject phantom transfers into your session, draining your bank account. Likewise, having someone inject additional items into your order, or changing the delivery address, wouldn’t be very helpful.

Fortunately, Netscape developed a protocol to handle browser-to-server security called Secure Sockets Layer, or SSL. SSL was first released to the public in 1995 and updated a year later after several security flaws were uncovered. In 1999, SSL became TLS, Transport Layer Security. TLS has been updated twice since its inception and currently stands at version 1.2.

The purpose of SSL/TLS is pretty simple and straightforward, though the implementation details are enough to give anyone a headache. In short, when you connect to a remote site with your browser, the browser and web server negotiate a secure connection. Once established, everything you send back and forth is first encrypted locally and decrypted on the server end. Only the endpoints have the information required to both encrypt and decrypt, so the communication remains secure.
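You can watch this negotiation happen yourself. A quick sketch with the openssl command-line tool (www.example.com stands in for any SSL-enabled site): it opens a connection, prints the certificate the server presents, and shows the negotiated cipher.

# open a raw SSL/TLS connection and dump the handshake details
openssl s_client -connect www.example.com:443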

What about man-in-the-middle attacks? What if you were able to insert yourself between the browser and the server and then pass the messages back and forth? The browser would negotiate with you, and then you’d negotiate with the server. This way, you would have unencrypted access to the bits before you passed them on. That would work, wouldn’t it? Well, yes. Sort of. If the end-user allowed it or was tricked into allowing it.

When a secure connection is negotiated between a browser and server, the server presents the user with a certificate. The certificate identifies the server to the browser. While anyone can create a certificate, certificates can be signed by others to “prove” their authenticity. When the server is set up, the administrator requests a certificate from a well-known third party and uses that certificate to identify the server. When the browser receives the certificate, it can verify that the certificate is authentic by contacting the certificate signer and asking. If the certificate is not authentic, expired, or was not signed by a well known third party, the user is presented with an error dialog explaining the problem.

Unfortunately, the dialog presented isn’t always helpful and typically requires some knowledge of SSL/TLS to understand. Most browser vendors have “corrected” this by placing lots of red text, exclamation marks, and other graphics to indicate that something bad has happened. The problem here is that these messages are intended to be warnings. There are instances where certificates not signed by third parties are completely acceptable. In fact, it’s possible for you, as a user, to put together a valid certificate signing system that will provide users the exact same protections a third-party certificate provides. I’ll post a how-to a little later on the exact process. You can also use a self-signed certificate, one that has no root, and still provide the same level of encryption.
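As a taste of the self-signed route, generating a certificate takes a single openssl command (a sketch; the key size, file names, and lifetime are arbitrary, and openssl will prompt you for the subject details):

# generate a new RSA key and a self-signed certificate good for one year
openssl req -new -x509 -nodes -newkey rsa:2048 -keyout server.key -out server.crt -days 365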

So if we can provide the same protection using our own signed or self-signed certificates, then why pay a third party to sign certificates for us? Well, there are a couple of reasons, though they’ve somewhat faded with time. First and foremost, the major third-party signers have their root certificates, used for validation, added to all of the major web browsers. In other words, you don’t need to install these certificates, they’re already there. And since most users don’t know how SSL works, let alone how to install a certificate, this makes third-party certificates very appealing. This is the one feature of third-party certificates that still makes sense.

Another reason is that your information is validated by the third-party provider. Or, at least, that’s how it used to be. Perhaps some providers still do, but since there is no standard across the board, SSL certificates as a de-facto identity check are broken. Some providers offer differing levels of validation for normal certificates, but there are no indicators within the browser to identify the level of validation. As a result, determining whether to trust a site or not falls completely on the shoulders of the user.

In response to this, an organization called the Certificate Authority/Browser Forum was created. This forum developed a set of guidelines that providers must adhere to in order to issue a new type of certificate, the Extended Validation, or EV, certificate. Audits are performed on an annual basis to ensure that providers continue to adhere to the guidelines. The end result is a certificate with special properties. When a browser visits a site that uses an EV certificate, the URL bar, or part of it, turns green and displays the name of the company that owns the certificate. The purpose is to give users a quick, at-a-glance way to validate a site.

To a certain degree, I agree that these certificates provide a slight enhancement of security. However, I think this is more security theater than actual security. At its core, an EV certificate offers no better security than that of a self-signed certificate. The “value” lies in the vetting process a site has to go through in order to obtain such a certificate. It also relies on users being trained to recognize the green bar. Unfortunately, most of the training I’ve seen in this regard seems to teach users that a green URL bar instantly means they can trust the site with no further checking. I feel this is absolutely the wrong message to send. Users should be taught to verify website addresses as well as SSL credentials.

Keeping our information private and secure goes way beyond the conversation between the browser and the server, however. Both before information is sent, and after it is received, it is available in some plain text format. If an attacker can infiltrate either end of the conversation, they can potentially retrieve this information. At the user’s end, security software such as an anti-virus, anti-spyware, and firewall, can be installed to protect the user. However, the user has absolutely no control over the server end.

To the user, the server is a mystery. The user trusts the administrator to keep their information safe and secure, but has no way of determining whether or not it is. Servers are protected in much the same way a user’s computer is. Firewalls are typically the main defense against intruders, though server firewalls are generally more advanced than those used on end-user computers. Data on the server can be stored using encryption, so even if a server is compromised, the data cannot be accessed.

Security on the Internet is a full-time job, both for the end user as well as the server administrator. Properly done, however, our data can be kept secure and private. All it takes is some due diligence and a little education.

 

Centralized Firewall Logging

I currently maintain a number of Linux-based servers. The number of servers is growing, and management becomes a bit unwieldy after a while. One move I’ve made is to start using Spacewalk for general package and configuration management. This has turned out to be a huge benefit and I highly recommend it for anyone in the same position as I am. Sure, Spacewalk has its bugs, but it’s getting better with every release. Kudos to Red Hat for bringing such a great platform to the public.

Another area of problematic management is the firewall. I firmly believe in defense in depth and I have several layers of protection on these various servers. One of those layers is the iptables configuration on the server itself. Technically, iptables is the program used to configure the filtering ruleset for a Linux kernel with the ip_tables packet filter installed. For the purposes of this article, iptables encompasses both the filter itself as well as the tools used to configure it.

Managing the ruleset on a bunch of disparate servers can be a daunting task. There is often a set of “standard” rules you want to deploy across all of the servers, as well as specialized rules based on what role the server plays. The standard rules typically cover management subnets, common services, and special filtering rules to drop malformed packets, source routes, and more; a few samples follow below. There didn’t seem to be an easy way to deploy such a ruleset, so I ended up rolling my own script to handle the configuration. I’ll leave that for a future blog entry, though.
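To give a flavor of what I mean by “standard” rules, here are a few of the sort described above (a sketch; the management subnet is an assumption, so substitute your own):

# allow SSH from the management subnet only
iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 22 -j ACCEPT
# drop obviously malformed packets (NULL and XMAS scans)
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP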

In addition to centralized configuration, I wanted a way to monitor the firewall from a central location. There are several reasons for this. One of the major reasons is convenience. Having to wade through tons of logwatch reports, or manually access each server to check the local firewall rules is difficult and quickly reaches a point of unmanageability. What I needed was a way to centrally monitor the logs, adding and removing filters as necessary. Unfortunately, there doesn’t seem to be much out there. I stumbled across the iptablelog project, but it appears to be abandoned.

Good did come of this project, however, as it led me to look into ulogd. The ulog daemon is a userspace logger for iptables. The iptables firewall can be configured to send security violations, accounting, and flow information to the ULOG target. Data sent to the ULOG target is picked up by ulogd and sent wherever ulogd is configured to send it. Typically, this is a text file or a SQL database.
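As an example, a rule like this (a sketch; the netlink group and prefix are arbitrary) copies matching packets to ulogd rather than the kernel log:

# log new inbound connections via the ULOG target on netlink group 1
iptables -A INPUT -m state --state NEW -j ULOG --ulog-nlgroup 1 --ulog-prefix "INPUT NEW: "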

Getting started with ulogd was a bit of a problem for me, though. To start, since I’m using a centralized management system, I need to ensure that any new software I install uses the proper package format. So, my first step was to find an RPM version of ulogd. I could roll my own, of course, but why re-invent the wheel? Fortunately, Fedora has shipped with ulogd since about FC6. Unfortunately for me, however, I was unable to get the SRPM for the version that ships with Fedora 11 to install; I kept getting a cpio error. No problem, though, I just backed up a bit and downloaded a previous release. It appears that nothing much has changed, as ulogd 1.24 has been released for some time.

Recompiling the ulog SRPM for my CentOS 5.3 system failed, however, complaining about linker problems. Additionally, there were errors when the configure script was run. So before I could get ulogd installed and running, I had to get it to compile. It took me a while to figure it out as I’m not a linker expert, but I came up with the following patch, which I added to the RPM spec file.

--- ./configure 2006-01-25 06:15:22.000000000 -0500
+++ ./configure 2009-09-10 22:37:24.000000000 -0400
@@ -1728,11 +1728,11 @@
 EOF

 MYSQLINCLUDES=`$d/mysql_config --include`
- MYSQLLIBS=`$d/mysql_config --libs`
+ MYSQLLIBS=`$d/mysql_config --libs | sed s/-rdynamic//`

 DATABASE_DIR="${DATABASE_DIR} mysql"

- MYSQL_LIB="${DATABASE_LIB} ${MYSQLLIBS} "
+ MYSQL_LIB="${DATABASE_LIB} -L/usr/lib ${MYSQLLIBS}"
 # no change to DATABASE_LIB_DIR, since --libs already includes -L

 DATABASE_DRIVERS="${DATABASE_DRIVERS} ../mysql/mysql_driver.o "
@@ -1747,7 +1747,8 @@
 echo $ac_n "checking for mysql_real_escape_string support""... $ac_c" 1>&6
 echo "configure:1749: checking for mysql_real_escape_string support" >&5

- MYSQL_FUNCTION_TEST=`strings ${MYSQLLIBS}/libmysqlclient.so | grep mysql_real_escape_string`
+ LIBMYSQLCLIENT=`locate libmysqlclient.so | grep libmysqlclient.so$`
+ MYSQL_FUNCTION_TEST=`strings $LIBMYSQLCLIENT | grep mysql_real_escape_string`

 if test "x$MYSQL_FUNCTION_TEST" = x
 then

In short, this snippet modifies the linker flags to add /usr/lib as a search directory and removes the -rdynamic flag, which mysql_config seems to errantly present. Additionally, it changes how the script determines whether the mysql_real_escape_string function is present in the installed version of MySQL. Both of these changes resolved my compile problem.

After getting the software to compile, I was able to install it and get it running. Happily enough, the SRPM I started with included patches to add an init script as well as a logrotate script. This makes life a bit easier when getting things running. So now I had a running userspace logger as well as a standardized firewall. Some simple changes to the firewall script added ULOG support. You can download the SRPM here.
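For reference, pointing ulogd at MySQL comes down to a few lines in ulogd.conf. This is a sketch based on the ulogd 1.x sample configuration; the plugin path and credentials are placeholders for your own:

[global]
nlgroup=1
logfile="/var/log/ulogd.log"
plugin="/usr/lib/ulogd/ulogd_MYSQL.so"

[MYSQL]
host="localhost"
db="ulogd"
user="ulog"
pass="changeme"
table="ulog"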

At this point I have data being sent to both the local logs as well as a central MySQL database. Unfortunately, I don’t have any decent tools for manipulating the data in the database. I’m using iptablelog as a starting point and I’ll expand from there. To make matters more difficult, ulogd version 2 seems to make extensive changes to the database structure, which I’ll need to keep in mind when building my tools.

I will, however, be releasing them to the public when I have something worth looking at. Having iptablelog as a starting point should make things easier, but it’s still going to take some time. And, of course, time is something I have precious little of to begin with. Hopefully, though, I’ll have something worth releasing before the end of this year. Here’s hoping!

 

Inexpensive Two Factor Authentication

Two Factor authentication is a means by which a user’s identity can be confirmed in a more secure manner. Typically, the user supplies a username and password, the first factor, and then an additional piece of information, the second factor. In theory, providing this additional information proves the user is who they say they are. Two different types of factors should be used to maximize security.

There are three general types of factors that are used. They are as follows (quoting from Wikipedia):

  • Human factors are inherently bound to the individual, for example biometrics (“Something you are”).
  • Personal factors are otherwise mentally or physically allocated to the individual, as for example learned code numbers (“Something you know”).
  • Technical factors are bound to physical means, as for example a pass, an ID card or a token (“Something you have”).

While Two Factor authentication can be secure, the security is often compromised through the use of less secure second factors. For instance, many institutions use a series of questions as a second factor. While this is somewhat more secure than a single username and password, these questions are often generic enough that they can be obtained through social engineering. This is an example of using the same factor twice, in this case, Personal factors. Personal factors are inexpensive, however, often free to the institution requiring the security.

On the other hand, use of either Human or Technical factors is often cost prohibitive. Biometrics, for instance, requires some sort of interface to read the biometric data and convert it to something the computer can understand. Technical factors are typically physical electronic devices with a cost per device. As a result, institutions are unwilling to put forth the cost necessary to protect their data.

Banks, in particular, are unwilling to provide this enhanced security due to their large customer base and the prohibitive cost of providing physical hardware. But banks may be willing to provide a more cost-effective second factor, if one existed. Australian inventor Matt Walker may be able to provide such a solution.

Passwindow is a new authentication method consisting of a transparent window with seemingly random markings on it. The key is to combine these markings with similar markings provided by the application requiring authentication. The markings are similar to those on an LED clock and combining the two sources reveals a series of numbers, effectively creating a one-time password. The Passwindow provides a Physical factor, making it an excellent second factor. The following video demonstrates how Passwindow works.

What makes Passwindow so incredible is how inexpensive it is to implement. The bulk of the cost is in providing users with their portion of the pattern. This can easily be added to new bank cards as they are sent out, or provided as a second card to customers until they require a new card. There is sufficient space on existing bank cards to integrate a clear window with the pattern on it.

Passwindow seems to be secure for a number of reasons. It’s a physical device, something that cannot be socially engineered. In order for it to be compromised, an attacker needs to have a copy of the segment pattern on your specific card. While users generally have a difficult time keeping passwords safe, they are exceedingly good at keeping physical objects secure.

If an attacker observes the user entering the generated number, the user remains secure because the number is a one-time password. While it is theoretically possible for that number to come up again, it is highly unlikely. A well written generator will ensure truly random patterns, ensuring they can’t be predicted. Additional security can be added by having the user rotate the card into various positions or adding additional lines to the card.

If Passwindow can find traction, I can see it being integrated into most bank cards, finally providing a more secure means of authentication. Additionally, it brings an inexpensive second factor to the table, giving other institutions the ability to use enhanced security. This is some pretty cool technology, I’m eager to see it implemented in person.

 

Scanning the crowd

RFID, or Radio Frequency Identification, chips are used throughout the commercial world to handle inventory, identify employees, and more. Ever heard of EZPass? EZPass is an RFID tag you attach to your car window. When you pass near an EZPass receiver, the receiver records your ID and decrements your EZPass account by an amount equivalent to the toll of the road or bridge you have just used.

One of the key concepts here is that the RFID tag can be read remotely. Depending on the type of RFID tag, remotely can be a few feet to several yards. In fact, under certain conditions, it may be possible to read RFID tags at extreme distances of hundreds and possibly thousands of feet. It is this “feature” of RFID that attracts the attention of security researchers, and others with more… nefarious intentions.

Of course, if we want to talk security and hacking, we need to talk about Defcon. Defcon, one of the largest, and most electronically hostile, hacking conventions in the US, and possibly the world, took place this past weekend, July 30th – August 2nd. Events range from presentations about security and hacking, to lock picking contests, a jeopardy-style event, and more. One of the more interesting panels this year was “Meet The Feds,” where attendees were given the opportunity to ask questions of the various federal representatives.

Of course, the “official” representatives on the panel weren’t the only feds attending Defcon. There are always others in the crowd, both out in the open and undercover. And while there is typically a “Spot the Fed” event at Defcon, there are other reasons Feds may be around that don’t want their identities known. Unfortunately, the federal government is one of the many organizations that have jumped on the RFID bandwagon, embedding tags into their ID cards.

So, what do you get when you combine a supposedly secure technology with a massive gathering of technology-oriented individuals who like to tinker with just about everything? In the case of Defcon 17, you get an RFID scanner, complete with camera.

According to Threat Level, a column in Wired magazine, a group of researchers set up a table at Defcon where they captured and displayed credentials from RFID chips. When the data was captured, they snapped a picture of the person the data was retrieved from, essentially tying a face to the data stream.

So what’s the big deal? Well, the short story is that this data can be used to impersonate the individual it was captured from. In some cases, it may be possible to alter the data and gain increased privileges at the target company. In other cases, the data can be used to identify the individual, which was the fear at Defcon. Threat Level goes into detail about how it works, but to summarize their report, it is sometimes possible to identify where a person works by the identification strings sent from an RFID chip. Thus, it might be possible to identify undercover federal agents, purely based on the data captured in a passive scan.

The Defcon organizers asked the researchers to destroy the captured data, but this incident just goes to prove how insecure this information truly is. Even with encryption and other security measures in place, identification of an individual is still possible. And to make matters worse, RFID chips are popping up everywhere. They’re attached to items you buy at the store, embedded into identification cards for companies, and even embedded in passports. And while the US government has gone so far as to provide special passport covers that block the RFID signal, there are still countless reasons to open your passport, even for a few moments. Remember, it only takes a few seconds to capture all the data.

Beware… You never know who’s watching …