The Zero-Day Conundrum

Last week, another “zero-day” vulnerability was reported, this time in Adobe’s Acrobat PDF reader. Anti-virus company Symantec reports that this vulnerability is being used as an attack vector against defense contractors, chemical companies, and others. Obviously, this is a big deal for all those being targeted, but is it really something you need to worry about? Are “zero-days” really something worth defending against?

What is a zero-day anyway? Wikipedia has this to say:

A zero-day (or zero-hour or day zero) attack or threat is a computer threat that tries to exploit computer application vulnerabilities that are unknown to others or the software developer. Zero-day exploits (actual software that uses a security hole to carry out an attack) are used or shared by attackers before the developer of the target software knows about the vulnerability.

So, in short, a zero-day is an unknown vulnerability in a piece of software. Now, how do we defend against this? We have all sorts of tools on our side; surely there’s one that will catch these before they become a problem, right? IDS/IPS systems have heuristic filters for detecting anomalous activity. Of course, you wouldn’t want your IPS blocking arbitrary traffic, so that might not be a good idea. Anti-virus software also has heuristic filters, so that should help, right? Well… when’s the last time your heuristic filter caught something that wasn’t a false positive? So yeah, that’s probably not going to work either. So what’s a security engineer to do?

My advice? Don’t sweat it. Don’t get me wrong, zero-days are dangerous and can cause all sorts of problems, but unless you have an unlimited budget and unlimited time, trying to defend against an unknown attack is an exercise in futility. But don’t despair, there is hope.

Turns out, if you spend your time securing your network properly, you’ll defend against most attacks out there. Let’s look at this latest attack, for instance. Let’s assume you’ve spent millions and have the latest and greatest hardware with all the cutting-edge signatures and software. Someone sends the CEO’s secretary a seemingly innocuous PDF, which she promptly opens, and all that hard work goes out the window.

On the other hand, let’s assume you spent the small budget you have defending the critical data you store, and spent the time you saved (by not decoding those advanced heuristics manuals) on training the staff. This time the CEO’s secretary looks twice, realizes this is an unsolicited email, and doesn’t open the PDF. No breach, the world is saved.

Seriously, though, spending your time and effort safeguarding your data and training your staff will get you much further than worrying about every zero-day that comes along. Of course, you should be watching for these sorts of reports. In this case, for instance, you can alert your staff that there’s a critical flaw in this particular software and that they need to be extra careful. Or, if the flaw is in a web application, you can add the necessary signatures to look for it. But in the end, it’s very difficult, if not impossible, to defend against something you’re not aware of. Network and system security is complex and difficult enough without having to worry about the unknown.

Reflections on DerbyCon

On September 30th, 2011, over 1000 people from a variety of backgrounds descended on Louisville, Kentucky to attend the first DerbyCon. DerbyCon is a security conference put together by three security professionals, Dave Kennedy, Martin Bos, and Adrian Crenshaw. Along with a sizable crew of security and administrative staff, they hosted an absolutely amazing conference.

During the three-day conference, DerbyCon sported amazing speakers such as Kevin Mitnick, HD Moore, Chris Nickerson, and others. Talks covered topics such as physical penetration testing, lock picking, and network defense techniques. There were training sessions covering Physical Penetration, Metasploit, Social Engineering, and more. A lock pick village was available where you could both learn and show off your skills, as well as a hardware village where you could learn how to solder, among other things. And, of course, there were late-night parties.

For me, this was my first official security conference. By all accounts, I couldn’t have chosen a better conference. All around me I heard unanimous praise for the conference, how it was planned, and how it was run. There were a few snafus here and there, but really nothing worth griping about.

The presentations I was able to attend were incredible and I came home with a ton of knowledge and new ideas. During the closing of the conference, Dave mentioned some ideas for next year’s conference, such as a newbie track. This has inspired me to think about possibly presenting at next year’s conference. I have an idea already, something I’ve started working on. If all goes well, I’ll have something to present.

DerbyCon was definitely one of the highlights of my year. I’m already eager to return next year.

Audit Insanity

<RANT>

It’s amazing, but the deeper I dive into security, the more garbage security theater I uncover. Sure, there’s insanity everywhere, but I didn’t expect to come across some of this craziness…

One of the most recent activities I’ve been party to has been the response to an independent audit. When I inquired as to the reasoning behind the audit, the answer I received was that this is a recommended yearly activity. It’s possible that this information is incorrect, but I suspect it’s truer than I’d like to believe.

Security audits like this are standard practice all over the US, and possibly the world. Businesses are led to believe that getting audited is a good thing and that audits should be repeated often. My main gripe here is that while audits can be good, they need to be done for the right reasons, not just because someone tells you they’re needed. Or, even better, there are the audits forced on a company by their insurance company or their payment processor. These sorts of audits exist to shift the blame if something bad happens.

Let’s look a little deeper. The audit I participated in was a typical security audit. An auditor contacts you with a spreadsheet full of questions for you to answer. You will, of course, answer them truthfully. Questions included inquiries about the password policy, how security policies are distributed, and how logins are handled. They delve into areas such as logging, application timeouts, IDS/IPS use, and more. It’s fairly in-depth, but ultimately just a checklist. The auditor goes through their list, interpreting your answers, and applying checkmarks where appropriate. The auditor then generates a list of items you “failed” to comply with and you have a chance to respond. This is all incorporated into a final report which is presented to whoever requested the audit.

Some audits include a scanning piece as well. The one I’m most familiar with in this respect is the SecurityMetrics PCI scan. Basically, you fill out a simplified yes/no questionnaire about your security and then they run a Nessus scan against whatever IP(s) you provide to them. It’s a completely brain-dead scan, too. Here’s a perfect example. I worked for a company that processed credit cards. The system they used to do this was on a private network using outbound NAT. There were both IDS and firewall systems in place. For the size of the business and the frequency of credit card transactions, this was considerable security. But, because there was a payment card processor in the mix, they were required to perform a quarterly PCI scan. The vendor of choice: SecurityMetrics.

So, the security vendor went through their checklist and requested the IP of the server. I explained that it was behind a one-way NAT and inaccessible from the outside world. They still wanted the IP of the machine, which I provided to them: 10.10.10.1. Did I mention that the host in question was behind a NAT? These “security professionals” then loaded that IP into their automated scanning system. And it failed to contact the host. Go figure. Again, we went around and around until they finally said that they needed the IP of the device doing the NAT. I explained that this was a router and wouldn’t provide them with any relevant information. The answer? We don’t care, we just need something to scan. So, they scanned a router. For years. Hell, they could still be doing it for all I know. Like I said, brain-dead security.

What’s wrong with a checklist, though? The problem is, it’s a list of “common” security practices not tailored to any specific company. So, for instance, the audit may require that a company use hardware-based authentication devices in addition to standard passwords. The problem here is that this doesn’t account for non-hardware solutions. The premise is that two-factor authentication is more secure than just a username and password. Sure, I wholeheartedly agree. But I would argue that public key authentication provides similar security. It satisfies the “What You Have” and “What You Know” portions of two-factor authentication. But it’s not hardware! Fine, put your key on a USB stick. (No, really, don’t. That’s not very secure.)

Other examples include the standard “Password Policy” crap that I’ve been hearing for years. Basically, you should expire passwords every 90 days or so, passwords should be “strong”, and you should prevent password reuse by remembering a history of passwords. So let’s look at this a bit. Forcing password changes every 90 days results in bad password habits. The reasoning is quite simple, and there have been studies that show this. This paper (pdf) from the University of North Carolina is a good example. Another decent write-up is this article from Cryptosmith. Allow me to summarize. Forcing password expiration results in people making simpler passwords, writing passwords down, or using simplistic algorithms to generate “complex” passwords. In short, cracking these “fresh” passwords is often easier than cracking well-thought-out ones.

The so-called “strong” password problem can be summarized by a rather clever XKCD comic. The long and short of it is that passwords which cannot be easily cracked are either horribly complex mishmashes of numbers, letters, and symbols, or they’re long strings of generic words. Seriously, “correct horse battery staple” is significantly stronger than a completely random 11-digit string.
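A quick back-of-the-envelope comparison bears this out (assuming the attacker knows which scheme you used, and that “digits” means 0 through 9):

A random 11-digit string: 10^11 possibilities, roughly 2^36 (about 36 bits of entropy).
Four words picked at random from a 2048-word list: 2048^4 = 2^44 possibilities (44 bits of entropy).

The passphrase wins by a factor of well over a hundred, and it’s the one you can actually remember.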

And, of course, password history. This sort of goes hand-in-hand with password expiration, but not always. If it’s used in conjunction with password expiration, then it generally results in single-character variations of the same password. Your super-secure “complex” password of “Password1” (seriously, it meets the criteria: uppercase, lowercase, number) becomes a series of passwords where the 1 is changed to a 2, then 3, then 4, etc. until the history is exceeded and the user can return to 1 again. It’s easier to remember that way and the user doesn’t have to do much extra work.

So even the standard security practices on the checklist can be questioned. The real answer here is to tailor each audit to the needs of whoever requested it, and to properly evaluate the responses based on the security posture of the responder. There do need to be baselines, but they should be sane baselines. If you don’t get all of the checkmarks on an audit, it may not mean you’re insecure; it may just mean you’re securing your network in a way the auditor didn’t think of. There’s more to security than fancy passwords and firewalls. A lot more.

</RANT>

Technology in the here and now

I’m writing this while several thousand feet up in the air, on a flight from here to there. I won’t be able to publish it until I land, but that seems to be the exception these days rather than the norm.

And yet, while preparing for takeoff, the same old announcements are made. Turn off cell phones and pagers, disable wireless communications on electronic devices. And listening around me, hurried conversations between passengers as they ensure that all of their devices are disabled. As if a stray radio signal will cause the airplane to suddenly drop from the sky, or prevent it from taking off to begin with.

Why is it that we, as a society, cannot get over these simple hurdles? Plenty of studies have shown that these devices don’t interfere with planes. In fact, some airlines are offering in-flight wireless access. Many airlines have offered in-flight telephone calls. Unless my understanding of flight is severely limited, I’m fairly certain that all of these functions use radio signals to operate. And yet we are still told that stray signals may cause planes to crash, may cause interference with the pilot’s instrumentation.

We need to get over this hurdle. We need to start spending our time looking to the future, advancing our technology, forging new paths. We need to stop clinging to outdated ideas. Learning from our past mistakes is one thing, and there’s merit in understanding history. But let’s spend our energy wisely and make the simple things we take for granted even better.

Helpful Rules for OSSEC

OSSEC has quickly become a primary weapon in my security toolkit.  It’s flexible, fast, and very easy to use.  I’d like to share a few rules I’ve found useful as of late.

I primarily use OSSEC in a server/client setup.  One side effect of this is that when I make changes to the agent’s configuration, it takes some time for it to push out to all of the clients.  Additionally, clients don’t restart automatically when a new agent config is received.  However, it’s fairly easy to remedy this.

First, make sure you have syscheck enabled and that you’re monitoring the OSSEC directory for changes.  I recommend monitoring all of /var/ossec and ignoring a few specific directories where files change regularly. You’ll need to add this to both the ossec.conf and the agent.conf.

<directories check_all="yes">/var</directories>
<ignore type="sregex">^/var/ossec/queue/</ignore>
<ignore type="sregex">^/var/ossec/logs/</ignore>
<ignore type="sregex">^/var/ossec/stats/</ignore>

The first time you set this up, you’ll have to manually restart the clients after the new config is pushed to them. All new clients should work fine, however.
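If you’re doing that restart by hand, the stock control script is the easiest way (assuming the default install path):

/var/ossec/bin/ossec-control restart

Run that on each agent and it will re-read the newly pushed agent.conf.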

Next, add the following rules to your local_rules.xml file (or whatever scheme you’re using).

<rule level="12" id="100005">
   <if_matched_group>syscheck</if_matched_group>
   <description>agent.conf changed, restarting OSSEC</description>
   <match>/var/ossec/etc/shared/agent.conf</match>
</rule>

This rule looks for changes to the agent.conf file and triggers a level 12 alert. Now we just need to capture that alert and act on it. To do that, you need to add the following to your ossec.conf file on the server.

<command>
    <name>restart-ossec</name>
    <executable>restart-ossec.sh</executable>
    <expect>srcip</expect>
    <timeout_allowed>no</timeout_allowed>
</command>
<active-response>
    <command>restart-ossec</command>
    <location>local</location>
    <rules_id>100005</rules_id>
</active-response>

You need to add this to the top of your active response section, above any other rules. OSSEC matches the first active-response block and ignores any subsequent ones. The restart-ossec.sh script referenced in the command section should exist already in your active-response/bin directory as it’s part of the distribution.

And that’s all there is to it. Whenever the agent.conf file changes on a client, it’ll restart the OSSEC agent, reading in the new configuration.

Next up, a simple DoS prevention rule for Apache web traffic. I’ve had a few instances where a single IP would hammer away at a site I’m responsible for, eating up resources in the process. Generally speaking, there’s no real reason for this. So, one solution is to temporarily block IPs that are abusive.

Daniel Cid, the author of OSSEC, helped me out a little on this one. It turned out to be a little less intuitive than I expected.

First, you need to group together all of the “normal” response codes. The actual error responses (400/500 errors) are handled by other, more aggressive rules, so you can ignore most of them. For our purposes, we want to trigger on the ordinary 200/300/400 response codes.

<rule id="131105" level="1">
      <if_sid>31101, 31108, 31100</if_sid>
      <description>Group of all "normal" 200/300/400 response codes.</description>
</rule>

Next, we want to create a composite rule that will fire after a set frequency and time limit. In short, we want this rule to fire if X matches are made in Y seconds.

<rule id="131106" level="10" frequency="500" timeframe="60">
      <if_matched_sid>131105</if_matched_sid>
      <same_source_ip />
      <description>Excessive access, Temporary block</description>
</rule>

That should be all you need, provided you have active response already enabled. You can also add a specific active response for this rule that blocks for a shorter, or longer, period of time. That’s the beauty of OSSEC: the choice is in your hands.
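For example, a rule-specific response might look something like this. This is just a sketch; it assumes the stock firewall-drop command is defined in your ossec.conf (it is in the default configuration), and the 600-second timeout is an arbitrary choice:

<active-response>
      <command>firewall-drop</command>
      <location>local</location>
      <rules_id>131106</rules_id>
      <timeout>600</timeout>
</active-response>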

I hope you find these rules helpful. If you have any questions or comments, feel free to post them below.

Hey KVM, you’ve got your bridge in my netfilter…

It’s always interesting to see how new technologies alter the way we do things.  Recently, I worked on firewalling for a KVM-based virtualization platform.  From the outset it seems pretty straightforward.  Set up iptables on the host and guest and move on.  But it’s not that simple, and my google-fu initially failed me when searching for an answer.

The primary issue was that when iptables was enabled on the host, the guests became unavailable.  If you enable logging, you can see the traffic being blocked by the host, thus never making it to the guest.  So how do we do this?  Well, if we start with a generic iptables setup, we have something that looks like this:

# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

Adding logging to identify what’s going on is pretty straightforward.  Add two logging lines, one for the INPUT chain and one for the FORWARD chain.  Make sure these are added as the first rules in the chain, otherwise you’ll jump to the RH-Firewall-1-INPUT chain and never make it to the log.

-A INPUT -j LOG --log-prefix "Firewall INPUT: "
-A FORWARD -j LOG --log-prefix "Firewall FORWARD: "

Now, with this in place you can try sending traffic to the guest.  If you tail /var/log/messages, you’ll see the blocking done by netfilter.  It should look something like this:

Apr 18 12:00:00 example kernel: Firewall FORWARD: IN=br123 OUT=br123 PHYSIN=vnet0 PHYSOUT=eth1.123 SRC=192.168.1.2 DST=192.168.1.1 LEN=56 TOS=0x00 PREC=0x00 TTL=64 ID=18137 DF PROTO=UDP SPT=56712 DPT=53 LEN=36

There are a few things of note here.  First, this occurs on the FORWARD chain only.  The INPUT chain is bypassed completely.  Second, the system recognizes that this is a bridged connection.  This makes things a bit easier to fix.

My attempt at resolving this was to put in a rule that allowed traffic to pass for the bridged interface.  I added the following:

-A FORWARD -i br123 -o br123 -j ACCEPT

This worked as expected and allowed the traffic through the FORWARD chain, making it to the guest unmolested.  However, this method means I have to add a rule for every bridge interface I create.  While explicitly adding rules for each interface should make this more secure, it means I may need to change iptables while the system is in production and running, not something I want to do.

A bit more googling led me to this post about KVM and iptables.  In short it provides two additional methods for handling this situation.  The first is a more generalized rule for bridged interfaces:

-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Essentially, this rule tells netfilter to accept any traffic for bridged interfaces.  This removes the need to add a new rule for each bridged interface you create, making management a bit simpler.  The second method is to completely remove bridged interfaces from netfilter.  Set the following sysctl variables:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
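If you go this route, you’ll want these settings to survive a reboot.  On most distributions that means adding them to /etc/sysctl.conf and reloading (a sketch; adjust for your distribution):

# /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# apply without a reboot
sysctl -p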

I’m a little worried about this method as it completely bypasses iptables on the host.  However, it appears that this is actually a more secure manner of handling bridged interfaces.  According to this bugzilla report and this post, allowing bridged traffic to pass through netfilter on the host can result in a possible security vulnerability.  I believe this is somewhat similar to a cryptographic hash collision.  Attackers can take advantage of netfilter entries with similar IP/port combinations and possibly modify traffic or access systems.  By using the sysctl method above, the traffic completely bypasses netfilter on the host and these attacks are no longer possible.

More testing is required, but I believe the latter method of using sysctl is the way to go.  In addition to the security considerations, bypassing netfilter has a positive impact on throughput.  It seems like a win-win from all angles.

WoO Day 7 : Tidbits

And so a week of fun comes to an end. But before we part company, there are a few more items to discuss. There are portions of OSSEC that are often overlooked because they require little or no configuration, and they “just work.” Other features are either more involved, or require external products. Let’s take a look at a few of these.

Rootkit Detection

First up, rootkit detection. OSSEC ships with a set of rules and “signatures” designed to detect common rootkits. The default OSSEC installation uses two files, rootkit_files.txt and rootkit_trojans.txt, as the base of the rootkit detection.

The rootkit_files.txt file contains a list of files commonly found with rootkit infections. Using various system calls, OSSEC tries to detect if any of these files are installed on the machine and sends an alert if they are found. Multiple system calls are used because some rootkits hide themselves by altering system binaries and sometimes by altering system calls.

The rootkit_trojans.txt file contains a list of commonly trojaned files as well as patterns found in those files when they have been compromised. OSSEC will scan each file and compare it to the list of patterns. If a match is found, an alert is sent to the administrator.

There are also additional rootkit files shipped with OSSEC. For Windows clients there are three files containing signatures for common malware, signatures to detect commonly banned software packages, and signatures for checking Windows policy settings. On the Linux side are a number of files for auditing system security and adhering to CIS policy. CIS policy auditing will be covered later.

Rootkit detection also goes beyond these signature-based methods. Other detection methods include scanning /dev for unusual entries, searching for hidden processes, and searching for hidden ports. Rootkit scanning is pretty in-depth and more information can be found in the OSSEC Manual.
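For reference, the configuration that drives all of this is the rootcheck block. The following is a sketch using the default install paths; check your own ossec.conf for the actual locations:

<rootcheck>
<rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
<rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>
</rootcheck>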

CIS Auditing

CIS, the Center for Internet Security, publishes benchmarks for auditing system security on various operating systems. OSSEC can assist with compliance to the CIS guidelines by monitoring systems for non-conformity and alerting as necessary. Shipped by default with OSSEC are three CIS-based signature sets for Red Hat Linux and Debian. Creating new tests is fairly straightforward and the existing tests can be adapted as needed.
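Enabling a policy check is just a matter of pointing the rootcheck section at the appropriate policy file. A sketch, using a file name from a default Red Hat install (adjust for your distribution):

<rootcheck>
<system_audit>/var/ossec/etc/shared/cis_rhel_linux_rcl.txt</system_audit>
</rootcheck>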

OSSEC Splunk

One thing that OSSEC lacks is an easy way to pore through the OSSEC logs, get visual information on alerts, etc. There was a project, the OSSEC-WUI, that aimed to resolve this, but that project has mostly died. Last I heard, there were no plans to revive it.

There is an alternative, however. A commercial product, Splunk, can handle the heavy lifting for you. Yes, yes, Splunk is commercial. But, good news! They have a free version that can do the same thing on a smaller scale, without all of the extra shiny. There’s also a Splunk plugin designed specifically to handle OSSEC. It’s worth checking out; you can find it over at splunkbase.

Alert Logging

And finally, alert logging. Because OSSEC is tightly secured, it can sometimes be a challenge to deal with alert logs. For instance, what if you want to put the logs in an alternate location outside of /var/ossec? There are alternatives for this, too. For non-application-specific output you can use syslog or database output. OSSEC also supports output to Prelude, and beta support exists for PicViz. I believe you can use multiple output methods if you desire, though I’d have to test that out to be sure.

The configuration for syslog output is very straightforward. You can define both the destination syslog server and the level of the alerts to send. Syslog output is typically what you would use in conjunction with Splunk.
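As a sketch (the server IP, port, and level here are placeholders), the server-side configuration looks like this:

<syslog_output>
<server>192.168.0.10</server>
<port>514</port>
<level>7</level>
</syslog_output>

You’ll also need to enable the syslog output process with /var/ossec/bin/ossec-control enable client-syslog and then restart OSSEC.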

Database configuration is a bit more in-depth and requires that OSSEC be compiled with certain options enabled. Both MySQL and PostgreSQL are supported. The database configuration block in the ossec.conf file contains all of the options you would expect: database name, username and password, and hostname of the database server. Additionally, you need to specify the database type, MySQL or PostgreSQL.
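A minimal MySQL example might look like the following. The hostname and credentials are placeholders, and remember that this only works if OSSEC was compiled with database support and the output has been enabled with /var/ossec/bin/ossec-control enable database:

<database_output>
<hostname>192.168.0.20</hostname>
<username>ossec</username>
<password>ossecpass</password>
<database>ossec</database>
<type>mysql</type>
</database_output>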

Prelude and PicViz support have their own specific configuration parameters. More information on this support can be found in the OSSEC Manual.

Final Thoughts

OSSEC is an incredible security product. I still haven’t reached the limits of what it can do and I’ve been learning new techniques for using it all week. Hopefully the information provided here over the last 7 days proves to be helpful to you. There’s a lot more information out there, though, and an excellent place to start is the OSSEC home page. There’s also the OSSEC mailing list where you can find a great deal of information as well as a number of very knowledgeable, helpful users.

The best way to get started is to get a copy of OSSEC installed and start playing. Dive right in, the water’s fine.

WoO Day 6 : Layin’ Down The Law

In a previous entry we discussed OSSEC Decoders and how they work. Decoders are only the first step in the log analysis train, though. Once log entries have been decoded, we need to do something with the results. That something is to match them up with rules so that OSSEC sends alerts and/or provides the proper response.

OSSEC rules ultimately determine what log entries will be reported and what entries will trigger active responses. And, as with decoders, rules can build upon one another. You can also chain rules together to alter the ultimate response based on conditions such as frequency and time. So let’s take a look at a simple rule.

<rule noalert="1" level="0" id="5700">
<decoded_as>sshd</decoded_as>
<description>SSHD messages grouped.</description>
</rule>

This is one of the default rules that ships with OSSEC. Each rule is defined with a unique ID. To prevent custom rules from interfering with core rules, the developers have reserved a range of IDs for custom rules. That said, there is nothing preventing you from using whatever ID numbering you desire. Just keep in mind that if you use an ID that is reserved for something else by the developers, new versions of OSSEC may interfere.

Also listed in the rule tag are a level and a noalert attribute. The noalert attribute is pretty straightforward: this rule won’t send an alert when it’s matched. Typically, the noalert attribute is used on rules that are designed to be built upon. The level attribute determines what level alert this rule will trigger when matched. Because we’re combining it with noalert, the level ends up not meaning a whole lot.

The decoded_as tag identifies the parent decoder for which this rule is written. Only rules with a decoded_as tag matching the decoder used to decode the current log entry will be scanned for a match. This prevents OSSEC from having to scan every rule for a match.

The description tag is a human-readable description of what the rule is. When you receive an alert, or when the alert is added to the OSSEC log, this description is included with it. In the case of this rule, the description identifies its purpose. This rule was defined purely to group together sshd alerts. The intention is that other rules will handle the alerts for any alert-worthy ssh log entries that are detected.

Now let’s look at something that builds on this basic rule. Again, choosing a rule from the default ruleset, we have this:

<rule id="5702" level="5">
<if_sid>5700</if_sid>
<match>^reverse mapping</match>
<regex>failed - POSSIBLE BREAK</regex>
<description>Reverse lookup error (bad ISP or attack).</description>
</rule>

The first thing to note about this rule is the if_sid tag. The if_sid tag says if the rule number defined in this tag has already matched the incoming log entry, then we want to see if this rule matches. In other words, this rule is a child of the rule identified in the if_sid tag.

The match tag defines a string we’re looking for within the log entry. The regex tag also defines a string we’re trying to match, but regex can use the full set of regular expressions that OSSEC supports. If both the match and the regex are found in the log entry, then this rule matches and will alert at a level of 5.

Finally, let’s look at a more advanced rule. This rule also builds on the previous rules mentioned, but contains a few extras that make it even more powerful.

<rule timeframe="360" frequency="4" level="10" id="5703">
<if_matched_sid>5702</if_matched_sid>
<description>Possible breakin attempt </description>
<description>(high number of reverse lookup errors).</description>
</rule>

The intention of this rule is to identify repeated log entries that point at a more severe problem than a simple error. In this case we have multiple incoming ssh connections with bad reverse DNS entries. The frequency and timeframe attributes define how many times within a specific timespan a particular rule has to fire before this rule will kick in.

Notice that instead of the if_sid tag, we’re using the if_matched_sid tag. This is because we’re not necessarily making this rule a child of another, but instead making it a composite rule. In other words, this rule is fired if another rule fires multiple times based on the setting within this rule. As a result, this rule fires with a level of 10.

But now that we have rules and alerts being generated, what else can we do? The answer is that we can trigger active responses based on those rules. Generally, an active response fires when an alert comes in with a level equal to or higher than the alert level of the active response definition. There are ways to alter this, but let’s keep things simple for now.

To configure active response, first you need to define the commands that active-response will use. Note: All command and active-response configuration is placed in the ossec.conf on the server.

<command>
<name>firewall-drop</name>
<executable>firewall-drop.sh</executable>
<expect>srcip</expect>
<timeout_allowed>yes</timeout_allowed>
</command>

This snippet defines a command called firewall-drop. The command depends on an executable called firewall-drop.sh. This file must exist on all agents that will run this command. Unfortunately, there is currently no mechanism to push these files out to agents automatically, but perhaps there will be in the future? (*HINT*)

The expect tag determines what variables are required from the alert in order to fire this command. Since this particular command adds a block rule to the host’s firewall, the srcip is required. And finally, timeout_allowed tells OSSEC that this command supports a timeout option. In other words, the firewall block is added and then removed after a given timeout.

Once the command is in place you need to set up the active response itself.

<active-response>
<command>firewall-drop</command>
<location>local</location>
<level>6</level>
<timeout>3600</timeout>
</active-response>

The active-response block identifies the command that will be run as well as the level at which it will fire. For this example, the firewall-drop command is run for any alert with a level of 6 or higher. We have also specified a timeout of 3600 which tells OSSEC that the command needs to be run again after 3600 seconds to remove the firewall-drop.

Also included is a location tag. This tells OSSEC where to run the command. Here we have specified local, which may be slightly confusing. This means that firewall-drop is run on the local agent that triggered the alert. So, if agent 002 triggers an ssh alert with a level of 6, then OSSEC tells agent 002 to run the firewall-drop command with a timeout of 3600 seconds.

You can also specify other options for location that allow you to run commands on the server or on specific agents. For instance, what if you want to block an IP at the edge whenever you get an alert of level 10 or higher? Perhaps you create a command called edge-block that connects to your edge router to update an ACL located on the router. Running this on every agent is unwieldy at best and probably constitutes a significant security threat. Instead, you can have this script run on the server, or even on a specific agent designated to handle this connection.
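As a sketch, tying that hypothetical edge-block command to the server might look like this:

<active-response>
<command>edge-block</command>
<location>server</location>
<level>10</level>
</active-response>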

And that covers the basics for rules. I encourage you to write your own rules and test them out with the ossec-logtest program located in /var/ossec/bin. Learning to write rules is essential to running and tuning an OSSEC installation.

Tune in tomorrow for the final wrap-up of this year’s Week of OSSEC!

WoO Day 5 : Decoders Unite!

One of the strongest features of OSSEC is its log analysis engine. The log collector process monitors all of the specified logs and passes each log entry off to the analysis engine for decoding and rule matching. For agents, the log entries are transmitted to the server for processing.

There are multiple steps to identifying actionable matches starting with the decoding of each log entry. First, pre-decoding breaks apart a single log entry into manageable pieces. Decoders are then used to identify specific programs and log entry types. Decoders can build on one another allowing the operator to target specific patterns within a log entry with ease.

Let’s take a look at a simple example first. The following is a sample log entry from a typical /var/log/secure log. This log uses the syslog format for log entries.

Oct 21 00:01:00 dev sshd[31409]: Failed password for invalid user postfix from 189.126.97.181 port 57608 ssh2

Pre-decoding breaks apart the syslog format into three pieces, the hostname, program_name, and log. Using the ossec-logtest program provided with OSSEC, we can view the process OSSEC goes through for decoding and then rule matching. Pre-decoding this log entry produces the following:

[root@dev bin]# ./ossec-logtest
2010/10/21 00:01:00 ossec-testrule: INFO: Reading local decoder file.
2010/10/21 00:01:00 ossec-testrule: INFO: Started (pid: 1106).
ossec-testrule: Type one log per line.

Oct 21 00:01:00 dev sshd[31409]: Failed password for invalid user postfix from 189.126.97.181 port 57608 ssh2

**Phase 1: Completed pre-decoding.
full event: 'Oct 21 00:01:00 dev sshd[31409]: Failed password for invalid user postfix from 189.126.97.181 port 57608 ssh2'
hostname: 'dev'
program_name: 'sshd'
log: 'Failed password for invalid user postfix from 189.126.97.181 port 57608 ssh2'

As you can see, the log entry ends up with three parts. Further decoding uses pattern matching on these three parts to further identify and categorize log entries.

<decoder name="sshd">
<program_name>^sshd</program_name>
</decoder>

This is about as simple as a decoder can get. The decoder block starts with an attribute identifying the name of the decoder. In this case, that name is sshd. Within the decoder block is a program_name tag that contains a regular expression used to match against the program_name from pre-decoding. If the regex matches, OSSEC will use that decoder to match against any defined rules.

As I mentioned before, however, decoders can build on each other. The first decoder above reduces the number of subsequent decoders that need to be checked before decoding is complete. For example, look at the following decoder definition:

<decoder name="ssh-invfailed">
<parent>sshd</parent>
<prematch>^Failed \S+ for invalid user|^Failed \S+ for illegal user</prematch>
<regex offset="after_prematch">from (\S+) port \d+ \w+$</regex>
<order>srcip</order>
</decoder>

This decoder builds on the first as can be seen via the parent tag. Decoders work on a first match basis. In other words, the first decoder to match is used to find secondary decoders (children of the first), the first secondary decoder is used to find tertiary (children of the second), etc. If the matching decoder has no children, then that decoder is the final decoder and the decoded information is passed on to rules processing.

There are three other tags within this decoder block worth looking at. First, the prematch tag. Prematch is used as a quick way to determine if the rest of the decoder should be run. Prematches should be written so that the portion of the entry they match can be skipped by the rest of the decoder. For instance, in the decoder example above, the prematch will match the phrase “Failed password for invalid user” in the log entry. This portion of the log contains enough information to identify the type of log entry without requiring us to parse it again to extract information. The remaining part of the log entry has the information we want to capture.

Which brings us to the regex. The regex, or regular expression, is a string used to match and pull apart a log entry. The regex expression in the example is used to extract the source ip address from the log so we can use it in an active response later. The order tag identifies what the extracted information is.

Now, using these two decoders, let’s run ossec-logtest again:

2010/10/21 00:01:00 ossec-testrule: INFO: Reading local decoder file.
2010/10/21 00:01:00 ossec-testrule: INFO: Started (pid: 28358).
ossec-testrule: Type one log per line.

Oct 21 00:01:00 dev sshd[31409]: Failed password for invalid user postfix from 189.126.97.181 port 57608 ssh2

**Phase 1: Completed pre-decoding.
full event: 'Oct 21 00:01:00 dev sshd[31409]: Failed password for invalid user postfix from 189.126.97.181 port 57608 ssh2'
hostname: 'dev'
program_name: 'sshd'
log: 'Failed password for invalid user postfix from 189.126.97.181 port 57608 ssh2'

**Phase 2: Completed decoding.
decoder: 'sshd'
srcip: '189.126.97.181'

As you can see, the decoding phase has identified the decoder as sshd. The logtest program reports the name of the parent decoder used, rather than the ultimate matching decoder.

Hopefully this has given you enough information to start writing your own decoders. The decoder.xml file that comes with OSSEC is a great place to look at examples of how to craft decoders. This is a more advanced task, however, and the existing decoders cover most of the standard log entries you’ll see on a Linux or Windows machine.

For more information on decoders, please see the OSSEC manual. You might also check out chapter 4 of the OSSEC book. The book is a little outdated now, but the information within is still quite accurate. Syngress released a few chapters of the book that you can download here.

WoO Day 4 : Spot the Difference

One of the simplest functions that OSSEC can perform is integrity monitoring. The premise is pretty simple, let the admin know if a file has changed. More often than not, the admin will already know that the file has changed because the admin is the one that changed it. But sometimes files change because of problems in the system that the admin doesn’t know about. Or, the files may change because the server has been compromised by an outside party that has installed rogue software. Either way, the admin needs this information.

OSSEC can be configured to look at a few different aspects of a file in order to determine if it has changed or not, depending on how you configure it. But before we get to that, let’s configure OSSEC to send us alerts to begin with.

There are a number of ways OSSEC can send alerts. Alerts can be sent via syslog, email, stored in a database, or sent to third-party programs such as Prelude and PicViz. To make things a bit simpler, I’m only detailing how to set up email. If you’re interested in other alert setups, please check the OSSEC manual.

Setting up email alerts is as simple as adding two sections of configuration to the ossec.conf file. This configuration is set on the server, or on a standalone installation. In a server/agent setup, the server sends all alerts, including those for agents.

<global>
<email_notification>yes</email_notification>
<email_to>myaddress@example.com</email_to>
<smtp_server>smtp.example.com</smtp_server>
<email_from>ossec@example.com</email_from>
</global>

This first bit of configuration defines the To: and From: addresses as well as the SMTP server address. This configuration goes into the global config section which has a number of other options as well.

<alerts>
<log_alert_level>1</log_alert_level>
<email_alert_level>6</email_alert_level>
</alerts>

This portion of the configuration defines what level alerts should be sent via email and what level alerts should be logged. Don’t worry too much about what a level is; you’ll learn about levels in a later blog entry when we discuss rules and active response. For now, the config as shown above is enough.

Now that we’ll receive alerts that OSSEC generates, let’s set something up to send an alert!

As I mentioned before, integrity monitoring is pretty straightforward. OSSEC uses a number of different characteristics to identify when a file changes. It’s pretty easy to determine that a file has changed if the owner, permissions, or size changes. OSSEC can also be configured to check the sha1 and/or md5 hash values. Hashing is a way of producing a “signature” for a file that is mostly unique. It is possible to create another file with the same hash, but it’s very difficult. Combining all of these checks makes it very improbable that an intruder can replace a file without you knowing.

Enabling syscheck is done on a per-host basis. What I mean by this is that the config is added to the ossec.conf file for server or standalone systems, and the config is added to the agent.conf for agents. As with other agent configurations, you can specify syscheck blocks for all agents as well as cumulatively for specific agents.

<syscheck>
<frequency>7200</frequency>

<directories check_all="yes">/etc</directories>
<ignore>/etc/adjtime</ignore>
</syscheck>

The above config is an example of how to enable syscheck. The frequency block specifies the time, in seconds, between syscheck runs. The example above runs syscheck every 2 hours. Moving further down in the config is the directories tag. Simply put, this tag identifies what directories syscheck should be checking. The directories tag can define multiple directories, separated by commas. The check_all attribute indicates that all of the previously mentioned checks should be run. The OSSEC manual details what other attributes are available. One other attribute worth mentioning is the realtime attribute. This attribute directs OSSEC to use inotify to alert when files change, in real time, for quicker notification. Linux is the only OS that I’m aware of that supports this option. Please check the manual for more information on realtime use.

There are times when you want to ignore files within a directory, or even certain subdirectories. The ignore tag allows you to accomplish this. Without defining any attributes, the ignore tag must define an exact match. For instance, in the example above, the file /etc/adjtime will be ignored. However, /etc/adjtime.new or /etc/adjtime0 will not be. To ignore these files, you will need to either add explicit ignore blocks for them, or you can use a regular expression to grab them all. The type attribute allows you to specify that this tag contains a regular expression. For instance:

<ignore type="sregex">^/etc/adjtime</ignore>

This ignore block will drop any file (including path) that starts with /etc/adjtime. Information on the regular expression syntax is in the OSSEC Manual.

There are a number of other tags available for syscheck that you can find in the manual. Among these are some more useful ones such as alert_new_files and, if you’re in a Windows environment, windows_registry. I encourage you to check out these options and identify which, if any, would benefit your environment.
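For example, a sketch combining the two (the registry path is purely illustrative):

<syscheck>
<alert_new_files>yes</alert_new_files>
<windows_registry>HKEY_LOCAL_MACHINE\Software\Policies</windows_registry>
</syscheck>

One caveat: if memory serves, the stock rule for newly added files fires at a low level, so you may need a local rule override for new-file alerts to actually reach you.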

When OSSEC detects that a file has changed, it will send an alert. Alerts will be reported via the alert mechanism you have defined within the configuration. Syscheck alerts are, by default, set to a level of 7, so our alert settings will result in an email being sent. An example alert is below:

OSSEC HIDS Notification.
2010 Oct 13 08:08:05

Received From: (Example) 192.168.0.1->syscheck
Rule: 550 fired (level 7) -> “Integrity checksum changed.”
Portion of the log(s):

Integrity checksum changed for: '/etc/adjtime'
Old md5sum was: '7234e9b2255e62178c5650982bae9cbc'
New md5sum is : '01210c2018146c2a9ca89505118c42f8'
Old sha1sum was: 'df60021e39119b42a4ae508ad19b65019df089af'
New sha1sum is : '694b403b74a2aa339ba323b65a6d724aa8129e3b'

--END OF NOTIFICATION

OSSEC makes some attempts at identifying false positives by automatically ignoring files that change frequently. By default, OSSEC will begin ignoring any file that has changed three times. You can change this behavior by creating custom rules that override the defaults. I’ll be covering rules in my next blog post, so stay tuned!