Derbycon 2012

I spent this past weekend in Louisville, KY attending a relatively new security conference called Derbycon. This year was the second year they held the conference and the first year I spoke there. It was amazing, to say the least.

I haven’t been to many conventions, and this is the only security-oriented convention I’ve attended. When I first attended last year, it was with some trepidation. I knew that some of the attendees I’d be seeing were truly rockstars in the security world. And, unfortunately, one of the people who was supposed to come with us was unable to attend. Of course, that person was the one person in our group who was connected within the security world, and we were depending on them to introduce us to everyone.

It went well, nonetheless, and we were able to meet a lot of amazing people while we were there. Going back this year, we were able to rekindle friendships that started last year, and even make a few new ones. Derbycon has an absolutely amazing sense of family. Even the true rockstars of the con are down to earth enough to hang out with the newcomers.

And this year, I had the opportunity to speak. I submitted a talk to the CFP earlier in the year, not really expecting it to be chosen. Much to my surprise, though, it was. And so I spent some time putting together my talk and prepared to stand in front of the very people I looked up to. It was nerve-wracking, to put it mildly. You can watch the video over on the Irongeek site, and you can find the slides in my presentation archive.

But I powered through it. I delivered my talk, and while it may not have been the most amazing talk of the conference, it was an accomplishment. I think it’s given me a bit more confidence in my own abilities, and I’m looking forward to giving another. In fact, I’ve since submitted a talk to BSides Delaware at the behest of the organizers. I haven’t heard back yet, but here’s hoping.

I’m already making plans to attend Derbycon 2013 and I hope to be a permanent fixture there for many years to come. Derbycon is an amazing place to go and something truly magnificent to experience. I may not be in the security industry, but they made me feel truly welcome despite my often dumb questions and inane comments. Rel1k, IronGeek, and Purehate have put together something special and I was proud to be a part of it again.

Jumping The Gap

I listened to a news story on NPR’s On The Media recently about “Cyber Warfare” and assessing its true threat. On the one hand, it seemed like another misguided report from a clueless news media. On the other hand, though, it did make me think a bit.

Much of the talk about Cyber Warfare revolves around attacking the various SCADA systems used to control the nation’s physical infrastructure. By today’s standards, many of these systems are quite primitive. They are often designed for a very specific purpose, rarely upgraded to run on modern operating systems, and very rarely, if ever, designed to be secure. The state of the art in security for many of these systems is simply to not allow outside access.

Unfortunately, if numerous reports are to be believed, a good portion of the world’s infrastructure is connected to the Internet in one manner or another. The number of institutions that truly air gap their critical networks is alarmingly low. A researcher from IOActive, who provided some of the information for the aforementioned NPR article, used SHODAN to scour the Internet for SCADA systems. Why use SHODAN? It turns out that the simple act of scanning the Internet for these systems often resulted in the target systems crashing and going offline. If a simple network scan can kill one of these systems, then what hope do we have?
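
To give a sense of how low the bar is, here’s a minimal sketch of that kind of search using the shodan Python library. The API key is a placeholder, and the query strings are just a few banner fragments commonly associated with industrial control systems; they are not the researcher’s actual queries.

```python
# A minimal sketch of searching SHODAN for exposed control systems,
# using the shodan Python library (pip install shodan). The key and
# the queries below are illustrative placeholders.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # hypothetical API key

# Banner fragments commonly associated with industrial control systems.
for query in ("Modbus", "Niagara Web Server", "Simatic"):
    results = api.search(query)
    print(f"{query}: {results['total']} hosts publicly visible")
```

The point isn’t the specific queries; it’s that these systems are indexed and findable without ever sending them a packet yourself, which matters given that even a benign scan can knock them over.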

But air gapping is by no means a guarantee against attacks, since users of these systems may regularly switch between connected and non-connected systems and use some form of removable media to transfer files back and forth. There is precedent for this with the Stuxnet virus. According to reports, the Iranian nuclear facility was, in fact, air gapped. However, Stuxnet was designed to replicate onto USB drives and other media. Plug an infected USB drive into a targeted SCADA system and poof, instant infection on the far side of the air gap.

So what can be done here? How do we keep our infrastructure safe from attackers? Yes, even aging attackers…

Personally, I believe this comes down, again, to Defense in Depth. Short of not building these systems in the first place, I don’t believe there is a way to prevent attacks entirely, and any determined attacker will eventually get in, given time. So the only way to defend is to build a layered defense grid with a full monitoring back end. Expect that attackers will make it through one or two layers before being detected. Determined attackers may make it even further. But if you build your defenses with this in mind, you will stand a better chance of detecting and repelling these attacks.

I don’t believe that air gapping systems is a viable security strategy. If anything, it can result in a false sense of security for users and administrators. After all, if the system isn’t connected, how can it possibly be infected? Instead, start building in security from the start and deploy your defense in monitored layers. It works.

Whose Problem Is It Anyway?

This week, Adobe released a security patch for their CS5 product line. While Adobe releasing security patches isn’t really that surprising given their track record with vulnerable products, what is somewhat surprising are the circumstances surrounding the patch. Adobe released the patch somewhat reluctantly.

Sometime in May, possibly earlier, Adobe was made aware of a fairly severe security vulnerability in their CS5 product line. A specially crafted image file was enough to compromise the victim’s computer. Obviously this is a pretty severe flaw and should be fixed ASAP, right? Well, Adobe didn’t really see it that way. Their initial response to the problem was that users who wanted a fixed version would have to pay to upgrade to the CS6 product line, in which the flaw was patched. Eventually they decided to backport the patch to the CS5 version.

Adobe’s initial response and their eventual capitulation leads to a broader discussion. Given any security problem, or even any bug in general, who is responsible for fixing it? The vendor, of course, right? Well… Maybe?

In a perfect world, there would be no bugs, security or otherwise. In a slightly less perfect world, all bugs would be resolved before a product is retired. But neither world exists, and bugs prevail. So, given that, whose problem is it anyway?

Vendors offer a lot of justifications for when they’ll patch and how long they’ll support something, and, of course, plenty of excuses. It’s not an easy problem for vendors, though, and some put a lot of thought into their policies. They don’t always get them right, and there’s never a way to make everyone happy.

Patching generally follows a product lifecycle. While the product is supported, patching happens as a normal course of business. When a product is retired, some companies put together a structured support plan. For instance, when Cisco announces that a product has entered the End-of-Life cycle, they lay out a multi-year plan for support. Typically this involves regular software maintenance for a year, security releases for 2-3 years, and then hardware maintenance for the remainder. This gives businesses ample time to find a suitable replacement.

Unfortunately, not all vendors act responsibly and often customers are left high and dry when a product is suddenly obsoleted. Depending on the vendor, this sometimes leads to discussions about the possibility of legislation forcing vendors to support products, or to at least address security vulnerabilities. If something like this were to pass, where does it end? Are vendors forced to support products forever? Should they only have to fix severe security problems? And what constitutes a severe security problem?

There are a multitude of reasons that bugs, security or otherwise, are not dealt with. Some are justifiable, others not. Working in networking, the primary excuse I’ve heard from hardware vendors over the years is that the management interface of their product is not intended to be on a public network where it can be attacked, or that the management interface should be put behind a firewall where it can’t be attacked. These excuses are garbage, of course, but some vendors just continue to give them. And, unfortunately, you’re not always in a position to drop a vendor and move elsewhere. So, we do what we can to secure the systems and move on.

And sometimes the problem isn’t the vendor, but the customer. How long has it been since Microsoft phased out older versions of its Windows operating system? Windows XP is relatively recent, but it’s been a number of years since Windows 2000 was phased out. Or how about Windows 98, 95, and even Windows NT? And customers still have these deployed in their networks. Hell, I know of at least one OS/2 Warp system that’s still deployed in a Telco Central Office!

There is a basis for some regulation, however, and it may affect vendors. When the security of a particular product can significantly impact the public, it can be argued that regulation is necessary. The poster child for this argument is SCADA systems, which seem to be perpetually riddled with security holes, mostly due to outdated operating systems.

SCADA systems typically control physical infrastructure such as the electrical grid and nuclear power plants. For obvious reasons, security problems with these systems are deadly serious. I often hear that these systems should be air gapped from the Internet, but the lure of easy access and control often pushes users to ignore this advice.

So should SCADA systems be regulated? It’s obvious that the regulations in place already for the industries they are used in aren’t working, so what makes us think that more regulation will help? And if we regulate and force vendors to provide patches for security problems, what makes us think that industries will install them?

This is a complex problem and there are no easy answers. The best we can hope for is a competent administrator who knows how to handle security and deal with threats properly. Until then, let’s hope for incompetent criminals.

Protecting Sources in the 21st Century

Trust is key in many situations. This can be especially true for journalists interested in reporting on sensitive matters. If journalists couldn’t be trusted to protect the identity of their confidential sources, many news items we take for granted would never have been written, or perhaps they wouldn’t have included some of the crucial information they revealed. For instance, much of the critical information about the Watergate scandal was given to reporters by a confidential source who went by the name of Deep Throat.

Until recently, reporters made contact with their sources via anonymous phone calls, often from pay phones, secret meetings, and dead drops. The identity of sources could be kept secret fairly easily, especially if the meetings were carefully conducted in such a manner as to leave little or no trail for anyone to follow. This meant avoiding traceable phones such as home and office lines. Additionally, many journalists were willing to risk jail time instead of revealing their sources.

With the advent of the Internet, it became possible to contact sources, both local and distant, quickly and conveniently via email or some form of instant messaging. The ability to reach out to a source and get an almost immediate answer means journalists can quickly deal with rapidly evolving stories. The anonymity of the Internet means that sources stay anonymous. It’s a win-win situation.

Or is it…

I was listening to an On The Media podcast recently and they featured a story about how reporters using the Internet are, in some cases, exposing their contacts without meaning to, often without even knowing it. You can listen to the story or read the transcript on the On The Media site.

Before the Internet, phone conversations were sometimes considered an acceptable risk for contacting sources. After all, tracing a phone call generally took a court order to accomplish. The Internet, however, is a completely different beast. Depending on the communications software used, tracing the owner of an account can be accomplished very easily by just about anyone. Software such as Netglub or Maltego can be used to quickly gather intel on someone, starting with something as small and simple as a single email address.

Email accounts are generally accessible from anywhere in the world, protected by only a username and password. Brute forcing software can be used to crack a password in a relatively short time, allowing someone direct access to the mail stored in the account. And if the mail is sent in clear text, someone trying to identify the source can easily read email sent between the reporter and their source without anyone being the wiser.
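
Some back-of-the-envelope arithmetic shows why “a relatively short time” is no exaggeration. The guess rate below is an assumption for illustration only; real rates vary enormously with the hashing scheme and hardware involved.

```python
# Rough time-to-exhaust estimates for an offline password-guessing attack,
# assuming one billion guesses per second (an illustrative figure only).
GUESSES_PER_SEC = 1e9

candidates = [
    ("8 lowercase letters", 26, 8),
    ("8 mixed-case letters and digits", 62, 8),
    ("12 mixed-case letters and digits", 62, 12),
]

for description, alphabet, length in candidates:
    keyspace = alphabet ** length
    days = keyspace / GUESSES_PER_SEC / 86400
    print(f"{description}: ~{days:,.2f} days to exhaust the keyspace")
```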

Other accounts can be similarly attacked. The end result of identifying the source can be mere embarrassment, or perhaps the source losing their job. Or, as is often the case when foreign news sources are involved, the source can be hunted down and killed.

For a reporter, protecting a source has always been important, but in some cases, it’s a matter of life and death. In the past few years, unrest overseas in places such as Iran, Egypt, Syria, and others has shown that secure communication methods are necessary to help save the lives of those fighting for change. Governments have been ruthless in hunting down and eliminating those who would oppose them. Using secure methods for communication have become lifelines for opposition forces. Likewise, reporters and anyone else who interacts with these sorts of contacts should also be using whatever methods of security they can to ensure that their sources are protected.
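
For email specifically, end-to-end encryption is the obvious baseline. As a sketch, here’s what encrypting a message to a source’s public key might look like with the python-gnupg wrapper; it assumes the source’s key is already in the local keyring, and the address is a placeholder.

```python
# Encrypting a message to a source's PGP key via python-gnupg
# (pip install python-gnupg). Assumes the recipient's public key
# has already been imported; the address below is hypothetical.
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

message = "Draft attached. Can you confirm the dates?"
encrypted = gpg.encrypt(message, "source@example.org")

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to send over email
else:
    print("Encryption failed:", encrypted.status)
```

Keep in mind that encryption protects the content, not the metadata; who talked to whom, and when, is still visible to anyone watching the mail servers.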

Towards Building More Secure Networks

It is no surprise that security is at the forefront of everyone’s minds these days. From high-profile breaches to script kiddies wreaking havoc across the Internet, it is obvious that there are some weaknesses that need to be addressed.

In most cases, complete network redesigns are out of the question. They can be extremely invasive and costly. However, it may be possible to augment the existing network in such a manner as to add additional layers of security. Doing so may also open the door to making even more changes down the road.

So what do I mean by this? Allow me to explain…

Many networks are fairly simple, with only a few subnets: typically a user subnet and a server subnet. Sometimes there’s a bit of complexity on the user side, creating subnets per department, or subnets per building. Often this has more to do with manageability of users than with security. Regardless, it’s a good practice that can be used to make a network more secure in the long run.

What is often neglected is the server side of things. Typically, there are one, maybe two subnets. Outside users are granted access to the standard web ports. Sometimes more ports, such as ssh and ftp, are opened for a variety of reasons. What administrators don’t realize, or don’t intend, is that they’re allowing outsiders direct access to their core servers without any sort of security in front of them. Sure, sure, there might be a firewall, but a firewall is there to ensure you only come in on the proper ports, right? If your traffic is destined for port 80, it doesn’t matter whether it’s malicious or not, the firewall lets it through anyway.

But what’s the alternative? What can be done instead? Well, what about sending outside traffic to a separate network where the systems being accessed are less critical, and designed to verify traffic before passing it on to your core servers? What I’m talking about is creating a DMZ network and forcing all users through a proxy. Even a simple proxy can help to prevent many attacks by merely dropping illegal traffic and not letting it through to the core server. Proxies can also be heavily fortified with HIDS and other security software designed to look for suspicious traffic and block it.
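
As a toy illustration of the idea, not a production proxy, here’s a sketch of a reverse proxy that refuses anything that doesn’t look like a legitimate request before forwarding it to a backend. The backend address and the validation rules are assumptions; a real deployment would use a hardened proxy such as Squid, nginx, or HAProxy.

```python
# A toy validating reverse proxy: drop illegal-looking requests before
# they ever reach the core web server. Backend address is hypothetical.
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND = "http://10.0.2.10:8080"                # hypothetical core server
SAFE_PATH = re.compile(r"^/[A-Za-z0-9/_.\-]*$")  # conservative path whitelist

class ValidatingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reject anything outside the whitelist instead of passing it on.
        if len(self.path) > 1024 or not SAFE_PATH.match(self.path):
            self.send_error(400, "Illegal request")
            return
        try:
            with urllib.request.urlopen(BACKEND + self.path, timeout=5) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        except Exception:
            self.send_error(502, "Bad gateway")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ValidatingProxy).serve_forever()
```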

By adding in this DMZ layer, you’ve put a barrier between your server core and the outside world. This is known as layered defense. You can add additional layers as time and resources allow. For instance, I recommend segmenting away database servers as well as identity management servers. Adding this additional segmentation can be done over time as new servers come online and old servers are retired. The end goal is to add this additional security without disrupting the network as a whole.

If you have the luxury of building a new network from the ground up, however, make sure you build this in from the start. There is, of course, a breaking point. It makes sense to create networks to segregate servers by security level, but it doesn’t make sense to segregate purely to segregate. For instance, you may segregate database and identity management servers away from the rest of the servers, but segregating Oracle servers away from MySQL servers may not add much additional security. There are exceptions, but I suggest you think long and hard before you make such an exception. Are you sure that the additional management overhead is worth the security? There’s always a cost/benefit analysis to perform.

Segregating networks is just the beginning. The purpose here is to enhance security. By segregating networks, you can significantly reduce the number of clients that need to access a particular server. The whole world may need to access your proxy servers, but only your proxy servers need to access the actual web application servers. Likewise, only your web application servers need access to your database servers. Using this information, you can tighten down your firewall. But remember, a firewall is just a wall with holes in it. The purpose is to deflect random attacks, but it does little to nothing to prevent attacks on ports you’ve opened. For that, there are other tools.
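
The resulting policy is easy to state as a default-deny matrix: each tier may only be reached from the tier directly in front of it. Here’s that model sketched as code; the zone names and ports are illustrative, not a prescription.

```python
# The tiered access model expressed as data. A flow is allowed only if
# it is explicitly listed; everything else is denied by default.
ALLOWED_FLOWS = {
    ("internet", "proxy"):  {80, 443},
    ("proxy", "web"):       {8080},
    ("web", "database"):    {3306},
}

def is_permitted(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_permitted("internet", "proxy", 443))      # True
print(is_permitted("internet", "database", 3306))  # False: must traverse tiers
```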

At the very edge, simplistic firewalling and generally loose HIDS can be used to deflect most attacks. As you move further within the network, additional security can be used. For instance, deploying an IPS at the very edge of the network can result in the IPS being quickly overwhelmed. Of course, you can buy a bigger, better IPS, but to what end? Instead, you can move the IPS further into the network, placing it where it will be more effective. If you place it between the proxy and the web server, you’ve already ensured that the only traffic hitting the IPS is loosely validated HTTP traffic. With this knowledge, you can reduce the number of signatures the IPS needs to have, concentrating on high quality HTTP signatures. Likewise, an IPS between the web servers and database servers can be configured with high quality database signatures. You can, in general, direct the IPS to block any and all traffic that falls outside of those parameters.

As the adage goes, there is no silver bullet for security. Instead, you need to use every weapon in your arsenal and put together a solid defense. By combining all of these techniques, you can defend against many attacks. But remember, there’s always a way in. You will not be able to stop the most determined attacker; you can only hope to slow him down enough to limit his access. And remember, securing your network is only one aspect of security. Don’t forget about the other low-hanging fruit such as SQL injection, cross-site scripting, and other common application holes. You may have the most secure network in existence, but a simple SQL injection attack can result in a massive data breach.

Bringing Social To The Kernel

Imagine a world where you can log in to your computer once and have full access to all of the functionality on your computer, plus seamless access to all of the web sites you visit on a daily basis. No more logging into each site individually; your computer’s operating system takes care of that for you.

That world may be coming quicker than you realize. I was listening to a recent episode of the PaulDotCom security podcast today. In this episode, they interviewed Jason Fossen, a SANS Security Faculty Fellow and instructor for SEC 505: Securing Windows. During the conversation, Jason mentioned some of the changes coming to the next version of Microsoft’s flagship operating system, Windows 8. What he described was, in a word, horrifying…

Not much information is out there about these changes yet, but it’s possible to piece together some of it. Jason mentioned that Windows 8 will have a broker system for passwords. Basically, Windows will store all of the passwords necessary to access all of the various services you interact with. Think something along the lines of 1Password or LastPass. The main difference being, this happens in the background with minimal interaction with the user. In other words, you never have to explicitly login to anything beyond your local Windows workstation.

Initially, Microsoft won’t have support for all of the various login systems out there. They seem to be focusing on their own service, Windows Live, and possibly Facebook. But the API is open, allowing third-parties to provide the necessary hooks to their own systems.

I’ve spent some time searching for more information and what I’m finding seems to indicate that what Jason was talking about is, in fact, the plan moving forward. TechRadar has a story about the Windows 8 Credential Vault, where website passwords are stored. The credential vault appears to be a direct competitor to 1Password and LastPass. As with other technologies that Microsoft has integrated in the past, this may be the death knell for password managers.

ReadWriteWeb has a story about the Windows Azure Access Control Service that is being used for Windows 8. Interestingly, this article seems to indicate that passwords won’t be stored on the Windows 8 system itself, but in a centralized “cloud” system. A system called the Access Control Service, or ACS, will store all of the actual login information, and the Windows 8 Password Broker will obtain tokens that are used for logins. This allows users to access their data from different systems, including tablets and phones, and retain full access to all of their login information.

Microsoft is positioning Azure ACS as a complete claims-based identity system. In short, this allows ACS to become a one-stop shop for single sign-on. I log into Windows and immediately have access to all of my accounts across the Internet.

Sounds great, right? In one respect, it is. But if you think about it, you’re making things REALLY easy for attackers. Now they can, with a single login and password, access every system you have access to. It doesn’t matter that you’ve used different usernames and passwords for your bank accounts. It doesn’t matter that you’ve used longer, more secure passwords for those sensitive sites. Once an attacker gains a foothold on your machine, it’s game over.

Jason also mentioned another chilling detail. You’ll be able to log in to your local system using your Windows Live ID. So, apparently, if you forget the password for your local user, just log in with your Windows Live ID. It’s all tied together. According to the TechRadar story, “if you forget your Windows password you can reset it from another PC using your Windows Live ID, so you don’t need to make a password restore USB stick any more.” They go on to say the following:

You’ll also have to prove your identity before you can ‘trust’ the PC you sync them to, by giving Windows Live a second email address or a mobile number it can text a security code to, so anyone who gets your Live ID password doesn’t get all your other passwords too – Windows 8 will make you set that up the first time you use your Live ID on a PC.

You can always sign in to your Windows account, even if you can’t get online – or if there’s a problem with your Live ID – because Windows 8 remembers the last password you signed in with successfully (again, that’s encrypted in the Password Vault).

With this additional tidbit of information, it would appear that an especially crafty attacker could even go as far as compromising your entire system, without actually touching your local machine. It may not be easy, but it looks like it’ll be significantly easier than it was before.

Federated identity is an interesting concept, and it definitely has its place. But I don’t think tying everything together in this manner is a good move for security. Sure, you can already use your Facebook ID (or Twitter, Google, OpenID, etc.) as a single login for many disparate sites. In fact, these companies are betting on you to do so. This ties all of your activity back to one central place where the data can be mined for useful and lucrative bits. And perhaps in the realm of a social network, that’s what you want. But there’s a limit to how wide a net you want to cast. If what Jason says is true, Microsoft may be building the equivalent of the One Ring. ACS will store them all, ACS will verify them, ACS will authenticate them all, and to the ether supply them.

The Zero-Day Conundrum

Last week, another “zero-day” vulnerability was reported, this time in Adobe’s Acrobat PDF reader. Anti-virus company Symantec reports that this vulnerability is being used as an attack vector against defense contractors, chemical companies, and others. Obviously, this is a big deal for all those being targeted, but is it really something you need to worry about? Are “zero-days” really something worth defending against?

What is a zero-day anyway? Wikipedia has this to say:

A zero-day (or zero-hour or day zero) attack or threat is a computer threat that tries to exploit computer application vulnerabilities that are unknown to others or the software developer. Zero-day exploits (actual software that uses a security hole to carry out an attack) are used or shared by attackers before the developer of the target software knows about the vulnerability.

So, in short, a zero-day is an unknown vulnerability in a piece of software. Now, how do we defend against this? We have all sorts of tools on our side, surely there’s one that will catch these before they become a problem, right? IDS/IPS systems have heuristic filters for detecting anomalous activity. Of course, you wouldn’t want your IPS blocking arbitrary traffic, so that might not be a good idea. Anti-virus software also has heuristic filters, so that should help, right? Well… When’s the last time your heuristic filter caught something that wasn’t a false positive? So yeah, that’s probably not going to work either. So what’s a security engineer to do?

My advice? Don’t sweat it. Don’t get me wrong, zero-days are dangerous and can cause all sorts of problems, but unless you have an unlimited budget and an unlimited amount of time, trying to defend against an unknown attack is an exercise in futility. But don’t despair, there is hope.

It turns out that if you spend your time securing your network properly, you’ll defend against most of the attacks out there. Let’s look at this latest attack, for instance. Let’s assume you’ve spent millions and have the latest and greatest hardware with all the cutting-edge signatures and software. Someone sends the CEO’s secretary an innocuous-looking PDF, which she promptly opens, and all that hard work goes out the window.

On the other hand, let’s assume you spent your small budget defending the critical data you store, and spent the time you saved on training the staff rather than decoding advanced heuristics manuals. This time the CEO’s secretary looks twice, realizes this is an unsolicited email, and doesn’t open the PDF. No breach; the world is saved.

Seriously, though, spending your time and effort safeguarding your data and training your staff will get you much further than worrying about every zero-day that comes along. Of course, you should be watching for these sorts of reports. In this case, for instance, you can alert your staff that there’s a critical flaw in this particular software and that they need to be extra careful. Or, if the flaw is in a web application, you can add the necessary signatures to look for it. But in the end, it’s very difficult, if not impossible, to defend against something you’re not aware of. Network and system security is complex and difficult enough without having to worry about the unknown.

Reflections on DerbyCon

On September 30th, 2011, over 1000 people from a variety of backgrounds descended on Louisville, Kentucky to attend the first DerbyCon. DerbyCon is a security conference put together by three security professionals, Dave Kennedy, Martin Bos, and Adrian Crenshaw. Along with a sizable crew of security and administrative staff, they hosted an absolutely amazing conference.

During the three-day conference, DerbyCon sported amazing speakers such as Kevin Mitnick, HD Moore, Chris Nickerson, and others. Talks covered topics such as physical penetration testing, lock picking, and network defense techniques. There were training sessions covering Physical Penetration, Metasploit, Social Engineering, and more. A lock pick village was available where you could both learn and show off your skills, as well as a hardware village where you could learn how to solder, among other things. And, of course, there were late-night parties.

For me, this was my first official security conference. By all accounts, I couldn’t have chosen a better conference. All around me I heard unanimous praise for the conference, how it was planned, and how it was run. There were a few snafus here and there, but really nothing worth griping about.

The presentations I was able to attend were incredible, and I came home with a ton of knowledge and new ideas. During the closing of the conference, Dave mentioned some ideas for next year’s conference, such as a newbie track. This has inspired me to think about possibly presenting at next year’s conference. I have an idea already, something I’ve started working on. If all goes well, I’ll have something to present.

DerbyCon was definitely one of the highlights of my year. I’m already eager to return next year.

Audit Insanity

<RANT>

It’s amazing, but the deeper I dive into security, the more garbage security theater I uncover. Sure, there’s insanity everywhere, but I didn’t expect to come across some of this craziness…

One of the most recent activities I’ve been party to has been the response to an independent audit. When I inquired as to the reasoning behind the audit, the answer I received was that this is a recommended yearly activity. It’s possible that this information is incorrect, but I suspect it’s truer than I’d like to believe.

Security audits like this are standard practice all over the US and possibly the world. Businesses are led to believe that getting audited is a good thing and that audits should be repeated often. My main gripe here is that while audits can be good, they need to be done for the right reasons, not just because someone tells you they’re needed. Worse still are the audits forced on a company by their insurance company or their payment processor. These sorts of audits are there to pass the blame if something bad happens.

Let’s look a little deeper. The audit I participated in was a typical security audit. An auditor contacts you with a spreadsheet full of questions for you to answer. You will, of course, answer them truthfully. Questions included inquiries about the password policy, how security policies are distributed, and how logins are handled. They delve into areas such as logging, application timeouts, IDS/IPS use, and more. It’s fairly in-depth, but ultimately just a checklist. The auditor goes through their list, interpreting your answers, and applying checkmarks where appropriate. The auditor then generates a list of items you “failed” to comply with and you have a chance to respond. This is all incorporated into a final report which is presented to whoever requested the audit.

Some audits will include a scanning piece as well. The one I’m most familiar with in this aspect is the SecurityMetrics PCI scan. Basically, you fill out a simplified yes/no questionnaire about your security and then they run a Nessus scan against whatever IP(s) you provide to them. It’s a completely brain-dead scan, too. Here’s a perfect example. I worked for a company that processed credit cards. The system they used to do this was on a private network using outbound NAT. There were both IDS and firewall systems in place. For the size of the business and the frequency of credit card transactions, this was considerable security. But, because there was a payment card processor in the mix, they were required to perform a quarterly PCI scan. The vendor of choice: SecurityMetrics.

So, the security vendor went through their checklist and requested the IP of the server. I explained that it was behind a one-way NAT and inaccessible from the outside world. They wanted the IP of the machine, which I provided to them: 10.10.10.1. Did I mention that the host in question was behind a NAT? These “security professionals” then loaded that IP into their automated scanning system. And it failed to contact the host. Go figure. Again, we went around and around until they finally said that they needed the IP of the device doing the NAT. I explained that this was a router and wouldn’t provide them with any relevant information. The answer? We don’t care, we just need something to scan. So, they scanned a router. For years. Hell, they could still be doing it for all I know. Like I said, brain-dead security.

What’s wrong with a checklist, though? The problem is, it’s a list of “common” security practices not tailored to any specific company. So, for instance, the audit may require that a company uses hardware-based authentication devices in addition to standard passwords. The problem here is that this doesn’t account for non-hardware solutions. The premise here is that two-factor authentication is more secure than just a username and password. Sure, I whole-heartedly agree. But, I would argue that public key authentication provides similar security. It satisfies the “What You Have” and “What You Know” portions of two-factor authentication. But it’s not hardware! Fine, put your key on a USB stick. (No, really, don’t. That’s not very secure.)

Other examples include the standard “Password Policy” crap that I’ve been hearing for years. Basically, you should expire passwords every 90 days or so, passwords should be “strong”, and you should prevent password reuse by remembering a history of passwords. So let’s look at this a bit. Forcing password changes every 90 days results in bad password habits. The reasoning is quite simple, and there have been studies that show this. This paper (pdf) from the University of North Carolina is a good example. Another decent write-up is this article from Cryptosmith. Allow me to summarize. Forcing password expiration results in people making simpler passwords, writing passwords down, or using simplistic algorithms to generate “complex” passwords. In short, cracking these “fresh” passwords is often easier than cracking well-thought-out ones.

The so-called “strong” password problem can be summarized by a rather clever XKCD comic. The long and short here is that the passwords these rules produce are hard for humans to remember but easier for computers to guess than they look. A passphrase of random common words carries more entropy than the typical “complex” password built from a word with letter substitutions and a tacked-on digit. Seriously, “correct horse battery staple” is significantly stronger than “Tr0ub4dor&3”.
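
The arithmetic behind the comic is easy to reproduce. Here’s a quick sketch using the comic’s illustrative numbers: a 2048-word list, 95 printable ASCII characters, and XKCD’s rough 28-bit estimate for the word-plus-substitutions pattern.

```python
# Entropy comparison behind the XKCD argument. Numbers are illustrative:
# a 2048-word list for passphrases, 95 printable ASCII characters.
import math

def bits(choices: int, picks: int) -> float:
    """Entropy of `picks` independent selections from `choices` options."""
    return picks * math.log2(choices)

print(f"4 random common words:     {bits(2048, 4):.0f} bits")  # ~44 bits
print("word + 1337 substitutions: ~28 bits (XKCD's estimate)")
print(f"11 truly random chars:     {bits(95, 11):.0f} bits")   # ~72 bits
```

A truly random character string still wins on raw entropy; the passphrase wins on the metric that matters in practice, entropy per unit of memorability.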

And, of course, password history. This sort of goes hand-in-hand with password expiration, but not always. If it’s used in conjunction with password expiration, then it generally results in single-character variations of passwords. Your super-secure “complex” password of “Password1” (seriously, it meets the criteria: uppercase, lowercase, number) becomes a series of passwords where the 1 is changed to a 2, then 3, then 4, and so on until the history is exceeded and the user can return to 1 again. It’s easier to remember that way, and the user doesn’t have to do much extra work.

So even the standard security practices on the checklist can be questioned. The real answer here is to tweak each audit to the needs of the requestor of the audit, and to properly evaluate the responses based on the security posture of the responder. There do need to be baselines, but they should be sane baselines. If you don’t get all of the checkmarks on an audit, it may not mean you’re not secure, it may just mean you’re securing your network in a way the auditor didn’t think of. There’s more to security than fancy passwords and firewalls. A lot more.

</RANT>

WoO Day 7 : Tidbits

And so a week of fun comes to an end. But before we part company, there are a few more items to discuss. There are portions of OSSEC that are often overlooked because they require little or no configuration, and they “just work.” Other features are either more involved, or require external products. Let’s take a look at a few of these.

Rootkit Detection

First up, rootkit detection. OSSEC ships with a set of rules and “signatures” designed to detect common rootkits. The default OSSEC installation uses two files, rootkit_files.txt and rootkit_trojans.txt, as the base of the rootkit detection.

The rootkit_files.txt file contains a list of files commonly found with rootkit infections. Using various system calls, OSSEC tries to detect if any of these files are installed on the machine and sends an alert if they are found. Multiple system calls are used because some rootkits hide themselves by altering system binaries and sometimes by altering system calls.

The rootkit_trojans.txt file contains a list of commonly trojaned files as well as patterns found in those files when they have been compromised. OSSEC will scan each file and compare it to the list of patterns. If a match is found, an alert is sent to the administrator.

There are also additional rootkit files shipped with OSSEC. For Windows clients there are three files containing signatures for common malware, signatures to detect commonly banned software packages, and signatures for checking Windows policy settings. On the Linux side are a number of files for auditing system security and adhering to CIS policy. CIS policy auditing will be covered later.

Rootkit detection also goes beyond these signature-based methods. Other detection methods include scanning /dev for unusual entries, searching for hidden processes, and searching for hidden ports. Rootkit scanning is pretty in-depth and more information can be found in the OSSEC Manual.
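
For reference, the relevant ossec.conf block looks something like the sketch below. The paths are the usual defaults, but check your own install; the frequency value is just an example.

```xml
<!-- Sketch of a rootcheck block in ossec.conf. Paths are the common
     defaults; the scan frequency (in seconds) is an example value. -->
<ossec_config>
  <rootcheck>
    <rootkit_files>/var/ossec/etc/shared/rootkit_files.txt</rootkit_files>
    <rootkit_trojans>/var/ossec/etc/shared/rootkit_trojans.txt</rootkit_trojans>
    <frequency>43200</frequency>
  </rootcheck>
</ossec_config>
```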

CIS Auditing

CIS, the Center for Internet Security, publishes a benchmark tool for auditing system security on various operating systems. OSSEC can assist with compliance to the CIS guidelines by monitoring systems for non-conformity and alerting as necessary. Shipped by default with OSSEC are three CIS-based signature sets for Red Hat Linux and Debian. Creating new tests is fairly straightforward, and the existing tests can be adapted as needed.
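
Enabling the CIS checks is a matter of pointing rootcheck at the policy files. A sketch, assuming the stock filenames shipped in etc/shared (verify the names on your version):

```xml
<!-- Sketch: enable CIS policy auditing via rootcheck's system_audit
     entries. Filenames are the stock ones; verify against your install. -->
<ossec_config>
  <rootcheck>
    <system_audit>/var/ossec/etc/shared/cis_rhel5_linux_rcl.txt</system_audit>
    <system_audit>/var/ossec/etc/shared/cis_debian_linux_rcl.txt</system_audit>
  </rootcheck>
</ossec_config>
```

The policy files themselves are plain text, one check per entry, which is why adapting them or writing new tests is straightforward.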

OSSEC Splunk

One thing that OSSEC lacks is an easy way to pore over the OSSEC logs, get visual information on alerts, and so on. There was a project, the OSSEC-WUI, that aimed to resolve this, but that project has mostly died. Last I heard, there were no plans to revive it.

There is an alternative, however. A commercial product, Splunk, can handle the heavy lifting for you. Yes, yes, Splunk is commercial. But, good news! They have a free version that can do the same thing on a smaller scale, without all of the extra shiny. There is also a plugin for Splunk specifically designed to handle OSSEC. It’s worth checking out; you can find it over at Splunkbase.

Alert Logging

And finally, alert logging. Because OSSEC is tightly secured, it can sometimes be a challenge to deal with alert logs. For instance, what if you want to put the logs in an alternate location outside of /var/ossec? There are alternatives, though. For non-application-specific output you can use syslog or database output. OSSEC also supports output to Prelude, and beta support exists for PicViz. I believe you can use multiple output methods if you desire, though I’d have to test that to be sure.

The configuration for syslog output is very straightforward. You can define both the destination syslog server as well as the level of the alerts to send. Syslog output is typically what you would use in conjunction with Splunk.
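
A minimal syslog output block might look like the following sketch; the collector address is a placeholder, and alerts at or above the given level are forwarded. Note that on most installs the forwarder also has to be switched on with ossec-control enable client-syslog.

```xml
<!-- Sketch of a syslog_output block in ossec.conf. The collector
     address is hypothetical; alerts of level 7 and above are sent. -->
<ossec_config>
  <syslog_output>
    <server>10.0.0.5</server>
    <port>514</port>
    <level>7</level>
  </syslog_output>
</ossec_config>
```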

Database configuration is a bit more in-depth and requires that OSSEC be compiled with certain options enabled. Both MySQL and PostgreSQL are supported. The database configuration block in the ossec.conf file contains all of the options you would expect: database name, username and password, and hostname of the database server. Additionally, you need to specify the database type, MySQL or PostgreSQL.
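
The block itself is straightforward; a sketch follows, with all values hypothetical. Remember that database support has to be compiled in first, typically by running make setdb in the src directory before installing, and the output enabled with ossec-control enable database.

```xml
<!-- Sketch of a database_output block in ossec.conf. Hostname and
     credentials are placeholders; <type> is mysql or postgresql. -->
<ossec_config>
  <database_output>
    <hostname>10.0.0.6</hostname>
    <username>ossec</username>
    <password>changeme</password>
    <database>ossec</database>
    <type>mysql</type>
  </database_output>
</ossec_config>
```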

Prelude and PicViz support have their own specific configuration parameters. More information on this support can be found in the OSSEC Manual.

Final Thoughts

OSSEC is an incredible security product. I still haven’t reached the limits of what it can do, and I’ve been learning new techniques for using it all week. Hopefully the information provided here over the last 7 days proves helpful to you. There’s a lot more information out there, though, and an excellent place to start is the OSSEC home page. There’s also the OSSEC mailing list, where you can find a great deal of information as well as a number of very knowledgeable, helpful users.

The best way to get started is to get a copy of OSSEC installed and start playing. Dive right in, the water’s fine.