Bringing Social To The Kernel

Imagine a world where you can log in to your computer once and have full access to all of the functionality on your computer, plus seamless access to all of the web sites you visit on a daily basis. No more logging in to each site individually; your computer’s operating system takes care of that for you.

That world may be coming sooner than you realize. I was listening to a recent episode of the PaulDotCom security podcast today. In this episode, they interviewed Jason Fossen, a SANS Security Faculty Fellow and instructor for SEC 505: Securing Windows. During the conversation, Jason mentioned some of the changes coming in the next version of Microsoft’s flagship operating system, Windows 8. What he described was, in a word, horrifying…

Not much information is out there about these changes yet, but it’s possible to piece some of it together. Jason mentioned that Windows 8 will have a broker system for passwords. Basically, Windows will store all of the passwords necessary to access the various services you interact with. Think something along the lines of 1Password or LastPass. The main difference is that this happens in the background, with minimal interaction from the user. In other words, you never have to explicitly log in to anything beyond your local Windows workstation.

Initially, Microsoft won’t support all of the various login systems out there. They seem to be focusing on their own service, Windows Live, and possibly Facebook. But the API is open, allowing third parties to provide the necessary hooks into their own systems.

I’ve spent some time searching for more information, and what I’ve found seems to indicate that what Jason was talking about is, in fact, the plan moving forward. TechRadar has a story about the Windows 8 Credential Vault, where website passwords are stored. The Credential Vault appears to be a direct competitor to 1Password and LastPass. As with other technologies that Microsoft has integrated in the past, this may be the death knell for standalone password managers.

ReadWriteWeb has a story about the Windows Azure Access Control Service that is being used for Windows 8. Interestingly, this article seems to indicate that passwords won’t be stored on the Windows 8 system itself, but in a centralized “cloud” system. A system called the Access Control Service, or ACS, will store all of the actual login information, and the Windows 8 Password Broker will obtain tokens that are used for logins. This allows users to access their data from different systems, including tablets and phones, and retain full access to all of their login information.

Microsoft is positioning Azure ACS as a complete claims-based identity system. In short, this allows ACS to become a one-stop shop for single sign-on. I log into Windows and immediately have access to all of my accounts across the Internet.

Sounds great, right? In one respect, it is. But if you think about it, you’re making things REALLY easy for attackers. Now they can, with a single login and password, access every system you have access to. It doesn’t matter that you’ve used different usernames and passwords for your bank accounts. It doesn’t matter that you’ve used longer, more secure passwords for those sensitive sites. Once an attacker gains a foothold on your machine, it’s game over.

Jason also mentioned another chilling detail. You’ll be able to log in to your local system using your Windows Live ID. So, apparently, if you forget the password for your local user, you can just log in with your Windows Live ID. It’s all tied together. According to the TechRadar story, “if you forget your Windows password you can reset it from another PC using your Windows Live ID, so you don’t need to make a password restore USB stick any more.” They go on to say the following:

You’ll also have to prove your identity before you can ‘trust’ the PC you sync them to, by giving Windows Live a second email address or a mobile number it can text a security code to, so anyone who gets your Live ID password doesn’t get all your other passwords too – Windows 8 will make you set that up the first time you use your Live ID on a PC.

You can always sign in to your Windows account, even if you can’t get online – or if there’s a problem with your Live ID – because Windows 8 remembers the last password you signed in with successfully (again, that’s encrypted in the Password Vault).

With this additional tidbit of information, it would appear that an especially crafty attacker could even go as far as compromising your entire system, without actually touching your local machine. It may not be easy, but it looks like it’ll be significantly easier than it was before.

Federated identity is an interesting concept, and it definitely has its place. But I don’t think tying everything together in this manner is a good move for security. Sure, you can already use your Facebook ID (or Twitter, Google, OpenID, etc.) as a single login for many disparate sites. In fact, these companies are betting on you doing so, since it ties all of your activity back to one central place where the data can be mined for useful and lucrative bits. Perhaps in the realm of a social network, that’s what you want, but I think there’s a limit to how wide a net you want to cast. If what Jason says is true, Microsoft may be building the equivalent of the One Ring. ACS will store them all, ACS will verify them, ACS will authenticate them all, and to the ether supply them.

The Zero-Day Conundrum

Last week, another “zero-day” vulnerability was reported, this time in Adobe’s Acrobat PDF reader. Anti-virus company Symantec reports that this vulnerability is being used as an attack vector against defense contractors, chemical companies, and others. Obviously, this is a big deal for all those being targeted, but is it really something you need to worry about? Are “zero-days” really something worth defending against?

What is a zero-day anyway? Wikipedia has this to say:

A zero-day (or zero-hour or day zero) attack or threat is a computer threat that tries to exploit computer application vulnerabilities that are unknown to others or the software developer. Zero-day exploits (actual software that uses a security hole to carry out an attack) are used or shared by attackers before the developer of the target software knows about the vulnerability.

So, in short, a zero-day is an unknown vulnerability in a piece of software. Now, how do we defend against this? We have all sorts of tools on our side; surely one of them will catch these before they become a problem, right? IDS/IPS systems have heuristic filters for detecting anomalous activity. Of course, you wouldn’t want your IPS blocking arbitrary traffic, so that might not be a good idea. Anti-virus software also has heuristic filters, so that should help, right? Well… when’s the last time your heuristic filter caught something that wasn’t a false positive? So that’s probably not going to work either. What’s a security engineer to do?

My advice? Don’t sweat it. Don’t get me wrong, zero-days are dangerous and can cause all sorts of problems, but unless you have an unlimited budget and an unlimited amount of time, trying to defend against an unknown attack is an exercise in futility. But don’t despair, there is hope.

It turns out that if you spend your time securing your network properly, you’ll defend against most of the attacks out there. Let’s look at this latest attack, for instance. Assume you’ve spent millions and have the latest and greatest hardware, with all the cutting-edge signatures and software. Then someone sends the CEO’s secretary a seemingly innocuous PDF, which she promptly opens, and all that hard work goes out the window.

On the other hand, assume you spent the small budget you have on defending the critical data you store, and spent the time you saved on training the staff instead of decoding those advanced heuristics manuals. This time the CEO’s secretary looks twice, realizes this is an unsolicited email, and doesn’t open the PDF. No breach; the world is saved.

Seriously, though, spending your time and effort safeguarding your data and training your staff will get you much further than worrying about every zero-day that comes along. Of course, you should still watch for these sorts of reports. In this case, for instance, you can alert your staff that there’s a critical flaw in this particular software and that they need to be extra careful. Or, if the flaw is in a web application, you can add the necessary signatures to look for it. But in the end, it’s very difficult, if not impossible, to defend against something you’re not aware of. Network and system security is complex and difficult enough without having to worry about the unknown.

Reflections on DerbyCon

On September 30th, 2011, over 1,000 people from a variety of backgrounds descended on Louisville, Kentucky, to attend the first DerbyCon. DerbyCon is a security conference put together by three security professionals: Dave Kennedy, Martin Bos, and Adrian Crenshaw. Along with a sizable crew of security and administrative staff, they hosted an absolutely amazing conference.

During the three-day conference, DerbyCon sported amazing speakers such as Kevin Mitnick, HD Moore, and Chris Nickerson. Talks covered topics such as physical penetration testing, lock picking, and network defense techniques. There were training sessions covering Physical Penetration, Metasploit, Social Engineering, and more. There was a lock pick village where you could both learn and show off your skills, as well as a hardware village where you could learn to solder, among other things. And, of course, there were late-night parties.

This was my first official security conference and, by all accounts, I couldn’t have chosen a better one. All around me I heard unanimous praise for the conference, how it was planned, and how it was run. There were a few snafus here and there, but really nothing worth griping about.

The presentations I was able to attend were incredible, and I came home with a ton of knowledge and new ideas. During the closing of the conference, Dave mentioned some ideas for next year’s conference, such as a newbie track. This has inspired me to think about possibly presenting next year. I have an idea already, something I’ve started working on. If all goes well, I’ll have something to present.

DerbyCon was definitely one of the highlights of my year. I’m already eager to return next year.

In Memoriam – Steve Jobs – 1955-2011

Somewhere in the early 1980s, my father took me to a bookstore in Manhattan. I don’t remember exactly why we were there, but it was a defining moment in my life. On display was a new wonder: a Macintosh computer.

Being young, I wasn’t aware of social protocol. I was supposed to be awed by this machine, afraid to touch it. Instead, as my father says, I pushed my way over, grabbed the mouse, and went to town. While all of the adults around me looked on in horror, I quickly figured out the interface and was able to make the machine do what I wanted.

It would be over 20 years before I really became a Mac user, but that first experience helped define my love of computers and technology.

Thank you, Steve.

Audit Insanity

<RANT>

It’s amazing, but the deeper I dive into security, the more garbage security theater I uncover. Sure, there’s insanity everywhere, but I didn’t expect to come across some of this craziness…

One of the most recent activities I’ve been party to has been the response to an independent audit. When I inquired about the reasoning behind the audit, the answer I received was that this is a recommended yearly activity. It’s possible that this information is incorrect, but I suspect it’s truer than I’d like to believe.

Security audits like this are standard practice all over the US, and possibly the world. Businesses are led to believe that getting audited is a good thing and that audits should be repeated often. My main gripe is that while audits can be good, they need to be done for the right reasons, not just because someone tells you they’re needed. Worse still are the audits forced on a company by its insurance company or its payment processor. Those sorts of audits exist to pass the blame if something bad happens.

Let’s look a little deeper. The audit I participated in was a typical security audit. An auditor contacts you with a spreadsheet full of questions for you to answer. You will, of course, answer them truthfully. The questions include inquiries about the password policy, how security policies are distributed, and how logins are handled. They delve into areas such as logging, application timeouts, IDS/IPS use, and more. It’s fairly in-depth, but ultimately just a checklist. The auditor goes through the list, interpreting your answers and applying checkmarks where appropriate. The auditor then generates a list of items you “failed” to comply with, and you have a chance to respond. This is all incorporated into a final report, which is presented to whoever requested the audit.

Some audits include a scanning piece as well. The one I’m most familiar with in this respect is the SecurityMetrics PCI scan. Basically, you fill out a simplified yes/no questionnaire about your security, and then they run a Nessus scan against whatever IP(s) you provide. It’s a completely brain-dead scan, too. Here’s a perfect example. I worked for a company that processed credit cards. The system they used to do this was on a private network using outbound NAT. There were both IDS and firewall systems in place. For the size of the business and the frequency of credit card transactions, this was considerable security. But, because there was a payment card processor in the mix, they were required to perform a quarterly PCI scan. The vendor of choice: SecurityMetrics.

So, the security vendor went through their checklist and requested the IP of the server. I explained that it was behind a one-way NAT and inaccessible from the outside world. They still wanted the IP of the machine, which I provided to them: 10.10.10.1. Did I mention that the host in question was behind a NAT? These “security professionals” then loaded that IP into their automated scanning system. And it failed to contact the host. Go figure. Again, we went around and around until they finally said that they needed the IP of the device doing the NAT. I explained that this was a router and wouldn’t provide them with any relevant information. The answer? We don’t care, we just need something to scan. So they scanned a router. For years. Hell, they could still be doing it for all I know. Like I said, brain-dead security.

What’s wrong with a checklist, though? The problem is, it’s a list of “common” security practices not tailored to any specific company. So, for instance, the audit may require that a company use hardware-based authentication devices in addition to standard passwords. The trouble is that this doesn’t account for non-hardware solutions. The premise is that two-factor authentication is more secure than just a username and password. Sure, I wholeheartedly agree. But I would argue that public key authentication provides similar security. It satisfies the “What You Have” and “What You Know” portions of two-factor authentication. But it’s not hardware! Fine, put your key on a USB stick. (No, really, don’t. That’s not very secure.)
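To make that concrete, here’s a minimal sketch (assuming OpenSSH; these are standard sshd_config directives, and the exact hardening you need will vary): a passphrase-protected SSH key covers both factors, since the key file is the “What You Have” and its passphrase is the “What You Know”.

PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no

The passphrase never crosses the wire; it only unlocks the private key on the client side, which is exactly the point.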

Other examples include the standard “Password Policy” crap that I’ve been hearing for years. Basically, you should expire passwords every 90 days or so, passwords should be “strong”, and you should prevent password reuse by remembering a history of previous passwords. So let’s look at this a bit. Forcing password changes every 90 days results in bad password habits. The reasoning is quite simple, and there have been studies that show this. This paper (pdf) from the University of North Carolina is a good example. Another decent write-up is this article from Cryptosmith. Allow me to summarize: forcing password expiration results in people choosing simpler passwords, writing passwords down, or using simplistic algorithms to generate “complex” passwords. In short, cracking these “fresh” passwords is often easier than cracking well-thought-out ones.

The so-called “strong” password problem can be summarized by a rather clever XKCD comic. The long and short of it is that passwords that cannot be easily cracked are either horribly complex mishmashes of numbers, letters, and symbols, or long strings of ordinary words. Seriously, “correct horse battery staple” is significantly stronger than a completely random 11-digit string of numbers.
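A rough back-of-the-envelope comparison shows why (a sketch in Python, assuming the 2048-word list used in the comic and a random string drawn from the digits 0-9):

import math

# Four words picked at random from a 2048-word list (as in the comic)
passphrase_bits = 4 * math.log2(2048)    # 44.0 bits

# Eleven digits picked at random from 0-9
digit_string_bits = 11 * math.log2(10)   # ~36.5 bits

print(f"4-word passphrase: {passphrase_bits:.1f} bits")
print(f"11 random digits:  {digit_string_bits:.1f} bits")

The passphrase wins on raw entropy, and it’s far easier to remember.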

And, of course, password history. This sort of goes hand-in-hand with password expiration, but not always. When it’s used in conjunction with password expiration, it generally results in single-character variations of the same password. Your super-secure “complex” password of “Password1” (seriously, it meets the criteria: uppercase, lowercase, number) becomes a series of passwords where the 1 is changed to a 2, then 3, then 4, and so on until the history is exceeded and the user can return to 1 again. It’s easier to remember that way, and the user doesn’t have to do much extra work.

So even the standard security practices on the checklist can be questioned. The real answer is to tailor each audit to the needs of whoever requested it, and to evaluate the responses based on the security posture of the responder. There do need to be baselines, but they should be sane baselines. If you don’t get all of the checkmarks on an audit, it may not mean you’re insecure; it may just mean you’re securing your network in a way the auditor didn’t think of. There’s more to security than fancy passwords and firewalls. A lot more.

</RANT>

Much Ado About Lion

Apple released the latest version of its OS X operating system, Lion, on July 20th. With this release came a myriad of changes in both the UI and back-end systems. Many of these changes have been denounced by critics as signs that Apple is slowly killing off OS X in favor of iOS. After spending some time with Lion, I have to disagree.

Many of the new UI features are very iOS-like, but I’m convinced that this is not a move to dumb down OS X. I believe this is a move by Apple to make the OS work better with the hardware it sells. Hear me out before you declare me a fanboy and move on.

Since the advent of the unibody MacBook, Apple has been shipping buttonless input devices. The MacBook itself has a large touchpad, sans button. Later, they released the Magic Mouse, sort of a transitional device between mice and trackpads. I’m not a fan of that particular device. And today they ship the Magic Trackpad: no buttons, lots of room for gestures. Just check out the copy direct from their website.

If you look at a lot of the changes made in Lion, they go hand-in-hand with new gestures. Natural scrolling lets you move the screen in the same direction your fingers are moving. Swipe three fingers to the left or right and the desktop you’re on moves along with them. Explode your fingers outwards and Launchpad appears, a quick, simple way to access your applications folder. Similar gestures are available for the Magic Mouse as well.

These gestures allow for quick and simple access to many of the more advanced features of Lion. Sure, iOS had some of these features first, but just because they’ve moved to another platform doesn’t mean that the platforms are merging.

Another really interesting feature in Lion is one that has been around for a while in iOS. When Apple first designed iOS, they likely realized that standard scrollbars chew up a significant amount of screen real estate. Sure, on a regular computer it may be a relatively small percentage, but on a small screen like a phone, it’s significant. So, they designed a thinner scrollbar, minus the arrows normally seen at the top and bottom, and made it auto-hide when the screen isn’t being scrolled. This saved a lot of room on the screen.

Apple has taken this scrollbar design and integrated it into the desktop OS, and the effect is pretty significant. The amount of room saved on-screen is quite noticeable. I have seen a few complaints about the new scrollbars, however, mostly that it’s difficult to grab them with the mouse pointer, or that the arrow buttons are gone. I think the former is just a general “they changed something” complaint, while the latter is truly legitimate. There have been a few situations where I’ve looked for the arrow buttons and their absence was noticeable. I wonder, however, whether this is a function of habit, or whether they are truly necessary. I’ve been able to work around this pretty easily on my MacBook, but after I install Lion on my Mac Pro, I expect I’ll have a slightly harder time. Unless, that is, I buy a trackpad. As I said, I believe Apple has built this new OS with its newer input devices in mind.

On the back end, Lion is, from what I can tell, completely 64-bit. Apple has removed Java and Flash and, interestingly, banned both from its online App Store; no apps that require Java or Flash can be sold there. Interesting move. Additionally, Rosetta, the emulation software that allowed older PowerPC software to run, has been removed.

Overall, I’m enjoying my Lion experience. I still have the power of a Unix-based system with the simplicity of a well-thought-out GUI. I can still do all of the programming I’m used to, as well as watch videos, listen to music, and play games. I think I’ll still keep a traditional multi-button mouse around for gaming, though.

Fixing the Serendipity XMLRPC plugin

A while ago I purchased a copy of BlogPress for my iDevices. It’s pretty full-featured and seems to work pretty well. The problem was, I couldn’t get it to work with my Serendipity-based blog. Oh well, a wasted purchase.

But not so fast! Every once in a while I go back and search for a possible solution. This past week I finally hit paydirt. I came across this post on the s9y forums.

This explained why BlogPress was crashing when I used it. In short, it was expecting to see a categoryName tag in the resulting XML from the Serendipity XMLRPC plugin. Serendipity, however, used description instead, likely because Serendipity has better support for the MetaWeblog API.

Fortunately, fixing this problem is very straightforward. All you really need to do is implement both APIs and return the data each of them expects at the same time. For this particular problem, that means a single-line addition to the serendipity_xmlrpc.inc.php file located in $S9YHOME/plugins/serendipity_event_xmlrpc. That addition is as follows:


if ($cat['categoryid']) $xml_entries_vals[] = new XML_RPC_Value(
    array(
      'description'   => new XML_RPC_Value($cat['category_name'], 'string'),
      // XenoPhage: Add 'categoryName' to support mobile publishing (Thanks PigsLipstick)
      'categoryName'  => new XML_RPC_Value($cat['category_name'], 'string'),
      'htmlUrl'       => new XML_RPC_Value(serendipity_categoryURL($cat, 'serendipityHTTPPath'), 'string'),
      'rssUrl'        => new XML_RPC_Value(serendipity_feedCategoryURL($cat, 'serendipityHTTPPath'), 'string')
    ),
    'struct'
);

And poof, you now have proper category support for the Movable Type API.

Evaluating a Blogging Platform

I’ve been pondering my choices lately, trying to determine whether I should stay with my current blogging platform or move to another one. There’s nothing immediate forcing me to change, nor anything about my current platform that particularly compels me to stay. This is an exercise I seem to go through from time to time. It’s probably for the better, as it keeps me abreast of what else is out there and allows me to re-evaluate choices I’ve made in the past.

So, what is out there? Well, Serendipity has grown quite a bit as a blogging platform and is well supported. That, in its own right, makes it a worthy choice. The plugin ecosystem is vast, and the API is simple enough that creating new plugins when the need arises is a quick task.

There are some drawbacks, however. Since it’s not quite as popular as some other platforms, interoperability with third-party tools can be spotty. For instance, the offline blogging tool I’m using right now, BlogPress, doesn’t work quite right with Serendipity. I believe this might be due to missing features and/or bugs in the Serendipity XMLRPC interface. Fortunately, someone in the community had already debugged the problem and provided a fix.

WordPress is probably one of the more popular platforms right now. Starting a WordPress blog can be as simple as creating a new account at wordpress.com. There’s also the option of downloading the WordPress distribution and hosting it on your own. As with Serendipity, WordPress also has a vibrant community and a significant plugin collection. From what I understand, WordPress also has the ability to be used as a static website, though that’s less of an interest for me. WordPress has wide support in a number of offline blogging tools, including custom applications for iPad and iPhone devices.

There are a number of “cloud” platforms as well. Examples include Tumblr, LiveJournal, and Blogger. These platforms offer a wide variety of interoperability with services such as Twitter and Flickr, but you sacrifice control. You are at the complete mercy of the platform provider, with very little recourse. For instance, if a provider disagrees with you, they can easily block or delete your content. Or the provider can go out of business, leaving you without access to your blog at all. These, in my book, are significant drawbacks.

Another possible choice is Drupal. I’ve been playing around with Drupal quite a bit, especially since it’s the platform of choice for a lot of projects I’ve been involved with lately. It seems to fit the bill pretty well and is incredibly extensible. In fact, it’s probably the closest I’ve come to actually making a switch up to this point. The one major hurdle I have at the moment is lack of API support for blogging tools. Yes, I’m aware of the BlogAPI module, but according to the project page for it, it’s incomplete, unsupported, and the author isn’t working on it anymore. While I was able to install it and initially connect to the Drupal site, it doesn’t seem that any of the posting functionality works at this time. Drupal remains the strongest competitor at this point and has a real chance of becoming my new platform of choice.

For the time being, however, I’m content with Serendipity. The community remains strong, there’s a new release on the horizon, and, most important, it just works.

Technology in the here and now

I’m writing this while several thousand feet up in the air, on a flight from here to there. I won’t be able to publish it until I land, but that seems to be the exception these days rather than the norm.

And yet, while we were preparing for takeoff, the same old announcements were made: turn off cell phones and pagers, disable wireless communications on electronic devices. Listening around me, I heard hurried conversations between passengers as they made sure all of their devices were disabled. As if a stray radio signal would cause the airplane to suddenly drop from the sky, or prevent it from taking off to begin with.

Why is it that we, as a society, cannot get over these simple hurdles? Plenty of studies have shown that these devices don’t interfere with planes. In fact, some airlines are offering in-flight wireless access, and many have offered in-flight telephone calls. Unless my understanding of flight is severely limited, I’m fairly certain that all of these functions use radio signals to operate. And yet we are still told that stray signals may cause planes to crash or interfere with the pilots’ instrumentation.

We need to get over this hurdle. We need to start spending our time looking to the future, advancing our technology, forging new paths. We need to stop clinging to outdated ideas. Learning from our past mistakes is one thing, and there’s merit in understanding history. But let’s spend our energy wisely and make the simple things we take for granted even better.

Helpful Rules for OSSEC

OSSEC has quickly become a primary weapon in my security toolkit.  It’s flexible, fast, and very easy to use.  I’d like to share a few rules I’ve found useful as of late.

I primarily use OSSEC in a server/client setup. One side effect of this is that when I make changes to the agent configuration, it takes some time for those changes to push out to all of the clients. Additionally, agents don’t restart automatically when a new config is received. However, it’s fairly easy to remedy this.

First, make sure you have syscheck enabled and that you’re monitoring the OSSEC directory for changes. I recommend monitoring all of /var/ossec and ignoring a few specific directories where files change regularly. You’ll need to add this to both ossec.conf and agent.conf.

<directories check_all="yes">/var/ossec</directories>
<ignore type="sregex">^/var/ossec/queue/</ignore>
<ignore type="sregex">^/var/ossec/logs/</ignore>
<ignore type="sregex">^/var/ossec/stats/</ignore>

The first time you set this up, you’ll have to manually restart the clients after the new config is pushed to them. All new clients should work fine, however.
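If you need to kick one manually, something like this on the client should do it (the path assumes a default OSSEC install):

/var/ossec/bin/ossec-control restart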

Next, add the following rule to your local_rules.xml file (or whatever scheme you’re using).

<rule level="12" id="100005">
   <if_matched_group>syscheck</if_matched_group>
   <description>agent.conf changed, restarting OSSEC</description>
   <match>/var/ossec/etc/shared/agent.conf</match>
</rule>

This rule looks for changes to the agent.conf file and triggers a level 12 alert. Now we just need to capture that alert and act on it. To do that, you need to add the following to your ossec.conf file on the server.

<command>
    <name>restart-ossec</name>
    <executable>restart-ossec.sh</executable>
    <expect>srcip</expect>
    <timeout_allowed>no</timeout_allowed>
</command>
<active-response>
    <command>restart-ossec</command>
    <location>local</location>
    <rules_id>100005</rules_id>
</active-response>

You need to add this at the top of your active response section, above any other active-response blocks; OSSEC matches the first block and ignores any subsequent ones. The restart-ossec.sh script referenced in the command section should already exist in your active-response/bin directory, as it’s part of the distribution.

And that’s all there is to it. Whenever the agent.conf file changes on a client, it’ll restart the OSSEC agent, reading in the new configuration.

Next up, a simple DoS prevention rule for Apache web traffic. I’ve had a few instances where a single IP would hammer away at a site I’m responsible for, eating up resources in the process. Generally speaking, there’s no legitimate reason for that kind of traffic, so one solution is to temporarily block IPs that behave abusively.

Daniel Cid, the author of OSSEC, helped me out a little on this one. It turned out to be a little less intuitive than I expected.

First, you need to group together all of the “normal” response codes. The truly alarming responses (most 400/500 errors) are already handled by other, more aggressive rules, so you can ignore those. For our purposes, we want to trigger on the ordinary 200/300/400 responses that make up regular traffic.

<rule id="131105" level="1">
      <if_sid>31101, 31108, 31100</if_sid>
      <description>Group of all "normal" 200/300/400 response codes.</description>
</rule>

Next, we want to create a composite rule that will fire after a set frequency and time limit. In short, we want this rule to fire if X matches are made in Y seconds.

<rule id="131106" level="10" frequency="500" timeframe="60">
      <if_matched_sid>131105</if_matched_sid>
      <same_source_ip />
      <description>Excessive access, Temporary block</description>
</rule>

That should be all you need, provided you have active response already enabled. You can also add a specific active response for this rule that blocks for a shorter, or longer, period of time. That’s the beauty of OSSEC: the choice is in your hands.
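For example, a rule-specific active response might look something like this (a sketch that assumes the stock firewall-drop command from the default ossec.conf; the 600-second timeout is just an illustration, so tune it to your environment):

<active-response>
    <command>firewall-drop</command>
    <location>local</location>
    <rules_id>131106</rules_id>
    <timeout>600</timeout>
</active-response>

Once the timeout expires, OSSEC removes the block automatically.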

I hope you find these rules helpful. If you have any questions or comments, feel free to post them below.