Games as saviors?

I watched a video yesterday about using video games as a means to help solve world problems. It sounds outrageous at first, until you really think about the problem. But first, how about watching the video:

Ok, now that you have some background, let’s think about this for a bit. Technology is amazing and has brought us many advancements. Gaming is one of those advancements. We have the capability of creating entire universes, purely for our own amusement. People spend hours each day exploring these worlds, typically working toward completing goals set forth by the game designers. When a player completes a goal, they are rewarded. Rewards might be new items, in-game currency, or clues to other goals. Each goal is within the reach of the player, though some goals may require more work to attain.

Ms. McGonigal argues that the devotion players show to games can be harnessed and used to help solve real-world problems. Players feel empowered by games, which give them a sense of control over what happens to them. Games teach players that they can accomplish the goals set before them, and with that accomplishment comes an excitement to continue.

I had the opportunity to participate in a discussion about this topic with a group of college students. Opinions ranged from a general distaste for gaming, seeing it as a waste of time, to an embrace of the ideas presented in the video. For myself, I believe that many of the ideas Ms. McGonigal presents have a lot of merit. Some of the students argued that such realistic games would be complicated and uninteresting. However, I would argue that realistic games have already proven to be big hits.

Take, for example, The Sims. The Sims was a huge hit, with players spending hours in the game adjusting various aspects of their character’s lives. I found the entire phenomenon to be absolutely fascinating. I honestly don’t know what the draw of the game was. Regardless, it did extremely well, proving that such a game could succeed.

Imagine taking a real-world problem and creating a game to represent that problem. At the very least, such a game can foster conversation about the problem. It can also lead to unique ideas about how to solve the problem, even though those playing the game may not be well-versed on the topic.

It’s definitely an avenue worth exploring, especially as future generations spend more time online. If we can find a way to harness the energy and excitement that gaming generates, we may be able to find solutions to many of the world’s most perplexing problems.

 

SSL MitM Appliance

SSL has been used for years to protect against Man in the Middle (MitM) attacks. It has worked quite well, keeping our sensitive transactions secure. However, that sense of security is starting to crumble.

At Black Hat USA 2009, security researcher Dan Kaminsky presented a talk outlining flaws in X.509 SSL certificates. In short, it is possible to trick a certificate authority into certifying a site as legitimate when the site may, in fact, be malicious. It’s not the easiest hack to pull off, but it’s there.

Once you have a legitimate certificate, pulling off a MitM attack is as simple as proxying the traffic through your own system. If you can trick the user into hitting your server instead of the legitimate server, *cough*DNSPOISONING*cough*, you can impersonate the legitimate server via proxy, and log everything the user does. And the only way the user can tell is if they actually look at the IP they’re hitting. How many people do you know that keep track of the IP of the server they’re trying to get to?

Surely there’s something that will prevent this, right? I mean, the fingerprint of the certificate has changed, so the browser will tell me that something is amiss, right? Well, actually, no. In fact, if you replace a valid certificate from one CA with a valid certificate from another CA, the end user typically sees no change at all. There may be options that can be set to alter this behavior, but I know of no browsers that will detect this by default. Ultimately, this means that if an attacker can obtain a valid certificate and redirect your traffic, he will own everything you do without you being the wiser.
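One low-tech defense is to record a server’s certificate fingerprint yourself and compare it on later visits. A minimal sketch using the openssl command line (the self-signed certificate here just stands in for a real server certificate, and the filenames are arbitrary):

```shell
# Create a throwaway self-signed certificate to stand in for a server cert
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout server.key -out server.crt -days 1 2>/dev/null

# First visit: record the certificate's SHA-1 fingerprint
openssl x509 -in server.crt -noout -fingerprint -sha1 > known_fp.txt

# Later visit: compute the fingerprint again and compare; any change,
# even from one CA-signed cert to another, deserves investigation
openssl x509 -in server.crt -noout -fingerprint -sha1 | \
    diff - known_fp.txt && echo "fingerprint unchanged"
```

Against a live server, you would fetch the certificate with something like `openssl s_client -connect www.example.com:443` instead of reading it from a local file.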

And now, just to make things more interesting, we have this little beauty.

This is an SSL interception device sold by Packet Forensics. In short, you provide the fake certificate and redirect the user traffic and the box will take care of the rest. According to Packet Forensics, this box is sold exclusively to law enforcement agencies, though I’m sure there are ways to get a unit. For “testing,” of course.

How this device can legally be used is actually unclear. In order to use it, a Law Enforcement Organization (LEO) will need to obtain legitimate certificates to impersonate the remote website, as well as obtain access to insert the device into a network. If the device is not placed directly in-line with the user, then potentially illegal hacking has to take place in order to redirect the traffic instead. Regardless, once these are obtained, the LEO has full access to the user’s traffic to and from the remote server.

The existence of this device merely drives home the ease with which MitM attacks are possible. In fact, a paper published by two EFF researchers suggests this may already be happening. To date, there are no readily available tools to prevent this sort of abuse. However, the authors of the aforementioned paper are planning on releasing a Firefox plugin, dubbed CertLock, that will track SSL certificate information and inform the user when it changes. Ultimately, however, it would be great if browser makers incorporated these checks into the main browser logic.

So remember, kiddies: just because you see the pretty lock icon or the browser bar turns green, there is no guarantee you’re not being watched. Be careful out there; cyberspace is dangerous.

 

Really Awesome New Cisco confIg Differ

Configuration management is pretty important, but often overlooked. It’s typically easy enough to handle configurations for servers, since you have access to standard scripting tools as well as cron. Hardware devices such as switches and routers are a bit harder to handle, though, as automating backups of their configs can be daunting, at best.

Several years ago, I took the time to write a fairly comprehensive configuration backup system for the company I was working for. It handled Cisco routers and switches, Fore Systems/Marconi ASX ATM switches, Redback SMS aggregators, and a few other odds and ends. Unfortunately, it was written specifically for that company and not something easily converted for general use.

Fortunately, there’s a robust open source alternative called RANCID. The Really Awesome New Cisco confIg Differ, RANCID, is a set of Perl scripts designed to automate configuration retrieval from a host of devices, including Cisco, Juniper, Redback, ADC, HP, and more. Additionally, since most of the framework is already there, you can extend it as needed to support additional devices.

RANCID has a few interesting features which make life much easier as a network admin. First, when it retrieves the configuration from a device, it checks it in to either a CVS or SVN repository. This gives you the ability to see changes between revisions, as well as the ability to retrieve an old revision of a config from just about any point in time. Additionally, RANCID emails you a list of the changes between the current and previous revision of a configuration. This way you can keep an eye on your equipment, seeing alerts when things change. Very, very useful for catching mistakes, whether made by you or by others.

Note: RANCID handles text-based configurations. Binary configurations are a whole different story: while binary configs can be placed in an SVN repository, getting emailed about changes becomes a problem. Handling binary configs is possible in principle, but I do not believe RANCID has this capability.

Setup of RANCID is pretty straightforward. You can either install straight from source, or use a pre-packaged RPM. For this short tutorial, I’ll be using an RPM-based installation. The source RPM I’m using can be found here. It is assumed that you can either rebuild the RPM via the rpmbuild utility, or you can install the software from source.

After the software is installed, there are a few steps required to set up the software. First, I would recommend editing the rancid.conf file. I find making the following modifications to be a good first step:

RCSSYS=svn; export RCSSYS
* Change RCSSYS from cvs to svn. I find SVN to be a superior revisioning system. Your mileage may vary, but I’m going to assume you’re using SVN for this tutorial.

FILTER_PWDS=ALL; export FILTER_PWDS
NOCOMMSTR=YES; export NOCOMMSTR
* Uncommenting these and turning them on ensures that passwords are not stored on your server. This is a security consideration as these files are stored in cleartext format.

OLDTIME=4; export OLDTIME
* This setting tells RANCID how long a device can be unreachable before alerting you to the problem. The default is 24 hours. Depending on how often you run RANCID, you may want to change this option.

LIST_OF_GROUPS="routers switches firewalls"
* This is a list of names you’ll use to identify groups of devices. These names are arbitrary, so Fred, Bob, and George are OK. However, I would encourage you to use something meaningful.

The next step is to create the CVS/SVN repositories you’ll be using. This can’t possibly be easier. Switch to the rancid user, then run rancid-cvs. You’ll see output similar to the following:

-bash-3.2$ rancid-cvs
Committed revision 1.
Checked out revision 1.
A configs
Adding configs
Committed revision 2.
A router.db
Adding router.db
Transmitting file data .
Committed revision 3.
Committed revision 4.
Checked out revision 4.
A configs
Adding configs
Committed revision 5.
A router.db
Adding router.db
Transmitting file data .
Committed revision 6.
-bash-3.2$

That’s it, your repositories are created. All that’s left is to set up the user credentials that rancid will use to access the devices, tell rancid which devices to contact, and finally, where to send email. Again, this is quite straightforward.

User credentials are stored in the .cloginrc file located in the rancid home directory. This file is quite detailed, with explanations of the various configuration options. In short, for most Cisco devices, you’ll want something like this:

add user * <username>
add password * <login password> <enable password>
add method * ssh

This tells the system to use the given username and passwords for accessing all devices in rancid via ssh. You can specify overrides by adding additional lines above these, replacing the * with the device name.
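For example, to use different credentials for one specific device, the override lines go above the wildcard entries. The hostname and username below are made up for illustration:

```
add user border-rtr.example.com admin2
add password border-rtr.example.com <login password> <enable password>
add user * <username>
add password * <login password> <enable password>
add method * ssh
```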

Next, tell rancid what devices to contact. As the rancid user, switch to the appropriate repository directory. For instance, if we’re adding a router, switch to ~rancid/routers and edit the router.db file. Note: This file is always called router.db, regardless of the repository you are in. Each line of this file consists of three fields, separated by colons. Field 1 is the hostname of the device, field 2 is the type of device, and field 3 is either up or down depending on whether the device is up or not. If you remove a device from this file, the configuration is removed from the repository, so be careful.

router.example.com:cisco:up
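Since a malformed line can silently break config collection, it’s worth sanity-checking the three-field format before committing changes. A quick sketch, using a sample router.db in the current directory rather than the real repository copy:

```shell
# Sample router.db entries: hostname:device-type:state
cat > router.db <<'EOF'
router.example.com:cisco:up
switch1.example.com:cisco:up
oldswitch.example.com:cisco:down
EOF

# Flag any line that does not have exactly three colon-separated fields
awk -F: 'NF != 3 { print "bad line " NR ": " $0; bad = 1 } END { exit bad }' \
    router.db && echo "router.db format OK"
```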

Finally, set up the mailer addresses for receiving rancid mail. These consist of aliases on the local machine. If you’re using sendmail, edit the /etc/aliases file and add the following :

rancid-<group>: <email target>
rancid-admin-<group>: <email target>

There are two different aliases needed for each group. Groups are the names used for the repositories; in our previous example, we have three groups: routers, switches, and firewalls. So we set up two aliases for each, sending the results to the appropriate parties. The standard rancid-&lt;group&gt; alias is used for sending config diffs, while the rancid-admin-&lt;group&gt; alias is used to send alerts about program problems, such as not being able to contact a device.

Make sure you run newaliases when you’re done editing the aliases file.

Once these are all set up, we can run a test of rancid. As the rancid user, run rancid-run. This will run through all of the devices you have identified and begin retrieving configurations. Assuming all went well, you should receive notifications via email about the new configurations identified.

If you have successfully run rancid and retrieved configurations, it’s time to set up the cron job to have this run automagically. Merely edit the crontab file for rancid and add something similar to the following:

# run config differ 11 minutes after midnight, 2am, 4am, etc.
11 0-23/2 * * * /usr/bin/rancid-run
# clean out config differ logs
50 23 * * * /usr/bin/find /var/rancid/logs -type f -mtime +2 -exec rm {} \;

Offsetting the times a bit is a good practice, just to ensure everything doesn’t run at once and bog down the system. The second entry cleans up the rancid log files, removing anything older than 2 days.
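If you want to see exactly what that find expression will remove before pointing it at the real log directory, you can try it on a throwaway directory first. A sketch (GNU touch’s -d option is used here to backdate a file):

```shell
# Build a sandbox that mimics the rancid log directory
mkdir -p /tmp/rancid-demo/logs
touch /tmp/rancid-demo/logs/routers.today
touch -d "4 days ago" /tmp/rancid-demo/logs/routers.old

# Same expression as the cron entry: remove files older than 2 days
find /tmp/rancid-demo/logs -type f -mtime +2 -exec rm {} \;

# Only the recent log should remain
ls /tmp/rancid-demo/logs
```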

And that’s it! You’re well on your way to being a better admin. Now to finish those other million or so “great ideas” ….

 

The Privacy Problem

Live free or die: Death is not the worst of evils.
General John Stark

Is life so dear, or peace so sweet, as to be purchased at the price of chains and slavery? Forbid it, Almighty God! I know not what course others may take; but as for me, give me liberty or give me death!
Patrick Henry

Privacy, n.
1. The state or condition of being alone, undisturbed, or free from public attention, as a matter of choice or right; seclusion; freedom from interference or intrusion.
2. The state of being privy to some act
3. a. Absence or avoidance of publicity or display; secrecy, concealment, discretion; protection from public knowledge or availability.
3. b. The keeping of a secret; reticence.
Oxford English Dictionary

Privacy is often taken for granted. When the US Constitution was drafted, the founding fathers made sure to put in provisions to guarantee the privacy of the citizens they would govern. Most scholars agree that their intention was to prevent government intrusion in private lives and activities. They were very forward-thinking, trying to ensure this protection would continue indefinitely into the future. Unfortunately, even the most forward-thinking, well-intentioned individual won’t be able to cover all of the possible scenarios that will occur in the future.

Since that fateful day in 1787, a war has raged between those advocating absolute privacy and those advocating reasonable intrusion for the sake of security. At the extreme edge of the argument are the non-consequentialists who believe that privacy should be absolute. They believe that privacy is non-negotiable and that the loss of privacy is akin to slavery. A common argument is that giving up privacy merely encourages additional loss. In other words, if you allow your privacy to be compromised once, then those that violate it will expect to be able to violate it again.

At the other edge are those that believe that privacy is irrelevant in the face of potential evil. This is also a non-consequentialist view. Individuals with this view tend to argue that if you have something to hide, then you are obviously guilty of something.

Somewhere in the middle are the consequentialists who believe that privacy is essential to a point. Violation of privacy should be allowed when the benefit of doing so outweighs the benefit of keeping something private. In other words, if disclosing a secret may save a life, or prevent an innocent person from going to jail, then a violation of privacy should be allowed.

The right to privacy has been fought over for years. In more recent years, technological advances have brought to light many of the problems with absolute privacy, and at the same time, have highlighted the need for some transparency. Technology has benefits for both the innocent and the criminal. It makes no delineation between the two, offering the same access to information for both.

New technologies have allowed communication over long distances, allowing criminals to coordinate criminal activities without the need to gather. Technology has brought devastating weaponry to the average citizen. Terrorists can use an Internet search engine to learn how to build bombs, plan attacks, and communicate with relative privacy. Common tools can be used to replicate identification papers, allowing criminals access to secure areas. The Internet can be used to obtain access to remote systems without permission.

Technology can also be used in positive ways. Mapping data can be used to optimize travel, find new places, and get you home when you’re lost. Online stores can be used to conveniently shop from your home, or find products you normally wouldn’t have access to. Social networking can be used to keep in touch with friends and relatives, and to form new friendships with strangers you may never have come in contact with otherwise. Wikipedia can be used for research and updated by complete strangers to spread knowledge. Companies can stay in contact with customers, alerting them of new products, updates to existing ones, or even alert them to potential problems with something they previously purchased.

In the last ten or so years, privacy in the US has been “under attack.” These so-called attacks come from many different sources. Governmental agencies seek access to more and more private information in order to combat terrorism and other criminal activities. Private organizations seek to obtain private information to identify new customers, customize advertisements, prevent fraud, etc. Technology has enabled these organizations to obtain this data in a variety of ways, often unbeknownst to the average user.

When was the last time you went to the airport and waited for someone to arrive at the gate? How about escorting someone to the gate before their flight? As recently as 20 years ago, it was possible to do both. Since that time, however, security measures have been put in place to prevent non-ticketed individuals from passing beyond security checkpoints. Since the 9/11 terrorist attacks, security has been enhanced to include random searches, bomb sniffing, pat downs, full-body scanners, and more. In fact, the Transportation Security Administration (TSA) started random screening at the gate in 2008. Even more recently, the TSA has authorized random swabbing of passenger hands to detect explosive residue. While these measures arguably enhance security, they do so at the expense of the private individual. Many travelers feel violated by the process, even arguing that they are assumed to be guilty, having to prove their innocence every time they fly.

Traditionally, any criminal proceeding is conducted with the assumption of innocence. A person is considered innocent of a crime unless and until they are proven guilty. In the airport example above, the passengers are being screened with what can be considered an assumption of guilt. If you refuse to be screened, you are barred from flying if you’re lucky, or taken in for additional questioning and potentially jailed for the offense. Of course, individuals are not granted the right to fly, but rather offered the opportunity at the expense of giving up some privacy. It’s when these restrictions are applied to daily life, without the consent of the individual, that more serious problems arise.

Each and every day, the government gathers information about its citizens. This information is generally available to the public, although access is not necessarily easy. How this information is used, however, is often a source of criticism by privacy advocates. Massive databases of information have been built with algorithms digging through the data looking for patterns. If these patterns match, the individuals to whom the data belongs can be subject to additional scrutiny. This “fishing” for wrongdoing is often at the crux of the privacy argument. Generally speaking, if you look hard enough, and you gather enough data, you can find wrongdoing. More often, however, false positives pop up and individuals are subjected to additional scrutiny without warrant. In some cases, individuals can be wrongly detained.

Many opponents of privacy argue that an innocent person has nothing to hide. However, this argument can be considered a fallacy. Professor Daniel Solove wrote an essay explaining why this argument is faulty. He argues that the “nothing to hide” argument is essentially hollow. Privacy is an inherently individualistic preference. Without knowing the full extent of how information will be used, it is impossible to say that revealing everything will have no ill effects, even assuming the individual is innocent of wrongdoing. For instance, data collected by the government may not be used to identify you as a criminal, but it may result in embarrassment or feelings of exposure. What one person may consider a non-issue, others may see as evil or wrong.

These arguments extend beyond government surveillance and into the private sector as well. Companies collect information about consumers at an alarming rate. Information entered into surveys, statistics collected from websites, travel information collected from toll booths, and more can be used to profile individuals. This information is made available, usually at a cost, to other companies or even individuals. This information isn’t always kept secure, either. Criminals often access remote systems, obtaining credit card and social security numbers. Stalkers and pedophiles use social networking sites to follow their victims. Personal information posted on public sites can find its way into credit reports and is even used by some businesses to justify firing employees.

Privacy laws have been put in place to prevent such abuses, but information is already out there. Have you taken the time to put your name into a search engine lately? Give it a try, you may be surprised by the information you can find out about yourself. These are public records that can be accessed by anyone. Financial and real estate information is commonly available to the public, accessible to those knowing how to look for it. Criminal records and court proceedings are published on the web now, allowing anyone a chance to access it.

Whenever you access a website, check out a book from the library, or chat with a friend in email, you run the risk of making that information available to people you don’t want to have it. In recent years, it has been common for potential employers to use the Internet to obtain background information on a potential employee. In some cases, embarrassing information can be uncovered, casting a negative light on an individual. Teachers have been fired because of pictures they posted, innocently, on their profile pages. Are you aware of how the information you publish on the Internet can be used against you?

There is no clear answer on what should and should not be kept private. Likewise, there is no clear answer on what private data the government and private companies should have access to. It is up to you, as an individual, to make a conscious choice as to what you make public. In an ever-evolving world, the decisions you make today can and will have an impact on what may happen in the future. What you may think of as an innocent act today can potentially be used against you later. It’s up to you to fight for your privacy, both from the government and from the companies you interact with. Private data is being used in new, interesting, and potentially harmful ways every day. Be sure you’re aware of how your data can be used before you provide it.

 

The Third Category

“Is there room for a third category of device in the middle, something that’s between a laptop and smartphone?”

And with that, Steve Jobs, CEO of Apple, ushered in the iPad.

So what is the iPad, exactly? I’ve been seeing it referred to as merely a gigantic iPod Touch. But is there more to it than that? Is this thing just a glorified iPod, or can there be more there?

On the surface, it truly is an oversized iPod Touch. It has the same basic layout as an iPod Touch with the home button at the bottom. It has a thick border around the screen where the user can hold the unit without interfering with the multitouch display.

The screen itself is an LCD using IPS technology. According to Wikipedia, IPS (In-Plane Switching) is a technology designed by Hitachi. It offers a wide viewing angle and accurate color reproduction. The screen is backlit using LEDs, offering longer battery life, uniform backlighting, and a longer lifespan for the display.

Apple is introducing a total of six models, varying only in the size of the built-in flash storage and the presence of 3G connectivity. Storage comes in 16, 32, or 64 GB varieties. 3G access requires a data plan from a participating 3G provider, AT&T to start, and will entail a monthly fee. 3G access will also require the use of a micro-SIM card; AT&T is currently the only US provider using these cards. The base 16GB model will go for $499, while the 64GB 3G model will run you $829, plus a monthly data plan. As it stands now, however, the data plan is on a month-by-month basis, no contract required.

Ok, so with the standard descriptive details out of the way, what is this thing? Is it worth the money? What is the “killer feature,” if there is one?

On the surface, the iPad seems to be just a big iPod Touch, nothing more. In fact, the iPad runs an enhanced version of the iPhone OS, the same OS the iPod Touch runs. Apple claims that most of the existing apps in the iTunes App Store will run on the iPad, both in original size, as well as an enhanced mode that will allow the app to take up the entire screen.

Based on the demonstration that Steve Jobs gave, as well as various other reports, there’s more to this enhanced OS, though. For starters, it looks like there will be pop-out or drop-down menus, something the current iPhone OS does not have. Additionally, apps will be able to take advantage of file sharing, split screen views, custom fonts, and external displays.

One of the most touted features of the iPad was the inclusion of the iBook store. It seems that Apple wants a piece of the burgeoning eBook market and has decided to approach it just like they approached the music market. The problem here is that the iPad is still a backlit LCD screen at its core. Staring at a backlit display for long periods of time generally leads to headaches and/or eye strain. This is why eInk-based units such as the Kindle or the Sony Reader do so well. It’s not the aesthetics of the Kindle that people like, it’s the comfort of using the unit.

It would be nice to see the eBook market opened up the way the music market has been. In fact, I look forward to the day that the majority of eBooks are available without DRM. Apple’s choice of using the ePub format for books is an auspicious one. The ePub format is fast becoming the standard of choice for eBooks and includes support for both a DRM and non-DRM format. Additionally, the format uses standard open formats as a base.

But what else does the iPad offer? Is it just a fancy book reader with some extra multimedia functionality? Or is there something more?

There has been some speculation that the iPad represents more than just an entry into the tablet market, and that it instead represents an entry into the mobile processor market. After all, Apple put together their own processor, the Apple A4, specifically for this product. So is Apple merely using this as a platform for a launch into the mobile processor market? If so, early reports indicate that they may have something spectacular. Those able to get hands-on time with the iPad report that the unit is very responsive and incredibly fast.

But for all of the design and power behind the iPad, there is one glaring hole: Flash support. And Apple isn’t hiding it, either. On stage, during the announcement of the iPad, Steve Jobs demonstrated web browsing by heading to the New York Times homepage. If you’ve ever been to their homepage, it’s dotted with various Flash objects containing video, slideshows, and more. On the iPad, these show up as big white boxes with the Safari plugin icon showing.

So what is Apple playing at? Flash is pretty prevalent on the web, so not supporting it will result in a lot of missing content, as one Adobe employee demonstrated. Of course, the iPhone and iPod Touch have the same problem. Or, do they? If a device is popular, developers adapt. This can easily be seen by the number of websites that have adapted to the iPhone. But even more than that, look at the number of sites that adapt to the various web browsers, creating special markup to work with each one. This is nothing new for developers, it happens today.

Flash is unique, though, in that it gives the developers capabilities that don’t otherwise exist in HTML, right? Well, not exactly. HTML5 gives developers a standardized way to deploy video, handle offline storage, draw, and more. Couple this with CSS and you can replicate much of what Flash already does. There are lots of examples already of what HTML5 can do.

So what does the iPad truly mean to computing? Will it be as revolutionary as Apple wants us to believe it will be? I’m still not 100% sold on it, but it’s definitely something to watch. Microsoft has tried tablets in the past and failed, will Apple succeed?

 

“Educate to Innovate”

About 2 weeks ago, the President gave a speech about a new program called “Educate to Innovate.” The program aims to improve education in the categories of Science, Technology, Engineering, and Mathematics, or STEM. At the end of his speech, students from Oakton High School demonstrated their “Cougar Cannon,” a robot designed to scoop up and throw “moon rocks.” A video of the speech, and the demonstration, is below.

“As President, I believe that robotics can inspire young people to pursue science and engineering. And I also want to keep an eye on those robots in case they try anything.”


As a lover of technology, I find it wonderful that the president is moving in this direction. I wrote, not too long ago, about my disappointment with our current educational system. When I was in school, there were always extra subjects we could engage in to expand our knowledge. In fact, the high school I attended was set up similar to that of a college, requiring that a number of extra credits, beyond the core classes, be taken. Often these were foreign languages or some form of a shop class. Fortunately, for me, the school also offered classes in programming and electronics.

I was invited back to the school by my former electronics teacher a few years after I graduated. The electronics program had expanded somewhat and they were involved in a program called FIRST Robotics, developed by Dean Kamen. Unfortunately, I had moved out of the area, so my involvement was extremely limited, but I did enjoy working with the students. The FIRST program is an excellent way to engage competitiveness along with education. Adults get to assist the students with the building and programming of the robot, guiding them along the process. Some of the design work was simply outstanding, and solutions to problems were truly intuitive.

One of the first “Educate to Innovate” projects is called “National Lab Day.” National Lab Day is a program designed to bring students, educators, and volunteers together to learn and have fun. Local communities, called “hubs,” are encouraged to meet regularly throughout the year. Each year, communities will gather to show off what they have learned and created. Labs range from computer science to biology, geology to physics, and more. In short, this sounds like an exciting project, one that I have signed up for as a volunteer.

I’m excited to see education become a priority once again. Seeing what my children learn in school is very disappointing at times. Sure, they’re younger and I know that basic skills are necessary, but it seems they are learning at a much slower pace than when I was in school. I don’t want to see them struggle later in life because they didn’t get the education they need and deserve. I encourage you to help out where you can, volunteer for National Lab Day, or find another educational program you can participate in. Never stop learning!

 

A tribute to Cosmos

I just ran across this today, thanks, Slashdot. It’s a tribute to Carl Sagan, of Cosmos fame, put together by auto-tuning Sagan’s own dialogue. Auto-tuning is a method of changing the pitch of an existing recording. It is often used to “fix” mistakes musicians make when recording. More recently, it has been used to create entirely new works of art by shifting the pitch of spoken recordings, sometimes to the point of distortion, until they follow a given melody. In the end, you end up with something like the following video:

You can download this video, or an MP3 of it, from the artist’s site. I also recommend checking out The Gregory Brothers and all of their auto-tuning goodness.

Centralized Firewall Logging

I currently maintain a number of Linux-based servers. The number of servers is growing, and management becomes a bit unwieldy after a while. One move I’ve made is to start using Spacewalk for general package and configuration management. This has turned out to be a huge benefit and I highly recommend it for anyone in the same position as I am. Sure, Spacewalk has its bugs, but it’s getting better with every release. Kudos to Red Hat for bringing such a great platform to the public.

Another area of problematic management is the firewall. I firmly believe in defense in depth and I have several layers of protection on these various servers. One of those layers is the iptables configuration on the server itself. Technically, iptables is the program used to configure the filtering ruleset for a Linux kernel with the ip_tables packet filter installed. For the purposes of this article, iptables encompasses both the filter itself as well as the tools used to configure it.

Managing the ruleset on a bunch of disparate servers can be a daunting task. There is often a set of “standard” rules you want to deploy across all of the servers, as well as specialized rules based on what role each server plays. The standard rules typically cover management subnets, common services, and special filtering rules to drop malformed packets, source-routed packets, and more. There didn’t seem to be an easy way to deploy such a ruleset, so I ended up rolling my own script to handle the configuration. I’ll leave that for a future blog entry, though.
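To give a concrete sense of what those “standard” rules look like, here is a rough sketch of a baseline ruleset. The management subnet below is a documentation placeholder, not taken from my actual configuration:

```shell
#!/bin/sh
# Baseline ruleset sketch: default-deny inbound, with a few common allowances.
# 192.0.2.0/24 is a placeholder management subnet.

iptables -F INPUT
iptables -P INPUT DROP

# Allow loopback and established/related traffic
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow SSH from the management subnet only
iptables -A INPUT -p tcp -s 192.0.2.0/24 --dport 22 -j ACCEPT

# Drop obviously malformed packets (no TCP flags set at all)
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
```

Role-specific rules, such as port 80 on a web server, would then be layered on top of this common base.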

In addition to centralized configuration, I wanted a way to monitor the firewall from a central location. There are several reasons for this. One of the major reasons is convenience. Having to wade through tons of logwatch reports, or manually access each server to check the local firewall rules, is difficult and quickly becomes unmanageable. What I needed was a way to centrally monitor the logs, adding and removing filters as necessary. Unfortunately, there doesn’t seem to be much out there. I stumbled across the iptablelog project, but it appears to be abandoned.

Good did come of this project, however, as it led me to look into ulogd. The ulog daemon is a userspace logger for iptables. The iptables firewall can be configured to send security violations, accounting, and flow information to the ULOG target. Data sent to the ULOG target is picked up by ulogd and sent wherever ulogd is configured to send it. Typically, this is a text file or an SQL database.
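As a sketch of how the two pieces fit together, rules like the following hand packets to ulogd before dropping them. The netlink group number and log prefix here are arbitrary choices for illustration, not required values:

```shell
# Send the packet to ulogd via netlink group 1, then drop it.
# ulogd must be configured to listen on the same nlgroup.
iptables -A INPUT -j ULOG --ulog-nlgroup 1 --ulog-prefix "INPUT drop: "
iptables -A INPUT -j DROP
```

The matching nlgroup, along with the output plugin to use (text file, MySQL, and so on), is set in ulogd’s configuration file.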

Getting started with ulogd was a bit of a problem for me, though. To start, since I’m using a centralized management system, I need to ensure that any new software I install uses the proper package format. So, my first step was to find an RPM version of ulogd. I could roll my own, of course, but why re-invent the wheel? Fortunately, Fedora has shipped with ulogd since about FC6. Unfortunately for me, however, I was unable to get the SRPM for the version that ships with Fedora 11 to install; I kept getting a cpio error. No problem, though, I just backed up a bit and downloaded a previous release. It appears that nothing much has changed, as ulogd 1.24 has been released for some time.
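For reference, rebuilding an SRPM on a Red Hat style system amounts to something like the following. The exact package file name will vary by release:

```shell
# Install the source RPM, then rebuild binary packages from its spec file
rpm -ivh ulogd-1.24-1.src.rpm
rpmbuild -ba /usr/src/redhat/SPECS/ulogd.spec
```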

Recompiling the ulogd SRPM for my CentOS 5.3 system failed, however, complaining about linker problems. Additionally, there were errors when the configure script was run. So before I could get ulogd installed and running, I had to get it to compile. It took me a while to figure it out, as I’m not a linker expert, but I came up with the following patch, which I added to the RPM spec file.

--- ./configure	2006-01-25 06:15:22.000000000 -0500
+++ ./configure	2009-09-10 22:37:24.000000000 -0400
@@ -1728,11 +1728,11 @@
 EOF
 
   MYSQLINCLUDES=`$d/mysql_config --include`
-  MYSQLLIBS=`$d/mysql_config --libs`
+  MYSQLLIBS=`$d/mysql_config --libs | sed s/-rdynamic//`
 
   DATABASE_DIR="${DATABASE_DIR} mysql"
 
-  MYSQL_LIB="${DATABASE_LIB} ${MYSQLLIBS} "
+  MYSQL_LIB="${DATABASE_LIB} -L/usr/lib ${MYSQLLIBS}"
 # no change to DATABASE_LIB_DIR, since --libs already includes -L
 
   DATABASE_DRIVERS="${DATABASE_DRIVERS} ../mysql/mysql_driver.o "
@@ -1747,7 +1747,8 @@
 echo $ac_n "checking for mysql_real_escape_string support""... $ac_c" 1>&6
 echo "configure:1749: checking for mysql_real_escape_string support" >&5
 
-  MYSQL_FUNCTION_TEST=`strings ${MYSQLLIBS}/libmysqlclient.so | grep mysql_real_escape_string`
+  LIBMYSQLCLIENT=`locate libmysqlclient.so | grep libmysqlclient.so$`
+  MYSQL_FUNCTION_TEST=`strings $LIBMYSQLCLIENT | grep mysql_real_escape_string`
 
 if test "x$MYSQL_FUNCTION_TEST" = x
 then
In short, this patch modifies the linker flags to add /usr/lib as a library search path and removes the -rdynamic flag, which mysql_config seems to errantly emit. Additionally, it changes how the script identifies whether the mysql_real_escape_string function is present in the installed version of MySQL. Both of these changes resolved my compile problem.

After getting the software to compile, I was able to install it and get it running. Happily enough, the SRPM I started with included patches to add an init script as well as a logrotate script. This makes life a bit easier when getting things running. So now I had a running userspace logger as well as a standardized firewall. Some simple changes to the firewall script added ULOG support. You can download the SRPM here.

At this point I have data being sent to both the local logs as well as a central MySQL database. Unfortunately, I don’t have any decent tools for manipulating the data in the database. I’m using iptablelog as a starting point and I’ll expand from there. To make matters more difficult, ulogd version 2 seems to make extensive changes to the database structure, which I’ll need to keep in mind when building my tools.
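In the meantime, even simple ad-hoc queries are useful. Assuming the default ulogd 1.x MySQL schema, where ip_saddr is stored as an unsigned integer, something like the following shows the noisiest source addresses. The database name and credentials here are placeholders:

```shell
# Query the central ulogd database for the top 10 sources by packet count
mysql -u ulogd -p ulogd -e "
  SELECT INET_NTOA(ip_saddr) AS source, COUNT(*) AS hits
  FROM ulog
  GROUP BY ip_saddr
  ORDER BY hits DESC
  LIMIT 10;"
```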

I will, however, be releasing them to the public when I have something worth looking at. Having iptablelog as a starting point should make things easier, but it’s still going to take some time. And, of course, time is something I have precious little of to begin with. Hopefully, though, I’ll have something worth releasing before the end of this year. Here’s hoping!