A tribute to Cosmos

I just ran across this today, thanks to Slashdot. It’s a tribute to Carl Sagan of Cosmos fame, put together by auto-tuning Sagan’s own dialogue. Auto-tuning is a technique for changing the pitch of recorded sound. It is often used to “fix” mistakes musicians make when recording. More recently, it has been used to create entirely new works of art by shifting the pitch of recordings to the point of distortion and “tuning” them to follow a given melody. The result is something like the following video:

You can download this video, or an MP3 of it, from the artist’s site. I also recommend checking out The Gregory Brothers and all of their auto-tuning goodness.

Centralized Firewall Logging

I currently maintain a number of Linux-based servers. That number keeps growing, and management becomes unwieldy after a while. One move I’ve made is to start using Spacewalk for general package and configuration management. It has turned out to be a huge benefit, and I highly recommend it to anyone in the same position. Sure, Spacewalk has its bugs, but it’s getting better with every release. Kudos to Red Hat for bringing such a great platform to the public.

Another problematic area of management is the firewall. I firmly believe in defense in depth, and I have several layers of protection on these various servers. One of those layers is the iptables configuration on the server itself. Technically, iptables is the program used to configure the filtering ruleset for a Linux kernel with the ip_tables packet filter installed; for the purposes of this article, “iptables” encompasses both the filter itself and the tools used to configure it.

Managing the ruleset on a bunch of disparate servers can be a daunting task. There is often a set of “standard” rules you want to deploy across all of the servers, as well as specialized rules based on the role each server plays. The standard rules typically cover management subnets, common services, and special filtering rules to drop malformed packets, source-routed packets, and more. There didn’t seem to be an easy way to deploy such a ruleset, so I ended up rolling my own script to handle the configuration. I’ll leave that for a future blog entry, though.
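To give a flavor of what I mean by “standard” rules, here’s a minimal sketch. The management subnet is a placeholder, and the source-route protection actually lives in sysctl rather than iptables:

# Allow SSH from a (hypothetical) management subnet
iptables -A INPUT -s 192.168.100.0/24 -p tcp --dport 22 -j ACCEPT

# Drop obviously malformed packets: no flags set, or illegal SYN/FIN combos
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP

# Refuse source-routed packets at the kernel level
sysctl -w net.ipv4.conf.all.accept_source_route=0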

In addition to centralized configuration, I wanted a way to monitor the firewalls from a central location. There are several reasons for this, convenience chief among them. Wading through tons of logwatch reports, or manually logging into each server to check the local firewall, quickly becomes unmanageable. What I needed was a way to monitor the logs centrally, adding and removing filters as necessary. Unfortunately, there doesn’t seem to be much out there. I stumbled across the iptablelog project, but it appears to be abandoned.

Good did come of this search, however, as it led me to ulogd. The ulog daemon is a userspace logger for iptables. The firewall can be configured to send security violations, accounting, and flow information to the ULOG target; data sent there is picked up by ulogd and written wherever ulogd is configured to send it, typically a text file or a SQL database.
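For illustration, here’s what pairing a rule with the ULOG target looks like. The netlink group number and log prefix are arbitrary choices; the group just has to match the nlgroup setting in ulogd.conf:

# Mirror would-be drops to ulogd (netlink group 1), then drop them
iptables -A INPUT -p tcp --dport 23 -j ULOG --ulog-nlgroup 1 --ulog-prefix "DENY telnet: "
iptables -A INPUT -p tcp --dport 23 -j DROP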

Getting started with ulogd was a bit of a problem, though. Since I’m using a centralized management system, I need to ensure that any new software I install uses the proper package format. So my first step was to find an RPM version of ulogd. I could roll my own, of course, but why reinvent the wheel? Fortunately, Fedora has shipped ulogd since about FC6. Unfortunately for me, the SRPM for the version that ships with Fedora 11 refused to install; I kept getting a cpio error. No problem; I backed up a bit and downloaded a previous release. Not much appears to have changed, as ulogd 1.24 has been out for some time.
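For anyone following along, rebuilding an SRPM is normally a one-liner; the exact release string below is illustrative:

rpmbuild --rebuild ulogd-1.24-1.src.rpm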

Recompiling the ulogd SRPM on my CentOS 5.3 system failed, however, complaining about linker problems, and the configure script threw errors of its own. So before I could get ulogd installed and running, I had to get it to compile. It took me a while to figure out, as I’m no linker expert, but I came up with the following patch, which I added to the RPM spec file.

--- ./configure	2006-01-25 06:15:22.000000000 -0500
+++ ./configure	2009-09-10 22:37:24.000000000 -0400
@@ -1728,11 +1728,11 @@
 EOF
 
 MYSQLINCLUDES=`$d/mysql_config --include`
-MYSQLLIBS=`$d/mysql_config --libs`
+MYSQLLIBS=`$d/mysql_config --libs | sed s/-rdynamic//`
 
 DATABASE_DIR="${DATABASE_DIR} mysql"
 
-MYSQL_LIB="${DATABASE_LIB} ${MYSQLLIBS} "
+MYSQL_LIB="${DATABASE_LIB} -L/usr/lib ${MYSQLLIBS}"
 # no change to DATABASE_LIB_DIR, since --libs already includes -L
 
 DATABASE_DRIVERS="${DATABASE_DRIVERS} ../mysql/mysql_driver.o "
@@ -1747,7 +1747,8 @@
 echo $ac_n "checking for mysql_real_escape_string support""... $ac_c" 1>&6
 echo "configure:1749: checking for mysql_real_escape_string support" >&5
 
-MYSQL_FUNCTION_TEST=`strings ${MYSQLLIBS}/libmysqlclient.so | grep mysql_real_escape_string`
+LIBMYSQLCLIENT=`locate libmysqlclient.so | grep libmysqlclient.so$`
+MYSQL_FUNCTION_TEST=`strings $LIBMYSQLCLIENT | grep mysql_real_escape_string`
 
 if test "x$MYSQL_FUNCTION_TEST" = x
 then

In short, this patch adds /usr/lib as a linker search path and removes the -rdynamic flag, which mysql_config seems to errantly emit. It also changes how the script detects whether the mysql_real_escape_string function is present in the installed version of MySQL. Together, these changes resolved my compile problems.
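For completeness, wiring the patch into the spec file takes just a few lines. The patch filename here is my own placeholder, and -p0 matches the ./configure paths above:

Patch1: ulogd-1.24-configure-mysql.patch

%prep
%setup -q
%patch1 -p0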

After getting the software to compile, I was able to install it and get it running. Happily, the SRPM I started with included patches to add an init script as well as a logrotate script, which makes life a bit easier. So now I had a running userspace logger as well as a standardized firewall; some simple changes to the firewall script added ULOG support. You can download the SRPM here.

At this point I have data being sent to both the local logs and a central MySQL database. Unfortunately, I don’t yet have any decent tools for manipulating the data in the database. I’m using iptablelog as a starting point and will expand from there. To make matters more difficult, ulogd version 2 makes extensive changes to the database structure, which I’ll need to keep in mind when building my tools.
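Even without proper tools, ad-hoc queries get you surprisingly far. Here’s a sketch of the kind of thing I mean, assuming the stock ulogd 1.x MySQL schema, where addresses land in the ulog table as unsigned integers:

SELECT oob_prefix, INET_NTOA(ip_saddr) AS source, COUNT(*) AS hits
FROM ulog
GROUP BY oob_prefix, ip_saddr
ORDER BY hits DESC
LIMIT 20;

That gives a quick “top talkers” view per log prefix, which is exactly the sort of report I’d want the eventual tools to automate.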

I will, however, be releasing them to the public when I have something worth looking at. Having iptablelog as a starting point should make things easier, but it’s still going to take some time. And, of course, time is something I have precious little of to begin with. Hopefully, though, I’ll have something worth releasing before the end of this year. Here’s hoping!


Inexpensive Two Factor Authentication

Two-factor authentication is a means by which a user’s identity can be confirmed more securely. Typically, the user supplies a username and password, the first factor, and then an additional piece of information, the second factor. In theory, providing this additional information proves the user is who they claim to be. Two different types of factors should be used to maximize security.

There are three general types of factors that are used. They are as follows (quoting from Wikipedia):

  • Human factors are inherently bound to the individual, for example biometrics (“Something you are”).
  • Personal factors are otherwise mentally or physically allocated to the individual as for example learned code numbers. (“Something you know”)
  • Technical factors are bound to physical means as for example a pass, an ID card or a token. (“Something you have”)

While two-factor authentication can be secure, the security is often compromised through the use of weak second factors. For instance, many institutions use a series of questions as a second factor. While this is somewhat more secure than a username and password alone, these questions are often generic enough that the answers can be obtained through social engineering. This is really using the same factor twice, in this case personal factors. Personal factors are inexpensive, however, often costing the institution nothing at all.

On the other hand, human and technical factors are often cost-prohibitive. Biometrics, for instance, require some sort of interface to read the biometric data and convert it into something the computer can understand. Technical factors are typically physical electronic devices with a per-device cost. As a result, institutions are unwilling to bear the cost necessary to protect their data.

Banks, in particular, are unwilling to provide this enhanced security because of their large customer bases and the prohibitive cost of providing physical hardware. But banks might be willing to provide a more cost-effective second factor, if one existed. Australian inventor Matt Walker may have such a solution.

Passwindow is a new authentication method consisting of a transparent window with seemingly random markings on it. The key is to combine these markings with similar markings provided by the application requiring authentication. The markings resemble the segments of an LED clock digit, and combining the two sources reveals a series of numbers, effectively creating a one-time password. Passwindow is a technical factor, something you have, making it an excellent second factor. The following video demonstrates how Passwindow works.

What makes Passwindow so incredible is how inexpensive it is to implement. The bulk of the cost is in providing users with their portion of the pattern. It can easily be added to new bank cards as they are sent out, or provided on a second card until customers are due for a replacement. There is sufficient space on existing bank cards to integrate a clear window with the pattern on it.

Passwindow seems to be secure for a number of reasons. It’s a physical object, something that cannot be socially engineered. For it to be compromised, an attacker needs a copy of the segment pattern on your specific card. And while users are generally terrible at keeping passwords safe, they are exceedingly good at keeping physical objects secure.

If an attacker observes the user entering the generated number, the user remains secure because the number is a one-time password. While it is theoretically possible for the same number to come up again, it is highly unlikely. A well-written generator will produce challenges that are truly random, so they can’t be predicted. Additional security can be gained by having the user rotate the card into various positions or by adding extra lines to the card.
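To make the segment-combining idea concrete, here’s a toy sketch of my own. This is not Passwindow’s actual algorithm, just an illustration of how overlaying two partial 7-segment patterns can reveal a digit:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Standard 7-segment encodings for 0-9; bits 0-6 are segments a-g. */
static const unsigned char SEG[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

int main(void) {
    unsigned char card = 0x40; /* the card's printed segment: 'g' */
    srand((unsigned)time(NULL));

    /* Server side: pick digits whose segments include the card's,
       then display only the segments the card is missing. */
    for (int i = 0; i < 4; i++) {
        int d;
        do { d = rand() % 10; } while ((SEG[d] & card) != card);

        unsigned char screen = SEG[d] & (unsigned char)~card;

        /* User side: physically laying the card over the screen
           unions the two patterns, revealing the intended digit. */
        unsigned char seen = screen | card;
        printf("challenge 0x%02X + card 0x%02X -> digit %d (%s)\n",
               screen, card, d, seen == SEG[d] ? "ok" : "mismatch");
    }
    return 0;
}

A real deployment would obviously need challenges drawn from a cryptographically strong source and a server-side record of each card’s mask; the point is only that the overlay step itself is simple arithmetic.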

If Passwindow gains traction, I can see it being integrated into most bank cards, finally providing a more secure means of authentication. It also brings an inexpensive second factor to the table, giving other institutions access to enhanced security. This is some pretty cool technology, and I’m eager to see it deployed in person.


Strange Anatomy …

This is some of the coolest art I’ve seen in a while. Both imaginative and realistic. Just plain awesome. Some of it is for sale, too.

Ever wonder what’s inside those balloon animals? How about the innards of a Lego minifig? Or even the Gingerbread Man! Now you’ll think twice before taking a bite out of one of those…

Jason Freeny, the artist, is an interface designer for a New York-based company. He previously worked for MTV, creating sets, props, artwork, and more, and he spent a brief stint as a toy designer. He also has a blog where he posts his latest artwork. Definitely some cool stuff. Be sure to check out the store on his site, too.


Snow Kitty

Well, it’s finally out: Snow Leopard, Apple’s latest and greatest OS. It was officially released on August 28th, and Apple did a hell of a job getting it delivered on time. My copy arrived at my house yesterday afternoon, and I had it installed on my MacBook Pro that evening.

OS X 10.6 brings full 64-bit application support to the OS. According to Apple, almost every core app has been rebuilt as 64-bit. This means those applications can access more memory when necessary, run faster, and actually take up less space on the hard drive. After installing the new OS, I gained an extra 10 GB of space on my drive. Finally, an upgrade that delivers real savings!

In addition to 64-bit support, Apple has included some new technology. First up is Grand Central Dispatch [pdf], a multi-core threading technology. Grand Central is responsible for managing threads, removing that burden from the developer. As long as an application is written to use GCD, the OS takes care of optimizing thread usage. Apple claims GCD is extremely efficient and scales dynamically with the number of processors in the computer. As a result, programs run faster, taking full advantage of the system.
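To give a rough sense of what that looks like to a developer, here’s a minimal sketch of my own using the basic libdispatch C API and Apple’s blocks extension (compile with clang -fblocks on 10.6); it is just an illustration, not Apple’s sample code:

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    /* Ask GCD for a shared concurrent queue; GCD decides how many
       threads actually back it based on the machine's cores. */
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    for (int i = 0; i < 8; i++) {
        dispatch_group_async(group, q, ^{
            printf("work item %d\n", i); /* runs on a GCD-managed thread */
        });
    }

    /* Block until all queued work items have finished. */
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_release(group);
    return 0;
}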

Another new technology is OpenCL. OpenCL, or Open Computing Language, lets developers tap extra processing power by utilizing the GPU on the graphics card. I’m a bit on the fence about this one. On the one hand, using the extra power can help programs run faster. On the other, it seems that an irresponsible programmer, or even a well-intentioned one, could eat up GPU cycles and hurt overall graphics performance. My fear may be misplaced, though, as I’m sure Apple has put checks in place to keep that from happening. Regardless, it’s a pretty cool technology, and I’d like to see it in action.

In addition to all of the “under the hood” stuff, OS X 10.6 includes a few new features. One of the most touted is Microsoft Exchange support: Mail, iCal, and Address Book can now talk to Exchange servers out of the box, letting business users easily access their data on a Mac. I don’t have much use for this, and no way to test it, so I won’t say much about it.

Other features include some additional UI improvements. Snow Leopard lets you drill down into folders when you’re looking at a stack on the Dock. I find this to be a really cool feature, letting me zip around my documents folder without popping up windows I don’t really need. Exposé has also been updated and integrated into the Dock. If you click and hold on a Dock icon, Exposé activates and shows you all of the open windows for that application. From there you can switch to a window, close the application, show it in the Finder, and even set it to launch at login.

There’s a whole bunch of other enhancements as well. You can read about them here.

Since the install, I’ve run into a few problems, but nothing I didn’t expect. The install itself went smoothly, taking the better part of an hour to complete. I experienced no data loss at all, and none of my applications were flagged as incompatible. I do have a few apps that aren’t Snow Leopard-ready, though.

After launching Mail, I was notified that both the GPGMail and GrowlMail plugins had been disabled due to incompatibilities. GrowlMail is more of a flashy extra, nothing I rely on heavily. Losing GPGMail was a blow, however, as I use it daily, and it looks like it won’t be updated anytime soon. The short story is that the internals of Mail changed significantly with the new release. Worse, Apple apparently doesn’t publish any sort of Mail API, which makes writing a Mail plugin that much more difficult. This is a real killer for me, as I really relied on this plugin. Hopefully someone will be able to step in and get it fixed soon.

I also noticed that Cisco’s Clean Access Agent is no longer functioning. It seems to run, but won’t identify the OS properly, so the system is rejected by the network. Supposedly the 4.6 release of CCA fixes this, but I haven’t been able to locate a copy to test yet.

Another broken app was Blogo, my blogging application. As usual, though, Brainjuice was on top of things, and I’m currently running a new beta version that seems to work properly. The real test will come when I finish writing this and try to post it…

Beyond these few apps, everything appears to be working properly. Hopefully the apps I use will be updated to 64-bit over the coming weeks and months, and I’ll see even more performance out of this system. As it is, the system seems to be running much quicker now, though I don’t have any definitive benchmarks to prove it.

So overall, I’m happy with the Snow Leopard upgrade. The speed and performance improvements thus far are great, and the new features are quite useful. The extra 10 GB of disk space doesn’t hurt, either. I definitely recommend the update, but make sure your apps are compatible beforehand.


Weave your way through the net …

One of the greatest strengths of Firefox is the ability to extend its capabilities through the use of plugins. If you want more out of your web browser, then you can usually find a plugin that will add that functionality.

One feature I searched for when I first started using Firefox was the ability to backup my bookmarks, and eventually, synchronize them between machines. I didn’t want to send my bookmarks to a third party, though, so addons for sites like Delicious were of no interest to me.

For years, I used a plugin called Bookmark Sync, which eventually became Bookmark Sync and Sort. Unfortunately, Sync and Sort was never updated to work with Firefox 3, so I had to look elsewhere for a solution.

I stumbled across another plugin called Foxmarks, now known as Xmarks. Xmarks was designed to synchronize bookmarks with the Foxmarks site, a third party. Fortunately, they added support for pointing the addon at your own server right around the time I was looking for a new solution. So for the next year or two, I used Xmarks.

Earlier this year, when Foxmarks became Xmarks, they started adding features I had no interest in. For instance, when searching with Google, Xmarks injected additional content into the search results. I also had intermittent problems with my custom server settings being reset, and some pretty serious speed issues when syncing. I tolerated it because there was nothing better out there, but it still bothered me.

In December of 2007, Mozilla Labs introduced a new concept called Weave. At the time, I didn’t really understand it; it sounded like something similar to Delicious and the other social bookmarking systems. I planned to keep an eye on the project, though. Fast forward to earlier this year, when 0.4 was released and I took a deeper look.

From what I’ve read, versions 0.1 and 0.2 synced through the WebDAV protocol, while 0.3 and later support a custom server, which Mozilla has released. Weave also syncs more than just bookmarks: passwords, tabs, form input, and more. After reading about the custom server, I decided to take Weave for a spin.

The first step was to set up the Weave server, which proved a bit more difficult than I initially thought. Mozilla provides the software via a Mercurial repository, so grabbing it is as simple as heading to the repository and downloading the latest tarball. Unfortunately, there doesn’t seem to be any “official” release channel for the server software, so you need to watch for updates manually. That said, I’ve had no problems with the current release.

Once you have the software, place it on your server in a location of your choosing. If you put it within an existing domain, you may need some fancy aliases or URL rewriting to make it work. That bit of the install took me a while to get right.
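For the record, what I ended up with looked roughly like the Apache snippet below. The paths and the /weave prefix are my own placeholders, and I’m assuming the server routes requests through a front-controller script (index.php here), so adjust to whatever your copy actually ships with:

Alias /weave /var/www/weave
<Directory /var/www/weave>
    Options FollowSymLinks
    AllowOverride None

    RewriteEngine On
    RewriteBase /weave
    # Hand anything that isn't a real file to the entry script
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ index.php/$1 [QSA,L]
</Directory>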

The server software uses a MySQL database to store all of the synchronized data, which was one of the biggest reasons I decided to check out Weave. I deal with MySQL databases almost every day, so I’m quite comfortable manipulating them. It also means I can quickly write my own interface to the data, should I choose to.

The rest of the server install consists of setting up the SQL database and tweaking some configuration variables. Once complete, you can point the Weave addon at your server and begin synchronizing. Make sure you go into the preferences and choose which data you’d like to synchronize.
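The database half of that is ordinary MySQL housekeeping; something like the following, where the names and password are my own placeholders and the schema itself ships with the server tarball:

CREATE DATABASE weave;
GRANT ALL PRIVILEGES ON weave.* TO 'weave'@'localhost' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;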

The Weave settings in Firefox are pretty straightforward and don’t need much explanation. One trick worth keeping in mind concerns the first sync from a new machine. Unless you want that machine’s existing bookmarks merged with what’s on the server, you need to use the “Sync Now” option from within the preferences menu. This command acts differently there than it does on the Tools menu or the icon at the bottom of the Firefox window: launched from Preferences, it presents a small options menu, shown below.

Using this menu, you can choose to replace all of the local data, replace all of the remote data, or merge the two. Very useful for preventing the default bookmarks from being merged back into your collection yet again.

Weave, thus far, seems to be pretty secure. It uses an HTTPS connection to communicate with the server, and a username and password are required to log in, as with most services. What sets Weave apart is an additional passphrase used to encrypt all of your data on the remote server. Weave encrypts the data locally on your machine before sending it over the network; if you look at what’s stored in MySQL, you’ll see that everything is encrypted before it’s added. Just do yourself a favor and don’t forget your passphrase. The username and password are recoverable, especially if you run your own server, but the passphrase is not.

As of this writing, Weave is up to a 0.6pre1 release. Synchronization speed has increased considerably, and additional features are being added. The current roadmap shows a 0.6 release by August 26th, but it doesn’t go into much detail beyond that. Regardless, Weave has proven extremely useful, and I’m looking forward to seeing where development leads. It’s definitely worth checking out.


Scanning the crowd

RFID, or Radio Frequency Identification, chips are used throughout the commercial world to handle inventory, identify employees, and more. Ever heard of EZPass? EZPass is an RFID tag you attach to your car window. When you pass near an EZPass receiver, the receiver records your ID and debits your EZPass account by the toll of the road or bridge you have just used.

One of the key concepts here is that an RFID tag can be read remotely. Depending on the type of tag, “remotely” can mean anywhere from a few feet to several yards. In fact, under certain conditions, it may be possible to read RFID tags at extreme distances of hundreds and possibly thousands of feet. It is this “feature” of RFID that attracts the attention of security researchers, and of others with more… nefarious intentions.

Of course, if we want to talk security and hacking, we need to talk about Defcon. Defcon, one of the largest, and most electronically hostile, hacking conventions in the US, and possibly the world, took place this past weekend, July 30th through August 2nd. Events ranged from presentations about security and hacking to lock-picking contests, a Jeopardy-style quiz, and more. One of the more interesting panels this year was “Meet The Feds,” where attendees were given the opportunity to ask questions of various federal representatives.

Of course, the “official” representatives on the panel weren’t the only feds attending Defcon. There are always others in the crowd, both out in the open and undercover. And while there is typically a “Spot the Fed” event at Defcon, some feds are around for reasons that require their identities to stay hidden. Unfortunately, the federal government is one of the many organizations that have jumped on the RFID bandwagon, embedding tags in their ID cards.

So, what do you get when you combine a supposedly secure technology with a massive gathering of technology-oriented individuals who like to tinker with just about everything? In the case of Defcon 17, you get an RFID scanner, complete with camera.

According to Threat Level, Wired’s security blog, a group of researchers set up a table at Defcon where they captured and displayed credentials from RFID chips. When data was captured, they snapped a picture of the person it was read from, essentially tying a face to the data stream.

So what’s the big deal? The short story is that this data can be used to impersonate the individual it was captured from. In some cases, it may be possible to alter the data and gain elevated privileges at the target company. In other cases, the data can be used to identify the individual, which was the fear at Defcon. Threat Level goes into detail about how this works, but to summarize: it is sometimes possible to tell where a person works from the identification strings sent by an RFID chip. Thus, it might be possible to identify undercover federal agents purely from the data captured in a passive scan.

The Defcon organizers asked the researchers to destroy the captured data, but the incident goes to show how insecure this information truly is. Even with encryption and other security measures in place, identifying an individual is still possible. To make matters worse, RFID chips are popping up everywhere. They’re attached to items you buy at the store, embedded in company identification cards, and even embedded in passports. And while the US government has gone so far as to provide special passport covers that block the RFID signal, there are still countless reasons to open your passport, even if only for a few moments. Remember, it takes just seconds to capture the data.

Beware… You never know who’s watching …


From Dinosaurs to Humans …

Evolution, through tech. This is insanely cool… Sure, it’s an ad, but it’s a damn good one. It’s for a German company called Saturn, which I’d never heard of. According to their website, they’re a consumer electronics retailer, kind of like Best Buy, I guess. Enjoy the video. (It’s also available in high definition; you can reach it here, or just click the HQ button on the video after it starts.)