Blacklisted!

Back in October of 2011, a bill was introduced in the House of Representatives called H.R. 3261, the “Stop Online Piracy Act” (SOPA). Go take a look, I’ll wait. It’s a relatively straightforward bill, especially compared to others I’ve looked at. Hell, it’s only 15 pages long! And it’s going to kill the Internet.

OK, OK... It won’t *KILL* the Internet, but it has the potential to ruin what we consider to be the Internet. Personally, I believe that if this passes, it could turn the Internet into nothing more than a collection of business websites, at least in the US.

So how does this thing work? Well, it’s actually pretty straightforward. If your website is suspected of infringing on copyrighted material, it gets taken down, any advertising you have on your site is cut, and you’re removed from search engines. But so what, you deserve it! You were breaking copyright law!

Not so fast. This applies to *any* content on your website. So if someone posts an infringing comment on a blog entry, or you innocently link to a website that infringes copyright, or any number of other situations outside of your control occur, you’re responsible. Basically, you have to police every single comment, link, etc. that appears on your website.

It’s even worse for service providers, since they have to do the blocking. Every infringing site is blocked via DNS. And since the US doesn’t control all of DNS, and some infringing sites aren’t located in the US, we move into the realm of maintaining DNS blacklists. The ISP becomes the responsible party if it fails to block these sites, which in turn means more overhead for the ISP. Think you pay a lot for Internet access now?
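
To give a rough idea of what that kind of blocking looks like in practice, here’s a minimal sketch of a DNS blacklist using BIND’s response policy zone (RPZ) feature. The zone name and the blocked domain are purely illustrative, and SOPA doesn’t mandate any particular mechanism; this just shows the sort of list every resolver operator would end up maintaining.

// named.conf (illustrative) -- point the resolver at a local response policy zone
options {
    response-policy { zone "rpz.blacklist"; };
};

zone "rpz.blacklist" {
    type master;
    file "rpz.blacklist.db";
};

; rpz.blacklist.db (illustrative) -- every listed name is rewritten to NXDOMAIN
$TTL 300
@                       IN SOA  localhost. hostmaster.localhost. ( 1 3600 600 86400 300 )
                        IN NS   localhost.
infringing-site.example CNAME   .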

So what can you do? Well, for one, you can contact your representative and tell them how insane this whole idea is. And you can protest SOPA itself by putting up a protest overlay on your site. There’s a GitHub project with all of the source code you need to add an overlay to your website. Or, if you have a Serendipity weblog, you can download the Stop SOPA plugin I’ve written.

Get out there and protest!

Bringing Social To The Kernel

Imagine a world where you log in to your computer once and have full access to all of the functionality of your computer, plus seamless access to all of the web sites you visit on a daily basis. No more logging into each site individually; your computer’s operating system takes care of that for you.

That world may be coming quicker than you realize. I was listening to a recent episode of the PaulDotCom security podcast today. In this episode, they interviewed Jason Fossen, a SANS Security Faculty Fellow and instructor for SEC 505: Securing Windows. During the conversation, Jason mentioned some of the changes coming to the next version of Microsoft’s flagship operating system, Windows 8. What he described was, in a word, horrifying…

Not much information is out there about these changes yet, but it’s possible to piece some of it together. Jason mentioned that Windows 8 will have a broker system for passwords. Basically, Windows will store all of the passwords necessary to access all of the various services you interact with. Think something along the lines of 1Password or LastPass, the main difference being that this happens in the background with minimal interaction from the user. In other words, you never have to explicitly log in to anything beyond your local Windows workstation.

Initially, Microsoft won’t have support for all of the various login systems out there. They seem to be focusing on their own service, Windows Live, and possibly Facebook. But the API is open, allowing third parties to provide the necessary hooks to their own systems.

I’ve spent some time searching for more information and what I’m finding seems to indicate that what Jason was talking about is, in fact, the plan moving forward. TechRadar has a story about the Windows 8 Credential Vault, where website passwords are stored. The credential vault appears to be a direct competitor to 1Password and LastPass. As with other technologies that Microsoft has integrated in the past, this may be the death knell for password managers.

ReadWriteWeb has a story about the Windows Azure Access Control Service that is being used for Windows 8. Interestingly, this article seems to indicate that passwords won’t be stored on the Windows 8 system itself, but in a centralized “cloud” system. A system called the Access Control Service, or ACS, will store all of the actual login information, and the Windows 8 Password Broker will obtain tokens that are used for logins. This allows users to access their data from different systems, including tablets and phones, and retain full access to all of their login information.

Microsoft is positioning Azure ACS as a complete claims-based identity system. In short, this allows ACS to become a one-stop shop for single sign-on. I log into Windows and immediately have access to all of my accounts across the Internet.

Sounds great, right? In one respect, it is. But if you think about it, you’re making things REALLY easy for attackers. Now they can, with a single login and password, access every system you have access to. It doesn’t matter that you’ve used different usernames and passwords for your bank accounts. It doesn’t matter that you’ve used longer, more secure passwords for those sensitive sites. Once an attacker gains a foothold on your machine, it’s game over.

Jason also mentioned another chilling detail. You’ll be able to log in to your local system using your Windows Live ID. So, apparently, if you forget the password for your local user, you can just log in with your Windows Live ID. It’s all tied together. According to the TechRadar story, “if you forget your Windows password you can reset it from another PC using your Windows Live ID, so you don’t need to make a password restore USB stick any more.” They go on to say the following:

You’ll also have to prove your identity before you can ‘trust’ the PC you sync them to, by giving Windows Live a second email address or a mobile number it can text a security code to, so anyone who gets your Live ID password doesn’t get all your other passwords too – Windows 8 will make you set that up the first time you use your Live ID on a PC.

You can always sign in to your Windows account, even if you can’t get online – or if there’s a problem with your Live ID – because Windows 8 remembers the last password you signed in with successfully (again, that’s encrypted in the Password Vault).

With this additional tidbit of information, it would appear that an especially crafty attacker could even go as far as compromising your entire system, without actually touching your local machine. It may not be easy, but it looks like it’ll be significantly easier than it was before.

Federated identity is an interesting concept, and it definitely has its place. But I don’t think tying everything together in this manner is a good move for security. Sure, you can already use your Facebook ID (or Twitter, Google, OpenID, etc.) as a single login for many disparate sites. In fact, these companies are betting on you doing so; it ties all of your activity back to one central place where the data can be mined for useful and lucrative bits. And perhaps in the realm of a social network, that’s what you want. But I think there’s a limit to how wide a net you want to cast. And if what Jason says is true, Microsoft may be building the equivalent of the One Ring. ACS will store them all, ACS will verify them, ACS will authenticate them all, and to the ether supply them.

The Zero-Day Conundrum

Last week, another “zero-day” vulnerability was reported, this time in Adobe’s Acrobat PDF reader. Anti-virus company Symantec reports that this vulnerability is being used as an attack vector against defense contractors, chemical companies, and others. Obviously, this is a big deal for all those being targeted, but is it really something you need to worry about? Are “zero-days” really worth defending against?

What is a zero-day anyway? Wikipedia has this to say:

A zero-day (or zero-hour or day zero) attack or threat is a computer threat that tries to exploit computer application vulnerabilities that are unknown to others or the software developer. Zero-day exploits (actual software that uses a security hole to carry out an attack) are used or shared by attackers before the developer of the target software knows about the vulnerability.

So, in short, a zero-day is an unknown vulnerability in a piece of software. Now, how do we defend against this? We have all sorts of tools on our side; surely there’s one that will catch these before they become a problem, right? IDS/IPS systems have heuristic filters for detecting anomalous activity. Of course, you wouldn’t want your IPS blocking arbitrary traffic, so that might not be a good idea. Anti-virus software also has heuristic filters, so that should help, right? Well… when’s the last time your heuristic filter caught something that wasn’t a false positive? So yeah, that’s probably not going to work either. So what’s a security engineer to do?

My advice? Don’t sweat it. Don’t get me wrong, zero-days are dangerous and can cause all sorts of problems, but unless you have an unlimited budget and an unlimited amount of time, trying to defend against an unknown attack is an exercise in futility. But don’t despair, there is hope.

Turns out, if you spend your time securing your network properly, you’ll defend against most attacks out there. Let’s look at this latest attack, for instance. Say you’ve spent millions and have the latest and greatest hardware with all the cutting-edge signatures and software. Someone sends the CEO’s secretary a seemingly innocuous PDF, which she promptly opens, and all that hard work goes out the window.

On the other hand, let’s assume you spent the small budget you have defending the critical data you store, and spent the time you saved on training the staff instead of decoding those advanced heuristics manuals. This time the CEO’s secretary looks twice, realizes this is an unsolicited email, and doesn’t open the PDF. No breach, the world is saved.

Seriously, though, spending your time and effort safe-guarding your data and training your staff will get you much further than worrying about every zero-day that comes along. Of course, you should be watching for these sorts of reports. In this case, for instance, you can alert your staff that there’s a critical flaw in this particular software and that they need to be extra careful. Or, if the flaw is in a web application, you can add the necessary signatures to look for it. But in the end, it’s very difficult, if not impossible, to defend against something you’re not aware of. Network and system security is complex and difficult enough without having to worry about the unknown.
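
To make the signature idea above concrete, here’s what such a rule might look like in Snort. This is only a sketch: the URI pattern, message text, and SID are hypothetical, and a real rule would be built from the specifics of the actual flaw.

alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"Possible exploit attempt against vulnerable web app"; \
    flow:to_server,established; content:"/vulnerable.php?cmd="; http_uri; nocase; \
    classtype:web-application-attack; sid:1000001; rev:1;)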

In Memoriam – Steve Jobs – 1955-2011

Somewhere in the early 1980s, my father took me to a bookstore in Manhattan. I don’t remember why, exactly, we were there, but it was a defining moment in my life. On display was a new wonder, a Macintosh computer.

Being young, I wasn’t aware of social protocol. I was supposed to be awed by this machine, afraid to touch it. Instead, as my father says, I pushed my way over, grabbed the mouse, and went to town. While all of the adults around me looked on in horror, I quickly figured out the interface and was able to make the machine do what I wanted.

It would be over 20 years before I really became a Mac user, but that first experience helped define my love of computers and technology.

Thank you, Steve.

Audit Insanity

<RANT>

It’s amazing, but the deeper I dive into security, the more garbage security theater I uncover. Sure, there’s insanity everywhere, but I didn’t expect to come across some of this craziness…

One of the most recent activities I’ve been party to has been the response to an independent audit. When I inquired as to the reasoning behind the audit, the answer I received was that this is a recommended yearly activity. It’s possible that this information is incorrect, but I suspect it’s truer than I’d like to believe.

Security audits like this are standard practice all over the US, and possibly the world. Businesses are led to believe that getting audited is a good thing and that audits should be repeated often. My main gripe here is that while audits can be good, they need to be done for the right reasons, not just because someone tells you they’re needed. Even worse are the audits forced on a company by its insurance company or its payment processor. Those sorts of audits are there to pass the blame if something bad happens.

Let’s look a little deeper. The audit I participated in was a typical security audit. An auditor contacts you with a spreadsheet full of questions for you to answer. You will, of course, answer them truthfully. Questions included inquiries about the password policy, how security policies are distributed, and how logins are handled. They delve into areas such as logging, application timeouts, IDS/IPS use, and more. It’s fairly in-depth, but ultimately just a checklist. The auditor goes through their list, interpreting your answers, and applying checkmarks where appropriate. The auditor then generates a list of items you “failed” to comply with and you have a chance to respond. This is all incorporated into a final report which is presented to whoever requested the audit.

Some audits will include a scanning piece as well. The one I’m most familiar with in this aspect is the SecurityMetrics PCI scan. Basically, you fill out a simplified yes/no questionnaire about your security and then they run a Nessus scan against whatever IP(s) you provide to them. It’s a completely brain-dead scan, too. Here’s a perfect example. I worked for a company that processed credit cards. The system used to do this was on a private network using outbound NAT. There were both IDS and firewall systems in place. For the size of the business and the frequency of credit card transactions, this was considerable security. But, because there was a payment card processor in the mix, they were required to perform a quarterly PCI scan. The vendor of choice: SecurityMetrics.

So, the security vendor went through their checklist and requested the IP of the server. I explained that it was behind a one-way NAT and inaccessible from the outside world. They wanted the IP of the machine, which I provided to them: 10.10.10.1. Did I mention that the host in question was behind a NAT? These “security professionals” then loaded that IP into their automated scanning system. And it failed to contact the host. Go figure. Again, we went around and around until they finally said that they needed the IP of the device doing the NAT. I explained that this was a router and wouldn’t provide them with any relevant information. The answer? We don’t care, we just need something to scan. So, they scanned a router. For years. Hell, they could still be doing it for all I know. Like I said, brain-dead security.

What’s wrong with a checklist, though? The problem is that it’s a list of “common” security practices not tailored to any specific company. So, for instance, the audit may require that a company use hardware-based authentication devices in addition to standard passwords. The problem here is that this doesn’t account for non-hardware solutions. The premise is that two-factor authentication is more secure than just a username and password. Sure, I wholeheartedly agree. But I would argue that public key authentication provides similar security: it satisfies the “What You Have” and “What You Know” portions of two-factor authentication. But it’s not hardware! Fine, put your key on a USB stick. (No, really, don’t. That’s not very secure.)
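
For example, an SSH keypair with a passphrase covers both factors: the private key file is the thing you have, and the passphrase protecting it is the thing you know. A quick sketch (the key type, size, and file name are just examples):

# Generate a 4096-bit RSA keypair; set a strong passphrase when prompted.
# The private key file is "what you have", the passphrase is "what you know".
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_example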

Other examples include the standard “Password Policy” crap that I’ve been hearing for years. Basically, you should expire passwords every 90 days or so, passwords should be “strong”, and you should prevent password reuse by remembering a history of passwords. So let’s look at this a bit. Forcing password changes every 90 days results in bad password habits. The reasoning is quite simple, and there have been studies that show this. This paper (pdf) from the University of North Carolina is a good example. Another decent write-up is this article from Cryptosmith. Allow me to summarize. Forcing password expiration results in people making simpler passwords, writing passwords down, or using simplistic algorithms to generate “complex” passwords. In short, cracking these “fresh” passwords is often easier than cracking well-thought-out ones.

The so-called “strong” password problem can be summarized by a rather clever XKCD comic. The long and short here is that passwords which cannot be easily cracked are either horrible mishmashes of numbers, letters, and symbols, or long strings of plain words. Seriously, “correct horse battery staple” is significantly stronger than an 11-character “complex” password like the comic’s “Tr0ub4dor&3”, which looks random but actually follows a predictable pattern.
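
If you want rough numbers behind that, the back-of-the-envelope entropy math looks something like this. The figures are estimates, assuming a 2,048-word list for the passphrase and 94 printable characters for a truly random string:

<?php
// Rough entropy estimates in bits (assumptions noted above)
$passphrase   = 4 * log(2048, 2);   // four random common words: ~44 bits
$random_chars = 11 * log(94, 2);    // 11 truly random printable characters: ~72 bits
$pattern      = 28;                 // XKCD's estimate for a "Tr0ub4dor&3"-style password
printf("passphrase: %.0f bits, random 11 chars: %.0f bits, patterned \"complex\": %d bits\n",
    $passphrase, $random_chars, $pattern);

In other words, four random words beat the patterned “complex” password handily; only a truly random string wins on raw entropy, and nobody remembers those.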

And, of course, password history. This sort of goes hand-in-hand with password expiration, but not always. If it’s used in conjunction with password expiration, then it generally results in single-character variations of passwords. Your super-secure “complex” password of “Password1” (seriously, it meets the criteria: uppercase, lowercase, number) becomes a series of passwords where the 1 is changed to a 2, then 3, then 4, etc., until the history is exceeded and the user can return to 1 again. It’s easier to remember that way and the user doesn’t have to do much extra work.

So even the standard security practices on the checklist can be questioned. The real answer here is to tailor each audit to the needs of whoever requested it, and to properly evaluate the responses based on the security posture of the responder. There do need to be baselines, but they should be sane baselines. If you don’t get all of the checkmarks on an audit, it may not mean you’re insecure; it may just mean you’re securing your network in a way the auditor didn’t think of. There’s more to security than fancy passwords and firewalls. A lot more.

</RANT>

Much Ado About Lion

Apple released the latest version of its OS X operating system, Lion, on July 20th. With this release came a myriad of changes in both the UI and back-end systems. Many of these features have been denounced by critics as signs that Apple is slowly killing off OS X in favor of iOS. After spending some time with Lion, I have to disagree.

Many of the new UI features are very iOS-like, but I’m convinced that this is not a move to dumb down OS X. I believe this is a move by Apple to make the OS work better with the hardware it sells. Hear me out before you declare me a fanboy and move on.

Since the advent of the unibody MacBook, Apple has been shipping buttonless input devices. The MacBook itself has a large touchpad, sans button. Later, they released the Magic Mouse, sort of a transition device between mice and trackpads. I’m not a fan of that particular device. And finally, there’s the Magic Trackpad they ship today: no buttons, lots of room for gestures. Just check out the copy directly from their website.

If you look at a lot of the changes made in Lion, they go hand-in-hand with new gestures. Natural scrolling moves the content in the same direction your fingers are moving. Swipe three fingers to the left or right and the desktop you’re on moves along with them. Pinch your thumb and three fingers together and Launchpad appears, a quick, simple way to access your applications. Similar gestures are available for the Magic Mouse as well.

These gestures allow for quick and simple access to many of the more advanced features of Lion. Sure, iOS had some of these features first, but just because they’ve moved to another platform doesn’t mean that the platforms are merging.

Another really interesting feature in Lion is one that has been around for a while in iOS. When Apple first designed iOS, they likely realized that standard scrollbars chew up a significant amount of screen real estate. Sure, on a regular computer it may be a relatively small percentage, but on a small screen like a phone, it’s significant. So, they designed a thinner scrollbar, minus the arrows normally seen at the top and bottom, and made it auto-hide when the screen isn’t being scrolled. This saved a lot of room on the screen.

Apple has taken the scrollbar feature and integrated it into the desktop OS, and the effect is pretty significant. The amount of room saved on-screen is quite noticeable. I have seen a few complaints about this new feature, however, mostly that it’s difficult to grab the scrollbar with the mouse pointer, or that the arrow buttons are gone. I think the former is just a general “they changed something” complaint, while the latter is truly legitimate. There have been a few situations where I’ve looked for the arrow buttons and their absence was noticeable. I wonder, however, whether this is a function of habit, or if their use is truly necessary. I’ve been able to work around this pretty easily on my MacBook, but after I install Lion on my Mac Pro, I expect that I’ll have a slightly harder time. Unless, that is, I buy a trackpad. As I said, I believe Apple has built this new OS with their newer input devices in mind.

On the back end, Lion is, from what I can tell, completely 64-bit. Apple no longer ships Java or Flash with the OS and, interestingly, has banned both from the Mac App Store: no apps that require Java or Flash can be sold there. Interesting move. Rosetta, the emulation software that allowed older PowerPC software to run, has been removed as well.

Overall, I’m enjoying my Lion experience. I still have the power of a Unix-based system with the simplicity of a well-thought-out GUI. I can still do all of the programming I’m used to as well as watch videos, listen to music, and play games. I think I’ll keep a traditional multi-button mouse around for gaming, though.

Fixing the Serendipity XMLRPC plugin

A while ago I purchased a copy of BlogPress for my iDevices. It’s pretty full-featured and seems to work pretty well. Problem was, I couldn’t get it to work with my Serendipity-based blog. Oh well, a wasted purchase.

But not so fast! Every once in a while I go back and search for a possible solution. This past week I finally hit paydirt. I came across this post on the s9y forums.

This explained why BlogPress was crashing when I used it. In short, it was expecting to see a categoryName tag in the resulting XML from the Serendipity XMLRPC plugin. Serendipity, however, used description instead, likely because Serendipity has better support for the MetaWeblog API.

Fortunately, fixing this problem is very straightforward. All you really need to do is implement both APIs and return the data each one expects at the same time. To fix this particular problem, it’s a single-line addition to the serendipity_xmlrpc.inc.php file located in $S9YHOME/plugins/serendipity_event_xmlrpc. That addition is as follows:


if ($cat['categoryid']) $xml_entries_vals[] = new XML_RPC_Value(
    array(
      'description'   => new XML_RPC_Value($cat['category_name'], 'string'),
      // XenoPhage: Add 'categoryName' to support mobile publishing (Thanks PigsLipstick)
      'categoryName'  => new XML_RPC_Value($cat['category_name'], 'string'),
      'htmlUrl'       => new XML_RPC_Value(serendipity_categoryURL($cat, 'serendipityHTTPPath'), 'string'),
      'rssUrl'        => new XML_RPC_Value(serendipity_feedCategoryURL($cat, 'serendipityHTTPPath'), 'string')
    ),
    'struct'
);

And poof, you now have proper category support for the MovableType API, and BlogPress no longer crashes.

Evaluating a Blogging Platform

I’ve been pondering my choices lately, trying to determine whether I should stay with my current blogging platform or move to another one. There’s nothing immediate forcing me to change, nor is there anything overly compelling tying me to the platform I’m currently using. This is an exercise I seem to go through from time to time. It’s probably for the better, as it keeps me abreast of what else is out there and allows me to re-evaluate choices I’ve made in the past.

So, what is out there? Well, Serendipity has grown quite a bit as a blogging platform and is quite well supported. That, in its own right, makes it a worthy choice. The plugin support is quite vast and the API is simple enough that creating new plugins when the need arises is a quick task.

There are some drawbacks, however. Since it’s not quite as popular as some other platforms, interoperability with some things is difficult. For instance, the offline blogging tool I’m using right now, BlogPress, doesn’t work quite right with Serendipity. I believe this might be due to missing features and/or bugs in the Serendipity XMLRPC interface. Fortunately, someone in the community had already debugged the problem and provided a fix.

WordPress is probably one of the more popular platforms right now. Starting a WordPress blog can be as simple as creating a new account at wordpress.com. There’s also the option of downloading the WordPress distribution and hosting it on your own. As with Serendipity, WordPress also has a vibrant community and a significant plugin collection. From what I understand, WordPress also has the ability to be used as a static website, though that’s less of an interest for me. WordPress has wide support in a number of offline blogging tools, including custom applications for iPad and iPhone devices.

There are a number of “cloud” platforms as well. Examples include Tumblr, LiveJournal, and Blogger. These platforms have a wide variety of interoperability with services such as Twitter and Flickr, but you sacrifice control. You are at the complete mercy of the platform provider with very little recourse. For instance, if a provider disagrees with you, they can easily block or delete your content. Or the provider can go out of business, leaving you without access to your blog at all. These, in my book, are significant drawbacks.

Another possible choice is Drupal. I’ve been playing around with Drupal quite a bit, especially since it’s the platform of choice for a lot of projects I’ve been involved with lately. It seems to fit the bill pretty well and is incredibly extensible. In fact, it’s probably the closest I’ve come to actually making a switch up to this point. The one major hurdle I have at the moment is lack of API support for blogging tools. Yes, I’m aware of the BlogAPI module, but according to the project page for it, it’s incomplete, unsupported, and the author isn’t working on it anymore. While I was able to install it and initially connect to the Drupal site, it doesn’t seem that any of the posting functionality works at this time. Drupal remains the strongest competitor at this point and has a real chance of becoming my new platform of choice.

For the time being, however, I’m content with Serendipity. The community remains strong, there’s a new release on the horizon, and, most important, it just works.

Technology in the here and now

I’m writing this while several thousand feet up in the air, on a flight from here to there. I won’t be able to publish it until I land, but that seems to be the exception these days rather than the norm.

And yet, while preparing for takeoff, the same old announcements are made. Turn off cell phones and pagers, disable wireless communications on electronic devices. And listening around me, hurried conversations between passengers as they ensure that all of their devices are disabled. As if a stray radio signal will cause the airplane to suddenly drop from the sky, or prevent it from taking off to begin with.

Why is it that we, as a society, cannot get over these simple hurdles? Plenty of studies have shown that these devices don’t interfere with planes. In fact, some airlines are offering in-flight wireless access. Many airlines have offered in-flight telephone calls. Unless my understanding of flight is severely limited, I’m fairly certain that all of these functions use radio signals to operate. And yet we are still told that stray signals may cause planes to crash, may cause interference with the pilot’s instrumentation.

We need to get over this hurdle. We need to start spending our time looking to the future, advancing our technology, forging new paths. We need to stop clinging to outdated ideas. Learning from our past mistakes is one thing, and there’s merit in understanding history. But let’s spend our energy wisely and make the simple things we take for granted even better.

Hey KVM, you’ve got your bridge in my netfilter…

It’s always interesting to see how new technologies alter the way we do things.  Recently, I worked on firewalling for a KVM-based virtualization platform.  From the outset it seems pretty straightforward.  Set up iptables on the host and guest and move on.  But it’s not that simple, and my google-fu initially failed me when searching for an answer.

The primary issue was that when iptables was enabled on the host, the guests became unavailable.  If you enable logging, you can see the traffic being blocked by the host, thus never making it to the guest.  So how do we do this?  Well, if we start with a generic iptables setup, we have something that looks like this:

# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

Adding logging to identify what’s going on is pretty straightforward.  Add two logging lines, one for the INPUT chain and one for the FORWARD chain.  Make sure these are added as the first rules in their chains; otherwise you’ll jump to the RH-Firewall-1-INPUT chain and never make it to the log.

-A INPUT -j LOG --log-prefix "Firewall INPUT: "
-A FORWARD -j LOG --log-prefix "Firewall FORWARD: "

Now, with this in place you can try sending traffic to the guest.  If you tail /var/log/messages, you’ll see the blocking done by netfilter.  It should look something like this:

Apr 18 12:00:00 example kernel: Firewall FORWARD: IN=br123 OUT=br123 PHYSIN=vnet0 PHYSOUT=eth1.123 SRC=192.168.1.2 DST=192.168.1.1 LEN=56 TOS=0x00 PREC=0x00 TTL=64 ID=18137 DF PROTO=UDP SPT=56712 DPT=53 LEN=36

There are a few things of note here.  First, this occurs on the FORWARD chain only.  The INPUT chain is bypassed completely.  Second, the system recognizes that this is a bridged connection.  This makes things a bit easier to fix.

My attempt at resolving this was to put in a rule that allowed traffic to pass for the bridged interface.  I added the following:

-A FORWARD -i br123 -o br123 -j ACCEPT

This worked as expected and allowed the traffic through the FORWARD chain, making it to the guest unmolested.  However, this method means I have to add a rule for every bridge interface I create.  While explicitly adding rules for each interface should make things more secure, it also means I may need to change iptables while the system is in production and running, which is not something I want to do.

A bit more googling led me to this post about KVM and iptables.  In short it provides two additional methods for handling this situation.  The first is a more generalized rule for bridged interfaces:

-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Essentially, this rule tells netfilter to accept any traffic for bridged interfaces.  This removes the need to add a new rule for each bridged interface you create, making management a bit simpler.  The second method is to completely remove bridged interfaces from netfilter’s view.  Set the following sysctl variables:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
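
For reference, these can be applied on the fly with sysctl and added to /etc/sysctl.conf so they survive a reboot (a quick sketch, assuming a stock Linux host):

# Apply immediately
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-arptables=0

# Persist by adding the same three lines to /etc/sysctl.conf, then reload with:
sysctl -p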

I’m a little worried about this method as it completely bypasses iptables on the host.  However, it appears that this is actually a more secure manner of handling bridged interfaces.  According to this bugzilla report and this post, allowing bridged traffic to pass through netfilter on the host can result in a possible security vulnerability.  I believe this is somewhat similar to a cryptographic hash collision: attackers can take advantage of netfilter entries with similar IP/port combinations and possibly modify traffic or access systems.  By using the sysctl method above, the traffic completely bypasses netfilter on the host and these attacks are no longer possible.

More testing is required, but I believe the latter method of using sysctl is the way to go.  In addition to the security considerations, bypassing netfilter has a positive impact on throughput.  It seems like a win-win from all angles.