Archive for the ‘Technology’ Category

The Future of Personal Computers

Saturday, September 15th, 2012

The latest version of OS X, Mountain Lion, has been out for a few months, and the next release of Windows, Windows 8, will be out very soon. These operating systems continue the trend of bringing new and radical features to the desktop, features we’ve previously seen only in mobile interfaces. For instance, OS X has Launchpad, an icon-based menu for launching applications similar to the interface used on the iPhone and iPad. Windows 8 has Microsoft’s new Metro interface, a tile-based design first seen on Windows Phone.

As operating systems evolve and mature, we’ll likely see more of this. But what will the interface of the future look like? How will we be expected to interact with the computer, both desktop and mobile, in the future? There’s a lot out there already about how computers will continue to become an integral part of daily life, how they’ll become so ubiquitous that we won’t know we’re actually using them, etc. It’s fairly easy to argue that this has already happened, though. But putting that aside, I’m going to ramble on a bit about what I think the future may hold. This isn’t a prediction, per se, but more of what I’m thinking we’ll see moving forward.

So let’s start with today. Touch-based devices running iOS and Android have become the standard for mobile phones and tablets. In fact, the Android operating system is being used for much more than this, appearing in game consoles such as the OUYA, as the operating system behind Google’s Project Glass initiative, and more. That’s not much of a surprise, of course, as Linux has been making these inroads for years and Android is, at its core, an enhanced distribution of Linux designed for mobile and embedded applications.

The near future looks like it will be filled with more touch-based interfaces as developers iterate and enhance the current state of the art. I’m sure we’ll see streamlined multi-touch interfaces, novel ways of launching and interacting with applications, and new uses for touch-based computing.

For desktop and laptop systems, the traditional input methods of keyboards and mice will be enhanced with touch. We see this happening already with Apple’s Magic Mouse and Magic Trackpad. Keyboards will follow suit with enhanced touch pads integrated into them, reducing the need to reach for the mouse. And while some keyboards with attached touchpads exist today, I believe we’ll start seeing tighter integration of multi-touch capabilities.

We’re also starting to see the beginnings of gesture-based devices such as Microsoft’s Kinect. Microsoft bet a lot on Kinect as the next big thing in gaming, a direct response to Nintendo’s Wii and Sony’s Move controllers. And since the launch of Kinect, hobbyists have been hacking away, adding Kinect support to “traditional” computer operating systems. Microsoft has responded, releasing a development kit for Windows and designing a version of Kinect intended for use with desktop operating systems.

Gesture-based interfaces have long been perceived as the ultimate in computer interaction. Movies such as Minority Report and Iron Man have shown the world what such interfaces may look like. But real life is far different from a movie. Humans aren’t built to hold their arms out in front of them for long periods of time; the resulting fatigue is commonly known as “gorilla arm.” Designers will have to adapt the technology in ways that work around these physical limitations.

Tablet computers work well at the moment because most interaction with them happens on a horizontal rather than a vertical plane, so users don’t need to strain to use them. Limited applications, such as ATMs, are more tolerant of these constraints since the duration of use is short.

Right now we’re limited to 2D interfaces for applications. How will technology adapt when true 3D displays exist? It stands to reason that some sort of gesture interface will come into play, but in what form? Will we have interfaces like those seen in Iron Man? For designers, such an interface could open up endless possibilities. Perhaps a merging of 2D and 3D interfaces will allow for this. We already have 3D renderings in modern design software, but imagine software that renders in true 3D, where the designer moves their head instead of their screen to interact. That would be a real breakthrough.

What about mobile life? Will touch-based interfaces continue to dominate? Or will wearable computing with HUD-style displays become the new norm? I’m quite excited at the prospect of using something like Google’s Project Glass in the near future. The cost is still prohibitive for the average user, but it’s far below the cost of similar cutting-edge technologies a mere five years ago. And prices will continue to drop.

Perhaps in the far future, 20+ years from now, the input device will be our own bodies, à la Kinect, with a display small enough to be embedded in our eyes or worn as a contact lens. Maybe in that timeframe we truly become one with the computer and transform from mere humans into cyborgs. There will always be those who won’t follow suit, but for those of us with the interest and the drive, those will be interesting times, won’t they?

Multiple Personalities With The Linux Kernel

Friday, June 29th, 2012

Virtualization is all the rage these days: taking a single computer system and installing multiple “guest” operating systems on it. The benefits are a reduced footprint and better utilization of existing resources. There is a danger, however, in having many systems dependent on a single piece of hardware. The solution, of course, is to use multiple pieces of hardware and allow your “guests” to be moved between the individual hardware units, making the overall system more resilient to failure.

I’ve started playing a bit with virtualization, specifically, KVM virtualization. For my purposes, I’m using CentOS 6.x on a 64-bit capable system.

The hypervisor itself is a standard CentOS base install with the addition of KVM and various management packages. I installed the hypervisor on an LVM volume on top of RAID1, allowing me some room to grow if necessary, and reserved the remainder of the drive for virtual hosts. While you can use file-based disk images for virtual disks, I prefer LVM volumes on top of RAID, which give me the ability to grow a disk if necessary as well as a minor bump in speed.

Using yum, adding KVM to an existing installation is a pretty straightforward process:

yum install virt-manager libvirt libvirt-python python-virtinst

That should take care of any dependencies required to get KVM virtualization up and running.
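
Before moving on, it doesn’t hurt to confirm that the libvirt daemon is running and set to start at boot; on CentOS 6 that looks roughly like this:

service libvirtd start
chkconfig libvirtd on
virsh -c qemu:///system nodeinfo

If virsh prints the host’s CPU and memory details, the hypervisor side is ready.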

Next up, we need to tackle networking. There are many, many different configurations, far too many to go through here. So, I’m going to keep it simple. For my purposes, I need a single connection to the outside network, all in the same VLAN, as well as a local NAT for some VMs that I need local access to, but that don’t need to be accessed via the Internet.

Setting this up is brilliantly simple. First, copy the /etc/sysconfig/network-scripts/ifcfg-eth0 file to /etc/sysconfig/network-scripts/ifcfg-br0. Next, edit the ifcfg-eth0 file. You’ll need to remove a bunch of lines and add a BRIDGE line as follows:

DEVICE="eth0"
BRIDGE="br0"
HWADDR="00:11:22:33:44:55"
ONBOOT="yes"

Next, edit the ifcfg-br0 file. All you really need to do here is change the DEVICE= line to br0, set TYPE to Bridge, and drop the HWADDR line. I also recommend disabling NM_CONTROLLED … NetworkManager shouldn’t be installed anyway since you used a base install, but better safe than sorry. In the end, the ifcfg-br0 file should look something like this:

DEVICE="br0"
BOOTPROTO="static"
BROADCAST="204.10.167.63"
IPADDR="204.10.167.50"
NETMASK="255.255.255.192"
ONBOOT="yes"
NM_CONTROLLED="no"
TYPE="Bridge"
DELAY="0"

Restart networking and the bridge will be ready to go. The NAT portion of this is handled by KVM itself, so there’s nothing more to configure there.
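
On CentOS 6 that restart is a single service command; afterwards, brctl (from the bridge-utils package) is a quick way to confirm that eth0 is attached to br0:

service network restart
brctl show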

Without guests, however, all you have is a basic Linux system with a few extra packages taking up space. The real magic starts when you create and install your first VM. My recommendation is to start by creating a template system you can clone later rather than hand-installing every single VM. To install the template, first decide on the base disk size. I’m using 15 GB volumes, which is more than enough for the base install and leaves room for most basic server configurations. If you need more space, you can attach additional disks later.

I’m not going to go into how I set up LVM; there are plenty of tutorials out there. For the purposes of this article, I have a volume group named vg_libvirt where I plan to store all of the virtual machines. So first we create the disk necessary for the template:

lvcreate -L15G -n template_base vg_libvirt

Next we install the OS. virt-install is essentially a wrapper script that sets all the necessary values within KVM to get you going. After the settings are configured and the VM is started, virt-install will automatically attach you to the VM console. The full command I used to install is as follows:

virt-install --accelerate --hvm --connect qemu:///system --network bridge:br0 --name template --ram 512 --disk=/dev/mapper/vg_libvirt-template_base --vcpus=1 --check-cpu --nographics --extra-args="console=ttyS0 text" --location=/tmp/CentOS-6.2-x86_64-bin-DVD1.iso

Since this is effectively a text install, you do run into a bit of a problem. Namely, you can’t configure the drives the way you want. There is a way around this, though it takes a bit of work. Of course, since you’re creating a template, the little bit of work now is easily made up for later. So, here’s how I handled the drive configuration.

First, run through a basic install using the above install method. Once you’re up and running, log into the new VM and head to the root home directory. In that directory you’ll find a kickstart file called anaconda-ks.cfg. Make a local copy of that file and shut down the VM.
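
You’ll want that copy on the hypervisor, since that’s where the next virt-install run happens. A minimal sketch using scp, with a hypothetical user and hostname standing in for your hypervisor:

scp /root/anaconda-ks.cfg user@hypervisor:/root/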

The kickstart file gives you the basic parameters that CentOS used to configure the system. You can edit this file and use it yourself to automatically install and configure systems. For our purposes, we’re interested in editing the drive configuration and then using the kickstart file to create the template. So, edit the file and set the parameters as you see fit. An example is as follows:

# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --noipv4 --noipv6
rootpw --iscrypted somerandomstringthatiwontrevealtoyoubutnicetry
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --driveorder=vda --append=" console=ttyS0 crashkernel=auto"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --all --drives=vda
part /boot --fstype=ext4 --size=500
part swap --size=2048
part pv.253002 --grow --size=1
volgroup VolGroup --pesize=4096 pv.253002
logvol / --fstype=ext4 --name=lv_root --vgname=VolGroup --size=4096
logvol /tmp --fstype=ext4 --name=lv_tmp --vgname=VolGroup --size=2048
logvol /var --fstype=ext4 --name=lv_var --vgname=VolGroup --size=4096
logvol /home --fstype=ext4 --name=lv_home --vgname=VolGroup --size=2048
#repo --name="CentOS" --baseurl=cdrom:sr0 --cost=100
%packages --nobase
@core
%end

Once you have this, you can re-run the virt-install command from above with a slight tweak to make the install use the kickstart file you created (I named it kick1.ks):

virt-install --accelerate --hvm --connect qemu:///system --network bridge:br0 --name template --ram 512 --disk=/dev/mapper/vg_libvirt-template_base --vcpus=1 --check-cpu --nographics --initrd-inject=/path/to/kick1.ks --extra-args="ks=file:/kick1.ks console=ttyS0 text" --location=/tmp/CentOS-6.2-x86_64-bin-DVD1.iso

This will nuke the existing VM and replace it with one configured with the drive partitions as set in the kickstart file. And now you almost have a template.

You could clone this new VM as-is, but if you’ve set an IP on it, you’ll run into duplicate IP problems. The SSH host keys on the machine will be cloned as well, leaving all of your systems with identical keys. And other machine-specific settings will be carried over too. This can all be worked around, though.

I recommend that you first configure this new template with the basic settings you want on all of your VMs. For instance, if you’re using Spacewalk for server management, you can install all of the necessary Spacewalk packages. You can configure a standard iptables template for the system. Maybe you have some standard security software you use, such as OSSEC. And, of course, create the standard users on the system so you don’t have to create them each time you clone the VM. Once everything is installed and running how you want it, perform the following actions to make the template:

touch /.unconfigured
rm -rf /etc/ssh/ssh_host_*
poweroff

The VM will power down and you’ll have your template. Cloning it to a new VM is quick and simple. First, create the new logical volume as we did before. Next, clone the VM to the new drive:

virt-clone -o template -n new_vm -f /dev/mapper/vg_libvirt-new_vm_base

Simple enough, right? Run this command and, when it completes, you can start the VM and connect to the console. You’ll be greeted with the standard first-boot process and then dropped at a login prompt. Congratulations, you now have a VM. Set the IP, configure whatever services you need, and you’re off to the races.
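
For reference, starting the clone and attaching to its serial console is just a pair of virsh commands (using the domain name from the clone above):

virsh start new_vm
virsh console new_vm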

If you need to modify the RAM, number of CPUs, etc., then use the virsh command on the hypervisor. You’ll need to shut down the VM and restart it in order for these changes to take effect.
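
A sketch of one way to do that, editing the clone’s persistent definition with virsh edit (the domain name matches the clone above):

virsh shutdown new_vm
virsh edit new_vm    # adjust the <memory> and <vcpu> elements, then save and exit
virsh start new_vm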

And that’s really all there is to it. The VMs themselves can be treated as self-contained systems with no special care necessary … One note, however. If you reboot the hypervisor, the VMs are paused before the reboot and resumed afterward. This leads to an interesting quirk in that the uptime on a VM can easily exceed that of the hypervisor. Be aware of this and don’t depend on a VM’s uptime to be accurate.

Monitoring as a Lifestyle

Friday, April 6th, 2012

A few years ago, I wrote a blog entry about losing weight using the Wii Fit. This worked really well for me and I was quite happy with the weight I lost. But I found, over time, that I put at least some of the weight back on. Most of this, I believe, was due to not having a full understanding of how much I was eating.

I’ve since switched from using the Wii Fit to using the XBox Kinect for fitness. I also go to fitness classes outside of home, but that’s a more recent change. But this blog entry isn’t really about fitness alone. It’s about monitoring your lifestyle, keeping track of the data you generate on a daily basis. Right now, I track a lot of personal data about my weight, what I eat, how often I work out, how I sleep, etc.

Allow me to lay out some of the tools I use on a daily basis. First off, my phone. I happen to be an iPhone user at the moment, though any modern smartphone has somewhat similar capabilities. Using my phone, I can view and edit my data whenever I need to, wherever I am. There are literally thousands of applications that can be used to track data about yourself. I’m hoping to be able to aggregate all or most of this data in a single location at some point, but for now, it’s spread across a few different services.

I’m typically fairly private about my data and I tend to avoid most cloud services. However, I have found that it’s virtually impossible to do the type of tracking I want without having to build every single tool myself. So, instead, I use a few online services and provide them with virtually no personal information about myself beyond what is required to make the service work.

So what am I using, anyway? Let’s start with how I track my diet. I’m using a service called MyFitnessPal to track my daily caloric intake. This has significantly helped me redefine my dietary habits and realize how much I should be eating. Previously, I would try to reduce my intake by spreading out meals over the course of the day. While this is a great habit, in the end I believe I was eating more than I should have been, despite my intent. Using the MyFitnessPal application, I get a clear view of where I stand at any point during the day. I’ve been able to significantly reduce my intake without having to shun the foods I love.

On the fitness side of things, I work out every morning before work using the Xbox Kinect and Your Shape Fitness. I switched over to this when the original Your Shape game came out and I’ve been quite happy. The Wii Fit is a great tool to start with, and it has the benefit of checking your weight every time you play, something I do miss with Your Shape. But the exercises became far too easy to complete. Your Shape pushes a bit harder, bringing a higher level of exercise to my daily routine. And now with the new version, they’ve raised the bar a bit, allowing me to push even harder. There are a few areas I’d like to see improvements in, but overall, I don’t have many complaints.

Using the Your Shape app on my phone, I get a readout of my exercise for the day, as well as an estimate of the calories I burned. I take this information and enter it into the My Fitness Pal application. Doing this allows me to increase my allotment of calories for the day based on how active I have been. In a way, I guess it works like a reward system, granting me the ability to enjoy a little more each day I spend time to work out.

I also wear a Jawbone Up. The Up is a pretty cool little device that tracks your movement during the day and your sleep patterns at night. It can also be used to track your food, though the interface for this is a bit lacking, which is why I use MyFitnessPal. The Up gives me a great view of how active I am during the day, as well as a view of how well I’m sleeping at night. Jawbone has had a bit of a hard time with this particular product, but my personal experience has been pretty positive thus far.

I have a few applications on my phone for tracking runs, though I use them for walking instead. I’m not much of a runner. These applications are a dime a dozen, and I don’t really have a preference at this point. As long as the application gives feedback on distance and route, it’s typically good enough. The application for the Up has this capability as well, though I haven’t had a chance to try it out yet.

And finally, I use an application to track my weight on a daily basis. One of the first things I do in the morning is weigh myself. I’m currently using an application called TargetWeight by Tactio. Basically, this application tracks your weight over time, offering up a few features to help along the way. If you enter a target weight, the application will show you the weight left to lose as part of the icon on your phone. Additionally, it will attempt to predict when you’ll hit your target weight based on the historical data it has collected. There’s a nice graphical view of your weight over time as well. Entering your weight is a quick process each morning and is one of the biggest motivators for me. There’s also an option to use a WiFi-enabled Withings scale to enter your data wirelessly.

All together, these various applications and tools give me better insight into my daily health. This is obviously not for everyone, but to each his own; this approach has worked wonders for me. I’ve lost about 30 pounds in the past two months, and I’m getting quite close to my current target weight.

MAKE: Mass Monitor Rebuild

Monday, February 20th, 2012

A few years ago, I came across a Mass EDI 4-monitor display. The computer system I had just happened to have two dual-display video cards, so it was a perfect match. Last year, one of the displays burned out and had to be replaced. Unfortunately, Mass wanted upwards of $500 for a new display. I did have a number of Dell displays available, though, and decided to look into adding one of those to the mix.

My initial attempt at adding a Dell to the mix was fairly crude, but it worked. I decided to rebuild the entire array this past week and remove the remaining three Mass monitors. There were two main reasons for this. First, the crude setup I had with the first Dell monitor wasn’t an ideal situation. The way the new monitor was mounted, it pressed up against the others and was difficult to adjust. The second reason was that I have a new video card, a Galaxy nVidia GeForce 210, that requires DVI and not VGA. The version of the Mass display I had didn’t support DVI.

And so I started to look at how to better mount a Dell display on a Mass multi-monitor array. The Dell monitor I used initially was a 1907FP. The general size was about right; it just needed to be lifted up away from the lower monitor a bit. The main problem with the existing mount was that there was really only one location where the Mass mounting bracket could be coupled to the Dell mounting bracket without adding additional hardware. The Dell monitor has a small button on the back to remove it from its mounting, and the Mass has a lever of sorts that does the same. The coupling had to take both of these removal mechanisms into consideration. I spoke with a colleague about the problem and we came up with a small coupling plate that would raise the Dell monitor up, keep both removal mechanisms clear, and allow for much better adjustment of the resulting monitor array.

Assembly was pretty straightforward. To attach the coupling plate to the Dell monitor, the Dell mount was removed from the original stand, lined up with the coupling plate, and drilled with matching holes.

Once the Dell side was finished, the Mass mount was removed from the original monitor and paired up with the augmented Dell mount.

And finally, the new augmented mounting brackets were attached to both the Dell monitor and the Mass monitor array. The dangling VGA cable was for testing prior to the installation of the new video card.

All that remains now is general adjustment of the new monitors. There’s a single hex screw on the Mass array behind each monitor that can be used to adjust the monitor up and down, as well as to change its angle. This should allow me to adjust the display to exactly what I need. And it now works with the new video card, which was a breeze to install and get running in Fedora.

I love it when a plan comes together.

Much Ado About Lion

Sunday, August 7th, 2011

Apple released the latest version of its OS X operating system, Lion, on July 20th. With this release came a myriad of changes in both the UI and back-end systems. Many of these features have been denounced by critics as Apple slowly killing off OS X in favor of iOS. After spending some time with Lion, I have to disagree.

Many of the new UI features are very iOS-like, but I’m convinced that this is not a move to dumb down OS X. I believe this is a move by Apple to make the OS work better with the hardware it sells. Hear me out before you declare me a fanboy and move on.

Since the advent of the unibody MacBook, Apple has been shipping buttonless input devices. The MacBook itself has a large touchpad, sans button. Later, they released the Magic Mouse, sort of a transitional device between mice and trackpads. I’m not a fan of that particular device. And finally, they’re shipping the Magic Trackpad today. No buttons, lots of room for gestures. Just check out the copy direct from their website.

If you look at a lot of the changes made in Lion, they go hand in hand with new gestures. Natural scrolling allows you to move the screen in the same direction your fingers are moving. Swipe three fingers to the left or right and the desktop you’re on moves along with them. Pinch your thumb and three fingers together and Launchpad appears, a quick, simple way to access your applications folder. Similar gestures are available for the Magic Mouse as well.

These gestures allow for quick and simple access to many of the more advanced features of Lion. Sure, iOS had some of these features first, but just because they’ve moved to another platform doesn’t mean that the platforms are merging.

Another really interesting feature in Lion is one that has been around for a while in iOS. When Apple first designed iOS, they likely realized that standard scrollbars chew up a significant amount of screen real estate. Sure, on a regular computer it may be a relatively small percentage, but on a small screen like a phone, it’s significant. So, they designed a thinner scrollbar, minus the arrows normally seen at the top and bottom, and made it auto-hide when the screen isn’t being scrolled. This saved a lot of room on the screen.

Apple has taken the scrollbar feature and integrated it into the desktop OS, and the effect is pretty significant. The amount of room saved on-screen is quite noticeable. I have seen a few complaints about this new feature, however, mostly that it’s difficult to grab the scrollbar with the mouse pointer, or that the arrow buttons are gone. I think the former is just a general “they changed something” complaint, while the latter is truly legitimate. There have been a few situations where I’ve looked for the arrow buttons and their absence was noticeable. I wonder, however, whether this is a function of habit, or if their use is truly necessary. I’ve been able to work around this pretty easily on my MacBook, but after I install Lion on my Mac Pro, I expect that I’ll have a slightly harder time. Unless, that is, I buy a trackpad. As I said, I believe Apple has built this new OS with their newer input devices in mind.

On the back end, Lion is, from what I can tell, completely 64-bit. Apple has removed Java and Flash and, interestingly, banned both from the online App Store: no apps that require Java or Flash can be sold there. Additionally, Rosetta, the emulation software that allowed older PowerPC software to run, has been removed as well.

Overall, I’m enjoying my Lion experience. I still have the power of a Unix-based system with the simplicity of a well-thought-out GUI. I can still do all of the programming I’m used to, as well as watch videos, listen to music, and play games. I think I’ll still keep a traditional multi-button mouse around for gaming, though.

Evaluating a Blogging Platform

Thursday, June 23rd, 2011

I’ve been pondering my choices lately, determining whether I should stay with my current blogging platform or move to another one. There’s nothing immediate forcing me to change, nor is there anything overly compelling keeping me tied to the platform I’m currently using. This is an exercise I seem to go through from time to time. It’s probably for the better, as it keeps me abreast of what else is out there and allows me to re-evaluate choices I’ve made in the past.

So, what is out there? Well, Serendipity has grown quite a bit as a blogging platform and is quite well supported. That, in its own right, makes it a worthy choice. The plugin support is quite vast and the API is simple enough that creating new plugins when the need arises is a quick task.

There are some drawbacks, however. Since it’s not quite as popular as some other platforms, interoperability with some things is difficult. For instance, the offline blogging tool I’m using right now, BlogPress, doesn’t work quite right with Serendipity. I believe this might be due to missing features and/or bugs in the Serendipity XMLRPC interface. Fortunately, someone in the community had already debugged the problem and provided a fix.

WordPress is probably one of the more popular platforms right now. Starting a WordPress blog can be as simple as creating a new account at wordpress.com. There’s also the option of downloading the WordPress distribution and hosting it on your own. As with Serendipity, WordPress also has a vibrant community and a significant plugin collection. From what I understand, WordPress also has the ability to be used as a static website, though that’s less of an interest for me. WordPress has wide support in a number of offline blogging tools, including custom applications for iPad and iPhone devices.

There are a number of “cloud” platforms as well, such as Tumblr, LiveJournal, and Blogger. These platforms interoperate widely with services such as Twitter and Flickr, but you sacrifice control. You are at the complete mercy of the platform provider, with very little recourse. For instance, if a provider disagrees with you, they can easily block or delete your content. Or the provider can go out of business, leaving you without access to your blog at all. These, in my book, are significant drawbacks.

Another possible choice is Drupal. I’ve been playing around with Drupal quite a bit, especially since it’s the platform of choice for a lot of projects I’ve been involved with lately. It seems to fit the bill pretty well and is incredibly extensible. In fact, it’s probably the closest I’ve come to actually making a switch up to this point. The one major hurdle I have at the moment is lack of API support for blogging tools. Yes, I’m aware of the BlogAPI module, but according to the project page for it, it’s incomplete, unsupported, and the author isn’t working on it anymore. While I was able to install it and initially connect to the Drupal site, it doesn’t seem that any of the posting functionality works at this time. Drupal remains the strongest competitor at this point and has a real chance of becoming my new platform of choice.

For the time being, however, I’m content with Serendipity. The community remains strong, there’s a new release on the horizon, and, most important, it just works.

Technology in the here and now

Wednesday, June 22nd, 2011

I’m writing this while several thousand feet up in the air, on a flight from here to there. I won’t be able to publish it until I land, but that seems to be the exception these days rather than the norm.

And yet, while preparing for takeoff, the same old announcements are made: turn off cell phones and pagers, disable wireless communications on electronic devices. And listening around me, hurried conversations between passengers as they ensure that all of their devices are disabled. As if a stray radio signal would cause the airplane to suddenly drop from the sky, or prevent it from taking off to begin with.

Why is it that we, as a society, cannot get over these simple hurdles? Plenty of studies have shown that these devices don’t interfere with planes. In fact, some airlines are offering in-flight wireless access, and many airlines have offered in-flight telephone calls. Unless my understanding of flight is severely limited, I’m fairly certain that all of these functions use radio signals to operate. And yet we are still told that stray signals may cause planes to crash or interfere with the pilots’ instrumentation.

We need to get over this hurdle. We need to start spending our time looking to the future, advancing our technology, forging new paths. We need to stop clinging to outdated ideas. Learning from our past mistakes is one thing, and there’s merit in understanding history. But let’s spend our energy wisely and make the simple things we take for granted even better.

Hey KVM, you’ve got your bridge in my netfilter…

Monday, April 18th, 2011

It’s always interesting to see how new technologies alter the way we do things.  Recently, I worked on firewalling for a KVM-based virtualization platform.  From the outset it seemed pretty straightforward: set up iptables on the host and guest and move on.  But it’s not that simple, and my google-fu initially failed me when searching for an answer.

The primary issue was that when iptables was enabled on the host, the guests became unavailable.  If you enable logging, you can see the traffic being blocked by the host, thus never making it to the guest.  So how do we do this?  Well, if we start with a generic iptables setup, we have something that looks like this:

# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

Adding logging to identify what’s going on is pretty straightforward.  Add two logging lines, one for the INPUT chain and one for the FORWARD chain.  Make sure these are added as the first rules in the chain, otherwise you’ll jump to the RH-Firewall-1-INPUT chain and never make it to the log.

-A INPUT -j LOG --log-prefix "Firewall INPUT: "
-A FORWARD -j LOG --log-prefix "Firewall FORWARD: "

Now, with this in place you can try sending traffic to the domU.  If you tail /var/log/messages, you’ll see the blocking done by netfilter.  It should look something like this:

Apr 18 12:00:00 example kernel: Firewall FORWARD: IN=br123 OUT=br123 PHYSIN=vnet0 PHYSOUT=eth1.123 SRC=192.168.1.2 DST=192.168.1.1 LEN=56 TOS=0x00 PREC=0x00 TTL=64 ID=18137 DF PROTO=UDP SPT=56712 DPT=53 LEN=36

There are a few things of note here.  First, this occurs on the FORWARD chain only.  The INPUT chain is bypassed completely.  Second, the system recognizes that this is a bridged connection.  This makes things a bit easier to fix.

My attempt at resolving this was to put in a rule that allowed traffic to pass for the bridged interface.  I added the following:

-A FORWARD -i br123 -o br123 -j ACCEPT

This worked as expected and allowed the traffic through the FORWARD chain, making it to the domU unmolested.  However, this method means I have to add a rule for every bridge interface I create.  While explicitly adding rules for each interface should make this more secure, it means I may need to change iptables while the system is in production and running, not something I want to do.

A bit more googling led me to this post about KVM and iptables.  In short it provides two additional methods for handling this situation.  The first is a more generalized rule for bridged interfaces:

-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Essentially, this rule tells netfilter to accept any traffic for bridged interfaces.  This removes the need to add a new rule for each bridged interface you create, making management a bit simpler.  The second method is to completely remove bridged interfaces from netfilter.  Set the following sysctl variables:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
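
These can be flipped at runtime with sysctl -w and added to /etc/sysctl.conf so they persist across reboots.  Note that the keys only exist while the bridge module is loaded.  For example:

sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-arptables=0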

I’m a little worried about this method as it completely bypasses iptables on dom0.  However, it appears that this is actually a more secure way of handling bridged interfaces.  According to this bugzilla report and this post, allowing bridged traffic to pass through netfilter on dom0 can result in a possible security vulnerability.  I believe this is somewhat similar to a cryptographic hash collision: attackers can take advantage of netfilter entries with similar IP/port combinations and possibly modify traffic or access systems.  By using the sysctl method above, the traffic completely bypasses netfilter on dom0 and these attacks are no longer possible.

More testing is required, but I believe the latter method of using sysctl is the way to go.  In addition to the security considerations, bypassing netfilter has a positive impact on throughput.  It seems like a win-win from all angles.

Meltdown

Tuesday, March 15th, 2011

Back when the Chernobyl nuclear reactor in the Ukraine melted down, I was in grade school. That disaster absolutely fascinated me and I spent a bit of time researching nuclear power, drawing diagrams of reactor designs, and dreaming about being a nuclear scientist.

One thing that stuck with me about that disaster was the sheer power involved. I remember hearing about the roof of the reactor, a massive slab of concrete, having been blown off the building. From what I remember it was tossed many miles away, though I’m having trouble actually confirming that now. No doubt there was a lot of misreporting done at the time.

The reasons behind the meltdown at Chernobyl are still a point of contention, ranging from operator error to design flaws in the reactor. Chances are it was a combination of the two. There’s a really detailed report about what happened here. Additional supporting material can be found on Wikipedia.

Today we have the disaster at the Fukushima power plants in Japan. Of course the primary difference from the get-go is that this situation was caused by a natural disaster rather than design flaws or operator error. Honestly, when you get hit with a massive earthquake immediately followed by a devastating tsunami, you’re pretty much starting at screwed.

From what I understand, there are 5 reactors at two plants that are listed as critical. In two instances, the containment structure has suffered an explosion. Whoa! An explosion? Yes, yes, calm down. It’s not a nuclear explosion as most people know it. Most people equate a nuclear explosion with images of mushroom clouds, thoughts of nuclear fallout, and radiation sickness. The explosion we’re talking about in this instance is a hydrogen explosion resulting from venting the inner containment chamber. Yes, it’s entirely possible that radiation was released, but nothing near the high dosages most people equate with a nuclear bomb.

And herein lies a major problem with nuclear power. Not many people understand it, and a large majority are afraid of the consequences. Yes, we have had a massive meltdown, as was the case with Chernobyl. We’ve also had a partial meltdown, as was the case with Three Mile Island. Currently, the disaster in Japan is closer to Three Mile Island than it is to Chernobyl. That, of course, is subject to change. It’s entirely possible that the reactors in Japan will go into a full core meltdown.

But if you look at the overall effects of nuclear power, I believe you can argue that it is cleaner and safer than many other types of power generation. Coal power massively pollutes the atmosphere and leaves behind some rather nasty byproducts that we just don’t have a good way of dealing with. Oil and gas also pollute, both the atmosphere and the areas where they are extracted. Water, wind, and solar power are, generally speaking, clean, but you need massive amounts of each to generate sufficient power.

Nuclear power has had such a negative stigma for such a long period of time that research dollars are not being spent on improving the technology. There are severe restrictions on what scientists can research with respect to nuclear power. As a result, we haven’t advanced very far as compared to other technologies. If we were to open up research we would be able to develop reactors that are significantly safer.

Unfortunately, I think this disaster will make things worse for the nuclear power industry. Despite the fact that this disaster wasn’t caused by design flaws, nor was there operator error, the population at large will question the validity of this technology they know nothing about. Personally, I believe we could make the earth a much cleaner, safer place to live if we were to switch to nuclear power and spend time and effort on making it safer and more efficient.

And finally, a brief note. I’m not a nuclear physicist or engineer, but I have done some background research. I strongly encourage you to do your own research if you’re in doubt about anything I’ve stated. And if I’m wrong about something, please, let me know! I’ll happily make edits to fix incorrect facts.

DRM Humor

Monday, April 12th, 2010

The Brads nailed it