MAKE: Mass Monitor Rebuild

A few years ago, I came across a Mass EDI 4-monitor display. The computer system I had just happened to have two dual-display video cards, so it was a perfect match. Last year, one of the displays burned out and had to be replaced. Unfortunately, Mass wanted upwards of $500 for a new display. I did have a number of Dell displays available, though, and decided to look into adding one of those to the mix.

My initial attempt at adding a Dell to the mix was fairly crude, but it worked. This past week I decided to rebuild the entire array and remove the remaining three Mass monitors, for two main reasons. First, the crude setup with the first Dell monitor wasn’t ideal; the way the new monitor was mounted, it pressed up against the others and was difficult to adjust. Second, I have a new video card, a Galaxy nVidia GeForce 210, which requires DVI rather than VGA, and the version of the Mass display I had didn’t support DVI.

And so I started looking at how to better mount a Dell display on a Mass multi-monitor array. The Dell monitor I used initially was a 1907FP. The general size was about right; it just needed to be lifted away from the lower monitor a bit. The main problem with the existing mount was that there was really only one spot where the Mass mounting bracket could be coupled to the Dell mounting bracket without adding additional hardware. The Dell monitor has a small button on the back to remove it from its mounting, and the Mass has a lever of sorts that does the same, so the coupling had to take both of these removal mechanisms into consideration. I spoke with a colleague about the problem and we came up with a small coupling plate that would raise the Dell monitor up, keep both removal mechanisms clear, and allow for much better adjustment of the resulting monitor array.

Assembly was pretty straightforward. To attach the coupling plate to the Dell monitor, the Dell mount had to be removed from the original stand, lined up with the coupling plate, and drilled with matching holes.

Once the Dell side was finished, the Mass mount was removed from the original monitor and paired up with the augmented Dell mount.

And finally, the new augmented mounting brackets were attached to both the Dell monitor and the Mass monitor array. The dangling VGA cable was for testing prior to the installation of the new video card.

All that remains now is general adjustment of the new monitors. There’s a single hex screw on the Mass array behind each monitor that can be used to move the monitor up and down, as well as adjust its angle slightly. This should allow me to dial the display in to exactly what I need. And it now works with the new video card, which was a breeze to install and get running in Fedora.

I love it when a plan comes together.

Much Ado About Lion

Apple released the latest version of its OS X operating system, Lion, on July 20th. With this release came a myriad of changes in both the UI and back-end systems. Many of these changes have been denounced by critics as Apple slowly killing off OS X in favor of iOS. After spending some time with Lion, I have to disagree.

Many of the new UI features are very iOS-like, but I’m convinced that this is not a move to dumb down OS X. I believe this is a move by Apple to make the OS work better with the hardware it sells. Hear me out before you declare me a fanboy and move on.

Since the advent of the unibody MacBook, Apple has been shipping buttonless input devices. The MacBook itself has a large touchpad, sans button. Later, they released the Magic Mouse, sort of a transition device between mice and trackpads; I’m not a fan of that particular device. And finally, they’re shipping the trackpad today: no buttons, lots of room for gestures. Just check out the copy direct from their website.

If you look at a lot of the changes made in Lion, they go hand-in-hand with new gestures. Natural scrolling allows you to move the screen in the same direction your fingers are moving. Swipe three fingers to the left and right, the desktop you’re on moves along with it. Explode your fingers outwards and Launchpad appears, a quick, simple way to access your applications folder. Similar gestures are available for the Magic Mouse as well.

These gestures allow for quick and simple access to many of the more advanced features of Lion. Sure, iOS had some of these features first, but just because they’ve moved to another platform doesn’t mean that the platforms are merging.

Another really interesting feature in Lion is one that has been around for a while in iOS. When Apple first designed iOS, they likely realized that standard scrollbars chew up a significant amount of screen real estate. Sure, on a regular computer it may be a relatively small percentage, but on a small screen like a phone, it’s significant. So, they designed a thinner scrollbar, minus the arrows normally seen at the top and bottom, and made it auto-hide when the screen isn’t being scrolled. This saved a lot of room on the screen.

Apple has taken the scrollbar feature and integrated it into the desktop OS, and the effect is pretty significant. The amount of room saved on-screen is quite noticeable. I have seen a few complaints about this new feature, however, mostly that it’s difficult to grab the scrollbar with the mouse pointer, or that the arrow buttons are gone. I think the former is just a general “they changed something” complaint while the latter is truly legitimate. There have been a few situations where I’ve looked for the arrow buttons and their absence was noticeable. I wonder, however, whether this is a function of habit, or if their use is truly necessary. I’ve been able to work around this pretty easily on my MacBook, but after I install Lion on my Mac Pro, I expect that I’ll have a slightly harder time. Unless, that is, I buy a trackpad. As I said, I believe Apple has built this new OS with their newer input devices in mind.

On the back end, Lion is, from what I can tell, completely 64-bit. Apple has removed Java and Flash and, interestingly, banned both from their online App Store: no apps that require Java or Flash can be sold there. Additionally, Rosetta, the emulation software that allowed older PowerPC software to run, has been removed as well.

Overall, I’m enjoying my Lion experience. I still have the power of a Unix-based system with the simplicity of a well-thought-out GUI. I can still do all of the programming I’m used to, as well as watch videos, listen to music, and play games. I think I’ll still keep a traditional multi-button mouse around for gaming, though.

Evaluating a Blogging Platform

I’ve been pondering my choices lately, trying to determine whether I should stay with my current blogging platform or move to another one. There’s nothing immediate forcing me to change, nor is there anything overly compelling tying me to the platform I’m currently using. This is an exercise I seem to go through from time to time. It’s probably for the better, as it keeps me abreast of what else is out there and allows me to re-evaluate choices I’ve made in the past.

So, what is out there? Well, Serendipity has grown quite a bit as a blogging platform and is quite well supported. That, in its own right, makes it a worthy choice. The plugin support is quite vast and the API is simple enough that creating new plugins when the need arises is a quick task.

There are some drawbacks, however. Since it’s not quite as popular as some other platforms, interoperability with some things is difficult. For instance, the offline blogging tool I’m using right now, BlogPress, doesn’t work quite right with Serendipity. I believe this might be due to missing features and/or bugs in the Serendipity XMLRPC interface. Fortunately, someone in the community had already debugged the problem and provided a fix.

WordPress is probably one of the more popular platforms right now. Starting a WordPress blog can be as simple as creating a new account at wordpress.com. There’s also the option of downloading the WordPress distribution and hosting it on your own. As with Serendipity, WordPress also has a vibrant community and a significant plugin collection. From what I understand, WordPress also has the ability to be used as a static website, though that’s less of an interest for me. WordPress has wide support in a number of offline blogging tools, including custom applications for iPad and iPhone devices.

There are a number of “cloud” platforms as well. Examples include Tumblr, LiveJournal, and Blogger. These platforms offer wide interoperability with services such as Twitter and Flickr, but you sacrifice control. You are at the complete mercy of the platform provider with very little recourse. For instance, if a provider disagrees with you, they can easily block or delete your content. Or the provider can go out of business, leaving you without access to your blog at all. These, in my book, are significant drawbacks.

Another possible choice is Drupal. I’ve been playing around with Drupal quite a bit, especially since it’s the platform of choice for a lot of projects I’ve been involved with lately. It seems to fit the bill pretty well and is incredibly extensible. In fact, it’s probably the closest I’ve come to actually making a switch up to this point. The one major hurdle I have at the moment is lack of API support for blogging tools. Yes, I’m aware of the BlogAPI module, but according to the project page for it, it’s incomplete, unsupported, and the author isn’t working on it anymore. While I was able to install it and initially connect to the Drupal site, it doesn’t seem that any of the posting functionality works at this time. Drupal remains the strongest competitor at this point and has a real chance of becoming my new platform of choice.

For the time being, however, I’m content with Serendipity. The community remains strong, there’s a new release on the horizon, and, most important, it just works.

Technology in the here and now

I’m writing this while several thousand feet up in the air, on a flight from here to there. I won’t be able to publish it until I land, but that seems to be the exception these days rather than the norm.

And yet, while preparing for takeoff, the same old announcements are made. Turn off cell phones and pagers, disable wireless communications on electronic devices. And listening around me, hurried conversations between passengers as they ensure that all of their devices are disabled. As if a stray radio signal will cause the airplane to suddenly drop from the sky, or prevent it from taking off to begin with.

Why is it that we, as a society, cannot get over these simple hurdles? Plenty of studies have shown that these devices don’t interfere with planes. In fact, some airlines are offering in-flight wireless access. Many airlines have offered in-flight telephone calls. Unless my understanding of flight is severely limited, I’m fairly certain that all of these functions use radio signals to operate. And yet we are still told that stray signals may cause planes to crash, or may interfere with the pilots’ instrumentation.

We need to get over this hurdle. We need to start spending our time looking to the future, advancing our technology, forging new paths. We need to stop clinging to outdated ideas. Learning from our past mistakes is one thing, and there’s merit in understanding history. But let’s spend our energy wisely and make the simple things we take for granted even better.

Hey KVM, you’ve got your bridge in my netfilter…

It’s always interesting to see how new technologies alter the way we do things.  Recently, I worked on firewalling for a KVM-based virtualization platform.  From the outset it seems pretty straightforward.  Set up iptables on the host and guest and move on.  But it’s not that simple, and my google-fu initially failed me when searching for an answer.

The primary issue was that when iptables was enabled on the host, the guests became unavailable.  If you enable logging, you can see the traffic being blocked by the host, thus never making it to the guest.  So how do we do this?  Well, if we start with a generic iptables setup, we have something that looks like this:

# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

Adding logging to identify what’s going on is pretty straightforward.  Add two logging lines, one for the INPUT chain and one for the FORWARD chain.  Make sure these are added as the first rules in the chain, otherwise you’ll jump to the RH-Firewall-1-INPUT chain and never make it to the log.

-A INPUT -j LOG --log-prefix "Firewall INPUT: "
-A FORWARD -j LOG --log-prefix "Firewall FORWARD: "
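
For reference, on a stock Red Hat style system like the one that generated the config above, these rules live in /etc/sysconfig/iptables and get reloaded through the iptables init script. A rough sketch of the edit-and-reload cycle, assuming that layout:

# edit the rules file, adding the two LOG lines at the top of the INPUT and FORWARD chains
sudo vi /etc/sysconfig/iptables

# reload the firewall so the new rules take effect
sudo service iptables restart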


Now, with this in place you can try sending traffic to the domU.  If you tail /var/log/messages, you’ll see the blocking done by netfilter.  It should look something like this:

Apr 18 12:00:00 example kernel: Firewall FORWARD: IN=br123 OUT=br123 PHYSIN=vnet0 PHYSOUT=eth1.123 SRC=192.168.1.2 DST=192.168.1.1 LEN=56 TOS=0x00 PREC=0x00 TTL=64 ID=18137 DF PROTO=UDP SPT=56712 DPT=53 LEN=36

There are a few things of note here.  First, this occurs on the FORWARD chain only.  The INPUT chain is bypassed completely.  Second, the system recognizes that this is a bridged connection.  This makes things a bit easier to fix.

My attempt at resolving this was to put in a rule that allowed traffic to pass for the bridged interface.  I added the following:

-A FORWARD -i br123 -o br123 -j ACCEPT

This worked as expected and allowed the traffic through the FORWARD chain, making it to the domU unmolested.  However, this method means I have to add a rule for every bridge interface I create.  While explicitly adding rules for each interface should make this more secure, it means I may need to change iptables while the system is in production and running, not something I want to do.

A bit more googling led me to this post about KVM and iptables.  In short it provides two additional methods for handling this situation.  The first is a more generalized rule for bridged interfaces:

-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Essentially, this rule tells netfilter to accept any traffic for bridged interfaces.  This removes the need to add a new rule for each bridged interface you create, making management a bit simpler.  The second method is to completely remove bridged interfaces from netfilter.  Set the following sysctl variables:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
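
If you want to try this route, the values can be flipped on the fly with sysctl and then persisted so they survive a reboot. A rough sketch, assuming a standard sysctl setup:

# apply immediately
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sudo sysctl -w net.bridge.bridge-nf-call-arptables=0

# add the same three lines to /etc/sysctl.conf, then reload with
sudo sysctl -p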

I’m a little worried about this method as it completely bypasses iptables on dom0.  However, it appears that this is actually a more secure manner of handling bridged interfaces.  According to this bugzilla report and this post, allowing bridged traffic to pass through netfilter on dom0 can result in a possible security vulnerability.  I believe this is somewhat similar to cryptographic hash collision.  Attackers can take advantage of netfilter entries with similar IP/port combinations and possibly modify traffic or access systems.  By using the sysctl method above, the traffic completely bypasses netfilter on dom0 and these attacks are no longer possible.

More testing is required, but I believe the latter method of using sysctl is the way to go.  In addition to the security considerations, bypassing netfilter has a positive impact on throughput.  It seems like a win-win from all angles.

Meltdown

Back when the Chernobyl nuclear reactor in the Ukraine melted down, I was in grade school. That disaster absolutely fascinated me and I spent a bit of time researching nuclear power, drawing diagrams of reactor designs, and dreaming about being a nuclear scientist.

One thing that stuck with me about that disaster was the sheer power involved. I remember hearing about the roof of the reactor, a massive slab of concrete, having been blown off the building. From what I remember it was tossed many miles away, though I’m having trouble actually confirming that now. No doubt there was a lot of misreporting done at the time.

The reasons behind the meltdown at Chernobyl are still a point of contention, ranging from operator error to design flaws in the reactor. Chances are it was a combination of both. There’s a really detailed report about what happened here. Additional supporting material can be found on Wikipedia.


Today we have the disaster at the Fukushima power plants in Japan. Of course the primary difference from the get-go is that this situation was caused by a natural disaster rather than design flaws or operator error. Honestly, when you get hit with a massive earthquake immediately followed by a devastating tsunami, you’re pretty much starting at screwed.

From what I understand, there are 5 reactors at two plants that are listed as critical. In two instances, the containment structure has suffered an explosion. Whoa! An explosion? Yes, yes, calm down. It’s not a nuclear explosion as most people know it. Most people equate a nuclear explosion with images of mushroom clouds, thoughts of nuclear fallout, and radiation sickness. The explosion we’re talking about in this instance is a hydrogen explosion resulting from venting the inner containment chamber. Yes, it’s entirely possible that radiation was released, but nothing near the high dosages most people equate with a nuclear bomb.

And herein lies a major problem with nuclear power. Not many people understand it, and a large majority are afraid of the consequences. Yes, we have had a massive meltdown, as was the case with Chernobyl. We’ve also had a partial meltdown, as was the case with Three Mile Island. Currently, the disaster in Japan is closer to Three Mile Island than it is to Chernobyl. That, of course, is subject to change. It’s entirely possible that the reactors in Japan will go into a full core meltdown.

But if you look at the overall effects of nuclear power, I believe you can argue that it is cleaner and safer than many other types of power generation. Coal power massively pollutes the atmosphere and leaves behind some rather nasty byproducts that we just don’t have a way of dealing with. Oil and gas also pollute both the atmosphere and the areas where they are extracted. Hydro, wind, and solar power are, generally speaking, clean, but you need massive amounts of each to generate sufficient power.

Nuclear power has had such a negative stigma for such a long period of time that research dollars are not being spent on improving the technology. There are severe restrictions on what scientists can research with respect to nuclear power. As a result, we haven’t advanced very far as compared to other technologies. If we were to open up research we would be able to develop reactors that are significantly safer.

Unfortunately, I think this disaster will make things worse for the nuclear power industry. Despite the fact that this disaster wasn’t caused by design flaws, nor was there operator error, the population at large will question the validity of this technology they know nothing about. Personally, I believe we could make the earth a much cleaner, safer place to live if we were to switch to nuclear power and spend time and effort on making it safer and more efficient.

And finally, a brief note. I’m not a nuclear physicist or engineer, but I have done some background research. I strongly encourage you to do your own research if you’re in doubt about anything I’ve stated. And if I’m wrong about something, please, let me know! I’ll happily make edits to fix incorrect facts.

Games as saviors?

I watched a video yesterday about using video games as a means to help solve world problems. It sounds outrageous at first, until you really think about it. But first, how about watching the video:

Ok, now that you have some background, let’s think about this for a bit. Technology is amazing and has brought us many advancements. Gaming is one of those advancements. We have the capability of creating entire universes, purely for our own amusement. People spend hours each day exploring these worlds. Players are typically working toward completing goals set forth by the game designers. When a player completes a goal, they are rewarded. Sometimes the rewards are new items, sometimes they’re monetary in nature, and sometimes they’re clues to other goals. Each goal is within the reach of the player, though some goals may require more work to attain.

Miss McGonigal argues that the devotion that players show to games can be harnessed and used to help solve real-world problems. Players feel empowered by games, finding within them a way to control what happens to them. Games teach players that they can accomplish the goals set before them, bringing with it an excitement to continue.

I had the opportunity to participate in a discussion about this topic with a group of college students. Opinions ranged from a general distaste of gaming, seeing it as a waste of time, to an embrace of the ideas presented in the video. For myself, I believe that many of the ideas Miss McGonigal presents have a lot of merit. Some of the students argued that such realistic games would be complicated and uninteresting. However, I would argue that such realistic games have already proven to be big hits.

Take, for example, The Sims. The Sims was a huge hit, with players spending hours in the game adjusting various aspects of their character’s lives. I found the entire phenomenon to be absolutely fascinating. I honestly don’t know what the draw of the game was. Regardless, it did extremely well, proving that such a game could succeed.

Imagine taking a real-world problem and creating a game to represent that problem. At the very least, such a game can foster conversation about the problem. It can also lead to unique ideas about how to solve the problem, even though those playing the game may not be well-versed on the topic.

It’s definitely an avenue worth exploring, especially as future generations spend more time online. If we can find a way to harness the energy and excitement that gaming generates, we may be able to find solutions to many of the world’s most perplexing problems.


The Case of the Missing RAID

I have a few servers with hardware RAID directly on the motherboard. They’re not the best boards in the world, but they process my data and serve up the information I want. Recently, I noticed that one of the servers was running on the /dev/sdb* devices, which was extremely odd. Digging some more, it seemed that /dev/sda* existed and seemed to be ok, but wasn’t being used.

After some searching, I was able to determine that the server, when built, actually booted up on /dev/mapper/via_* devices, which were the hardware RAID. At some point these devices disappeared. To make matters worse, it seems that kernel updates weren’t being applied correctly. My guess is that either the grub update was failing, or it was updating a boot loader somewhere that wasn’t actually being used to boot. As a result, an older kernel was loading, with no way to get to the newer kernel.
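
Before going down the rabbit hole, a few quick checks help confirm what is actually booting; a sketch of what I mean, assuming a standard CentOS layout with grub:

# which kernel is actually running?
uname -r

# which kernels and default entry does grub know about?
grep -E '^(default|title)' /boot/grub/grub.conf

# which block device is /boot really mounted from?
df /boot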

I spent some time tonight digging around with Google, posting messages on the CentOS forums, and poking around on the system itself. With guidance from a user on the forums, I discovered that my system should be using dmraid, a program that discovers and activates RAID devices such as the one I have. Digging around a bit more with dmraid, I found this:

[user@dev ~]$ sudo /sbin/dmraid -ay -v
Password:
INFO: via: version 2; format handler specified for version 0+1 only
INFO: via: version 2; format handler specified for version 0+1 only
RAID set "via_bfjibfadia" was not activated
[user@dev ~]$
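
For completeness, a couple of other dmraid invocations are useful for poking at a setup like this (a sketch; the exact output will obviously vary by system):

# list the raw RAID member disks dmraid can see, along with their metadata format
sudo /sbin/dmraid -r

# show the discovered RAID sets and whether they are active
sudo /sbin/dmraid -s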

Apparently my RAID is running version 2, and dmraid only supports versions 0 and 1. Since this was initially working, I’m at a loss as to why my RAID is suddenly not supported. I suppose I could rebuild the machine, again, and check, but the machine is 60+ miles away from me and I’d rather not have to migrate the data anyway.

So how does one go about fixing such a problem? Is my RAID truly not supported? Why did it work when I built the system? What changed? If you know what I’m doing wrong, I’d love to hear from you… This one has me stumped. But fear not, when I have an answer, I’ll post a full writeup!


Space Photography

Slashdot posted a news item late last evening about some rather stunning photos from the International Space Station. On June 12th, the Sarychev Peak volcano erupted. At the same time, the ISS happened to be right overhead. What resulted was some incredible imagery, provided to the public by NASA. Check out the images below:

You can find more images and information here. Isn’t nature awesome?