Looking into the SociaVirtualistic Future

Let’s get this out of the way. One of the primary reasons I’m writing this is in response to a request by John Carmack for coherent commentary about the recent acquisition of Oculus VR by Facebook. My hope is that he does, in fact, read this and maybe drop a comment in response. <fanboy>Hi John!</fanboy> I’ve been a huge Carmack fan since the early ID days, so please excuse the fanboyism.

And I *just* saw the news that Michael Abrash has joined Oculus as well, which is also incredibly exciting. Abrash is an Assembly GOD. <Insert more fanboyism here />

Ok, on to the topic at hand. The Oculus Rift is a VR headset that got its public start with a Kickstarter campaign in September of 2012. It blew away its meager goal of $250,000 and raked in almost $2.5 Million. For a mere $275 and some patience, contributors would receive an unassembled prototype of the Oculus Rift. Toss in another $25 and you received an assembled version.

But what is the Oculus Rift? According to the Kickstarter campaign:

Oculus Rift is a new virtual reality (VR) headset designed specifically for video games that will change the way you think about gaming forever. With an incredibly wide field of view, high resolution display, and ultra-low latency head tracking, the Rift provides a truly immersive experience that allows you to step inside your favorite game and explore new worlds like never before.

In short, the Rift is the culmination of every VR lover’s dreams. Put a pair of these puppies on and magic appears before your eyes.

For myself, the Rift was interesting, but probably not something I could ever use. Unfortunately, I suffer from Amblyopia, or Lazy Eye as it's commonly called. I'm told I don't see 3D. Going to 3D movies pretty much confirms this for me, since nothing ever jumps out of the screen. So as cool as VR sounds, I would miss out on the 3D aspect. It might be possible to "tweak" the headset and adjust the angles a bit to force my eyes to see 3D, though I'm not sure that would be good for my eyes.

At any rate, the Rift sounds like an amazing piece of technology. In the past year I’ve watched a number of videos demonstrating the capabilities of the Rift. From the Hak5 crew to Ben Heck, the reviews have all been positive.

And then I learned that John Carmack joined Oculus. I think that was about the time I realized that Oculus was the real deal. John is a visionary in so many different ways. One can argue that modern 3D gaming exists largely because of the work he did in the field. In more recent years, his visions have aimed a bit higher with his rocket company, Armadillo Aerospace. Armadillo started winding down last year, right about the time that John joined Oculus, leaving him plenty of time to deep dive into a new venture.

For anyone paying attention, Oculus was recently acquired by Facebook for a mere $2 Billion. Since the announcement, I've seen a lot of hatred being tossed around on Twitter. Some of this hatred seems to come from Kickstarter backers who are under some sort of delusion that they have a say in the projects they back. I see this a lot, especially when a project is taking longer than they believe it should.

I can easily write several blog posts on my personal views about this, but to sum it up quickly, if you back a project, you’re contributing to make something a reality. Sometimes that works, sometimes it doesn’t. But Kickstarter clearly states that you’re merely contributing financial backing, not gaining a stake in a potential product and/or company. Nor are you guaranteed to receive the perks you’ve contributed towards. So suck it up and get over it. You never had control to begin with.

I think Notch, of Minecraft fame, wrote a really good post about his feelings on the subject. I think he has his head right. He contributed, did his part, and though it's not working out the way he wanted, he's still willing to wish the venture luck. He may not want to play in that particular sandbox, but that's his choice.

VR in a social setting is fairly interesting. In his first Oculus blog post, Michael Abrash mentioned reading Neal Stephenson's incredible novel, Snow Crash. Snow Crash gave me a view of what virtual reality might bring to daily life. Around the same time, the movie The Lawnmower Man was released, and again VR was brought to the forefront of my mind. But despite the promises of books and movies, VR remained elusive.

More recently, I read a novel by Ernest Cline, Ready Player One. Without giving too much away, the novel centers around a technology called the OASIS. Funnily enough, the OASIS is, effectively, a massive social network that users interact with via VR rigs. OASIS was the first thing I thought about when I heard about the Facebook / Oculus acquisition.

For myself, my concern is Facebook. Despite being a massively popular platform, I think users still distrust Facebook quite a bit. I lasted about 2 weeks on Facebook before having my account deleted. I understand their business model and I have no interest in taking part. Unfortunately, I’m starting to miss out on some aspects of Internet life since some sites are requiring Facebook accounts for access. Ah well, I guess they miss out on me as well.

I have a lot of distrust in Facebook at the moment. They wield an incredible amount of information about users and, to be honest, they’re nowhere near transparent enough for me to believe what they say. Google is slightly better, but there’s some distrust there as well. But more than just the distrust, I’m afraid that Facebook is going to take something amazing and destroy it in a backwards attempt to monetize it. I’m afraid that Facebook is the IOI of this story. (It’s a Ready Player One reference. Go read it, you can thank me later)

Ultimately, I have no stake in this particular game. At least, not yet, anyway. Maybe I’m wrong and Facebook makes all the right moves. Maybe they become a power for good and are able to bring VR to the masses. Maybe people like Carmack and Abrash can protect Oculus and fend off any fumbling attempts Facebook may make at clumsy monetization. I’m not sure how this will play out, only time will tell.

How will we know how things are going? Well, for one, watching how Facebook interacts with its new property will be pretty telling. I think if Facebook is able to sit in the shadows and watch rather than kicking in the front door and taking over, maybe Oculus will have a chance to thrive. Watching what products Oculus ultimately releases will be another telling sign. While I fully expect that Oculus will add some sort of Facebook integration to the SDK over time, I'm also hoping that they continue to provide an SDK for standalone applications.

I sincerely wish Carmack, Abrash, and the rest of the Oculus team the best. I think they’re in a position where they can make amazing things happen, and I’m eager to see what comes next.

Pebble Review

In April of 2012, a Kickstarter project was launched by a company aiming to create an electronic watch that served as a companion to your smartphone. A month later, the project closed at over $10 million in pledges, exceeding its funding goal more than a hundred times over. Happily, I was one of the over 68,000 people who pledged. I received my Pebble about a month ago and I've been wearing it ever since.

The watch itself is fairly simple: a rectangular unit with an e-paper display, four buttons, and a rubberized plastic strap. The screen resolution is 144×168, plenty of pixels for some fairly impressive detail. The watch communicates with your mobile phone (Android or iPhone only) via a Bluetooth connection. All software updates and app installations occur over that Bluetooth connection. There is a 3-axis accelerometer as well as a pretty standard vibrating motor for silent alerts.

According to the official Pebble FAQ, battery life is 7+ days on a single charge, but this depends on your overall use of the device. The more alerts you receive, the more the backlight comes on, and the more apps you use on the device, the shorter your battery life.

Pebble is still in the process of building the initial run of watches for backers. Black watches, being the majority of the orders, were built first. Other colors have been coming online in recent weeks. Pebble has a website where interested parties can track how many Pebbles have been built and shipped.

I’ve been pretty impressed with the watch thus far. Pebble has been fairly responsive to inquiries I’ve made, and they seem dedicated to making sure they have a top quality product. Of course, as is typical on the Internet, not everyone is happy. There seem to be a lot of complaints about communication, how long it’s taking to get watches, and about the features themselves.

It’s hard to say whether these complaints have any merit, though. For starters, I can’t imagine it’s a simple task to design and build 68,000 watches in a short period of time. And to complicate matters further, it seems that many backers of Kickstarter projects don’t understand the difference between being a backer and being a customer.

When you back a Kickstarter project, you're pledging money to help start the project. As a "reward" for contributing, if the project is successful, you are entitled to whatever the project owners have designated for your level of contribution. The key part of this being: if the project is successful. Some projects take longer than others, and timelines often slip. That said, I've only been part of one Kickstarter that has failed, and even that one is being resurrected by other interested parties.

But there are some legitimate complaints, some that can be addressed, and others that likely won't be. For instance, I've noticed that with recent firmware releases, the battery life on my watch has dropped considerably. Based on communication with the developers, they are aware of this and are actively working to resolve it. I'm not sure what the problem is, exactly, but I'm confident they'll have it fixed in the next firmware update.

The battery indicator is a source of frequent discussion. Right now, there's no indication of battery life until the battery is running low. And that indicator doesn't show on the watchface; it only shows when you're in other menus. This, in my opinion, is a poor UI choice. I'd much rather see a battery indicator option available for the watchface itself.

Menu layout has also been a frequent source of frustration for users. In previous firmware releases, you had to actively navigate to the watchface you wanted. Recent releases changed this so that the watchface is the default view and other screens are chosen as needed. The behavior of the navigation buttons on the watch was also updated to reflect this new choice.

So Pebble continues to improve over time. It’s an iterative process that will take some time to get right. I’m eager to see what future releases will bring. Next week, Pebble is scheduled to release the watch SDK, allowing users, for the first time, to start adding their own customizations to the watch.

The Pebble watch has a lot of potential. As the platform matures, I’m hoping to see a number of features I’m interested in come to fruition. Interaction between Pebble and other apps on iPhone devices would be a welcome addition. I would love to see an actigraphy app that uses the Pebble for sleep monitoring. From what I’ve read, sleep monitoring is even more accurate when the monitor is placed on the sleeper’s wrist. Seems like a perfect use for the Pebble.

I'd also like to see more of an open SDK, allowing users such as myself to write code for the Pebble. While I'm aware of the closed nature of the iPhone platform, it is still possible to add applications to the Pebble itself. I can't wait to see what others build for this platform. Given a bit of time, I think this can grow into something even more amazing.

Customer Dis-Service

In general, I’m a pretty loyal person. Especially when it comes to material things. I typically find a vendor I like and stick with them. Sure, if something new and flashy comes along, I’ll take a look, but unless there’s a compelling reason to change, I’ll stick with what I have.

But sometimes a change is forced upon me. Take, for instance, this last week. I’ve been a loyal Verizon customer for … wow, about 15 years or so. Not sure I realized it had been that long. Regardless, I’ve been using Verizon’s services for a long time. I’ve been relatively happy with them, no major complaints about services being down or getting the runaround on the phone. In fact, my major gripe with them had always been their online presence which seemed to change from month to month. I’ve had repeated problems with trying to pay bills, see my services, etc. But at the end of the day, I’ve always been able to pay the bill and move on. Since that’s really the only thing I used their online service for, I was content to leave well enough alone.

In more recent months, we've been noticing that the 3 Mbps DSL service we had is starting to fall short. Not Verizon's fault at all, but the fault of an increased strain on the system at our house. Apparently 3 Mbps isn't nearly enough bandwidth to satisfy our online hunger. That, coupled with the price we were paying, had me looking around for other services. Verizon still doesn't offer anything faster than 3 Mbps in the area and, unfortunately, the only other service in the area is from a company that I'd rather not do business with if I could avoid it.

In the end, I thought perhaps I could make some slight changes and at least reduce the monthly bill a little until we determined a viable solution. I was considering adding a second DSL line, connected to a second wireless router, to relieve the strain a bit. This would allow me to avoid that other company and provide the bandwidth we needed. My wife and I could enjoy our own private upstream and place the rest of the house on the other line.

Ok, I thought, let's dig into this a bit. First things first, I decided to get rid of the home phone, or at least transfer it to a cheaper solution. My cell provider offered a $10/month plan for home phones. Simple process: port the number over, install a little box in the house, and poof, instant savings. Best part, that savings would be just about enough to cover the second DSL line.

Being cautious, and not wanting to end up without a DSL connection, I contacted Verizon. Having worked for a telco in the past, I knew that some telcos required a home phone line in order to provide DSL service. This wasn't a universal truth, however, and it was easy enough to verify. The first call to Verizon went a little sideways, though. I ended up in an automated system. Sure, everyone uses these automated systems nowadays, but I thought this one was particularly condescending. They added sound effects to the prompts so that when you answered a question, the automated voice would acknowledge your request and then type it in. TYPE IT IN. I don't know why, but this drove me absolutely crazy. Knowing that I was talking to a recorded voice and then having that recorded voice play typing sounds, as if it were entering my answers on a keyboard? Infuriating. And, on top of it, I ended up in some ridiculous loop where I couldn't get an operator unless I explicitly stated why I wanted one, and the automated system apparently couldn't understand my request.

Ok, time out, walk away, try again later. The second time around, I lied. I ended up in sales, so it seems to have worked. I explained to the lady on the phone what I was looking for. I wanted to cancel my home phone and just keep the DSL. I also wanted to verify that I was not under contract so I wouldn’t end up with some crazy early termination fee. She explained that this was perfectly acceptable and that I could make these changes whenever I wanted. I verified again that I could keep the DSL without issue. She agreed, no problem.

Excellent! Off I went to the cell carrier, purchased (free with a contract) the new home phone box, and had them port the number. The representative cautioned that he saw DSL service listed when he was porting and suggested I contact Verizon to verify that the DSL service would be ok.

I called Verizon again to verify everything would work as intended. I explained what I had done, asked when the port would go through, and stressed that the DSL service was staying. The representative verified the port date and said that the DSL service would be fine.

You can guess where this is going, can't you? On the day of the port, the phone line switched as expected. The new home phone worked perfectly and I made the necessary changes to the home wiring to ensure that the DSL connection was isolated from the rest of the wiring. DSL was still up, phone ported, everything was great. Until the next morning.

I woke up the following morning and started my normal routine. Get dressed, go exercise, etc. Except that on the way to exercise, I noticed that the router light was blinking. Odd. Perhaps something had knocked the connection offline overnight? The DSL light on the modem was still on, so I had a connection to the DSLAM. No problem, reboot the router and we'll be fine. So, I rebooted and walked away. After a few minutes I checked the system and noticed that I was still not able to get online. I walked through a mental checklist and decided that the username and password for the PPPoE connection must be failing. Time to call Verizon and see what was wrong.

I contacted Verizon and first spoke to a sales rep who informed me that my services had been cancelled per my request. Wonderful. All that work and they screwed it up anyway. I explained what I had done and she took a deeper look into the account. It turns out the account was "being migrated," and she apologized for the mixup. Since I was no longer bundled, the DSL account had to be migrated. I talked with her some more about it and she decided to send me to technical support to verify everything was ok. Off I went to technical support, fully expecting them to ask me to reset my DSL modem. No such luck, however; the technical support rep explained that I had no DSL service.

And back to sales I went. I explained, AGAIN, what was going on. The representative confirmed my story, verified that the account was being migrated, and asked me to check the service again in a few hours. All told, I spent roughly an hour on the phone with Verizon and missed out on my morning exercise.

After rushing through the remainder of my morning routine and explaining to my wife why the Internet wasn’t working, I left for work. My wife checked in a few hours later to let me know that, no, we still did not have an Internet connection. So I called Verizon again. Again I’m told I have no service and that I have cancelled them. Again I explain the problem and what I had done. And this time, the representative explains to me that they do not offer unbundled DSL service anymore, they haven’t had that service in about a year. She goes on to offer me a bundled package with a phone line and explains that I don’t have to use the phone line, I just have to pay for it.

So all of my careful planning was for naught. In an effort to make sure this didn't happen to anyone else, the rep checked back through my account to see who had told me the DSL service could stay. According to the notes, however, I had never called about any such thing. Instead, the notes claimed I had called to complain about unsolicited phone calls, been referred to the fraud and abuse office, and been told about the magical phone code I could enter to block calls. Ugh! She then went on to document every aspect of my problem, again so no one else would have to deal with the same thing.

This is the sort of situation that will, very rapidly, cause me to look elsewhere for service. And that’s exactly what I did. I’ve since cut all ties with Verizon and moved on to a different Internet service provider. I’m not happy with having to deal with this provider, but it’s the only alternative at the moment. Assuming I don’t have any major problems with the service, I’ll probably continue with them for a while. Of course, if I run into problems here, the decision becomes more difficult. A “lesser of two evils” situation, if you will. But for now, I’ll deal with what comes up.

So you want to talk at a conference

Last year at this time I was attending an absolutely amazing conference known as DerbyCon. It was an amazing time where I met some absolutely amazing people and learned amazing things. Believe me, there was a lot of amazing.

I attended one talk that really got me thinking about blue-team security. That is, defensive security, basically what I’m all about these days. And I decided that I wanted to help the cause .. So, I started putting together the pieces in my head and decided I wanted to do a talk at the following DerbyCon ..

And so, when the CFP opened, I submitted my thoughts and ideas. Honestly, while I hoped it would be accepted, I didn't think I had a chance in hell given the talent that spoke the previous year.. Boy was I wrong.. Talk accepted. And so I started putting things together, working on the talk itself, pushing forward the design I wanted for this new tool. I aimed high and came up a little short..

As luck would have it, this past summer was a beast. Just no time to work on anything in-depth .. And time went by. And before I knew it, DerbyCon was here.. I did a dry-run of my talk to get some feedback and suggestions. Total talk time? 15 minutes. Uhh.. That might be an issue.. 50 minute talk window and all..

So, back to the drawing board. Fortunately, I received some awesome feedback and expanded my talk a bit. The revised edition should be a bit longer, I would hope.. I’ll find out tomorrow. I’m talking at 2pm.

I’m terrified.

But I’m surrounded by some of the most awesome people I have ever met. I’ll be fine.. I hope..

The Future of Personal Computers

The latest version of OS X, Mountain Lion, has been out for a few months, and the next release of Windows, Windows 8, will be out very soon. These operating systems continue the trend of adding new and radical features to a desktop operating system, features we've previously seen only in mobile interfaces. For instance, OS X has Launchpad, an icon-based menu for launching applications similar to the interface used on the iPhone and iPad. Windows 8 has the new Metro interface, a tile-based interface first seen on their Windows Phone operating system.

As operating systems evolve and mature, we’ll likely see more of this. But what will the interface of the future look like? How will we be expected to interact with the computer, both desktop and mobile, in the future? There’s a lot out there already about how computers will continue to become an integral part of daily life, how they’ll become so ubiquitous that we won’t know we’re actually using them, etc. It’s fairly easy to argue that this has already happened, though. But putting that aside, I’m going to ramble on a bit about what I think the future may hold. This isn’t a prediction, per se, but more of what I’m thinking we’ll see moving forward.

So let's start with today. Touch-based devices running iOS and Android have become the standard for mobile phones and tablets. In fact, the Android operating system is being used for much more than this, appearing in game consoles such as the OUYA, as the operating system behind Google's Project Glass initiative, and more. It's not much of a surprise, of course, as Linux has been making these in-roads for years and Android is, at its core, an enhanced distribution of Linux designed for mobile and embedded applications.

The near future looks like it will be filled with more touch-based interfaces as developers iterate and enhance the current state of the art. I’m sure we’ll see streamlined multi-touch interfaces, novel ways of launching and interacting with applications, and new uses for touch-based computing.

For desktop and laptop systems, the traditional input methods of keyboards and mice will be enhanced with touch. We see this happening already with Apple's Magic Mouse and Magic Trackpad. Keyboards will follow suit with enhanced touch pads integrated into them, reducing the need to reach for the mouse. And while some keyboards with attached touchpads exist today, I believe we'll start seeing tighter integration with multi-touch capabilities.

We're also starting to see the beginnings of gesture-based devices such as Microsoft's Kinect. Microsoft bet a lot on Kinect as the next big thing in gaming, a direct response to Nintendo's Wii and Sony's Move controllers. And since the launch of Kinect, hobbyists have been hacking away, adding Kinect support to "traditional" computer operating systems. Microsoft has responded, releasing a development kit for Windows and designing a version of Kinect intended for use with desktop operating systems.

Gesture-based interfaces have long been perceived as the ultimate in computer interaction. Movies such as Minority Report and Iron Man have shown the world what such interfaces might look like. But life is far different from a movie. Humans were not designed to hold their arms out in a horizontal position for long periods of time; the resulting fatigue is commonly known as "Gorilla Arm." Designers will have to adapt the technology in ways that work around these physical limitations.

Tablet computers work well at the moment because most interactions with them are on a horizontal and not vertical plane, thus humans do not need to strain themselves to use them. Limited applications, such as ATMs, are more tolerant of these limitations since the duration of use is very low.

Right now we're limited to 2D interfaces for applications. How will technology adapt when true 3D displays exist? It stands to reason that some sort of gesture interface will come into play, but in what form? Will we have interfaces like those seen in Iron Man? For designers, such an interface may provide endless insight into new designs. Perhaps a merging of 2D and 3D interfaces will allow for this. We already have 3D renderings in modern design software, but software that renders in true 3D, where designers can move their heads instead of their screens to interact? That would truly be a breakthrough.

What about mobile life? Will touch-based interfaces continue to dominate? Or will wearable computing with HUD style displays become the new norm? I’m quite excited at the prospect of using something such as Google’s Project Glass in the near future. The cost is still prohibitive for the average user, but it’s still far below the cost of similar cutting edge technologies a mere 5 years ago. And prices will continue to drop.

Perhaps in the far future, 20+ years from now, the input device will be our own bodies, à la Kinect, with a display small enough to be embedded in our eyes or worn as a contact lens. Maybe in that timeframe we truly become one with the computer and transform from mere humans into cyborgs. There will always be those who won't follow suit, but for those of us with the interest and the drive, those will be interesting times, won't they?

Multiple Personalities With The Linux Kernel

Virtualization is all the rage these days: take a single computer system and install multiple "guest" operating systems on it. The benefits are a reduced footprint and better utilization of existing resources. There is a danger, however, in having many systems dependent on a single piece of hardware. The solution, of course, is to use multiple pieces of hardware and allow your "guests" to be moved between the individual hardware units, making the overall system more resilient to failure.

I’ve started playing a bit with virtualization, specifically, KVM virtualization. For my purposes, I’m using CentOS 6.x on a 64-bit capable system.
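Before diving in, it's worth a quick sanity check that the hardware actually supports virtualization. This isn't from my original setup notes, just a standard first step on any would-be KVM host:

egrep '(vmx|svm)' /proc/cpuinfo

If that prints nothing, the CPU lacks the Intel VT-x or AMD-V extensions and KVM won't be able to run hardware-accelerated guests.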

The hypervisor itself is a standard CentOS base install with the addition of KVM and various management packages. I installed the hypervisor on a RAID1 LVM volume, allowing me some room to grow if necessary, and reserved the remainder of the hard drive for virtual hosts. While you can use binary blobs for virtual disks, I prefer using RAIDed LVM volumes, which give me the ability to grow a disk if necessary, as well as a minor bump in speed.

Using yum, adding KVM to an existing installation is a pretty straightforward process:

yum install virt-manager libvirt libvirt-python python-virtinst

That should take care of any dependencies required to get KVM virtualization up and running.
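One step worth calling out explicitly (an assumption on my part, but standard for a stock CentOS 6 install): make sure the libvirt daemon is running, and set to start at boot, before you try to define any VMs:

service libvirtd start
chkconfig libvirtd on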

Next up, we need to tackle networking. There are many, many different configurations, far too many to go through here. So, I’m going to keep it simple. For my purposes, I need a single connection to the outside network, all in the same VLAN, as well as a local NAT for some VMs that I need local access to, but that don’t need to be accessed via the Internet.

Setting this up is brilliantly simple. First, copy the /etc/sysconfig/network-scripts/ifcfg-eth0 file to /etc/sysconfig/network-scripts/ifcfg-br0. Next, edit the ifcfg-eth0 file. You'll need to remove a bunch of lines and add a BRIDGE line, as follows:

DEVICE="eth0"
BRIDGE="br0"
HWADDR="00:11:22:33:44:55"
ONBOOT="yes"

Next, edit the ifcfg-br0 file. All you really need to do here is change the DEVICE= line to reflect br0. I also recommend setting NM_CONTROLLED="no" … NetworkManager shouldn't be installed anyway since you used a base install, but better safe than sorry. In the end, the ifcfg-br0 file should look something like this:

DEVICE="br0"
BOOTPROTO="static"
BROADCAST="204.10.167.63"
IPADDR="204.10.167.50"
NETMASK="255.255.255.192"
ONBOOT="yes"
TYPE="Bridge"
DELAY="0"

Restart networking and you'll be all set. The NAT portion is handled by KVM itself, so there's nothing more to configure there.
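For completeness, the restart and a quick verification look something like this (brctl comes from the bridge-utils package, which may need to be installed first):

service network restart
brctl show

The brctl output should list br0 with eth0 attached; once libvirt is running, you'll typically also see a virbr0 bridge, which is the NAT network libvirt creates by default.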

Without guests, however, all you have is a basic Linux system with a few extra packages taking up space. The real magic starts when you create and install your first VM. My recommendation is to start with creating a template system you can clone later rather than hand-installing every single VM. To install the template, first decide on the base disk size. I’m using 15 GB volumes which is more than enough for the base install and leaves room for most basic server configurations. If you need more space, you can attach additional disks later.

I'm not going to go into how I set up LVM; there are plenty of tutorials out there. For the purposes of this article, I have a volume group named vg_libvirt where I plan to store all of the virtual machines. So first we create the disk necessary for the template:

lvcreate -L15G -n template_base vg_libvirt

Next we install the OS. virt-install is essentially a wrapper script that sets all the necessary values within KVM to get you going. After the settings are configured and the VM is started, virt-install will automatically attach you to the VM console. The full command I used to install is as follows:

virt-install --accelerate --hvm --connect qemu:///system --network bridge:br0 --name template --ram 512 --disk=/dev/mapper/vg_libvirt-template_base --vcpus=1 --check-cpu --nographics --extra-args="console=ttyS0 text" --location=/tmp/CentOS-6.2-x86_64-bin-DVD1.iso

Since this is effectively a text install, you do run into a bit of a problem. Namely, you can’t configure the drives the way you want. There is a way around this, though it takes a bit of work. Of course, since you’re creating a template, the little bit of work now is easily made up for later. So, here’s how I handled the drive configuration.

First, run through a basic install using the above install method. Once you’re up and running, log into the new VM and head to the root home directory. In that directory you’ll find a kickstart file called anaconda-ks.cfg. Make a local copy of that file and shut down the VM.
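The mechanics of getting that copy off the VM are up to you; as one hypothetical example (the host name and path are placeholders), you can push it to the hypervisor over ssh before shutting down:

scp /root/anaconda-ks.cfg user@hypervisor:/tmp/kick1.ks

This assumes the template VM has working networking and can reach the hypervisor; if not, a tool like virt-cat from the libguestfs suite can pull the file out of the VM's disk after shutdown.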

The kickstart file gives you the basic parameters that CentOS used to configure the system. You can edit this file and use it yourself to automatically install and configure systems. For our purposes, we're interested in editing the drive configuration and then using the kickstart file to create the template. So, edit the file and set the parameters as you see fit. An example is as follows:

# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --noipv4 --noipv6
rootpw --iscrypted somerandomstringthatiwontrevealtoyoubutnicetry
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --driveorder=vda --append=" console=ttyS0 crashkernel=auto"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --all --drives=vda
part /boot --fstype=ext4 --size=500
part swap --size=2048
part pv.253002 --grow --size=1
volgroup VolGroup --pesize=4096 pv.253002
logvol / --fstype=ext4 --name=lv_root --vgname=VolGroup --size=4096
logvol /tmp --fstype=ext4 --name=lv_tmp --vgname=VolGroup --size=2048
logvol /var --fstype=ext4 --name=lv_var --vgname=VolGroup --size=4096
logvol /home --fstype=ext4 --name=lv_home --vgname=VolGroup --size=2048
#repo --name="CentOS" --baseurl=cdrom:sr0 --cost=100
%packages --nobase
@core
%end

Once you have this, you can re-run the virt-install command from above with a slight tweak that makes the install use the kickstart file you created (I named it kick1.ks):

virt-install --accelerate --hvm --connect qemu:///system --network bridge:br0 --name template --ram 512 --disk=/dev/mapper/vg_libvirt-template_base --vcpus=1 --check-cpu --nographics --initrd-inject=/path/to/kick1.ks --extra-args="ks=file:/kick1.ks console=ttyS0 text" --location=/tmp/CentOS-6.2-x86_64-bin-DVD1.iso

This will nuke the existing VM and replace it with one configured with the drive partitions as set in the kickstart file. And now you almost have a template.

You could use this new VM as a clone, but if you’ve set an IP on it, you’ll run into duplicate IP problems. SSH keys on the machine will be cloned, making all of your systems contain the same keys. And other machine-specific settings will be cloned as well. This can be worked around, though.

I recommend that you first configure this new template with the basic settings you want on all of your VMs. For instance, if you're using Spacewalk for server management, you can install all of the necessary Spacewalk binaries. You can configure a standard iptables template for the system. Maybe you have some standard security software you use, such as OSSEC. And, of course, create the standard users on the system so you don't have to create them each time you clone the VM. Once everything is installed and running how you want it, perform the following actions to make the template:

touch /.unconfigured
rm -rf /etc/ssh/ssh_host_*
poweroff

The VM will power down and you'll have your template. Cloning this to a new VM is quick and simple. First, create the new logical volume as we did before. Next, clone the VM to the new drive:

virt-clone -o template -n new_vm -f /dev/mapper/vg_libvirt-new_vm_base

Simple enough, right? Run this command and, when it completes, you can start the VM and connect to the console. You'll be greeted with the standard first-boot process and then dropped at a login prompt. Congratulations, you now have a VM. Set the IP, configure whatever services you need, and you're off to the races.
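If the start and connect steps aren't obvious, they're just standard virsh commands (new_vm matching the name passed to virt-clone):

virsh start new_vm
virsh console new_vm

To detach from the console without shutting the guest down, press Ctrl+].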

If you need to modify the RAM, number of CPUs, etc., then use the virsh command on the hypervisor. You’ll need to shut down the VM and restart it in order for these changes to take effect.
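As a rough sketch of what that looks like, virsh edit is the safest route since it validates the XML when you save:

virsh shutdown new_vm
virsh edit new_vm # adjust the <memory> and <vcpu> elements
virsh start new_vm

virsh also provides setmem and setvcpus subcommands for making these changes directly, with the same caveat that the guest generally needs a restart for them to stick.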

And that's really all there is to it. The VMs themselves can be treated as self-contained systems with no special care necessary … One note, however. If you reboot the hypervisor, the VMs are paused before the reboot and resumed afterwards. This leads to an interesting quirk in that the uptime on a VM can easily exceed that of the hypervisor. Be aware of this and don't depend on a VM's uptime to be accurate.

Monitoring as a Lifestyle

A few years ago, I wrote a blog entry about losing weight using the Wii Fit. This worked really well for me and I was quite happy with the weight I lost. But I found, over time, that I put at least some of the weight back on. Most of this, I believe, was due to not having a full understanding of how much I was eating.

I've since switched from using the Wii Fit to using the Xbox Kinect for fitness. I also go to fitness classes outside of home, but that's a more recent change. But this blog entry isn't really about fitness alone. It's about monitoring your lifestyle, keeping track of the data you generate on a daily basis. Right now, I track a lot of personal data: my weight, what I eat, how often I work out, how I sleep, etc.

Allow me to lay out some of the tools I use on a daily basis. First off, my phone. I happen to be an iPhone user at the moment, though any modern smartphone has somewhat similar capabilities. Using my phone, I can view and edit my data whenever I need to, wherever I am. There are literally thousands of applications that can be used to track data about yourself. I’m hoping to be able to aggregate all or most of this data in a single location at some point, but for now, it’s spread across a few different services.

I'm typically fairly private about my data and I tend to avoid most cloud services. However, I have found that it's virtually impossible to do the type of tracking I want without building every single tool myself. So, instead, I use a few online services and provide them with virtually no personal information beyond what is required to make the service work.

So what am I using, anyway? Let's start with how I track my diet. I'm using a service called MyFitnessPal to track my daily caloric intake. This has significantly helped me redefine my dietary habits and realize how much I should be eating. Previously, I would try to reduce my intake by spreading out meals over the course of the day. While this is a great habit, I believe I was still eating more than I should have been, despite my intent. Using the MyFitnessPal application, I get a clear view of where I stand at any point during the day. I've been able to significantly reduce my intake without having to shun the foods I love.

On the fitness side of things, I work out every morning before work using the Xbox Kinect and Your Shape Fitness. I switched over when the original Your Shape game came out and I've been quite happy. The Wii Fit is a great tool to start with, and it has the benefit of checking your weight every time you play, something I do miss with Your Shape, but its exercises became far too easy to complete. Your Shape pushes a bit harder, bringing a higher level of exercise to my daily routine. And now with the new version, they've raised the bar a bit, allowing me to push even harder. There are a few areas I'd like to see improvements in, but overall, I don't have many complaints.

Using the Your Shape app on my phone, I get a readout of my exercise for the day, as well as an estimate of the calories I burned. I take this information and enter it into the MyFitnessPal application. Doing this allows me to increase my allotment of calories for the day based on how active I have been. In a way, I guess it works like a reward system, granting me the ability to enjoy a little more each day I spend time working out.

I also wear a Jawbone Up. The Up is a pretty cool little device that tracks your movement during the day and your sleep patterns at night. It can also be used to track your food, though the interface for this is a bit lacking, which is why I use MyFitnessPal. The Up gives me a great view of how active I am during the day, as well as a view of how well I’m sleeping at night. Jawbone has had a bit of a hard time with this particular product, but my personal experience has been pretty positive thus far.

I have a few applications on my phone for tracking runs, though I use them for walking instead. I'm not much of a runner. These applications are a dime a dozen, and I don't really have a preference at this point. As long as the application provides feedback on distance and route, it's typically good enough. The application for the Up has this capability as well, though I haven't had a chance to try it out yet.

And finally, I use an application to track my weight on a daily basis. One of the first things I do in the morning is weigh myself. I'm currently using an application called TargetWeight by Tactio. Basically, this application tracks your weight over time, offering up a few features to help along the way. If you enter a target weight, the application will show you the weight left to lose as part of the icon on your phone. Additionally, it will attempt to predict when you'll hit your target weight based on the historical data it has collected. There's a nice graphical view of your weight over time as well. Entering your weight is a quick process each morning and is one of the biggest motivators for me. There's also an option to use a WiFi-enabled Withings scale to enter your data wirelessly.

All together, these various applications and tools allow me to gain better insight into my daily health. This approach is obviously not for everyone, but for me it has worked wonders. I've lost about 30 pounds in the past 2 months, and I'm getting quite close to my current target weight. To each his own, but I'm sticking with it.

MAKE: Mass Monitor Rebuild

A few years ago, I came across a Mass EDI 4-monitor display. The computer system I had just happened to have two dual-display video cards, so it was a perfect match. Last year, one of the displays burned out and had to be replaced. Unfortunately, Mass wanted upwards of $500 for a new display. I did have a number of Dell displays available, though, and decided to look into adding one of those to the mix.

My initial attempt at adding a Dell to the mix was fairly crude, but it worked. I decided to rebuild the entire array this past week and remove the remaining three Mass monitors. There were two main reasons for this. First, the crude setup I had with the first Dell monitor wasn’t an ideal situation. The way the new monitor was mounted, it pressed up against the others and was difficult to adjust. The second reason was that I have a new video card, a Galaxy nVidia GeForce 210, that requires DVI and not VGA. The version of the Mass display I had didn’t support DVI.

And so I started to look at how to better mount a Dell display on a Mass multi-monitor array. The Dell monitor I used initially was a 1907FP. The general size was about right; it just needed to be lifted up away from the lower monitor a bit. The main problem with the existing mount was that in order to couple the Mass mounting bracket to the Dell mounting bracket, there was really only one location it could be placed without adding additional hardware. The Dell monitor has a small button on the back to release it from its mounting, and the Mass has a lever of sorts that does the same. The coupling had to take both of these removal mechanisms into consideration. I spoke with a colleague about the problem and we came up with a small coupling plate that would raise the Dell monitor up, keep both removal mechanisms clear, and allow for much better adjustment of the resulting monitor array.

Assembly was pretty straightforward. To attach the coupling plate to the Dell monitor, the Dell mount was removed from the original stand, lined up with the coupling plate, and holes were drilled to match.

Once the Dell side was finished, the Mass mount was removed from the original monitor and paired up with the augmented Dell mount.

And finally, the new augmented mounting brackets were attached to both the Dell monitor and the Mass monitor array. The dangling VGA cable was for testing prior to the installation of the new video card.

All that remains now is general adjustment of the new monitors. There's a single hex screw on the Mass array behind each monitor that can be used to adjust the monitors up and down, as well as to angle them. This should allow me to adjust the display to exactly what I need. And it now works with the new video card, which was a breeze to install and get running in Fedora.

I love it when a plan comes together.

Much Ado About Lion

Apple released the latest version of its OS X operating system, Lion, on July 20th. With this release came a myriad of changes in both the UI and back-end systems. Many of these features have been denounced by critics as Apple slowly killing off OS X in favor of iOS. After spending some time with Lion, I have to disagree.

Many of the new UI features are very iOS-like, but I’m convinced that this is not a move to dumb down OS X. I believe this is a move by Apple to make the OS work better with the hardware it sells. Hear me out before you declare me a fanboy and move on.

Since the advent of the unibody MacBook, Apple has been shipping buttonless input devices. The MacBook itself has a large touchpad, sans button. Later, they released the Magic Mouse, sort of a transition device between mice and trackpads; I'm not a fan of that particular device. And today they're shipping the Magic Trackpad: no buttons, lots of room for gestures. Just check out the copy direct from their website.

If you look at a lot of the changes made in Lion, they go hand-in-hand with new gestures. Natural scrolling allows you to move the screen in the same direction your fingers are moving. Swipe three fingers to the left or right, and the desktop you're on moves along with them. Pinch your thumb and three fingers together and Launchpad appears, a quick, simple way to access your applications folder. Similar gestures are available for the Magic Mouse as well.

These gestures allow for quick and simple access to many of the more advanced features of Lion. Sure, iOS had some of these features first, but just because they’ve moved to another platform doesn’t mean that the platforms are merging.

Another really interesting feature in Lion is one that has been around for a while in iOS. When Apple first designed iOS, they likely realized that standard scrollbars chew up a significant amount of screen real estate. Sure, on a regular computer it may be a relatively small percentage, but on a small screen like a phone, it’s significant. So, they designed a thinner scrollbar, minus the arrows normally seen at the top and bottom, and made it auto-hide when the screen isn’t being scrolled. This saved a lot of room on the screen.

Apple has taken the scrollbar feature and integrated it into the desktop OS, and the effect is pretty significant. The amount of room saved on-screen is quite noticeable. I have seen a few complaints about this new feature, however, mostly that it's difficult to grab the scrollbar with the mouse pointer, or that the arrow buttons are gone. I think the former is just a general "they changed something" complaint, while the latter is truly legitimate. There have been a few situations where I've looked for the arrow buttons and their absence was noticeable. I wonder, however, whether this is a function of habit, or if their use is truly necessary. I've been able to work around it pretty easily on my MacBook, but after I install Lion on my Mac Pro, I expect that I'll have a slightly harder time. Unless, that is, I buy a trackpad. As I said, I believe Apple has built this new OS with their newer input devices in mind.

On the back end, Lion is, from what I can tell, completely 64-bit. They have removed Java and Flash, and, interestingly, banned both from their online App Store. No apps that require Java or Flash can be sold there. Interesting move. Additionally, Rosetta, the emulation software that allows older PowerPC software to run, has been removed as well.

Overall, I’m enjoying my Lion experience. I still have the power of a unix-based system with the simplicity of a well thought out GUI interface. I can still do all of the programming I’m used to as well as watch videos, listen to music, and play games. I think I’ll still keep a traditional multi-button mouse around for gaming, though.

Evaluating a Blogging Platform

I've been pondering my choices lately, trying to determine whether I should stay with my current blogging platform or move to another one. There's nothing immediate forcing me to change, nor is there anything overly compelling tying me to the platform I'm currently using. This is an exercise I seem to go through from time to time. It's probably for the better, as it keeps me abreast of what else is out there and allows me to re-evaluate choices I've made in the past.

So, what is out there? Well, Serendipity has grown quite a bit as a blogging platform and is quite well supported. That, in its own right, makes it a worthy choice. The plugin support is quite vast and the API is simple enough that creating new plugins when the need arises is a quick task.

There are some drawbacks, however. Since it’s not quite as popular as some other platforms, interoperability with some things is difficult. For instance, the offline blogging tool I’m using right now, BlogPress, doesn’t work quite right with Serendipity. I believe this might be due to missing features and/or bugs in the Serendipity XMLRPC interface. Fortunately, someone in the community had already debugged the problem and provided a fix.

WordPress is probably one of the more popular platforms right now. Starting a WordPress blog can be as simple as creating a new account at wordpress.com. There’s also the option of downloading the WordPress distribution and hosting it on your own. As with Serendipity, WordPress also has a vibrant community and a significant plugin collection. From what I understand, WordPress also has the ability to be used as a static website, though that’s less of an interest for me. WordPress has wide support in a number of offline blogging tools, including custom applications for iPad and iPhone devices.

There are a number of "cloud" platforms as well. Examples include Tumblr, LiveJournal, and Blogger. These platforms have a wide variety of interoperability with services such as Twitter and Flickr, but you sacrifice control. You are at the complete mercy of the platform provider with very little alternative. For instance, if a provider disagrees with you, they can easily block or delete your content. Or the provider can go out of business, leaving you without access to your blog at all. These, in my book, are significant drawbacks.

Another possible choice is Drupal. I’ve been playing around with Drupal quite a bit, especially since it’s the platform of choice for a lot of projects I’ve been involved with lately. It seems to fit the bill pretty well and is incredibly extensible. In fact, it’s probably the closest I’ve come to actually making a switch up to this point. The one major hurdle I have at the moment is lack of API support for blogging tools. Yes, I’m aware of the BlogAPI module, but according to the project page for it, it’s incomplete, unsupported, and the author isn’t working on it anymore. While I was able to install it and initially connect to the Drupal site, it doesn’t seem that any of the posting functionality works at this time. Drupal remains the strongest competitor at this point and has a real chance of becoming my new platform of choice.

For the time being, however, I’m content with Serendipity. The community remains strong, there’s a new release on the horizon, and, most important, it just works.