Technology in the here and now

I’m writing this while several thousand feet up in the air, on a flight from here to there. I won’t be able to publish it until I land, but that seems to be the exception these days rather than the norm.

And yet, while preparing for takeoff, the same old announcements are made: turn off cell phones and pagers, disable wireless communications on electronic devices. Around me, I hear hurried conversations between passengers as they make sure all of their devices are off. As if a stray radio signal will cause the airplane to suddenly drop from the sky, or prevent it from taking off to begin with.

Why is it that we, as a society, cannot get over these simple hurdles? Plenty of studies have shown that these devices don’t interfere with planes. In fact, some airlines are offering in-flight wireless access, and many have offered in-flight telephone calls. Unless my understanding of flight is severely limited, I’m fairly certain that all of these functions use radio signals to operate. And yet we are still told that stray signals may cause planes to crash or interfere with the pilots’ instrumentation.

We need to get over this hurdle. We need to start spending our time looking to the future, advancing our technology, forging new paths. We need to stop clinging to outdated ideas. Learning from our past mistakes is one thing, and there’s merit in understanding history. But let’s spend our energy wisely and make the simple things we take for granted even better.

Hey KVM, you’ve got your bridge in my netfilter…

It’s always interesting to see how new technologies alter the way we do things. Recently, I worked on firewalling for a KVM-based virtualization platform. From the outset it seemed pretty straightforward: set up iptables on the host and guest and move on. But it’s not that simple, and my google-fu initially failed me when searching for an answer.

The primary issue was that when iptables was enabled on the host, the guests became unavailable.  If you enable logging, you can see the traffic being blocked by the host, thus never making it to the guest.  So how do we do this?  Well, if we start with a generic iptables setup, we have something that looks like this:

# Firewall configuration written by system-config-securitylevel
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT

Adding logging to identify what’s going on is pretty straightforward.  Add two logging lines, one for the INPUT chain and one for the FORWARD chain.  Make sure these are added as the first rules in the chain, otherwise you’ll jump to the RH-Firewall-1-INPUT chain and never make it to the log.

-A INPUT -j LOG --log-prefix "Firewall INPUT: "
-A FORWARD -j LOG --log-prefix "Firewall FORWARD: "
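
If you’re on a CentOS-style box where the rules live in /etc/sysconfig/iptables (which is what I’m assuming here), reloading the firewall and watching the log is quick:

# Reload the firewall rules, then watch the kernel log for the new prefixes
service iptables restart
tail -f /var/log/messages | grep 'Firewall'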

Now, with this in place you can try sending traffic to the guest. If you tail /var/log/messages, you’ll see the blocking done by netfilter. It should look something like this:

Apr 18 12:00:00 example kernel: Firewall FORWARD: IN=br123 OUT=br123 PHYSIN=vnet0 PHYSOUT=eth1.123 SRC=192.168.1.2 DST=192.168.1.1 LEN=56 TOS=0x00 PREC=0x00 TTL=64 ID=18137 DF PROTO=UDP SPT=56712 DPT=53 LEN=36

There are a few things of note here.  First, this occurs on the FORWARD chain only.  The INPUT chain is bypassed completely.  Second, the system recognizes that this is a bridged connection.  This makes things a bit easier to fix.

My attempt at resolving this was to put in a rule that allowed traffic to pass for the bridged interface.  I added the following:

-A FORWARD -i br123 -o br123 -j ACCEPT

This worked as expected and allowed the traffic through the FORWARD chain, making it to the guest unmolested. However, this method means I have to add a rule for every bridge interface I create. While explicitly adding rules for each interface should make this more secure, it means I may need to change iptables while the system is in production and running, not something I want to do.

A bit more googling led me to this post about KVM and iptables.  In short it provides two additional methods for handling this situation.  The first is a more generalized rule for bridged interfaces:

-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Essentially, this rule tells netfilter to accept any traffic for bridged interfaces. This removes the need to add a new rule for each bridged interface you create, making management a bit simpler. The second method is to remove bridged traffic from netfilter’s view entirely. Set the following sysctl variables:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
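
To apply these right away and keep them across reboots, something like the following should do it (assuming a typical setup where sysctl settings live in /etc/sysctl.conf and the bridge module is already loaded so these keys exist):

# Apply the settings immediately
sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sysctl -w net.bridge.bridge-nf-call-iptables=0
sysctl -w net.bridge.bridge-nf-call-arptables=0

# Persist them by appending the same settings to /etc/sysctl.conf,
# then re-read that file to confirm they parse cleanly
cat >> /etc/sysctl.conf << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
sysctl -p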

I’m a little worried about this method as it completely bypasses iptables on the host. However, it appears that this is actually a more secure manner of handling bridged interfaces. According to this bugzilla report and this post, allowing bridged traffic to pass through netfilter on the host can result in a possible security vulnerability. I believe this is somewhat similar to a cryptographic hash collision: attackers may be able to take advantage of netfilter entries with similar IP/port combinations and possibly modify traffic or access systems. By using the sysctl method above, the traffic completely bypasses netfilter on the host and these attacks are no longer possible.

More testing is required, but I believe the latter method of using sysctl is the way to go.  In addition to the security considerations, bypassing netfilter has a positive impact on throughput.  It seems like a win-win from all angles.

Meltdown

Back when the Chernobyl nuclear reactor in Ukraine melted down, I was in grade school. That disaster absolutely fascinated me and I spent a bit of time researching nuclear power, drawing diagrams of reactor designs, and dreaming about being a nuclear scientist.

One thing that stuck with me about that disaster was the sheer power involved. I remember hearing about the roof of the reactor, a massive slab of concrete, having been blown off the building. From what I remember it was tossed many miles away, though I’m having trouble actually confirming that now. No doubt there was a lot of misreporting done at the time.

The reasons behind the meltdown at Chernobyl are still a point of contention, ranging from operator error to design flaws in the reactor. Chances are it was a combination of both. There’s a really detailed report about what happened here. Additional supporting material can be found on Wikipedia.

Today we have the disaster at the Fukushima power plants in Japan. Of course the primary difference from the get-go is that this situation was caused by a natural disaster rather than design flaws or operator error. Honestly, when you get hit with a massive earthquake immediately followed by a devastating tsunami, you’re pretty much starting at screwed.

From what I understand, there are five reactors at two plants that are listed as critical. In two instances, the building housing the reactor has suffered an explosion. Whoa! An explosion? Yes, yes, calm down. It’s not a nuclear explosion as most people know it. Most people equate a nuclear explosion with images of mushroom clouds, nuclear fallout, and radiation sickness. The explosions we’re talking about here were hydrogen explosions resulting from venting the inner containment chamber. Yes, it’s entirely possible that radiation was released, but nothing near the high doses most people associate with a nuclear bomb.

And herein lies a major problem with nuclear power. Not many people understand it, and a large majority are afraid of the consequences. Yes, we have had a massive meltdown, as was the case at Chernobyl. We’ve also had a partial meltdown, as was the case at Three Mile Island. Currently, the disaster in Japan is closer to Three Mile Island than it is to Chernobyl. That, of course, is subject to change. It’s entirely possible that the reactor in Japan will go into a full core meltdown.

But if you look at the overall effects of nuclear power, I believe you can argue that it is cleaner and safer than many other types of power generation. Coal power massively pollutes the atmosphere and leaves behind some rather nasty byproducts that we just don’t have a good way of dealing with. Oil and gas also pollute, both in the atmosphere and in the areas where they are extracted. Hydro, wind, and solar power are, generally speaking, clean, but you need massive installations of each to generate sufficient power.

Nuclear power has carried a negative stigma for so long that research dollars are not being spent on improving the technology. There are severe restrictions on what scientists can research with respect to nuclear power. As a result, we haven’t advanced very far compared to other technologies. If we were to open up research, we could develop significantly safer reactors.

Unfortunately, I think this disaster will make things worse for the nuclear power industry. Despite the fact that this disaster wasn’t caused by design flaws or operator error, the population at large will question the safety of a technology they know little about. Personally, I believe we could make the earth a much cleaner, safer place to live if we were to switch to nuclear power and spend time and effort on making it safer and more efficient.

And finally, a brief note. I’m not a nuclear physicist or engineer, but I have done some background research. I strongly encourage you to do your own research if you’re in doubt about anything I’ve stated. And if I’m wrong about something, please, let me know! I’ll happily make edits to fix incorrect facts.

Games as saviors?

I watched a video yesterday about using video games as a means to help solve world problems. It sounds outrageous at first, until you really think about it. But first, how about watching the video:

Ok, now that you have some background, let’s think about this for a bit. Technology is amazing, and has brought us many advancements. Gaming is one of those advancements. We have the capability of creating entire universes, purely for our own amusement. People spend hours each day exploring these worlds. Players are typically working toward completing goals set forth by the game designers. When a player completes a goal, they are rewarded. Sometimes the rewards are new items, in-game currency, or clues to other goals. Each goal is within the reach of the player, though some goals may require more work to attain.

Ms. McGonigal argues that the devotion players show to games can be harnessed and used to help solve real-world problems. Players feel empowered by games, finding within them a way to control what happens to them. Games teach players that they can accomplish the goals set before them, and that brings with it an excitement to continue.

I had the opportunity to participate in a discussion about this topic with a group of college students. Opinions ranged from a general distaste for gaming, seeing it as a waste of time, to an embrace of the ideas presented in the video. For myself, I believe that many of the ideas Ms. McGonigal presents have a lot of merit. Some of the students argued that such realistic games would be complicated and uninteresting. However, I would argue that such realistic games have already proven to be big hits.

Take, for example, The Sims. The Sims was a huge hit, with players spending hours in the game adjusting various aspects of their characters’ lives. I found the entire phenomenon to be absolutely fascinating. I honestly don’t know what the draw of the game was. Regardless, it did extremely well, proving that such a game could succeed.

Imagine taking a real-world problem and creating a game to represent it. At the very least, such a game can foster conversation about the problem. It can also lead to unique ideas about how to solve it, even though those playing the game may not be well-versed in the topic.

It’s definitely an avenue worth exploring, especially as future generations spend more time online. If we can find a way to harness the energy and excitement that gaming generates, we may be able to find solutions to many of the world’s most perplexing problems.

The Case of the Missing RAID

I have a few servers with hardware RAID directly on the motherboard. They’re not the best boards in the world, but they process my data and serve up the information I want. Recently, I noticed that one of the servers was running on the /dev/sdb* devices, which was extremely odd. Digging some more, it seemed that /dev/sda* existed and seemed to be ok, but wasn’t being used.

After some searching, I was able to determine that the server, when built, actually booted up on /dev/mapper/via_* devices, which were actually the hardware RAID. At some point these devices disappeared. To make matters worse, it seems that kernel updates weren’t being applied correctly. My guess is that either the grub update was failing, or it updated a boot loader somewhere that wasn’t actually being used to boot. As a result, an older kernel was loading, with no way to get to the newer kernel.

I spent some time tonight searching with Google, posting messages on the CentOS forums, and poking around on the system itself. With guidance from a user on the forums, I discovered that my system should be using dmraid, a program that discovers and activates BIOS/firmware RAID sets such as the one I have. Digging around a bit more with dmraid, I found this:

[user@dev ~]$ sudo /sbin/dmraid -ay -v
Password:
INFO: via: version 2; format handler specified for version 0+1 only
INFO: via: version 2; format handler specified for version 0+1 only
RAID set "via_bfjibfadia" was not activated
[user@dev ~]$

Apparently my RAID is running version 2 and dmraid only supports versions 0 and 1. Since this was initially working, I’m at a loss as to why my RAID is suddenly not supported. I suppose I can rebuild the machine, again, and check, but the machine is about 60+ miles from me and I’d rather not have to migrate data anyway.

So how does one go about fixing such a problem? Is my RAID truly not supported? Why did it work when I built the system? What changed? If you know what I’m doing wrong, I’d love to hear from you… This one has me stumped. But fear not, when I have an answer, I’ll post a full writeup!
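
In the meantime, if you’re chasing a similar problem, a couple of dmraid queries are worth running to see what the tool actually supports and what it discovered (your output will obviously differ):

# List the metadata format handlers this dmraid build knows about
sudo /sbin/dmraid -l

# Show the raw RAID devices and sets dmraid discovered, with verbose output
sudo /sbin/dmraid -r -v
sudo /sbin/dmraid -s -v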

Space Photography

Slashdot posted a news item late last evening about some rather stunning photos from the International Space Station. On June 12th, the Sarychev Peak volcano erupted. At the same time, the ISS happened to be right overhead. What resulted was some incredible imagery, provided to the public by NASA. Check out the images below:

You can find more images and information here. Isn’t nature awesome?

NANOG 46 – Final Thoughts

NANOG 46 is wrapping up today and it has been an incredible experience. This particular NANOG seemed to have an underlying IPv6 current to it, and, if you believe the reports, IPv6 is going to have to become the standard in the next couple of years. We’ll be running dual-stack configurations for some time to come, but an IPv6 rollout is necessary.

To date, I haven’t had a lot to do with IPv6. A few years ago I set up one of the many IPv6 shims, just to check out connectivity, but never really went anywhere with it. It was nothing more than a tech demo at the time, with no real content out there to bother with. Content exists today, however, and will continue to grow as time moves on.

IPv6 connectivity is still spotty and problematic for some, though, and there doesn’t seem to be a definitive, workable solution. For instance, if your IPv6 connectivity is not properly configured, you may lose access to some sites because you receive DNS responses pointing you at IPv6 addresses you cannot reach. This results in either a major delay while falling back to IPv4, or complete breakage. So one of the primary questions right now is whether to send AAAA records in response to DNS requests when the IPv6 connectivity of the requester is unknown. Google, from what I understand, is using a whitelist system: when a provider has sufficient IPv6 connectivity, Google adds them to the whitelist and the provider is then able to receive AAAA records.

Those problems aside, I think rolling out IPv6 will be pretty straightforward. My general take is to run dual-stack to start, and probably for the foreseeable future, and get the network handing out IPv6 addresses. Once that’s in place, we can start offering AAAA records for services. I’m still unsure at this point how to handle DNS responses for users with possibly poor v6 connectivity.
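
For example, a quick sanity check is to compare what a name offers over v4 versus v6 and whether you can actually reach the v6 side (the hostname here is just a placeholder):

# Does the name have an IPv4 address?
dig +short A www.example.com

# Does it also have an IPv6 address?
dig +short AAAA www.example.com

# And can we actually reach it over v6?
ping6 -c 3 www.example.com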

Another area of great interest this time around is DNSSEC. I’m still quite skeptical about DNSSEC as a technology, partly due to ignorance, partly due to seeing problems with what I do understand. Rest assured, once I have a better handle on this, I’ll finish up my How DNS Works series.

I’m all for securing the DNS infrastructure and doing something to ensure that DNS cannot be poisoned the same way it can today. DNSSEC aims to add security to DNS such that you can trust the responses you receive. However, I have major concerns with what I’ve seen of DNSSEC so far. One of the bigger problems I see is that each and every domain (zone) needs to be signed. Sure, this makes sense, but my concern is the cost involved to do so. SSL Certificates are not cheap and are a recurring cost. Smaller providers may run into major issues with funding such security. As a result, they will be unable to sign their domains and participate in the secure infrastructure.

Another issue I find extremely problematic is the fallback to TCP. Cryptographic signatures are big, and they get bigger as key sizes grow. As a result, signed DNS responses can exceed the size limits of UDP and fall back to TCP. One reason DNS works so well today is that the DNS server doesn’t have to worry about retransmissions, connection state, and so on. There is no handshake required, and the UDP packets just fly; it’s up to the client to retransmit if necessary. When you move to TCP, the nature of the protocol means that both the client and server need to keep state information and perform any necessary retransmissions. This takes up socket space on the server, takes time, and uses up many more CPU cycles. Based on a lightning talk during today’s session, when the .ORG domain was signed, they saw a 100-fold increase in TCP connections, moving from less than 1 query per second to almost 100. This concerns me greatly as the majority of the Internet has not enabled DNSSEC at this point. I can see this climbing even more, eventually overwhelming the system and bringing DNS to its knees.
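
You can get a rough feel for this yourself by querying a signed zone with DNSSEC records requested while capping the EDNS UDP buffer size (assuming you have dig handy):

# Ask for the SOA of a signed zone with DNSSEC records included, but only
# allow a 512-byte UDP response; the answer won't fit, the server sets the
# truncation bit, and dig retries the query over TCP.
dig +dnssec +bufsize=512 org SOA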

I also believe that moving in this direction will allow the “bad guys” to DoS attack servers in much easier ways as they can easily trigger TCP transactions, perform various TCP-based attacks, and generally muck up the system further.

So what’s the alternative? Well, there is DNSCurve, though I know even less about that as it’s very much a fringe technology at this point. In fact, the first workable patch against djbdns was only released in the past few weeks. It’s going to take some time to absorb what’s out there, but based on the current move to DNSSEC, my general feeling is that no matter how much better DNSCurve may or may not be, it doesn’t have much of a chance. Even so, there’s a lot more to learn in this arena.

I also participated in a Security BOF. BOFs are, essentially, less structured talks on a given subject. There is a bit more audience participation and the audience tends to be a bit smaller. The Security BOF was excellent as there were conversations about abuse, spam, and methods of dealing with each. The spam problem is, of course, widespread and it’s comforting to know that you’re not the only one without a definitive answer. Of course, the flip side of that is that it’s somewhat discouraging to know that even the big guys such as Google are still facing major problems with spam. The conversation as a whole, though, was quite enlightening and I learned a lot.

One of the more exciting parts of NANOG for me, though, was meeting some of the Internet greats. I’ve talked to some of these folks via email and on various mailing lists, but to meet them in person is a rare honor. I was able to meet and speak with Randy Bush and Paul Vixie, both giants in their fields. I was able to rub elbows with folks from Google, Yahoo, and more. I’ve exchanged PGP keys with several people throughout the conference, which serves as something of a geek’s autograph. I have met some incredible people and I look forward to talking with them in the future.

If you’re a network operator, or your interests lie in that direction, I strongly encourage you to make a trip to at least one NANOG in your lifetime. I’m hooked at this point and I’m looking forward to being able to attend more meetings in the future.

Hi, my name is Jason and I Twitter.

As you may have noticed by now, I’ve been using Twitter for a while now. Honestly, I’m not entirely sure I remember what made me decide to make an account to begin with, but I’m pretty sure it’s Wil Wheaton’s fault. But, since I’m an old pro now, I thought perhaps it was time to talk about it…

I’m not a huge fan of social media. I avoid MySpace like the plague. In fact, I’m fairly certain MySpace is a plague carrier… I do have a Facebook account, but that’s because my best friend apparently hates me. I’ll show him, though. I refuse to use the Facebook account for anything more than viewing his updates, then I’ll email him comments. There, take that!

Why do I avoid these? Honestly, it has a lot to do with what I believe are poorly designed and implemented interfaces. Seriously, have you ever seen a decent looking MySpace site? Until yesterday I had avoided Facebook, much for the same reason, and while Facebook definitely looks cleaner, I still find it very cluttered and difficult to navigate. I’m probably not giving Facebook much of a chance as I’ve only seen 3 or 4 profiles, but they all look the same…

But then there’s Twitter. Twitter, I find, is quite interesting. What intrigues me the most is the size restriction. Posting via Twitter is limited to a maximum of 140 characters. Generally, this means you need to think before you post. Sure, you can use that insane texting vocabulary [PDF] popularized by cell phones, but I certainly won’t be following you if you do. Twitter also has a pretty open API which has spawned a slew of third-party apps, as can be seen in the Twitterverse image to the right.

Twitter has a lot of features, some readily apparent, some not. When you first start, it can be a little daunting to figure out what’s going on. There are a bunch of getting started guides out there, including a book from O’Reilly. I’ll toss out some information here as well to get you started.

Most people join Twitter to view the updates from other people. With Twitter, you can pick and choose who you follow. Following someone allows you to see their updates on your local Twitter feed. But even if you don’t follow someone, you can go to that user’s Twitter page and view their updates, unless they’ve marked their account private. Private accounts need to approve you as a follower before you can see their page. Wired has a pretty good list of interesting people to follow on Twitter. Me? I’d recommend Wil Wheaton, Warren Ellis, Tim O’Reilly, Felicia Day, Neil Gaiman, and The Onion to start. Oh yeah.. And me too!

So now you’re following some people and you can see their updates on your Twitter feed. Now, perhaps, you’d like to make updates of your own. Perhaps you’d like to send a message to someone. Well, there are two ways to do this. The most common way is via a reply. To send a reply, precede the username of the person you’re replying to with an @. That’s all there is to it; it looks something like this:

@wilw This twitter thing is pretty slick

Your message will appear in the recipient’s Twitter feed. Of course, if it’s someone as popular as Wil Wheaton, you may never get a response as he tends to get a lot of messages. If you’re one of the few (100 or so) people that Wil follows, you can send him a direct message. Direct messages are only possible between people who follow each other. A direct message is the username preceded by a d. Again, quite simple, like this:

d wilw Wouldn’t it be cool if you actually followed me and this would work?

In a nutshell, that’s enough to get you started with Twitter. If you need more help, Twitter has a pretty decent help site. I recommend using a client to interact with Twitter, perhaps Twitterrific for OS X or Twhirl. Twhirl runs via Adobe AIR, so it’s semi-cross-platform, running on all the majors. Twitter has a list of a few clients on their site.

There are two other Twitter syntaxes I want to touch on briefly. First, there’s the concept of a Re-Tweet. Simply put, a Re-Tweet is a message that someone receives and passes on to their followers. The accepted method of Re-Tweeting is to merely put RT before the message, like so:

RT @wilw You should all follow @XenoPhage, he’s incredible!

Finally, there are hashtags. Hashtags are a mechanism that can be used to search for topics quickly. Hashtags are added to any message by preceding a word with a #, like so:

This thing is pretty slick. I’m really getting the hang of it. Time to install #twitterrific!

Now, if you head over to hashtags.org, you can follow topics and trends, find new people to follow, and more. It’s an interesting way to add metadata that can be used by others without cluttering up a conversation.

So what about the future of Twitter? Well, the future, as usual, is uncertain. That said, there were rumors in April about Google possibly purchasing Twitter, though those talks apparently broke down. Right now, Twitter continues to grow in features and popularity. There is speculation about the future, but no one really knows what will happen. I’m hoping Twitter sticks around for a while, it’s a fun distraction that has some really good uses.

That no good, nothing Internet

At the end of May, the New Yorker hosted a panel discussion called “The Future of Filmmaking.” At that panel, Michael Lynton, Chairman and CEO of Sony Pictures Entertainment, made the following comment (paraphrased version from Wikipedia):

“I’m a guy who doesn’t see anything good having come from the Internet, period. [The internet has] created this notion that anyone can have whatever they want at any given time. It’s as if the stores on Madison Avenue were open 24 hours a day. They feel entitled. They say, ‘Give it to me now,’ and if you don’t give it to them for free, they’ll steal it.”

This statement was like a shot across the bow of the blogosphere and incited ridicule, derision, and a general uproar. In many cases, though, the response was one of incredulity that the CEO of a major content company doesn’t see the bigger picture and cannot see the absolutely amazing advances the Internet has made possible.

Mr. Lynton responded by writing an article for the Huffington Post. He expanded on his comment, saying that the Internet has spawned nothing but piracy and has had a massive impact on “legitimate” business, threatening a number of industries including music, newspapers, books, and movies. He goes on to say that the Internet should be regulated, much as the Interstate Highway System was regulated when it was built in the 1950s.

The problem with his response is that he overlooks the reasons behind much of the piracy and makes a flawed comparison between the Internet and a highway system. This is a gentleman who was formerly the CEO of America Online, one of the first Internet providers. Having been at the forefront of the Internet revolution, he should know better; I would have expected more from him, but apparently not.

At the moment, he’s the head of a major media organization that makes its money by creating content that viewers pay for. For many years the movie industry has created content and released it in a controlled fashion, first to the theater, then to VHS/DVD, and finally to cable television stations. Each phase of the release cycle opened up a new revenue stream for the movie companies, allowing them a continuous source of income. One of Mr. Lynton’s chief complaints is that the Internet has broken this business model. His belief is that people are no longer willing to wait for content and are willing to break the law to get it.

In a way, he’s right. The Internet has allowed this. Of course, this is the price of advancement. Guns allowed murderers and robbers to threaten and kill more people. Cars allowed robbers to escape the scene of the crime faster, making it more difficult for the police to chase them. Telephones made fraud and deception easier to commit and harder to trace. Every advancement in technology has both positive and negative effects.

As new technology is used and as people become more comfortable with it, the benefits generally start to outweigh the drawbacks. Because the Internet is having a global effect, it has shaken up a number of industries. Those industries that are not willing to change and adapt will die, much like the industries of old. When cars were invented, the horse-and-buggy industry did what it could to make owning a car difficult. In the end, it failed and went out of business. When movies were invented, live-theater companies protested and tried to stop them. In the end, movies mostly killed off live theater. Of course, in both instances, traces remain.

The Internet is forcing changes all over. For instance, users are finding their news online through blogs, email, social media, and more. Newspapers have been slow to provide online content and are suffering. Because of the instant nature of the Internet, users are more likely to find their news online, rather than wait for the newspaper to be printed and delivered to their home.

Users want more content in an instant manner, and the industries need to adapt to the new climate. Media companies have not adapted quickly enough, and users have found alternate methods of obtaining the content they want, which often leads to piracy. And this, I think, is the crux of the problem. If the content is available in a quick and easy manner, people will be more likely to obtain it legally. But it has to be provided in a reasonable manner.

Media companies have decided to provide content, with restrictions. They claim the restrictions are there to prevent piracy and protect their so-called intellectual property, but if you look closely, the restrictions always mean that they make more money. Music and movie companies add DRM to their content, restricting its use and, in many cases, causing numerous interoperability problems. In many instances, the company holds the keys that determine whether you can view or listen to the content you paid for, and if the company vanishes, so do the keys and the content along with them.

When movies were provided on VHS, and music on tapes and CDs, people were able to freely copy them. There was piracy back then, too, but the overall effect on the industry was nil. Now, with the advent of the Internet, distribution is easier. What’s interesting to note, however, is that distribution (both legal and illegal) increases awareness. X-Men Origins: Wolverine, the pirated movie that Mr. Lynton mentions in his article, still opened with massive revenues. Why? The pirating of the movie was big news as the FBI was brought in and the movie company ranted and raved. As a result, interest in the movie grew, resulting in a big opening weekend.

It doesn’t always have to be that way, though. Every day, I hear about interesting and new things via the Internet. I have discovered new music, movies, books, and more. I have paid for content I received free over the Internet, purely to give back to the creators. In some cases there is an additional benefit to buying the content, but in others, it’s simply a desire to own a copy. For example, a number of stories by Cory Doctorow were re-imagined as comics. You can freely download the comics online, which I did. At the same time, I’m a fan and I wanted to own a copy, so I went and purchased one. I’ve done the same with books, music, and movies, all things I learned about through the Internet.

In the end, industries must evolve or die. There are many, many companies out there who “get it.” Look at Valve and their Steam service, Netflix and their streaming video, or the numerous music services such as iTunes. It is possible to evolve and live; the trick is knowing when you have to. Maybe it’s time for Mr. Lynton to find a new business model.