Tron 2.0

It looks like Disney is moving forward with their Tron sequel. A sequel to Tron has been rumored since the late ’90s, but nothing ever came of it. Back in 2003, Tron 2.0 was released for the PC. It was originally intended to be a movie, but in the end they decided on a game release first.

Although Tron 2.0 didn’t sell all that well, Disney has apparently decided to move forward with the sequel, titled Tron: Legacy. A teaser for the movie was shown at Comic-Con in 2008, and a newer trailer was shown at Comic-Con in 2009. The new trailer is below.

The teaser was released during a panel discussion at Comic Con. The panelists also showed stills and concept art which you can find here and here.

Tron is one of my all-time favorite movies and I’m looking forward to checking out the sequel.

 

New Marvel Anime

I’m not a die-hard comic nut, but I do enjoy reading the occasional comic. And while my preferences don’t tend toward the typical superhero genre, i.e. Superman, X-Men, etc., I do get a bit geeked out when some of them come to the big screen. Iron Man was a particularly excellent film, as was The Incredible Hulk, the more recent one, not that crappy train wreck that came before it.

Of the comic authors I truly enjoy, Warren Ellis has a pretty decent lead on the rest. He has hit just about every genre I can think of, and each story is unique, enjoyable, engrossing, and, more often than not, pretty gruesome too. At Comic-Con this year, he announced two movie projects he’s working on, complete with trailers! As he pointed out on his blog, these are both test animations, intended to show off the style, not the content, so take them with a grain of salt. That said, I think the style is incredible.

The first trailer here is for Iron Man.

This next trailer, and the one I’m much more excited about, is Wolverine.

Both of these look gorgeous, and I love the marriage of American comic heroes with East Asian anime style. From what Ellis states on his blog, these are directed by Rintaro, the award-winning anime director behind the much-respected Metropolis. While they are being produced for the East Asian market, here’s hoping they are brought to the US as well.

 

Orwellian DRM

On the morning of July 17, 2009, copies of certain books vanished from Kindles across the world. Monetary reparations were deposited into the respective Kindle owners’ accounts. In a stroke of pure irony, one of the deleted books was 1984 by George Orwell.

According to Amazon, these deletions were in response to a request by the rights holder. Amazon goes on to explain that the digital editions of both 1984 and Animal Farm were uploaded to Amazon’s store through a self-service portal. These were “unauthorized” versions of the ebooks and the party responsible for uploading them should not have done so.

In the end, the consumer loses, having been denied content they purchased. Sure, Amazon refunded the money they paid, but how many of those people were in the middle of reading those books? Or had them queued up to read later? And what right does Amazon have to take back something they sold you? To borrow a really good example, that’s like Barnes & Noble coming to your house and taking books off your shelves without permission. Does it make it OK if they leave a check on the table? OK, sure, it’s your house rather than a device dedicated to books, so how about if you kept all of those books in a separate room with its own access? Yeah… you’d still feel violated, wouldn’t you?

What’s interesting is that it’s the book industry doing this, and not the music or movie industry. Given the insane tactics the RIAA has taken over the years, this seems to be right up their alley. And the book industry has always had more openness, what with libraries, selling and swapping books, and so on. But now there’s suddenly a big to-do about DRM and book rights. Interesting how times change.

 

The Case of the Missing RAID

I have a few servers with RAID built into the motherboard. They’re not the best boards in the world, but they process my data and serve up the information I want. Recently, I noticed that one of the servers was running on the /dev/sdb* devices, which was extremely odd. Digging some more, I found that /dev/sda* existed and appeared to be fine, but wasn’t being used.

After some searching, I was able to determine that the server, when built, actually booted from /dev/mapper/via_* devices, which were the motherboard RAID. At some point those devices disappeared. To make matters worse, it seems that kernel updates weren’t being applied correctly. My guess is that either the grub update was failing, or it was updating a boot loader somewhere that wasn’t actually being used to boot. As a result, an older kernel was loading, with no way to get to the newer one.
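
For anyone playing along at home, confirming that theory is mostly a matter of a few standard commands; nothing here is specific to my setup:

# Which kernel is actually running, and which kernels does rpm think are installed?
uname -r
rpm -q kernel

# Where does /boot actually live, and what entries does grub have?
df /boot
cat /boot/grub/grub.conf

# Which block devices does the kernel currently see?
cat /proc/partitions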

I spent some time tonight digging around with Google, posting messages on the CentOS forums, and poking around on the system itself. With guidance from a user on the forums, I discovered that my system should be using dmraid, a tool that discovers and activates RAID devices such as the one I have. Digging around a bit more with dmraid, I found this:

[user@dev ~]$ sudo /sbin/dmraid -ay -v
Password:
INFO: via: version 2; format handler specified for version 0+1 only
INFO: via: version 2; format handler specified for version 0+1 only
RAID set "via_bfjibfadia" was not activated
[user@dev ~]$

Apparently my RAID metadata is version 2, and dmraid’s via format handler only supports versions 0 and 1. Since this was working when I built the box, I’m at a loss as to why my RAID is suddenly not supported. I suppose I could rebuild the machine, again, and check, but the machine is 60+ miles away and I’d rather not have to migrate data anyway.
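
If you want to poke at the metadata yourself, dmraid can dump what it sees on the raw disks; these are stock dmraid options, nothing exotic:

# Show the RAID sets dmraid has discovered, along with their status
sudo /sbin/dmraid -s

# Show the underlying block devices and the metadata format found on each
sudo /sbin/dmraid -r

# List every metadata format this build of dmraid actually supports
sudo /sbin/dmraid -l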

So how does one go about fixing such a problem? Is my RAID truly not supported? Why did it work when I built the system? What changed? If you know what I’m doing wrong, I’d love to hear from you… This one has me stumped. But fear not, when I have an answer, I’ll post a full writeup!

 

Space Photography

Slashdot posted a news item late last evening about some rather stunning photos from the International Space Station. On June 12th, the Sarychev Peak volcano erupted. At the same time, the ISS happened to be right overhead. What resulted was some incredible imagery, provided to the public by NASA. Check out the images below:

You can find more images and information here. Isn’t nature awesome?

NANOG 46 – Final Thoughts

NANOG 46 is wrapping up today and it has been an incredible experience. This particular NANOG seemed to have an underlying IPv6 current to it, and, if you believe the reports, IPv6 is going to have to become the standard within the next couple of years. We’ll be running dual-stack configurations for some time to come, but the IPv6 rollout is necessary.

To date, I haven’t had a lot to do with IPv6. A few years ago I set up one of the many IPv6 shims, just to check out connectivity, but never really went anywhere with it. It was nothing more than a tech demo at the time, with no real content out there to bother with. Content exists today, however, and will continue to grow as time moves on.

IPv6 connectivity is still spotty and problematic for some, though, and there doesn’t seem to be a definitive, workable solution. For instance, if your IPv6 connectivity is not properly configured, you may lose access to some sites: you receive DNS responses pointing you at IPv6 addresses you cannot actually reach, which results in either a major delay while falling back to IPv4 or complete breakage. So one of the primary problems right now is whether or not to send AAAA records in response to DNS requests when the IPv6 connectivity status of the requester is unknown. Google, from what I understand, is using a whitelist system: when a provider has sufficient IPv6 connectivity, Google adds them to the whitelist and that provider’s resolvers are then able to receive AAAA records.
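
You can see the problem for yourself from any shell; dig and ping6 are all it takes (the Google hostname below is just a convenient example of a v6-enabled name):

# Does the name return an AAAA record at all?
dig ipv6.google.com AAAA +short

# Can you actually reach the address you got back?
ping6 -c 3 ipv6.google.com

# Compare against the plain IPv4 path
dig www.google.com A +short
ping -c 3 www.google.com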

Those problems aside, I think rolling out IPv6 will be pretty straightforward. My general take is to run dual-stack to start, and probably for the foreseeable future, and get the network handing out IPv6 addresses. Once that’s in place, we can start offering AAAA records for services. I’m still unsure at this point how to handle DNS responses to users with possibly poor v6 connectivity.
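
Handing out addresses network-wide is router advertisement or DHCPv6 territory, but as a minimal sketch of just putting a v6 address on a single Linux box to start testing with (the prefix below is the 2001:db8::/32 documentation range, not anything routable):

# Add an IPv6 address alongside the existing IPv4 address
ip -6 addr add 2001:db8:100::10/64 dev eth0

# Confirm both address families are configured on the interface
ip addr show dev eth0

# Check that an IPv6 default route exists
ip -6 route show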

Another area of great interest this time around is DNSSEC. I’m still quite skeptical about DNSSEC as a technology, partly due to ignorance, partly due to seeing problems with what I do understand. Rest assured, once I have a better handle on this, I’ll finish up my How DNS Works series.

I’m all for securing the DNS infrastructure and doing something to ensure that DNS cannot be poisoned the way it can today. DNSSEC aims to add security to DNS such that you can trust the responses you receive. However, I have major concerns with what I’ve seen of DNSSEC so far. One of the bigger problems I see is that each and every domain (zone) needs to be signed. Sure, that makes sense, but my concern is the cost involved in doing so. SSL certificates are not cheap and are a recurring cost, and smaller providers may run into major issues funding that kind of security. As a result, they will be unable to sign their domains and participate in the secure infrastructure.
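
For reference, here’s roughly what the mechanics of signing a single zone look like with the BIND 9 tools. Treat this as a sketch: the key filenames are whatever dnssec-keygen generates, the zone name and file are placeholders, and I’m ignoring key rollover and getting DS records into the parent zone entirely.

# Generate a zone-signing key and a stronger key-signing key for example.com
dnssec-keygen -a RSASHA1 -b 1024 -n ZONE example.com
dnssec-keygen -a RSASHA1 -b 2048 -f KSK -n ZONE example.com

# Pull the public keys into the zone file, then sign the zone
cat Kexample.com.+005+*.key >> db.example.com
dnssec-signzone -o example.com db.example.com

# The result is db.example.com.signed, which is what named should load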

Another issue I find extremely problematic is the fallback to TCP. Cryptographic signatures are big, and they get bigger as the key size grows. As a result, DNSSEC responses are exceeding what fits in a UDP packet and falling back to TCP. One reason DNS works so well today is that the server doesn’t have to worry about retransmissions, connection state, etc. There is no handshake required and the UDP packets just fly; it’s up to the client to retransmit if necessary. Move to TCP and the nature of the protocol means that both the client and server need to keep state and perform any necessary retransmissions. This takes up socket space on the server, takes time, and burns many more CPU cycles. Based on a lightning talk during today’s session, when the .ORG zone was signed, they saw roughly a 100-fold increase in TCP connections, moving from less than 1 query per second to almost 100. That concerns me greatly, given that the majority of the Internet hasn’t enabled DNSSEC yet. I can see this climbing even more, eventually overwhelming the system and bringing DNS to its knees.
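
You can watch the size issue yourself with dig now that .ORG is signed, assuming your resolver is DNSSEC-aware; the flags below just control whether DNSSEC records are requested and how large a UDP response is advertised, and the exact byte counts will vary:

# Request DNSKEY records with DNSSEC data and note the MSG SIZE in the output
dig org DNSKEY +dnssec

# Clamp EDNS to the classic 512-byte UDP limit; the response comes back
# truncated and dig retries the query over TCP
dig org DNSKEY +dnssec +bufsize=512

# Or force TCP outright for comparison
dig org DNSKEY +dnssec +tcp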

I also believe that moving in this direction will let the “bad guys” DoS servers much more easily, since they can trigger TCP transactions, perform various TCP-based attacks, and generally muck up the system further.

So what’s the alternative? Well, there is DNSCurve, though I know even less about that as it’s very much a fringe technology at this point. In fact, the first workable patch against djbdns was only released in the past few weeks. It’s going to take some time to absorb what’s out there, but based on the current move to DNSSEC, my general feeling is that no matter how much better DNSCurve may or may not be, it doesn’t have much of a chance. Even so, there’s a lot more to learn in this arena.

I also participated in a Security BOF. BOFs are, essentially, less structured talks on a given subject. There is a bit more audience participation and the audience tends to be a bit smaller. The Security BOF was excellent as there were conversations about abuse, spam, and methods of dealing with each. The spam problem is, of course, widespread and it’s comforting to know that you’re not the only one without a definitive answer. Of course, the flip side of that is that it’s somewhat discouraging to know that even the big guys such as Google are still facing major problems with spam. The conversation as a whole, though, was quite enlightening and I learned a lot.

One of the more exciting parts of NANOG for me, though, was meeting some of the Internet greats. I’ve talked to some of these folks via email and on various mailing lists, but meeting them in person is a rare honor. I was able to meet and speak with both Randy Bush and Paul Vixie, giants in their fields, and rub elbows with folks from Google, Yahoo, and more. I’ve exchanged PGP keys with several people throughout the conference, which serves as something of a geek’s autograph. I have met some incredible people and I look forward to talking with them in the future.

If you’re a network operator, or your interests lie in that direction, I strongly encourage you to make a trip to at least one NANOG in your lifetime. I’m hooked at this point and I’m looking forward to being able to attend more meetings in the future.

 

The Internet Arms Race

I’m here in sunny Philadelphia attending NANOG 46, a conference for network operators. The conference, thus far, has been excellent, with some great information being disseminated. One of the talks was by a long-time Internet pioneer, Paul Vixie. Vixie has had his hands in a lot of different projects, from being the primary author of BIND for many years and starting MAPS back in 1996, to, more recently, his involvement with the Conficker Working Group.

Vixie’s talk was titled “Internet Superbugs and The Art of War,” and it covered the struggle between Internet operators and the “criminal” element that uses the Internet for spam, DDoS attacks, and so on. The crux of the talk was that it costs the bad guys next to nothing to continually evolve their attacks and use the network for their nefarious activities, while it costs the network operators a good deal of time and money to try to stop them.

Years ago, attacks were generally sourced from a single location and it was relatively easy to mitigate them. In addition, tracking down the source of the attack was simple enough, so legal action could be taken. At the very least, the network provider upstream from the attacker could disable the account and stop the attack.

Fast forward to today and we have botnets that are used for sending spam, performing DDoS attacks, and causing other sorts of havoc. It becomes next to impossible to mitigate a DDoS attack because it can be sourced from hundreds or thousands of machines simultaneously. This costs the bad guys nothing to deploy, because users are largely ignorant of the importance of patching and securing their machines, which leaves millions of exploitable machines on the Internet. The bad guys write viruses, worms, trojans, etc. that infect these machines and turn them into zombies for their botnets.

Fighting these attacks becomes an exercise in futility. We use blacklists to block traffic from places we know are sending spam, we use anti-virus software to prevent infection of our machines, and more. When Conficker was detected and analyzed, researchers realized that it represented a new evolution in attacks: Conficker used cryptographic signatures to verify its updates, pseudo-random lists of websites to fetch them from, and more. The website lists are an excellent example of the costs paid by the good guys versus the bad guys.

The first generation of Conficker used a generated list of update sites: 250 domains per day, making it difficult, but not impossible, to mitigate. So the people fighting the outbreak started buying up those domains in an attempt to prevent Conficker from updating. The authors of Conficker responded by upping the list to 50,000 domains per day, making it nearly impossible to buy them all. Fortunately, the people working to prevent the outbreak were able to work with ICANN and the various ccTLD registries to monitor and block purchases of these domains, and domains that already existed were checked to ensure they weren’t hosting the new version of Conficker.

Vixie brought up an interesting point about all of this activity, though. The authors of Conficker made a relatively simple change to make it use 50,000 domains; the people fighting Conficker spent many hours and days, not to mention a significant amount of money, to mitigate it. Smaller ccTLD registries that don’t have 24×7 abuse staff are unable to cope. They don’t have the budget to do all of this work for free, and as the workload climbs, they’re more likely to turn a blind eye.

All of this, in turn, means that our current mode of reacting to these attacks and mitigating them does not scale. It merely results in lost revenue and frustration. Additionally, creating lists of places to avoid, generating lists of bad content, etc. will never be able to scale over time. There is a breaking point, somewhere, and at that point we have no recourse unless we change our way of thinking.

Along the same line of thought, I came across a pretty decent quote today, originally posted by Don Franke of (ISC)²:

“PC security is no longer about a virus that trashes your hard drive. It’s about botnets made up of millions of unpatched computers that attack banks, infrastructures, governments. Bandwidth caps will contribute to this unless the thinking of Internet providers and OS vendors change. Because we are all inter-connected now.”

If you read the original post, it explains how moving to bandwidth caps will only exacerbate the security problem: users will no longer be interested in spending their limited bandwidth on downloading updates, preferring to save it for things they’re actually interested in.

Overall, it was a very interesting talk and a very different way of thinking. There is no definitive answer as to what direction we need to go in to resolve this, but it’s definitely something that needs to be investigated.

 

Holy .. Green?

So yeah, the background of the site is green. Why? Simply put, it’s a show of support for those in Iran fighting for their freedom. Check out the main media outlets, CNN, BBC, etc., and you can follow more on my other blog if you are so inclined. I’m not going to post Iran-related updates here; this is a tech blog. But I’ll show my support nonetheless.

 

Hi, my name is Jason and I Twitter.

As you may have noticed by now, I’ve been using Twitter for a while now. Honestly, I’m not entirely sure I remember what made me decide to make an account to begin with, but I’m pretty sure it’s Wil Wheaton’s fault. But, since I’m an old pro now, I thought perhaps it was time to talk about it…

I’m not a huge fan of social media. I avoid MySpace like the plague. In fact, I’m fairly certain MySpace is a plague carrier… I do have a Facebook account, but that’s because my best friend apparently hates me. I’ll show him, though. I refuse to use the Facebook account for anything more than viewing his updates, then I’ll email him comments. There, take that!

Why do I avoid these? Honestly, it has a lot to do with what I believe are poorly designed and implemented interfaces. Seriously, have you ever seen a decent-looking MySpace page? Until yesterday I had avoided Facebook for much the same reason, and while Facebook definitely looks cleaner, I still find it very cluttered and difficult to navigate. I’m probably not giving Facebook much of a chance, as I’ve only seen 3 or 4 profiles, but they all look the same…

But then there’s Twitter. Twitter, I find, is quite interesting. What intrigues me the most is the size restriction: posts are limited to a maximum of 140 characters. Generally, this means you need to think before you post. Sure, you can use that insane texting vocabulary [PDF] popularized by cell phones, but I certainly won’t be following you if you do. Twitter also has a pretty open API, which has spawned a slew of third-party apps, as can be seen in the Twitterverse image to the right.
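
As an illustration of how simple that API is (or at least was the last time I looked), posting an update is a single authenticated HTTP request. This is a sketch from memory of the basic-auth REST endpoints, so double-check the details against the API docs before relying on it:

# Post a status update using HTTP basic auth
curl -u username:password -d status="Trying out the Twitter API" http://twitter.com/statuses/update.xml

# Fetch the timeline of the people you follow the same way
curl -u username:password http://twitter.com/statuses/friends_timeline.xml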

Twitter has a lot of features, some readily apparent, some not. When you first start, it can be a little daunting to figure out what’s going on. There are a bunch of getting started guides out there, including a book from O’Reilly. I’ll toss out some information here as well to get you started.

Most people join Twitter to view the updates from other people. With Twitter, you can pick and choose who you follow. Following someone means their updates show up in your local Twitter feed. But even if you don’t follow someone, you can go to that user’s Twitter page and view their updates, unless they’ve marked their account private; private accounts need to approve you as a follower before you can see their page. Wired has a pretty good list of interesting people to follow on Twitter. Me? I’d recommend Wil Wheaton, Warren Ellis, Tim O’Reilly, Felicia Day, Neil Gaiman, and The Onion to start. Oh yeah… and me too!

So now you’re following some people and you can see their updates on your Twitter feed. Now, perhaps, you’d like to make updates of your own, or send a message to someone. There are two ways to do this. The most common way is via a reply. To send a reply, precede the username of the person you’re replying to with an @. That’s all there is to it; it looks something like this:

@wilw This twitter thing is pretty slick

Your message will appear in the recipient’s Twitter feed. Of course, if it’s someone as popular as Wil Wheaton, you may never get a response, as he tends to get a lot of messages. If you’re one of the few (100 or so) people that Wil follows, you can send him a direct message. Direct messages are only possible between people who follow each other. A direct message is the username preceded by a d. Again, quite simple, like this:

d wilw Wouldn’t it be cool if you actually followed me and this would work?

In a nutshell, that’s enough to get you started with Twitter. If you need more help, Twitter has a pretty decent help site. I recommend using a client to interact with Twitter, perhaps Twitterrific for OS X or Twhirl. Twhirl runs via Adobe AIR, so it’s semi-cross-platform, running on all the majors. Twitter has a list of a few clients on their site.

There are two other bits of Twitter syntax I want to touch on briefly. First, there’s the concept of a Re-Tweet. Simply put, a Re-Tweet is a message that someone receives and passes on to their followers. The accepted method of Re-Tweeting is to simply put RT before the message, like so:

RT @wilw You should all follow @XenoPhage, he’s incredible!

Finally, there are hashtags. Hashtags are a mechanism for quickly searching out topics; you add one to a message by preceding a word with a #, like so:

This thing is pretty slick. I’m really getting the hang of it. Time to install #twitterrific!

Now, if you head over to hashtags.org, you can follow topics and trends, find new people to follow, and more. It’s an interesting way to add metadata that can be used by others without cluttering up a conversation.

So what about the future of Twitter? Well, the future, as usual, is uncertain. That said, there were rumors in April about Google possibly purchasing Twitter, though those talks apparently broke down. Right now, Twitter continues to grow in features and popularity. There is speculation about what comes next, but no one really knows what will happen. I’m hoping Twitter sticks around for a while; it’s a fun distraction that has some really good uses.

 

That no good, nothing Internet

At the end of May, the New Yorker hosted a panel discussion called “The Future of Filmmaking.” At that panel, Michael Lynton, Chairman and CEO of Sony Pictures Entertainment, made the following comment (paraphrased version from Wikipedia):

“I’m a guy who doesn’t see anything good having come from the Internet, period. [The internet has] created this notion that anyone can have whatever they want at any given time. It’s as if the stores on Madison Avenue were open 24 hours a day. They feel entitled. They say, ‘Give it to me now,’ and if you don’t give it to them for free, they’ll steal it.”

This statement was like a shot across the bow of the blogosphere and incited ridicule, derision, and a general uproar. In many cases, though, the response was one of incredulity that the CEO of a major content company couldn’t see the bigger picture or the absolutely amazing advances the Internet has made possible.

Mr. Lynton responded by writing an article for the Huffington Post. He expanded on his comment, saying that the Internet has spawned nothing but piracy and has had a massive impact on “legitimate” business, threatening a number of industries including music, newspapers, books, and movies. He goes on to say that the Internet should be regulated, much like the Interstate Highway System was when it was built in the 1950s.

The problem with his response is that he overlooks the reason behind much of the piracy and makes a flawed comparison between the Internet and a highway system. This is a gentleman who was formerly the CEO of America Online, one of the first Internet providers. Given that he was at the forefront of the Internet revolution, I would have expected more from him, but apparently not.

At the moment, he’s the head of a major media organization that makes its money by creating content that viewers pay for. For many years the movie industry has created content and released it in a controlled fashion, first to theaters, then to VHS/DVD, and finally to cable television stations. Each phase of the release cycle opened up a new revenue stream for the movie companies, allowing them a continuous source of income. One of Mr. Lynton’s chief complaints is that the Internet has broken this business model: his belief is that people are no longer willing to wait for content and are willing to break the law to get it.

In a way, he’s right. The Internet has allowed this. Of course, this is the price of advancement. Guns allowed murderers and robbers to threaten and kill more people. Cars allowed robbers to escape the scene of the crime faster, making it more difficult for the police to chase them. Telephones have made fraud and deception easy and difficult to trace. Every advancement in technology has both positive and negative effects.

As new technology is used and as people become more comfortable with it, the benefits generally start to outweigh the drawbacks. Because the Internet is having a global effect, it has shaken up a number of industries, and those industries that are unwilling to change and adapt will die, much like industries of old. When cars were invented, the horse-and-buggy industry did what it could to make owning a car difficult. In the end, it failed and went out of business. When movies were invented, the theater companies protested and tried to stop them; in the end, movies mostly killed off live theater. Of course, in both instances, traces remain.

The Internet is forcing changes all over. For instance, users are finding their news online through blogs, social media, email, and more. Newspapers have been slow to provide online content and are suffering; because of the instant nature of the Internet, users are more likely to find their news online than wait for a newspaper to be printed and delivered to their home.

Users want content instantly, and the industries need to adapt to the new climate. Media companies have not adapted quickly enough, so users have found alternate ways to get the content they want, which often means piracy. And this, I think, is the crux of the problem. If content is available in a quick and easy manner, people will be more likely to obtain it legally. But it has to be provided in a reasonable manner.

Media companies have decided to provide content, but with restrictions. They claim the restrictions are there to prevent piracy and protect their so-called intellectual property, but if you look closely, the restrictions always mean more money for them. Music and movie companies add DRM to their content, restricting its use and, in many cases, causing numerous interoperability problems. In many instances, the company holds the keys to whether you can view or listen to your content at all, and if the company vanishes, so do the keys, along with the content you paid for.

When movies were provided on VHS, and music on tapes and CDs, people were able to freely copy them. There was piracy back then, too, but the overall effect on the industry was nil. Now, with the advent of the Internet, distribution is easier. What’s interesting to note, however, is that distribution (both legal and illegal) increases awareness. X-Men Origins: Wolverine, the pirated movie that Mr. Lynton mentions in his article, still opened with massive revenues. Why? The pirating of the movie was big news as the FBI was brought in and the movie company ranted and raved. As a result, interest in the movie grew, resulting in a big opening weekend.

It doesn’t always have to be that way, though. Every day, I hear about interesting new things via the Internet. I have discovered new music, movies, books, and more. I have paid for content I received free over the Internet, purely to give back to the creators. In some cases there is an additional benefit to buying the content, but in others, it’s simply a desire to own a copy. For example, a number of stories by Cory Doctorow were re-imagined as comics. You can freely download the comic online, which I did. At the same time, I’m a fan and I wanted to own a copy, so I went and purchased one. I’ve done the same with books, music, and movies, all things I learned about through the Internet.

In the end, industries must evolve or die. There are many, many companies out there that “get it.” Look at Valve and their Steam service, Netflix and their streaming video, or the numerous music services such as iTunes. It is possible to evolve and live; the trick is knowing when you have to. Maybe it’s time for Mr. Lynton to find a new business model.