Space Photography

Slashdot posted a news item late last evening about some rather stunning photos from the International Space Station. On June 12th, the Sarychev Peak volcano erupted. At the same time, the ISS happened to be right overhead. What resulted was some incredible imagery, provided to the public by NASA. Check out the images below:

You can find more images and information here. Isn’t nature awesome?

NANOG 46 – Final Thoughts

NANOG 46 is wrapping up today, and it has been an incredible experience. This particular NANOG seemed to have an underlying IPv6 current to it, and if you believe the reports, IPv6 is going to have to become the standard within the next couple of years. We’ll be running dual-stack configurations for some time to come, but an IPv6 rollout is necessary.

To date, I haven’t had a lot to do with IPv6. A few years ago I set up one of the many IPv6 shims, just to check out connectivity, but never really went anywhere with it. It was nothing more than a tech demo at the time, with no real content out there to bother with. Content exists today, however, and will continue to grow as time moves on.

IPv6 connectivity is still spotty and problematic for some, though, and there doesn’t seem to be a definitive, workable solution. For instance, if your IPv6 connectivity is not properly configured, you may lose access to some sites: you receive DNS responses pointing you at IPv6 content that you cannot actually reach. This results in either a major delay while the connection falls back to IPv4, or complete breakage. So one of the primary problems right now is whether or not to send AAAA records in response to DNS requests when the IPv6 connectivity status of the requester is unknown. Google, from what I understand, uses a whitelist system: once a provider demonstrates sufficient IPv6 connectivity, Google adds them to the whitelist and that provider’s resolvers start receiving AAAA records.
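To see the problem in action, you can check what a given hostname resolves to for each protocol family. Here’s a quick sketch in Python (the hostname is just a placeholder); a host that hands you AAAA records when your v6 path is broken is exactly the case where the browser sits there waiting for a timeout:

import socket

host = "www.example.com"  # placeholder hostname

# Ask for IPv6 (AAAA) results; an empty list means no v6 answers.
try:
    v6 = socket.getaddrinfo(host, 80, socket.AF_INET6)
except socket.gaierror:
    v6 = []

# Ask for IPv4 (A) results for comparison.
v4 = socket.getaddrinfo(host, 80, socket.AF_INET)

print "AAAA answers:", [r[4][0] for r in v6]
print "A answers:   ", [r[4][0] for r in v4]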

Those problems aside, I think rolling out IPv6 will be pretty straightforward. My general take is to run dual-stack to start, and probably for the foreseeable future, and get the network handing out IPv6 addresses. Once that’s in place, we can start offering AAAA records for services. I’m still unsure at this point how to handle DNS responses for users with potentially poor v6 connectivity.
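On the services side, much of the dual-stack work is just making sure daemons listen over both protocols. A minimal sketch, assuming a Linux-like system where a single IPv6 socket can also accept IPv4 clients (the port is arbitrary, and not every platform lets you toggle IPV6_V6ONLY):

import socket

# One IPv6 listener that also accepts IPv4 clients as v4-mapped addresses.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.bind(('::', 8080))   # '::' means all IPv6 (and mapped IPv4) addresses
s.listen(5)

conn, addr = s.accept()
print "connection from", addr[0]   # IPv4 clients show up as ::ffff:a.b.c.d
conn.close()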

Another area of great interest this time around is DNSSEC. I’m still quite skeptical about DNSSEC as a technology, partly due to ignorance, partly due to seeing problems with what I do understand. Rest assured, once I have a better handle on this, I’ll finish up my How DNS Works series.

I’m all for securing the DNS infrastructure and doing something to ensure that DNS cannot be poisoned the way it can today. DNSSEC aims to add security to DNS such that you can trust the responses you receive. However, I have major concerns with what I’ve seen of DNSSEC so far. One of the bigger problems I see is that each and every domain (zone) needs to be signed. Sure, that makes sense, but my concern is the cost involved in doing so. SSL certificates are not cheap, and they are a recurring cost. Smaller providers may run into real trouble funding that kind of security, and as a result they will be unable to sign their domains and participate in the secure infrastructure.

Another issue I find extremely problematic is the fallback to TCP. Cryptographic signatures are large, and they get larger as the key size grows. As a result, DNSSEC responses are exceeding the practical size limit for UDP and falling back to TCP. One reason DNS works so well today is that the server doesn’t have to worry about retransmissions, connection state, and so on. There is no handshake required, and the UDP packets just fly; it’s up to the client to retransmit if necessary. When you move to TCP, the nature of the protocol means that both the client and the server need to keep state and perform any necessary retransmissions. This takes up socket space on the server, takes time, and burns many more CPU cycles. Based on a lightning talk during today’s session, when the .ORG zone was signed, they saw roughly a 100-fold increase in TCP connections, moving from less than 1 TCP query per second to almost 100. This concerns me greatly, as the majority of the Internet has not enabled DNSSEC at this point. I can see this climbing even further, eventually overwhelming servers and bringing DNS to its knees.
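To make the fallback concrete, here’s roughly what a resolver has to do when a signed answer doesn’t fit in UDP. This is a sketch using the dnspython library; the resolver address is a placeholder, and in practice you’d query whatever server is authoritative for the signed zone:

import dns.flags
import dns.message
import dns.query

RESOLVER = '192.0.2.53'   # placeholder; substitute a real resolver address

# Ask for the DNSKEY RRset of .org with the DNSSEC-OK bit set.
q = dns.message.make_query('org.', 'DNSKEY', want_dnssec=True)
resp = dns.query.udp(q, RESOLVER, timeout=5)

if resp.flags & dns.flags.TC:
    # The signed answer was too big for a UDP datagram, so the server set
    # the truncation bit and the entire query has to be redone over TCP.
    resp = dns.query.tcp(q, RESOLVER, timeout=5)

print len(resp.answer), "answer RRsets"

Every one of those TCP retries is a three-way handshake, connection state, and teardown that the server has to carry.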

I also believe that moving in this direction will allow the “bad guys” to DoS attack servers in much easier ways as they can easily trigger TCP transactions, perform various TCP-based attacks, and generally muck up the system further.

So what’s the alternative? Well, there is DNSCurve, though I know even less about that as it’s very much a fringe technology at this point. In fact, the first workable patch against djbdns was only released in the past few weeks. It’s going to take some time to absorb what’s out there, but based on the current move to DNSSEC, my general feeling is that no matter how much better DNSCurve may or may not be, it doesn’t have much of a chance. Even so, there’s a lot more to learn in this arena.

I also participated in a Security BOF. BOFs (birds-of-a-feather sessions) are, essentially, less structured talks on a given subject; there is more audience participation, and the audience tends to be smaller. The Security BOF was excellent, with conversations about abuse, spam, and methods of dealing with each. The spam problem is, of course, widespread, and it’s comforting to know that you’re not the only one without a definitive answer. The flip side is that it’s somewhat discouraging to learn that even the big guys such as Google are still facing major problems with spam. The conversation as a whole, though, was quite enlightening and I learned a lot.

One of the more exciting parts of NANOG for me, though, was meeting some of the Internet greats. I’ve talked to some of these folks via email and on various mailing lists, but meeting them in person is a rare honor. I was able to meet and speak with both Randy Bush and Paul Vixie, giants in their fields. I rubbed elbows with folks from Google, Yahoo, and more. I’ve exchanged PGP keys with several people throughout the conference, which serves as a geek’s autograph of sorts. I have met some incredible people and I look forward to talking with them in the future.

If you’re a network operator, or your interests lie in that direction, I strongly encourage you to make a trip to at least one NANOG in your lifetime. I’m hooked at this point and I’m looking forward to being able to attend more meetings in the future.

 

Hi, my name is Jason and I Twitter.

As you may have noticed by now, I’ve been using Twitter for a while now. Honestly, I’m not entirely sure I remember what made me decide to make an account to begin with, but I’m pretty sure it’s Wil Wheaton’s fault. But, since I’m an old pro now, I thought perhaps it was time to talk about it…

I’m not a huge fan of social media. I avoid MySpace like the plague. In fact, I’m fairly certain MySpace is a plague carrier… I do have a Facebook account, but that’s because my best friend apparently hates me. I’ll show him, though. I refuse to use the Facebook account for anything more than viewing his updates, then I’ll email him comments. There, take that!

Why do I avoid these? Honestly, it has a lot to do with what I believe are poorly designed and implemented interfaces. Seriously, have you ever seen a decent looking MySpace site? Until yesterday I had avoided Facebook for much the same reason, and while Facebook definitely looks cleaner, I still find it very cluttered and difficult to navigate. I’m probably not giving Facebook much of a chance, as I’ve only seen 3 or 4 profiles, but they all look the same…

But then there’s Twitter. Twitter, I find, is quite interesting. What intrigues me the most is the size restriction: posts are limited to a maximum of 140 characters. Generally, this means you need to think before you post. Sure, you can use that insane texting vocabulary [PDF] made popular by cell phones, but I certainly won’t be following you if you do. Twitter also has a pretty open API, which has spawned a slew of third-party apps, as can be seen in the Twitterverse image to the right.
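For the curious, talking to that API from a script is pretty trivial. Here’s a rough Python sketch of posting a status through the basic-auth statuses/update call that most third-party clients are built on; the credentials are obviously placeholders, and the endpoint details may change, so treat this as illustrative rather than gospel:

import urllib
import urllib2

username, password = 'your_username', 'your_password'   # placeholders

# Set up HTTP basic auth for the twitter.com API endpoints.
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://twitter.com/', username, password)
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))

# POST the status update (140 characters or less, of course).
data = urllib.urlencode({'status': 'This #twitter thing is pretty slick'})
response = opener.open('http://twitter.com/statuses/update.json', data)
print response.read()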

Twitter has a lot of features, some readily apparent, some not. When you first start, it can be a little daunting to figure out what’s going on. There are a bunch of getting started guides out there, including a book from O’Reilly. I’ll toss out some information here as well to get you started.

Most people join Twitter to view the updates from other people. With Twitter, you can pick and choose who you follow. Following someone allows you to see their updates on your local Twitter feed. But even if you don’t follow someone, you can go to that user’s Twitter page and view their updates, unless they’ve marked their account private. Private accounts need to approve you as a follower before you can see their page. Wired has a pretty good list of interesting people to follow on Twitter. Me? I’d recommend Wil Wheaton, Warren Ellis, Tim O’Reilly, Felicia Day, Neil Gaiman, and The Onion to start. Oh yeah.. And me too!

So now you’re following some people and you can see their updates in your Twitter feed. Now, perhaps, you’d like to make updates of your own, or send a message to someone. There are two ways to send someone a message. The most common is a reply. To send a reply, precede the username of the person you’re replying to with an @. That’s all there is to it; it looks something like this:

@wilw This twitter thing is pretty slick

Your message will appear in the recipient’s Twitter feed. Of course, if it’s someone as popular as Wil Wheaton, you may never get a response, as he tends to get a lot of messages. If you’re one of the few (100 or so) people that Wil follows, you can send him a direct message. You can only send a direct message to someone who follows you. A direct message is the recipient’s username preceded by a d. Again, quite simple, like this:

d wilw Wouldn’t it be cool if you actually followed me and this would work?

In a nutshell, that’s enough to get you started with Twitter. If you need more help, Twitter has a pretty decent help site. I recommend using a client to interact with Twitter, perhaps Twitterrific for OS X or Twhirl. Twhirl runs on Adobe AIR, so it’s semi-cross-platform, running on all the major operating systems. Twitter has a list of clients on their site.

There are two other Twitter syntaxes I want to touch on briefly. First, there’s the concept of a Re-Tweet. Simply put, a Re-Tweet is a message that someone receives and passes on to their followers. The accepted method of Re-Tweeting is to merely put RT before the message, like so:

RT @wilw You should all follow @XenoPhage, he’s incredible!

Finally, there are hashtags. Hashtags are a mechanism for quickly finding topics via search. You add a hashtag to any message by preceding a word with a #, like so:

This #twitter thing is pretty slick. I’m really getting the hang of it. Time to install #twitterific!

Now, if you head over to hashtags.org, you can follow topics and trends, find new people to follow, and more. It’s an interesting way to add metadata that can be used by others without cluttering up a conversation.

So what about the future of Twitter? Well, the future, as usual, is uncertain. That said, there were rumors in April about Google possibly purchasing Twitter, though those talks apparently broke down. Right now, Twitter continues to grow in features and popularity. There is speculation about the future, but no one really knows what will happen. I’m hoping Twitter sticks around for a while; it’s a fun distraction that has some really good uses.

 

That no good, nothing Internet

At the end of May, the New Yorker hosted a panel discussion called “The Future of Filmmaking.” At that panel, Michael Lynton, Chairman and CEO of Sony Pictures Entertainment, made the following comment (paraphrased version from Wikipedia):

“I’m a guy who doesn’t see anything good having come from the Internet, period. [The internet has] created this notion that anyone can have whatever they want at any given time. It’s as if the stores on Madison Avenue were open 24 hours a day. They feel entitled. They say, ‘Give it to me now,’ and if you don’t give it to them for free, they’ll steal it.”

This statement was like a shot across the bow of the blogosphere and incited ridicule, derision, and a general uproar. In many cases, though, the response was one of incredulity that the CEO of a major content company doesn’t see the bigger picture and cannot see the absolutely amazing advances the Internet has made possible.

Mr. Lynton responded by writing an article for the Huffington Post. He expanded on his comment, saying that the Internet has spawned nothing but piracy and has had a massive impact on “legitimate” business, threatening a number of industries including music, newspapers, books, and movies. He goes on to say that the Internet should be regulated, much like the Interstate Highway System was regulated when it was built in the 1950s.

The problem with his response is that he overlooks the reason behind much of the piracy and makes a flawed comparison between the Internet and a highway system. This is a gentleman who was formerly the CEO of America Online, one of the first Internet providers. Having been at the forefront of the Internet revolution, he should have a better grasp of the bigger picture, but apparently he does not.

At the moment, he’s the head of a major media organization that makes their money by creating content that the viewers pay for. For many years the movie industry has created content and released it in a controlled fashion, first to the theatre, then to VHS/DVD, and finally to cable television stations. Each phase of the release cycle opened up a new revenue stream for the movie companies, allowing them a continuous source of income. One of Mr. Lynton’s chief complaints is that the Internet has broken this business model. His belief is that people are no longer willing to wait for content and are willing to break the law to get their content.

In a way, he’s right. The Internet has allowed this. Of course, this is the price of advancement. Guns allowed murderers and robbers to threaten and kill more people. Cars allowed robbers to escape the scene of the crime faster, making it more difficult for the police to chase them. Telephones have made fraud and deception easy and difficult to trace. Every advancement in technology has both positive and negative effects.

As new technology is used and as people become more comfortable with it, the benefits generally start to outweigh the drawbacks. Because the Internet is having a global effect, it has shaken up a number of industries. Those industries that are not willing to change and adapt will die, much like the industries of old. When cars were invented, the horse and buggy industry did what it could to make owning a car difficult. In the end, it failed and went out of business. When movies were invented, the theater companies protested and tried to stop them. In the end, movies mostly killed off live theater. Of course, in both instances, traces remain.

The Internet is forcing changes all over. For instance, users are finding their news online through social media such as blogs, email, and more. Newspapers have been slow to provide online content and are suffering. Because of the instant nature of the Internet, users are more likely to find their news online, rather than wait for the newspaper to be printed and delivered to their home.

Users want content instantly, and the industries need to adapt to the new climate. Media companies have not adapted quickly enough, and users have found alternate methods of getting the content they want, which often means piracy. And this, I think, is the crux of the problem. If the content is available in a quick and easy manner, people will be more likely to obtain it legally. But it has to be provided in a reasonable manner.

Media companies have decided to provide content, but with restrictions. They claim the restrictions are there to prevent piracy and protect their so-called intellectual property, but if you look closely, the restrictions always mean they make more money. Music and movie companies add DRM to their content, restricting its use and, in many cases, causing numerous interoperability problems. Content is provided through the company, and in many instances the company holds the keys to whether you can view or listen to what you paid for; if the company vanishes, so do the keys, and so does your content.

When movies were provided on VHS, and music on tapes and CDs, people were able to copy them freely. There was piracy back then, too, but the overall effect on the industry was negligible. Now, with the advent of the Internet, distribution is easier. What’s interesting to note, however, is that distribution (both legal and illegal) increases awareness. X-Men Origins: Wolverine, the pirated movie that Mr. Lynton mentions in his article, still opened with massive revenues. Why? The pirating of the movie was big news as the FBI was brought in and the movie company ranted and raved. As a result, interest in the movie grew, resulting in a big opening weekend.

It doesn’t always have to be that way, though. Every day, I hear about interesting new things via the Internet. I have discovered new music, movies, books, and more. I have paid for content I received free over the Internet, purely to give back to the creators. In some cases there is an additional benefit to buying the content, but in others, it’s simply a desire to own a copy. For example, a number of stories by Cory Doctorow were re-imagined as comics. You can freely download the comics online, which I did. At the same time, I’m a fan and I wanted to own a copy, so I went and purchased one. I’ve done the same with books, music, and movies, all things I learned about through the Internet.

In the end, industries must evolve or die. There are many, many companies out there who “get it.” Look at Valve and their Steam service. How about Netflix and their streaming video content? Or the numerous music services such as iTunes? It is possible to evolve and live; the trick is knowing when you have to. Maybe it’s time for Mr. Lynton to find a new business model.

 

Search gets … smarter?

Wolfram Research, makers of Mathematica, a leading computational software package, have developed a new search engine, Wolfram Alpha. Wolfram Alpha has been hailed by some as a “Google Killer,” and by others as a possible “Propaganda Machine.” Incidentally, if you type “iraq war” into Wolfram Alpha as the propaganda article mentions, you get the following:

And that seems to be the major difference between Wolfram Alpha and a typical search engine. Wolfram Alpha is more of a calculation machine than a search engine. Type in something that can’t really be calculated, say you’re looking for a Ferrari, and you get the following:

Wolfram Alpha just doesn’t know what to do with that. Of course, that should cut down on the porn spam quite a bit…

There are some funny bits, though. For instance, ask for a calculation such as “What is the airspeed velocity of an unladen swallow,” and you may actually get an answer:

Or, perhaps, “What is the ultimate answer to life, the universe, and everything?”

Overall, Wolfram Alpha seems to be a pretty decent source for statistical and mathematical information. For instance, type in “Google” and you get a plethora of information about Google, the company:

Choose to view the information on Google as a word, and you get this:

Though, I find it surprising that it doesn’t suggest the origin of the word itself, “googol.” However, if you search for “googol” it does have an accurate answer:

Ultimately, I don’t think it’s anything close to a “Google Killer,” but it definitely has potential, both in the academic community and with students overall. Google won’t just roll over, though, and has announced the launch of a new Google Labs project, Google Squared. Google Squared is an attempt to organize the data on the web into a format that seems to be more usable for researchers. Time will tell, though, as Squared hasn’t launched yet.

I encourage you to take the Wolfram Alpha engine for a spin, see what you can find. I think you’ll be pleasantly surprised at the incredible amount of useful information it has. And, assuming it survives, it will only get better as time goes on.

 

Slaves to Technology?

Over the past few years I have slowly moved from carrying cash to using my debit card for purchases. It’s pretty convenient for me, and reduces, somewhat, any loss I suffer from a lost wallet or something similar. I’m sure I’m not the only one doing this. However, this means I rely on technology a bit more. And when that technology fails, life becomes difficult. This bit me again this week.

I received a new debit card a few months ago and found that, after just a few months, the magnetic strip on the back started to rub off. I guess they’re using something different to fabricate these newer cards, as my previous card lasted several years and still worked when it expired and I needed a new one. So, I ordered a replacement and life went on.

Now, a mere month or so later, the strip has yet again rubbed off. Again, I’ve ordered a replacement and I’m expecting it any day now. In the meantime, I had to run to the market the other day. I ran around, gathered the stuff I needed, and proceeded to checkout. I normally use the self-checkout, if only to avoid the usually long lines elsewhere. I went through the ritual of scanning everything, placing it into bags, etc. When I ran my card through, it failed, pretty much as I expected. I tried running it through a few times, and even tried the “bag” trick, which also failed.

So what do you do in this situation? I thought there was a pretty simple solution, so I asked the girl at the counter to run the card through by hand. This, apparently, was a big mistake. What resulted was a 20-minute ordeal as they ran to get a manual card machine, screwed it up three times, and had to keep running to get new carbon sheets. Once they finally figured out how to use the manual machine, they had to enter the data into the computer. Of course, they screwed this up innumerable times. When all was said and done, they were finally able to get the transaction to go through.

Seriously? Come on… I do this on the Internet all the time! Enter the card number, name, expiration, and CVV. Done! I even mentioned this and was told that it was “far more complicated than that.” …. Ummm…. ok … ?

So in the end, they have a physical copy of the card (albeit a fairly crappy one… they had to hold on to my card to read the numbers because it didn’t copy well), and they have the computer transaction receipt as well. The computer receipt has the exact same information on it that a normal transaction has… So what was the problem again?

And it’s not just this particular store; I’ve had problems elsewhere. Burger King has no backup plan if their credit card processing fails. At most, I was offered the option of running to get cash or waiting for their computer to reboot… In hindsight, I should have gone to get cash. Apparently they’re running the slowest computers on earth.

Lowe’s? The girl at the counter got frantic when the card wouldn’t read. She called for help, and the help got frantic too. Luckily it scanned on the umpteenth try, otherwise I may have witnessed a nervous breakdown.

Dunkin Donuts! Well, apparently they’re fairly competent there. My card failed to scan so the girl at the counter asked for it back, typed in the numbers, and ran it through manually. Took an extra few seconds. Done.

So let this be a lesson. Technology is great when it works, but you may be in trouble when it fails… At the very least, it can be incredibly inconvenient. And to think… Only a few years ago, credit cards had to be manually handled, with the carbon paper and all. And it only took a few minutes back then… How times change…

 

Bad crawler, no cookie!

My wife is a professional SEO consultant with her own business. I work with her on occasion, helping out with the server end of things. It’s fun and challenging, and I think we work pretty well together.

So, the other day she comes to me with an odd question. Why is Google Analytics suddenly showing a high bounce rate for new keywords? Interesting problem, of course. One of the first things that popped into my mind was either a blackhat SEO or a rival of some sort. It sounds paranoid, but it does happen.

So I pulled the access logs and started poring through them. Since the bounce rate came from a keyword search, it was easy enough to locate the offending entries. There were hundreds of log entries, all coming from the same 65.55.0.0/16 address space. A couple more seconds of digging showed that 65.55.0.0/16 is owned by Microsoft. Reverse DNS on some of the IPs revealed that they were part of the MSN web crawler; MSN apparently doesn’t provide reverse DNS for all of its IPs, but there were enough to prove that this was MSN. Here’s an example from the log:

65.55.110.195 – – [24/Mar/2009:03:08:05 -0400] “GET /index.html HTTP/1.0” 200 58838 “http://search.live.com/results.aspx?q=keyword” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322)”
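Pulling the offending entries out of the log and checking their reverse DNS was a quick scripting job. Something along these lines is all it takes (the log path is just an example):

import socket

msn_ips = set()

# Collect every source IP in the 65.55.0.0/16 block from the access log.
for line in open('/var/log/httpd/access_log'):
    ip = line.split()[0]
    if ip.startswith('65.55.'):
        msn_ips.add(ip)

# Reverse-resolve each unique address to see who it belongs to.
for ip in sorted(msn_ips):
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        host = '(no reverse DNS)'
    print ip, host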

So what in the world is going on here? Why are we getting pounded by hundreds upon hundreds of requests from the MSN crawler? And why is the MSN crawler reporting itself as Internet Explorer 6.0? The referrer URL showed the source of the request to be a live.com search, but these being crawler addresses, I’m willing to bet the referrer was programmed in rather than being the result of an actual search. It doesn’t really matter, though, because whatever it is, it’s causing a high bounce rate and really skewing the site statistics. The high bounce rate may be affecting the Google ranking as well.

Before we blocked these requests, though, we wanted to make sure this was unwanted behavior, so we started digging for info. One of the pages we came across described the same behavior we were seeing. As it turns out, this strange activity is intentional: Live.com claims they do this to detect cloaking. Of course, it was quite easy to identify these IPs as belonging to Microsoft and to determine, rather quickly, that they come from a search engine. It would be very simple to broaden any cloaking to include those IPs, making this crazy technique useless.

Microsoft claims they are continuing to tune their crawler to reduce the spam and make the keywords more relevant. The point, though, is that this seems to hurt more than it helps. As a result, many webmasters are blocking the referrer spam, at the risk of having MSN blacklist their sites. We have followed suit, deeming both MSN and Live.com to be irrelevant search engines.

Of course, if someone out there has a better idea of how to handle this, I’m listening…

 

I can’t believe it’s over …

So, it’s finally over.. The final show was… absolutely incredible. Of course, I’m talking about Battlestar Galactica. If you haven’t seen the finale yet, then stop reading now, go watch BSG at the UN instead. I’m not here to spoil the ending, but I am going to talk about some of the themes and how they tied up some loose ends.

This review is a tad late, but I didn’t get to see the show until Sunday. Thank the gods for DVRs… I was pleasantly surprised when I noticed that the finale was a full two hours. I was afraid it would only be an hour long, and I had no idea how they would tie everything up in an hour. To my further amazement, they tied up most of the major loose ends by the end of the first hour. In fact, I had lost track of time and thought that was the end! Obviously, though, it was not.

I had a few questions throughout the show that they wrapped up quite nicely at the end. Both the original series and this new series had Earth as the mythical destination for the thirteenth colony. Based on this, one would assume that this Earth represents the real Earth today, as it did in the original series. So, it was a bit of a shock when they made it to “Earth” and it was a nuclear wasteland. I had a hard time accepting this and wondered how they would pull the series together after destroying the main goal.

Starbuck leading them to their final destination wasn’t much of a surprise given the build-up to the finale, but the ultimate question about her remained: if she wasn’t a Cylon, then how could she have died and come back? And although there was no definitive answer to this question, I think they handled it quite nicely. What made it more interesting, though, was that she questioned her own existence. She didn’t know what she was, making me think that maybe there were more than 12 models of Cylon. Interestingly enough, that would make her the thirteenth model… Ah, magic 13.

The final big battle was very exciting. Galactica proved that she could take a serious beating and still complete her mission. Just as impressive was the ability of the crew to plot a jump that landed them directly next to the Cylon colony, keeping them in the same stable orbit around the singularity that the colony was in.

The CGI effects were spectacular. The ship to ship battle lit up the sky with laser and missile fire, explosions, and eventually squadrons of fighters going head to head. Inside, the early model Cylons sped through the corridors, blazing away at each other. They spared nothing putting these sequences together.

One part of the final sequences did bother me, though. In the end, Cavil commits suicide, seemingly for no reason. For someone so determined to live and to secure a method of survival for his people, he gave up very quickly. Ronald Moore, one of the producers and writers, explained this as a realization of futility. I’m not sure I buy that, but it doesn’t detract much from the overall story.

I loved the ending, though. Jumping ahead in time and landing right in Times Square with Six and Baltar was incredible. The various videos of robotics breakthroughs playing across the screens were a nice touch. It definitely makes one think about the future, what may be possible, and what the consequences of those possibilities may be. I think we have an excellent chance at creating a true AI, and maybe it could all go wrong. In the end, will it be worth it to try?

 

Roll the … Building?

From the air, it looks like something blown over onto its side, just another casualty of Mother Nature. From the ground, it looks like an art sculpture, interesting and colorful. In reality, it’s a transformer. No, not the cool shape-shifting alien robots from Cybertron. This Transformer is a building concept designed by Rem Koolhaas and the Office for Metropolitan Architecture, and built for Prada.

The Transformer is flipped and rolled by a group of large cranes, placing the building into one of four configurations. When the hexagon face is flat on the ground, the building serves as a platform for fashion exhibits. Place the circular face on the ground and you have a raised platform for special events. The circular platform in the middle also serves as a projector when the platform is placed on the square face. The square face has raised seating, making it perfect for movie viewing. And finally, when the cross-shaped face is placed on the ground, the building is in the perfect configuration for art exhibits.

The entire building will be covered with a smooth elastic membrane that serves as its walls, keeping the pavilion free from rain and wind. It remains to be seen how durable that membrane will be with the pavilion being rolled around.

The structure is quite impressive and different. I do wonder, however, about the use of large cranes to move it around. They are obviously necessary, as the pavilion likely weighs several hundred tons, but their presence detracts from the attractiveness of the structure and causes damage to the surrounding grounds. Then again, how else are you going to shift the pavilion from one configuration to another?

The Transformer is currently located in Seoul, South Korea, and will be there from March until July 2009.

 

Introducing the Touchbook

Engadget posted a story about a new netbook from a company called Always Innovating. A press release about the product can be found here. In short, it’s a netbook and a tablet PC, but without the typical “fold it over on top of the keyboard” arrangement: the screen literally detaches from the keyboard and becomes an autonomous unit.

Inside this little beast is an ARM OMAP3 processor with 8 GB of storage on a microSD card. They don’t specify which OMAP3 processor is included, so both the speed and the die size are unknown. It touts an 8.9″ screen, typical of the current netbook generation. For network access it has 802.11b/g/n Wi-Fi. Bluetooth is also included, so the possibility of tethering exists as well.

Both the tablet and the keyboard have built-in batteries. Battery life is expected to be 10 to 15 hours when the tablet and keyboard are used in concert, and between 3 and 5 hours when the tablet is detached from the keyboard. Battery life for the keyboard alone is, of course, irrelevant.

Always Innovating demonstrated the Touchbook at DEMO 2009, a technical conference that wrapped up yesterday. The demonstration video is included below:

Overall, this looks to be a fairly decent device. I’m a bit concerned about the ARM processor, and I wonder what sort of OS support it will have. The Touchbook OS will be installed by default, though from reading the FAQ, it appears the device will run anything from Android to Ubuntu to Windows CE.

I’m also curious as to what the device will be using for memory. Is memory shared on the SD card? Or will there be actual RAM in the device? All questions I hope to have answers for in the near future. Looks good, though, and I’m excited at the prospect of possibly getting one. Definitely something I could put to good use!