Windows Live Writer Beta

I’m writing this post using the new Windows Live Writer Beta. It’s a blogging tool that allows you to write your blog entries offline and upload them later. Useful, I guess, if you’re not connected all the time. For me, it’s just something to play with. Time will tell whether I like it or not.

To use Writer with a Serendipity blog you’ll need to install the XML-RPC plugin. Once that’s up and working, you need to tell Writer what kind of blog you’re using. After it fails the auto-detect, you’ll need to choose the API to use. I’m using the Metaweblog API and it seems to be working fine. It also asks for the URL for publishing. For the XML-RPC plugin, the URL will be something like this: http://www.example.com/blog/serendipity_xmlrpc.php
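
Under the hood, the Metaweblog API is just XML-RPC. As a rough sketch of what a client like Writer does when it publishes (the endpoint, credentials, blog ID, and the `publish_post` helper below are all placeholders of my own, not anything from Writer or Serendipity):

```python
import xmlrpc.client

# Placeholder values -- substitute your own blog's endpoint and credentials.
# BLOG_ID "1" is typical for a single-blog Serendipity install.
ENDPOINT = "http://www.example.com/blog/serendipity_xmlrpc.php"
BLOG_ID = "1"
USER, PASSWORD = "username", "password"

def publish_post(title, body, publish=True, server=None):
    """Create a new entry via metaWeblog.newPost; returns the new post's ID."""
    if server is None:
        server = xmlrpc.client.ServerProxy(ENDPOINT)
    content = {"title": title, "description": body}
    return server.metaWeblog.newPost(BLOG_ID, USER, PASSWORD, content, publish)
```

The `server` parameter is just there so the call can be pointed at a different proxy; normally you’d let it build one from the endpoint URL.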

So, for now, I’m just messing around with the system to see what it’s capable of. It seems to be a fairly nice system, pretty at least. On the surface it’s just a document editor with the standard font options. Hyperlinks are available (as they should be), and it seems to handle media as well, such as pictures, movies, and audio. I haven’t dealt with media yet on this blog, so I’m not that interested in those capabilities.

Writer won’t download the categories I have set up on my blog, so I’ll have to hand-edit that after I publish. No big deal I guess, but kinda defeats the purpose of this utility. I also don’t see a way to add serendipity tags, so that’s another hand-edit. You can add third party tags such as those from Technorati, LiveJournal, and others, but I have no interest in that.

The web preview is pretty nice. It shows you exactly what the web page will look like when you publish it. It’s pretty cool and seems to work well.

Well, I guess it’s a little nicer than the JavaScript WYSIWYG editor that’s built into Serendipity, but between the need for XML-RPC and the lack of Serendipity features, I don’t think I’ll continue using Live Writer. While trying to get Writer to work, I also ran across two other tools, w.bloggar and Performancing. The first is a program similar to Writer that seems to allow offline editing. The second is a Firefox plugin that seems to have a ton of features. I’ll be checking both out in the near future.

Firefox 2.0

The latest incarnation of the Firefox browser is nearing release. Version 2.0 brings with it a smattering of nifty features as well as an updated UI and enhanced add-on handling.

I’m particularly fond of the built-in spell checker, which comes in really handy. It works in a fashion similar to the spell checkers in MS Office and OpenOffice. Each misspelled word is underlined in red. When you right-click on the underlined word, Firefox pops up a list of suggestions. You can choose one of the suggested replacements, or add the word to your dictionary. The spell checker only checks text boxes by default, but you can right-click on any text entry field to force a spell check.

The new UI places a close icon on each tab, allowing you to close a tab quickly. I can see this causing slight problems for people who are too quick to click, as it doesn’t prompt you before closing the tab. If you have a large number of tabs open, it begins to suppress the close button on all but the current tab. There is also a drop-down on the far right side of the tab bar that shows all of the open tabs in a list, allowing you to read the full title before jumping to the tab you need.

Firefox now defaults to opening all links in new tabs instead of new windows. I prefer this behavior to simply opening new windows. In addition, the popup blocker has apparently been enhanced. Since installing 2.0, I have not seen a single popup.

The default search bar now supports suggestions. As you type, the search engine you have chosen will offer suggestions for search terms, helping you find the information you want. This is the same technology that Google uses for Google Suggest. The new search engine manager allows you to add in additional search engines as well.

Overall, I think this is a real positive step in Firefox’s evolution. You should check it out, it’s a really great browser!

Windows XP ISO Mount Utility

I was looking around earlier today for a tool that would allow me to mount .iso images in Windows XP. I stumbled across a tool Microsoft wrote called the Virtual CD Control Panel. Unfortunately I can’t seem to find a page on the Microsoft web site that directly references this tool, but it is a download from a Microsoft site, and it made it through my virus checker, so my best guess is that it’s ok.

It’s pretty easy to install. Copy the VCdRom.sys file into your system32\drivers folder and then run the executable. From there, use the Driver Control button to load and start the driver; you can then add virtual drives that can be used to mount .iso files. Simple!

Just thought I might share my find. I find it extremely easy to mount .iso files in Linux and wanted something on the Microsoft side as well.

The Patchwork OS

Twelve patches, twenty-three vulnerabilities.

Tuesday was Microsoft Patch day. Of the twelve patches, nine were for the Windows OS, two for Office, and one for Internet Explorer. A breakdown of the severity of each patch can be found on the ISC Website.

I mention this because of the severity of these flaws. There is already an exploit in the wild taking advantage of MS06-040, a flaw in the Server service. This is yet another flaw in the RPC functionality of Windows. Ports 139/tcp and 445/tcp are again the attack vector used to exploit it. For those who remember the past few years, these ports are notorious for being used as vectors to exploit the RPC service. Most commonly associated with NetBIOS, these are probably the most blocked ports on the Internet.

In addition to the above gem, there are also vulnerabilities in DNS resolution, the Windows Management Console, and more. You can find more information on all of these exploits at the link mentioned above. I highly recommend patching your system ASAP since exploits are in the wild and this could easily turn into another Blaster style attack. Even the Department of Homeland Security is recommending that you patch immediately. According to some reports, Microsoft is already bracing for an attack.

I find the frequency and number of exploitable bugs in the Windows OS to be disturbing. Linux and OSX have bugs, but nothing as frequent as Windows seems to have. A lot of the reports that compare the various operating systems seem to miss the fact that Windows as an OS (minus any Office or IE patches) has a higher number of critical exploits than Linux or OSX. Often the exploits of other packages such as Apache, FTP servers, etc., are lumped in with the Linux count and assumed to be part of the OS. While most Linux distros ship with much more than the Linux kernel itself, it’s unfair to count those exploits as part of the whole. Other reports seem to realize these facts and produce results much closer to the truth.

I think, however, that Microsoft has helped the computer industry. They helped popularize the personal computer and provided much of the software for the initial PC boom. They have invested billions of dollars into creating their products and bringing them to market. But, I think it’s high time for them to make some major changes. I would like to see them embrace the Open Source community and learn how to build and market open source products. If they embraced the Linux OS and helped extend it instead of fighting against it, I think the computer industry could take another giant leap forward. They can certainly continue to create and sell the various applications they currently have, and even produce new ones. The very act of running their apps on a Linux system may help to enhance security across the entire industry. Linux itself has proven to be very resilient to attack.

One of the biggest myths about Linux seems to be the belief that all software running on a Linux system has to be open source. Nothing could be further from the truth, however. It is certainly acceptable to run closed source products on an open source OS provided that you play within the rules. I’m not 100% clear on all of the ramifications of the GPL license, but as I understand it, you are permitted to modify any OSS product out there provided you make the source available. But, I believe you are permitted to build closed source apps using OSS libraries and not distribute the source *if* you use unaltered versions of the libraries. I may be wrong here, so please correct me if I am. Regardless, the ability to write closed source programs that run on an OSS platform definitely exists.

Modchips

I stumbled upon a blog entry on Ozymandias about modchips. Ozymandias is the blog of Andre Vrignaud, an Xbox team member. I found his comments to be interesting, but I disagree on a few points.

Andre cites three “main” reasons used to defend modchips:

  • the ability to copy and play pirated games
  • the ability to play import games
  • the ability to add new functionality (such as running homebrew software)

Like Andre, I’ll comment on these one at a time.

Pirated games… What can be said about this? Piracy is, in the end, wrong. There are a number of reasons given for piracy ranging from the pure view of “I want it and I don’t want to pay for it”, to the almost forgivable, “I need it to survive but I can’t afford it.” The former is just pure piracy and is akin to stealing a physical object. There are arguments that software is a different beast because stealing a copy doesn’t mean there is one less copy in the world, but, in fact, that there is one more. But, in general terms, I can agree that this is stealing.

The latter excuse is more interesting. There are several instances of people pirating software for the simple reason that they need it to produce a viable product but don’t have the up-front money to pay for it. In some cases, they purchase the software legitimately after they’ve earned the money to do so. This excuse is becoming less viable as time goes on, however. With the advent of Open Source software, there are numerous OSS packages that can produce results similar to commercial products. One has to be careful, however, since some of these OSS products include licenses that prevent commercialization.

Regardless of the reasons for piracy though, I agree with Andre. If you’re modding your console for the express reason of pirating games, then you’re wrong. This is probably the main reason Modchips get such a bad name. Those who know what modchips are think you’re doing it to pirate games, not to unlock features or make homebrew a reality.

Next up is imports. Imports are a bit of a weird beast. In the not too distant past, consoles were able to play any game, import or local. The main reason for importing a game was to get something that wasn’t available on the local market. The downside was that you usually needed to learn a new language to play the game! Unfortunately, my Japanese is basically nonexistent, so playing imports is tough.

More recently, however, console manufacturers have “region locked” their consoles, rendering imports useless. There are a number of reasons for region locking, such as different release dates across countries, preventing illegal content in certain countries, and increased revenue due to pricing differences between countries. Vendors feel pretty strongly about these points and even have the backing of the US Government in the form of the much-hated Digital Millennium Copyright Act (DMCA). The DMCA has a specific clause that restricts circumventing these protections.

With the exception of preventing illegal content from entering certain countries, this all appears to be about money. The vendor can region lock a game or movie, and sell that title at varying prices depending on where in the world they are. Obviously this allows them to maximize their profits by taking advantage of the local market.

However, there is a slight problem with this. Some people enjoy watching foreign films, or playing imported games. For some, it may even be a means to stem the tide of homesickness. For others, it’s a chance to play something that won’t be released in their home region. I see this as a perfectly valid reason for wanting to mod your console. You paid for the console, you paid for the movie/game, why can’t you just use the two together? Andre states the following:

But sometimes companies have good reasons to either not release a title into a region or release it at different dates. It may be because of the time and cost of localization, marketing plans, ad buys, cultural considerations, or perhaps even because of the impact of piracy in the region. Whatever the case, it’s safe to assume the publisher has thought about it.

First of all, if I’m importing a game, there’s a good chance I know it hasn’t been localized. And for a lot of people, that’s the point. So concerns about time and money for localization are moot. As for piracy, I’m not sure what to say there. Because of possible piracy in a region, a company is unwilling to allow anyone at all to purchase the title? Give me a break, money is money. I can understand that they don’t want to localize and market the product, but if it’s been localized and marketed elsewhere, why prevent anyone in that region from buying and using it? It just doesn’t make sense to me. If they want to pirate it, they likely have modded consoles anyways, so the argument is pointless.

I’m quite sure the publisher has thought it through, though. If you weigh cost versus revenue, it makes sense not to bother marketing in some areas. For instance, there are a large number of games that are popular in Japan that just don’t have a chance in the US. So it makes sense for them to skip localization and marketing for the US. But if I happen to speak and read Japanese, and I have an interest in the game, why would they want to prevent me from handing over my hard-earned money to purchase it? In fact, that’s extra, unforeseen revenue. Isn’t that a good thing?

The last item Andre cites is the desire for homebrew. I can definitely identify with this desire. I own a PSP and I’ve been looking long and hard at the Undiluted Platinum PSP Modchip. This chip allows the user to switch between 2 versions of firmware on the PSP, allowing you to stick with version 1.5 for homebrew, or the latest version for compatibility with the latest games. Of course, this means you need to alter the PSP, void the warranty, etc. And who knows, maybe Sony will come up with a workaround to disable it. But the desire to be able to do this is pretty strong.

According to Andre, the industry currently uses a razor/razor blade model. In short, this means that they sell the console at a loss with the hope that the end user will buy enough games and peripherals to make up the cost. Not a bad model for something like a razor. Chances are you’re going to buy blades in order to use that razor. Though, as one person commented, you can always use them to prop open windows…
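
The razor/razor-blade model boils down to simple break-even arithmetic. Here’s a back-of-the-envelope sketch; the dollar figures are made-up assumptions on my part, not real console economics:

```python
import math

def games_to_break_even(loss_per_console, margin_per_game):
    """How many game sales recover the subsidy on one console?"""
    return math.ceil(loss_per_console / margin_per_game)

# Assume a $100 loss per console and a $10 margin/royalty per game sold.
print(games_to_break_even(100, 10))  # -> 10
```

Under those assumed numbers, a buyer who never purchases ten games is a net loss for the vendor, which is exactly why they care what you do with the hardware.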

So the argument is that since the console manufacturers sell at a loss, we should be locked into using the console to their specifications and no others. Is it my fault that the vendor decided to sell at a loss? Did I make some sort of deal with them stating that if they sold the console at a loss, I would make up the difference in games/movies and peripherals? They’re right that the lower cost is an incentive to buy. If the PSP were twice its current price, I probably wouldn’t have purchased it. And Andre hits on that point:

Some folks point to the fact that they bought the hardware and believe they should be able to do anything they wish with it. Unfortunately, this argument ignores the fact that they’re buying that hardware at below cost, and it’s the razor/razor blade model that makes it even possible to buy at that price. The other solution would be to sell the hardware at a price that covers cost and also includes a profit margin so that selling the console alone (with no game/peripheral/service sales) could be a stand-alone business.

And he goes on to state some problems with this reasoning :

Problem is A) this model already exists (it’s called a PC), and B) selling a console at PC prices (especially with the capabilities the console has in it) would simply be too expensive and no one would buy it. At the end of the day, the cost difference needs to be made up somewhere, and that’s why we need to you buy those razor blades.

So, reason number one is that the PC already exists. Well, it does, but is it portable? Does everyone have the same exact PC as you? The same reasons for creating content on a console are relevant to the desire for homebrew as well. It’s often much easier to develop for a single static platform than it is for a platform that varies from unit to unit. You also need to keep in mind that most, if not all, of the users desiring the ability to create homebrew software already own a PC. It’s the desire to work on a different platform that drives us.

Andre’s second reason is cost. And here I have to agree slightly. If they were to sell the console at cost, then it may be too expensive. Or would it? How much are these companies losing per console? I’ve heard varying numbers, but I think the vendor is the only one who knows for certain.

So, yes, the difference should be made up with peripherals. Hrm. A thought has occurred to me. Maybe they could sell a software development kit! And the necessary hardware to copy code from the PC to the console! Couldn’t that make up a portion of the cost? Yes, I’m aware that they already have development kits for the console, but I can’t afford it, can you? If they released a slimmed-down version of the software, minus all of the specialty hardware that usually ships with the SDK (commonly because the actual console has yet to exist prior to them shipping the SDK), then the cost could be reduced quite a bit. Don’t offer support for the SDK, just release it to the public and the public will create the support. Don’t believe me? How about ps2dev, which supports both PS2 and PSP development? There are hundreds of sites on the internet that support PSP development. And hundreds more that support XBox, Gamecube, Gameboy, etc. And none of those console manufacturers has, to my knowledge, released any development code at all. It’s everyday hackers like you and I that are creating the SDKs from scratch and releasing them to the public.

So in short, I don’t see a problem with Modchips in general. There are those people who will use them to pirate and steal, but in all honesty, the Modchip isn’t the reason for that. Pirates are out there to pirate for the pure reason that they can make money doing it. And regardless of the existence of a Modchip, the pirate will continue. Perhaps the need for a Modchip can be reduced if the console manufacturers would give up on this idea of region locking, and open up the consoles to the masses. Let the little guys take a crack at coding. Are you afraid they might create something better than what you have to offer?

Patent Wars

And so it begins.

Slashdot posted an article today about some patent claims against Open Source developers. They linked to an article by Bruce Perens, a well known OSS advocate, detailing some of the issues surrounding two particular patent cases currently pending. The first is a recent case against RedHat regarding their Hibernate software package. Firestar Software is claiming that they hold a patent on what they call Object Relational Mapping. If I understand correctly, this is a programming technique used to hide the implementation details of a database behind an object. In other words, it’s basically encapsulating the database within an object.

Umm.. yeah. Duh. Ok, so let me get this straight. If I create an object in a programming language that can be used within the program to prevent having to write direct SQL calls, then that falls under this patent? Well, I guess I’ll have someone banging on my door pretty soon. phpTodo uses this same technique! Isn’t this an obvious extension of the object-orientation paradigm in most modern programming languages? It’s the next logical step from creating procedures or functions to accomplish the same thing!
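
To show just how ordinary this technique is, here’s a minimal sketch of object-relational mapping in Python using sqlite3. The table, class, and column names are my own illustrations, not anything from Hibernate or phpTodo:

```python
import sqlite3

class Task:
    """Hide the SQL behind an object -- callers read and write attributes
    and never see a SELECT or UPDATE statement."""

    def __init__(self, conn, task_id):
        self._conn = conn
        self.id = task_id

    @property
    def title(self):
        row = self._conn.execute(
            "SELECT title FROM tasks WHERE id = ?", (self.id,)).fetchone()
        return row[0]

    @title.setter
    def title(self, value):
        self._conn.execute(
            "UPDATE tasks SET title = ? WHERE id = ?", (value, self.id))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO tasks VALUES (1, 'write post')")

t = Task(conn, 1)
t.title = "edit post"   # the UPDATE happens behind the scenes
print(t.title)          # -> edit post
```

That’s the whole “invention”: a class with getters and setters that issue the SQL for you.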

According to the article by Bruce, there is plenty of prior art that covers this. And rightly so! The problem here seems to be the US patent system as a whole. Patents on their own seem, at least to me, to be something useful. At least, useful to a degree. I don’t hold any patents so, if anything, I’m biased against the system. But I do see some worth in it. I can see the need to defend a new, unique idea, at least for a time. However, it seems that patents are being granted on the most ridiculous things! For instance, check out patent number 6,368,227. WHAT? Are you kidding me? A patent for swinging on a swing? Sure, it’s side to side instead of the traditional forward and backward swing, but give me a break. I did this when I was a kid, and probably in the same manner.

Check out this excerpt from the patent itself :

“It should be noted that because pulling alternately on one chain and then the other resembles in some measure the movements one would use to swing from vines in a dense jungle forest, the swinging method of the present invention may be referred to by the present inventor and his sister as ‘Tarzan’ swinging. The user may even choose to produce a Tarzan-type yell while swinging in the manner described, which more accurately replicates swinging on vines in a dense jungle forest. Actual jungle forestry is not required.”

It seems to me that the patent system needs a major overhaul. I swear I’m not trying to jump on the bandwagon here, but when larger companies start leveraging these ridiculous patents, I get a bit scared. I’m just as open to getting sued as RedHat is. I think most of the uproar over the Firestar patent has to do with them suing an Open Source company, but the same remains true for any other company. For instance, consider the patent dispute against RIM. My main issue with that case isn’t so much the content of the patents, but rather the company that held the patents. NTP is a holding company. The entire reason NTP exists is as an entity that owns patents and collects fees based on usage of those patents. From my point of view, this is extortion. Basically, these companies hold the patents and require the user of the patent to pay fees for continued use. But they never use the patent themselves! In fact, given the task, I doubt any patent holding company could ever hope to implement any of the patents they hold.

But even patents that are blatantly obvious and are easily overturned are still extremely harmful. The second case that Bruce mentions is against a small open-source developer, Bob Jacobsen, who makes no money from his creation, JMRI. KAM, the company that filed the claim, holds the rights to patent 6,530,329 which outlines a method for sending commands from a computer to a model train.

This *sounds* like a patentable idea to me, but, upon further inspection, they haven’t really invented anything. First, they seem to be using pre-existing hardware and merely writing software to control it. Second, it’s basically a queueing system. Essentially, the patent outlines how a queue works. User 1 sends a command and the digital controller sends an acknowledgement; a second user sends a command and the same process occurs; and so on. The interesting part here is that the patent language makes it a point to explain that these acknowledgements are intended to inform the user that the requested action has taken place, when, in fact, it is merely queued. I can think of some other ways to do this, but the idea generally works. So where’s the new invention? It sounds to me like they took a pre-existing system and added a queue. That’s patentable?
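
The scheme the patent describes amounts to something like this toy sketch (my own code, not KAM’s actual protocol): each command is acknowledged the moment it’s queued, and only executed later in FIFO order.

```python
from collections import deque

class TrainController:
    """Queue commands from multiple users; ack immediately, run later."""

    def __init__(self):
        self._queue = deque()
        self.executed = []

    def send(self, user, command):
        self._queue.append((user, command))
        # The "ack" goes back right away, even though nothing has run yet.
        return f"ack:{user}:{command}"

    def process_one(self):
        """Actually carry out the oldest queued command."""
        user, command = self._queue.popleft()
        self.executed.append((user, command))

ctrl = TrainController()
print(ctrl.send("user1", "throttle=5"))  # -> ack:user1:throttle=5
print(ctrl.send("user2", "lights=on"))   # -> ack:user2:lights=on
ctrl.process_one()                       # user1's command finally runs
```

A plain FIFO with an early acknowledgement: any first-year data structures course covers the hard part.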

So, because they have this patent, they have decided to sue Mr. Jacobsen. They are asking for $19 per user of JMRI. I’m not entirely sure how they determined how many users JMRI has, but my guess is that they merely looked at the number of downloads the software has received. It looks like version 1.4 received about 11,000 downloads, which is about right for the $200,000 they’re apparently asking for. However, it appears that there may be plenty of prior art to fight this claim, so what’s the big deal? The problem here is that Mr. Jacobsen probably doesn’t have a few thousand dollars lying around that he can use to defend himself. Depending on how the lawsuit proceeds, it could take several months or years to either overturn the patent or lose the case. Either way, it would cost Mr. Jacobsen a lot of money he likely doesn’t have.

This type of patent abuse only serves to hurt everyone in the long run. Some developers may stop developing, or at least stop releasing their code out of fear. If small developers can be sued like this, even for patents that were so obviously granted without proper review, then they run the risk of losing more than just the right to develop a product. OSS developers are usually independent and don’t have the luxury of a corporate umbrella to protect them. They run the risk of losing everything they own. Something needs to be done about this system.

Here are some of my ideas for patent reform, listed in no specific order:

  • Existing patents should be re-examined for validity.
  • Any patent over a certain age should be considered public.
  • Companies holding patents they are not implementing should be given two choices: either start working on an implementation of the patent, or sell the patent to a company that will implement it. Either way, a deadline should be set to prevent the company from sitting on the patent. If they exceed the deadline, the patent should be placed into the public domain.
  • All new patents should be scrutinized for validity beyond the current methods. If insufficient expertise is available at the patent office, then an expert in that area should be consulted.
  • All new patents should be open to public review. (I believe this is already the case, but I may be mistaken)
  • All granted patents should have a shelf-life. This shelf-life should be the same across all patents regardless of what the patent is on.
  • Patents on software should either not exist at all, or should be very critically and very carefully reviewed before being granted. There are too many ways patents like this can be exploited.

I’m sure there is a lot more that should be covered, but this, at least, is a start. This would put everyone on a level playing field and help prevent the stifling of innovation. Let’s get real here. If patents such as the Object Relational Mapping patent are allowed to survive and are enforceable, then innocent developers such as myself and others are in danger. I had no prior knowledge of the existence of that patent, and I never would have bothered to check. This, to me, seems to be a common-sense bit of programming!

Hopefully we’ll see a larger movement to reform the current patent system, or to do away with it entirely. While there is worth in the system as it is today, I think it has much more potential to do harm.

BumpTop : Taking your messy desk into cyberspace

Slashdot had an interesting story today about a new type of desktop organization called BumpTop. It’s definitely interesting from a “wow” perspective, but I’m not sure how useful it is in practice. Basically, it allows you to treat your files like magazines on a table. You can stack them, knock them down, toss them about. And then there are some useful tools like sorting, auto stacking, and searching.

It seems to be pretty processor intensive from the outside, though. The graphics are decent, but it seems to use true physics to control the movement and behaviour of the icons. They collide against each other, fall over, bounce around, etc. Seems to be a little much, but I guess processor power is increasing while cost is decreasing.

There have been other desktop improvements suggested over the years. One of the more popular styles is the 3D desktop design. Sphere is an example of this design. Basically, all of the windows become 3D objects that can be manipulated, moved around in a three dimensional state, tacked up in various areas, etc. I tried it back when it was in Beta. Pretty neat, but not something I wanted to use on a regular basis. Checking today, it looks like they’ve added an IE version as well that looks to do the same thing, but for individual web pages.

The idea of an alternate desktop is a neat one. I’m not sure what direction the future will go in, but it’s likely that it will have a lot to do with physical interactions such as pen and touch screens. And, perhaps, even further into the future we’ll see 3D interactive holographic systems. Something along the lines of a Star Trek Holodeck.

Wow, the future is exciting…

Firefox turns to the dark side?

I noticed an article over on Slashdot about a new attribute, ping, that Firefox handles. That is, the development version of Firefox. This isn’t your standard network ICMP Echo Request, but rather an HTTP request designed to track a user’s movements.

Ok, ok.. Stop screaming about privacy and security. I’ve thought about this a bit and I think Firefox is doing the right thing. The intention, as far as I’ve been able to tell, is to actually put more control into the user’s hands.

Let me explain how this “feature” works. There’s a small writeup on the Mozilla Blog that you can read as well. Tracking the browsing habits of a user is actually fairly harmless, at least in my opinion. The idea is to get feedback about what a user at that site likes to see. Do more people click on links to cartoons? Or perhaps to political information? It’s all about creating websites that people want to see.

So, Joe User goes to a website. There he sees a link for a new type of fusion rocket. He’s interested, so he clicks the link. Nowadays, tracking happens in one of two general ways. The easy one is that the “real” destination is wrapped up and appended to a link to a tracking site. These links usually have the real destination URL in plain text, but some sites obfuscate the URL so the user can’t bypass the tracking. The other method is to use JavaScript to change the URL after the user clicks on the link. The user never sees this happen, so, in a way, it’s even worse from a privacy perspective.
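
The first method is easy to demonstrate. Assuming a hypothetical tracker that carries the real destination in a `dest` query parameter (both URLs below are made up for illustration), the wrapped link can be unpicked like this:

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical redirect-style tracking link: the real destination
# rides along in plain text as a query parameter.
tracked = "http://tracker.example.com/click?dest=http://fusion.example.org/rocket"

def real_destination(url, param="dest"):
    """Pull the wrapped destination out of a redirect-tracking URL."""
    query = parse_qs(urlparse(url).query)
    return query[param][0]

print(real_destination(tracked))  # -> http://fusion.example.org/rocket
```

This is exactly why some trackers obfuscate the parameter: when the destination sits there in plain text, a savvy user (or extension) can skip the tracker entirely.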

Either method then directs the user to the tracking site, which tracks the request (and could, by the way, take advantage of any exploits that may exist), and then redirects you to the real site. This takes time, and the user is generally left sitting there with a blank screen.

The ping attribute, on the other hand, is much nicer. The owner of the website uses the ping attribute to specify tracking URLs. When the user clicks on a link, the browser goes directly to the intended site, and then “pings” the tracking sites in the background. This means there are no redirects and no “trickery” to get the user tracking info. It all happens in the background, and that’s where all the privacy concerns come from. But, according to the spec, the browser is intended to have controls that allow a user to decide how the pings are handled. A user can choose to disable them completely, enable them only for some sites, etc.
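
A rough sketch of those mechanics (my own illustration, not Firefox’s actual implementation): navigation proceeds immediately, and each tracking URL is notified from a background thread. The `send` callable here stands in for the HTTP request a real browser would fire.

```python
import threading

def fire_pings(ping_urls, send):
    """Dispatch one background notification per tracking URL.

    `send` is whatever performs the actual ping; a browser would
    POST to the URL, but any callable taking the URL works here.
    """
    threads = []
    for url in ping_urls:
        t = threading.Thread(target=send, args=(url,), daemon=True)
        t.start()
        threads.append(t)
    return threads

# Usage with a stub sender that just records what would have been pinged.
pinged = []
for t in fire_pings(["http://t1.example.com", "http://t2.example.com"],
                    pinged.append):
    t.join()
print(sorted(pinged))  # -> ['http://t1.example.com', 'http://t2.example.com']
```

The point of the sketch is the ordering: the user-visible navigation never waits on the trackers, which is the whole advantage over redirect chains.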

Currently, the development version of Firefox has the bare minimum. That is, it sees and obeys the ping attribute, but there are no fancy GUI interfaces to change settings. Of course, this is the DEVELOPMENT version! They have to start somewhere. It’s not like these new features get a complete GUI, implementation, etc the moment they’re added. This stuff takes time! And it’s enabled by default! Light the torches! Stone the oppressors!

Seriously though, I feel confident, based on their past record, that the creators of Firefox will get this right. Sure, it’s enabled by default. But so is JavaScript. The “correct” path is not always clear cut. If a feature is disabled by default, the chances of it ever getting enabled are slim. Most users just don’t know how! So, enabling it by default and then popping up a message stating that the feature is active, along with how to disable it, is the right thing to do. I’m actually interested in this feature because it will allow the web, at large, to remove some of the trickery currently used to track users. It will allow this information to be up front and not hidden, and I think it will allow the end user greater control over their own security and privacy.