Happy belated US Democracy Server Patch Day!

Stumbled across a site with these patch notes…  They’re funny enough that I’m reposting them below.

US Democracy Server: Patch Day

Version 44.0

President

  • Leadership: Will now scale properly to national crises. Intelligence was not being properly applied.
  • A bug has been fixed that allowed the President to ignore the effects of debuffs applied by the Legislative classes.
  • Drain Treasury: There appears to be a bug that allowed loot to be
    transferred from the treasury to anyone on the President’s friends
    list, or in the President’s party. We are investigating.
  • Messages to and from the President will now be correctly saved to the chat log.
  • Messages originating from the President were being misclassified as originating from The American People.
  • A rendering error that frequently caused the President to appear wrapped in the American Flag texture has been addressed.

Vice President

  • The Vice President has been correctly reclassified as a pet.
  • No longer immune to damage from the Legislative and Judicial classes.
  • The Vice President will no longer aggro on friendly targets. This
    bug was identified with Ranged Attacks and the Head Shot ability.
  • Reveal Identity: this debuff will no longer be able to target Covert Operatives.
  • Messages to and from the Vice President will now be correctly saved to the chat log.
  • A rendering bug was affecting the Vice President’s visibility,
    making him virtually invisible to the rest of the server. This has been
    addressed.

Cabinet

  • There was a bug in the last release that prevented the Cabinet from
    disagreeing with the President, which was the cause of a number of
    serious balance issues. This bug has been addressed, and we will
    continue to monitor the situation.

Judiciary

  • Many concerns have been raised regarding balance issues in the
    Supreme Court. This system is maintained on a different patch schedule,
    and will require longer to address.
  • A large number of NPCs in the Judiciary were incorrectly flagged
    “ideological.” We are trying to identify these cases and rectify this
    situation.

Homeland Security

  • Homeland Security Advisory System: We have identified a bug in this
    system that prevents the threat level from dropping below Elevated
    (Yellow). The code for Guarded (Blue) and Low (Green) has been
    commented out. We are testing the fix and hope to have it in by the
    next patch.
  • Torture: This debuff is being removed after a record number of complaints.
  • Item: Large Bottle of Water is incorrectly generating threat with
    TSA Agents when held in inventory. We are looking into the issue.
  • Asking questions about Homeland Security was incorrectly triggering the Chain-Jingoism debuff.

Economy

  • Serious on-going issues with server economy are still being
    addressed. We expect further roll-backs, and appreciate your help
    identifying and fixing bugs. We can’t make these fixes without your
    help.

PVP

  • Reputation with various factions is being rebalanced. The gradated
    reputation scale was erroneously being overwritten by the binary For
    Us/Against Us flag.

Quests

  • The “Desert Storm” quest chain was displaying an erroneous “Mission Accomplished” message near the beginning of the chain.
  • The quest chain that begins with “There’s no Cake like Yellow Cake”
    and terminates with “W-M-Denied” has been identified as uncompletable,
    and has been removed.

Reagents

  • Many recipes that currently call for Crude Oil can now be made with
    Wind, Solar, Geothermal and Ethanol reagents. We hope to roll out even
    more sweeping changes in the next patch.

Events

  • The “Axis of Evil” event is drawing to a close. Look forward to the “Rebuilding Bridges” event starting in January.

I can’t wait to see what Obama has in store for the technical side of things. Too bad he has to start out with White House technology that has been compared to the Dark Ages.

Storage Area Networks

So I have this new job now and I’m being introduced to some new technology. One of the first things I was able to get my fingers into was the Storage Area Network (SAN). I’m quite familiar with Network Attached Storage (NAS), and was under the belief that SANs were just the same technology with dedicated servers. I was mistaken, however, and the differences are quite interesting.

Network Attached Storage is used quite widely, perhaps even in your own home. NAS uses protocols such as NFS and Samba/CIFS. You may not recognize Samba/CIFS, but this is the protocol used when you share a directory on a Windows machine. NFS is essentially an equivalent protocol used in the UNIX community. (Ok, ok, it’s not *really* equivalent, but let’s not start a holy war here…) In short, you identify which location on the server you want to share, and then you mount that share on the client. Shares are commonly identified by the server address and the directory or path that is being shared. Additionally, the underlying filesystem is abstracted away, so the machine mounting the share cannot choose the filesystem or optimize the storage for its usage type.
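For a concrete sense of this, here’s a minimal sketch of sharing and mounting an NFS export on Linux (the server name, network, and paths are all hypothetical):

# On the NFS server, export a directory via /etc/exports:
/export/home 192.168.1.0/24(rw,sync)

# Reload the export table, then mount the share from the client,
# identifying it by server address and path:
$ exportfs -ra
$ mount -t nfs fileserver:/export/home /mnt/home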

Storage Area Networks, on the other hand, generally use the SCSI protocol for communication. In order to mount a SAN volume, you typically identify it just like any other hard drive. (Note: My experience here is on *nix systems, not Windows, so I’m not entirely sure how SANs are mounted there.) One fairly large benefit to mounting in this manner is that you can boot a server directly from the SAN rather than using local drives. SAN devices are presented to the operating system as a typical block device, allowing the administrator to choose the filesystem to use, as well as any of the associated filesystem management tools.
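As a quick sketch of what that looks like on Linux (device name and mount point hypothetical), the LUN behaves just like a local disk:

# The SAN volume appears as an ordinary block device, so the
# administrator picks the filesystem and the usual tools apply:
$ mkfs.ext3 /dev/sdb
$ mount /dev/sdb /data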

There are a number of different SAN types including Fibre Channel (FC), iSCSI, ATA over Ethernet, and more. The SAN I worked on is a Fibre Channel SAN from EMC. Fibre Channel is a high-speed transport technology, originally designed for use in supercomputers. It has since become the transport of choice for SANs. Typically, fiber optics are used as a physical medium, though transport over twisted-pair copper is also possible.

Fibre Channel itself is very similar to Ethernet technology. FC switches are used to provide connectivity between the SAN and the various clients using the SAN. Multiple switches can be connected together, providing both transport over long distances as well as expanding the number of available ports for clients. Multiple SANs can be connected to the switches, allowing clients to connect to shares in multiple locations. More advanced switches, such as the Cisco FC switch, use technology similar to Ethernet VLANs to isolate traffic on the switches, providing additional security and reducing broadcast traffic.

iSCSI is essentially Ethernet-attached storage. The SCSI protocol is tunneled over IP, allowing an existing IP infrastructure to be used for connectivity. This is a major advantage as it reduces the overall cost to deploy a SAN.
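As an illustration, here’s roughly how a LUN is attached on Linux with the open-iscsi tools (the array address is hypothetical):

# Ask the array what targets it offers, then log in; once logged
# in, the LUN shows up as a local SCSI disk:
$ iscsiadm -m discovery -t sendtargets -p 192.168.1.50
$ iscsiadm -m node --login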

A major drawback of SANs is the overall cost to deploy them. While hard drives are relatively inexpensive, the rest of the hardware that makes up a SAN is rather expensive. Even a small SAN can cost upwards of $25,000. But if you’re in the market for extremely high-speed storage, SANs are hard to beat.

Properly configured, SANs can offer a high level of redundancy. Typically, servers are connected to a SAN via multiple paths. As a result, the same storage device is presented to the server multiple times. A technology known as multipath can be used to abstract away these multiple paths and present a single unified device to the server. Multipath then monitors each path, switching between them when necessary, such as when a failure occurs. On the SAN itself, the storage is handled by one or more hard drive arrays. Arrays can be configured with a variety of RAID levels, providing redundancy between hard drives.
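On Linux, for example, the multipath tools expose that unified device under /dev/mapper; a rough sketch (device name hypothetical):

# Show the multipath topology; each LUN appears once, no matter
# how many physical paths reach it:
$ multipath -ll
# Mount the unified device rather than any individual path:
$ mount /dev/mapper/mpath0 /data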

SANs are a pretty cool technology. It has definitely been interesting learning about them, and setting them up for the first time. I have to admit, however, that I mostly dealt with the server end of the setup. The SAN itself was already in place and the shares had already been created. After dealing with the software involved in creating these shares, I can’t say I would look forward to using it again. It’s amazing how confusing and unusable such software can be. Overall, though, I’m glad I had the chance to learn.

if (blocked($content))

And the fight rages on… Net Neutrality, to block or not to block.

Senator Byron Dorgan, a Democrat from North Dakota, is introducing new legislation to prevent service providers from blocking Internet content. Dorgan is not new to the arena, having introduced similar legislation in previous years. This time, however, he may be able to push it through.

So what’s different this time? Well, for one, we have a new president. And this new president has already stated that Net Neutrality is high on his list of technology-related actions. So, at the very least, it appears that Dorgan has the president in his corner.

Of course, some service providers are not happy about this. Comcast has gone on record with the following:

“We don’t believe legislation is necessary in this area and could harm innovation and investments,” said Sena Fitzmaurice, Comcast’s senior director of government affairs and corporate communications, in a phone interview. “We have consistently said that all our customers have access to content available on the Internet.”

And she’s right! Well.. sort of. Comcast customers do have access to content. Or, rather, they do now. I do recall a recent period when Comcast was “secretly” resetting bittorrent connections, and they have talked about both shaping and capping customers. So, in the end, you may get all of the content, just not all at the same level of service.

But I think, overall, Dorgan has an uphill battle. Net Neutrality is a concept not unlike free speech. It’s a great concept, but sometimes its implementation is questionable. For instance, if we look at pure Net Neutrality, then providers are required to allow all content without any shaping or blocking. Even bandwidth caps can be seen to fall under the umbrella of Net Neutrality. As a result, customers can theoretically use 100% of their allotted bandwidth at all times. This sounds great, until you realize that bandwidth, in some instances, and for perfectly legitimate reasons, is limited.

Take rural areas, for instance, especially in the midwest where homes can be miles apart. It can be cost-prohibitive for a service provider to run lines out to remote areas. And if they do, it’s generally done using line extender technology that can carry decent voice signals over copper, but not high-speed data. One or two customer connections don’t justify the cost of the equipment. So, those customers are relegated to slower service, and may end up on equipment with high customer-to-bandwidth ratios. In those cases, a single customer can severely degrade service for all the others, merely by using a lot of bandwidth.

On the flip side, however, allowing service providers to block and throttle according to their own whims can result in anti-competitive behavior. Take, for instance, IP Telephony. There are a number of IP Telephony providers out there that provide the technology to place calls over a local Internet connection. Skype and Vonage are two examples. Neither of these providers has any control over the local network, and thus their service is dependent on the local service provider. But let’s say the local provider wants to offer VoIP service. What’s to prevent that local provider from throttling or outright blocking Skype and Vonage? And thus we have a problem. Of course, you can fall back on the “let the market decide” argument. The problem with this is that there are often only one or two local providers, usually one Telco and one Cable. The Telco provider may throttle and block voice traffic, while the Cable provider does the same for video. Thus, the only choice is to decide which we would rather have blocked. Besides, changing local providers can be difficult, as email addresses, phone numbers, etc. are usually tied to the existing provider. And on top of that, most people are just too lazy to change; they would rather complain.

My personal belief is that the content must be available and not throttled. However, I do believe the local provider should have some control over the network. So, for instance, if one type of traffic is eating up the majority of the bandwidth on the network, the provider should be able to throttle that traffic to some degree. However, they must make such throttling public, and they must throttle ALL of that type of traffic. Going back to the IP Telephony example, if they want to throttle Skype and Vonage, they need to throttle their own local VoIP too.

It’s a slippery slope and I’m not sure there is a perfect answer. Perhaps this new legislation will be a step in the right direction. Only time will tell.

Hacking the Infrastructure – How DNS works – Part 2

Welcome back. In part 1, I discussed the technical details of how DNS works. In this part, I’ll introduce you to some of the more common DNS server packages. In a future post I will cover some of the common problems with DNS as well as proposed solutions. So let’s dive right in.

The most popular DNS server, by far, is BIND, the Berkeley Internet Name Domain. BIND has a long and storied past. On the one hand, it’s one of the oldest packages for serving DNS, dating back to the early 1980s, and on the other, it has a reputation for being one of the most insecure. BIND started out as a graduate student project at the University of California at Berkeley, and was maintained by the Computer Systems Research Group. In the late 1980s, the Digital Equipment Corporation helped with development. Shortly after that, Paul Vixie became the primary developer and eventually formed the Internet Systems Consortium, which maintains BIND to this day.

Being the most popular DNS software out there, BIND suffers from the same malady that affects Microsoft Windows. It’s the most popular, most widely installed, and, as a result, hackers can gain the most by breaking it. In short, it’s the most targeted DNS server software. Unlike Windows, however, BIND is open source and should benefit from the extra scrutiny that open source usually entails, but, alas, it appears that BIND is pretty tightly controlled by the ISC. On the ISC site, I see no publicly accessible software repository, no open discussion of code changes, and nothing else that really marks a truly open source application. The only open-source bits I see are a user mailing list and source code downloads. Beyond that, it appears that you either need to be a member of the “Bind Forum,” or wait for new releases with little or no input.

Not being an active user of BIND, I cannot comment too much on its current state other than what I can find publicly available. I do know that BIND supports just about every DNS convention out there. That includes standard DNS, DNSSEC, TSIG, and IPv6. The latter three are relatively new. In fact, the current major version of BIND, version 9, was written from the ground up specifically for DNSSEC support.

In late 1999, Daniel J. Bernstein, a professor at the University of Illinois, wrote a suite of DNS tools known as djbdns. Bernstein is a mathematician, cryptographer, and a security expert. He used all of these skills to produce a complete DNS server that he claimed had no security holes in it. He went as far as offering a security guarantee, promising to pay $1000 to the first person to identify a verifiable security hole in djbdns. To date, no one has been able to claim that money. As recently as 2004, djbdns was the second most popular DNS server software.

The primary reason for the existence of djbdns is Bernstein’s dissatisfaction with BIND and the numerous security problems therein. Having both security and simplicity in mind, Bernstein was able to make djbdns extremely stable and secure. In fact, djbdns was unaffected by the recent Kaminsky vulnerability, which affected both BIND and Microsoft DNS. Additionally, configuration and maintenance are both simple, straightforward processes.
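To give a sense of that simplicity, here’s a sketch, from memory, of a tinydns data file for a hypothetical zone (the record formats are worth double-checking against the djbdns docs). Running make in the data directory compiles the file into the data.cdb database the server reads:

# Delegate the zone: creates the NS and SOA records:
.example.com:192.0.2.1:a:3600
# An A record:
+www.example.com:192.0.2.1:3600
# An MX record with distance 10:
@example.com:192.0.2.1:a:10:3600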

On the other hand, the simplicity of djbdns may become its eventual downfall. Bernstein is critical of both DNSSEC and IPv6 and has offered no support for either of these. While some semblance of IPv6 support was added via a patch provided by a third party, I am unaware of any third-party DNSSEC support. Let me be clear, however, while the IPv6 patch does add additional support for IPv6, djbdns itself can already handle serving the AAAA records required for IPv6. The difference is that djbdns only talks over IPv4 transport while the patch adds support for IPv6 transport.

Currently, it is unclear whether Bernstein will ever release a new version of djbdns with support for any type of “secure” DNS.

The Microsoft DNS server has existed since Windows NT 3.51 was shipped back in 1995. It was included as part of the Microsoft BackOffice, a collection of software intended for use by small businesses. As of 2004, it was the third most popular DNS server software. According to Wikipedia, Microsoft DNS is based on BIND 4.3 with, of course, lots of Microsoft extensions. Microsoft DNS has become more and more important with new releases of Windows Server. Microsoft’s Active Directory relies heavily on Microsoft DNS and the dynamic DNS capabilities included. Active Directory uses a number of special DNS entries to identify services and allow machines to locate them. It’s an acceptable use of DNS, to be sure, but really makes things quite messy and somewhat difficult to understand.

I used Microsoft DNS for a period of time after Windows 2000 was released. At the time, I was managing a small dial-up network and we used Active
Directory and Steel-Belted RADIUS for authentication. Active Directory integration allowed us to easily synchronize data between the two sites we had, or so I thought. Because we were using Active Directory, the easiest thing to do was to use Microsoft DNS for our domain data and as a cache for customers. As we found out, however, Microsoft DNS suffered from some sort of cache problem that caused it to stop answering DNS queries after a while. We suffered with that problem for a short period of time and eventually switched over to djbdns.

There are a number of other DNS servers out there, both good and bad. I have no experience with any of them other than knowing some by reputation. Depending on what happens in the future with the security of DNS, however, I predict that a lot of the smaller DNS packages will fall by the wayside. And while I have no practical experience with BIND beyond using it as a simple caching nameserver, I can only wonder why a package that claims to be open source, yet is so closely guarded, maintains its dominance. Perhaps I’m mistaken, but thus far I have found nothing that contradicts my current beliefs.

Next time we’ll discuss some of the more prevalent problems with DNS and DNS security. This will lead into a discussion of DNSSEC and how it works (or, perhaps, doesn’t work) and possible alternatives to DNSSEC. If you have questions and/or comments, please feel free to leave them in the comment section.

Hacking the Infrastructure – How DNS works – Part 1

Education time… I want to learn a bit more about DNS and DNSSEC in particular, so I’m going to write a series of articles about DNS and how it all works. So, let’s start at the beginning. What is DNS, and why do we need it?

DNS, the Domain Name System, is a hierarchical naming system used primarily on the Internet. In simple terms, DNS is a mechanism by which the numeric addresses assigned to the various computers, routers, etc. are mapped to alphanumeric names, known as domain names. As it turns out, humans tend to remember words a bit more easily than numbers. So, for instance, it is easier to remember blog.godshell.com than 204.10.167.1.

But, I think I’m getting a bit ahead of myself. Let’s start back closer to the beginning. Back when ARPANet was first developed, the developers decided that it would be easier to name the various computers connected to ARPANet, rather than identifying them by number. So, they created a very simplistic mapping system that consisted of name and address pairs written to a text file. Each line of the text file identified a different system. This file became known as the hosts file.

Initially, each system on the network was responsible for its own hosts file, which naturally resulted in a lot of systems being either unaware of others, or unable to contact them easily. To remedy this, it was decided to make an “official” version of the hosts file and store it in a central location. Each node on ARPANet then downloaded the hosts file at a fairly regular interval, keeping the entire network mostly in sync with new additions. As ARPANet began to grow and expand, the hosts file grew larger. Eventually, the rapid growth of ARPANet made updating and distributing the hosts file a difficult endeavor. A new system was needed.

In 1983, Paul Mockapetris, one of the early ARPANet pioneers, worked to develop the first implementation of DNS, called Jeeves. Paul wrote RFC 882 and RFC 883, the original RFCs describing DNS and how it should work. RFC 882 describes DNS itself and what it aims to achieve. It describes the hierarchical structure of DNS as well as the various identifiers used. RFC 883 describes the initial implementation details of DNS. These details include items such as message formats, field formats, and timeout values. Jeeves was based on these two initial RFCs.

So now that we know what DNS is and why it was developed, let’s learn a bit about how it works.

DNS is a hierarchical system. This means that the names are assigned in an ordered, logical manner. As you are likely aware, domain names are generally strings of words, known as labels, connected by a period, such as blog.godshell.com. The rightmost label is known as the top-level domain. Each label to the left is a sub-domain of the label to the right. For the domain name blog.godshell.com, com is the top-level domain, godshell is a sub-domain of com, and blog is a sub-domain of godshell.com. Information about domain names is stored in the name server in a structure called a resource record.

Each domain, be it a top level domain, or a sub-domain, is controlled by a name server. Some name servers control a series of domains, while others control a single domain. These various areas of control are called zones. A name server that is ultimately responsible for a given zone is known as an authoritative name server. Note, multiple zones can be handled by a single name server, and multiple name servers can be authoritative for the same zone, though they should be in primary and backup roles.

Using our blog.godshell.com example, the com top-level domain is in one zone, while godshell.com and blog.godshell.com are in another. There is another zone as well, though you likely don’t see it. That zone is the root-zone, usually represented by a single period after the full domain name, though almost all modern internet programs append that period automatically, making it unnecessary to specify explicitly. The root-zone is pretty important, too, as it essentially ties together all of the various domains. You’ll see what I mean in a moment.

Ok, so we have domains and zones. We know that zones are handled individually by different name servers, so we can infer that the name servers talk to each other somehow. If we infer further, we can guess that a single name resolution probably involves more than two name servers. So how exactly does all of this work? Well, that process depends on the type of query being used to perform the name resolution.

There are two types of queries, recursive and non-recursive. The query type is negotiated by the resolver, the software responsible for performing the name resolution. The simpler of the two queries is the non-recursive query. Simply put, the resolver asks the name server for non-recursive resolution and gets an immediate answer back. That answer is generally the best answer the name server can give. If, for instance, the name server queried was a caching name server, it is possible that the domain you requested was resolved before. If so, then the correct answer can be given. If not, then you will get the best information the name server can provide which is usually a pointer to a name server that will know more about that domain. I’ll cover caching more a little later.
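You can watch a non-recursive query in action with dig’s +norecurse flag; asking one of the com servers directly (more on those below) returns a referral rather than the final address:

# Clear the recursion-desired bit; the reply contains NS records
# pointing at servers that know more, not the A record itself:
$ dig +norecurse blog.godshell.com @a.gtld-servers.net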

Recursive queries are probably the most common type of query. A recursive query aims to completely resolve a given domain name. It does this by following a few simple steps. Resolution begins with the rightmost label and moves left.

  1. The resolver asks one of the root name servers (that handle the root-zone) for resolution of the rightmost label. The root server responds with the address of a server who can provide more information about that domain label.
  2. The resolver queries that server about the next label to the left. Again, the server responds with the address of a server that knows more about that domain label, or, possibly, an authoritative answer for the domain.
  3. Step 2 repeats until the final answer is given.

These steps are rather simplistic, but give a general idea of how DNS works. Let’s look at an example of how this works. For this example, I will be using the dig command, a standard Linux command commonly used to debug DNS. To simplify things, I’m going to use the +trace option which does a complete recursive lookup, printing the responses along the way.

$ dig +trace blog.godshell.com

; <<>> DiG 9.4.2-P2 <<>> +trace blog.godshell.com
;; global options: printcmd
. 82502 IN NS i.root-servers.net.
. 82502 IN NS e.root-servers.net.
. 82502 IN NS h.root-servers.net.
. 82502 IN NS g.root-servers.net.
. 82502 IN NS m.root-servers.net.
. 82502 IN NS a.root-servers.net.
. 82502 IN NS k.root-servers.net.
. 82502 IN NS c.root-servers.net.
. 82502 IN NS j.root-servers.net.
. 82502 IN NS d.root-servers.net.
. 82502 IN NS f.root-servers.net.
. 82502 IN NS l.root-servers.net.
. 82502 IN NS b.root-servers.net.
;; Received 401 bytes from 192.168.1.1#53(192.168.1.1) in 5 ms

This first snippet shows the very first query sent to the local name server (192.168.1.1) which is defined on the system I’m querying from. This is often configured automatically via DHCP, or hand-entered when setting up the computer for the first time. This output has a number of fields, so let’s take a quick look at them. First, any line preceded by a semicolon is a comment. Comments generally contain useful information on what was queried, what options were used, and even what type of information is being returned.
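On a *nix system, that local name server is typically listed in /etc/resolv.conf, which the resolver consults for every lookup:

# /etc/resolv.conf -- usually written by DHCP or by hand:
nameserver 192.168.1.1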

The rest of the lines above are responses from the name server. As can be seen from the output, the name server responded with numerous results, 13 in all. Multiple results are common and mean the same information is duplicated on multiple servers, commonly for load balancing and redundancy. The fields, from left to right, are as follows: domain, TTL, class, record type, answer. The domain field is the current domain being looked up. In the example above, we’re starting at the far right of our domain with the root domain (defined by a single period).

TTL stands for Time To Live. This field defines the number of seconds this data is good for. This information is mostly intended for caching name servers. It lets the cache know how much time has to pass before the cache must look up the answer again. This greatly reduces DNS load on the Internet as a whole, as well as decreasing the time it takes to obtain name resolution.

The class field defines the query class used. Query classes can be IN (Internet), CH (Chaos), HS (Hesiod), or a few others. Generally speaking, most queries are of the Internet class. Other classes are used for other purposes such as databases.

Record type defines the type of record you’re looking at. There are a number of these, the most common being A, PTR, CNAME, MX, and NS. An A record is ultimately what most name resolution is after. It defines a mapping from a domain name to an IP address. A PTR record is the opposite of an A record. It defines the mapping of an IP Address to a domain name. CNAME is a Canonical name record, essentially an alias for another record. MX is a mail exchanger record which defines the name of a server responsible for mail for the domain being queried. And finally, an NS record is a name server record. These records generally define the name server responsible for a given domain.
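You can ask dig for any of these record types directly; the +short option trims the output to just the answers. The addresses and names here match what the trace below returns:

# An A record lookup, then an NS lookup:
$ dig +short blog.godshell.com A
204.10.167.1
$ dig +short godshell.com NS
ns1.godshell.com.
ns2.godshell.com.
# PTR lookups run in reverse, from address back to name:
$ dig +short -x 204.10.167.1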

com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
;; Received 495 bytes from 199.7.83.42#53(l.root-servers.net) in 45 ms

Our local resolver has randomly chosen an answer from the previous response and queried that name server (l.root-servers.net) for the com domain. Again, we received 13 responses. This time, we are pointed to the gtld servers, operated by VeriSign. The gtld servers are responsible for the .com and .net top-level domains, two of the most popular TLDs available.

godshell.com. 172800 IN NS ns1.emcyber.com.
godshell.com. 172800 IN NS ns2.incyberspace.com.
;; Received 124 bytes from 192.55.83.30#53(m.gtld-servers.net) in 149 ms

Again, our local resolver has chosen a random answer (m.gtld-servers.net) and queried for the next part of the domain, godshell.com. This time, we are told that there are only two servers responsible for that domain.

blog.godshell.com. 3600 IN A 204.10.167.1
godshell.com. 3600 IN NS ns1.godshell.com.
godshell.com. 3600 IN NS ns2.godshell.com.
;; Received 119 bytes from 204.10.167.61#53(ns2.incyberspace.com) in 23 ms

Finally, we randomly choose a response from before and query again. This time we receive three records in response, an A record and two NS records. The A record is the answer we were ultimately looking for. The two NS records are authority records, I believe. Authority records define which name servers are authoritative for a given domain. They are ultimately responsible for giving the “right” answer.

That’s really DNS in a nutshell. There’s a lot more, of course, and we’ll cover more in the future. Next time, I’ll cover the major flavors of name server software and delve into some of the problems with DNS today. So, thanks for stickin’ around! Hopefully you found this informative and useful. If you have questions and/or comments, please feel free to leave them in the comment section.

Super Sub Tiny Pico Computing Platform Thing-a-majiggie

Slashdot reported yesterday on a new cell-phone sized PC. Dubbed the IMOVIO iKit, this small form-factor PC runs an embedded version of Linux and boasts both Wifi and Bluetooth connectivity. IMOVIO is the in-house brand used by COMsciences to market the iKit.

The technical specs for the iKit are as follows (from the iKit presentation):

  • Marvell PXA270, 312MHz CPU
  • 128MB (ROM), 64MB SDRAM (RAM)
  • up to 8 GB Micro SD
  • 240×320, 2.8” TFT 262k color LCD
  • Lithium-Ion battery with up to 250 hours stand-by, 6 hours WiFi use and 6 hours gaming
  • Dimensions: L95mm x W65mm x D15.5mm
  • OS: Linux 2.4.19 (Windows Mobile or Android special order)
  • WiFi 802.11b/g
  • Bluetooth 2.0 EDR

This is a pretty decent little machine. The screen is slightly smaller than the Nintendo DS screen, but larger than most cellphones. Screen resolution is decent enough for basic video, supporting the same color depth as the DS. This should be enough real estate to display simple web pages, and more than enough for instant messaging.

The core OS is based on Linux, kernel 2.4.19, but Windows Mobile and Google Android are apparently available as well, for special orders. Since the unit runs Linux, I expect that an SDK of some sort will be released, allowing additional applications to be developed. The basic applications shipping with the unit include a mail client, web browser, instant messaging client, contact manager, photo viewer, music and video player, and possibly additional applications such as a VoIP client. Configuration of the unit seems to be determined by the customer buying the unit.

The targeted customer, however, seems to be carriers as opposed to end-users. IMOVIO expects to sell these units to carriers, specifically configured for the carrier’s service. From there, the carrier deals with selling them to end-users. This is the typical model for cell-phone companies.

So what good is this unit? Is it worth the expected $175 or so? Well, I suppose that depends on the user, and the performance of this little device. Personally, it would be nice to have a small instant-on unit I can use for quick web lookups, jotting a quick email, or viewing a video or two. However, most cell phones have the same capabilities today. In fact, my own cell phone, the Blackberry 8830, does all this and more. The biggest drawback to the Blackberry, however, is the lack of wifi connectivity, reducing speed considerably.

Personally, I’d like to give one of these devices a shot. It would be interesting to see what capabilities the unit truly has, and, at the same time, see if it impacts how I work and play every day.

DirecTV – Ugh…

I’ve written before about my dissatisfaction with DirecTV. So I’ve had the service for about a year and while it’s worked, I’ve noticed that I’m starting to download TV shows more often. Part of this is because I sent care packages to a friend in the Navy, and part of it is due to some of the features I lost when I moved to DirecTV. My family still uses the DVR pretty regularly, though, and there are some shows that I like to watch when they’re on.

The DVR has been acting a little strange lately, though. Actually, for about the last 1-2 weeks. Some of the recordings are inaccessible, showing only a black screen when you try to play them. Some of the recordings have odd periods where artifacts will start to appear and suddenly the show jumps, skipping over portions. So I decided to call DirecTV and see if they have a resolution. What a waste of time.. Here’s the gist of my conversation:

DirecTV: Hi, how can I help you?

Me: I’m having some problems with my DVR.

DirecTV: Ok, how about you explain what problems you are having and we’ll see if we can fix them.

Me: Well, I’m having a few problems. Some of the recordings I have are showing just black screens, no audio or video. And I’m having a problem with live TV when I try to rewind or pause. On some occasions, I am unable to rewind, and on others, I’ll get a message about Live TV having saved the recording and do I want to keep it. Then it jumps me to the current program, often making me lose 10-20 minutes of the program.

DirecTV: Ok, how are you trying to record the programs?

Me: Umm.. Either through the standard timers, or through hitting the record button.

At this point, the rep begins going through an explanation of how to record a program and how you can’t do it from the guide screen, etc. I interrupt and explain that I don’t have a problem recording, it’s the end result that is the problem.

Me: This all started about a week or two ago, so were there any upgrades?

DirecTV: I’m not showing any recent upgrades. I am seeing that these are known issues, however, and they have been escalated to engineering.

Me: Ok… But these issues just started. This has only been happening a short period of time, yet you’re telling me no changes have been made. Is it possible that I have a bad hard drive?

DirecTV: Correct. I’ll let engineering know that you’re experiencing these problems as well. As I said, these are known issues and we are working on them.

Me: Ok. So how do I know if the problem has been resolved? Will I see an upgrade or something?

DirecTV: Just continue using the DVR as you normally do. If the problems go away, the issue has been resolved. Or, you can call us in the future.

Me: *sigh* Ok, thanks I guess…

Seriously.. Come on.. No troubleshooting, other than talking to me. No asking what kind of DVR (though I suppose they could have that info in their records), no asking for verification of software levels, etc. Just told me that it was a known issue. I’m not really convinced, and with the way she basically brushed me off, I’m not at all happy about dealing with DirecTV… Yet I’m locked into a contract… Damn…

Has anyone else seen issues like this? Any tips on how to resolve it? At the moment I’m recording everything I can to DVD. After that’s done, I’ll try re-formatting the hard drive.. That is, if I can find the option to do it. They updated a few months ago and all the stupid menus changed… Argh…

Switching Gears…

Ok, so I did it. I made the switch. I bought a Mac. Or, more specifically, I bought a Macbook Pro.

Why? Well, I had a few reasons. Windows is the standard for most office applications, and it’s great for gaming, but I find it to be a real pain to code in. I’m not talking about code for Windows applications; I’m talking about code for web applications. Most of my code is perl and PHP and I really have no interest in fighting with Windows to get a stable development platform for these. Sure, I can remotely access the files I need, but then I’m tethered to an Internet connection. I had gotten around this (somewhat) by installing Linux on my Windows machine via VirtualBox. It worked wonderfully, but it’s slower that way, and there are still minor problems with accessibility, things not working, etc.

OSX seemed to fit the bill, though. By default, it comes with Apache and PHP, you can install MySQL easily, and it’s built on top of BSD. I can drop to a terminal prompt and interact with it the same way I interact with a Linux machine. In fact, almost every standard command I use on my Linux servers is already on my Macbook.
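Getting the built-in web stack running is mostly a matter of flipping switches. A sketch for OSX 10.5 (the module path is from memory, so treat it as an assumption):

# Enable PHP by uncommenting this line in /etc/apache2/httpd.conf:
#   LoadModule php5_module libexec/apache2/libphp5.so
# Then start Apache (or toggle Web Sharing in System Preferences):
$ sudo apachectl start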

Installing Apple’s XCode developer tools gives me just about everything else I could need, including a free IDE! That particular IDE is better suited to C++, Java, Ruby, Python, and Cocoa, but it’s free, and that’s nothing to scoff at. I have been using a trial of Komodo as well, and I’m leaning towards buying myself a copy, though $295 is steep.

What really sold me on a Mac is the move to Intel processors and their Bootcamp software. I play games, and Mac doesn’t have the widest library of games, so having a Windows machine available is a must. Thanks to Bootcamp, I can continue to play games while keeping my development platform as well. Now I have OSX as my primary OS and a smaller Bootcamp partition for playing games. With the nVidia GeForce card in this beast, as well as a fast processor and 2GB of RAM, I’m set for a while..

There are times, though, when I’d like to have Windows apps at my fingertips while I’m in OSX. For that, I’ve tried both Parallels and VMWare Fusion. Parallels is nice, and it’s been around for a while. It seems to work really well, and I had no real problems trying it out. VMWare Fusion 2 is currently in beta, and I installed that as well. I’m definitely leaning towards VMWare, though, because I’ve used their products in the past, and they really know virtual machines. Both programs have a nifty feature that lets you run Windows apps in such a way that they seem to be running in OSX. In Parallels it’s called Coherence, and in VMWare it’s called Unity. Neat features!

So far I’ve been quite pleased with my purchase. The machine is sleek, runs fast, and allows me more flexibility than I’ve ever had in a laptop. It does run a bit hot at times, but that’s what lapdesks are for.. :)

So now I’m an Apple fan… I’m sure you’ll be seeing posts about OSX applications as I learn more about my Mac. I definitely recommend checking them out if you’ve never used one. And, if you have used one in the past, in pre-OSX days, check them out now. I hated the old Mac OS, but OSX is something completely different, definitely worth a second look.

Hide that data…

Data security is a pretty hot topic these days, especially when it comes to portable data.  In fact, recent reports put airport laptop theft in the tens of thousands a week.  Most, if not all, of these laptops have sensitive data on them, whether it be sensitive to the user, or sensitive to the user’s employer.  And to make matters worse, most of these laptops lack anything beyond basic security such as a Windows logon password.

But is security that much of an issue?  Is it that difficult to effectively secure the data on a laptop, or any other computer for that matter?  Well, it depends on the type of security we’re talking about.  There are significant differences between securing data on a machine that is powered off and securing it on a machine that is powered on and processing that data.  In the latter case, firewalls, anti-virus software, and good programming practices will help to shield that data from nosy intruders.

If your machine is not powered, and the attacker can gain physical access, is there any way to protect the data?  The answer is actually quite simple.  There exists a product that can encrypt the data on your machine, either in chunks, or as a whole.  In fact, with the latest version, you can even choose to have it deploy a decoy operating system, just in case you’re being tortured for your password..  What is this wondrous software, and how much is it going to cost you?  It’s called TrueCrypt, and it’s FREE.

TrueCrypt is a data encryption tool that runs on Windows, Mac OS X, and Linux.  In fact, if you’re a decent programmer, you can probably get it to work on most any operating system as the source is freely available.  The TrueCrypt website highlights the following as main features:

  • Creates a virtual encrypted disk within a file and mounts it as a real disk.
  • Encrypts an entire partition or storage device such as USB flash drive or hard drive.
  • Encrypts a partition or drive where Windows is installed (pre-boot authentication).
  • Encryption is automatic, real-time (on-the-fly) and transparent.
  • Provides two levels of plausible deniability, in case an adversary forces you to reveal the password:
    1) Hidden volume (steganography) and hidden operating system.
    2) No TrueCrypt volume can be identified (volumes cannot be distinguished from random data).
  • Encryption algorithms: AES-256, Serpent, and Twofish. Mode of operation: XTS.

There is a small amount of overhead when using encryption, but for most business applications, that’s an acceptable sacrifice for the security gained.  Even without the use of hidden volumes or decoy operating systems, TrueCrypt offers a safe, secure manner by which you can protect your data.  And, if you so choose, you can move TrueCrypt volumes between computers and even operating systems, such as on a USB flash drive, while maintaining compatibility.  In fact, I use this feature on a daily basis.  I have a small 1 Gig USB flash drive with a TrueCrypt partition on it where I store some personal information such as a copy of portable Thunderbird.  Included on the USB drive, in an unencrypted area, is a copy of TrueCrypt for Windows, Mac, and Linux.  Thus, if I ever need to mount the drive on an operating system without a copy of TrueCrypt, I’ve brought my own.
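The same workflow is scriptable from the command line on Linux; here’s a sketch from memory of the TrueCrypt 6 text-mode options (worth verifying against the documentation, and the paths are hypothetical):

# Create a new volume interactively in text mode:
$ truecrypt -t -c /media/usb/personal.tc
# Mount it like any other volume, then dismount when done:
$ truecrypt /media/usb/personal.tc /mnt/tc
$ truecrypt -d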

TrueCrypt 6.0 was released over the July 4th holiday.  This latest release adds some great new features.  Parallelized encryption and decryption was added, meaning TrueCrypt will use all of the processors (or cores) on a multi-processor system, allowing it to run substantially faster on such systems.  Also added was the ability to create and run hidden, or decoy, operating systems.  Hopefully I’ll never find myself in a situation where such a decoy is needed, but perhaps James Bond will find this new feature useful.  A number of minor enhancements were made as well, including a number of bug fixes.  The current version history can be found here, and you can download the latest version here.

TrueCrypt is a wonderful tool, even for personal data protection.  I recommend looking into it, and even integrating it into your everyday life.  It’s a small change, barely noticeable for most, but the security benefits are staggering.  Just don’t forget your password, ok?

Get it while it’s hot….

Firefox 3.0, out now. Get it, it’s definitely worth it.

Oh, are you still here? Guess you need some incentive then. Well, let’s take a quick look at the new features.

Probably the most talked about feature in the new release is the “Awesome Bar.” Yeah, the name is kind of lame, but the functionality is quite cool. The new bar combines the old auto-complete history feature with your bookmarks. In short, when you start typing in the Address Bar, Firefox auto-completes based on history, bookmarks, and tags. A drop-down appears below the location bar, showing you the results that best match what you’re typing. The results include the name of the page, the address, and the tags you’ve assigned (if it’s a bookmark).

While I find this particular feature of the new Firefox to be the most helpful, many people do not. The reason I’ve heard cited for this dislike is that it forces the user into something new, breaking the “simplicity” of Firefox. And while I can agree, somewhat, with that, I don’t think it’s that big a deal. I do agree, however, that the developers should have included a switch to revert to the old behavior. I did stumble upon a new extension and a few configuration options that can switch you back, though. The extension, called oldbar, modifies the presentation of the results so it resembles the old Firefox 2.0 results. The writer of the extension is quick to point out that the underlying algorithm is still the Firefox 3.0 version.

You can also check out these two configuration options in the about:config screen:

  • browser.urlbar.matchOnlyTyped (default: False)
  • browser.urlbar.maxRichResults (default: 12)

Setting the matchOnlyTyped option to True makes Firefox only display entries that have been previously typed. The maxRichResults option is a number that determines the maximum number of entries that can appear in the drop down. Unfortunately, there is no current way to revert back to the previous search algorithm. This has left a number of people quite upset.

Regardless, I do like the new “Awesome Bar,” though it did take a period of adjustment. One thing I never really liked was poring through my bookmarks looking for something specific. Even though I meticulously labeled each one, placed it in a special folder, and synchronized them so they were the same on all of my machines, I always had a hard time finding what I needed. The new “Awesome Bar” allows me to search history and bookmarks simultaneously, helping me quickly find what I need.

And to make it even better, Firefox 3.0 adds support for tags. What is a tag, you ask? Well, it’s essentially a keyword you attach to a bookmark. Instead of filing bookmarks away in a tree of folders (which you can still do), you assign one or more tags to a bookmark. Using tags, you can quickly search your bookmarks for a specific theme, helping you find that elusive bookmark quickly and efficiently. Gone are the days of trying to figure out which folder best matches a page you’re trying to bookmark, only to change your mind later on and desperately search for it in that other folder. Now, just add tags that describe it and file it away in any folder. Just recall one of the tags you used, and you’ll find that bookmark in no time. Of course, I still recommend using folders, for sanity’s sake.

Those are probably two of the most noticeable changes in the new Firefox. The rest is a little more subtle. For instance, speed has increased dramatically, both in rendering, and in JavaScript execution. Memory usage seems to be better as well, taking up much less memory than previous versions.

On the security side of things, Firefox 3 adds support for the new EV-SSL certificates, displaying the owner of the site in green, next to the favicon in the URL bar.

Firefox now tries to warn the user about potential virus and malware sites by checking them against the Google Safe Browsing blacklist. When you encounter a potentially harmful page, a warning message appears.

Similarly, if the page you are visiting appears to be a forgery, likely an attempt at phishing, you get a similar warning message.

Finally, the SSL error page is a little clearer, trying to explain why a particular page isn’t working.

There are other security additions including add-on protection, anti-virus integration, parental controls on Windows Vista, and more. Overall, it appears they have put quite a lot of work into making Firefox 3.0 more secure.

There are other new features that you can read about here. Check them out, and then give Firefox 3.0 a shot. Download it, it’s worth it.