Happy belated US Democracy Server Patch Day!

Stumbled across a site with these patch notes…  They’re funny enough that I’m reposting them below.

US Democracy Server: Patch Day

Version 44.0

President

  • Leadership: Will now scale properly to national crises. Intelligence was not being properly applied.
  • A bug has been fixed that allowed the President to ignore the effects of debuffs applied by the Legislative classes.
  • Drain Treasury: There appears to be a bug that allowed loot to be
    transferred from the treasury to anyone on the President’s friends
    list, or in the President’s party. We are investigating.
  • Messages to and from the President will now be correctly saved to the chat log.
  • Messages originating from the President were being misclassified as originating from The American People.
  • A rendering error that frequently caused the President to appear wrapped in the American Flag texture has been addressed.

Vice President

  • The Vice President has been correctly reclassified as a pet.
  • No longer immune to damage from the Legislative and Judicial classes.
  • The Vice President will no longer aggro on friendly targets. This
    bug was identified with Ranged Attacks and the Head Shot ability.
  • Reveal Identity: this debuff will no longer be able to target Covert Operatives.
  • Messages to and from the Vice President will now be correctly saved to the chat log.
  • A rendering bug was affecting the Vice President’s visibility,
    making him virtually invisible to the rest of the server. This has been
    addressed.

Cabinet

  • There was a bug in the last release that prevented the Cabinet from
    disagreeing with the President, which was the cause of a number of
    serious balance issues. This bug has been addressed, and we will
    continue to monitor the situation.

Judiciary

  • Many concerns have been raised regarding balance issues in the
    Supreme Court. This system is maintained on a different patch schedule,
    and will require longer to address.
  • A large number of NPCs in the Judiciary were incorrectly flagged
    “ideological.” We are trying to identify these cases and rectify this
    situation.

Homeland Security

  • Homeland Security Advisory System: We have identified a bug in this
    system that prevents the threat level from dropping below Elevated
    (Yellow). The code for Guarded (Blue) and Low (Green) has been
    commented out. We are testing the fix and hope to have it in by the
    next patch.
  • Torture: This debuff is being removed after a record number of complaints.
  • Item: Large Bottle of Water is incorrectly generating threat with
    TSA Agents when held in inventory. We are looking into the issue.
  • Asking questions about Homeland Security was incorrectly triggering the Chain-Jingoism debuff.

Economy

  • Serious on-going issues with server economy are still being
    addressed. We expect further roll-backs, and appreciate your help
    identifying and fixing bugs. We can’t make these fixes without your
    help.

PVP

  • Reputation with various factions is being rebalanced. The gradated
    reputation scale was erroneously being overwritten by the binary For
    Us/Against Us flag.

Quests

  • The “Desert Storm” quest chain was displaying an erroneous “Mission Accomplished” message near the beginning of the chain.
  • The quest chain that begins with “There’s no Cake like Yellow Cake”
    and terminates with “W-M-Denied” has been identified as uncompletable,
    and has been removed.

Reagents

  • Many recipes that currently call for Crude Oil can now be made with
    Wind, Solar, Geothermal and Ethanol reagents. We hope to roll out even
    more sweeping changes in the next patch.

Events

  • The “Axis of Evil” event is drawing to a close. Look forward to the “Rebuilding Bridges” event starting in January.

I can’t wait to see what Obama has in store for the technical side of things.  Too bad he has to start out with technology in the White House that has been compared to the Dark Ages.

Storage Area Networks

So I have this new job now and I’m being introduced to some new technology. One of the first things I was able to get my fingers into was the Storage Area Network (SAN). I’m quite familiar with Network Attached Storage (NAS), and was under the belief that SANs were just the same technology with dedicated servers. I was mistaken, however, and the differences are quite interesting.

Network Attached Storage is used quite widely, perhaps even in your own home. NAS uses protocols such as NFS and Samba/CIFS. You may not recognize Samba/CIFS, but this is the protocol used when you share a directory on a Windows machine. NFS is essentially an equivalent protocol used in the UNIX community. (Ok, ok, it’s not *really* equivalent, but let’s not start a holy war here…) In short, you identify which location on the server you want to share, and then you mount that share on the client. Shares are commonly identified by the server address and the directory or path that is being shared. Additionally, the type of filesystem used is abstracted away, preventing the machine mounting the share from choosing the filesystem or optimizing the storage for its particular usage.
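
As a quick illustration, mounting shares on a Linux client looks something like this (the server name "filer" and the paths here are made up):

$ sudo mount -t nfs filer:/export/data /mnt/nfs
$ sudo mount -t cifs //filer/data /mnt/cifs -o username=guest

In both cases you point at a server and a shared path; whatever filesystem actually holds the data on the server is invisible to the client.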

Storage Area Networks, on the other hand, generally use the SCSI protocol for communication. In order to mount a SAN volume, you typically identify it just like any other hard drive. (Note: my experience here is with *nix systems, not Windows, so I’m not entirely sure how SAN volumes are mounted there.) One fairly large benefit to mounting in this manner is that you can boot a server directly from the SAN rather than using local drives. SAN devices are presented to the operating system as a typical block device, allowing the administrator to choose the filesystem to use, as well as any of the associated filesystem management tools.
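
For example, assuming the SAN presents a LUN to the server as /dev/sdb (the device name will vary from system to system), you treat it just like a local disk:

$ sudo fdisk -l /dev/sdb
$ sudo mkfs.ext3 /dev/sdb
$ sudo mount /dev/sdb /mnt/san

The first command simply verifies that the device is visible; the other two format and mount it with whatever filesystem you prefer, exactly as you would with a directly attached drive.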

There are a number of different SAN types including Fibre Channel (FC), iSCSI, ATA over Ethernet, and more. The SAN I worked on is a Fibre Channel SAN from EMC. Fibre Channel is a high-speed transport technology, originally designed for use in supercomputers. It has since become the transport of choice for SANs. Typically, fiber optics are used as a physical medium, though transport over twisted-pair copper is also possible.

Fibre Channel itself is very similar to Ethernet technology. FC switches are used to provide connectivity between the SAN and the various clients using the SAN. Multiple switches can be connected together, providing both transport over long distances as well as expanding the number of available ports for clients. Multiple SANs can be connected to the switches, allowing clients to connect to shares in multiple locations. More advanced switches, such as the Cisco FC switch, use technology similar to Ethernet VLANs to isolate traffic on the switches, providing additional security and reducing broadcast traffic.

iSCSI is essentially Ethernet-attached storage. The SCSI protocol is tunneled over IP, allowing an existing IP infrastructure to be used for connectivity. This is a major advantage as it reduces the overall cost to deploy a SAN.
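
With the open-iscsi tools, for instance, attaching to an iSCSI target is roughly a two-step process (the target address 10.0.0.50 is just a placeholder):

$ sudo iscsiadm -m discovery -t sendtargets -p 10.0.0.50
$ sudo iscsiadm -m node --login

The first command asks the array what targets it offers; the second logs into them, after which the LUNs show up as ordinary block devices, just like their Fibre Channel counterparts.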

A major drawback of SANs is the overall cost to deploy them. While hard drives are relatively inexpensive, the rest of the hardware that makes up a SAN is rather expensive. Even a small SAN can cost $25,000 or more. But if you’re in the market for extremely high-speed storage, SANs are hard to beat.

Properly configured, SANs can offer a high level of redundancy. Typically, servers are connected to a SAN via multiple paths. As a result, the same storage device is presented to the server multiple times. A technology known as multipath can be used to abstract away these multiple paths and present a single unified device to the server. Multipath then monitors each path, switching between them when necessary, such as when a failure occurs. On the SAN itself, the storage is handled by one or more hard drive arrays. Arrays can be configured with a variety of RAID levels, providing redundancy between hard drives.
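
On Linux, this is typically the device-mapper multipath package. A bare-bones /etc/multipath.conf (just a sketch; real deployments usually need array-specific settings) and a quick status check might look like this:

defaults {
        user_friendly_names yes
}

$ sudo multipath -ll

The multipath -ll command lists each multipath device along with the individual paths behind it and their current state, so you can confirm that a failed path has actually been taken out of service.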

SANs are a pretty cool technology. It has definitely been interesting learning about them, and setting them up for the first time. I have to admit, however, that I mostly dealt with the server end of the setup. The SAN itself was already in place and the shares had already been created. After dealing with the software involved in creating these shares, I can’t say I would look forward to using it again. It’s amazing how confusing and unusable such software can be. Overall, though, I’m glad I had the chance to learn.

Hacking the Infrastructure – How DNS works – Part 2

Welcome back. In part 1, I discussed the technical details of how DNS works. In this part, I’ll introduce you to some of the more common DNS server packages. In a future post I will cover some of the common problems with DNS as well as proposed solutions. So let’s dive right in.

The most popular DNS server, by far, is BIND, the Berkeley Internet Name Domain. BIND has a long and storied past. On the one hand, it’s one of the oldest packages for serving DNS, dating back to the early 1980s, and on the other, it has a reputation for being one of the most insecure. BIND started out as a graduate student project at the University of California at Berkeley, and was maintained by the Computer Systems Research Group. In the late 1980s, the Digital Equipment Corporation helped with development. Shortly after that, Paul Vixie became the primary developer and eventually formed the Internet Systems Consortium, which maintains BIND to this day.

Being the most popular DNS software out there, BIND suffers from the same malady that affects Microsoft Windows. It’s the most popular, most widely installed, and, as a result, hackers can gain the most by breaking it. In short, it’s the most targeted DNS server software. Unlike Windows, however, BIND is open source and should benefit from the extra scrutiny that usually entails, but, alas, it appears that BIND is pretty tightly controlled by the ISC. On the ISC site, I do not see any publicly accessible software repository, no open discussion of code changes, and nothing else that really marks a truly open source project. The only open-source bits I see are a user mailing list and source code downloads. Beyond that, it appears that you either need to be a member of the “Bind Forum,” or wait for new releases with little or no input.

Not being an active user of BIND, I cannot comment too much on its current state other than what I can find publicly available. I do know that BIND supports just about every DNS convention out there, including standard DNS, DNSSEC, TSIG, and IPv6. The latter three are relatively new. In fact, the current major version of BIND, version 9, was written from the ground up specifically for DNSSEC support.

In late 1999, Daniel J. Bernstein, a professor at the University of Illinois, wrote a suite of DNS tools known as djbdns. Bernstein is a mathematician, cryptographer, and a security expert. He used all of these skills to produce a complete DNS server that he claimed had no security holes in it. He went as far as offering a security guarantee, promising to pay $1000 to the first person to identify a verifiable security hole in djbdns. To date, no one has been able to claim that money. As recently as 2004, djbdns was the second most popular DNS server software.

The primary reason for the existence of djbdns is Bernstein’s dissatisfaction with BIND and the numerous security problems therein. Having both security and simplicity in mind, Bernstein was able to make djbdns extremely stable and secure. In fact, djbdns was unaffected by the recent Kaminsky vulnerability, which affected both BIND and Microsoft DNS. Additionally, configuration and maintenance are both simple, straightforward processes.

On the other hand, the simplicity of djbdns may become its eventual downfall. Bernstein is critical of both DNSSEC and IPv6 and has offered no support for either. While some semblance of IPv6 support was added via a patch provided by a third party, I am unaware of any third-party DNSSEC support. Let me be clear, however: while the patch adds IPv6 transport, djbdns itself can already serve the AAAA records required for IPv6. The difference is that stock djbdns only talks over IPv4 transport, while the patch adds support for IPv6 transport.
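
In other words, a djbdns server can hand out IPv6 addresses while only ever speaking over IPv4. You can see the distinction yourself with dig, querying a host that publishes an AAAA record (www.kame.net is one example):

$ dig AAAA www.kame.net

The query and the response both travel over IPv4, but the answer returned is an IPv6 address.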

Currently, it is unclear as to whether Bernstein will ever release a new version of djbdns with support for any type of “secure” DNS.

The Microsoft DNS server has existed since Windows NT 3.51 was shipped back in 1995. It was included as part of the Microsoft BackOffice, a collection of software intended for use by small businesses. As of 2004, it was the third most popular DNS server software. According to Wikipedia, Microsoft DNS is based on BIND 4.3 with, of course, lots of Microsoft extensions. Microsoft DNS has become more and more important with new releases of Windows Server. Microsoft’s Active Directory relies heavily on Microsoft DNS and the dynamic DNS capabilities included. Active Directory uses a number of special DNS entries to identify services and allow machines to locate them. It’s an acceptable use of DNS, to be sure, but really makes things quite messy and somewhat difficult to understand.
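
Those special entries are mostly SRV records, which map a service name to the host and port providing it. For example (using a placeholder domain), a client can locate its LDAP servers with a query like this:

$ dig SRV _ldap._tcp.example.com

Each SRV record in the answer names a server offering the service, along with the priority, weight, and port to use.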

I used Microsoft DNS for a period of time after Windows 2000 was released. At the time, I was managing a small dial-up network and we used Active Directory and Steel-Belted RADIUS for authentication. Active Directory integration allowed us to easily synchronize data between the two sites we had, or so I thought. Because we were using Active Directory, the easiest thing to do was to use Microsoft DNS for our domain data and as a cache for customers. As we found out, however, Microsoft DNS suffered from some sort of cache problem that caused it to stop answering DNS queries after a while. We suffered with that problem for a short period of time and eventually switched over to djbdns.

There are a number of other DNS servers out there, both good and bad. I have no experience with any of them other than knowing some of them by reputation. Depending on what happens in the future with the security of DNS, however, I predict that a lot of the smaller DNS packages will fall by the wayside. And while I have no practical experience with BIND beyond using it as a simple caching nameserver, I can only wonder why a package that claims to be open source, yet is so tightly guarded, maintains its dominance. Perhaps I’m mistaken, but thus far I have found nothing that contradicts my current beliefs.

Next time we’ll discuss some of the more prevalent problems with DNS and DNS security. This will lead into a discussion of DNSSEC and how it works (or, perhaps, doesn’t work) and possible alternatives to DNSSEC. If you have questions and/or comments, please feel free to leave them in the comment section.

Hacking the Infrastructure – How DNS works – Part 1

Education time… I want to learn a bit more about DNS and DNSSEC in particular, so I’m going to write a series of articles about DNS and how it all works. So, let’s start at the beginning. What is DNS, and why do we need it?

DNS, the Domain Name Service, is a hierarchical naming system used primarily on the Internet. In simple terms, DNS is a mechanism by which the numeric addresses assigned to the various computers, routers, etc. are mapped to alphanumeric names, known as domain names. As it turns out, humans tend to be able to remember words a bit easier than numbers. So, for instance, it is easier to remember blog.godshell.com as opposed to 204.10.167.1.

But, I think I’m getting a bit ahead of myself. Let’s start back closer to the beginning. Back when ARPANet was first developed, the developers decided that it would be easier to name the various computers connected to ARPANet, rather than identifying them by number. So, they created a very simplistic mapping system that consisted of name and address pairs written to a text file. Each line of the text file identified a different system. This file became known as the hosts file.
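
The format lives on today as /etc/hosts on Unix-like systems (and a similar file on Windows). A few illustrative lines might look like this:

127.0.0.1      localhost
204.10.167.1   blog.godshell.com blog
192.168.1.10   fileserver.example.com fileserver

Each line pairs an address with one or more names, and name resolution is simply a matter of looking the name up in the file.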

Initially, each system on the network was responsible for its own hosts file, which naturally resulted in a lot of systems either unaware of others, or unable to contact them easily. To remedy this, it was decided to make an “official” version of the hosts file and store it in a central location. Each node on ARPANet then downloaded the hosts file at a fairly regular interval, keeping the entire network mostly in sync with new additions. As ARPANet began to grow and expand, the hosts file grew larger. Eventually, the rapid growth of ARPANet made updating and distributing the hosts file a difficult endeavor. A new system was needed.

In 1983, Paul Mockapetris, one of the early ARPANet pioneers, worked to develop the first implementation of DNS, called Jeeves. Paul wrote RFC 882 and RFC 883, the original RFCs describing DNS and how it should work. RFC 882 describes DNS itself and what it aims to achieve. It describes the hierarchical structure of DNS as well as the various identifiers used. RFC 883 describes the initial implementation details of DNS. These details include items such as message formats, field formats, and timeout values. Jeeves was based on these two initial RFCs.

So now that we know what DNS is and why it was developed, let’s learn a bit about how it works.

DNS is a hierarchical system. This means that the names are assigned in an ordered, logical manner. As you are likely aware, domain names are generally strings of words, known as labels, connected by a period, such as blog.godshell.com. The rightmost label is known as the top-level domain. Each label to the left is a sub-domain of the label to the right. For the domain name blog.godshell.com, com is the top-level domain, godshell is a sub-domain of com, and blog is a sub-domain of godshell.com. Information about domain names is stored in the name server in a structure called a resource record.

Each domain, be it a top level domain, or a sub-domain, is controlled by a name server. Some name servers control a series of domains, while others control a single domain. These various areas of control are called zones. A name server that is ultimately responsible for a given zone is known as an authoritative name server. Note, multiple zones can be handled by a single name server, and multiple name servers can be authoritative for the same zone, though they should be in primary and backup roles.

Using our blog.godshell.com example, the com top-level domain is in one zone, while godshell.com and blog.godshell.com are in another. There is another zone as well, though you likely don’t see it. That zone is the root-zone, usually represented by a single period after the full domain name, though almost all modern internet programs automatically append the period at the end, making it unnecessary to specify it explicitly. The root-zone is pretty important, too, as it essentially ties together all of the various domains. You’ll see what I mean in a moment.

Ok, so we have domains and zones. We know that zones are handled individually by different name servers, so we can infer that the name servers talk to each other somehow. If we infer further, we can guess that a single name resolution probably involves more than two name servers. So how exactly does all of this work? Well, that process depends on the type of query being used to perform the name resolution.

There are two types of queries, recursive and non-recursive. The query type is negotiated by the resolver, the software responsible for performing the name resolution. The simpler of the two queries is the non-recursive query. Simply put, the resolver asks the name server for non-recursive resolution and gets an immediate answer back. That answer is generally the best answer the name server can give. If, for instance, the name server queried was a caching name server, it is possible that the domain you requested was resolved before. If so, then the correct answer can be given. If not, then you will get the best information the name server can provide which is usually a pointer to a name server that will know more about that domain. I’ll cover caching more a little later.
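
You can perform a non-recursive query yourself with dig by adding the +norecurse flag. For example, asking one of the root servers about a domain it is not authoritative for:

$ dig +norecurse blog.godshell.com @a.root-servers.net

The root server does not chase down the answer for you; the response contains no A record, just a referral to the com name servers, which is exactly the "best information" behavior described above.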

Recursive queries are probably the most common type of query. A recursive query aims to completely resolve a given domain name. It does this by following a few simple steps. Resolution begins with the rightmost label and moves left.

  1. The resolver asks one of the root name servers (that handle the root-zone) for resolution of the rightmost label. The root server responds with the address of a server who can provide more information about that domain label.
  2. The resolver queries the next server about the next label to the left. Again, the server will respond with the address of a server that knows more about that domain label, or, possibly, an authoritative answer for the domain.
  3. Repeat step 2 until the final answer is given.

These steps are rather simplistic, but give a general idea of how DNS works. Let’s look at an example of how this works. For this example, I will be using the dig command, a standard Linux command commonly used to debug DNS. To simplify things, I’m going to use the +trace option which does a complete recursive lookup, printing the responses along the way.

$ dig +trace blog.godshell.com

; <<>> DiG 9.4.2-P2 <<>> +trace blog.godshell.com
;; global options: printcmd
. 82502 IN NS i.root-servers.net.
. 82502 IN NS e.root-servers.net.
. 82502 IN NS h.root-servers.net.
. 82502 IN NS g.root-servers.net.
. 82502 IN NS m.root-servers.net.
. 82502 IN NS a.root-servers.net.
. 82502 IN NS k.root-servers.net.
. 82502 IN NS c.root-servers.net.
. 82502 IN NS j.root-servers.net.
. 82502 IN NS d.root-servers.net.
. 82502 IN NS f.root-servers.net.
. 82502 IN NS l.root-servers.net.
. 82502 IN NS b.root-servers.net.
;; Received 401 bytes from 192.168.1.1#53(192.168.1.1) in 5 ms

This first snippet shows the very first query sent to the local name server (192.168.1.1) which is defined on the system I’m querying from. This is often configured automatically via DHCP, or hand-entered when setting up the computer for the first time. This output has a number of fields, so let’s take a quick look at them. First, any line preceded by a semicolon is a comment. Comments generally contain useful information on what was queried, what options were used, and even what type of information is being returned.

The rest of the lines above are responses from the name server. As can be seen from the output, the name server responded with numerous results, 13 in all. Multiple results are common and mean the same information is duplicated on multiple servers, commonly for load balancing and redundancy. The fields, from left to right, are as follows: domain, TTL, class, record type, answer. The domain field is the current domain being looked up. In the example above, we’re starting at the far right of our domain with the root domain (defined by a single period).

TTL stands for Time To Live. This field defines the number of seconds this data is good for. This information is mostly intended for caching name servers. It lets the cache know how much time has to pass before the cache must look up the answer again. This greatly reduces DNS load on the Internet as a whole, as well as decreasing the time it takes to obtain name resolution.

The class field defines the query class used. Query classes can be IN (Internet), CH (Chaos), HS (Hesiod), or a few others. Generally speaking, most queries are of the Internet class. Other classes are used for other purposes such as databases.

Record type defines the type of record you’re looking at. There are a number of these, the most common being A, PTR, CNAME, MX, and NS. An A record is ultimately what most name resolution is after. It defines a mapping from a domain name to an IP address. A PTR record is the opposite of an A record. It defines the mapping of an IP Address to a domain name. CNAME is a Canonical name record, essentially an alias for another record. MX is a mail exchanger record which defines the name of a server responsible for mail for the domain being queried. And finally, an NS record is a name server record. These records generally define the name server responsible for a given domain.
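
To make those types a bit more concrete, here is roughly what each looks like in a zone file (these entries are illustrative, not the actual godshell.com zone):

blog.godshell.com.              3600  IN  A      204.10.167.1
1.167.10.204.in-addr.arpa.      3600  IN  PTR    blog.godshell.com.
www.godshell.com.               3600  IN  CNAME  blog.godshell.com.
godshell.com.                   3600  IN  MX     10 mail.godshell.com.
godshell.com.                   3600  IN  NS     ns1.godshell.com.

The fields follow the same domain, TTL, class, type, answer layout described earlier; note that an MX record also carries a preference number (the 10 above) in front of the mail server name.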

com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
;; Received 495 bytes from 199.7.83.42#53(l.root-servers.net) in 45 ms

Our local resolver has randomly chosen an answer from the previous response and queried that name server (l.root-servers.net) for the com domain. Again, we received 13 responses. This time, we are pointed to the gtld servers, operated by VeriSign. The gtld servers are responsible for the .com and .net top-level domains, two of the most popular TLDs available.

godshell.com. 172800 IN NS ns1.emcyber.com.
godshell.com. 172800 IN NS ns2.incyberspace.com.
;; Received 124 bytes from 192.55.83.30#53(m.gtld-servers.net) in 149 ms

Again, our local resolver has chosen a random answer (m.gtld-servers.net) and queried for the next part of the domain, godshell.com. This time, we are told that there are only two servers responsible for that domain.

blog.godshell.com. 3600 IN A 204.10.167.1
godshell.com. 3600 IN NS ns1.godshell.com.
godshell.com. 3600 IN NS ns2.godshell.com.
;; Received 119 bytes from 204.10.167.61#53(ns2.incyberspace.com) in 23 ms

Finally, we randomly choose a response from before and query again. This time we receive three records in response, an A record and two NS records. The A record is the answer we were ultimately looking for. The two NS records are authority records, I believe. Authority records define which name servers are authoritative for a given domain. They are ultimately responsible for giving the “right” answer.
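
If you ever want to double-check an answer, you can skip the whole chain and ask one of the authoritative servers directly:

$ dig blog.godshell.com @ns1.godshell.com

Because that server is authoritative for the zone, the answer comes straight from its own zone data rather than from a cache.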

That’s really DNS in a nutshell. There’s a lot more, of course, and we’ll cover more in the future. Next time, I’ll cover the major flavors of name server software and delve into some of the problems with DNS today. So, thanks for stickin’ around! Hopefully you found this informative and useful. If you have questions and/or comments, please feel free to leave them in the comment section.

Super Sub Tiny Pico Computing Platform Thing-a-majiggie

Slashdot reported yesterday on a new cell-phone sized PC. Dubbed the IMOVIO iKit, this small form-factor PC runs an embedded version of Linux and boasts both Wifi and Bluetooth connectivity. IMOVIO is the in-house brand used by COMsciences to market the iKit.

The technical specs for the iKit are as follows (from the iKit presentation):

  • Marvell PXA270, 312MHz CPU
  • 128MB (ROM), 64MB SDRAM (RAM)
  • up to 8 GB Micro SD
  • 240×320, 2.8” TFT 262k color LCD
  • Lithium-Ion battery with up to 250 hours stand-by, 6 hours WiFi use and 6 hours gaming
  • Dimensions: L95mm x W65mm x D15.5mm
  • OS: Linux 2.4.19 (Windows Mobile or Android special order)
  • WiFi 802.11b/g
  • Bluetooth 2.0 EDR

This is a pretty decent little machine. The screen is slightly smaller than the Nintendo DS screen, but larger than most cellphones. Screen resolution is decent enough for basic video, supporting the same color depth as the DS. This should be enough real estate to display simple web pages, and it’s more than enough for instant messaging.

The core OS is based on Linux, kernel 2.4.19, but Windows Mobile and Google Android are apparently available as well, for special orders. Being Linux, I expect that an SDK of some sort will be released, allowing additional applications to be developed. The basic applications shipping with the unit include a mail client, web browser, instant messaging client, contact manager, photo viewer, music and video player, and possibly additional applications such as a VoIP client. Configuration of the unit seems to be determined by the customer buying the unit.

The targeted customer, however, seems to be carriers as opposed to end-users. IMOVIO expects to sell these units to carriers, specifically configured for the carrier’s service. From there, the carrier deals with selling them to end-users. This is the typical model for cell-phone companies.

So what good is this unit? Is it worth the expected $175 or so? Well, I suppose that depends on the user, and the performance of this little device. Personally, it would be nice to have a small instant-on unit I can use for quick web lookups, jotting a quick email, or viewing a video or two. However, most cell phones have the same capabilities today. In fact, my own cell phone, the BlackBerry 8830, does all this and more. The biggest drawback to the BlackBerry, however, is the lack of WiFi connectivity, which reduces speed considerably.

Personally, I’d like to give one of these devices a shot. It would be interesting to see what capabilities the unit truly has, and, at the same time, see if it impacts how I work and play every day.

DirecTV – Ugh…

I’ve written before about my dissatisfaction with DirecTV. So I’ve had the service for about a year and while it’s worked, I’ve noticed that I’m starting to download TV shows more often. Part of this is because I sent care packages to a friend in the Navy, and part of it is due to some of the features I lost when I moved to DirecTV. My family still uses the DVR pretty regularly, though, and there are some shows that I like to watch when they’re on.

The DVR has been acting a little strange lately, though. Actually, for about the last 1-2 weeks. Some of the recordings are inaccessible, showing only a black screen when you try to play them. Some of the recordings have odd periods where artifacts will start to appear and suddenly the show jumps, skipping over portions. So I decided to call DirecTV and see if they have a resolution. What a waste of time.. Here’s the gist of my conversation:

DirecTV: Hi, how can I help you?

Me: I’m having some problems with my DVR.

DirecTV: Ok, how about you explain what problems you are having and we’ll see if we can fix them.

Me: Well, I’m having a few problems. Some of the recordings I have are showing just black screens, no audio or video. And I’m having a problem with live TV when I try to rewind or pause. On some occasions, I am unable to rewind, and on others, I’ll get a message about Live TV having saved the recording and do I want to keep it. Then it jumps me to the current program, often making me lose 10-20 minutes of the program.

DirecTV: Ok, how are you trying to record the programs?

Me: Umm.. Either through the standard timers, or through hitting the record button.

At this point, the rep begins going through an explanation of how to record a program and how you can’t do it from the guide screen, etc. I interrupt and explain that I don’t have a problem recording, it’s the end result that is the problem.

Me: This all started about a week or two ago, so were there any upgrades?

DirecTV: I’m not showing any recent upgrades. I am seeing that these are known issues, however, and they have been escalated to engineering.

Me: Ok… But these issues just started. This has only been happening a short period of time, yet you’re telling me no changes have been made. Is it possible that I have a bad hard drive?

DirecTV: Correct. I’ll let engineering know that you’re experiencing these problems as well. As I said, these are known issues and we are working on them.

Me: Ok. So how do I know if the problem has been resolved? Will I see an upgrade or something?

DirecTV: Just continue using the DVR as you normally do. If the problems go away, the issue has been resolved. Or, you can call us in the future.

Me: *sigh* Ok, thanks I guess…

Seriously.. Come on.. No troubleshooting, other than talking to me. No asking what kind of DVR (though I suppose they could have that info in their records), no asking for verification of software levels, etc. Just told me that it was a known issue. I’m not really convinced, and with the way she basically brushed me off, I’m not at all happy about dealing with DirecTV… Yet I’m locked into a contract… Damn…

Has anyone else seen issues like this? Any tips on how to resolve it? At the moment I’m recording everything I can to DVD. After that’s done, I’ll try re-formatting the hard drive.. That is, if I can find the option to do it. They updated a few months ago and all the stupid menus changed… Argh…

Switching Gears…

Ok, so I did it. I made the switch. I bought a Mac. Or, more specifically, I bought a Macbook Pro.

Why? Well, I had a few reasons. Windows is the standard for most office applications, and it’s great for gaming, but I find it to be a real pain to code in. I’m not talking code for Windows applications, I’m talking code for web applications. Most of my code is perl and PHP and I really have no interest in fighting with Windows to get a stable development platform for these. Sure, I can remotely access the files I need, but then I’m tethered to an Internet connection. I had gotten around this (somewhat) by installing Linux on my Windows machine via VirtualBox. It worked wonderfully, but it’s slower that way, and there are still minor problems with accessibility, things not working, etc.

OSX seemed to fit the bill, though. By default, it comes with apache and PHP, you can install MySQL easily, and it’s built on top of BSD. I can drop to a terminal prompt and interact with it the same way I interact with a Linux machine. In fact, almost every standard command I use on my Linux servers is already on my Macbook.

Installing Apple’s XCode developer tools gives me just about everything else I could need, including a free IDE! Though, this particular IDE is more suited for C++, Java, Ruby, Python, and Cocoa. Still, it’s free and that’s nothing to scoff at. I have been using a trial of Komodo, though, and I’m leaning towards buying myself a copy. $295 is steep, though.

What really sold me on a Mac is the move to Intel processors and their Bootcamp software. I play games, and Mac doesn’t have the widest library of games, so having a Windows machine available is a must. Thanks to Bootcamp, I can continue to play games while keeping my development platform as well. Now I have OSX as my primary OS and a smaller Bootcamp partition for playing games. With the nVidia GeForce card in this beast, as well as a fast processor and 2GB of RAM, I’m set for a while..

There are times, though, when I’d like to have Windows apps at my fingertips while I’m in OSX. For that, I’ve tried both Parallels and VMWare Fusion. Parallels is nice, and it’s been around for a while. It seems to work really well, and I had no real problems trying it out. VMWare Fusion 2 is currently in beta, and I installed that as well. I’m definitely leaning towards VMWare, though, because I’ve used their products in the past, and they really know virtual machines. Both programs have a nifty feature that lets you run Windows apps in such a way as to make it seem like they’re running in OSX. In Parallels it’s called Coherence, and in VMWare it’s called Unity. Neat features!

So far I’ve been quite pleased with my purchase. The machine is sleek, runs fast, and allows me more flexibility than I’ve ever had in a laptop. It does run a bit hot at times, but that’s what lapdesks are for.. :)

So now I’m an Apple fan… I’m sure you’ll be seeing posts about OSX applications as I learn more about my Mac. I definitely recommend checking them out if you’ve never used one. And, if you have used one in the past, pre-OSX days, check them out now. I hated the old Mac OS, but OSX is something completely different, definitely worth a second look.

Get it while it’s hot….

Firefox 3.0, out now. Get it, it’s definitely worth it.

Oh, are you still here? Guess you need some incentive then. Well, let’s take a quick look at the new features.

Probably the most talked about feature in the new release is the “Awesome Bar.” Yeah, the name is kind of lame, but the functionality is quite cool. The new bar combines the old auto-complete history feature with your bookmarks. In short, when you start typing in the Address Bar, Firefox auto-completes based on history, bookmarks, and tags. A drop-down appears below the location bar, showing you the results that best match what you’re typing. The results include the name of the page, the address, and the tags you’ve assigned (if it’s a bookmark).

While I find this particular feature of the new Firefox to be the most helpful, many people do not. The reason I’ve heard cited for this hatred is that this forces the user into something new, breaking the “simplicity” of Firefox. And while I can agree, somewhat, with that, I don’t think it’s that big a deal. I do agree, however, that the developers should have included a switch to revert back to the old behavior. I did stumble upon a new extension and a few configuration options that can switch you back, though. The extension, called oldbar, modifies the presentation of the results so it resembles the old Firefox 2.0 results. The writer of the extension is quick to point out that the underlying algorithm is still the Firefox 3.0 version.

You can also check out these two configuration options in the about:config screen:

  • browser.urlbar.matchOnlyTyped (default: False)
  • browser.urlbar.maxRichResults (default: 12)

Setting the matchOnlyTyped option to True makes Firefox only display entries that have been previously typed. The maxRichResults option is a number that determines the maximum number of entries that can appear in the drop down. Unfortunately, there is no current way to revert back to the previous search algorithm. This has left a number of people quite upset.

Regardless, I do like the new “Awesome Bar,” though it did take a period of adjustment. One thing I never really liked was poring through my bookmarks looking for something specific. Even though I meticulously labeled each one, placed it in a special folder, and synchronized them so they were the same on all of my machines, I always had a hard time finding what I needed. The new “Awesome Bar” allows me to search history and bookmarks simultaneously, helping me quickly find what I need.

And to make it even better, Firefox 3.0 adds support for tags. What is a tag, you ask? Well, it’s essentially a keyword you attach to a bookmark. Instead of filing bookmarks away in a tree of folders (which you can still do), you assign one or more tags to a bookmark. Using tags, you can quickly search your bookmarks for a specific theme, helping you find that elusive bookmark quickly and efficiently. Gone are the days of trying to figure out which folder best matches a page you’re trying to bookmark, only to change your mind later on and desperately search for it in that other folder. Now, just add tags that describe it and file it away in any folder. Just recall one of the tags you used, and you’ll find that bookmark in no time. Of course, I still recommend using folders, for sanity’s sake.

Those are probably two of the most noticeable changes in the new Firefox. The rest is a little more subtle. For instance, speed has increased dramatically, both in rendering, and in JavaScript execution. Memory usage seems to be better as well, taking up much less memory than previous versions.

On the security side of things, Firefox 3 adds support for the new EV-SSL certificates, displaying the owner of the site in green next to the favicon in the URL bar.

Firefox now tries to warn the user about potential virus and malware sites by checking them against the Google Safe Browsing blacklist. When you encounter a potentially harmful page, a warning message appears in its place.

Similarly, if the page you are visiting appears to be a forgery, likely an attempt at phishing, you get a similar warning.

Finally, the SSL error page is a little more clear, trying to explain why a particular page isn’t working.

There are other security additions including add-on protection, anti-virus integration, parental controls on Windows Vista, and more. Overall, it appears they have put quite a lot of work into making Firefox 3.0 more secure.

There are other new features that you can read about here. Check them out, and then give Firefox 3.0 a shot. Download it, it’s worth it.

Headless Linux Testing Clients

As part of my day to day job, I’ve been working on a headless Linux client that can be transported from site to site to automate some network testing.  I can’t really go into detail on what’s being tested, or why, but I did want to write up a (hopefully) useful entry about headless client and some of the changes I made to the basic CentOS install to get everything to work.

First up was the issue of headless operation.  We’re using Cappuccino SlimPRO SP-625 units with the triple Gigabit Ethernet option.  They’re not bad little machines, though I do have a gripe with the back cover on them.  It doesn’t properly cover all of the ports on the back, leaving rather large holes where dust and dirt can get in.  What’s worse is that the power plug is not surrounded and held in place by the case, so I can foresee the board cracking at some point from the stress of the power cord…  But, for a sub-$800 machine, it’s not all that bad.

Anyway, on to the fun.  These machines will be transported to various locations where testing is to be performed.  On-site, there will be no keyboard, no mouse, and no monitor for them.  However, sometimes things go wrong and subtle adjustments may need to be made.  This means we need a way to get into the machine, locally, just in case there’s a problem with the network connection.  Luckily, there’s a pretty simple means of accessing a headless Linux machine without the need to lug around an extra monitor, keyboard, and mouse.  If you’ve ever worked on a switch or router, you’ll know where I’m going with this.

Most technicians have access to a laptop, especially if they have to configure routers or switches.  Why not access a Linux box the same way?  Using the agetty command, you can.  A getty is a program that manages terminals within Unix.  Those terminals can be physical, like the local keyboard, or virtual, like a telnet or ssh session.  The agetty program is an alternative getty that has some non-standard features such as baud rate detection, adaptive tty, and more.  In short, it’s perfect for direct serial, or even dial-in, connections.

Setting this all up is a snap, too.  By default, CentOS (and most Linux distros) set up six gettys for virtual terminals.  These virtual terminals use yet another getty, mingetty, which is a minimalized getty program with only enough features for virtual terminals.  In order to provide serial access, we need to add a few lines to enable agettys on the serial ports.

But wait, what serial ports do we have?  Well, assuming they are enabled in the BIOS, we can see them using the dmesg and setserial commands.  The dmesg command prints out the current kernel message buffer to the screen.  This is usually the output from the boot sequence, but if your system has been up a while, it may contain more recent messages.  We can use dmesg to determine the serial interfaces like this :

[friz@test ~]$ dmesg | grep serial
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A

As you can see from the output above, we have both a ttyS0 and a ttyS1 port available on this particular machine.  Now, we use setserial to make sure the system recognizes the ports:

[friz@test ~]$ sudo setserial -g /dev/ttyS[0-1]
/dev/ttyS0, UART: 16550A, Port: 0x03f8, IRQ: 4
/dev/ttyS1, UART: 16550A, Port: 0x02f8, IRQ: 3

The output is similar to dmesg, but setserial actually polls the port to get the necessary information, thereby ensuring that it’s active.  Also note, you will likely need to run this command as root to make it work.

Now that we know what serial ports we have, we just need to add them to the inittab and reload the init daemon.  Adding these to the inittab is pretty simple.  Your inittab will look something like this:

#
# inittab       This file describes how the INIT process should set up
#               the system in a certain run-level.
#
# Author:       Miquel van Smoorenburg, <miquels@drinkel.nl.mugnet.org>
#               Modified for RHS Linux by Marc Ewing and Donnie Barnes
#

# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)
#
id:3:initdefault:

# System initialization.
si::sysinit:/etc/rc.d/rc.sysinit

l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l4:4:wait:/etc/rc.d/rc 4
l5:5:wait:/etc/rc.d/rc 5
l6:6:wait:/etc/rc.d/rc 6

# Trap CTRL-ALT-DELETE
ca::ctrlaltdel:/sbin/shutdown -t3 -r now

# When our UPS tells us power has failed, assume we have a few minutes
# of power left.  Schedule a shutdown for 2 minutes from now.
# This does, of course, assume you have powerd installed and your
# UPS connected and working correctly.
pf::powerfail:/sbin/shutdown -f -h +2 "Power Failure; System Shutting Down"

# If power was restored before the shutdown kicked in, cancel it.
pr:12345:powerokwait:/sbin/shutdown -c "Power Restored; Shutdown Cancelled"

# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6

# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon

Just add the following after the original gettys lines:

# Run agettys in standard runlevels
s0:2345:respawn:/sbin/agetty -L -f /etc/issue.serial 9600 ttyS0 vt100
s1:2345:respawn:/sbin/agetty -L -f /etc/issue.serial 9600 ttyS1 vt100

Let me explain quickly what the above means.  Each line is broken into multiple fields, separated by a colon.  At the very beginning of the line is an identifier, s0 and s1 in our case.  Next comes a list of the runlevels for which this program should be spawned.  Finally, the command to run is last.

The agetty command takes a number of arguments:

    • The -L switch disables carrier detect for the getty.
    • The next switch, -f, tells agetty to display the contents of a file before the login prompt, /etc/issue.serial in our case (a sample file is shown after this list).
    • Next is the baud rate to use.  9600 bps is a good default value.  You can specify speeds up to 115,200 bps, but they may not work with all terminal programs.
    • Next up is the serial port, ttyS0 and ttyS1 in our example.
    • Finally, the terminal emulation to use.  VT100 is probably the most common, but you can use others.
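
The /etc/issue.serial file referenced above doesn’t exist by default; it’s simply a banner printed before the login prompt, so its contents are entirely up to you.  Something as simple as this works:

Headless test client - serial console
Authorized access only.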

Now that you’ve added the necessary lines, reload the init daemon via this command:

[friz@test ~]$ sudo /sbin/init q

At this point, you should be able to connect a serial cable to your Linux machine and access it via a program such as minicom, PuTTY, or hyperterminal.  And that’s all there is to it.
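
From the laptop side, any terminal emulator will do.  On a Linux or Mac laptop, for example, something as simple as screen works (this assumes a USB-to-serial adapter showing up as /dev/ttyUSB0; a built-in port would be /dev/ttyS0):

$ screen /dev/ttyUSB0 9600

Match the speed to whatever baud rate you configured in the agetty line, hit Enter, and you should be greeted with a login prompt.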

You can also redirect the kernel to output all console messages to the serial port as well.  This is accomplished by adding a switch to the kernel line in your /etc/grub.conf file like this:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/hdc3
#          initrd /initrd-version.img
#boot=/dev/hdc
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-53.1.21.el5)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-53.1.21.el5 ro root=LABEL=/ console=ttyS0,9600
	initrd /initrd-2.6.18-53.1.21.el5.img

The necessary change is the console option added to the end of the kernel line above.  The console switch tells the kernel that you want to re-direct the console output.  The first option is the serial port to re-direct to, and the second is the baud rate to use.

And now you have a headless Linux system!  These come in handy when you need a Linux machine for remote access, but you don’t want to deal with having a mouse, keyboard, and monitor handy to access the machine locally.