New Tech!

I spent some time out in the wastes today and I stumbled across a cave of sorts.  I know, I know…  caves are dangerous, what with all the wastelanders and other hazards.  But sometimes you just have to take a chance, right?  Besides, the rad count was pretty normal, so I didn’t think there was anything really nasty in there.

Anyway, I made sure my trusty phase pistol was loaded, turned up the luminosity on my goggles and headed into the cave.  The front of the cave was pretty dull, just the normal skeletal remains of a variety of animals.  I walked about half a klick before I started wondering if the trip was worth it.  I was just about to call it quits when I tripped over something buried in the dirt.  There was a high-pitched whining noise and then the cave was flooded with light.  I had inadvertently triggered some sort of automatic response system.  Luckily enough, there didn’t seem to be any sort of automatic weapon system.

After I stopped shaking, I took a good look around.  I’m not sure what this place used to be, but it was pretty big.  The first thing I noticed was the massive metal doorway built into the rock.  The door itself was covered in brownish dirt that looked like it hadn’t been disturbed in forever.  On the door, towards the top, was a strange symbol I’ve never seen before.  It was a triangle with an eye in the middle.  I wonder what that means.

I didn’t see any immediate way to open the door, so I started looking around a bit more.  Over by one wall of the cave were a bunch of large metal containers.  Each one bore the same image as the door.  As with the door, though, there didn’t seem to be any immediate way to open them.  I was about to turn my attention elsewhere when I heard a loud metallic click come from one of the containers.  Upon further inspection, I noticed that one of the smaller containers had opened.

I cautiously approached the container and nudged it open a bit further with the tip of my gun.  Inside was a small, black cube that pulsed with a strange glowing light.  I’m not sure what this thing is yet, but it’s definitely interesting.  I packed it up and prepared to head home.  Before leaving, I dropped a beacon so I can find the cave again.  I can’t wait to head back.

So, here I sit, staring at my new find.  I haven’t figured out what it is, yet, but I look forward to finding out.  Anyone out there come across anything like this before?

Annual Rabbit Hole Day… Thanks, Warren

CVS to Subversion…

I’ve been using CVS for a number of years and it has served me well. I have looked at Subversion a number of times, but never really had the time to deal with it. That has changed somewhat and I have had the chance to use SVN a bit more recently. SVN feels a bit more elegant and, in most cases, faster than CVS. But I’m also having a bit of trouble. Perhaps someone out there can provide me with some insight into my problems.

Most, if not all, of my recent coding has been in languages such as Perl and PHP. Additionally, I mainly code alone, so my use of a revisioning system is purely for historical data rather than proper merging. I also use CVS to handle updates of deployed code. This alone has proven to be the strongest reason to continue using a revisioning system.

With CVS, I develop code until I’m ready to deploy it. At that point, I tag the current revision, usually with a tag of RELEASE. Code is then deployed by checking out the code currently tagged as RELEASE. From here, when I update the code for a new release, I use the -F flag to force the RELEASE tag onto the new code. A simple cvs update handles updating the deployed code to the latest release. If the deployed code was changed for some reason, as sometimes happens, CVS handles merging and I can make any necessary adjustments. Overall, this has worked quite well for some time. There are hiccups here and there, but on the whole it has been pretty flawless.
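
For illustration, the whole cycle looks roughly like this on the command line (the module name myproject is made up):

$ cvs tag RELEASE                    # tag the current revision for deployment
$ cvs checkout -r RELEASE myproject  # deploy by checking out the RELEASE tag
$ cvs tag -F RELEASE                 # later, force the tag onto the new code
$ cvs update                         # the deployed checkout picks up the new release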

Recently, I used cvs2svn to convert my existing CVS repositories over to SVN. After some false starts, some research, and a few minor headaches, I have all of my code converted over to SVN. I was able to get websvn running as well, which is a nice change as I can browse the repositories freely. I started playing around a bit and noticed that all of the imports have three additional directories, trunk, tags, and branches. More research and I discovered that SVN doesn’t handle tags the same way that CVS does… This concerned me as I used tags pretty heavily for deployment.

So now we come to my problem. I have identified how to create new tags using svn copy. This works great for the first copy of a given tag, but it breaks down when updating a tag. A second copy fails because the files already exist. I can use svn delete to remove the files before copying the new ones, but that’s an additional step I have no desire to do. After all, the purpose of moving to SVN is to make life easier, not harder.
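
To make the problem concrete, here is roughly what I’m running into (the repository URL is hypothetical):

$ svn copy file:///repos/myproject/trunk \
    file:///repos/myproject/tags/RELEASE -m "Tag for release"
$ # re-tagging later doesn't simply overwrite the existing tag, so first:
$ svn delete file:///repos/myproject/tags/RELEASE -m "Drop old tag"
$ svn copy file:///repos/myproject/trunk \
    file:///repos/myproject/tags/RELEASE -m "Re-tag for release"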

After some more reading, I find that I can merge releases. Presumably, I can check out the tagged version and then merge changes from the trunk version. However, this is still more complicated as I have to merge the code and then commit it back to the repository. So, again, we have more steps than I want to deal with.
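
As I understand it, the merge-based approach would go something like this (again, the URL is hypothetical, and older SVN versions may need an explicit revision range on the merge):

$ svn checkout file:///repos/myproject/tags/RELEASE release-wc
$ cd release-wc
$ svn merge file:///repos/myproject/trunk   # pull in the trunk changes
$ svn commit -m "Merge trunk into RELEASE"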

I think I understand the reason behind not being able to copy twice. I’m also aware that the way I was using CVS was fairly non-standard, but it worked for me. The code base I normally work on can have multiple features in progress at any given time, and deployment of one feature may get prioritized. So merely copying the base to a new tag doesn’t quite work, as not everything in that code may be complete at a given time.

So what are my options here? SVN has some advantages that I really like, including the web view and better handling of authentication and permissions. However, being unable to re-tag is kind of a pain. One way or another, I think I’ll be using SVN anyway, but I was kind of hoping to find a decent way to handle everything… Anyone out there have any suggestions?

Happy belated US Democracy Server Patch Day!

Stumbled across a site with these patch notes…  They’re funny enough that I’m reposting them below.

US Democracy Server: Patch Day

Version 44.0

President

  • Leadership: Will now scale properly to national crises. Intelligence was not being properly applied.
  • A bug has been fixed that allowed the President to ignore the effects of debuffs applied by the Legislative classes.
  • Drain Treasury: There appears to be a bug that allowed loot to be
    transferred from the treasury to anyone on the President’s friends
    list, or in the President’s party. We are investigating.
  • Messages to and from the President will now be correctly saved to the chat log.
  • Messages originating from the President were being misclassified as originating from The American People.
  • A rendering error that frequently caused the President to appear wrapped in the American Flag texture has been addressed.

Vice President

  • The Vice President has been correctly reclassified as a pet.
  • No longer immune to damage from the Legislative and Judicial classes.
  • The Vice President will no longer aggro on friendly targets. This
    bug was identified with Ranged Attacks and the Head Shot ability.
  • Reveal Identity: this debuff will no longer be able to target Covert Operatives.
  • Messages to and from the Vice President will now be correctly saved to the chat log.
  • A rendering bug was affecting the Vice President’s visibility,
    making him virtually invisible to the rest of the server. This has been
    addressed.

Cabinet

  • There was a bug in the last release that prevented the Cabinet from
    disagreeing with the President, which was the cause of a number of
    serious balance issues. This bug has been addressed, and we will
    continue to monitor the situation.

Judiciary

  • Many concerns have been raised regarding balance issues in the
    Supreme Court. This system is maintained on a different patch schedule,
    and will require longer to address.
  • A large number of NPCs in the Judiciary were incorrectly flagged
    “ideological.” We are trying to identify these cases and rectify this
    situation.

Homeland Security

  • Homeland Security Advisory System: We have identified a bug in this
    system that prevents the threat level from dropping below Elevated
    (Yellow). The code for Guarded (Blue) and Low (Green) has been
    commented out. We are testing the fix and hope to have it in by the
    next patch.
  • Torture: This debuff is being removed after a record number of complaints.
  • Item: Large Bottle of Water is incorrectly generating threat with
    TSA Agents when held in inventory. We are looking into the issue.
  • Asking questions about Homeland Security was incorrectly triggering the Chain-Jingoism debuff.

Economy

  • Serious on-going issues with server economy are still being
    addressed. We expect further roll-backs, and appreciate your help
    identifying and fixing bugs. We can’t make these fixes without your
    help.

PVP

  • Reputation with various factions is being rebalanced. The gradated
    reputation scale was erroneously being overwritten by the binary For
    Us/Against Us flag.

Quests

  • The “Desert Storm” quest chain was displaying an erroneous “Mission Accomplished” message near the beginning of the chain.
  • The quest chain that begins with “There’s no Cake like Yellow Cake”
    and terminates with “W-M-Denied” has been identified as uncompletable,
    and has been removed.

Reagents

  • Many recipes that currently call for Crude Oil can now be made with
    Wind, Solar, Geothermal and Ethanol reagents. We hope to roll out even
    more sweeping changes in the next patch.

Events

  • The “Axis of Evil” event is drawing to a close. Look forward to the “Rebuilding Bridges” event starting in January.

I can’t wait to see what Obama has in store for the technical side of things.  Too bad he has to start out with technology in the White House that has been compared to the Dark Ages.

Storage Area Networks

So I have this new job now and I’m being introduced to some new technology. One of the first things I was able to get my fingers into was the Storage Area Network (SAN). I’m quite familiar with Network Attached Storage (NAS), and was under the belief that SANs were just the same technology with dedicated servers. I was mistaken, however, and the differences are quite interesting.

Network Attached Storage is used quite widely, perhaps even in your own home. NAS uses protocols such as NFS and Samba/CIFS. You may not recognize Samba/CIFS, but this is the protocol used when you share a directory on a Windows machine. NFS is essentially an equivalent protocol used in the UNIX community. (Ok, ok, it’s not *really* equivalent, but let’s not start a holy war here…) In short, you identify which location on the server you want to share, and then you mount that share on the client. Shares are commonly identified by the server address and the directory or path that is being shared. Additionally, the underlying filesystem is abstracted away, which prevents the client from choosing or tuning a filesystem to match how the storage will be used.
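
As a quick sketch, mounting a NAS share on a Linux client looks something like this (the server name and paths are made up):

$ mount -t nfs fileserver:/export/data /mnt/data              # NFS share
$ mount -t cifs //fileserver/share /mnt/share -o username=me  # Samba/CIFS share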

Storage Area Networks, on the other hand, generally use the SCSI protocol for communication. In order to mount a SAN volume, you typically identify it just like any other hard drive. (Note: My experience here is on *nix systems, not Windows. As such, I’m not entirely sure how SANs are mounted via Windows) One fairly large benefit to mounting in this manner is that you can boot a server directly from the SAN rather than using local drives. SAN devices are presented to the operating system as a typical block device, allowing the administrator to choose the filesystem to use, as well as any of the associated filesystem management tools.
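
In practice, a SAN volume presented to a Linux server looks just like a locally attached disk. Assuming the LUN shows up as /dev/sdb (device names will vary), handling it might look like this:

$ fdisk -l /dev/sdb      # the LUN appears as an ordinary block device
$ mkfs.ext3 /dev/sdb1    # after partitioning, format it with the filesystem of your choice
$ mount /dev/sdb1 /mnt/san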

There are a number of different SAN types including Fibre Channel (FC), iSCSI, ATA over Ethernet, and more. The SAN I worked on is a Fibre Channel SAN from EMC. Fibre Channel is a high-speed transport technology, originally designed for use in supercomputers. It has since become the transport of choice for SANs. Typically, fiber optics are used as a physical medium, though transport over twisted-pair copper is also possible.

Fibre Channel itself is very similar to Ethernet technology. FC switches are used to provide connectivity between the SAN and the various clients using the SAN. Multiple switches can be connected together, providing both transport over long distances as well as expanding the number of available ports for clients. Multiple SANs can be connected to the switches, allowing clients to connect to shares in multiple locations. More advanced switches, such as the Cisco FC switch, use technology similar to Ethernet VLANs to isolate traffic on the switches, providing additional security and reducing broadcast traffic.

iSCSI is essentially Ethernet-attached storage. The SCSI protocol is tunneled over IP, allowing an existing IP infrastructure to be used for connectivity. This is a major advantage as it reduces the overall cost to deploy a SAN.
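
On Linux, for example, the open-iscsi tools make attaching to an iSCSI target fairly painless (the portal address below is made up):

$ iscsiadm -m discovery -t sendtargets -p 192.168.1.50  # find targets on the portal
$ iscsiadm -m node --login                              # log in; the LUN then appears as a block device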

A major drawback of SANs is the overall cost to deploy them. While hard drives are relatively inexpensive, the rest of the hardware that makes up a SAN is rather expensive. Even a small SAN can cost upwards of $25,000. But if you’re in the market for extremely high-speed storage, SANs are hard to beat.

Properly configured, SANs can offer a high level of redundancy. Typically, servers are connected to a SAN via multiple paths. As a result, the same storage device is presented to the server multiple times. A technology known as multipath can be used to abstract away these multiple paths and present a single unified device to the server. Multipath then monitors each path, switching between them when necessary, such as when a failure occurs. On the SAN itself, the storage is handled by one or more hard drive arrays. Arrays can be configured with a variety of RAID levels, providing redundancy between hard drives.
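
On Linux, this is typically handled by device-mapper-multipath. A minimal /etc/multipath.conf sketch might look like the following; the exact options you want will depend on your array:

defaults {
    user_friendly_names yes        # name devices mpath0, mpath1, ... instead of WWIDs
    path_grouping_policy failover  # use one path at a time, fail over on error
}

Once configured, running multipath -ll shows each unified device along with the state of its underlying paths.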

SANs are a pretty cool technology. It has definitely been interesting learning about them, and setting them up for the first time. I have to admit, however, that I mostly dealt with the server end of the setup. The SAN itself was already in place and the shares had already been created. After dealing with the software involved in creating these shares, I can’t say I would look forward to using it again. It’s amazing how confusing and unusable such software can be. Overall, though, I’m glad I had the chance to learn.

if (blocked($content))

And the fight rages on… Net Neutrality, to block or not to block.

Senator Byron Dorgan, a Democrat from North Dakota, is introducing new legislation to prevent service providers from blocking Internet content. Dorgan is not new to the arena, having put forth legislation in previous years dealing with the same thing. This time, however, he may be able to push it through.

So what’s different this time? Well, for one, we have a new president. And this new president has already stated that Net Neutrality is high on his list of technology-related actions. So, at the very least, it appears that Dorgan has the president in his corner.

Of course, some service providers are not happy about this. Comcast has gone on record with the following:

“We don’t believe legislation is necessary in this area and could harm innovation and investments,” said Sena Fitzmaurice, Comcast’s senior director of government affairs and corporate communications, in a phone interview. “We have consistently said that all our customers have access to content available on the Internet.”

And she’s right! Well… sort of. Comcast customers do have access to content. Or, rather, they do now. I do recall a recent period of time when Comcast was “secretly” resetting BitTorrent connections, and they have talked about both shaping and capping customers. So, in the end, you may get all of the content, just not all at the same level of service.

But I think, overall, Dorgan has an uphill battle. Net Neutrality is a concept not unlike free speech. It’s a great concept, but sometimes its implementation is questionable. For instance, if we look at pure Net Neutrality, then providers are required to allow all content without any shaping or blocking. Even bandwidth caps can be seen to fall under the umbrella of Net Neutrality. As a result, customers could theoretically use 100% of their allotted bandwidth at all times. This sounds great, until you realize that bandwidth, in some instances, and for perfectly legitimate reasons, is limited.

Take rural areas, for instance, especially in the midwest where homes can be miles away from each other. It can be cost-prohibitive for a service provider to run lines out to remote areas. And if they do, it’s generally done using line extender technology that can allow for decent voice signals over copper, but not high-speed bandwidth. One or two customer connections don’t justify the cost of the equipment. So those customers are relegated to slower service, and may end up on devices with high customer-to-bandwidth ratios. In those cases, a single customer can cause severe degradation of service for all the others, merely by using a lot of bandwidth.

On the flip side, however, allowing service providers to block and throttle according to their own whims can result in anti-competitive behavior. Take, for instance, IP telephony. There are a number of IP telephony providers out there that provide the technology to place calls over a local Internet connection. Skype and Vonage are two examples. Neither of these providers has any control over the local network, and thus their service is dependent on the local service provider. But let’s say the local provider wants to offer VoIP service. What’s to prevent that local provider from throttling or outright blocking Skype and Vonage? And thus we have a problem. Of course, you can fall back to the “let the market decide” argument. The problem with this is that there are often only one or two local providers, usually one telco and one cable company. The telco may throttle and block voice traffic, while the cable provider does the same for video. Thus, the only choice is to determine which we would rather have blocked. Besides, changing local providers can be difficult, as email addresses, phone numbers, etc. are usually tied to the existing provider. And on top of that, most people are just too lazy to change; they would rather complain.

My personal belief is that the content must be available and not throttled. However, I do believe the local provider should have some control over the network. So, for instance, if one type of traffic is eating up the majority of the bandwidth on the network, the provider should be able to throttle that traffic to some degree. However, they must make such throttling public, and they must throttle ALL of that type of traffic. Going back to the IP Telephony example, if they want to throttle Skype and Vonage, they need to throttle their own local VoIP too.

It’s a slippery slope and I’m not sure there is a perfect answer. Perhaps this new legislation will be a step in the right direction. Only time will tell.

Hacking the Infrastructure – How DNS works – Part 2

Welcome back. In part 1, I discussed the technical details of how DNS works. In this part, I’ll introduce you to some of the more common DNS server packages. In a future post I will cover some of the common problems with DNS as well as proposed solutions. So let’s dive right in.

The most popular DNS server, by far, is BIND, the Berkeley Internet Name Domain. BIND has a long and storied past. On the one hand, it’s one of the oldest packages for serving DNS, dating back to the early 1980s, and on the other, it has a reputation for being one of the most insecure. BIND started out as a graduate student project at the University of California at Berkeley, and was maintained by the Computer Systems Research Group. In the late 1980s, the Digital Equipment Corporation helped with development. Shortly after that, Paul Vixie became the primary developer and eventually formed the Internet Systems Consortium, which maintains BIND to this day.

Being the most popular DNS software out there, BIND suffers from the same malady that affects Microsoft Windows. It’s the most popular, most widely installed, and, as a result, hackers can gain the most by breaking it. In short, it’s the most targeted of DNS server software. Unlike Windows, however, BIND is open source and should benefit from the extra scrutiny that usually entails, but, alas, it appears that BIND is pretty tightly controlled by the ISC. From the ISC site, I do not see any publicly accessible software repository, no open discussion of code changes, and nothing else that really marks a truly open source application. The only open-source bits I see are a users’ mailing list and source code downloads. Beyond that, it appears that you either need to be a member of the “BIND Forum,” or wait for new releases with little or no input.

Not being an active user of BIND, I cannot comment too much on the current state of BIND other than what I can find publicly available. I do know that BIND supports just about every DNS convention there is out there. That includes standard DNS, DNSSEC, TSIG, and IPv6. The latter three of these are relatively new. In fact, the current major version of BIND, version 9, was written from the ground up specifically for DNSSEC support.

In late 1999, Daniel J. Bernstein, a professor at the University of Illinois, wrote a suite of DNS tools known as djbdns. Bernstein is a mathematician, cryptographer, and a security expert. He used all of these skills to produce a complete DNS server that he claimed had no security holes in it. He went as far as offering a security guarantee, promising to pay $1000 to the first person to identify a verifiable security hole in djbdns. To date, no one has been able to claim that money. As recently as 2004, djbdns was the second most popular DNS server software.

The primary reason for the existence of djbdns is Bernstein’s dissatisfaction with BIND and the numerous security problems therein. Having both security and simplicity in mind, Bernstein was able to make djbdns extremely stable and secure. In fact, djbdns was unaffected by the recent Kaminsky vulnerability, which affected both BIND and Microsoft DNS. Additionally, configuration and maintenance are both simple, straightforward processes.
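
To give you an idea of that simplicity, tinydns (the authoritative server in the djbdns suite) is configured with a plain text data file. A rough sketch, reusing the names and addresses from this blog as stand-ins:

# /service/tinydns/root/data
# delegate godshell.com to a.ns.godshell.com (this also creates its A record)
.godshell.com:204.10.167.61:a:3600
# A record for the blog (tinydns also generates the matching PTR)
=blog.godshell.com:204.10.167.1:3600

Running make in /service/tinydns/root compiles the file into data.cdb, which the server picks up atomically.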

On the other hand, the simplicity of djbdns may become its eventual downfall. Bernstein is critical of both DNSSEC and IPv6 and has offered no support for either of these. While some semblance of IPv6 support was added via a patch provided by a third party, I am unaware of any third-party DNSSEC support. Let me be clear, however: while the IPv6 patch does add additional support for IPv6, djbdns itself can already handle serving the AAAA records required for IPv6. The difference is that djbdns only talks over IPv4 transport, while the patch adds support for IPv6 transport.

Currently, it is unclear as to whether Bernstein will ever release a new version of djbdns with support for any type of “secure” DNS.

The Microsoft DNS server has existed since Windows NT 3.51 was shipped back in 1995. It was included as part of the Microsoft BackOffice, a collection of software intended for use by small businesses. As of 2004, it was the third most popular DNS server software. According to Wikipedia, Microsoft DNS is based on BIND 4.3 with, of course, lots of Microsoft extensions. Microsoft DNS has become more and more important with new releases of Windows Server. Microsoft’s Active Directory relies heavily on Microsoft DNS and the dynamic DNS capabilities included. Active Directory uses a number of special DNS entries to identify services and allow machines to locate them. It’s an acceptable use of DNS, to be sure, but really makes things quite messy and somewhat difficult to understand.

I used Microsoft DNS for a period of time after Windows 2000 was released. At the time, I was managing a small dial-up network and we used Active Directory and Steel-Belted RADIUS for authentication. Active Directory integration allowed us to easily synchronize data between the two sites we had, or so I thought. Because we were using Active Directory, the easiest thing to do was to use Microsoft DNS for our domain data and as a cache for customers. As we found out, however, Microsoft DNS suffered from some sort of cache problem that caused it to stop answering DNS queries after a while. We suffered with that problem for a short period of time and eventually switched over to djbdns.

There are a number of other DNS servers out there, both good and bad. I have no experience with any of them other than to know some of them by reputation. Depending on what happens in the future with the security of DNS, however, I predict that a lot of the smaller DNS packages will fall by the wayside. And while I have no practical experience with BIND beyond using it as a simple caching nameserver, I can only wonder why a package that claims to be open source, yet is so guarded, maintains its dominance. Perhaps I’m mistaken, but thus far I have found nothing that contradicts my current beliefs.

Next time we’ll discuss some of the more prevalent problems with DNS and DNS security. This will lead into a discussion of DNSSEC and how it works (or, perhaps, doesn’t work) and possible alternatives to DNSSEC. If you have questions and/or comments, please feel free to leave them in the comment section.

Hacking the Infrastructure – How DNS works – Part 1

Education time… I want to learn a bit more about DNS and DNSSEC in particular, so I’m going to write a series of articles about DNS and how it all works. So, let’s start at the beginning. What is DNS, and why do we need it?

DNS, the Domain Name System, is a hierarchical naming system used primarily on the Internet. In simple terms, DNS is a mechanism by which the numeric addresses assigned to the various computers, routers, etc. are mapped to alphanumeric names, known as domain names. As it turns out, humans tend to be able to remember words a bit easier than numbers. So, for instance, it is easier to remember blog.godshell.com as opposed to 204.10.167.1.

But, I think I’m getting a bit ahead of myself. Let’s start back closer to the beginning. Back when ARPANet was first developed, the developers decided that it would be easier to name the various computers connected to ARPANet, rather than identifying them by number. So, they created a very simplistic mapping system that consisted of name and address pairs written to a text file. Each line of the text file identified a different system. This file became known as the hosts file.

Initially, each system on the network was responsible for its own hosts file, which naturally resulted in a lot of systems either unaware of others, or unable to contact them easily. To remedy this, it was decided to make an “official” version of the hosts file and store it in a central location. Each node on ARPANet then downloaded the hosts file at a fairly regular interval, keeping the entire network mostly in-sync with new additions. As ARPANet began to grow and expand, the hosts file grew larger. Eventually, the rapid growth of ARPANet made updating and distributing the hosts file a difficult endeavor. A new system was needed.

In 1983, Paul Mockapetris, one of the early ARPANet pioneers, worked to develop the first implementation of DNS, called Jeeves. Paul wrote RFC 882 and RFC 883, the original RFCs describing DNS and how it should work. RFC 882 describes DNS itself and what it aims to achieve. It describes the hierarchical structure of DNS as well as the various identifiers used. RFC 883 describes the initial implementation details of DNS. These details include items such as message formats, field formats, and timeout values. Jeeves was based on these two initial RFCs.

So now that we know what DNS is and why it was developed, let’s learn a bit about how it works.

DNS is a hierarchical system. This means that the names are assigned in an ordered, logical manner. As you are likely aware, domain names are generally strings of words, known as labels, connected by a period, such as blog.godshell.com. The rightmost label is known as the top-level domain. Each label to the left is a sub-domain of the label to the right. For the domain name blog.godshell.com, com is the top-level domain, godshell is a sub-domain of com, and blog is a sub-domain of godshell.com. Information about domain names is stored in the name server in a structure called a resource record.

Each domain, be it a top level domain, or a sub-domain, is controlled by a name server. Some name servers control a series of domains, while others control a single domain. These various areas of control are called zones. A name server that is ultimately responsible for a given zone is known as an authoritative name server. Note, multiple zones can be handled by a single name server, and multiple name servers can be authoritative for the same zone, though they should be in primary and backup roles.

Using our blog.godshell.com example, the com top-level domain is in one zone, while godshell.com and blog.godshell.com are in another. There is another zone as well, though you likely don’t see it. That zone is the root-zone, usually represented by a single period after the full domain name, though almost all modern internet programs automatically append the period at the end, making it unnecessary to specify it explicitly. The root-zone is pretty important, too, as it essentially ties together all of the various domains. You’ll see what I mean in a moment.

Ok, so we have domains and zones. We know that zones are handled individually by different name servers, so we can infer that the name servers talk to each other somehow. If we infer further, we can guess that a single name resolution probably involves more than two name servers. So how exactly does all of this work? Well, that process depends on the type of query being used to perform the name resolution.

There are two types of queries, recursive and non-recursive. The query type is negotiated by the resolver, the software responsible for performing the name resolution. The simpler of the two queries is the non-recursive query. Simply put, the resolver asks the name server for non-recursive resolution and gets an immediate answer back. That answer is generally the best answer the name server can give. If, for instance, the name server queried was a caching name server, it is possible that the domain you requested was resolved before. If so, then the correct answer can be given. If not, then you will get the best information the name server can provide which is usually a pointer to a name server that will know more about that domain. I’ll cover caching more a little later.
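
You can watch this behavior with dig by turning recursion off. Pointed at a caching server (here the local name server from the example further down), the full answer comes back only if it is already cached; otherwise you get the best referral the server has:

$ dig +norecurse blog.godshell.com @192.168.1.1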

Recursive queries are probably the most common type of query. A recursive query aims to completely resolve a given domain name. It does this by following a few simple steps. Resolution begins with the rightmost label and moves left.

  1. The resolver asks one of the root name servers (that handle the root-zone) for resolution of the rightmost label. The root server responds with the address of a server who can provide more information about that domain label.
  2. The resolver queries the next server about the next label to the left. Again, the server will respond with the address of a server that knows more about that domain label or, possibly, an authoritative answer for the domain.
  3. Repeat step 2 until the final answer is given.

These steps are rather simplistic, but give a general idea of how DNS works. Let’s look at an example of how this works. For this example, I will be using the dig command, a standard Linux command commonly used to debug DNS. To simplify things, I’m going to use the +trace option which does a complete recursive lookup, printing the responses along the way.

$ dig +trace blog.godshell.com

; <<>> DiG 9.4.2-P2 <<>> +trace blog.godshell.com
;; global options: printcmd
. 82502 IN NS i.root-servers.net.
. 82502 IN NS e.root-servers.net.
. 82502 IN NS h.root-servers.net.
. 82502 IN NS g.root-servers.net.
. 82502 IN NS m.root-servers.net.
. 82502 IN NS a.root-servers.net.
. 82502 IN NS k.root-servers.net.
. 82502 IN NS c.root-servers.net.
. 82502 IN NS j.root-servers.net.
. 82502 IN NS d.root-servers.net.
. 82502 IN NS f.root-servers.net.
. 82502 IN NS l.root-servers.net.
. 82502 IN NS b.root-servers.net.
;; Received 401 bytes from 192.168.1.1#53(192.168.1.1) in 5 ms

This first snippet shows the very first query sent to the local name server (192.168.1.1) which is defined on the system I’m querying from. This is often configured automatically via DHCP, or hand-entered when setting up the computer for the first time. This output has a number of fields, so let’s take a quick look at them. First, any line preceded by a semicolon is a comment. Comments generally contain useful information on what was queried, what options were used, and even what type of information is being returned.

The rest of the lines above are responses from the name server. As can be seen from the output, the name server responded with numerous results, 13 in all. Multiple results are common, meaning the same information is duplicated on multiple servers, commonly for load balancing and redundancy. The fields, from left to right, are as follows: domain, TTL, class, record type, answer. The domain field is the current domain being looked up. In the example above, we’re starting at the far right of our domain with the root domain (defined by a single period).

TTL stands for Time To Live. This field defines the number of seconds this data is good for. This information is mostly intended for caching name servers. It lets the cache know how much time has to pass before the cache must look up the answer again. This greatly reduces DNS load on the Internet as a whole, as well as decreasing the time it takes to obtain name resolution.

The class field defines the query class used. Query classes can be IN (Internet), CH (Chaos), HS (Hesiod), or a few others. Generally speaking, most queries are of the Internet class. Other classes are used for other purposes such as databases.

Record type defines the type of record you’re looking at. There are a number of these, the most common being A, PTR, CNAME, MX, and NS. An A record is ultimately what most name resolution is after. It defines a mapping from a domain name to an IP address. A PTR record is the opposite of an A record. It defines the mapping of an IP Address to a domain name. CNAME is a Canonical name record, essentially an alias for another record. MX is a mail exchanger record which defines the name of a server responsible for mail for the domain being queried. And finally, an NS record is a name server record. These records generally define the name server responsible for a given domain.

com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
com. 172800 IN NS c.gtld-servers.net.
com. 172800 IN NS d.gtld-servers.net.
com. 172800 IN NS e.gtld-servers.net.
com. 172800 IN NS f.gtld-servers.net.
com. 172800 IN NS g.gtld-servers.net.
com. 172800 IN NS h.gtld-servers.net.
com. 172800 IN NS i.gtld-servers.net.
com. 172800 IN NS j.gtld-servers.net.
com. 172800 IN NS k.gtld-servers.net.
com. 172800 IN NS l.gtld-servers.net.
com. 172800 IN NS m.gtld-servers.net.
;; Received 495 bytes from 199.7.83.42#53(l.root-servers.net) in 45 ms

Our local resolver has randomly chosen an answer from the previous response and queried that name server (l.root-servers.net) for the com domain. Again, we received 13 responses. This time, we are pointed to the gtld servers, operated by VeriSign. The gtld servers are responsible for the .com and .net top-level domains, two of the most popular TLDs available.

godshell.com. 172800 IN NS ns1.emcyber.com.
godshell.com. 172800 IN NS ns2.incyberspace.com.
;; Received 124 bytes from 192.55.83.30#53(m.gtld-servers.net) in 149 ms

Again, our local resolver has chosen a random answer (m.gtld-servers.net) and queried for the next part of the domain, godshell.com. This time, we are told that there are only two servers responsible for that domain.

blog.godshell.com. 3600 IN A 204.10.167.1
godshell.com. 3600 IN NS ns1.godshell.com.
godshell.com. 3600 IN NS ns2.godshell.com.
;; Received 119 bytes from 204.10.167.61#53(ns2.incyberspace.com) in 23 ms

Finally, we randomly choose a response from before and query again. This time we receive three records in response, an A record and two NS records. The A record is the answer we were ultimately looking for. The two NS records are authority records, I believe. Authority records define which name servers are authoritative for a given domain. They are ultimately responsible for giving the “right” answer.
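
Incidentally, dig can query for each of these record types directly, which is handy for poking at a single piece of the puzzle rather than tracing the whole chain:

$ dig blog.godshell.com A   # address record
$ dig godshell.com MX       # mail exchanger for the domain
$ dig godshell.com NS       # authoritative name servers
$ dig -x 204.10.167.1       # reverse (PTR) lookup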

That’s really DNS in a nutshell. There’s a lot more, of course, and we’ll cover more in the future. Next time, I’ll cover the major flavors of name server software and delve into some of the problems with DNS today. So, thanks for stickin’ around! Hopefully you found this informative and useful. If you have questions and/or comments, please feel free to leave them in the comment section.

Super Sub Tiny Pico Computing Platform Thing-a-majiggie

Slashdot reported yesterday on a new cell-phone sized PC. Dubbed the IMOVIO iKit, this small form-factor PC runs an embedded version of Linux and boasts both Wifi and Bluetooth connectivity. IMOVIO is the in-house brand used by COMsciences to market the iKit.

The technical specs for the iKit are as follows (from the iKit presentation):

  • Marvell PXA270, 312MHz CPU
  • 128MB (ROM), 64MB SDRAM (RAM)
  • up to 8 GB Micro SD
  • 240×320, 2.8” TFT 262k color LCD
  • Lithium-Ion battery with up to 250 hours stand-by, 6 hours WiFi use and 6 hours gaming
  • Dimensions: L95mm x W65mm x D15.5mm
  • OS: Linux 2.4.19 (Windows Mobile or Android special order)
  • WiFi 802.11b/g
  • Bluetooth 2.0 EDR

This is a pretty decent little machine. The screen is slightly smaller than the Nintendo DS screen, but larger than most cellphones. Screen resolution is decent enough for basic video, supporting the same color depth as the DS. This should be enough real estate to display simple web pages, and certainly enough for instant messaging.

The core OS is based on Linux, kernel 2.4.19, but Windows Mobile and Google Android are apparently available as well, for special orders. Being Linux, I expect that an SDK of some sort will be released, allowing additional applications to be developed. The basic applications shipping with the unit include a mail client, web browser, instant messaging client, contact manager, photo viewer, music and video player, and possibly additional applications such as a VoIP client. Configuration of the unit seems to be determined by the customer buying the unit.

The targeted customer, however, seems to be carriers as opposed to end-users. IMOVIO expects to sell these units to carriers, specifically configured for the carrier’s service. From there, the carrier deals with selling them to end-users. This is the typical model for cell-phone companies.

So what good is this unit? Is it worth the expected $175 or so? Well, I suppose that depends on the user, and the performance of this little device. Personally, it would be nice to have a small instant-on unit I can use for quick web lookups, jotting a quick email, or viewing a video or two. However, most cell phones have the same capabilities today. In fact, my own cell phone, the BlackBerry 8830, does all this and more. The biggest drawback to the BlackBerry, however, is the lack of wifi connectivity, reducing speed considerably.

Personally, I’d like to give one of these devices a shot. It would be interesting to see what capabilities the unit truly has, and, at the same time, see if it impacts how I work and play every day.

Detecting DNS cache poisoning

I spoke with a good friend of mine last week about his recent trip to NANOG. While he was there, he listened to a talk about detecting DNS cache poisoning. Interestingly, this was detection at the authoritative server, not at the cache itself. That makes it a different problem from detection at the cache, because most cache poisoning happens outside of your own domain.

I initially wrote about the Kaminsky DNS bug a while back, and this builds somewhat on that discussion. When a cache poisoning attack is underway, the attacker must spoof the source IP of the DNS response. From what I can tell, this is because the resolver is told by the root servers who the authoritative server is for the domain. Thus, if a response comes back from a non-authoritative IP, it won’t be accepted.

So let’s look at the attack briefly. The attacker starts requesting a large number of addresses, something to the tune of a.example.com, b.example.com, etc. While those requests are being sent, the attacker floods the cache with spoofed responses. Since the attacker now has to guess both the QID *and* the source port, most of those spoofed responses miss because the port is incorrect.
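
As an aside, if you want to check whether your own resolver randomizes its source ports, DNS-OARC published a test you can run through the resolver in question (assuming the service is still up; replace your.resolver.address with the resolver you want to test):

$ dig +short porttest.dns-oarc.net TXT @your.resolver.address

The TXT answer includes a rating of the querying resolver’s port randomness.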

When the server receives a packet on a port that is not expecting data, it responds with an ICMP message, “Destination Port Unreachable.” That ICMP message is sent to the source IP of the packet, which is the spoofed authoritative IP. This is known as ICMP backscatter.

Administrators of authoritative name servers can monitor for ICMP backscatter and identify possible cache poisoning attacks. In most cases, there is nothing that can be done directly to mitigate these attacks, but it is possible to identify the cache being attacked and notify the admin. Cooperation between administrators can lead to a complete mitigation of the attack and protection of clients who may be harmed.
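
A quick-and-dirty way to watch for that backscatter on an authoritative server is a tcpdump filter for ICMP destination unreachable messages with the port unreachable code (the interface name is an assumption):

$ tcpdump -n -i eth0 'icmp[icmptype] == icmp-unreach and icmp[icmpcode] == 3'

A sudden flood of these arriving at your name server is a decent hint that someone is spoofing your address in a poisoning attempt.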

This is an excellent example of the type of data you can identify simply through passive monitoring of your local network.