Posts Tagged ‘networking’

Network Enhanced Telepathy

Wednesday, March 18th, 2015

I’ve recently been reading Wired for War by P.W. Singer and one of the concepts he mentions in the book is Network Enhanced Telepathy. This struck me as not only something that sounds incredibly interesting, but something that we’ll probably see hit mainstream in the next 5-10 years.

According to Wikipedia, telepathy is “the purported transmission of information from one person to another without using any of our known sensory channels or physical interaction.” In other words, you can think *at* someone and communicate. The concept that Singer talks about in the book isn’t quite as “mystical” since it uses technology to perform the heavy lifting. In this case, technology brings fantasy into reality.

Scientists have already developed methods to “read” thoughts from the human mind. These methods are by no means perfect, but they are a start. As we’ve seen with technology across the board from computers to robotics, electric cars to rockets, technological jumps may ramp up slowly, but then they rocket forward at a deafening pace. What seems like a trivial breakthrough at the moment may well lead to the next step in human evolution.

What Singer describes in the book is one step further. If we can read the human mind, and presumably write back to it, then adding a network in-between, allowing communication between minds, is obvious. Thus we have Network Enhanced Telepathy. And, of course, with that comes all of the baggage we associate with networks today. Everything from connectivity issues and lag to security problems.

The security issues associated with something like this range from inconvenient to downright horrifying. If you thought social engineering was bad, wait until we have a direct line straight into someone’s brain. Today, security incidents can result in stolen data, denial of service, and, in some rare instances, destruction of property. These same risks will exist with this new technology as well.

Stolen data is pretty straightforward. Could an exploit allow an attacker to arbitrarily read data from someone’s mind? How would this work? Could they pinpoint the exact data they want, or would they only have access to the current “thoughts” being transmitted? While access to current thoughts might not be as bad as exact data, it’s still possible this could be used to steal important data such as passwords, secret information, etc. Pinpointing exact data could be absolutely devastating. Imagine, for a moment, what would happen if an attacker was able to pluck your innermost secrets straight out of your mind. Everyone has something to hide, whether that’s a deep dark secret, or maybe just the image of themselves in the bathroom mirror.

I’ve seen social engineering talks wherein the presenter talks about a technique to interrupt a person, mid-thought, and effectively create a buffer overflow of sorts, allowing the social engineer to insert their own directions. Taken to the next level, could an attacker perform a similar attack via a direct link to a person’s mind? If so, what access would the attacker then attain? Could we be looking at the next big thing in brainwashing? Merely insert the new programming, directly into the user.

How about Denial of Service attacks or physical destruction? Could an attacker cause physical damage in their target? Is a connection to the mind enough access to directly modify the cognitive functions of the target? Could an attacker induce something like Locked-In syndrome in a user? What about blocking specific functions, preventing the user from being able to move limbs, or speak? Since the brain performs regulatory control over the body, could an attacker modify the temperature, heart rate, or even induce sensations in their target? These are truly scary scenarios and warrant serious thought and discussion.

Technology is racing ahead at breakneck speeds and the future is an exciting one. These technologies could allow humans to take that next evolutionary step. But as with all technology, we should be looking at it with a critical eye. As technology and biology become more and more intertwined, it is essential that we tread carefully and be sure to address potential problems long before they become a reality.

Suspended Visible Masses of Small Frozen Water Crystals

Friday, March 13th, 2015

The Cloud, hailed as a panacea for all your IT related problems. Need storage? Put it in the Cloud. Email? Cloud. Voice? Wireless? Logging? Security? The Cloud is your answer. The Cloud can do it all.

But what does that mean? How is it that all of these problems can be solved by merely signing up for various cloud services? What is the cloud, anyway?

Unfortunately, defining what the cloud actually is remains problematic. It means many things to many people. The cloud can be something “simple” like extra storage space or email. Google, Dropbox, and others offer a service that allows you to store files on their servers, making them available to you from “anywhere” in the world. Anywhere, of course, if the local government and laws allow you to access the services there. These services are often free for a small amount of space.

Google, Microsoft, Yahoo, and many, many others offer email services, many of them “free” for personal use. In this instance, though, free can be tricky. Google, for instance, has algorithms that “read” your email and display advertisements based on the results. So while you may not exchange money for this service, you do exchange a level of privacy.

Cloud can also be pure computing power. Virtual machines running a variety of operating systems, available for the end-user to access and run whatever software they need. Companies like Amazon have turned this into big business, offering a full range of back-end services for cloud-based servers. Databases, storage, raw computing power, it’s all there. In fact, they have developed APIs allowing additional services to be spun up on-demand, augmenting existing services.

As time goes on, more and more services are being added to the cloud model. The temptation to drop self-hosted services and move to the cloud is constantly increasing. The incentives are definitely there. Cloud services are affordable, and there’s no need for additional staff for support. All the benefits with very little of the expense. End-users have access to services they may not have had access to previously, and companies can save money and time by moving services they use to the cloud.

But as with any service, self-hosted or not, there are questions you should be asking. The answers, however, are sometimes a bit hard to get. But even without direct answers, there are some inferences you can make based on what the service is and what data is being transferred.

Data being accessible virtually anywhere, at any time, is one of the major draws of cloud services. But there are downsides. What happens when the service is inaccessible? For a self-hosted service, you have control and can spend the necessary time to bring the service back up. In some cases, you may have the ability to access some or all of the data, even without the service being fully restored. When you surrender your data to the cloud, you are at the mercy of the service provider. Not all providers are created equal and you cannot expect uniform performance and availability across all providers. This means that in the event of an outage, you are essentially helpless. Keeping local backups is definitely an option, but oftentimes you’re using the cloud so that you don’t need those local backups.

Speaking of backups, is the cloud service you’re using responsible for backups? Will they guarantee that your data will remain safe? What happens if you accidentally delete a needed file or email? These are important issues that come up quite often for a typical office. What about the other side of the question? If the service is keeping backups, are those backups secure? Is there a way to delete data, permanently, from the service? Accidents happen, so if you’ve uploaded a file containing sensitive information, or sent/received an email with sensitive information, what recourse do you have? Dropbox keeps snapshots of all uploaded data for 30 days, but there doesn’t seem to be an official way to permanently delete a file. There are a number of articles out there claiming that this is possible, just follow the steps they provide, but can you be completely certain that the data is gone?

What about data security? Well, let’s think about the data you’re sending. For an email service, this is a fairly simple answer. Every email goes through that service. In fact, your email is stored on the remote server, and even deleted messages may hang around for a while. So if you’re using email for anything sensitive, the security of that information is mostly out of your control. There’s always the option of using some sort of encryption, but web-based services rarely support that. So data security is definitely an issue, and not necessarily an issue you have any control over. And remember, even the “big guys” make mistakes. Fishnet Security has an excellent list of questions you can ask cloud providers about their security stance.

Liability is an issue as well, though you may not initially realize it. Where, exactly, is your data stored? Do you know? Can you find out? This can be an important issue depending on what your industry is, or what you’re storing. If your data is being stored outside of your home country, it may be subject to the laws and regulations of the country it’s stored in.

There are a lot of aspects to deal with when thinking about cloud services. Before jumping into the fray, do your homework and make sure you’re comfortable with giving up control to a third party. Once you give up control, it may not be that easy to rein it back in.

Keepin’ TCP Alive

Thursday, February 20th, 2014

I was recently debugging an odd network issue that turned out to have a pretty simple explanation. A client on the network was intermittently experiencing significant delays in accessing the network. Upon closer inspection, it turned out that prior to the delay, the client was being left idle for long periods of time. With this additional information, it was pretty easy to identify that there was likely a connection between the client and server that was being torn down for being idle.

So in the end, the cause of the problem itself was pretty simple to identify. The fix, however, is more of a conundrum. The obvious answer is to adjust the timers and prevent the connection from being torn down. But what timers should be adjusted? There are the keepalive timers on the client, the keepalive timers on the server, and the idle teardown timers on the firewall in the middle.

TCP keepalive handling varies between operating systems. If we look at the three major operating systems, Linux, Windows, and OS X, then we can make the blanket statement that, by default, keepalives are sent after two hours of idle time. But, most firewalls seem to have a default TCP teardown timer of one hour. These defaults are not conducive to keeping idle connections alive.

The optimal scenario for timeouts is for the clients to have a keepalive timer that fires at an interval lower than that of the idle TCP timeout on the firewall. The actual values to use, as well as which devices should be changed, are up for debate. The firewall is clearly the easier point at which to make such a change. Typically there are very few firewall devices that would need to be updated as compared to the larger number of client devices. Additionally, there will likely be fewer firewalls added to the network over time, so ensuring that timers are properly set is much easier. On the other hand, the defaults that firewalls are generally configured with have been chosen specifically by the vendor for legitimate reasons. So perhaps the clients should conform to the setting on the firewall? What is the optimal solution?

And why would we want to allow idle connections anyway? After all, if a connection is idle, it’s not being used. Clearly, any application that needed a connection to remain open would send some sort of keepalive, right? Is there a valid reason to allow these sorts of connections for an extended period of time?

As it turns out, there are valid reasons for connections to remain active, but idle. For instance, database connections are often kept open for long periods of time for performance purposes. The TCP handshake can take a considerable amount of time compared to the simple matter of retrieving data from a database. So if the database connection remains established, additional data can be retrieved without the overhead of TCP setup. But in these instances, shouldn’t the application ensure that keepalives are sent so that the connection is not prematurely terminated by an idle timer somewhere along the data path? Well, yes. Sort of. Allow me to explain.

When I first discovered the source of the network problem we were seeing, I chalked it up to lazy programming. While it shouldn’t take much to add a simple keepalive system to a networked application, it is extra work. As it turns out, however, the answer isn’t quite that simple. All three major operating systems, Windows, Linux, and OS X, have kernel-level mechanisms for TCP keepalives. Each OS has a slightly different take on how keepalive timers should work.

Linux has three parameters related to TCP keepalives (a short example of the per-socket equivalents follows the list):

tcp_keepalive_time
The interval between the last data packet sent (simple ACKs are not considered data) and the first keepalive probe; after the connection is marked to need keepalive, this counter is not used any further
tcp_keepalive_intvl
The interval between subsequential keepalive probes, regardless of what the connection has exchanged in the meantime
tcp_keepalive_probes
The number of unacknowledged probes to send before considering the connection dead and notifying the application layer
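
To make those parameters concrete, here’s a minimal sketch of how an application might enable and tune keepalives on a single connection instead of relying on the system-wide defaults. This assumes Python on Linux; the TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT socket options are the per-socket counterparts of the three kernel parameters above, and the host name, port, and timer values are purely hypothetical.

import socket

# Open a TCP connection with per-socket keepalives enabled so an idle
# session survives a firewall with a one-hour teardown timer.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # turn keepalives on for this socket
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 1800)  # idle seconds before the first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)   # seconds between subsequent probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # unanswered probes before the connection is declared dead
sock.connect(("db.example.com", 5432))                          # hypothetical long-lived database connection

With values like these, probes start a half hour into an idle period, well before a typical one-hour firewall teardown timer fires, which keeps the connection in the firewall’s state table.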

OS X works quite similarly to Linux, which makes sense since they’re both *nix variants. OS X has four parameters that can be set.

keepidle
Amount of time, in milliseconds, that the connection must be idle before keepalive probes (if enabled) are sent. The default is 7200000 msec (2 hours).
keepintvl
The interval, in milliseconds, between keepalive probes sent to remote machines, when no response is received on a keepidle probe. The default is 75000 msec.
keepcnt
Number of probes sent, with no response, before a connection is dropped. The default is 8 packets.
always_keepalive
Assume that SO_KEEPALIVE is set on all TCP connections; the kernel will periodically send a packet to the remote host to verify the connection is still up.

Windows acts very differently from Linux and OS X. Again, there are three parameters, but they perform entirely different tasks. All three parameters are registry entries; a short per-socket example follows the list.

KeepAliveInterval
This parameter determines the interval between TCP keep-alive retransmissions until a response is received. Once a response is received, the delay until the next keep-alive transmission is again controlled by the value of KeepAliveTime. The connection is aborted after the number of retransmissions specified by TcpMaxDataRetransmissions have gone unanswered.
KeepAliveTime
The parameter controls how often TCP attempts to verify that an idle connection is still intact by sending a keep-alive packet. If the remote system is still reachable and functioning, it acknowledges the keep-alive transmission. Keep-alive packets are not sent by default. This feature may be enabled on a connection by an application.
TcpMaxDataRetransmissions
This parameter controls the number of times that TCP retransmits an individual data segment (not connection request segments) before aborting the connection. The retransmission time-out is doubled with each successive retransmission on a connection. It is reset when responses resume. The Retransmission Timeout (RTO) value is dynamically adjusted, using the historical measured round-trip time (Smoothed Round Trip Time) on each connection. The starting RTO on a new connection is controlled by the TcpInitialRtt registry value.
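
For completeness, here’s the per-socket equivalent on Windows, again a rough sketch assuming Python; sock.ioctl with SIO_KEEPALIVE_VALS overrides the registry-wide KeepAliveTime and KeepAliveInterval values (both in milliseconds) for a single connection, and the endpoint and values below are hypothetical.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("db.example.com", 5432))   # hypothetical long-lived connection
# (enable, idle time in ms before the first probe, interval in ms between probes)
sock.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 1800000, 60000))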

There’s a pretty good reference page with information on how to set these parameters that can be found here.

We still haven’t answered the question of optimal settings. Unfortunately, there doesn’t seem to be a correct answer. The defaults provided by most firewall vendors seem to have been chosen to ensure that the firewall does not run out of resources. Each connection through the firewall must be tracked. As a result, each connection uses up a portion of memory and CPU. Since both memory and CPU are finite resources, administrators must be careful not to exceed the limits of the firewall platform.

There is some good news. Firewalls have had a one-hour TCP timeout timer for quite a while. As time has passed and new revisions of firewall hardware have been released, the CPU has become more powerful and the amount of memory in each system has grown. The default one-hour timer, however, has remained in place. This means that modern firewall platforms are much better prepared to handle an increase in the number of connections tracked. Ultimately, the firewall platform must be monitored and appropriate action taken if resource usage becomes excessive.

My recommendation would be to start by setting the firewall TCP teardown timer to a value slightly higher than that of the clients. For most networks, this would be slightly over two hours. The firewall administrator should monitor the number of connections tracked on the firewall as well as the resources used by the firewall. Adjustments should be made as necessary.

If longer lasting idle connections are unacceptable, then a slightly different tactic can be used. The firewall teardown timer can be set to a level comfortable to the administrator of the network. Problematic clients can be updated to send keepalive packets at a shorter interval. These changes will likely only be necessary on servers. Desktop systems don’t have the same need as servers for long-term establishment of idle connections.

Whose Problem Is It Anyway?

Thursday, June 7th, 2012

This week, Adobe released a security patch for their CS5 product line. While Adobe releasing security patches isn’t really that surprising given their track record with vulnerable products, what is somewhat surprising are the circumstances surrounding the patch. Adobe released the patch somewhat reluctantly.

Sometime in May, possibly earlier, Adobe was made aware of a fairly severe security vulnerability in their CS5 product line. A specially crafted image file was enough to compromise the victim’s computer. Obviously this is a pretty severe flaw and should be fixed ASAP, right? Well, Adobe didn’t really see it that way. Their initial response to the problem was that users who wanted a fixed version would have to pay to upgrade to the CS6 product line, in which the flaw was patched. Eventually they decided to backport the patch to the CS5 version.

Adobe’s initial response and their eventual capitulation leads to a broader discussion. Given any security problem, or even any bug in general, who is responsible for fixing it? The vendor, of course, right? Well… Maybe?

In a perfect world, there would be no bugs, security or otherwise. In a slightly less perfect world, all bugs would be resolved before a product is retired. But neither world exists and bugs seem to prevail. So, given that, whose problem is it anyway?

Vendors offer a lot of justifications, and plenty of excuses, for when they’ll patch and how long they’ll support something. It’s not an easy problem for vendors, though, and some put a lot of thought into their policies. They don’t always get them right, and there’s never a way to make everyone happy.

Patching generally follows a product lifecycle. While the product is supported, patching happens as a normal course of business. When a product is retired, some companies put together a formal support plan. For instance, when Cisco announces that a product has entered the End-of-Life cycle, they lay out a multi-year plan for support. Typically this involves regular software maintenance for a year, security releases for 2-3 years, and then hardware maintenance for the remainder. This gives businesses ample time to find a suitable replacement.

Unfortunately, not all vendors act responsibly and often customers are left high and dry when a product is suddenly obsoleted. Depending on the vendor, this sometimes leads to discussions about the possibility of legislation forcing vendors to support products, or to at least address security vulnerabilities. If something like this were to pass, where does it end? Are vendors forced to support products forever? Should they only have to fix severe security problems? And what constitutes a severe security problem?

There are a multitude of reasons that bugs, security or otherwise, are not dealt with. Some justifiable, others not. Working in networking, the primary excuse I’ve heard from hardware vendors over the years is that the management interface of their product is not intended to be on a public network where it can be attacked. Or that the management interfaces should be put behind a firewall where they can’t be attacked. These excuses are garbage, of course, but some vendors just continue to give them. And, unfortunately, you’re not always in a position to drop a vendor and move elsewhere. So, we do what we can to secure the systems and move on.

And sometimes the problem isn’t the vendor, but the customer. How long has it been since Microsoft phased out older versions of its Windows operating system? Windows XP is relatively recent, but it’s been a number of years since Windows 2000 was phased out. Or how about Windows 98, 95, and even Windows NT? And customers still have these deployed in their networks. Hell, I know of at least one OS/2 Warp system that’s still deployed in a Telco Central Office!

There is a basis for some regulation, however, and it may affect vendors. When the security of a particular product can significantly impact the public, it can be argued that regulation is necessary. The poster child for this argument is SCADA systems, which seem to be perpetually riddled with security holes, mostly due to outdated operating systems.

SCADA systems are what typically control the electrical grid and nuclear power plants. For obvious reasons, security problems with these systems are deadly serious. I often hear that these systems should be air-gapped from the Internet, but the lure of easy access and control often pushes users to ignore this advice.

So should SCADA systems be regulated? The regulations already in place for the industries that use them clearly aren’t working, so what makes us think that more regulation will help? And if we regulate and force vendors to provide patches for security problems, what makes us think that those industries will install them?

This is a complex problem and there are no easy answers. The best we can hope for is a competent administrator who knows how to handle security and deal with threats properly. Until then, let’s hope for incompetent criminals.

Towards Building More Secure Networks

Tuesday, May 15th, 2012

It is no surprise that security is at the forefront of everyone’s minds these days. From high-profile breaches to script kiddies wreaking havoc across the Internet, it is obvious that there are some weaknesses that need to be addressed.

In most cases, complete network redesigns are out of the question. They can be extremely invasive and costly. However, it may be possible to augment the existing network in such a manner as to add additional layers of security. It may also open the door to even more changes down the road.

So what do I mean by this? Allow me to explain…

Many networks are fairly simple with only a few subnets, typically a user subnet and a server subnet. Sometimes there’s a bit of complexity on the user side, with subnets per department or per building. Often this has more to do with the manageability of users than with security. Regardless, it’s a good practice that can be used to make a network more secure in the long run.

What is often neglected is the server side of things. Typically, there are one, maybe two subnets. Outside users are granted access to the standard web ports. Sometimes more ports, such as ssh and ftp, are opened for a variety of reasons. What administrators don’t realize, or don’t intend, is that they’re allowing outsiders direct access to their core servers without any real security in front of them. Sure, sure, there might be a firewall, but a firewall is there to ensure you only come in on the proper ports, right? If your traffic is destined for port 80, it doesn’t matter if it’s malicious or not, the firewall lets it through anyway.

But what’s the alternative? What can be done instead? Well, what about sending outside traffic to a separate network where the systems being accessed are less critical, and designed to verify traffic before passing it on to your core servers? What I’m talking about is creating a DMZ network and forcing all users through a proxy. Even a simple proxy can help to prevent many attacks by merely dropping illegal traffic and not letting it through to the core server. Proxies can also be heavily fortified with HIDS (host-based intrusion detection) and other security software designed to look for suspicious traffic and block it.

By adding in this DMZ layer, you’ve put a barrier between your server core and the outside world. This is known as layered defense. You can add additional layers as time and resources allow. For instance, I recommend segmenting away database servers as well as identity management servers. Adding this additional segmentation can be done over time as new servers come online and old servers are retired. The end goal is to add this additional security without disrupting the network as a whole.

If you have the luxury of building a new network from the ground up, however, make sure you build this in from the start. There is, of course, a breaking point. It makes sense to create networks to segregate servers by security level, but it doesn’t make sense to segregate purely to segregate. For instance, you may segregate database and identity management servers away from the rest of the servers, but segregating Oracle servers away from MySQL servers may not add much additional security. There are exceptions, but I suggest you think long and hard before you make such an exception. Are you sure that the additional management overhead is worth the security? There’s always a cost/benefit analysis to perform.

Segregating networks is just the beginning. The purpose here is to enhance security. By segregating networks, you can significantly reduce the number of clients that need to access a particular server. The whole world may need to access your proxy servers, but only your proxy servers need to access the actual web application servers. Likewise, only your web application servers need access to your database servers. Using this information, you can tighten down your firewall. But remember, a firewall is just a wall with holes in it. The purpose is to deflect random attacks, but it does little to nothing to prevent attacks on ports you’ve opened. For that, there are other tools.

At the very edge, simplistic firewalling and generally loose HIDS can be used to deflect most attacks. As you move further within the network, additional security can be used. For instance, deploying an IPS at the very edge of the network can result in the IPS being quickly overwhelmed. Of course, you can buy a bigger, better IPS, but to what end? Instead, you can move the IPS further into the network, placing it where it will be more effective. If you place it between the proxy and the web server, you’ve already ensured that the only traffic hitting the IPS is loosely validated HTTP traffic. With this knowledge, you can reduce the number of signatures the IPS needs to have, concentrating on high quality HTTP signatures. Likewise, an IPS between the web servers and database servers can be configured with high quality database signatures. You can, in general, direct the IPS to block any and all traffic that falls outside of those parameters.

As the adage goes, there is no silver bullet for security. Instead, you need to use every weapon in your arsenal and put together a solid defense. By combining all of these techniques, you can defend against many attacks. But remember, there’s always a way in. You will not be able to stop the most determined attacker; you can only hope to slow him down enough to limit his access. And remember, securing your network is only one aspect of security. Don’t forget about the other low-hanging fruit such as SQL injection, cross-site scripting, and other common application holes. You may have the most secure network in existence, but a simple SQL injection attack can result in a massive data breach.

Helpful Rules for OSSEC

Friday, June 17th, 2011

OSSEC has quickly become a primary weapon in my security toolkit.  It’s flexible, fast, and very easy to use.  I’d like to share a few rules I’ve found useful as of late.

I primarily use OSSEC in a server/client setup.  One side effect of this is that when I make changes to the agent’s configuration, it takes some time for it to push out to all of the clients.  Additionally, clients don’t restart automatically when a new agent config is received.  However, it’s fairly easy to remedy this.

First, make sure you have syscheck enabled and that you’re monitoring the OSSEC directory for changes.  I recommend monitoring all of /var/ossec and ignoring a few specific directories where files change regularly. You’ll need to add this to both the ossec.conf and the agent.conf.

<directories check_all="yes">/var</directories>
<ignore type="sregex">^/var/ossec/queue/</ignore>
<ignore type="sregex">^/var/ossec/logs/</ignore>
<ignore type="sregex">^/var/ossec/stats/</ignore>

The first time you set this up, you’ll have to manually restart the clients after the new config is pushed to them. All new clients should work fine, however.

Next, add the following rules to your local_rules.xml file (or whatever scheme you’re using).

<rule level="12" id="100005">
   <if_matched_group>syscheck</if_matched_group>
   <description>agent.conf changed, restarting OSSEC</description>
   <match>/var/ossec/etc/shared/agent.conf</match>
</rule>

This rule looks for changes to the agent.conf file and triggers a level 12 alert. Now we just need to capture that alert and act on it. To do that, you need to add the following to your ossec.conf file on the server.

<command>
    <name>restart-ossec</name>
    <executable>restart-ossec.sh</executable>
    <expect>srcip</expect>
    <timeout_allowed>no</timeout_allowed>
</command>
<active-response>
    <command>restart-ossec</command>
    <location>local</location>
    <rules_id>100005</rules_id>
</active-response>

You need to add this to the top of your active response section, above any other rules. OSSEC matches the first active-response block and ignores any subsequent ones. The restart-ossec.sh script referenced in the command section should exist already in your active-response/bin directory as it’s part of the distribution.

And that’s all there is to it. Whenever the agent.conf file changes on a client, it’ll restart the OSSEC agent, reading in the new configuration.

Next up, a simple DoS prevention rule for Apache web traffic. I’ve had a few instances where a single IP would hammer away at a site I’m responsible for, eating up resources in the process. Generally speaking, there’s no real reason for this. So, one solution is to temporarily block IPs that are abusive.

Daniel Cid, the author of OSSEC, helped me out a little on this one. It turned out to be a little less intuitive than I expected.

First, you need to group together all of the “normal” response codes. The actual error responses (400/500 errors) are handled by other, more aggressive rules, so you can ignore most of them. For our purposes, we want to trigger on the ordinary 200/300/400 response codes.

<rule id="131105" level="1">
      <if_sid>31101, 31108, 31100</if_sid>
      <description>Group of all "normal" 200/300/400 error codes.</description>
</rule>

Next, we want to create a composite rule that will fire after a set frequency and time limit. In short, we want this rule to fire if X matches are made in Y seconds.

<rule id="131106" level="10" frequency="500" timeframe="60">
      <if_matched_sid>131105</if_matched_sid>
      <same_source_ip />
      <description>Excessive access, Temporary block</description>
</rule>

That should be all you need provided you have active response already enabled. You can also add a specific active response for this rule that blocks for a shorter, or longer, period of time. That’s the beauty of OSSEC, the choice is in your hands.
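
As a rough illustration, a dedicated active response for rule 131106 might look something like the block below. This assumes the stock firewall-drop command, which ships with OSSEC in active-response/bin, is already defined in your ossec.conf; the 600-second timeout is an arbitrary value, so tune it to your liking.

<active-response>
    <command>firewall-drop</command>
    <location>local</location>
    <rules_id>131106</rules_id>
    <timeout>600</timeout>
</active-response>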

I hope you find these rules helpful. If you have any questions or comments, feel free to post them below.

if (blocked($content))

Tuesday, November 18th, 2008

And the fight rages on… Net Neutrality, to block or not to block.

Senator Byron Dorgan, a Democrat from North Dakota, is introducing new legislation to prevent service providers from blocking Internet content. Dorgan is not new to the arena, having put forth legislation in previous years dealing with the same thing. This time, however, he may be able to push it through.

So what’s different this time? Well, for one, we have a new president. And this new president has already stated that Net Neutrality is high on his list of technology related actions. So, at the very least, it appears that Dorgan has the president in his corner.

Of course, some service providers are not happy about this. Comcast has gone on record with the following:

“We don’t believe legislation is necessary in this area and could harm innovation and investments,” said Sena Fitzmaurice, Comcast’s senior director of government affairs and corporate communications, in a phone interview. “We have consistently said that all our customers have access to content available on the Internet.”

And she’s right! Well… sort of. Comcast customers do have access to content. Or, rather, they do now. I do recall a recent period of time where Comcast was “secretly” resetting BitTorrent connections, and they have talked about both shaping and capping customers. So, in the end, you may get all of the content, just not all at the same level of service.

But I think, overall, Dorgan has an uphill battle. Net Neutrality is a concept not unlike free speech. It’s a great concept, but sometimes its implementation is questionable. For instance, if we look at pure Net Neutrality, then providers are required to allow all content without any shaping or blocking. Even bandwidth caps can be seen to fall under the umbrella of Net Neutrality. As a result, customers can theoretically use 100% of their allotted bandwidth at all times. This sounds great, until you realize that bandwidth, in some instances, and for perfectly legitimate reasons, is limited.

Take rural areas, for instance, especially in the midwest where homes can be miles away from each other. It can be cost-prohibitive for a service provider to run lines out to remote areas. And if they do, it’s generally done using line extender technology that can allow for decent voice signals over copper, but not high-speed bandwidth. One or two customer connections don’t justify the cost of the equipment. So, those customers are relegated to slower service, and may end up on devices with high customer-to-bandwidth ratios. In those cases, a single customer can cause severe degradation of service for all the others, merely by using a lot of bandwidth.

On the flip side, however, allowing service providers to block and throttle according to their own whims can result in anti-competitive behavior. Take, for instance, IP Telephony. There are a number of IP Telephony providers out there that provide the technology to place calls over a local Internet connection. Skype and Vonage are two examples. Neither of these providers has any control over the local network, and thus their service is dependent on the local service provider. But let’s say the local provider wants to offer VoIP service. What’s to prevent that local provider from throttling or outright blocking Skype and Vonage? And thus we have a problem. Of course, you can fall back to the “let the market decide” argument. The problem with this is that, often, there are only one or two local providers, usually one telco and one cable company. The telco may throttle and block voice traffic, while the cable provider does the same for video. Thus, the only choice is to determine which we would rather have blocked. Besides, changing local providers can be difficult as email addresses, phone numbers, etc. are usually tied to the existing provider. And on top of that, most people are just too lazy to change; they would rather complain.

My personal belief is that the content must be available and not throttled. However, I do believe the local provider should have some control over the network. So, for instance, if one type of traffic is eating up the majority of the bandwidth on the network, the provider should be able to throttle that traffic to some degree. However, they must make such throttling public, and they must throttle ALL of that type of traffic. Going back to the IP Telephony example, if they want to throttle Skype and Vonage, they need to throttle their own local VoIP too.

It’s a slippery slope and I’m not sure there is a perfect answer. Perhaps this new legislation will be a step in the right direction. Only time will tell.

Bandwidth in the 21st Century

Tuesday, February 26th, 2008

As the Internet has evolved, the one constant has been the typical Internet user, who has mostly used the Internet to browse websites, a relatively low-bandwidth activity. Even as the capabilities of the average website evolved, bandwidth usage remained relatively low, increasing at a slow rate.

In my own experience, a typical Internet user, accessing the Internet via DSL or cable, only uses a very small portion of the available bandwidth. Bandwidth is only consumed for the few moments it takes to load a web page, and then usage falls to zero. The only real exception was the online gamer. Online gamers use a consistent amount of bandwidth for long periods of time, but the total bandwidth used at any given moment is still relatively low, much lower than the available bandwidth.

Times are changing, however.  In the past few years, peer-to-peer applications such as Napster, BitTorrent, Kazaa, and others have become more mainstream, seeing widespread usage across the Internet.  Peer-to-peer applications are used to distribute files, both legal and illegal, amongst users across the Internet.  Files range in size from small music files to large video files.  Modern applications such as video games and even operating systems have incorporated peer-to-peer technology to facilitate rapid deployment of software patches and updates.

Voice and video applications are also becoming more mainstream. Services such as Joost, Veoh, and YouTube allow video streaming over the Internet to the user’s PC. Skype allows the user to make phone calls via their computer for little or no cost. Each of these applications uses bandwidth at a constant rate, vastly different from that of web browsing.

Hardware devices such as the Xbox 360, AppleTV, and others are helping to bring streaming Internet video to regular televisions within the home. The average user is starting to take advantage of these capabilities, consuming larger amounts of bandwidth for extended periods of time.

The end result of all of this is increased bandwidth usage within the provider network. Unfortunately, most providers have based their current network architectures on outdated over-subscription models, expecting users to continue their web-browsing patterns. As a result, many providers are scrambling to keep up with the increased bandwidth demand. At the same time, they continue releasing new access packages claiming faster and faster speeds.

Some providers are using questionable practices to ensure the health of their network. For instance, Comcast is allegedly using packet sniffing techniques to identify BitTorrent traffic. Once identified, they send a reset command to the local BitTorrent client, effectively severing the connection and canceling any file transfers. This has caught the attention of the FCC, which has released a statement that it will step in if necessary.

Other providers, such as Time Warner, are looking into tiered pricing for Internet access. Such plans would allow the provider to charge extra for users that exceed a pre-set limit. In other words, Internet access becomes more than the typical 3/6/9 Mbps access advertised today. Instead, the high-speed access is offset by a total transfer limit. Hopefully these limits will be both reasonable and clearly defined. Ultimately, though, it becomes the responsibility of the user to avoid exceeding the limit, similar to staying under the minutes on a cell phone plan.

Pre-set limits have problems as well, though. For instance, Windows will check for updates at a regular interval, using Internet bandwidth to do so. Granted, this is generally a small amount, but it adds up over time. Another example is PPPoE and DHCP traffic. Most DSL customers are configured using PPPoE for authentication. PPPoE sends keep-alive packets to the BRAS (Broadband Remote Access Server) to ensure that the connection stays up. Depending on how the ISP calculates bandwidth usage, these packets will likely be included in the calculation, resulting in “lost” bandwidth. Likewise, cable subscribers’ devices send periodic DHCP requests to renew their leases. Again, this traffic will likely be included in any bandwidth calculations.

In the end, it seems that substantial changes to the ISP structure are coming, but it is unclear what those changes may be.  Tiered bandwidth usage may be making a comeback, though I suspect that consumers will fight against it.  Advances in transport technology make increasing bandwidth a simple matter of replacing aging hardware.  Of course, replacements cost money.  So, in the end, the cost may fall back on the consumer, whether they like it or not.

Troubleshooting 101

Monday, July 9th, 2007

There seems to be a severe lack of understanding and technique when it comes to troubleshooting these days. It seems to me that a large amount of troubleshooting effort is completely wasted on wild ideas and theories while the simplest and most direct solutions are ignored.

Occam’s Razor states: “entities should not be multiplied beyond necessity.” Simply put, the easiest solution is often the best. This is the perfect mindset for anyone who does troubleshooting. There is no need to delve right into the most obscure reasons for a failure, start with the simple stuff.

For instance, questions like “Is the unit plugged in?”, or “Is the power on?” are perfect questions to start with. While it would be wonderful to believe that everyone you encounter has the common sense to check out these simple solutions, you’ll find that, unfortunately, the majority of the population isn’t that bright.

So, how about a real-world example? It’s 2am and you get paged that a router has gone unreachable. After notifying the proper people, you delve into the problem. Using the Occam’s Razor principle, what’s the first thing you should check? Well, for starters, let’s make sure the router really is unreachable. A simple ping should accomplish that. And just for good measure, ping something close to that router just to make sure you’re connected to the network.

Ok, so the router isn’t pingable, now what? Well, let’s look at the next easiest step, power. Since the router is in a remote location, this isn’t easy to check. However, you can check the uplink on the router. You should be able to get to the router just before the one that’s unreachable. Once there, check the interface that feeds your troubled router. Is it up or down? While you’re there, you can check for traffic and errors as well, but don’t focus on these yet; store them for later.

If the interface is down, then it’s quite possibly a physical line issue or, perhaps, a power problem. Just for good measure, I would suggest bouncing the interface to see if it’s something temporary. Sometimes, the interface will come back up and start running errors, indicating a physical line issue. What will often happen is that the interface comes back up and starts running errors, but allows limited traffic to get through. Once the error threshold is passed, the line goes back down. At this point, I’d call a technician to look at the physical line itself.

If the interface is up, try pinging the troubled router from the directly connected router. This process can help identify a routing issue in the network. Directly connected routes are preferred over learned routes unless specifically overridden, which isn’t likely. If the ping is successful, take note of the ping time. If it seems overly high, you may be looking at a traffic issue. Depending on the type of router, traffic may be processor switched and cause high CPU usage. This can be identified by a sluggish interface and high ping times. Note, though, that high ping times don’t always indicate this. Most routers set a very low priority for ICMP traffic destined for the CPU, deeming throughput more important.

Remember the traffic and error counts you looked at previously? Those come into play now. If the traffic on the interface is very high, notably higher than usual, then this is likely the cause of the problem. Or, rather, an effect of the actual cause, which may be a DoS attack or virus outbreak. DoS, or Denial of Service, attacks are targeted attacks against a specific IP or range of IPs. A side effect of these attacks is that interfaces between the attacker and victim are often overloaded.

There are a number of different DoS attacks out there, but often when traffic is the cause of the problem, you’ll notice that small packets are being used. One way to quickly identify this is to take the current bps on the interface, divide it by the packets per second, and then divide by 8 to get bytes per packet. Generally speaking, a normal interface averages between 1000 and 1500 bytes per packet. NOTE: This refers to traffic received from a remote source such as a web site. Outgoing traffic, to the website, has a much lower average packet size because those packets generally contain control information such as acknowledgements, ICMP, etc.
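
Here’s a quick back-of-the-envelope version of that calculation, using made-up counter values (Python purely for illustration):

bps = 95000000   # hypothetical input rate reported on the interface, in bits per second
pps = 150000     # hypothetical packets per second on the same interface

avg_bytes = bps / pps / 8
print(f"average packet size: {avg_bytes:.0f} bytes")   # roughly 79 bytes, a strong hint of a small-packet flood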

Once you’ve identified that there is a traffic issue, the next step is to identify where the traffic is sourced from, or destined to. Remember, the end goal here is to repair the problem so that normal operations can continue. Since you’re already aware of the overloaded interface, it’s easiest to concentrate your efforts there. Identifying the traffic source and destination is usually pretty easy, provided it’s not a distributed attack. On a Cisco router, you can try the “ip accounting” command. This command will show the source and destination for all output packets on an interface. Included is a count of the number of packets and the bytes used by those packets. Simply look for rapidly increasing source and destination pairs and you’ll likely find your culprit.

Another option is to use an access list. If the router can handle it, place an access list on the interface that passes all traffic, but logs each packet. Then you can watch the log and try to identify large sources of traffic. Refine the access list to block that traffic until you’ve halted the attack. Be careful, however, as many routers will processor switch traffic when a logging access list is applied. This may cause a spike in CPU usage, sometimes causing a loss of connectivity to the router. If ip accounting is available, use that instead.

Once you identify the source and/or target of the attack, craft an appropriate access list to block the traffic as far upstream as you can. If the DoS attack is distributed, then the most effective means to stop the attack is probably to remove the targeted routes from the routing table and allow it to be blocked at the edges. This will likely result in an outage for that specific customer, but with a distributed attack, that’s often the only solution. From there you can work with your upstream providers to track down the perpetrator of the attack and take it offline permanently.

The preceding seems a bit long when written down, but in reality, this is a 15-30 minute process. Experienced troubleshooters can identify and resolve these problems even quicker. The point, of course, is to identify the most likely causes in the quickest manner possible. Oftentimes, the simplest solution is the correct solution. Take the extra few seconds to check out the obvious before moving on to the more advanced. Often, you’ll resolve the problem quicker and sometimes wind up with a funny story as a bonus!

Please, troubleshoot responsibly.

Network Graphing

Friday, June 1st, 2007

Visual representations of data can provide additional insight into the inner workings of your network. Merely knowing that one of your main feeds is peaking at 80% utilization isn’t very helpful when you don’t know how long the peak is, at what time, and when it started.

There are a number of graphing solutions available. Some of these are extremely simplistic and don’t do much, while others are overly powerful and provide almost too much. I prefer using Cacti for my graphing needs.

Cacti is a web-based graphing solution built on top of RRDtool. RRDtool is a round-robin data logging and graphing tool developed by Tobias Oetiker of MRTG fame, MRTG being one of the original graphing systems.

Chock full of features, Cacti allows data collection from almost anywhere. It supports SNMP and script-based collection by default, but additional methods can easily be added. Graphs are fully configurable and can display just about any information you want. You can combine multiple sources on a single graph, or create multiple graphs for better resolution. Devices, once added, can be arranged into a variety of hierarchies allowing multiple views for various users. Security features allow the administrator to tailor the data shown to each user.

Cacti is a wonderful tool to have and is invaluable when it comes to tracking down problems with the network. The ability to graph anything that spits out data makes it incredibly useful. For instance, you can create graphs to show you the temperature of equipment, utilization of CPUs, even the number of emails being sent per minute! The possibilities are seemingly endless.
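
As a rough example of how simple script-based collection can be, here’s a hypothetical data-input script for the emails-per-minute case. The log path and the match string are assumptions for a Postfix-style mail log; Cacti’s Script/Command data sources just read whatever the script prints to stdout, either a single number or space-separated field:value pairs.

#!/usr/bin/env python3
# Hypothetical Cacti data-input script: count delivered messages in a mail log.
LOG = "/var/log/maillog"   # assumed log location

sent = 0
with open(LOG, errors="ignore") as fh:
    for line in fh:
        if "status=sent" in line:   # assumed Postfix-style delivery marker
            sent += 1

# Print a running total in Cacti's field:value format; graphed as a
# COUNTER-type data source, only the per-poll delta shows up on the graph.
print(f"emails_sent:{sent}")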

There is a slight learning curve, however. Initial setup is pretty simple, and adding devices is straightforward. The tough part is understanding how Cacti gathers data and relates it all together. There are some really good tutorials on their documentation site that can help you through this part.

Overall, I think Cacti is one of the best graphing tools out there. The graphs come out very professional looking, and the feature set is amazing. Definitely worth looking into.