Customer Dis-Service

In general, I’m a pretty loyal person. Especially when it comes to material things. I typically find a vendor I like and stick with them. Sure, if something new and flashy comes along, I’ll take a look, but unless there’s a compelling reason to change, I’ll stick with what I have.

But sometimes a change is forced upon me. Take, for instance, this last week. I’ve been a loyal Verizon customer for … wow, about 15 years or so. Not sure I realized it had been that long. Regardless, I’ve been using Verizon’s services for a long time. I’ve been relatively happy with them, no major complaints about services being down or getting the runaround on the phone. In fact, my major gripe with them had always been their online presence which seemed to change from month to month. I’ve had repeated problems with trying to pay bills, see my services, etc. But at the end of the day, I’ve always been able to pay the bill and move on. Since that’s really the only thing I used their online service for, I was content to leave well enough alone.

In more recent months, we’ve been noticing that the 3M DSL service we had was starting to fall a bit short. Not Verizon’s fault at all, but rather the fault of increased strain on the system at our house. Apparently 3M isn’t nearly enough bandwidth to satisfy our online hunger. That, coupled with the price we were paying, had me looking around for other services. Verizon still doesn’t offer anything faster than 3M in the area and, unfortunately, the only other service in the area is from a company that I’d rather not do business with if I could avoid it.

In the end, I thought perhaps I could make some slight changes and at least reduce the monthly bill by a little until we determined a viable solution. I was considering adding a second DSL line, connected to a second wireless router, to relieve the tension a bit. This would allow me to avoid that other company and provide the bandwidth we needed. My wife and I could enjoy our own private upstream and place the rest of the house on the other line.

Ok, I thought, let’s dig into this a bit. First things first, I decided to get rid of the home phone, or at least transfer it to a cheaper solution. My cell provider offered a $10/month plan for home phones. Simple process: port the number over, install this little box in the house, and poof. Instant savings. Best part, that savings would be just about enough to get that second DSL line.

Being cautious, and not wanting to end up without a DSL connection, I contacted Verizon. Having worked for a telco in the past, I knew that some telcos required that you have a home phone line in order to have DSL service. This wasn’t a universal truth, however, and it was easy enough to verify. The first call to Verizon went a little sideways, though. I ended up in an automated system. Sure, everyone uses these automated systems nowadays, but I thought this one was particularly condescending. They added additional sound effects to the prompts so that when you answered a question, the automated voice would acknowledge your request and then type it in. TYPE IT IN. I don’t know why, but this drove me absolutely crazy. Knowing that I was talking to a recorded voice and then having that recorded voice playing sounds like they were typing on a keyboard? Infuriating. And, on top of it, I ended up in some ridiculous loop where I couldn’t get an operator unless I explicitly stated why I wanted an operator, but the automated system apparently couldn’t understand my request.

Ok, time out, walk away, try again later. The second time around, I lied. I ended up in sales, so it seems to have worked. I explained to the lady on the phone what I was looking for. I wanted to cancel my home phone and just keep the DSL. I also wanted to verify that I was not under contract so I wouldn’t end up with some crazy early termination fee. She explained that this was perfectly acceptable and that I could make these changes whenever I wanted. I verified again that I could keep the DSL without issue. She agreed, no problem.

Excellent! Off I went to the cell carrier, purchased (free with a contract) the new home phone box, and had them port the number. The representative cautioned that he saw DSL service listed when he was porting and suggested I contact Verizon to verify that the DSL service would be ok.

I called Verizon again to verify everything would work as intended. I explained what I had done, asked when the port would go through, and stressed that the DSL service was staying. The representative verified the port date and said that the DSL service would be fine.

You can guess where this is going, can’t you? On the day of the port, the phone line switched as expected. The new home phone worked perfectly and I made the necessary changes to the home wiring to ensure that the DSL connection was isolated from the rest of the wiring. DSL was still up, the phone had ported, everything was great. Until the next morning.

I woke up the following morning and started my normal routine. Get dressed, go exercise, etc. Except that on the way to exercise, I noticed that the router light was blinking. Odd. I wondered what was going on. Perhaps something had knocked the system offline overnight? The DSL light on the modem was still on, so I had a connection to the DSLAM. No problem, reboot the router and we’ll be fine. So, I rebooted and walked away. After a few minutes I checked the system and noticed that I was still not able to get online. I walked through a mental checklist and decided that the username and password for the PPPoE connection must be failing. Time to call Verizon and see what was wrong.

I contacted Verizon and first spoke to a sales rep who informed me that my services had been cancelled per my request. Wonderful. All that work and they screwed it up anyway. I explained what I had done and she took a deeper look into the account. Turns out the account was “being migrated” and she apologized for the mixup. Since I was no longer bundled, the DSL account had to be migrated. I talked with her some more about it and she decided to send me to technical support to verify everything was ok. Off I went to technical support, fully expecting them to ask me to reset my DSL modem. No such luck, however; the technical support rep explained that I had no DSL service.

And back to sales I went. I explained, AGAIN, what was going on. The representative confirmed my story, verified that the account was being migrated, and asked me to check the service again in a few hours. All told, I spent roughly an hour on the phone with Verizon and missed out on my morning exercise.

After rushing through the remainder of my morning routine and explaining to my wife why the Internet wasn’t working, I left for work. My wife checked in a few hours later to let me know that, no, we still did not have an Internet connection. So I called Verizon again. Again I was told I had no service and that I had cancelled it myself. Again I explained the problem and what I had done. And this time, the representative explained that they don’t offer unbundled DSL service anymore; they haven’t had that service in about a year. She went on to offer me a bundled package with a phone line and explained that I don’t have to use the phone line, I just have to pay for it.

So all of the careful planning I had done was for naught. In an effort to make sure this didn’t happen to anyone else, the rep checked back through the notes on my account to see who had told me I could keep the DSL service. According to the notes, however, I had never called about any such thing. Apparently I had called to complain about unsolicited phone calls, and they had referred me to their fraud and abuse office and explained the magical phone code I could enter to block calls. Ugh! She then went on to document every aspect of my actual problem, again so someone else wouldn’t end up in the same situation.

This is the sort of situation that will, very rapidly, cause me to look elsewhere for service. And that’s exactly what I did. I’ve since cut all ties with Verizon and moved on to a different Internet service provider. I’m not happy with having to deal with this provider, but it’s the only alternative at the moment. Assuming I don’t have any major problems with the service, I’ll probably continue with them for a while. Of course, if I run into problems here, the decision becomes more difficult. A “lesser of two evils” situation, if you will. But for now, I’ll deal with what comes up.

Protecting Sources in the 21st Century

Trust is key in many situations. This can be especially true for journalists interested in reporting on sensitive matters. If journalists couldn’t be trusted to protect the identity of their confidential sources, many news items we take for granted would never have been written, or perhaps they wouldn’t have included some of the crucial information they revealed. For instance, much of the critical information about the Watergate scandal was given to reporters by a confidential source who went by the name of Deep Throat.

Until recently, reporters made contact with their sources via anonymous phone calls (often from pay phones), secret meetings, and dead drops. The identity of sources could be kept secret fairly easily, especially if the meetings were carefully conducted in such a manner as to leave little or no trail for anyone to follow. This often meant avoiding the use of phones entirely, as they were traceable. Additionally, many journalists were willing to risk jail time rather than reveal their sources.

With the advent of the Internet, it became possible to contact sources, both local and distant, quickly and conveniently via email or some form of instant messaging. The ability to reach out to a source and get an almost immediate answer means journalists can quickly deal with rapidly evolving stories. The anonymity of the Internet means that sources stay anonymous. It’s a win-win situation.

Or is it…

I was listening to an On The Media podcast recently and they featured a story about how reporters using the Internet are, in some cases, exposing their contacts without meaning to, often without even knowing it. You can listen to the story below or read the transcript.

Before the Internet, phone conversations were sometimes considered an acceptable risk for contacting sources. After all, tracing a phone call generally took a court order to accomplish. The Internet, however, is a completely different beast. Depending on the communications software used, tracing the owner of an account can be accomplished very easily by just about anyone. Software such as Netglub or Maltego can be used to quickly gather intel on someone, starting with something as small and simple as a single email address.

Email accounts are generally accessible from anywhere in the world, protected by only a username and password. Brute forcing software can be used to crack a password in a relatively short time allowing someone direct access to the mail stored in the account. And if the mail is sent in clear text, someone trying to identify the source can easily read email sent between the reporter and their source without anyone being the wiser.
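
To make the “clear text” point concrete: anyone sitting on the network path can read unencrypted mail with nothing more exotic than a packet capture tool. A minimal sketch (the interface name is just an example):

[user@localhost ~]$ tcpdump -A -s0 -i eth0 port 25

The -A flag prints each packet as ASCII, so the contents of unencrypted SMTP sessions simply scroll by in plain view.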

Other accounts can be similarly attacked. The end result of identifying the source can be mere embarrassment, or perhaps the source losing their job. Or, as is often the case when foreign news sources are involved, the source can be hunted down and killed.

For a reporter, protecting a source has always been important, but in some cases it’s a matter of life and death. In the past few years, unrest overseas in places such as Iran, Egypt, and Syria has shown that secure communication methods are necessary to help save the lives of those fighting for change. Governments have been ruthless in hunting down and eliminating those who would oppose them. Secure communication methods have become lifelines for opposition forces. Likewise, reporters and anyone else who interacts with these sorts of contacts should be using whatever security measures they can to ensure that their sources are protected.
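
As a concrete example, even plain old email can be locked down with tools that ship with most Linux distributions. A minimal sketch using GnuPG, assuming the source’s public key has already been imported (the file name and address here are made up):

[user@localhost ~]$ gpg --encrypt --sign --armor --recipient source@example.org notes.txt

The resulting notes.txt.asc can only be read by the holder of that key, so even if the mailbox on either end is compromised, the message itself stays sealed.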

The Internet Arms Race

I’m here in sunny Philadelphia, attending NANOG46, a conference for network operators. The conference, thus far, has been excellent, with some great information being disseminated. One of the talks was by a long-time Internet pioneer, Paul Vixie. Vixie has had his hands in a lot of different projects, ranging from being the primary author of BIND for many years, to starting MAPS way back in 1996, to more recent involvement with the Conficker Working Group.

Vixie’s talk was titled “Internet Superbugs and The Art of War,” and was about the struggle between Internet operators and the “criminal” element that uses the Internet for spam, DDoS attacks, and the like. The crux of the talk was that it costs the bad guys next to nothing to continually evolve their attacks and use the network for their nefarious activities. On the flip side, however, it costs the network operators a great deal of time and money to try to stop these attacks.

Years ago, attacks were generally sourced from a single location and it was relatively easy to mitigate them. In addition, tracking down the source of the attack was simple enough, so legal action could be taken. At the very least, the network provider upstream from the attacker could disable the account and stop the attack.

Fast forward to today and we have botnets that are used for sending spam, performing DDoS attacks, and causing other sorts of havoc. It becomes next to impossible to mitigate a DDoS attack because it can be sourced from hundreds or thousands of machines simultaneously. This costs the bad guys nothing to deploy because users are largely ignorant and don’t understand the importance of patching and securing their networks, which leaves millions of exploitable machines on the Internet. The bad guys write viruses, worms, trojans, and the like that infect these machines and turn them into zombies for their botnets.

Fighting these attacks becomes an exercise in futility. We use blacklists to block traffic from places we know are sending spam, we use anti-virus software to prevent infection of our machines, and more. When Conficker was detected and analyzed, researchers realized that this infection was a new evolution of attack. Conficker used cryptographic signatures to verify updates, pseudo-random lists of websites for updates, and more. The website lists are an excellent example of the costs paid by the good guys vs the bad guys.

The first generation of Conficker used a generated list of websites for updates. This list was 250 sites per day, making it difficult, but not impossible, to mitigate. So, the people fighting this outbreak started buying up these domains in an attempt to prevent Conficker from updating. The authors of Conficker responded by upping this list to 50,000 per day, making it nearly impossible to buy them all up. Fortunately, the people working to prevent the outbreak were able to work with ICANN and the various ccTLD companies to monitor and block purchases of these sites. Sites that already existed were thoroughly checked to ensure they weren’t hosting the new version of Conficker.
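
Some rough arithmetic shows why simply buying up the domains stopped being an option. Assuming a ballpark figure of $10 per registration (actual prices vary by TLD):

[user@localhost ~]$ echo $(( 250 * 10 )) $(( 50000 * 10 ))
2500 500000

Roughly $2,500 a day for the first generation versus $500,000 a day after the change, and that’s before counting the staff time needed to coordinate it all.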

Vixie brought up an interesting point about all of this activity, though. The authors of Conficker made a relatively simple change to Conficker to make it use 50,000 domains. The people fighting Conficker spent many hours and days, not to mention a significant amount of money, to mitigate this. Smaller ccTLD companies that don’t have 24×7 abuse staff are unable to cope. They don’t have the budget to be able to do all of this work for free. As the workload climbs, they’re more likely to turn a blind eye.

All of this, in turn, means that our current mode of reacting to these attacks and mitigating them does not scale. It merely results in lost revenue and frustration. Additionally, creating lists of places to avoid, generating lists of bad content, etc. will never be able to scale over time. There is a breaking point, somewhere, and at that point we have no recourse unless we change our way of thinking.

Along the same line of thought, I came across a pretty decent quote today, originally posted by Don Franke of (ISC)²:

“PC security is no longer about a virus that trashes your hard drive. It’s about botnets made up of millions of unpatched computers that attack banks, infrastructures, governments. Bandwidth caps will contribute to this unless the thinking of Internet providers and OS vendors change. Because we are all inter-connected now.”

If you read the original post, it explains how moving to bandwidth caps will only exacerbate the security problem: users will no longer want to spend their limited bandwidth downloading updates, preferring to save it for the things they’re actually interested in.

Overall, it was a very interesting talk and a very different way of thinking. There is no definitive answer as to what direction we need to go in to resolve this, but it’s definitely something that needs to be investigated.


if (blocked($content))

And the fight rages on… Net Neutrality, to block or not to block.

Senator Byron Dorgan, a Democrat from North Dakota, is introducing new legislation to prevent service providers from blocking Internet content. Dorgan is not new to the arena, having put forth legislation in previous years dealing with the same thing. This time, however, he may be able to push it through.

So what’s different this time? Well, for one, we have a new president. And this new president has already stated that Net Neutrality is high on his list of technology related actions. So, at the very least, it appears that Dorgan has the president in his corner.

Of course, some service providers are not happy about this. Comcast has gone on record with the following:

“We don’t believe legislation is necessary in this area and could harm innovation and investments,” said Sena Fitzmaurice, Comcast’s senior director of government affairs and corporate communications, in a phone interview. “We have consistently said that all our customers have access to content available on the Internet.”

And she’s right! Well… sort of. Comcast customers do have access to content. Or, rather, they do now. I do recall a recent period of time when Comcast was “secretly” resetting BitTorrent connections, and they have talked about both shaping and capping customers. So, in the end, you may get all of the content, just not all at the same level of service.

But I think, overall, Dorgan has an uphill battle. Net Neutrality is a concept not unlike free speech. It’s a great concept, but sometimes its implementation is questionable. For instance, if we look at pure Net Neutrality, then providers are required to allow all content without any shaping or blocking. Even bandwidth caps can be seen to fall under the umbrella of Net Neutrality. As a result, customers can theoretically use 100% of their allotted bandwidth at all times. This sounds great, until you realize that bandwidth, in some instances, and for perfectly legitimate reasons, is limited.

Take rural areas, for instance, especially in the Midwest where homes can be miles away from each other. It can be cost-prohibitive for a service provider to run lines out to remote areas. And if they do, it’s generally done using line extender technology that can carry decent voice signals over copper, but not high-speed data. One or two customer connections don’t justify the cost of the equipment. So, those customers are relegated to slower service, and may end up on devices with high customer-to-bandwidth ratios. In those cases, a single customer can cause severe degradation of service for all the others, merely by using a lot of bandwidth.

On the flip side, however, allowing service providers to block and throttle according to their own whims can result in anti-competitive behavior. Take, for instance, IP telephony. There are a number of IP telephony providers out there that let you place calls over a local Internet connection; Skype and Vonage are two examples. Neither of these providers has any control over the local network, and thus their service is dependent on the local service provider. But let’s say the local provider wants to offer VoIP service. What’s to prevent that local provider from throttling or outright blocking Skype and Vonage? And thus we have a problem. Of course, you can fall back on the “let the market decide” argument. The problem with this is that, often, there are only one or two local providers, usually one telco and one cable company. The telco may throttle and block voice traffic, while the cable provider does the same for video. Thus, the only choice is to decide which we would rather have blocked. Besides, changing local providers can be difficult, as email addresses, phone numbers, etc. are usually tied to the existing provider. And on top of that, most people are just too lazy to change; they would rather complain.

My personal belief is that the content must be available and not throttled. However, I do believe the local provider should have some control over the network. So, for instance, if one type of traffic is eating up the majority of the bandwidth on the network, the provider should be able to throttle that traffic to some degree. However, they must make such throttling public, and they must throttle ALL of that type of traffic. Going back to the IP Telephony example, if they want to throttle Skype and Vonage, they need to throttle their own local VoIP too.
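
For what it’s worth, the mechanics of that kind of throttling are not exotic. On a Linux-based edge box it might look something like the sketch below, which caps SIP signalling (UDP port 5060) at 512 kbit/s; the interface name and rates are invented for illustration:

# put outbound traffic under an HTB queueing discipline; unclassified traffic lands in class 1:20
tc qdisc add dev eth0 root handle 1: htb default 20
# the default class gets the full line rate
tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit
# a throttled class capped at 512 kbit/s
tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit ceil 512kbit
# steer UDP packets destined for port 5060 (SIP) into the throttled class
tc filter add dev eth0 parent 1: protocol ip u32 match ip protocol 17 0xff \
   match ip dport 5060 0xffff flowid 1:10

A port-based filter like this doesn’t care whose VoIP service the packets belong to, which is exactly the point: if a class of traffic gets throttled, it should be throttled evenly.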

It’s a slippery slope and I’m not sure there is a perfect answer. Perhaps this new legislation will be a step in the right direction. Only time will tell.

Bandwidth in the 21st Century

As the Internet has evolved, the one constant has been the typical Internet user.  Typical users used the Internet to browse websites, a relatively low-bandwidth activity.  Even as the capabilities of the average website evolved, bandwidth usage remained relatively low, increasing at a slow rate.

In my own experience, a typical Internet user, accessing the Internet via DSL or cable, only uses a very small portion of the available bandwidth.  Bandwidth is only consumed for the few moments it takes to load a web page, and then usage falls to zero.  The only real difference was the online gamer.  Online gamers use a consistent amount of bandwidth for long periods of time, but the total bandwidth used at any given moment is still relatively low, much lower than the available bandwidth.

Times are changing, however.  In the past few years, peer-to-peer applications such as Napster, BitTorrent, Kazaa, and others have become more mainstream, seeing widespread usage across the Internet.  Peer-to-peer applications are used to distribute files, both legal and illegal, amongst users across the Internet.  Files range in size from small music files to large video files.  Modern applications such as video games and even operating systems have incorporated peer-to-peer technology to facilitate rapid deployment of software patches and updates.

Voice and video applications are also becoming more mainstream.  Services such as Joost, Veoh, and YouTube allow video streaming over the Internet to the user’s PC.  Skype allows the user to make phone calls via their computer for little or no cost.  Each of these applications uses bandwidth at a constant rate, a usage pattern vastly different from that of web browsing.

Hardware devices such as the Xbox 360, Apple TV, and others are helping to bring streaming Internet video to regular televisions within the home.  The average user is starting to take advantage of these capabilities, consuming larger amounts of bandwidth for extended periods of time.

The end result of all of this is increased bandwidth consumption within the provider network.  Unfortunately, most providers have based their current network architectures on outdated over-subscription models, expecting users to continue their old web-browsing patterns.  As a result, many providers are scrambling to keep up with the increased bandwidth demand.  At the same time, they continue releasing new access packages claiming faster and faster speeds.
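
To put a rough number on what over-subscription means here (the figures are invented for illustration), picture 1,000 subscribers who have each been sold a 6 Mbps package, all fed by a single 1 Gbps uplink:

[user@localhost ~]$ echo $(( 1000 * 6 )) Mbps sold vs 1000 Mbps available
6000 Mbps sold vs 1000 Mbps available

That’s a 6:1 over-subscription ratio.  It works fine while everyone browses in short bursts; it falls apart once a meaningful fraction of those users start streaming video all evening.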

Some providers are using questionable practices to ensure the health of their network.  For instance, Comcast is allegedly using packet sniffing techniques to identify BitTorrent traffic.  Once identified, they send a reset to the local BitTorrent client, effectively severing the connection and canceling any file transfers.  This has caught the attention of the FCC, which has released a statement saying it will step in if necessary.

Other providers, such as Time Warner, are looking into tiered pricing for Internet access.  Such plans would allow the provider to charge extra for users who exceed a pre-set limit.  In other words, Internet access becomes more than just the typical 3/6/9 Mbps speed tiers advertised today.  Instead, the high-speed access is offset by a total transfer limit.  Hopefully these limits will be both reasonable and clearly defined.  Ultimately, though, it becomes the responsibility of the user to avoid exceeding the limit, similar to exceeding the minutes on a cell phone plan.

Pre-set limits have problems as well, though.  For instance, Windows will check for updates at a regular interval, using Internet bandwidth to do so.  Granted, this is generally a small amount, but it adds up over time.  Another example is PPPoE and DHCP traffic.  Most DSL customers are configured using PPPoE for authentication.  PPPoE sends keep-alive packets to the BRAS to ensure that the connection stays up.  Depending on how the ISP calculates bandwidth usage, these packets will likely be included in the calculation, resulting in “lost” bandwidth.  Likewise, DHCP traffic, used mostly by cable subscribers, will send periodic requests to the DHCP server.  Again, this traffic will likely be included in any bandwidth calculations.
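
How much does that overhead actually amount to?  A back-of-the-envelope sketch, assuming a roughly 60-byte LCP echo exchanged every 30 seconds (one request plus one reply) over a 30-day month:

[user@localhost ~]$ echo $(( 60 * 2 * 2 * 60 * 24 * 30 )) bytes per month
10368000 bytes per month

Call it 10 MB a month.  That’s a rounding error next to any sane cap, but it still counts against the customer if the provider simply measures everything crossing the line.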

In the end, it seems that substantial changes to the ISP structure are coming, but it is unclear what those changes may be.  Tiered bandwidth usage may be making a comeback, though I suspect that consumers will fight against it.  Advances in transport technology make increasing bandwidth a simple matter of replacing aging hardware.  Of course, replacements cost money.  So, in the end, the cost may fall back on the consumer, whether they like it or not.

Whois Query Fun


I ran across a really neat way to use the whois tool in Linux the other day. There is apparently a lot more information available than I knew about! Read on for the details.

Basically, in addition to the normal owner/tech contact data that you can get from the standard whois servers, and the IP block assignment information you can get from ARIN, there’s also some additional IP information you can get from Team Cymru. Specifically, you can run queries against ‘whois.cymru.com’ to determine which ISP hosts or owns a given netblock. Check it out:

[user@localhost ~]$ whois -h whois.cymru.com 204.10.167.1

[Querying whois.cymru.com]
[whois.cymru.com]
AS | IP | AS Name

33241 | 204.10.167.1 | EMCS-AS – Endless Mountain Cyb

In addition to that, you can also query another server, ‘v4-peer.whois.cymru.com’, to check for upstream peers. This is extremely useful for determining how “connected” a provider is when you’re looking for new service, or for figuring out which providers you need to talk to for help in blocking a possible attack. Check it out:

[user@localhost ~]$ whois -h v4-peer.whois.cymru.com 204.10.167.1


[Querying v4-peer.whois.cymru.com]
[v4-peer.whois.cymru.com]
PEER_AS | IP | AS Name
3593 | 204.10.167.1 | EPIX – EPIX
3737 | 204.10.167.1 | PTD-AS – PenTeleData Inc.
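
If you have a whole list of addresses to look up, the Cymru server also supports a bulk mode over a raw port 43 connection. If I remember the format correctly, you wrap the list in begin/end markers (the second address below is just an example):

[user@localhost ~]$ printf 'begin\nverbose\n204.10.167.1\n4.2.2.2\nend\n' | nc whois.cymru.com 43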

Overall, I find this to be quite useful and I’ll definitely be using it! I hope you find it just as useful…


Firefox turns to the dark side?

I noticed an article over on Slashdot about a new attribute, ping, that Firefox handles. That is, the development version of Firefox. This isn’t your standard network ICMP echo request, but rather an HTTP request designed to track a user’s movements.


Ok, ok… stop screaming about privacy and security. I’ve thought about this a bit and I think Firefox is doing the right thing. The intention, as far as I’ve been able to tell, is to actually put more control into the users’ hands.


Let me explain how this “feature” works. There’s a small writeup on the Mozilla Blog that you can read as well. Tracking the browsing habits of a user is actually fairly harmless, at least in my opinion. The idea is to get feedback about what a user at that site likes to see. Do more people click on links to cartoons? Or perhaps to political information? It’s all about creating websites that people want to see.


So, Joe User goes to a website. There he sees a link for a new type of fusion rocket. He’s interested, so he clicks the link. Nowadays, tracking happens in one of two general ways. The easy one is that the “real” destination is wrapped up and appended to a link to a tracking site. These links usually have the real destination URL in plain text, but some sites obfuscate the URL so the user can’t bypass the tracking. The other method is to use JavaScript to change the URL after the user clicks on the link. The user never sees this happen, so, in a way, it’s even worse from a privacy perspective.


Either method directs the user to the tracking site, which records the request (and could, by the way, take advantage of any exploits that may exist) and then redirects the user to the real site. This takes time, and the user is generally left sitting there staring at a blank screen.


The ping attribute, on the other hand, is much nicer. The owner of the website uses the ping attribute to specify tracking URLs. When the user clicks a link, the browser goes directly to the intended site and then “pings” the tracking URLs in the background. This means there are no redirects and no trickery to capture the tracking info. It all happens in the background, and that’s where the privacy concerns come from. But, according to the spec, the browser is intended to have controls that let the user decide how pings are handled. A user can choose to disable them completely, enable them only for certain sites, and so on.
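
To make that concrete, here’s roughly what the two approaches look like in markup (the URLs are made up for illustration):

<!-- redirect-style tracking: the link actually points at the tracker,
     which records the click and then bounces the user to the real site -->
<a href="http://tracker.example.com/redirect?url=http%3A%2F%2Ffusion.example.org%2F">fusion rocket</a>

<!-- ping attribute: the link points straight at the real site, and the
     browser notifies the tracking URL in the background after the click -->
<a href="http://fusion.example.org/" ping="http://tracker.example.com/ping">fusion rocket</a>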


Currently, the development version of Firefox has the bare minimum. That is, it sees and obeys the ping attribute, but there are no fancy GUI interfaces to change settings. Of course, this is the DEVELOPMENT version! They have to start somewhere. It’s not like these new features get a complete GUI, implementation, etc the moment they’re added. This stuff takes time! And it’s enabled by default! Light the torches! Stone the oppressors!


Seriously though, I feel confident, based on their past record, that the creators of Firefox will get this right. Sure, it’s enabled by default. But so is JavaScript. The “correct” path is not always clear cut. If a feature is disabled by default, the chances of it ever getting enabled are slim. Most users just don’t know how! So, enabling it by default, and then popping up a message stating that the feature is active and explaining how to disable it, is the right thing to do. I’m actually interested in this feature because it will allow the web, at large, to move away from some of the trickery currently used to track users. It will bring this information out into the open instead of hiding it, and I think it will give the end user greater control over their own security and privacy.