Whose Problem Is It Anyway?

This week, Adobe released a security patch for their CS5 product line. Adobe releasing security patches isn’t really that surprising given their track record with vulnerable products, but the circumstances surrounding this one are: Adobe released the patch reluctantly.

Sometime in May, possibly earlier, Adobe was made aware of a fairly severe security vulnerability in their CS5 product line: a specially crafted image file was enough to compromise the victim’s computer. Obviously a flaw like that should be fixed ASAP, right? Well, Adobe didn’t really see it that way. Their initial response was that users who wanted a fixed version would have to pay to upgrade to the CS6 product line, in which the flaw was already patched. Eventually they decided to backport the fix to CS5.

Adobe’s initial response and their eventual capitulation lead to a broader discussion. Given any security problem, or even any bug in general, who is responsible for fixing it? The vendor, of course, right? Well… Maybe?

In a perfect world, there would be no bugs, security or otherwise. In a slightly less perfect world, all bugs would be resolved before a product is retired. But neither world exists, and bugs persist. So, given that, whose problem is it anyway?

Vendors offer a lot of justifications for when they’ll patch and how long they’ll support something, and, of course, plenty of excuses. It’s not an easy problem for vendors, though, and some put a lot of thought into their policies. They don’t always get those policies right, and there’s never a way to make everyone happy.

Patching generally follows a product lifecycle. While the product is supported, patching happens as a normal course of business. When a product is retired, some companies put together a support plan. For instance, when Cisco announces that a product has reached End-of-Life, they lay out a multi-year support plan. Typically this involves regular software maintenance for a year, security releases for two to three years, and then hardware maintenance for the remainder. This gives businesses ample time to find a suitable replacement.

Unfortunately, not all vendors act responsibly and often customers are left high and dry when a product is suddenly obsoleted. Depending on the vendor, this sometimes leads to discussions about the possibility of legislation forcing vendors to support products, or to at least address security vulnerabilities. If something like this were to pass, where does it end? Are vendors forced to support products forever? Should they only have to fix severe security problems? And what constitutes a severe security problem?

There are a multitude of reasons that bugs, security or otherwise, are not dealt with. Some are justifiable, others are not. Working in networking, the primary excuse I’ve heard from hardware vendors over the years is that the management interface of their product is not intended to be on a public network where it can be attacked, or that the management interface should be put behind a firewall where it can’t be attacked. These excuses are garbage, of course, but some vendors just keep giving them. And, unfortunately, you’re not always in a position to drop a vendor and move elsewhere. So we do what we can to secure the systems ourselves and move on.
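Part of “doing what we can” is simply verifying that those management interfaces really aren’t reachable from networks you don’t trust. Here’s a minimal sketch of that kind of check in Python; the address and port list are hypothetical examples, not anything from a specific vendor, and a real audit would use a proper scanner rather than a quick TCP probe.

```python
# Minimal sketch: check whether a device's management ports answer TCP
# connections from this network segment. Host and ports below are
# hypothetical examples (RFC 5737 documentation address, common mgmt ports).
import socket

MGMT_HOST = "192.0.2.10"
MGMT_PORTS = [22, 23, 80, 443]  # SSH, Telnet, HTTP, HTTPS


def exposed_ports(host, ports, timeout=2.0):
    """Return the subset of ports that accept a TCP connection from here."""
    reachable = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            # Closed, filtered, or unreachable -- which is what we want.
            pass
    return reachable


if __name__ == "__main__":
    open_ports = exposed_ports(MGMT_HOST, MGMT_PORTS)
    if open_ports:
        print(f"WARNING: management ports reachable from here: {open_ports}")
    else:
        print("No management ports reachable from this segment.")
```

Run something like this from the untrusted side of the firewall; if anything comes back reachable, the vendor’s “just put it behind a firewall” excuse isn’t even being satisfied.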

And sometimes the problem isn’t the vendor, but the customer. How long has it been since Microsoft phased out older versions of its Windows operating system? Windows XP is relatively recent, but it’s been a number of years since Windows 2000 was phased out. Or how about Windows 98, 95, or even Windows NT? Customers still have these deployed in their networks. Hell, I know of at least one OS/2 Warp system that’s still deployed in a Telco Central Office!

There is a basis for some regulation, however, and it may affect vendors. When the security of a particular product can significantly impact the public, it can be argued that regulation is necessary. The poster children for this argument are SCADA systems, which seem to be perpetually riddled with security holes, mostly due to outdated operating systems.

SCADA systems are what typically control things like the electrical grid and nuclear power plants. For obvious reasons, security flaws in these systems are deadly serious. I often hear that these systems should be air gapped from the Internet, but the lure of easy access and control often pushes operators to ignore this advice.

So should SCADA systems be regulated? The regulations already in place for the industries that use them clearly aren’t working, so what makes us think more regulation will help? And if we regulate and force vendors to provide patches for security problems, what makes us think those industries will install them?

This is a complex problem and there are no easy answers. The best we can hope for is a competent administrator who knows how to handle security and deal with threats properly. Until then, let’s hope for incompetent criminals.