Why Is Our Cybersecurity So Insecure?

This is the second installment in a series on whether and how to hold software makers financially liable for the insecurity of their products. Part I gave an overview of the problem of liability for insecure code and the difficulty of holding software makers responsible for cybersecurity. 

It’s true: perfectly secure software is a pipe dream. Experts agree that we cannot make software of “nontrivial size and complexity” free of vulnerabilities. Moreover, consumers want feature-rich, powerful software, and they want it quickly; that demand tends to produce huge, bulky, poorly written software, released early and without adequate care for security.

Those who oppose holding software makers legally accountable for the security of their products often trot out this reality as a kind of trump card. But their conclusion—that it is unreasonable to hold software makers accountable for the quality of their code—doesn’t follow from it. Indeed, all of the evidence suggests that the industry knows how to make software more secure, that is, to minimize security risks in software design and deployment—and that it needs a legal kick in the pants if it is to consistently put that knowledge into practice.

Generally speaking, the term “software security” refers to designing, building, testing and deploying software so as to reduce vulnerabilities and to ensure the software’s proper function when under malicious attack. Vulnerabilities are a special subset of software defects that an attacker can leverage against the software and its supporting systems. A coding error that does not offer a hacker an opportunity to attack a security boundary is not a vulnerability; it may affect software reliability or performance, but it does not compromise the software’s security. Vulnerabilities can be introduced at any phase of the software development cycle—indeed, in the case of certain design flaws, before the coding even begins. Loosely speaking, vulnerabilities can be divided into two categories: implementation-level vulnerabilities (bugs) and design-level vulnerabilities (flaws). Relative to bugs, which tend to be localized coding errors that may be detected by automated testing tools, design flaws exist at a deeper architectural level and can be more difficult to find and fix.
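
To make the distinction concrete, here is a minimal sketch in C; the session-store scenario and every name in it are hypothetical, invented for illustration rather than drawn from any real product. The off-by-one bounds check is an implementation-level bug that an automated scanner can flag and a one-character patch can fix; the fact that any caller can read any session’s token is a design-level flaw that no such local patch reaches.

```c
#include <stdio.h>

#define MAX_SESSIONS 64

/* Hypothetical session store, for illustration only. */
static char session_tokens[MAX_SESSIONS][33];

/* Implementation-level bug: the bounds check uses "<=" instead of "<",
   so a request for id == MAX_SESSIONS reads one row past the end of
   the array. It is a localized, off-by-one coding error that a code
   review or an automated testing tool can flag and a one-character
   patch can fix. */
const char *get_token(int id) {
    if (id >= 0 && id <= MAX_SESSIONS)
        return session_tokens[id];
    return NULL;
}

/* Design-level flaw: any caller can read any session's token simply by
   supplying its index; nothing in the design ties a session to the user
   who owns it. No local patch to get_token() repairs that; the
   authorization model itself has to be rethought. */
int main(void) {
    const char *t = get_token(3);
    printf("token %s\n", t ? "found" : "missing");
    return 0;
}
```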

The case that software insecurity is inherent, to some degree, is a strong one. Security is not an add-on or a plug-in that software makers can simply tack onto the end of the development cycle. According to the late Watts Humphrey, known as the “Father of Software Quality,” “Every developer must view quality as a personal responsibility and strive to make every product element defect free.”

James Whittaker, former Google engineering director, offers similar thoughts in his book detailing the process by which the world’s search giant tests software: “Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.” Responsible software, in other words, is software with security baked into every aspect of its development. 

As a general matter, we expect technology to improve in quality over time, as measured in functionality and safety. But the quality of software, computer engineers argue, has followed no such trajectory. This is not wholly attributable to software vendors’ lack of legal liability; the penetrate-and-patch cycle that is standard in the software industry today is a security nightmare born of characteristics particular to software.

Gary McGraw, among the best-known authorities in the field, attributes software’s growing security problems to what he terms the “trinity of trouble”: connectivity, extensibility and complexity. To this list, let’s add a fourth commonly cited concern: software “monoculture.” Together, these factors help explain, at a systemic level, what makes software security so difficult to measure and so difficult to achieve. Let’s consider each briefly.

First, ever-increasing connectivity between computers makes all systems connected to the Internet increasingly vulnerable to software-based attacks. McGraw notes that this basic problem is exacerbated by the fact that old platforms not originally designed for the Internet are being pushed online, despite not supporting security protocols or standard plug-ins for authentication and authorization.

Second, an extensible system is one that supports updates and extensions, allowing its functionality to evolve incrementally. Web browsers, for example, support plug-ins that enable users to handle new document types. Extensibility is attractive as a way to increase functionality, but it also makes it difficult to keep a constantly adapting system free of software vulnerabilities.

Third, a 2009 report commissioned to identify and address the risks associated with the increasing complexity of NASA flight software defines complexity simply: “how hard something is to understand or verify” at all levels, from software design to software development to software testing. Software systems are growing exponentially in size and complexity, which makes vulnerabilities unavoidable. Carnegie Mellon University’s CyLab Sustainable Computing Consortium estimates that commercial software contains 20 to 30 bugs for every 1,000 lines of code—and Windows XP contains at least 40 million lines of code.
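
Run the back-of-the-envelope arithmetic and the scale of the problem comes into focus: at 20 to 30 defects per thousand lines, a code base the size of Windows XP would be expected to harbor somewhere between 800,000 and 1.2 million defects.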

The problems do not end with the mind-boggling math. The complexity of individual software systems also creates the potential for problematic interactions among many different applications. For example, earlier this year, Microsoft was forced to issue instructions for uninstalling its latest security update when unexpected interactions between the update and certain third-party software rendered some users’ computers unbootable.

Finally, there are the dangers associated with monoculture—the notion that system uniformity predisposes users to catastrophic attacks. As examples of the dangers of monoculture, security experts point to the blight that devastated genetically identical potato plants during the Irish Potato Famine and the boll weevil infestation that destroyed cotton crops in the American South in the early twentieth century. As noted by Dan Geer in a famous paper that, as lore has it, got him fired from a Microsoft affiliate: “The security situation is deteriorating, and that deterioration compounds when nearly all computers in the hands of end users rely on a single operating system subject to the same vulnerabilities the world over.”

In other words, to a great extent, software is different from other goods and services. And the notion of perfectly secure software almost certainly is a white whale.

But as far as the liability discussion is concerned, it’s also a bit of a red herring. Software does not need to be flawless to be much safer than the code churned out today.

For starters, software vulnerabilities are not all created equal, and they do not all pose the same risk. Based on data from 40 million security scans, the cloud security company Qualys found that just 10 percent of vulnerabilities are responsible for 90 percent of all cybersecurity exposures. Not only are some vulnerabilities doing an outsize share of the work in creating cybersecurity problems, but many are, from a technical standpoint, inexcusable. Only systemic dysfunction can explain why the buffer overflow, a low-level and entirely preventable bug, remains among the most pervasive and critical software vulnerabilities out there.
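
To see just how little it would take to avoid the classic case, consider the following minimal C sketch; the greeting functions are hypothetical stand-ins, not code from any particular product. The unsafe version overflows a 16-byte buffer because it never checks the input against the destination’s size; the safe version closes the hole with a single bounds-aware library call.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LEN 16

/* Vulnerable: strcpy() copies bytes until it hits a terminating NUL,
   with no regard for the 16-byte destination. Any input of 16 or more
   characters writes past the end of 'name': the classic buffer
   overflow. */
void greet_unsafe(const char *input) {
    char name[NAME_LEN];
    strcpy(name, input);
    printf("Hello, %s\n", name);
}

/* Prevented: snprintf() is told how large the destination is and
   truncates anything that will not fit. Closing the hole costs one
   extra argument, which is what makes shipping the unsafe version so
   hard to excuse. */
void greet_safe(const char *input) {
    char name[NAME_LEN];
    snprintf(name, sizeof name, "%s", input);
    printf("Hello, %s\n", name);
}

int main(void) {
    greet_safe("a perfectly reasonable but rather long visitor name");
    return 0;
}
```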

That software security is an unusually difficult pursuit, and that the current incentive structure prevents software makers from consistently putting what they know about security into practice, together weigh in favor of holding software makers accountable for at least some vulnerabilities. The law regularly imposes discipline on other industries that would otherwise be lured toward practices both easy and lucrative. An intelligently designed liability regime should be understood as a way to protect software from itself.

Ironically, software is most insecure where the stakes are highest. For all the progress that traditional software providers have made in creating more secure applications, experts say that embedded device manufacturers, responsible for producing everything from medical devices to industrial control systems, are years behind in secure system development.

The problem is not that the industry has proven unable to develop a disciplined approach to designing secure software. The problem is that this discipline is too often optional.

For example, commercial aircraft are legally required to meet rigorous software safety requirements established by the avionics industry and outlined in a certification document known as DO-178C. Yet not all life-critical systems—indeed, not even all aircraft—are required to comply with such baselines. Software for unmanned aerial vehicles (UAVs) need not meet the DO-178C standard. The discrepancy is hard to justify. As Robert Dewar, president and CEO of the commercial software company AdaCore, put it in an interview last year: “All engineers need to adopt the ‘failure-is-not-an-option’ attitude that is necessary for producing reliable, certified software. UAVs require at least as much care as commercial avionics applications.” History says he has a point. In 2010, a software glitch caused U.S. Navy operators to briefly lose control of a drone, which wandered into restricted Washington airspace before they were able to regain control.

Similarly, some critical infrastructure sectors must meet mandatory cybersecurity standards as defined by federal law, or risk civil monetary penalties. But other sectors, not legally bound, are left to drown in voluntary cybersecurity guidance. In 2011, the U.S. Government Accountability Office analyzed the extent to which seven critical infrastructure sectors issued and adhered to such guidance. That report was appropriately titled: “Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use.”

In 1992, Ward Cunningham, the programmer who gave the world its first wiki, introduced what would become popularly known as "technical debt" to describe the long-term costs of cutting corners when developing code. He wrote: “Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite. Objects make the cost of this transaction tolerable. The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt.”

In 2010, Gartner, the Connecticut-based information technology research firm, projected that technical debt worldwide could hit $1 trillion by 2015. That number is all the more disturbing when you consider the fact that technical debt is unlike financial debt in one crucial sense: private companies are not the ones bearing the costs and risks associated with their choices.

You are.

The goal of a new cyber liability regime should be to change that—to create an incentive to lower technical debt and thereby add discipline to a confusing, inconsistent and fundamentally irrational default regime, under which software makers call the shots and users pay for them.
