This is the fourth installment in a series on whether and how to hold software makers financially liable for the insecurity of their products. Part I offered an overview of the problem of insecure code. Part II countered the notion that the technical challenges associated with minimizing software vulnerabilities weigh against the creation of any kind of maker-liability regime. Part III explained why leaving software security in the hands of the market is a very bad idea: as bad as the average software user's cyber hygiene.
Software license agreements are typically crammed with boilerplate language freeing the software provider from virtually all forms of liability while binding the commercial user to severe use restrictions. Unhappy with that? Too bad. Anyone who has ever installed software after “consenting” to the terms of the accompanying clickwrap, shrinkwrap, or browsewrap understands that the disgruntled user has exactly two choices when it comes to mass market license agreements: take it or leave it.
Software providers typically shunt all the risks associated with their products onto users through these license agreements, which the courts have generally treated as enforceable contracts. Think of contracts as a form of private law-making—the parties agree to impose on themselves obligations not otherwise dictated by the law. Frustrated theorists have looked outside the contract realm for ways to hold software providers accountable for the harms that users sustain as a result of insecure code. Consumer protection laws would seem to offer one narrow avenue for redress. Alternatively, users have filed suit for compensation on tort grounds, alleging negligence or product defect on the part of the software provider. Recognizing the continuing failure of contract law to provide software users meaningful remedies for harms caused by insecure code, as well as the challenges associated with bringing a successful tort claim under the current law, Professors Michael Rustad and Thomas Koenig have gone so far as to propose enacting a statute to establish an entirely new category of tort—“the tort of negligent enablement of cybercrime.”
Software license agreements have become such a bad joke for software users that it’s hard to believe that once upon a time, it looked like users might be able to leverage contract principles to their advantage. Specifically, commentators speculated that the contract-making principles embodied in the Uniform Commercial Code (UCC)—a set of model laws adopted at least partially by all the states—could be used to “pierc[e] the vendor's boilerplate” and create a legal framework that would equally benefit vendors and users, licensors and licensees.
As one believer declared back in 1988, “[B]ecause fairness and reasonableness are fundamental in the Code, application of the UCC would benefit parties unfamiliar with its provisions.” Another commentator predicted as early as 1985: “The courts have adequate means to protect software vendees from unconscionable contract provisions and the UCC makes requirements for effective disclaimers of warranty clear, so that the UCC will adequately protect software vendees and will not serve as a vehicle for manufacturers to limit their liability.”
Unfortunately, the UCC has served as just that: a liability-limitation vehicle. As one critic put it almost two decades ago, treating software licenses as sales governed by the UCC “creates a legal fiction which—contrary to the general intent of the UCC—places the purchaser at a severe disadvantage vis-à-vis the vendor.” This is because the UCC is built on freedom-to-contract principles that assume roughly equal bargaining power between the buyer and the seller. Since roughly the mid-1990s, the courts have accepted that operating assumption and allowed software providers to contract away responsibility for the deficiencies of their products.
The courts’ steadfast willingness to uphold the terms of the standard software license agreement has prompted Douglas Phillips, general counsel of Promontory Interfinancial Network, to dub it the “legislative license.” In other words, thanks to a long line of court decisions, “the law of the software license has come largely to be defined by each software license itself.”
UCC freedom-to-contract principles serve as the pretext by which courts are able to uphold the liability disclaimers and limitations on remedies found in virtually all commercial software license agreements. But this is not the end of the story. Other factors help explain why, in one high-profile case after another, software users alleging defects and security breaches get their cases thrown out of court. These factors are important insofar as they offer insight into how the courts understand code—and suggest that the grounds on which courts construe the rules of contract law in favor of software providers would similarly forestall user attempts to impose liability on providers through existing consumer protection laws or through claims sounding in tort. Indeed, software liability is unlikely to get off the ground without the help of legislation or regulation specifically designed to impose certain duties on software providers.
At least three factors other than the disclaimers and limitations crammed into the standard license agreement prevent users from seeking compensation when they are harmed by defective software. To start, much software is free. This is a problem under contract law because courts will not hold software providers liable for harms caused by products or services for which users did not offer some form of payment—or what lawyers call “consideration.” This is the basic rule underlying Bruce Schneier’s observation that “[f]ree software wouldn’t fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer.” Schneier is correct—as long as we’re talking about a private ordering regime. A different legal framework, however, might make for a different rule. For example, providers of free software generate revenue not by extracting money from users, but rather by extracting data that they are then able to monetize. A statute creating a duty for software providers to institute safeguards to secure this data or restrict its use might allow users to bring suit in the event of a security breach under tort theories of negligence or misrepresentation.
But in the absence of such a statute, the fact that much software and many Internet services are free will remain a sticking point for users seeking compensation for security-related injuries. Last year the social networking service LinkedIn was hit with a high-profile class action suit after hackers breached the company server and posted 6.5 million password hashes corresponding to LinkedIn accounts on a forum. Sixty percent of these hashes were later cracked. The plaintiffs alleged that LinkedIn had failed to utilize industry standard protocols and technology to protect its customers’ personally identifiable information, in violation of its own User Agreement and Privacy Policy. A federal court in California threw out the case this spring in part on the grounds that the policy was the same for users of the free and premium versions of the service. Specifically, the court found that the complaint “fails to sufficiently allege that Plaintiffs actually provided consideration for the security services which they claim were not provided.”
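What do “industry standard protocols” mean in this context? The allegation turned in part on how passwords were stored. A fast, unsalted hash can be cracked at enormous scale on commodity hardware, while a salted, deliberately slow key-derivation function raises an attacker’s per-guess cost by orders of magnitude. The Python sketch below is purely illustrative (it is not LinkedIn’s code, and the function names and iteration count are assumptions), but it captures the engineering gap the plaintiffs were pointing at:

```python
import hashlib
import hmac
import os

def weak_hash(password: str) -> str:
    # Unsalted, fast hash: identical passwords yield identical digests,
    # and commodity GPUs can test billions of guesses per second.
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

def strong_hash(password: str) -> tuple[bytes, bytes]:
    # Salted, deliberately slow key derivation (PBKDF2): the random salt
    # defeats precomputed tables, and the iteration count (an assumed
    # figure here) makes every guess expensive for an attacker.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

The point is not that the slower scheme is uncrackable; it is that the gap between the two approaches is a documented, auditable engineering practice, exactly the sort of yardstick against which a court could measure a provider’s conduct.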
The fact that popular Web applications are often free has also proven problematic for users attempting to state a claim for harms stemming from a security breach under existing consumer protection laws. In 2011, in lawsuits filed against Facebook and against Apple for their policies of sharing user data with third parties, two more federal court judges in California ruled that consumer protection laws did not extend to the users of free services. In his order dismissing the Facebook case, Chief Judge James Ware of the U.S. District Court for the Northern District of California wrote, “[A] plaintiff who is a consumer of certain services (i.e., who “paid fees” for those services) may state a claim under certain California consumer protection statutes when a company, in violation of its own policies, discloses personal information about its consumers to the public . . . . Here, by contrast, Plaintiffs do not allege that they paid fees for Defendant's services.”
Here is a second reason software providers tend to prevail under a private-ordering regime, and remain immune even when users bring suit under various tort theories: the courts are resistant to finding an implied warranty of merchantability with respect to security for software products and services that they know cannot be made vulnerability-free. That is, courts tend to treat certain user security expectations as inherently unreasonable. For example, in 2011, several banks sued the payment transaction company that had been holding their customers’ data when it suffered a massive security breach. A Texas federal court rejected the suit, reasoning, “To the extent that the Financial Institution Plaintiffs argue that [the company’s] statements and conduct amounted to a guarantee of absolute data security, reliance on that statement would be unreasonable as a matter of law.” In rejecting the plaintiffs’ claim, the court relied on the logic of yet another court decision, which declared that “in today's known world of sophisticated hackers, data theft, software glitches, and computer viruses, a jury could not reasonably find an implied merchant commitment against every intrusion under any circumstances whatsoever.”
Note that this line of reasoning once got traction in the automobile context. Evans v. General Motors Corp. was a Seventh Circuit case in which the plaintiff alleged that General Motors had been negligent in designing its 1961 Chevrolet station wagon without the perimeter frame rails that were being used in many other cars to protect occupants during a side-impact collision. The Evans court rejected the claim on the grounds that “[a] manufacturer is not under a duty to make his automobile accident-proof or foolproof.” As one commentator pointed out, the court exaggerated the plaintiff’s claim to immunize the manufacturer from liability.
Two years later, the Eighth Circuit rejected this framing in the landmark case Larsen v. General Motors Corp., in which the plaintiff alleged negligent design based on the displacement of the steering shaft in the Chevrolet Corvair. Specifically, the Larsen court rejected General Motors’ attempt to cast the issue as whether it had a duty to produce a crash-proof car, relying instead on the idea that General Motors could have designed a vehicle that would minimize the effect of accidents.
Similar standards based on industry best practices could be used to impose liability in the software context, if courts conceived of software as a product that could be designed to minimize, though not eliminate, security vulnerabilities. But the judiciary’s lack of technical expertise and the inherent complexity of software have long prevented the courts from making this leap. In a case dating back to 1986, a federal bankruptcy court declined to enforce the implied warranty of merchantability where a DOS-based computer marketed as Apple-compatible failed to run Apple software. Noting that Apple sells thousands of software programs, the court declared, “We simply cannot determine the extent of the incompatibility and on that failure of proof we conclude that there has been no breach of an implied warranty of merchantability.” The fact that software users have been unsuccessful in asserting breach of implied warranty bodes ill, in turn, for their ability to bring what amounts to the “conceptually indistinguishable” tort claim for negligence against the software maker.
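What might an “industry best practices” yardstick look like in code? Consider a canonical example, offered purely as illustration and drawn from none of the cases above: the standard defense against SQL injection. Neither version below is invulnerable, but the second follows a widely documented practice that forecloses an entire class of defects, precisely the Larsen-style distinction between a flawless product and a reasonably designed one:

```python
import sqlite3

# A throwaway in-memory database, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def find_user_risky(name: str):
    # Risky pattern: user input is spliced directly into the SQL string,
    # so an input like "x' OR '1'='1" rewrites the query itself.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_parameterized(name: str):
    # Best-practice pattern: the driver binds the value as data, never as
    # SQL, eliminating this class of injection vulnerability.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

A court need not decide whether the second function is perfectly secure; it need only ask whether a vendor that shipped the first, when the second was the documented norm, exercised reasonable care.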
A third factor suggests that courts will continue construing software license agreements—and, as it turns out, tort actions—in favor of software providers: the idea that hackers, not providers, are singularly responsible for security breaches. Last year, a California federal court rejected the claim that Sony had misrepresented the quality of its network security, given that Sony’s privacy policy had stated that its security was not perfect; the court also rejected plaintiffs’ claims of unfair business practices, since Sony did not benefit financially from the third-party data breach.
The court’s rejection of the unfair business practice claim is noteworthy in that it suggests a narrow view of what constitutes financial benefit. That is, the court reasoned that software providers gain nothing when malicious actors bring about security breaches, declining to take the more expansive view that software vendors (unjustly) profit from cheap, shoddy development and shipping practices that contribute to security vulnerabilities and breaches in the first place.
This cramped focus on the role of the hacker in executing the exploit and the refusal to consider the role of the software maker in creating an environment susceptible to exploit similarly present a challenge for any attempt to bring basic tort claims. Negligence is grounds for a civil lawsuit where the plaintiff is able to establish that the defendant owed a duty, breached that duty, caused harm as a result and should pay damages to make Humpty Dumpty whole again. Establishing the causation element in that chain is difficult, if not impossible, so long as courts choose to fixate on the hacker, not the environment-creator, when assessing who brought about the injury in question.
In sum, it is significant that the ideas buttressing the courts’ interpretation of software license agreements also pose problems for holding software providers liable under consumer protection statutes or under tort theories. But the idea that, in the absence of special legislation or regulation, tort could be a viable avenue for pursuing software provider liability runs up against a much bigger threshold problem: the economic loss doctrine. Broadly speaking, the doctrine restricts tort liability to cases involving bodily injury or damage to other property. This is a special problem for tort claims related to software vulnerabilities, since most security breaches give rise to purely economic losses or data compromises.
Thanks to the economic loss rule, courts have long been spared the uncomfortable task of actually declaring that software vendors have no duty to institute reasonable measures to develop and maintain secure software. For example, back in 2000, the gas and oil company Hou-Tex, Inc. alleged that a software company had breached both its duty to inform its customer about a bug in the software and its duty to fix the problem. But the Texas state court held that the economic loss rule precluded Hou-Tex’s negligence claims against the software company. In a 2010 case, a New York federal judge made no mention of a potential duty, and instead simply dismissed plaintiffs’ claims of negligence, strict liability and gross negligence for damages stemming from defects in the contracted-for software, as barred by New York’s economic loss doctrine.
The economic loss doctrine has public policy roots. As the Supreme Court explained in its landmark 1986 decision East River Steamship Corp. v. Transamerica Delaval, Inc., tort law is the appropriate vehicle for addressing unexpectedly dangerous and defective products: in the case of unexpected personal injury or property damage, the manufacturer is best positioned to bear the cost and can price the product to spread the loss. Pure financial loss, however, is properly the domain of contract law, particularly the law of warranty, because contract law leaves the parties free to set the terms of the bargain. Where the consumer agrees to pay less, the manufacturer can restrict its liability by disclaiming warranties or limiting remedies.
In short, the economic loss doctrine is premised on the idea that, as declared by the East River Steamship court, “a commercial situation generally does not involve large disparities in bargaining power . . . [thus] we see no reason to intrude into the parties' allocation of the risk.” In other words, the rule does not account for the asymmetric bargaining power between software vendors and end-users—which is pretty vast.
And so, having briefly toured some of the problems with the current private-ordering regime and having learned (in part) why tort law won’t work either, we return, full circle, to the inadequacies of contract law and the UCC in allocating liability between software vendors and users.
The failure of software users to prevail under contract, tort, or consumer protection schemes when it comes to getting compensated for bad code suggests that in the absence of specific legislation or regulation—for example, restricting software vendors’ ability to rely on blanket disclaimers—software users will have little success in holding vendors accountable for vulnerabilities.
To put it simply, the laws on the books must change—or the quality of our software will not.