Editor's note: This article is part of a series of short articles by analysts involved in the Cyberspace Solarium Commission, among others, highlighting and commenting upon aspects of the commission's findings and conclusions.
Liability for insecure software is already a reality. The question is whether Congress will step in to give it shape and a coherent legal structure. Broadly speaking, Congress could do this in one of two ways. It could create a legal framework for claims brought by private citizens or state attorneys general. Or it could delegate the regulation of software security to an agency like the Federal Trade Commission (FTC).
The alternative is the status quo. In the absence of congressional action, the past decade has seen the slow improvisation of a disjointed movement to make software vendors pay for shoddy code and for the development processes and patching practices that have rendered insecure software the market norm. That movement has had an episodic, experimental quality, with media attention sporadically converging on high-profile data disasters and the agency warning letters, government-imposed fines, court-supervised settlements and wide-ranging legal theories that come with them.
The most prominent player in this space has been the FTC. To pursue companies for egregious security practices resulting in easily exploited software and hardware vulnerabilities, the FTC has relied on its existing authority under Section 5 of the FTC Act, which allows it to take enforcement actions against “unfair or deceptive acts or practices in or affecting commerce.” For example, in July 2019 the FTC settled its years-long action against the network equipment provider D-Link Systems, Inc., for misrepresentations the company allegedly made regarding the security of its readily hackable wireless routers and internet-connected cameras. The FTC alleged D-Link failed to incorporate basic testing into its software development process and shipped products with unacceptable flaws like hard-coded login credentials; the settlement requires the company to implement a comprehensive software security program.
But Section 5 enforcement has its limitations, particularly when it comes to the complex task of building out a fair, effective software liability scheme, one that provides adequate enforcement guidance to companies that seek to comply with the law and also protects software users from those companies that fail to implement reasonable, transparent development practices or to timely remediate security flaws. The holy grail is an incentives and penalties structure that compensates for the information asymmetry and negative externalities that have caused the market to undersupply secure code—but without imposing unacceptable or inadvertent constraints on an industry that reportedly drives economic growth in every state and, in 2018, directly contributed $845 billion to the U.S. economy.
That’s a tall order, and it is a long way from being filled. This is notwithstanding the emergence of a liability landscape that extends well beyond the FTC and its consumer protection mandate, encompassing lawsuits brought by private plaintiffs, states and the Justice Department using alternative legal theories. Simply put, software security is inherently challenging to measure and regulate, and the evidence to date suggests that without money and an express congressional mandate, efforts to enforce software security will continue but could well result in a body without bones: big occasional settlements that strike fear in the hearts of vendors, paired with little substantive development of the law to reliably guide vendors’ development, monitoring and patching practices.
If Congress does decide to act, whether directly or by agency proxy, it will need to thread the needle between comprehensiveness and carefulness. Any effort to regulate software (and, to a lesser extent, hardware and firmware) insecurity must address some big questions—questions that have resulted in few definitive answers under the currently emerging settlement-based regime—but without wreaking technical havoc on an industry intertwined with virtually every other.
The first question is one of scope. What kind of software should Congress seek to regulate, at least to start? Unsurprisingly, it’s the low-hanging fruit that has attracted the most significant agency attention to date. This includes smart home products and other “internet of things” devices that pair a physical body with a faulty digital brain. By all expert assessments, these products are unacceptably easy to hack, and they share characteristics of conventional goods and services that facilitate their regulation under existing laws. The government has further narrowed this universe by going after companies specifically in its capacity as a major technology purchaser. For instance, by invoking federal and state statutes that award double or treble damages for the knowing submission of false statements to the government or fraud against taxpayers, the United States, 15 states and the District of Columbia recently secured an $8.6 million settlement with Cisco for its alleged sale of video surveillance technology with known software flaws to the Department of Homeland Security, the Secret Service, the U.S. military and other government entities.
But the vast majority of cyberattacks target consumers using application software. Legislation or regulation that attempts to reach such services would have to work out a number of knotty issues, such as the parameters under which liability attaches to free software, much of which is open source (or mostly open source) and which ranges from downloadable web browsers like Chrome and Firefox to web applications like Facebook.
A related challenge is determining who owns the liability. Software isn’t always acquired with a click; it may come preinstalled on a computer or smartphone, which is sold by an original equipment manufacturer and may pass through dealers before reaching retailers and, finally, end users. A workable legislative or regulatory scheme would be one that allows intermediaries in the chain to efficiently shift and share liability—but without simply offloading it onto consumers in the manner of current end-user license agreements.
These threshold inquiries are only examples of the line drawing required to adequately sketch out the universe that will be subject to regulation. And they are but the lead-up to the primary challenge: determining what constitutes unreasonably insecure software or unreasonable security practices warranting enforcement action.
Because all software of nontrivial size and complexity can be expected to contain vulnerabilities, and because the frequency of attacks by malicious third parties could reflect no-fault factors such as the size of the software’s user base, successful exploitation is not by itself a legitimate basis for assessing liability. Consider, for example, the arguments Facebook made in the lead-up to its recent settlement of a class-action lawsuit over a massive 2018 data breach arising out of a vulnerability that allowed attackers to steal users' digital access tokens and see everything in the users' profiles. The private plaintiffs attributed the breach to Facebook's failure to address known flaws and its failure to take proper action after the attack. It’s one thing to penalize software vendors for shipping software with known vulnerabilities, discovered during the development process or disclosed through public databases such as the National Vulnerability Database and Common Vulnerabilities and Exposures listing. But Facebook has denied fault on the ground that “the breach was a result of an unknown and unforeseeable vulnerability,” reportedly caused by the interaction of three different software bugs. A fair accountability schema would take claims of this kind seriously, with reference to industry standards and objective benchmarks to assess whether the vendor’s implementation of best practices should insulate it from liability.
The use of known vulnerabilities as a benchmark for software insecurity would likely feature prominently in any reasonable starting liability schema. For years, experts have decried vendors’ failure to remediate and users’ failure to patch documented security flaws; the 2020 IBM X-Force Threat Intelligence Index revealed that exploits of known vulnerabilities account for 30 percent of attacks. But unlike most goods and services, software evolves, which creates a constellation of sub-issues relating to responsible post-release practices.
To start, developers would need to implement reasonable monitoring and patching practices. This is less straightforward than it sounds. Consider, for example, the $575 million settlement obtained by the FTC in partnership with the Consumer Financial Protection Bureau and most of the country’s state attorneys general. The complaint alleged that Equifax failed to provide “reasonable security” for the sensitive personal data of approximately 147 million people after the U.S. Computer Emergency Readiness Team alerted it to a critical security vulnerability in open-source software used in Java web applications. The problem wasn’t that Equifax neglected to issue a patch or to scan its network for unpatched software. Rather, the FTC’s complaint alleged that the scan was conducted using an improperly configured automated scanner, leaving the vulnerability in Equifax’s systems for more than four months.
The mutability and extensibility of software raise other questions. For instance, alerting the world to a flaw instantly creates risks at the same time it serves as the first step to mitigation. Developers (and to some extent, manufacturers and assemblers) will need to not only track, record and patch vulnerability discoveries but also develop and implement responsible vulnerability disclosure policies. On another note, the flip side of software updates is software discontinuation. What are vendors’ obligations regarding software they no longer plan to support—and for how long?
Legislation is not a magic bullet for the complexities and uncertainties of the current, highly uneven software risk landscape. Much will turn on the care with which the legislation and any implementing regulations are drafted, and on the consistency and coherence of efforts to interpret and implement those standards, whether through private or parens patriae suits or by way of agency enforcement actions. But one thing is clear: The horse has left the barn. Whether or not Congress sees a role for itself in enhancing and standardizing the current software liability regime, bad code is now bad news not only for end users but also for all those deemed responsible for putting it into the stream of commerce. Liability is here. What remain are questions of design and deliberation, ownership and optimization.