This article, originally presented to the Cross-Border Data Forum, expands upon arguments first set forth by the authors in “Flat Light: Data Protection for the Disoriented, From Policy to Practice,” The Hoover Institution, November 20, 2018.
Governments worldwide are in the process of updating the Budapest Convention, also known as the Convention on Cybercrime, which serves as the only major international treaty focused on cybercrime. This negotiation of an additional protocol to the convention provides lawmakers an opportunity the information security community has long been waiting for: modernizing how crimes are defined in cyberspace.
Specifically, the Computer Fraud and Abuse Act (CFAA), codified at 18 U.S.C. § 1030, dictates what constitutes illegal acts in cyberspace in the United States. As one of the earliest laws focused on crimes in cyberspace, it has had a lasting influence on similar cybercrime laws internationally. But the CFAA over-emphasizes one element, the act of trespass, defining malicious cyber activities as simply stemming from computer access that is “without authorization or exceeding authorized access” (18 U.S.C. § 1030(a)(1)). Instead, the law should be reframed to criminalize only the intentional creation of harms.
Without changing the CFAA, and other cybercrime laws like it, we are collectively headed for trouble. As it stands, the CFAA disincentivizes the wrong types of activities in cyberspace while failing to criminalize activities that should be illegal, a gap that the increasing adoption of techniques commonly referred to as artificial intelligence and machine learning will only widen.
Notably, current law negatively affects network and cybersecurity research. These problems will become more acute as techniques like machine learning are adopted more widely in the future.
The CFAA’s Negative Impact on the Security Community—and Us All
At the time of the CFAA’s enactment in 1986, the activities it sought to prohibit, and those it would later be applied against, differed widely from modern and emerging criminal activity targeting computers. In the mid-1980s, for example, the internet was still in its infancy. Cell phones, much less the ubiquitous smartphones of today, were not in wide use, and tech giants like AOL, Amazon and Facebook did not yet exist.
Over time, however, the types of unauthorized access criminalized by the CFAA have harmed a community whose importance has grown alongside the rapid adoption of networked devices: the security research community.
As individuals collectively rely on software for more and more daily activities—from communicating with friends and colleagues, to transportation in software-driven cars and airplanes, to financial and other activities dependent on software—understanding new vulnerabilities in these systems has become paramount. The way to comprehend and to manage these new risks is by having a robust and engaged community of researchers focused on these problems. This community is shouldering a larger and larger share of responsibility for the collective risks society faces as individuals digitize more of their everyday lives.
But laws like the CFAA have long hampered this community’s work, largely due to their lack of focus on the intentional creation of harms beyond mere access. At present, there is frequently no legal difference between a criminal’s activities within a network (for nefarious purposes) and a security researcher’s activities within the same network (for research purposes). As the Center for Democracy and Technology has documented, some security researchers avoid conducting research on any networked device out of concern over violating the CFAA, fearing that their activities will be deemed as illegal as those committed with malicious intent. As more and more software systems are by definition networked, especially with the adoption of the cloud and the “internet of things,” this is a deeply troubling outcome.
This is not an outcome intended by those who drafted (or even enforce) such laws. Indeed, the U.S. Department of Justice itself has publicly noted similar shortcomings of the CFAA. Requiring malicious intent to create specific harms, beyond mere access, as an element of any illegal activity is the first step toward fixing this problem.
The Rise of Machine Learning Will Make These Problems Worse
There are more reasons to update the CFAA than to encourage and protect security research alone. Chief among them are the challenges that machine learning will pose to the criminal framework created by the CFAA. Before we explain these challenges in practice, however, some baseline definitional work is in order.
We define “machine learning,” albeit somewhat loosely, as any set of techniques meant to extract patterns from data. (Aurélien Géron may have put it best in stating that “machine learning is the science (and art) of programming computers so they can learn from data.”) The pattern-extracting programs these techniques produce are referred to as “models,” which can be applied to new data to determine, say, whether input images contain faces, red lights, or oncoming cars. Such models are increasingly used in all sorts of scenarios, from self-driving or semiautonomous vehicles (as in the examples above), to financial trading, to diagnostics in medical imaging, and an expanding list of use cases.
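To make the definition concrete, the following is a minimal, purely illustrative sketch (not drawn from any real system; all data are invented) of a program “learning from data”: a one-parameter threshold model whose decision rule is extracted from labeled examples rather than hand-coded.

```python
# Illustrative sketch only: a toy "model" whose rule is learned from data.
# The data below are invented; a real classifier would be far more complex.

def train_threshold(examples):
    """Pick the cutoff that best separates labels 0 and 1.

    `examples` is a list of (value, label) pairs, with label in {0, 1}.
    """
    best_t, best_correct = None, -1
    for t in sorted(v for v, _ in examples):
        correct = sum(1 for v, y in examples if (v >= t) == (y == 1))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def predict(threshold, value):
    """Classify a new input using the learned rule."""
    return 1 if value >= threshold else 0

# Toy "training data": sensor readings labeled 1 (object present) or 0 (absent).
data = [(0.1, 0), (0.2, 0), (0.35, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
t = train_threshold(data)       # the learned decision rule: t == 0.7
assert predict(t, 0.85) == 1    # new reading classified as "present"
assert predict(t, 0.15) == 0    # new reading classified as "absent"
```

The point of the sketch is that no human wrote the decision rule; it was extracted from the data, which is what distinguishes such models from traditional, explicitly programmed logic.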
From a criminal perspective, the key point is that such models make decisions based on input data. Feed the model incorrect or intentionally manipulated input data, and the model will behave in potentially negative ways. As Nicolas Papernot and other researchers have demonstrated, such models present entirely new types of vulnerabilities compared to traditional logic-based software.
Practically speaking, this means that feeding malicious inputs into an image classifier on a vehicle (an act that could literally lead to loss of life or limb) is not clearly illegal under the CFAA. Researchers have already used such attacks, known as “evasion” attacks, to demonstrate the ability to cause autonomous vehicles to misclassify road signs. Arguably, this type of attack would not count as “unauthorized access” because it merely displays an image to the model, even if the actor intends to cause the model to malfunction. Similarly, so-called poisoning attacks, in which the data on which a model is trained are intentionally polluted to create (or overlook) harmful behavior, likely do not fall under the rubric of malicious access at all: training data can be polluted without engaging in the types of activities the CFAA criminalizes. Many more methods of manipulating such models exist, and more are on their way.
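To illustrate the two attack classes just described, consider a hypothetical sketch against a toy one-parameter threshold classifier (all data and numbers here are invented for illustration and stand in for a real image classifier). An evasion attack perturbs an input so the trained model misclassifies it; a poisoning attack pollutes the training set so the retrained model itself learns a harmful decision rule. Neither step requires access to the victim’s systems in the traditional sense.

```python
# Hypothetical sketch of evasion and poisoning attacks on a toy threshold
# classifier. All data are invented; this stands in for, say, a road-sign
# detector, where label 1 means "stop sign" and 0 means "background."

def train_threshold(examples):
    """Learn the cutoff that best separates labels 0 and 1."""
    best_t, best_correct = None, -1
    for t in sorted(v for v, _ in examples):
        correct = sum(1 for v, y in examples if (v >= t) == (y == 1))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def predict(threshold, value):
    return 1 if value >= threshold else 0

clean = [(0.1, 0), (0.2, 0), (0.7, 1), (0.9, 1)]
t = train_threshold(clean)       # learned boundary: t == 0.7

# Evasion: nudge a malicious input just below the learned boundary so it
# is misclassified -- the attacker only presents an input, nothing more.
assert predict(t, 0.71) == 1     # honest reading: detected
assert predict(t, 0.69) == 0     # perturbed reading: evades detection

# Poisoning: seed the training set with mislabeled points so the
# retrained model draws a harmful boundary on its own.
poisoned = clean + [(0.75, 0), (0.78, 0), (0.8, 0)]
t_bad = train_threshold(poisoned)  # boundary shifts up to 0.9
assert predict(t, 0.85) == 1       # clean model: stop sign detected
assert predict(t_bad, 0.85) == 0   # poisoned model: stop sign missed
```

In both cases the attacker never breaches a network boundary, which is precisely why such conduct fits awkwardly within an access-centered statute.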
Ensuring that such emerging activities are as illegal as counterpart activity applied to traditional software systems is a key challenge for criminal law—and yet another reason to modernize the CFAA.
A Starting Point for Revisions
The CFAA, along with other analogous cybercrime statutes, should more concretely focus on the intentional creation of harms in cyberspace. Cybercrime laws should, as a result, define digital criminal activity as simply as possible.
Our suggested starting point for a definition of a cybercrime is:
(1) The intentional creation of
(2) a harm related to data or a software program that is
(3) outside the actor’s lawful control or possession
where “harm” itself is defined as
(a) a breach in the confidentiality, integrity, or availability of the data or software system, or
(b) the facilitation of outcomes, either through the use or manipulation of input data or computer code, designed to undermine, slow or retard the performance of any software system.
So, what does this new definition provide?
To start, the focus on intentional harms beyond trespass provides sufficient clarity as to what constitutes illegal activity. We believe this clarity would empower security researchers to conduct their work without fear of criminal liability. Unintentionally created harms resulting from such research would not be criminal (though civil penalties could still apply).
It is true, of course, that a greater emphasis on the intentional creation of harms places more technical burden on the judiciary, on law enforcement and on lawyers to understand what activities in cyberspace actually signify, in other words, to divine intent from the technical means that malicious and nonmalicious actors use. It is equally true that many of these communities currently lack the technical capabilities, expertise or resources to do so.
But these skills must nevertheless adapt to the law, not the other way around. We believe that these communities are already on their way to becoming more technical, and there are many growing imperatives for them to do so.
In addition, our proposal broadens the definition of illegal activity from simple “unauthorized access,” as relied on by the CFAA, toward the creation of specific harms in cyberspace targeted at data or at software systems as a whole. Those harms may relate to the confidentiality, integrity, or availability of data or software. But harms are also defined as outcomes caused by the manipulation of input data or computer code. This, in turn, ensures that malicious activities targeting machine learning models are categorized as criminal activity while also protecting traditional software systems already covered by the CFAA.
As with all legislation, the devil is in the details—and for this reason we do not intend to offer specific draft language to be enacted. The above language can surely be improved upon. But it offers a good starting point to improve on the shortcomings we identified.
An Opportunity in the Budapest Convention Protocol Negotiations
The Budapest Convention’s member parties are in the process of negotiating a Second Additional Protocol, meant to address new and emerging criminal activities in cyberspace. Negotiations officially began in June 2017 and are currently scheduled to end in December 2019, though they may be extended beyond that point if required. The negotiations are focused on four areas: international cooperation among governments; cooperation between governments and private internet service providers; standards for access and security; and more general data protection requirements.
To date, the protocol’s drafting group has published provisional text for two articles on “emergency mutual assistance” and “language of requests.” The drafting group is also discussing additional articles on video conferencing, an endorsement model for subscriber information requests, jurisdiction issues, direct cooperation with service providers, international production orders, extending searches and access based on credentials, joint investigations and joint investigation teams, and investigative techniques.
While we cannot offer specific insight into the content of the current round of negotiations, we do contend that, as this process continues, cybercrime laws will be top of mind for regulators both in the U.S. and internationally.
These negotiations provide a near-term opportunity to reframe the CFAA’s core assumptions, along with a decent chance—or, at least, our hope—that such a reframing will gather momentum at an international level.
Ultimately, the responsibility to modernize the CFAA falls to Congress, which should promptly take up the task. If it does, legislators and staffers on the Hill will likely find subject matter experts in the Departments of Justice, State and elsewhere involved in the Budapest negotiations who are more than sympathetic to this cause. The process will not be quick, but it is urgent.
Few environments have developed as quickly or have evolved as much as cyberspace has in such a short period of time. As a result, it is understandable that current U.S. criminal laws, as applied to cyberspace, do not reflect the current digital realities. Now is the time for the law to adapt.
Editor’s Note: This post was updated at 1:49 p.m. Eastern time on June 21 to clarify the authors’ points about the overemphasis of intent to trespass in the CFAA.