How Long Will Unbreakable Commercial Encryption Last?
Most people who follow the debate over unbreakable, end-to-end encryption think that it’s more or less over. Silicon Valley has been committed to offering such encryption since at least the Snowden revelations; the FBI has abandoned its legal campaign against Apple’s device encryption; and prominent national security figures, especially those tied to the National Security Agency, have sided with industry and against the Justice Department. Attorney General William Barr is still giving speeches claiming that law enforcement is “going dark”—but in this partisan age many Americans will not take his views at face value. And Congress is unwilling to go to bat for an FBI that is increasingly viewed with skepticism on the right as well as the left.
In fact, this complacent view is almost certainly wrong. Enthusiasm for controlling encryption is growing among governments all around the world and by no means only in authoritarian regimes. Even Western democracies are giving their security agencies authorities that nibble away at the inviolability of commercial encryption. Equally important, unbreakable user security will increasingly conflict with the commercial and political interests of the big Silicon Valley companies that currently offer encryption as a mass-market feature—especially as technology companies take a more aggressive role in content moderation.
While the debate over encryption has stalled in the United States, it’s been growing fiercer abroad as other nations edge closer to direct regulation of commercial encryption.
In 2016, both France and Germany officially called for Europe-wide regulations that would require companies to decrypt communications at the behest of law enforcement. More recently, Germany’s interior ministry has mooted the possibility of using existing legal authority to demand that messaging companies either decrypt messages or face being banned from the German market. In France, President Emmanuel Macron campaigned on a promise to bring internet companies to account on the issue, saying that companies would be considered “complicit” in future terror attacks if they refused. His pledge came on the heels of the prior French administration’s anti-terrorism legislation, which cemented an existing “technical assistance” obligation on the part of messaging companies and imposed a sixfold increase in fines for noncooperation.
Other members of the European Union have made similar forays into the debate. Responding to a European Council questionnaire, Hungary, Croatia, Latvia and Italy reportedly told the council that they supported lawful access mandates at the European level, while Poland expressly called for a requirement that software and hardware manufacturers install either a backdoor or weakened encryption in their products.
These nations will have a chance to revisit the law enforcement obligations of internet companies when the member states implement the 2018 European Electronic Communications Code, which unifies regulatory obligations of traditional telecom players with the “over the top” applications popularized by internet players. Among the obligations that the code allows member states to impose on internet services is the duty to “enabl[e] legal interception by competent national authorities.” (See Annex I.) Each European nation is required to adopt implementing legislation by 2020, which could offer another opportunity to clarify and perhaps expand the companies’ obligations to enable intercepts.
The English-speaking nations sometimes called the “Five Eyes”—the U.S., U.K., Canada, Australia and New Zealand—have been even more active in recent months. The group issued a memo in September 2018 calling on the five member governments to demand that companies “create customized solutions, tailored to their individual system architectures that are capable of meeting lawful access requirements.” The memo also urged Five Eyes governments to “pursue technological, enforcement, legislative or other measures to achieve lawful access solutions” in the event of continued stagnation of lawful government access to encrypted communications.
The document may reflect the influence of the Australian government, which spent much of late 2018 debating a law that would force technology companies to provide law enforcement and security agencies access to encrypted communications. In the end, the Australian law as passed goes farther than that of any other Western democracy to date—but not quite to the point of requiring that backdoors be built into all commercial products. It does give police broad authority to compel companies to design their services so that the government can covertly access users’ encrypted messages. To reassure critics, however, it abjures any design changes that would create “systemic weakness.”
Canada was once a lone holdout as the rest of the Five Eyes drifted toward lawful access mandates. But it too may be joining the crowd. The Canadian public safety minister, Ralph Goodale, joined the Five Eyes 2018 memo and in 2019 called for a policy that ensures privacy while “at the same time making sure that our platforms and services and systems are not harbouring the kind of behaviour that would exploit children and create victims.” Goodale brushed off critics by saying that no one would argue “that the system should be designed in such a way that it becomes the secret preserve of those who would exploit children, for example.”
The United Kingdom kicked off the latest round of Five Eyes surveillance expansions in 2016 with the adoption of its Investigatory Powers Act. And it has kept the pressure on internet companies in the years since. The Investigatory Powers Act did not embrace lawful access design requirements, though it did authorize the U.K. government to issue technical capability notices that some feared could be used for that purpose. And it made many other law-enforcement-friendly changes, ordering web and phone companies to collect and store users’ web browsing histories for one year and authorizing security services to break into cellphones and computers. The U.K. government has proposed using this law to require that communications companies produce requested user data within a single working day, and it has kept up a steady flow of criticism aimed at internet companies that it sees as providing an online haven for terrorists. The U.K. signals intelligence agency, GCHQ, has also joined the fray, arguing that internet messaging companies could respond to lawful access requirements for real-time encrypted communications by covertly adding lawfully authorized agencies as participants to criminal chat groups.
Most recently, a piece of U.S. legislation, the Clarifying Lawful Overseas Use of Data (CLOUD) Act, has opened the door for Five Eyes countries to further pressure platforms and messaging services based in the U.S. to meet government data demands. Enacted to ratify U.S. law enforcement access to data in the possession of U.S. companies no matter where the companies may store that data, the law offers foreign nations quicker access to data held in the U.S. if they negotiate an agreement with the U.S. that provides civil liberties assurances. Given the similarities in U.S. and U.K. legal cultures and systems, Five Eyes countries like the U.K. are likely to be first in line for CLOUD Act agreements, and that will give them even greater authority to demand data from Silicon Valley companies. Encryption that operates only between the user and Silicon Valley’s cloud will no longer defeat foreign law enforcement.
The U.S. Justice Department, which will negotiate such agreements, is likely to welcome that result. Indeed, Attorney General Barr has presented the case for lawful access in more detail and from a higher position than ever before.
In a July 2019 attack on unbreakable encryption that the Washington Post described as “blistering,” the attorney general cited the example of a Mexican cartel that created a WhatsApp chat “for the specific purpose of coordinating the murders of Mexico-based police officials. The cartel ended up murdering hundreds of these police officers. Had we been able to gain access to the chat group on a timely basis, we could have saved these lives.”
Barr dedicated a significant portion of his speech to rebutting the claim—long espoused by tech companies and their techno-libertarian allies—that any mechanism for complying with government access requests would introduce systemic security risks that are by definition unacceptable. Barr expressed skepticism about the factual basis of that claim but also challenged its absolutism: “[A] slight incremental improvement in security,” he argued, was not “worth imposing a massive cost on society in the form of degraded public safety.” His skepticism, he implied, was grounded in the fact that no software company has taken such an absolutist position when its own business interests were at stake. Indeed, Barr argued, all software companies engage in a balancing of security against other values, and do so without provoking security disasters: “[P]roviders design their products to allow access for software updates using centrally managed security keys. We know of no instance where encryption has been defeated by compromise of those provider-maintained keys. Providers have been able to protect them.”
Finally, the attorney general hinted that the Justice Department has in mind a kind of Fabian strategy for achieving the rebalancing it wants. While wielding the threat of U.S. regulation, Barr also noted that “many of our international partners,” including Australia and the U.K., “are already moving on statutory frameworks” to address encrypted communications. With this in mind, he urged Silicon Valley to come to the table—or else. “Key countries, including important allies, have been moving toward legislative and regulatory solutions. … The status quo is exceptionally dangerous, unacceptable, and only getting worse. The rest of the world has woken up to this threat. It is time for the United States to stop debating whether to address it, and start talking about how to address it.”
This argument is one reason why I believe the tech companies are slowly losing the battle over encryption. They’ve been able to bottle up legislation in the United States, where the tech lobby represents a domestic industry producing millions of jobs and trillions in personal wealth. But they have not been strong enough to stop the Justice Department from campaigning for lawful access. And now the department is unabashedly encouraging other countries to keep circling the tech industry, biting off more and more in the form of law enforcement mandates. That’s a lot easier in countries where Silicon Valley is seen as an alien and often hostile force, casually destroying domestic industries and mores.
The Justice Department has learned from its time on the receiving end of such an indirect approach to tech regulation. It has struggled for 30 years against a European campaign to use privacy regulation to prevent tech companies from giving the U.S. government easy access to personal data. But as the tide of opinion turned against U.S. tech companies around the world, the EU was able to impose billions in fines on them in the name of privacy. Soon it really didn’t matter that these companies’ data practices weren’t regulated at home. They had to comply with Europe’s General Data Protection Regulation. And once they accepted that, their will to lobby against similar legislation in the United States was broken. That’s why California—and perhaps the federal government—is inching closer to enacting a privacy law that resembles Europe’s.
Having watched this scenario play out over privacy, the Justice Department may well hope for a similar result in the crypto debates. All it will take is one Western democracy with the determination to enforce a lawful access mandate with real penalties, and the wall of resistance from Silicon Valley will begin to crumble. And if you asked a European Council meeting how many countries were willing to impose billion-dollar fines on Silicon Valley for, well, anything, you’d likely see 28 hands in the air (27 after Brexit).
Governments may be the main threat to big tech companies’ current approach to encryption, but there is another, more surprising threat: their own business interests. The techno-libertarians’ absolutist rejection of lawful access has never been tenable in a commercial context. Barr lambasted Silicon Valley for claiming that government access to consumer devices was never acceptable, even for a purpose as critical as stopping terror attacks, while insisting that its companies had to have access to all their customers’ devices for the purpose of sending them security updates (and, in Apple’s case, promotional copies of unwanted U2 albums). What’s more, Big Tech’s best customers—that is, businesses—don’t want unbreakable end-to-end communications direct to the end user. That encrypted pipe makes it impossible to find and stop malware as it comes in and stolen intellectual property as it goes out. It also thwarts a host of regulatory compliance mandates. So, pace the absolutists, tech companies have found ways to ensure that their business customers can compromise end-to-end security.
All these compromises contradict tech companies’ claims that their opposition to lawful access is grounded in consumer security. In a post-Cambridge Analytica climate of tech cynicism, many will come to believe that industry’s motivation is less elevated: that it is happy to build in backdoors that contribute to the bottom line but less happy about those intended to fight crime and terrorism.
Finally, a change in Silicon Valley’s own culture may create opposition within the Valley to crypto absolutism. In recent years, the dominant ideology of Silicon Valley shifted from a mild hippie libertarianism to greater openness to content regulation along with a rise in concern over far-right extremism. Tech advocates may shrug off Attorney General Barr’s citation of cartel assassins using an encrypted chat group to kill police. But the reaction in the Valley could be quite different if the next far-right attack on Muslim worshippers is planned in such a forum. Embracing a role as speech police has given technology companies a new appreciation for what the police need. And access to plaintext is high on the list: You can’t police what you can’t decrypt.
One observer, Kalev Leetaru, thinks the shift has already begun. He noted in a perceptive July 2019 article that strict content monitoring can’t be reconciled with absolute encrypted privacy. In another article on the subject, he argued that a Facebook presentation showed that the company’s engineers were already working on a way to compromise user privacy in the name of content moderation. Leetaru thought that the company was exploring the option of installing its content moderation censor on user devices because end-to-end encryption would make centralized content moderation less and less effective. Only a content moderation system that operated inside the crypto walls, Leetaru argued, would solve this problem. It would do so by scanning for suspect content and sending it back to the mother ship for final analysis.
Another way of characterizing such a system, of course, is as an automated, context-aware wiretap. And the step from a corporate wiretap to a government wiretap is not a big one.
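To make the architecture Leetaru describes concrete, here is a minimal sketch of on-device scanning that runs before end-to-end encryption. Everything in it is illustrative, not Facebook’s actual design: the function names are invented, and real client-side scanning proposals use far more sophisticated techniques (such as perceptual hashing or on-device classifiers) than the exact-hash matching shown here.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited content,
# assumed to be distributed to the client by the provider.
BLOCKLIST = {
    hashlib.sha256(b"known-prohibited-image-bytes").hexdigest(),
}

def report_to_provider(plaintext: bytes) -> None:
    # Stub for the "mother ship" callback: a real system would upload
    # the matching content (or a fingerprint of it) for review.
    pass

def moderate_then_encrypt(plaintext: bytes, encrypt) -> tuple[bytes, bool]:
    """Scan a message on the user's device *before* it is encrypted.

    Returns the ciphertext and a flag indicating whether the plaintext
    matched the blocklist and was reported to the provider.
    """
    flagged = hashlib.sha256(plaintext).hexdigest() in BLOCKLIST
    if flagged:
        report_to_provider(plaintext)
    return encrypt(plaintext), flagged
```

The point of the sketch is the ordering: because the scan runs on plaintext before encryption, the end-to-end encrypted channel is left formally intact while its privacy guarantee is hollowed out, which is exactly why such a system can fairly be called a context-aware wiretap.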
Facebook objected heartily to Leetaru’s extrapolation from a single engineering talk, insisting that it has no plans to implement such a system: “To be crystal clear, we have not done this, [and] have zero plans to do so,” wrote one WhatsApp exec. “We understand the serious concerns this type of approach would raise which is why we are opposed to it.”
Taking these assurances at face value, Leetaru still has a point. There is a fundamental conflict between providing absolute privacy and preventing the distribution of unapproved content. If running content moderation on the user’s device isn’t on Silicon Valley’s road map, how is the industry planning to square this circle? There is massive pressure both from governments and from the Silicon Valley workforce to do more content moderation, whether to suppress the speech of Islamists or of white nationalists. Sacrificing end-to-end encryption in order to allow for more effective content moderation would doubtless upset those users who care deeply about their privacy. But sacrificing content moderation in the name of privacy would end in massive fines and a loss of workforce talent.
In the end, I suspect, content moderation is going to win. And end-to-end encryption is going to lose.