Three Things to Remember from Europe’s ‘Right to Be Forgotten’ Decisions
Europe’s highest court issued two huge rulings on Sept. 25 regarding the implementation of the EU’s “Right to Be Forgotten.” Both decisions involve a long-standing dispute between Google and France’s data authority, the Commission Nationale de l’Informatique et des Libertés (CNIL); both have considerable implications for the cross-border regulation of the internet. The big, headline ruling limits the territorial reach of the CNIL’s orders over Google’s internet services. The court held that, while the CNIL can compel Google to remove links to offending material for users located in Europe, it may not currently do so worldwide. This was the key issue that brought so much attention to the case—including the interest of civil society groups and several governments. It was a clear win for Google, even if it was also a setback (more on that in a moment). Then there is the second decision, which received much less attention and involves the handling of “sensitive data”—where the court effectively creates a notice-and-delist regime for certain kinds of information, such as health care records, criminal justice records and so on. This second case is also a sort of win for Google, but how the second decision will be operationalized is much less clear (Stanford’s Daphne Keller has a helpful breakdown of the ruling).
There’s much to unpack here, but I thought I would just highlight three things. The first is that Google won the battle: The court concluded that France does not have the current authority to demand that its delisting orders apply around the world. Second, however, Google is losing the larger war against global injunctions: Europe’s highest court explicitly rejected the idea that global orders are inherently problematic or somehow incompatible with a global internet. To the contrary, the court left the door wide open—even put out a welcome mat—for future extraterritorial regulations of the internet. Third, contrary to the general story that Europe views and regulates the internet differently from the U.S., these cases bear striking similarities to some of the most pressing regulatory challenges on this side of the Atlantic.
Lesson #1: France’s Global Injunction Was Problematic
The headline takeaway from the case is that France’s injunction ordering Google to remove links to certain material from its search results globally is not valid. In accordance with Europe’s “Right to Be Forgotten” regime, Google had agreed to remove certain offending search results from its search product; the question was which search products, and where. The firm had at least three distinct options:
- Domain-level response: Remove the offending listings only from the country-level domain (in this case, google.fr).
- Location-specific response: Use geolocation technology to identify where the user is located and make sure that users in the relevant jurisdiction (in this case, Europe) do not see links to the offending material.
- Global takedown: Stop linking to the offending material worldwide.
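The difference between the three scopes can be sketched in a few lines of entirely hypothetical code. The policy names, the `Request` fields and the region labels below are illustrative assumptions for the sake of the comparison, not a description of Google’s actual machinery:

```python
# Hypothetical sketch of the three delisting scopes: given a request that
# has already been granted, does a user still see the link?
from dataclasses import dataclass

@dataclass
class Request:
    domain: str       # which search product, e.g. "google.fr" or "google.com"
    user_region: str  # region inferred via geolocation, e.g. "EU" or "US"

def show_link(policy: str, req: Request) -> bool:
    """Return True if the delisted link is still shown under the given policy."""
    if policy == "domain":       # hide only on the country-level domain
        return req.domain != "google.fr"
    if policy == "geolocation":  # hide for any user geolocated in the EU
        return req.user_region != "EU"
    if policy == "global":       # hide everywhere, for everyone
        return False
    raise ValueError(f"unknown policy: {policy}")

# A user in the EU who switches to google.com still sees the link under the
# domain-level policy, but not under the geolocation or global policies.
eu_on_com = Request(domain="google.com", user_region="EU")
assert show_link("domain", eu_on_com) is True
assert show_link("geolocation", eu_on_com) is False
assert show_link("global", eu_on_com) is False
```

The toy example makes the regulatory stakes concrete: the domain-level response is trivially circumvented by typing a different URL, the geolocation response follows the user rather than the product, and the global response leaves nothing to circumvent at all.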
In this case, as in others, Google started with a domain-level response and was pushed toward a location-specific one, yet the regulator insisted on a global takedown. Google initially removed offending material from its google.fr results but not from google.com. After Google convened an advisory council, which recommended that the firm use its geolocation technologies to remove the content from the results seen by users located in France, Google agreed to do so. But the CNIL concluded this wasn’t good enough, insisting that Google remove the offending links from all of its search results wherever they are seen; in short, the CNIL wanted a global injunction.
The European Court of Justice concluded, however, that the CNIL did not have express authority for such an order under European data protection law (specifically Directive 95/46 and Regulation 2016/679), and even if such authority might be implied, the CNIL did not sufficiently weigh countervailing considerations. The court noted that while the drafters of European privacy law sought to “guarantee a high level of protection of personal data throughout the European Union,” and that global delisting would “meet that objective in full,” the “right to the protection of personal data is not an absolute right.” Rather, a right to privacy “must be considered in relation to its function in society and be balanced against other fundamental rights,” including “the freedom of information of internet users.” The court found that while European privacy law strikes “a balance between that right and that freedom so far as the Union is concerned” (emphasis added), it does not do so for users outside the union. In sum, there was no express EU authorization for the CNIL’s global order, and because the order does not balance privacy rights with freedom of information concerns, the court killed it.
Lesson #2: Global Injunctions Are Not Inherently Problematic
While the court said that EU law does not give France express authority to compel Google to remove listings worldwide, “it also does not prohibit such a practice.” The Court of Justice suggested that, even absent any new statutory authority from the EU, regulators—though it is unclear which ones—might nonetheless have good reason to order a global injunction, as long as they balance privacy with freedom of information:
Accordingly, a supervisory or judicial authority of a Member State remains competent to weigh up, in the light of national standards of protection of fundamental rights … a data subject’s right to privacy and the protection of personal data concerning him or her, on the one hand, and the right to freedom of information, on the other, and, after weighing those rights against each other, to order, where appropriate, the operator of that search engine to carry out a de-referencing concerning all versions of that search engine.
This means that even before EU legislators grant express statutory authorization for global injunctions, a data regulator such as the CNIL may issue a global delisting demand, provided it demonstrates that it has weighed privacy against freedom of information and that the order is “appropriate” (a caveat that will surely be the subject of future litigation).
Not only did the court find nothing wrong with global injunctions per se, but it also suggested that they are perhaps especially useful as remedies for internet harms, which are often far reaching. The court noted:
In a globalised world, internet users’ access — including those outside the Union — to the referencing of a link referring to information regarding a person whose centre of interests is situated in the Union is thus likely to have immediate and substantial effects on that person within the Union itself.
Such considerations are such as to justify the existence of a competence on the part of the EU legislature to lay down the obligation, for a search engine operator, to carry out, when granting a request for de-referencing made by such a person, a de-referencing on all the versions of its search engine. [Emphasis added.]
This is consistent with the thinking of the Supreme Court of Canada, which upheld a global injunction in 2017 in Google v. Equustek. While Google argued that the internet’s global nature made global injunctions offensive, the court found that the argument ran the other way: “The problem in this case is occurring online and globally. The Internet has no borders — its natural habitat is global. The only way to ensure that the interlocutory injunction attained its objective was to have it apply where Google operates — globally.” [Emphasis added.]
There was much ado about the Equustek decision—mostly commentators complaining that this was an improper exercise of extraterritorial jurisdiction, with the suggestion that extraterritoriality is at odds with international comity. As I explain in “Litigating Data Sovereignty,” 128 Yale L.J. 328 (2018), this is wrong. Foreign affairs scholars, principally Bill Dodge, have shown convincingly that comity doctrines are just as much about recognition and accommodation as they are about restraint. The history of extraterritorial, even global, orders is long. The internet will not stop these orders; to the contrary, it might call for more of them.
Lesson #3: The Case Mirrors Two Big Debates in the U.S.: Universal Injunctions and Content Moderation
Reading the EU decisions, I was struck by just how much they echo key law-and-policy debates in the U.S. The same day that the Right to Be Forgotten decisions came out, I read Mila Sohoni’s fantastic new article, “The Lost History of the ‘Universal’ Injunction,” forthcoming in the Harvard Law Review. Before the travel ban cases, but especially after them, federal courts scholars have been focused on—consumed by?—the scope of federal injunctions. The federal courts debate is different in key respects from the debate over global internet injunctions—principally because the former is about who may be enjoined, not just where. But the debates overlap to a significant degree, interrogating both the proper role of courts in a democratic society and, just as importantly, the appropriate scope of equitable remedies. It would be interesting to know if one’s views about “universal injunctions” in the federal courts debate are predictive of one’s views on “global injunctions” in the internet context (and vice versa). I suspect they are not.
I was also struck by just how familiar the European Court of Justice’s statements about regulating online speech felt. The court blustered about the need for strong speech regulations on the one hand, but then acknowledged that, in practice, such regulation is nearly impossible to get right on a massive platform. In its sensitive data ruling, the court noted that, while Google must get consent before processing sensitive classes of data, it did not expect Google to solve this problem without some help from users in identifying what was in fact sensitive and non-consensually shared data. The court noted that “[i]n practice, it is scarcely conceivable” (emphasis added) that Google will actually be able to get consent for everything a user posts online. Users must tell Google what content is sensitive and non-consensual, so that Google knows what to take down. This is notice and takedown, a regime we know well in the U.S.
Here in the States, we regularly demand new or better content moderation rules only to later punt because it turns out it’s just really hard to do. Facebook is practically begging not to make the rules anymore—or at least not to do so alone—and yet regulators have been reluctant to help out the firm with clear rules. Then there is the practical problem of moderating huge volumes of content in real time. When Mark Zuckerberg testified about the difficulty of cracking down on Russian influence campaigns using social media platforms, he suggested repeatedly that artificial intelligence (AI) would fix the content moderation problem, but when pressed about the likelihood of that happening or the timeline, he balked. AI is just not up to the task of perfect real-time moderation on massive platforms. That’s why Facebook has so many thousands of folks working on content moderation (as, by the way, does the Chinese government). Content moderation is hard, especially in real time, and the European Court of Justice appears to agree.