Verified Accountability: Self-Regulation of Content Moderation as an Answer to the Special Problems of Speech Regulation

By Evelyn Douek
Wednesday, September 18, 2019, 1:47 PM

The “techlash” of the past few years represents a moment of quasi-constitutional upheaval for the internet. The way a few private companies have been “governing” large parts of the digital world has suffered a crisis of legitimacy. Calls to find mechanisms to limit the arbitrary exercise of power online have gained new urgency. This task of “digital constitutionalism” is one of the great projects of the coming decades. It is especially pressing in the context of content moderation – platforms’ practice of designing and enforcing rules for what they allow to be posted on their services. Historical forms of public and private governance offer limited guidance. Platforms are not nation states. But the unprecedented power major tech platforms wield over individual rights and public discourse also differentiates them from corporations as we have known them. This outsized impact on rights so central to democracy has led to demands for greater accountability for decisions about online speech, but extensive government intervention can be a poor tool for achieving this goal. Governmental involvement in speech regulation is uniquely pernicious, and the cure for the problems with content moderation should not be worse than the disease.

Instead, platforms need to innovate to create new forms of self-regulation to meet the demands of the moment. To date, the most developed proposal for an independent constraint on a platform’s content moderation is Facebook’s Oversight Board – a court-like body that will hear appeals about the most difficult and important content moderation decisions Facebook makes and give public reasons for its decisions. This paper takes the Oversight Board concept as the basis of a possible model in examining the promises and limitations of platform self-regulation. While this paper’s focus is on the benefits of an independent appeals body, to be truly effective any such body needs to be embedded in an entire system of governance. As Facebook has itself noted, this is a project that will take years. But the task is also urgent given its importance and real-world effects.

Semi-independent and transparent self-regulatory oversight mechanisms offer significant advantages, not only over the current delegitimized governance structures but in absolute terms. As the actors closest to the front line, platforms will always need to play a significant role in drawing lines for online speech, given the high-volume, fast-moving and context-dependent nature of the decisions involved. A recent French government report acknowledged the benefits of this responsiveness and flexibility in endorsing a model of government regulation that “capitaliz[es] on this self-regulatory approach already being used by the platforms, by expanding and legitimising it.” This expansion and legitimacy can come from internal oversight, which can create a forum for the public contestation of platform rules and their implementation. But it is also true that self-regulatory solutions are likely to be a significant disappointment to many. They will not be able to meet the current expansive demands for due process and transparency in most content moderation decisions. Nor will they be able to create global norms about the appropriate limits of freedom of expression. But these goals set unrealistic benchmarks.

This paper first looks at the puzzle of why a private company might engage in a kind of constitutionalism and the demands that such projects are designed to meet. It then turns to the limitations of these measures before explaining their benefits and the reasons why they will be an important part of the future of online governance.

With the final charter of Facebook's Oversight Board recently announced, this conversation is as timely as it is urgent.