How should the First Amendment apply to laws that tell giant platforms like Facebook or Twitter how to police third-party content? On one view, content moderation is a form of constitutionally protected “speech” in itself, much as a newspaper’s editorial choices are speech. But this view leads to an absurd result in which the First Amendment’s free speech guarantee becomes a mandate for a small number of corporate heads to rule public discourse. This paper therefore offers an alternative: When a law regulates the dominant platforms’ content policies, the law’s downstream effects on the speech of users should determine whether it violates the First Amendment.
This kind of analysis will require significant legal innovation. The dominant platforms today host virality-driven environments whose internal dynamics undermine First Amendment law’s traditional understanding that public discourse can mostly regulate itself. The First Amendment’s high-level purposes will have to translate differently to these spaces, with doctrinal details that often bear little resemblance to the black-letter law that applies in more traditional settings.
At worst, we may find ourselves faced with the question of how much the First Amendment’s traditional guarantees must be watered down to account for the new and dangerous physics of ad-driven viral discourse. But more optimistically, the First Amendment could become a spur for regulators to develop and implement new content-neutral measures for mitigating speech-related harm. These measures might create a new, slower model of online speech—one that is less prone to manipulation and frenzy, less needful of censorship, and therefore more hospitable to the true freedom of speech.