The pandemic is shaping up to be a formative moment for tech platforms.
Evelyn Douek is an S.J.D. candidate at Harvard Law School, studying international and transnational regulation of online speech. Before coming to Harvard to complete a Master of Laws, Evelyn clerked for the Chief Justice of the High Court of Australia, the Hon. Justice Susan Kiefel, and worked as a corporate litigator. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
The company’s new white paper is a thoughtful document that raises serious questions that regulators, and the rest of us interested in the future of online content regulation, need to reckon with.
The House Ethics Committee has announced that members who share deepfakes or “other audio-visual distortions intended to mislead the public” could face sanctions. It’s a small but noteworthy step.
The new bylaws include a number of promising signs about Facebook’s commitment to the Oversight Board experiment. But the board’s original ambit of operations will be fairly limited.
David Kaye, the United Nations special rapporteur on the promotion and protection of the freedom of opinion and expression, recommended in June 2018 that social media companies adopt international human rights law as the authoritative standard for their content moderation. Before Kaye’s report, the idea was fairly out of the mainstream. But the ground has shifted.
Last week, Facebook announced the final charter for its independent Oversight Board, which will have the power to hear cases and overrule Facebook’s decisions about what can and cannot remain on Facebook’s platforms. This is a potentially pivotal moment in the history of online speech and an unprecedented innovation in private platform governance.
Verified Accountability: Self-Regulation of Content Moderation as an Answer to the Special Problems of Speech Regulation
The “techlash” of the past few years represents a moment of quasi-constitutional upheaval for the internet. The way a few private companies have been “governing” large parts of the digital world has suffered a crisis of legitimacy. Calls to find mechanisms to limit the arbitrary exercise of power online have gained new urgency. This task of “digital constitutionalism” is one of the great projects of the coming decades. It is especially pressing in the context of content moderation – platforms’ practice of designing and enforcing rules for what they allow to be posted on their services.