The conspiracy theory posed genuine danger, but Twitter’s action does not signal a new era of accountability for big technology platforms.
Evelyn Douek is a Lecturer on Law and S.J.D. candidate at Harvard Law School, and an Affiliate at the Berkman Klein Center for Internet & Society. She studies online speech regulation and platform governance. Before coming to Harvard to complete a Master of Laws, Evelyn clerked for the Chief Justice of the High Court of Australia, the Hon. Susan Kiefel, and worked as a corporate litigator. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
The pandemic is shaping up to be a formative moment for tech platforms.
The company’s new white paper is a thoughtful document that raises serious questions with which regulators, and anyone else interested in the future of online content regulation, need to reckon.
The House Ethics Committee has announced that members who share deepfakes or “other audio-visual distortions intended to mislead the public” could face sanctions. It’s a small but noteworthy step.
The new bylaws include a number of promising signs about Facebook’s commitment to the Oversight Board experiment. But the board’s original ambit of operations will be fairly limited.
David Kaye, the United Nations special rapporteur on the promotion and protection of the freedom of opinion and expression, recommended in June 2018 that social media companies adopt international human rights law as the authoritative standard for their content moderation. Before Kaye’s report, the idea was fairly out of the mainstream. But the ground has shifted.
Last week, Facebook announced the final charter for its independent Oversight Board, which will have the power to hear cases and overrule Facebook’s decisions about what can and cannot remain on Facebook’s platforms. This is a potentially pivotal moment in the history of online speech and an unprecedented innovation in private platform governance.