With little fanfare and less public notice, Congress and the executive branch have cooperated effectively over the past decade to build a legal architecture for military cyber operations.
Aegis: Security Policy in Depth
Aegis explores legal and policy issues at the intersection of technology and national security. Published in partnership with the Hoover Institution National Security, Technology and Law Working Group, it features long-form essays by the working group, examines major new books in the field, and carries podcasts and videos of the working group’s events in Washington and Stanford. Aegis examines the legal and policy options that better shield America, its allies, and civilians worldwide from the risks of the modern world. The Hoover Working Group on National Security, Technology, and Law brings together national and international specialists with broad interdisciplinary expertise to analyze how technology affects national security and national security law.
Our interview is with Mara Hvistendahl, investigative journalist at The Intercept and author of a new book, The Scientist and the Spy: A True Story of China, the FBI, and Industrial Espionage, as well as a deep WIRED article on the least-known Chinese AI champion, iFlytek.
Across the United States and Europe, the act of clicking “I have read and agree” to terms of service is the central legitimating device for global tech platforms’ data-driven activities. In the European Union, the General Data Protection Regulation has recently come into force, introducing stringent new criteria for consent and stronger protections for individuals. Yet the entrenched, long-standing focus on users’ control and consent fails to protect consumers who face increasingly intrusive data collection practices.
Platforms’ ability to assess the context of content plays a major role in determining whether “new school regulation” sets proportional limits to freedom of speech.
Verified Accountability: Self-Regulation of Content Moderation as an Answer to the Special Problems of Speech Regulation
The “techlash” of the past few years represents a moment of quasi-constitutional upheaval for the internet. The way a few private companies have been “governing” large parts of the digital world has suffered a crisis of legitimacy. Calls to find mechanisms to limit the arbitrary exercise of power online have gained new urgency. This task of “digital constitutionalism” is one of the great projects of the coming decades. It is especially pressing in the context of content moderation – platforms’ practice of designing and enforcing rules for what they allow to be posted on their services.
Last week, as part of the Hoover Institution’s Security by the Book series, Jack Goldsmith spoke with Herb Lin and Amy Zegart, co-directors of the Stanford Cyber Policy Program.
This essay closely examines the effect on free-expression rights when platforms such as Facebook or YouTube silence their users’ speech. The first part describes the often messy blend of government and private power behind many content removals, and discusses how the combination undermines users’ rights to challenge state action. The second part explores the legal minefield for users—or potentially, legislators—claiming a right to speak on major platforms.
Subscribe to Aegis: Security Policy in Depth via RSS.