When a state suffers an internationally wrongful act at the hands of another state, international law allows the injured state to respond in a variety of ways. Depending on the nature, scope, and severity of the initial wrongful act, lawful responses can range from a demand for reparations in response to a low-level violation to a forcible act of self-defense in response to an armed attack. Countermeasures offer an additional way for a state to respond to an internationally wrongful act.
Aegis: Security Policy in Depth
Aegis explores legal and policy issues at the intersection of technology and national security. Published in partnership with the Hoover Institution National Security, Technology and Law Working Group, it features long-form essays by the working group, examines major new books in the field, and carries podcasts and videos of the working group’s events in Washington and Stanford. Aegis examines the legal and policy options that better shield America, its allies, and civilians worldwide from the risks of the modern world. The Hoover Working Group on National Security, Technology, and Law brings together national and international specialists with broad interdisciplinary expertise to analyze how technology affects national security and national security law.
With little fanfare and less public notice, Congress and the executive branch have cooperated effectively over the past decade to build a legal architecture for military cyber operations.
Our interview is with Mara Hvistendahl, investigative journalist at The Intercept and author of a new book, The Scientist and the Spy: A True Story of China, the FBI, and Industrial Espionage, as well as a deep WIRED article on the least-known Chinese AI champion, iFlytek.
Across the United States and Europe, the act of clicking “I have read and agree” to terms of service is the central legitimating device for global tech platforms’ data-driven activities. In the European Union, the General Data Protection Regulation has recently come into force, introducing stringent new criteria for consent and stronger protections for individuals. Yet the entrenched long-term focus on users’ control and consent fails to protect consumers who face increasingly intrusive data collection practices.
Platforms’ ability to assess the context of content plays a major role in determining whether “new school regulation” sets proportional limits to freedom of speech.
Verified Accountability: Self-Regulation of Content Moderation as an Answer to the Special Problems of Speech Regulation
The “techlash” of the past few years represents a moment of quasi-constitutional upheaval for the internet. The way a few private companies have been “governing” large parts of the digital world has suffered a crisis of legitimacy, and calls to find mechanisms to limit the arbitrary exercise of power online have gained new urgency. This task of “digital constitutionalism” is one of the great projects of the coming decades. It is especially pressing in the context of content moderation: platforms’ practice of designing and enforcing rules for what they allow to be posted on their services.
Last week, as part of the Hoover Institution’s Security by the Book series, Jack Goldsmith spoke with Herb Lin and Amy Zegart, co-directors of the Stanford Cyber Policy Program.
Subscribe to Aegis: Security Policy in Depth via RSS.