“Artificial Intelligence Could Soon Enhance Real-Time Police Surveillance” reads a recent Wall Street Journal headline. Technology companies are working with U.S. police departments to develop facial recognition technology for body cameras—but the United States isn’t alone in its exploration and development of facial recognition technology.
Ashley Deeks is a Professor of Law at the University of Virginia Law School. She joined the Virginia faculty in 2012 after two years as an academic fellow at Columbia Law School. She served for ten years in the Legal Adviser's Office at the State Department, most recently as the Assistant Legal Adviser for Political-Military Affairs. In 2007-08 she held an International Affairs Fellowship from the Council on Foreign Relations. After graduating from the University of Chicago Law School, she clerked for Judge Edward Becker on the U.S. Court of Appeals for the Third Circuit.
The growing military use of predictive algorithms and artificial intelligence is stirring up corporate and academic protests. Consider, for instance, a recent Reuters report that 50 AI researchers from 30 countries are boycotting South Korea’s top university because it opened an AI weapons lab in partnership with a large South Korean company.
These days, stories about the use of facial recognition software (FRS) are legion. One of us wrote in January about the Chinese government’s extensive use of FRS. Just this month, U.S. Customs and Border Protection began testing facial recognition technology at around a dozen U.S. airports.
British Prime Minister Theresa May made the remarkable statement Monday that, absent a credible response from the Russian government, the United Kingdom will conclude that the use of a nerve agent against a former Russian informant, Sergei Skripal, was “an unlawful use of force by the Russian state against the United Kingdom.” News reports have proliferated, but some of them are inaccurate.
Every day seems to bring a new article about China’s pervasive use of facial recognition technology.
Microsoft and Google have joined Facebook in revealing that Russia may have purchased ads in an effort to manipulate the 2016 U.S. presidential election. Reactions to this news have been a mix of bewilderment and alarm—but perhaps we should not be so surprised. The fabricated news stories and click-bait headlines that dominated social media throughout the 2016 campaign are not a new tactic for the Russians. They are simply the latest iteration of a practice Moscow has used for nearly a century.
Editor's note: This piece is the second installment in a multi-blog series building on the Fifth Annual Transatlantic Workshop on International Law and Armed Conflict, as explained in detail here.