There are complicated questions about how explainable artificial intelligence might impact assessments of the legality of autonomous weapon systems.
Latest in Artificial Intelligence
A new project addresses how artificial intelligence might change how states decide to use force against one another.
Some experts view AI as neither artificial nor intelligent—just computer code.
How might adversaries apply AI to the vast amounts of data they collect about Americans to understand us, predict what we will do, and manipulate our behavior in ways that advantage them?
There is value in putting down a marker that using the technology this way is not acceptable.
How do we identify, understand and protect our most valuable AI assets?
How do we protect the valuable national asset of artificial intelligence against a range of threats from hostile foreign actors, and how do we protect ourselves against the threat from AI in the hands of adversaries?
A review of Paul Scharre’s “Army of None: Autonomous Weapons and the Future of War” (W.W. Norton, 2018).
China’s evolving approach to lethal autonomous weapons systems takes the global stage at the U.N.’s Group of Governmental Experts.
In the context of both criminal justice and military operations, predictive algorithms are likely to be used for a similar purpose: making individual predictions about dangerousness and anticipating the location of future acts of violence.