This month, the Defense Advanced Research Projects Agency (DARPA) will assess the first phase of its Explainable AI program—a multiyear, multimillion-dollar effort to enable artificial intelligence (AI) systems to justify their decisions.
A recent post on the New York Times’s At War blog begins with this hypothetical scenario:
In my first post in this series, I wrote that one definition of artificial intelligence (AI) is a machine that thinks. But is it? Several people with technical backgrounds in the AI field reached out to me after reading that post. One comment struck me in particular: AI is neither "A" nor "I." It is just computer code. Nothing is thinking; a computer is simply following instructions, mapping inputs to outputs in pursuit of a goal.
This is the third post in my series about the counterintelligence implications of artificial intelligence (AI). The first two are here and here. I’ll start this one with a story.
Chinese human rights practices are in the news again. The White House is reportedly weighing sanctions against Chinese officials and companies that are engaged in or facilitating the mass surveillance and detention of Uighurs in the Xinjiang Uighur Autonomous Region (XUAR).
In the first part of this series on the counterintelligence implications of artificial intelligence (AI), I discussed AI and counterintelligence at a high level and described some features of each that I think are particularly relevant to understanding the intersection between the two fields. That general discussion leads naturally to one particular counterintelligence question related to AI: How do we identify, understand and protect our most valuable AI assets?
Artificial intelligence will change the world. Because so many people and companies believe this, AI and the entire technological ecosystem in which it functions are highly valuable to private-sector organizations and nation-states. That means nations will try to identify, steal, corrupt, or otherwise counteract the AI and related assets of others, and will use AI against one another in pursuit of their own national interests.
A review of Paul Scharre’s “Army of None: Autonomous Weapons and the Future of War” (W.W. Norton, 2018).
On April 13, China’s delegation to the United Nations Group of Governmental Experts on lethal autonomous weapons systems announced the “desire to negotiate and conclude” a new protocol for the Convention on Certain Conventional Weapons “to ban the use of fully autonomous lethal weapons systems.”
The growing military use of predictive algorithms and artificial intelligence is stirring up corporate and academic protests. Consider, for instance, a recent Reuters report that 50 AI researchers from 30 countries are boycotting South Korea’s top university because it opened an AI weapons lab in partnership with a large South Korean company.