Systems based on artificial intelligence are susceptible to adversarial attack. Vulnerability disclosure and management practices can help address the risk.
Latest in AI
Mara Hvistendahl is a staff writer at The Intercept. In this bite-sized edition of ChinaTalk, we discuss two of her pieces: one on the US Ambassador to China's son and ZTE (The Intercept), and another on AI voice-recognition giant iFlyTek (Wired).
Intro and outro music liner notes available exclusively to ChinaTalk supporters. Become one here.
Who's spending big? Does it matter?
As the G20 summit in Buenos Aires gets underway, speculation continues to mount over whether U.S. President Donald Trump and Chinese President Xi Jinping can achieve a breakthrough that would put a floor under U.S.-China trade tensions and the ever-deteriorating bilateral relationship.
In my first post in this series, I wrote that one definition of artificial intelligence (AI) is a machine that thinks. But is it? Several people with technical backgrounds in the AI field reached out to me after reading that post. One comment I found striking was that AI is neither A nor I: it is just computer code. Nothing is thinking; a computer is simply following instructions, mapping inputs to outputs in pursuit of a goal.
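That "inputs to outputs for a goal" framing can be made concrete with a toy sketch (my own illustration, not any system discussed here): a handful of arithmetic steps that fit a single parameter to data by minimizing squared error. Nothing in it resembles thought.

```python
# A deliberately plain "AI": arithmetic that follows instructions
# to map inputs to outputs in pursuit of a goal.
# The goal here is minimizing squared error while fitting y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0    # the single "learned" parameter
lr = 0.05  # learning rate

for _ in range(200):               # repeat the same instructions
    for x, y in data:
        pred = w * x               # output: just a multiplication
        grad = 2 * (pred - y) * x  # gradient of the squared error
        w -= lr * grad             # nudge w toward the goal

print(round(w, 3))  # w converges near 2.0 -- no thinking involved
```

Every "decision" the program makes is a predetermined arithmetic update; scale the parameter count up by billions and the character of the computation does not change.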
For Lawfare readers interested in law and regulation of autonomous weapon systems (AWS), we’re pleased to note our new essay, recently posted to SSRN, “Debating Autonomous Weapon Systems, Their Ethics, and Their Regulation Under International Law.” It appears as a chapter in a just-published volume, The Oxford Handbook of Law, Regulation, and Technology, edited by Rog