Fault Lines: Deepfakes and China at the UN
The latest episode of Fault Lines
Latest in deepfakes
Over the course of two short days, figures affiliated with the GOP published three different deceptively edited videos on social media. Platforms can’t handle the challenge alone.
Comforting claims have circulated in recent days that there is nothing to fear from deepfakes. We profoundly disagree.
The House Ethics Committee has announced that members who share deepfakes or “other audio-visual distortions intended to mislead the public” could face sanctions. It’s a small but noteworthy step.
The good news is that Facebook is finally taking action against deepfakes. The bad news is that the platform’s new policy does not go far enough.
Amid the hubbub of L’Affaire Ukrainienne, you could be forgiven for overlooking another story that has emerged out of Congress over the past week. It’s a grubby, unpleasant story—so much so that it feels ugly to draw attention to it. But the times are ugly, after all, and the story is a concerning harbinger of what might be to come in the lead-up to 2020.
The House Permanent Select Committee on Intelligence will host a hearing entitled "The National Security Challenge of Artificial Intelligence, Manipulated Media, and ‘Deepfakes’" at 9:00 a.m. on Thursday. A video of the hearing is available below.
In the summer of 2016, a meme began to circulate on the fringes of the right-wing internet: the notion that presidential candidate Hillary Clinton was seriously ill. Clinton suffered from Parkinson’s disease, a brain tumor, and seizures, among other things, argued Infowars contributor Paul Joseph Watson in a YouTube video. The meme and the allegations behind it were entirely unfounded.
Back in February, we joined forces in this post to draw attention to the wide array of dangers to individuals and to society posed by advances in “deepfake” technology (that is, the capacity to alter audio or video to make it appear, falsely, that a real person said or did something). The post generated a considerable amount of discussion, which was great, but we understood we had barely scratched the surface of the issue.
Fake news is bad enough already, but something much nastier is just around the corner: as Evelyn Douek explained, the “next frontier” of fake news will feature machine-learning software that can cheaply produce convincing audio or video of almost anyone saying or doing just about anything.