Comforting claims have circulated in recent days that there is nothing to fear from deepfakes. We profoundly disagree.
The president retweeted a deepfake of Joe Biden. The fake was made on an iPhone app that I’d already been researching.
The House Ethics Committee has announced that members who share deepfakes or “other audio-visual distortions intended to mislead the public” could face sanctions. It’s a small but noteworthy step.
The good news is that Facebook is finally taking action against deepfakes. The bad news is that the platform’s new policy does not go far enough.
Amid the hubbub of L’Affaire Ukrainienne, you could be forgiven for overlooking another story that has emerged out of Congress over the past week. It’s a grubby, unpleasant story—so much so that it feels ugly to draw attention to it. But the times are ugly, after all, and the story is a concerning harbinger of what might be to come in the lead-up to 2020.
Livestream: HPSCI Hearing on the National Security Challenges of Artificial Intelligence, Manipulated Media and Deepfakes
The House Permanent Select Committee on Intelligence will host a hearing entitled "The National Security Challenge of Artificial Intelligence, Manipulated Media, and ‘Deepfakes’" at 9:00 a.m. on Thursday. A video of the hearing is available below.
In the summer of 2016, a meme began to circulate on the fringes of the right-wing internet: the notion that presidential candidate Hillary Clinton was seriously ill. Clinton suffered from Parkinson’s disease, a brain tumor and seizures, among other things, argued Infowars contributor Paul Joseph Watson in a YouTube video. The meme, and the allegations behind it, were entirely unfounded.
Back in February, we joined forces in this post to draw attention to the wide array of dangers to individuals and to society posed by advances in “deepfake” technology (that is, the capacity to alter audio or video to make it appear, falsely, that a real person said or did something). The post generated a considerable amount of discussion, which was great, but we understood we had barely scratched the surface of the issue.
Fake news is bad enough already, but something much nastier is just around the corner: as Evelyn Douek explained, the “next frontier” of fake news will feature machine-learning software that can cheaply produce convincing audio or video of almost anyone saying or doing just about anything.
Bobby Chesney and Danielle Citron have painted a truly depressing picture of a future in which faked video and audio cannot be distinguished from the real thing. And I think they are right to be depressed about it, though I want to discuss a possible technological solution that they did not address.