On March 10, the FBI’s Cyber Division released a Private Industry Notification (PIN) warning that “Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.” The PIN explains that manipulated images or video—often referred to as “deepfakes”—can be investigated by the FBI when the synthetic content is malicious and “attributed to foreign actors or is otherwise associated with criminal activities.” The report specifically highlights content generated with artificial intelligence or machine learning techniques. It alleges that Russian, Chinese, and Chinese-language actors have already used these emerging technologies to create realistic-looking profile images of nonexistent people in an effort to make their messages appear more authentic to online users. As technology continues to advance, the PIN asserts, the public is increasingly likely to encounter fraudulent, synthesized content online.
The PIN warns that “cyber actors may use synthetic content to create highly believable spearphishing messages or engage in sophisticated social engineering attacks,” citing a November 2020 Europol research report. It further provides guidance for identifying the use of deepfakes in influence operations, using a photo from thispersondoesnotexist.com to illustrate telltale indicators of deepfake technology. And it offers general guidance for combating disinformation campaigns in a digital landscape littered with synthetic content.
The PIN was coordinated with the Department of Homeland Security’s Cybersecurity & Infrastructure Security Agency (CISA).
You can read the PIN here or below: