Editor’s Note: Rampage killings are a longstanding U.S. problem that is only growing worse. After many attacks, law-enforcement officials discover social media postings or writings that indicated murder was in the air, raising the question of whether this information could have been exploited to prevent the killing in the first place. George Mason's J.D. Maddox, who was a U.S. government leader on counterterrorism and technology issues, argues that there are technical tools to find manifestos and other indicators of looming mass violence online, but that current laws, norms, and policies make it unlikely these will be used.
Rampage killings have attracted new attention from policymakers, given the apparent uptick in attacks and the horror of October’s massacre in Las Vegas. Serious students of the problem describe an acceleration of attacks and offer a gamut of possible psychological and sociological explanations for the violence. Technologists are quick to suggest some version of “a sophisticated algorithm” as a hopeful solution. But while this suggestion is technically feasible, it remains politically impractical.
Rampage killings are often preceded by the killer’s publication of an online manifesto (a statement of principles or intent). The manifesto may be one of the few outward signs of impending violence, and it offers a momentary opportunity for intervention before an attack. Historically, though, these manifestos have usually been dismissed, overlooked, or not taken at face value.
One of the most horrifying rampage killings in U.S. history occurred in June 2015, when a white supremacist opened fire on a Bible study group at the Emanuel African Methodist Episcopal Church in Charleston, South Carolina. Nine parishioners were killed. While most commentators spent their breath speculating, after the fact, on the shooter’s motivations and psychological problems, such speculation does little to prevent attacks. Only concrete indicators of impending attacks are useful for prevention.
In the case of the Charleston shooting, the attacker posted a 2,444-word manifesto online long before the attack, stating his intent to “go into the ghetto and fight.” It’s easy to excuse our disbelief in an attacker’s manifesto. Words seem to carry little sincerity these days. But the recent repetition of this obvious pattern—an unbelievable manifesto followed by unbelievable violence—suggests a literal reading of these manifestos may be in order.
The Charleston attacker was not alone. The murderer of six people in Isla Vista, California, in May 2014 posted a video manifesto on YouTube before his attack; the murderer of 49 people in Orlando, Florida, in June 2016 posted his intentions on Facebook a few hours before his attack; and the murderer of six people in Tucson, Arizona, in January 2011 posted a short message of intent on MySpace six hours before his attack.
Of course, some attackers issue no manifesto at all, or choose to distribute their manifestos through traditional media. For example, the murderer of 32 people at Virginia Tech in April 2007 (before social media became ubiquitous) mailed his hardcopy manifesto to NBC News in New York, describing his mistreatment and the murders in the past tense. Such attackers evade technical media monitoring and limit the broad applicability of any monitoring system; no algorithm can find a manifesto that never appears online. But the trend of publishing manifestos online appears to be growing.
As the IT industry offers improved capabilities to monitor online communications, including complex social media monitoring packages like Synthesio and data-driven journey mapping systems, manifestos offer particular promise as traceable indicators of mass violence. There is no algorithm that would currently guarantee discovery of every genuine statement of violent intent, but a concerted effort to employ technical media monitoring would likely get close quickly. A system that searches for combinations of factors, such as specific phrases, subjunctive mood, blaming, and aggressive or maudlin tone, is feasible right now. A system that correlates these indicators to other databases, such as recent arrest reports and weapons purchase data, would increase effectiveness.
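A minimal sketch of such a factor-combination search might look like the following. The indicator lexicons, phrases, and threshold here are illustrative placeholders, not a real threat-assessment vocabulary; a production system would weight these signals and correlate them against external records rather than simply count them.

```python
import re

# Hypothetical indicator lexicons -- illustrative placeholders only,
# not a vetted threat-assessment vocabulary.
INTENT_PHRASES = ["i will attack", "my final message", "you will all pay"]
AGGRESSIVE_TERMS = {"revenge", "destroy", "punish", "enemies"}
MAUDLIN_TERMS = {"goodbye", "alone", "nobody", "forgotten"}

def score_post(text: str) -> int:
    """Score a post by counting co-occurring indicator factors.

    Each factor contributes one point; combinations of factors, not any
    single hit, are what would surface a post for human review.
    """
    lowered = text.lower()
    tokens = set(re.findall(r"[a-z']+", lowered))
    score = 0
    if any(phrase in lowered for phrase in INTENT_PHRASES):
        score += 1  # explicit first-person statement of intent
    if tokens & AGGRESSIVE_TERMS:
        score += 1  # aggressive tone
    if tokens & MAUDLIN_TERMS:
        score += 1  # maudlin, valedictory tone
    return score

# Only posts combining several factors would be flagged for review.
flagged = score_post("Goodbye. Nobody listened, and you will all pay.") >= 2
```

Correlating such scores against other databases, as suggested above, would simply add further factors to the same tally before anything reached a human analyst.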
The historic Unabomber case supports the idea that manifestos are useful for discovery and intervention. The Unabomber killed three people in attacks between 1978 and 1995 and sought wide distribution of his 50-page manifesto; he might well have used social media for distribution had it been available. When the manifesto was eventually published in the Washington Post, his brother and sister-in-law recognized his writing style and turned him in to the FBI. Their identification of his writing is akin to a technical media monitoring system, but in slow motion.
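What the family did by intuition, recognizing a familiar writing style, can be approximated mechanically. One common stylometric technique compares character n-gram frequency profiles between texts; the sketch below is a toy illustration of that idea, not a validated attribution method.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Frequency profile of character n-grams, a common stylometric feature."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts' n-gram profiles (0.0 to 1.0)."""
    pa, pb = char_ngrams(a), char_ngrams(b)
    dot = sum(pa[g] * pb[g] for g in pa)
    norm = sqrt(sum(v * v for v in pa.values())) * \
           sqrt(sum(v * v for v in pb.values()))
    return dot / norm if norm else 0.0
```

In principle, a newly surfaced manifesto could be compared this way against a corpus of known writings, with high similarity triggering human review, which is roughly what the Kaczynski family did by hand.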
While it may be technically feasible to discover manifestos online and in other communications, the intellectual problem of rapid assessment remains a high hurdle to successful intervention. The July 2011 mass murder of 77 people in Norway, most of them children, illustrates the problem. Ninety minutes before the attack, the murderer sent a 1,518-page manifesto to over 1,000 recipients, explicitly describing his imminent attack. Even with the specific details of the attack plan in recipients’ hands, the attack went unhindered: the attacker was already in place, the manifesto was far too long to assess in the time available, and the community probably could not have overcome its disbelief that such violence was possible.
Similar to the intellectual problem of rapid assessment, there is the challenge of false positives. Complex systems like this invariably suffer failures during development, and an algorithm hunting for manifestos would almost certainly misflag strong political diatribes and other heated speech. The U.S. entertainment industry has popularized a dystopian narrative about technologies that predict violence, despite reports of their successes, and that narrative goes hand in hand with public fear of these technologies and of encroachments on free speech. Any new system would require extensive and costly testing to avoid real-world failures.
But perhaps more limiting than the problems of rapid assessment and false positives is the unresolved question of legality. As the social media industry licks its wounds after its October lashing on Capitol Hill, it is grappling with new ways to counter social media abusers. It might seem like common sense to expand the use of monitoring systems that detect misuse of social media for destructive or violent ends, but the industry is also beholden to pro-privacy pressure groups, which have made headway by publicly condemning the use of these technologies for crowd monitoring and other purposes.
Within the social media industry, any new monitoring effort must now reckon with the cautionary tale of Geofeedia. Once a $24-million company specializing in geolocating people at public events based on their social media activity, Geofeedia was investigated by the ACLU and lambasted in the media in 2016. It lost its social media data access and ended up cutting its staff in half. The story now surfaces routinely in investment pitch meetings: “How would you avoid a Geofeedia outcome?” Commercially, the safest answer is to avoid the possibility entirely by forgoing open development of monitoring technology, however well intentioned it may be.
For other sectors, though, such as federal law enforcement, the implementation of these algorithms poses a separate set of questions: about the legal limits and intent of the Privacy Act, about agencies’ ability to avoid scrutiny, and about the imminence of violence. The Privacy Act of 1974 was written to limit federal agencies’ storage and manipulation of personally identifiable information, which is the core activity of current social media monitoring. But if we know that an invasive technology promises to prevent some rampage killings, events that are increasing in scale and horror and possibly contributing to societal and economic instability, does the development and implementation of that technology supersede the Privacy Act’s limitations? It appears the public does not trust the law-enforcement community’s instinctive answer to this question.
The nearly impossible challenge of implementing these new monitoring technologies places law enforcement at a disadvantage, the kind of disadvantage law enforcement has faced throughout its history and that typically requires a significant tragedy to rebalance. It wasn’t until the infamous 44-minute “North Hollywood Shootout” of February 1997, for example, that police began routinely carrying weapons comparable to the bad guys’ automatic weapons. In the United States, difficult decisions tend to be made only after catastrophes that could have been prevented had the decision simply been made a little earlier. This particular change will undoubtedly require an extraordinary event, something that transcends the mind-numbing incrementalism of recent attacks.