On Jan. 6, 2021, a mob of more than 2,000 people attacked the U.S. Capitol seeking to overturn the results of the 2020 presidential election. As government agencies scrambled to identify the perpetrators in the aftermath, they were aided by private citizens who became, often without realizing it, accidental open-source investigators. Abbreviated “OSINT,” open-source intelligence is “produced from publicly available information that is collected, exploited, and disseminated … to an appropriate audience for the purpose of addressing a specific intelligence requirement.” While “intelligence” has not traditionally referred to information used in criminal investigations, OSINT practices have been deployed for a wider variety of uses than their state-held, closed-source predecessors. However, “distinguishing between intelligence, information, and evidence” is important, as the thresholds for verifiability differ depending on the information’s eventual use. Of key interest here is its admissibility in subsequent criminal proceedings.
Following the Jan. 6 attack, OSINT contributed to the federal investigations and indictments of perpetrators in countless ways. For example, the public Instagram account @HomeGrownTerrorists (HGT) was created while the attack was still ongoing. HGT began by posting photo and video content of the Capitol attack that it received from other Instagram users, and the account's followers then commented to identify individuals they recognized. This led to varying reactions—some immediate, like perpetrators losing their jobs, and others more localized, such as blowback on candidates for local office or other damage to community reputations. HGT gained hundreds of thousands of followers in the months following the attack—expanding to a Twitter presence as well—and it has accumulated videos, photos and even screenshots of conversations that implicate the individuals who stormed the Capitol. Another unexpected use of OSINT arose with dating apps. App users reported accounts containing videos and photos from the Jan. 6 attack or sent screenshots of conversations in which other users bragged about their participation. Users reported these accounts not only to the dating apps but also as tips to the FBI.
The hundreds of criminal prosecutions following the Jan. 6 attack demonstrate how information gathering has fundamentally changed with the advent of cell phones and all that they contain—mobile camera and video capacity, social media, geotagging and other location-based metadata, and more. But this is far from the first instance of citizens using open-source information to seek justice and accountability for violent acts against democracy.
In 2014, Eliot Higgins was using social media footage, Google Maps, and other forms of open-source information to piece together clues regarding artillery used by combatants in the Syrian civil war. Higgins went on to found Bellingcat, a nonprofit organization and “collective of researchers, investigators and citizen journalists using open source and social media investigation to probe a variety of subjects.” Working from the motto of “Identify, Verify, Amplify,” Bellingcat seeks to use open-source intelligence to investigate serious crimes and aid in the effort to hold their perpetrators accountable. Relying entirely on open-source information, the organization has investigated and successfully exposed the perpetrators responsible for the downing of Malaysia Airlines Flight 17, the Russian chemical attack and assassination attempt on a former MI6 agent and his daughter in the United Kingdom, the Assad regime’s use of chemical weapons during the Syrian civil war, and countless other acts of violence (and crimes under various sources of international law).
Bellingcat prides itself on the integrity and credibility of its investigative work and provides a great deal of publicly available information to those interested in engaging with OSINT. However, using open-source information to publicly out the perpetrators of an international crime is one thing; using it as evidence in a domestic court of law is another. More specifically, would the masses of information collected by various vigilante social media accounts be admissible in a court of law? In the context of social media being used to investigate the Jan. 6 attack, private citizens are submitting tips to the FBI without knowing they are wading into the realm of digital evidence, so preserving the information essential to authenticating the content is not necessarily a priority for them. Thus, the admissibility of this material at trial is more uncertain than in situations where practiced OSINT investigators, like those at Bellingcat, possess the information. And this barrier to admissibility exists for good reason—even when prosecuting those accused of violent acts against the state, preserving the right to a fair trial is essential to due process, the integrity of the judicial system, and the legitimacy of the rule of law.
Considerations of admissibility are essential if this seemingly endless flow of potential evidence is to be used to its fullest prosecutorial potential. Yet even the streaming videos of “protesters” scaling the walls of the Capitol or battering through its doors in a belligerent crowd present numerous obstacles to admissibility. For the purposes of this post, I will discuss three possible evidentiary concerns—authenticity, reliability and hearsay—and how they may be overcome.
Hypothetically, by the time the FBI views the footage following a tip from the HGT Instagram account or a Bumble user, it is beyond second-hand. The footage would likely have passed from the individual filming the Jan. 6 attack (who, importantly, is not necessarily among the individuals in the video) onto a social media platform—say, Instagram—where it probably would have been downloaded or otherwise recorded, especially if the original Instagram account is private. The tipster then shares it with another account, like HGT, which posts it on its profile and asks its followers to identify the unknown individuals in the video. This saga preceding law enforcement’s acquisition of the information is an impressive feat that demonstrates social media’s role in seeking accountability through the criminal justice system. But in terms of Federal Rules of Evidence (FRE) Rule 901 on authenticating evidence, it is a nightmare.
The kind of visual evidence provided by social media content of the Jan. 6 attack would typically—though not necessarily—have been authenticated through witness testimony. This poses obvious challenges, as the original creator of the social media content might not be known—and, furthermore, is not likely to make themselves known given that doing so would imply their complicity in a felony. There is also the possibility that this footage would be considered self-authenticating evidence under FRE 902(14) following the 2017 amendments to the Federal Rules of Evidence. In the notes for this rule, the Advisory Committee explained that “the expense and inconvenience of producing an authenticating witness for [digital] evidence is often unnecessary.” The committee went on to detail how electronic devices’ metadata—and more specifically, the “hash value,” which is a numeric code used for data identification—may be used to corroborate the authenticity of digital evidence.
However, even the addition of digital evidence as possibly self-authenticating does not wholly solve the authentication challenges posed by OSINT as used here. Much of this footage has been repeatedly recycled and reproduced in different digital mediums as it made its way from the original social media share point to the FBI (a process that, separately, implicates the chain of custody issues discussed below), so the likelihood that the content retained its original metadata is diminished. While professional open-source investigators like those at Bellingcat are meticulous and practiced in preserving online content in ways that retain the integrity of all potentially useful aspects of the digital evidence, those sharing a second-hand recording of an Instagram story may not be prepared, or even aware of the need, to be as diligent.
Related to these issues of authenticity is the challenge of demonstrating a sound chain of custody in how law enforcement acquired the proposed evidence. This goes to the inherently transient nature of social media—exacerbated by the specific interest in short-term content such as Instagram and Snapchat stories. These platforms designed bursts of temporary content specifically for the kinds of behavior users would not want readily available over the long term. While “stories” may commonly be used to show individuals partying, drinking, or smoking, on Jan. 6 the “story” function on Instagram and Snapchat also showed activities like rioting, breaking and entering, and looting. Equally ephemeral regardless of their content, these images would have been captured and shared with tip accounts only in second-hand form, likely rendering both the useful metadata and the chain of custody over the original footage unavailable.
Furthermore, the use of this footage raises reliability issues. While the content’s reliability does not face the same initial barrier to admissibility as authentication, it is likely to be the subject of any cross-examination related to the open-source information. Defense counsel would challenge whether the video footage obtained on Jan. 6 reliably depicts the people and acts it purports to show. Digital content can easily be forged or constructed to convey false information. There are in fact entire businesses dedicated to constructing authentic digital evidence that is simultaneously unreliable. One example is the slew of misinformation campaigns surrounding the 2016 presidential election, which produced troves of digital information and social media content that, even if preserved in its authentic form, had been fabricated or staged to misrepresent the information it claimed to portray.
Finally, the use of this footage at any of the numerous trials following the Jan. 6 attack is replete with hearsay issues. Hearsay refers to an out-of-court statement offered at trial to prove the truth of the matter asserted. Judicial systems vary in their treatment of hearsay. Commonly, international tribunals such as the International Criminal Court (ICC) do not have any explicit hearsay rules, and admissibility in general is left to judges’ considerable discretion. While ICC judges have shown a “notable trend towards a stricter approach regarding the use of … anonymous hearsay and other open source material,” this does not compare to the procedural labyrinth of the hearsay rules in the United States’ Federal Rules of Evidence. The dozens of rules regulating the admissibility of hearsay statements—and their many exceptions and exclusions—are meant to preserve truth at trial and protect defendants’ constitutional right to confront those who bear witness against them, as found in the Confrontation Clause of the Sixth Amendment. This in turn upholds the integrity of the judicial process and the legitimacy of the rule of law.
The social media footage from the Jan. 6 attack contains no shortage of statements, especially incriminating ones like those calling for acts of violence against members of Congress and the vice president. HGT and other accounts have also produced text message conversations allegedly involving perpetrators of the Jan. 6 attack. Were these offered at trial for the truth of their contents, defense attorneys would have fairly clear-cut means to challenge their admission on hearsay grounds wherever the sources contain written or verbal statements. That said, some hearsay exceptions may apply, including those for an excited utterance or present sense impression under FRE 803. Furthermore, where the individual in the video or text conversation is the one on trial, this evidence could be brought in under the hearsay exclusions contained in FRE 801(d)—particularly the treatment of co-conspirator statements as statements by a party-opponent per FRE 801(d)(2)(E).
To be clear, the purpose of being critical of this innovative burst of digital evidence is maintaining an eye toward prosecution, toward seeking accountability and justice for acts of domestic terrorism. The Federal Rules of Evidence—while arguably evolving too slowly to track with technology’s rapid advancement and the new forms of evidence this provides—exist for a vital purpose: to preserve the right to a fair trial and, with it, the integrity and legitimacy of judicial process and the rule of law. The question facing judicial systems—both domestic and international—is how to best adapt to accommodate the unique aspects of digital evidence.
Furthermore, the public’s role in identifying perpetrators and collecting and supplying evidence toward their prosecution can still be very fruitful. Seeking to capitalize on the intelligence made available by open-source information, organizations like eyeWitness to Atrocities have created user-friendly and streamlined means for capturing this information in a way that preserves key markers for verification and authentication. The “legal and technological experts” at eyeWitness have “aim[ed] to bridge the gap between the documenters of grave human rights violations and the requirements for justice mechanisms to use this information, by providing an innovative system that addresses existing pitfalls.” One of their key innovations on the subject has been the creation of the eyeWitness to Atrocities app. The app’s camera feature instantaneously embeds the photos and videos it captures with metadata to ensure verification of where and when the content was taken, and can confirm it has not been altered since. The footage is then secured with the eyeWitness organization, ensuring a sound chain of custody. The lawyers at eyeWitness then catalog the information and identify the domestic or international justice system best suited for its use.
Even when users are relying on typical means of recording and sharing content, social media has shown itself to be a valuable tool for investigating and prosecuting criminal acts on this massive scale. And crucially, there are a number of small and accessible steps individual citizens can take to maintain the integrity of the digital evidence they provide, without having to become expert open-source investigators overnight. Efforts to secure the original footage or image—or, where that is not possible, to incorporate as much identifying information in the content as possible—go a long way toward combating defense challenges on authenticity grounds. If, say, the photo or video was shared via text or via an Instagram or Facebook post, one can download the original source and preserve it on the computer’s hard drive. This approach remedies not only authenticity concerns but also the risk that the incriminating content will disappear (which, while not implicating a rule of evidence, is a significant practical consideration of open-source investigators).
One conclusion here is very clear: Open-source intelligence is now a fixture in how crimes will be prosecuted and is likely to become an increasingly common medium for evidence. So the evidentiary rules as they currently exist—both in the U.S. and in the international context—will continue to be challenged by efforts to use forms of evidence on which the rules provide insufficient guidance. While one hopes that Jan. 6 was a bug and not a feature in U.S. history, rules and procedures capable of handling these evidentiary issues will need to be in place to ensure that those tasked with seeking accountability for such crimes can avail themselves of the full array of evidence at their disposal.
The views expressed in this article are the author’s alone and do not reflect the views of her employer.