As Bobby has reported, one of the fascinating outgrowths of the Paris attacks is the declaration of war on ISIS that came the other day from Anonymous. Bobby mused on the intelligence value of identifying terrorist-related social media accounts. He also, briefly, noted that many providers (Twitter, Facebook, Tumblr, etc.) have begun voluntarily taking down ISIS-related accounts. I want to ask, in this post, a related question, more as a thought experiment than as a serious proposal: What, if anything, can be done to compel providers to take down accounts when they are unwilling to do so voluntarily? The answer, to my mind, lies in an analogy to the Digital Millennium Copyright Act (DMCA). Herewith some, admittedly creative, thoughts on the topic:
The DMCA (17 USC 512) is the most common form of compulsory takedown in use today under American law -- it harnesses copyright to protect rights in content. One has but to seek (and fail to find) an unauthorized video of, say, a recent Beyoncé concert to recognize that a mature, robust system of copyright protection operates with a relatively high degree of effectiveness in the digital realm, at least among law-abiding service providers. The novel legal tactic I want to explore in this brief note would harness that existing mechanism to new counter-terrorist realities.
The DMCA was passed in 1998 to implement two treaties adopted by the World Intellectual Property Organization (WIPO). Its principal goal was to criminalize the circumvention of digital rights management systems. But along the way it set up a then-novel structure for managing copyright infringement in the digital age. Until the passage of the DMCA it was unclear, under American law, whether those who hosted digital content could be directly liable for damages arising from that hosting – was YouTube in violation of copyright and directly liable when it hosted an unauthorized video of a concert?
The DMCA offered a novel response – one that both protected copyright and fostered the free exchange of ideas. It declared that service providers were not liable for hosting violating material, but in turn, it imposed upon those providers an obligation to expeditiously respond to copyright claims made by the copyright holder, and take down (i.e. remove from their system) material alleged to be in violation of the rights of the original copyright holder. In general, the obligation to remove the copyrighted material arises when the online service provider receives a written notification from the copyright holder that is legally sufficient to put it on notice as to the claim. Though the process is not without its critics (who allege both inefficacy and overuse) it has, by and large, been a qualified success – respecting the rights of copyright holders while avoiding significant infringement on free expression.
That model might, with a small legislative change, be adapted to the removal of ISIS terrorist speech. All that would be required is a modification of the law to assign the copyright in all terrorist speech to a non-terrorist organization with an interest in monitoring and removing terrorist content. Here are the essential components of such a plan:
- Identification of terrorist organizations to whom the law would apply;
- A definition of unprotected content associated with that terrorist organization;
- An extinguishing of copyright in such unprotected content; and
- Transfer of that copyright to a third party.
Several aspects of this proposal bear noting. First, and most saliently, it is a uniquely US-centric solution. The DMCA is an American law based on American copyright protections. Hence, this structure of content control would have little extraterritorial application. Though this is of less concern currently, since a large fraction of internet content and transmission is handled by US-based companies, it will be a significant limitation in the long run, as the dynamics of content distribution change over time. And, of course, to the extent implemented, this policy might drive consumers away from American companies and their products.
Second, to the extent the proposal is focused on the speech of designated terrorist organizations, it likely will not run afoul of the First Amendment free speech protections. But the targeting will need to be carefully crafted to pass legal muster (and to survive politically). Some terrorist content that is extreme (such as videos of beheadings) is likely unprotected by US law altogether. But other content (such as supportive statements in favor of jihad or advocating a caliphate) is presumptively protected and may only be disrupted if the speaker is conclusively identified as a designated terrorist organization.
Any effort to use legal means to degrade the ability of terrorists to communicate their ideas through social media must first confront a fundamental legal constraint of legitimacy. [It may also face a problem of efficacy. Many think that the counter-narrative approach is better at combatting online propaganda and radicalization. They assess this as likely both because the counter-content approach will never be completely effective and because the counter-narrative approach is considered more consonant with Western values, fostering speech over censorship.]
A counter-content effort is suspect because it will be viewed by many as contrary to principles of free speech rooted in liberal Western democracies and, more particularly, to the protections the First Amendment accords to political speech. Properly construed, however, the First Amendment will not pose a legal constraint on action to compel the disruption of terrorist speech. [It bears emphasizing, however, that as a matter of rhetoric and atmospherics the general belief in principles of free speech most certainly will act as a potential constraint on effective legal action against service providers.]
The most salient case on point is Holder v. Humanitarian Law Project, 561 U.S. 1 (2010), a Supreme Court case that construed the federal prohibition on providing “material support” to foreign terrorist organizations (18 U.S.C. § 2339B), as amended by the USA PATRIOT Act. The case is one of the very rare instances of First Amendment jurisprudence in which a restriction on political speech has been approved, and the only one of recent vintage.
The Humanitarian Law Project (“HLP”) had sought to provide assistance to the Kurdistan Workers’ Party in Turkey and Sri Lanka's Liberation Tigers of Tamil Eelam. According to HLP, its goal was to teach these two violent organizations how to peacefully resolve conflicts. Congress had previously prohibited all material aid to designated organizations that involved “training,” “expert advice or assistance,” “service,” or “personnel.” HLP argued that its assistance was protected political speech. The government countered with the argument that a categorical prohibition on speech in the form of assistance was required because even non-terrorist assistance would "legitimate" the terrorist organization and free up its resources for terrorist activities. The Court approved the limitation on speech because it was narrowly drawn to cover only “speech to, under the direction of, or in coordination with foreign groups that the speaker knows to be terrorist organizations” and served a national interest of the highest order – combatting terrorism.
It would follow, in the wake of Humanitarian Law Project, that just as speech “to” or “under the direction of” or “in coordination” with a foreign terrorist organization may be limited, so too may the content actually published “by” the terrorist organization.
Third, it follows from this that the proposal to use the DMCA take-down model partially elides the significant problem of attribution. As noted, some content may be removed irrespective of who the source is. But more commonly we will be faced with the necessity of identifying with some precision whose content is to be disrupted. As the discussion of the First Amendment makes clear, it may be constitutionally acceptable to disrupt the speech of a banned terrorist organization, even though such disruption is generally suspect as a “content-based” restriction. But that acceptability is almost certainly contingent on establishing a factual linkage between the content in question and the terrorist organization itself. It is almost certainly the case that mere “fellow travelers” who speak in support, say, of ISIS cannot be as readily disrupted through lawful processes such as the DMCA model.
Fourth, the proposal is not self-executing. Even after the passage of legislation someone would need to undertake the responsibility of monitoring the network for problematic content and bringing its presence to the attention of the hosts or service providers through the DMCA mechanism. It is possible that the government might execute that function, but that would come with costs in terms of nimbleness and transparency that are associated with all government action. Government action would also bring with it the political specter of heavy-handed censorship which might be debilitating, even if it were factually inaccurate. On the other hand, devolving that authority to a non-governmental organization or other private sector actor might allow that actor to determine government policy in ways that are more consistent with its own priorities than those of the government.
Though both models have their advantages and disadvantages, the best way forward lies, perhaps, in a middle ground – a Congressionally-chartered private entity that would serve as the copyright holder for the forfeited property. The most well-known such entity is the American Red Cross, which is independent of the Federal government but serves a quasi-governmental function. The new copyright-holding entity would be similarly structured. Here, too, questions of implementation would need to be addressed, but, in theory, no barrier exists to this solution.
In any event, the entire line of speculation may be unnecessary. Most service providers have been reasonably ready to take down content that is too closely tied to terrorist celebration and activity. Nobody needs to compel YouTube to delete the ISIS beheading videos; it is happy to do so voluntarily. Still, one wonders about the outer limits of compulsory content disruption, and for those cases the DMCA-plus-Humanitarian Law Project paradigm may suffice.