Another group of terrorism victims has filed suit against a social media company for allegedly giving material support to a terrorist group, in this case ISIS. These cases have been proliferating of late. I've written about them here and here and here and here and here and here and here and here. Here's the latest complaint, this one filed against Twitter in the Southern District of New York:
I don't have much new to say about this issue in light of this latest complaint. In its sophistication and seriousness, it is most similar, in my judgment, to the complaint filed by Hamas victims against Facebook last summer. This is unsurprising, since it was filed by the same law firm. So for an analysis of what Twitter's defenses are likely to be and what the pivot points in the case are, I would refer readers back to that post. The analysis of this case is very similar, though for reasons I will explain, I think the current complaint is at least somewhat weaker.
I'll content myself with a few quick thoughts:
First, I continue to believe that Section 230 of the Communications Decency Act should not categorically bar these suits. As I have argued before, nearly all of them (so far, anyway) should fail, but I don't believe the material support law operates in a fashion that makes them properly subject to CDA immunity. Zoe Bedell and I explained why in an earlier post:
all of the cases in which the courts to date have immunized service providers are cases predicated on offending content of some sort. That is, somebody posted something that was alleged to abridge someone else’s legal rights, and the question was whether or not the service provider bore some responsibility for the third party’s offending content. Construing § 230 broadly, the courts have held that holding the provider of “neutral tools” liable for such offending content makes the provider a “publisher” or “speaker.”
The material support laws, however, do not work this way. Liability under them does not depend on offending content—by the provider, by a third party, or by anyone. Consider 18 U.S.C. § 2339B, which holds that “Whoever knowingly provides material support or resources to a foreign terrorist organization, or attempts or conspires to do so, shall be fined under this title or imprisoned not more than 20 years, or both, and, if the death of any person results, shall be imprisoned for any term of years or for life.” There are many reasons to believe that Twitter has not violated this law by providing service to ISIS users (we spell out some of Twitter’s defenses in our post last week), but note that if it has violated the law, that offense was completed the moment Twitter knowingly provided service to ISIS. The offense does not depend in any way on what ISIS may have tweeted, or even if ISIS used the service in question. If ISIS operatives tweeted cat videos or they tweeted nothing at all, Twitter still would have violated the statute (assuming it did) the moment it knowingly provided “any property, tangible or intangible, or service, including . . . communications equipment” to operatives of a designated foreign terrorist organization.
In other words, one is not imposing liability under the material support laws based on any allegedly offending content. One is imposing liability based on the provision of service as an antecedent matter to a terrorist organization.
That said, to say that the defendant is not (or should not be held) immune does not mean that the defendant is liable. It means merely that the defendant is not excused entirely from defending the case. And many of these complaints should fail because they suffer from the same problem: they don't even seriously allege that the social media defendant played a meaningful role in the specific attack that killed the plaintiff's family member. They allege, rather, that the terrorist groups used the service, that the service didn't do enough about it, and that an attack happened in which someone got hurt—and that the company is therefore liable. Even if courts agree with me that CDA § 230 isn't the end of the conversation, that shouldn't be good enough in an environment in which the companies make their services available to any user with no vetting.
The connective tissue in this new complaint on the crucial issue of proximate causation seems to me at least somewhat weaker than in the Facebook complaint, though significantly stronger than in earlier Twitter complaints. I have only read over this complaint quickly, but it seems to lack some of the clear allegations that appeared in the Facebook complaint about how Twitter was allegedly used in the specific attacks in which the plaintiff's family members were killed. I suspect that might be fatal to this suit even if it survives a CDA immunity analysis.
Finally, it continues to bewilder me that these cases are being filed in the Second Circuit, rather than in the Seventh Circuit, where the standard of causation seems more forgiving. Again, Zoe and I spelled out the differences in an earlier post; suffice it for now to say that the difference between the Second Circuit opinion in Rothstein, which will control this case, and the Seventh Circuit's en banc opinion in Boim is not trivial. As Zoe and I put it back in July, if this filing were in Chicago, rather than New York:
the lower causation standard Judge Posner articulated in Boim v. Holy Land Foundation would apply in what would seem like a helpful fashion. In that long and involved en banc decision, the court found a Hamas donor could be found liable because it “had participated in the wrongful activity as a whole [and] thus was liable even though there was no proven, or even likely, causal connection between anything he did and the injury.” The Second Circuit, however, has rejected this lower causation standard, finding that the [Antiterrorism Act] requires a traditional showing of proximate causation.
So here's my bottom line: There are two big questions in these cases. The first is whether CDA § 230 really means that a social media company is categorically immune from liability for knowingly permitting terrorists to organize and plot on its platform when the result is mayhem and death. The second is, if these companies are not categorically immune, what standard of causation a plaintiff has to meet in order to hold a social media company liable for an attack by a terrorist group that does some of its organizing online.