This week’s Cyberlaw Podcast covers efforts to pull the Supreme Court into litigation over the Texas law treating social media platforms like common carriers and prohibiting them from discriminating based on viewpoint when they take posts down. I predict that the court won’t overturn the appellate decision staying an unpersuasive district court opinion. Mark MacCarthy and I both think that the transparency requirements in the Texas law are defensible, but Mark questions whether viewpoint neutrality is sufficiently precise for a law that trenches on the platforms’ free speech rights. I talk about a story that probably tells us more about content moderation in real life than ten Supreme Court amicus briefs—the tale of an OnlyFans performer who got her Instagram account restored by using alternative dispute resolution on Instagram staff: “We met up and like I f***ed a couple of them and I was able to get my account back like two or three times,” she said.
Meanwhile, Jane Bambauer unpacks the Justice Department’s new policy for charging cases under the Computer Fraud and Abuse Act. It’s a generally sensible extension of some positions the department has taken in the Supreme Court, including refusing to prosecute good faith security research or to allow companies to create felonies by writing use restrictions into their terms of service. Unless they also write those restrictions into cease and desist letters, I point out. Weirdly, the Justice Department will treat violations of such letters as potential felonies.
Mark gives a rundown of the new, Democrat-dominated Federal Trade Commission’s first policy announcement—a surprisingly uncontroversial warning that the commission will pursue educational tech companies for violations of the Children’s Online Privacy Protection Act.
Mark celebrates the demise of the Department of Homeland Security’s widely unlamented Disinformation Governance Board.
Should we be shocked when law enforcement officials create fake accounts to investigate crime on social media? The Intercept is, of course. Perhaps equally predictably, I’m not. Jane offers some reasons to be cautious—and remarks on the irony that the same people who don’t want the police on social media probably resonate to the New York Attorney General’s claim that she’ll investigate social media companies, apparently for failing to respond like cops to the Buffalo shooting.
Is it “game over” for humans worried about artificial intelligence (AI) competition? Maury explains how Google DeepMind’s new generalist AI works and why we may have a few years left.
Jane and I manage to disagree about whether federal safety regulators should be investigating Tesla’s fatal autopilot accidents. Jane has logic and statistics on her side, so I resort to emotion and name-calling.
Finally, Maury and I puzzle over why Western readers should be shocked (as we’re clearly meant to be) by China’s requiring that social media posts include the poster’s location or by India’s insistence on a “know your customer” rule for cloud service providers and VPN operators.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.