On Aug. 4, in Dayton, Ohio, a gunman opened fire and killed nine people. The day before, another shooter killed 22 people in El Paso, Texas, apparently after posting a racist message to the anonymous online forum 8chan decrying an ostensible “Hispanic invasion of Texas.” Though there is no indication so far that the Dayton shooting was motivated by extremist political beliefs, the violence in El Paso is the third mass shooting in 2019 to be linked to 8chan and to some form of far-right extremism.
Latest in Social Media
On August 3, a shooter opened fire at a crowded Walmart in El Paso, Texas, killing 22 people. Shortly beforehand, it seems that he posted a screed on the online messageboard 8chan, framing the shooting as an act of terrorism against what he saw as the increasing Latino population of Texas.
Facebook has released an update on its ongoing civil rights audit, illustrating the wide range of effects the company has on civil rights—from facilitating racially discriminatory ads for housing, employment and credit, to concerns about use of the platform to suppress participation in the 2020 U.S. election and census.
It’s been roughly six months since Facebook started collecting global feedback on its proposal to create an oversight board for content moderation decisions. This morning, the platform released the findings of that process in an epic report—almost 250 pages of summary, surveys, public comment, workshop feedback and expert consultations.
Livestream: Hearing on Social Media Companies’ Efforts to Counter Extremist Content and Misinformation
The House Committee on Homeland Security will hold a hearing titled “Examining Social Media Companies' Efforts to Counter Online Terror Content and Misinformation” at 10:00 a.m. on Wednesday. A video of the hearing is available below.
The techlash has well and truly arrived on YouTube’s doorstep. On June 3, the New York Times reported on research showing that YouTube’s recommendation algorithm serves up videos of young people to viewers who appear to show sexual interest in children.
During the past several years, platforms like Twitter, YouTube and Facebook have used a combination of automated detection and human review to identify and remove extremist accounts and content from their sites—in effect “de-platforming” extremists from mainstream social media.
These days, many people see technology companies as indifferent to law, or at least interested in remaining under-regulated. When Mark Zuckerberg called on Congress to regulate how social media companies should handle challenges such as harmful content and data privacy, the request was unusual enough to make headlines. This real or perceived indifference to legal regulation has troubled a host of people, including those worried about protecting privacy and freedom of expression.
The rush to bring law and order to online spaces is well and truly on. Two important documents on the topic of online speech regulation have come out of Paris in the past week alone.
On April 22, Julia Angwin, an award-winning investigative journalist specializing in technology, was somewhat bizarrely fired as editor-in-chief of the fledgling media company she’d founded. The company, The Markup, was created to focus on data-driven journalism, and in solidarity five members of the seven-person editorial team resigned as well.