The 2016 election has put squarely on the public agenda a series of questions related to the norms of social media, everything from the proliferation of fake news on Facebook to the trolling culture of Twitter. These questions are not new. The culture of abuse online towards women, for example, is a matter about which one of us wrote a book. But over the last few months, the concerns—spurred in part by a president-elect and his followers who participate actively in Twitter abuse of opponents and critics—have vaulted into the mainstream.
The problems vary significantly by social media platform. On Twitter, the pressing issue is civility: values of free expression and individual user freedom often get pitted against norms of decency and the ability to participate online free of harassment and abuse.
Consider the attacks on journalists between August 2015 and July 2016. When neo-Nazi trolls attacked well-known writers on Twitter with anti-Semitic death threats and images of their (or their children’s) faces photoshopped into ovens, the goal was to terrorize and silence. Sometimes, the attacks succeeded. New York Times editor Jonathan Weisman suspended his Twitter account and switched to Facebook for a time after a cyber mob descended on his Twitter feed. The harrowing account of National Review writer David French’s experience of Twitter abuse as a result of his opposition to Trump is another example.
The obvious reason Weisman switched to Facebook is that random strangers there cannot contact and attack users without first getting through a screening process: non-friends cannot directly contact a user until that user has given the green light to interaction by accepting a friend request. That initial screening substantially diminishes the possibility of drive-by attacks, though it also prevents unexpected, positive interactions with strangers.
By contrast, Twitter has no such speed bump to interactions. Anyone can engage with any user: Put an @ in front of someone’s user handle and you have their attention. That can be a very good thing. A diversity of interactions can yield rich discussions, unexpected insights, and pointed feedback. Yet it also allows angry cyber mobs to descend upon users with threats, defamation, privacy invasions, and intimidating slurs. Right now, on Twitter, each troll gets at least one free shot at each user and has to be blocked individually, which is time-consuming and onerous when many individuals target a single user.
So the question is: How can Twitter and similarly designed platforms help users harness the positive potential of networked interactions while diminishing their most destructive uses? How can platforms empower users to help themselves more efficiently and effectively, avoiding the need for platform intervention, which can be hard to scale and raises concerns about private censorship?
We have a simple proposal for a technical mechanism to align the values of free expression, individual freedom, and civility. It would provide an easy and quick way of giving users more control over the material they see and read and give groups of users the ability to enforce their shared norms. Specifically, Twitter should expand its current system of letting users designate whom they “block” and “follow” to let users designate other users whose blocks and follows their accounts will replicate.
Third-party apps like Block Together have emerged with the express purpose of preventing abuse and harassment. Integrating “block together” and “follow together” functions on Twitter would make it easier for users to protect themselves and view the content that most interests them while reducing other apps’ access to users’ feeds and personal information. Integrating this sort of functionality into Twitter itself would also send a powerful message about Twitter’s desire to enhance civility, privacy, and user control.
A “block buddy” system, like Block Together, would crowd-source the process of blocking among like-minded people. In this system, if Ben block-buddies Danielle, he effectively instructs Twitter that if someone behaves so uncivilly that Danielle no longer wants to hear from that person, then he doesn’t want to hear from that person either.
If Danielle block-buddies the Southern Poverty Law Center, the Anti-Defamation League, or the National Association for the Advancement of Colored People, entities with the mission of fighting bigotry, she is saying that anyone those organizations block should also be dead to her for Twitter purposes. Similarly, if people find Ben’s or Danielle’s tweets so offensive that they don’t want to hear from them, they can band together and block them as a group. Some individuals would use this system merely as a way of keeping away harassing users. Civil rights organizations like the SPLC, the ADL, and the NAACP, which many would trust to identify Twitter accounts used to harass and silence, might use it as a way of identifying hate speech online that large numbers of people want to do without.
By implementing a block buddy system, Twitter would effectively allow people to delegate the blocking process to trusted others. Critically, it could do so without itself policing the content in question. Individual users could be as aggressive or reticent about this delegation as they choose: A person who doesn’t mind a certain amount of abuse and doesn’t want to delegate any blocking to anyone does not have to do so and can retain as much control as desired; a person more intimidated by the atmosphere, by contrast, can designate lots of people as block buddies and thus have a cleaner feed, albeit one with less chance of spontaneous positive interactions.
Twitter should also consider adopting a mirror-image “follow buddy” system, in which users can honor another user’s follows. If Danielle has designated Ben a “follow buddy,” she would automatically follow anyone he chooses to follow. This would allow a new user in a particular field or social setting to instantly acquire a Twitter feed based on the people her immediate friends have followed and follow in the future. It would create a default in which she follows her buddies’ follows unless she specifically unfollows them, rather than a default in which she has to choose to follow them individually in the first instance. Again, nobody would have to use this feature. It would simply be an option available to people who want to delegate some degree of control over their feeds to people or organizations whom they trust.
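Both buddy mechanisms can be sketched in a few lines. The following is a minimal illustration, assuming a simple in-memory model with one level of delegation (the proposal leaves buddy-of-buddy transitivity open); `User`, `effective_blocks`, and `effective_follows` are invented names for exposition, not any real Twitter API.

```python
class User:
    """Illustrative stand-in for a Twitter account and its settings."""
    def __init__(self, name):
        self.name = name
        self.blocks = set()          # accounts this user blocks directly
        self.follows = set()         # accounts this user follows directly
        self.block_buddies = set()   # users whose blocks are replicated
        self.follow_buddies = set()  # users whose follows are replicated

def effective_blocks(user):
    """Own blocks plus every block buddy's blocks (one level of delegation)."""
    blocked = set(user.blocks)
    for buddy in user.block_buddies:
        blocked |= buddy.blocks
    return blocked

def effective_follows(user):
    """Own follows plus every follow buddy's follows."""
    followed = set(user.follows)
    for buddy in user.follow_buddies:
        followed |= buddy.follows
    return followed

danielle, ben = User("danielle"), User("ben")
troll, expert = User("troll"), User("expert")

ben.block_buddies.add(danielle)   # Ben replicates Danielle's blocks...
danielle.blocks.add(troll)
danielle.follow_buddies.add(ben)  # ...and Danielle honors Ben's follows.
ben.follows.add(expert)

print(troll in effective_blocks(ben))         # True
print(expert in effective_follows(danielle))  # True
```

Note that Ben never blocks the troll himself; the block reaches him only through his delegation to Danielle, which he can revoke at any time by dropping her from his buddy set.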
The result, we think, would be a free-speech-friendly and content-neutral means of giving users greater control over the substance and civility of their Twitter feeds.
Would this system cause more people to live in philosophical bubbles? It probably would. But it is really only an extension of Twitter’s existing system. Currently, users designate those they want to hear from and block those they do not want to hear from. All we are suggesting here is extending these two principles to enhance users’ choice about whom they trust to act on their behalf in those judgments, without the need to resort to third-party applications that may harvest users’ personal information.
This system is, we concede, prone to abuses of its own. For instance, a mischief-maker could include Ben in a block list not because he is engaged in anti-social activity but, say, to silence his views on surveillance or Guantanamo or to deprive him of the ability to engage with others on Twitter. If that mischief-maker were a big organization with lots of block buddies, the silencing effect could be substantial. Similarly, a mischief-maker could include a destructive individual in a follow list for anti-social ends, thus magnifying that person’s voice to lots of unsuspecting people. Worst of all would be if the mischief-makers were government authorities bent on silencing and marginalizing dissenting voices.
Then there’s the potential mischief involving small, localized groups. Consider the implications of a middle-school social clique whose members have all block-buddied one another and then turn, en masse, on the bullied kids who are on the outs, blocking them. All of a sudden, those kids are shut out from everyone.
The mischief in the public sphere may be easier to address than in the more intimate local setting of a school. For one thing, there’s the person maintaining the block or follow list. That person is presumably entrusted with the list for a good reason by each person who trusts him or her. The trust is earned, and it can be undone by bad management of a list. Thus, if the trustee falls down on the job of screening additional profiles for following and blocking, users will be less inclined to use that trustee’s list and may abandon it entirely. The beauty of Twitter is that word gets out quickly, so a trustee’s reputation can be revised and updated.
Moreover, the follow- and block-buddy system should not be mechanistic but should merely change the defaults. If Danielle block-buddies Ben and he blocks someone, she should still have the ability to unblock that person on an individual level. So even if she does not lose faith in Ben’s blocking judgment in general, and thus does not uncouple her blocking from his, she can still disagree with and undo any individual judgment he may make.
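This default-plus-override behavior can be sketched as follows, again under an illustrative in-memory model; the `unblock_overrides` set and the precedence rules (direct block first, then individual override, then buddies) are our assumptions about how such a feature might work, not a specification.

```python
class User:
    """Illustrative account model with an individual-override list."""
    def __init__(self, name):
        self.name = name
        self.blocks = set()             # direct blocks always apply
        self.block_buddies = set()      # delegated block lists
        self.unblock_overrides = set()  # individual exceptions to buddy blocks

def is_blocked(user, other):
    if other in user.blocks:
        return True   # a direct block is the user's own judgment
    if other in user.unblock_overrides:
        return False  # an individual unblock trumps any buddy's block
    return any(other in buddy.blocks for buddy in user.block_buddies)

danielle, ben, critic = User("danielle"), User("ben"), User("critic")
danielle.block_buddies.add(ben)
ben.blocks.add(critic)
print(is_blocked(danielle, critic))  # True: the default follows Ben's block
danielle.unblock_overrides.add(critic)
print(is_blocked(danielle, critic))  # False: her override undoes that one call
```

The key point the sketch captures is that Danielle overrides a single judgment of Ben’s without severing the delegation itself: Ben remains her block buddy for every other account he blocks.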
The problem at the local level may be more difficult to resolve. And it may be that improving the civility climate on Twitter at the political level will necessarily create opportunities for abuse and bullying at the local level, where people’s reasons for blocking one another may be more likely to be petty, highly personal, or downright mean. That is not to suggest that this problem is intractable. There may be algorithmic ways of ameliorating the highly localized problem: for example, preventing small groups of people who all follow one another from mass-blocking a person deeply within that social web. If, say, ten people all follow one another, Twitter might disallow nine of them from mass-blocking the tenth, and have blocking in those situations automatically revert to individualized actions only. One could imagine, in other words, a block-buddy system that does not operate in certain environments in which it might be prone to abuse. The shunning of the out kids, of course, might still happen, but it would have to take place as a consequence of the individual actions of the nine users, rather than as an automated consequence of the block-buddy system. And that scenario is already possible today.
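One version of that safeguard might look like the sketch below, which suppresses buddy-propagated blocks whenever the blocking buddies and their target form one small, fully mutually-following group. The group-size threshold, the follow-graph representation, and the function names are all assumptions for illustration.

```python
CLIQUE_LIMIT = 10  # hypothetical cutoff: only small groups trigger the guard

def in_mutual_clique(group, follows):
    """True if every pair in `group` follows each other.
    `follows` maps each user to the set of accounts they follow."""
    return all(b in follows.get(a, set())
               for a in group for b in group if a != b)

def propagated_block_applies(blocker_buddies, target, follows):
    """Disable a buddy-propagated block when the buddies and the target form
    one small mutual-follow clique (e.g. a school social circle), so shunning
    there can only happen through individual actions."""
    group = set(blocker_buddies) | {target}
    if len(group) <= CLIQUE_LIMIT and in_mutual_clique(group, follows):
        return False  # revert to individualized blocking only
    return True

# Nine classmates who all follow one another (and the tenth) cannot
# mass-block the tenth through the buddy system.
follows = {i: {j for j in range(10) if j != i} for i in range(10)}
print(propagated_block_applies(set(range(9)), 9, follows))  # False
```

A real implementation would need to be far more careful; pairwise clique checks scale poorly, and attackers could dodge the guard by unfollowing their target first. The sketch only shows that the rule described above is mechanically expressible without Twitter judging any content.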
The broad point here is that there should be some way on Twitter to crowd-source both disgust with and interest in people and organizations about whom many users are likely to have similar reactions. There should be a way of collectively following and collectively turning our face from people who either enliven or diminish crucial online spaces for discourse, networking, and enlightenment. That mechanism should be entirely voluntary, and it should be designed so as to avoid implicating Twitter in any editorial or political judgments.
Crucially, a block- and follow-buddy system might help forestall some governmental pressure for platforms like Twitter to remove hateful and terroristic speech. That pressure should not be underestimated; the recent Code of Conduct agreement on hate speech between Facebook, Microsoft, Twitter, YouTube, and the European Commission demonstrates as much.
One does not have to be a believer in the creation of “safe spaces” to believe that Twitter should experiment with tools that enable users to protect themselves from destructive abuse (and hence keep expressing themselves on the platform). A block- and follow-buddy system would succeed or fail on its own terms. It might end up as a niche tool, used only by a minority of users. But it is worth trying to avoid a far worse fate: the loss of talented voices (often but not always women, people of color, and religious minorities) as a consequence of the destructive online culture made needlessly easy to inflict on individuals as a consequence of the architectural choices of platforms like Twitter.