Editor’s note: This article grew out of work done in our Georgetown University class on national security and social media. The class tackled an array of questions related to how hate groups exploit social media, exploring issues ranging from privacy and human rights concerns to technological and legal barriers. Working in teams, students conducted independent research that addressed a difficult issue in this problem space. —Dan Byman & Chris Meserole
In recent years, governments across the globe have leveraged fake accounts on various social media platforms to amplify regime narratives and skew civil discourse in favor of the state. Some have formed troll armies composed of volunteers or paid staff who use fake accounts to spread propaganda and disinformation and to run targeted harassment campaigns against critics, dissidents, minorities and any other enemy of the state. It is past time for Facebook, the world’s largest social media company, to use its reach to set clear expectations for acceptable state behavior—and the current situation in the Philippines, where the government routinely uses Facebook trolling to flood citizens with disinformation and discourage dissent, provides a good test case.
The scalability and low cost of these troll armies make them an ideal vehicle for propaganda. For example, Gary King, Jennifer Pan and Margaret E. Roberts estimate that in China, the state-run, so-called 50 Cent Party posts nearly 450 million social media comments from fake accounts annually. But even small states with limited resources, like the Philippines, are increasingly able to leverage cheap online troll networks to project their influence, amplifying their messaging to an unprecedented degree. The Oxford Internet Institute has identified 70 countries that have experienced “social media manipulation,” a term that encompasses many forms of online operations. Many of these countries—including Russia and Vietnam—have “troll farms,” consisting of government employees running accounts on behalf of the executive and military branches. Although some trolls work out of genuine support for the state, many do it simply to afford the cost of living. The Philippines is one of nearly two dozen governments that contract out to private firms that pay citizens for part- or full-time work.
American social media companies, including Facebook, have implemented a number of technical and policy solutions to dismantle inauthentic account networks. So far in 2019, Facebook has reported removing “coordinated inauthentic behavior” in 23 countries across 17 separate reports. Facebook often specifies when it believes the coordinated inauthentic behavior is tied to state actors. In the first quarter of 2019, 2.19 billion “fake accounts” were removed from the platform using a combination of advanced algorithms and content moderators. However, Facebook’s statistics do not distinguish between automated bot accounts and human-run troll accounts, which are far harder for a social media company to identify. The company has taken some meaningful steps, but the situation in the Philippines demonstrates that more must be done.
The destabilizing impact of trolls on politics is clear in the Philippines, where Facebook’s prominence has enabled President Rodrigo Duterte to monopolize information and dialogue on the internet. While trolls and bots account for approximately 5 percent of global Facebook users, the proportion is significantly higher in the Philippines. Roughly 69 million Filipinos were on Facebook as of 2017, representing 97 percent of the country’s total internet users—a striking statistic for a country in which smartphones outnumber people. In 2015, Facebook gave Filipinos access to the Free Basics app, a program that waives data charges for certain apps and therefore curates users’ access to the internet; the program now operates in 65 countries. Free Basics users have access to a number of basic services, including health information, weather updates and, of course, Facebook itself. In the Philippines, as in many other Free Basics countries, internet penetration has increased substantially since the program’s rollout in 2015. Now, Facebook is often Filipinos’ only means of access to the internet.
Duterte has capitalized on this informational shift to promulgate his message. Having begun his political career as a prosecutor, Duterte has always presented himself as an enforcer of law and order. During his 22 years as mayor of the city of Davao, he directed extrajudicial killings, primarily of drug dealers, low-level criminals and poor, orphaned street children. Duterte has continued similar policies as president. The Organized Crime and Corruption Reporting Project estimates that between 7,000 and 12,000 Filipinos have been killed in Duterte’s “war on drugs” since his inauguration in 2016. He has openly acknowledged his role in extrajudicial killings and promised to continue to kill criminals.
Duterte’s election to the presidency in 2016 was a victory enabled, at least in part, by social media. Prior to the election, Facebook led social media workshops for every presidential candidate, but Duterte’s campaign was unique in taking Facebook’s help and advice seriously. Nic Gabunada, Duterte’s social media director, leveraged trolling to maximize the campaign’s resources, which lagged far behind those of other candidates—most notably those of the two establishment front-runners, Grace Poe and Mar Roxas, who held centrist political views, had successful careers as senators and came from well-known political families.
Managing a troll farm of 400 to 500 people with a budget of $200,000 or more, Gabunada oversaw numerous troll accounts utilizing classic trolling tactics to flood Facebook feeds with pro-Duterte content—much of which was false. The effort paid off: Duterte went from a marginal candidate to a mainstream figure in national online discourse. In the month prior to the election, 64 percent of all Philippine election posts on Facebook mentioned Duterte. An internal Facebook report even called him the “undisputed king of Facebook conversations.”
Duterte preserved this reputation in the election’s aftermath: Days after his election victory, more than 30,000 tweets mentioned Duterte in just a two-hour period, appearing at a rate that at times exceeded 700 tweets per minute. That is more tweets than any other Philippine presidential candidate garnered over the entire previous month, according to an analysis by Rappler, a wide-reaching digital news platform that has distinguished itself by its opposition to Duterte and his government.
The rampant spread of disinformation remains an efficient tool of propaganda and harassment in the Philippines. In practice, these trolling tactics by Philippine politicians have two parts. First, by generating thousands of “shares” and “likes” from accounts that appear to be real people, trolls establish what seems like widespread and organic support for Duterte. Second, trolls undermine trust in news by posting en masse to “chip away at facts,” a technique that Rappler’s CEO and co-founder, Maria Ressa, has described as “death by a thousand cuts.” Craig Silverman of BuzzFeed reports, “The result is a political environment even more polluted by trolling, fake accounts, impostor news brands, and information operations.” As it currently stands, the lines among paid trolling, unpaid trolling and grassroots support are unclear.
For her criticism of the Duterte regime, Ressa became the target of the government’s trolling operation in 2016. After publishing an article that extensively details the Duterte campaign’s online activities, she received more than 90 abusive messages per hour in what evolved into a social media campaign to #UnfollowRappler. In February 2019, Ressa was arrested by the Philippine government on charges related to digital libel. She was arrested again in March for allegedly violating laws regarding foreign ownership of a media outlet. Activists have decried the charges against Ressa as naked attempts by the Duterte regime to silence her criticism. Ressa’s case is particularly high profile, but it is far from the only one.
In the years following Duterte’s election, Facebook has taken some action against “coordinated inauthentic behavior” in the Philippines. It has removed content that originates from fake accounts by removing the accounts along with any associated pages or groups. A few key accounts dominate the Philippines’ trolling regime. Rappler has identified 26 key Facebook accounts responsible for most original content that is later reposted and modified by approximately 12 million users. These 26 accounts display telltale signs of being fake: Most have profile pictures of famous K-pop artists, have fewer than 20 “friends” and maintain an active presence on pro-Duterte pages. All 26 follow each other. Even though they are often noticeably fake, the accounts have a tremendous impact online; just five of the accounts generated more than 50,000 shares from 1,300 posts in a single month.
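The telltale signs Rappler describes are concrete enough to sketch as a rule-based screen. The following Python sketch is purely illustrative: the field names, thresholds and scoring are our own assumptions, not Rappler’s or Facebook’s actual detection methodology.

```python
# Illustrative rule-based screen for the telltale troll signals described
# above. All field names and thresholds are hypothetical; real detection
# systems weigh far more signals than this sketch does.

def troll_score(account, suspected_ids):
    """Count how many telltale signals an account exhibits."""
    score = 0
    if account["friend_count"] < 20:            # very small friend list
        score += 1
    if account["stock_profile_photo"]:          # e.g., a K-pop celebrity image
        score += 1
    if account["active_on_partisan_pages"]:     # heavy posting on campaign pages
        score += 1
    # Mutual-follow cluster: most of the account's follows point at other
    # already-suspected accounts (the 26 accounts all follow one another).
    follows = account["follows"]
    if follows and len(follows & suspected_ids) / len(follows) > 0.5:
        score += 1
    return score

accounts = [
    {"id": "a1", "friend_count": 12, "stock_profile_photo": True,
     "active_on_partisan_pages": True, "follows": {"a2", "a3"}},
    {"id": "b1", "friend_count": 480, "stock_profile_photo": False,
     "active_on_partisan_pages": False, "follows": {"b2"}},
]
suspected = {"a2", "a3"}
flagged = [a["id"] for a in accounts if troll_score(a, suspected) >= 3]
```

An account matching several signals would sensibly be queued for human review rather than removed automatically, since each signal on its own is also consistent with an ordinary, newly created account.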
The propaganda, disinformation and harassment that accompany state-sponsored trolling on Facebook stand in the way of Filipinos’ access to an online experience free from manipulation. For instance, in March 2019, Facebook identified and removed 67 pages, 68 Facebook accounts, 40 groups and 25 Instagram accounts that were linked to Gabunada, Duterte’s campaign social media manager. According to Facebook, 3.6 million accounts followed at least one of the pages. This network, which reached millions of Facebook users, is just one example of how inauthentic campaigns can warp discourse online with artificially bolstered messaging.
Meanwhile, Facebook is working with the Philippine government on projects such as the Luzon Bypass Infrastructure, which upon completion will increase the available internet bandwidth in the Philippines by more than 2 million megabits per second. The company has also launched Digital Tayo, a media literacy program that aims to provide Filipinos with the skills to better navigate digital spaces. These projects represent important steps in Facebook’s effort to clean up its platform in the Philippines.
Facebook would be wrong, however, to act as if these efforts in the Philippines have purged the country’s online space of harmful content. Disinformation continues to run rampant in the country. While unconfirmed, it is likely that Duterte continues to sponsor online trolling, and opposition candidates have adopted some of his 2016 disinformation strategies. Philippine politicians, having seen that Facebook has not held Duterte accountable for his online behavior, appear to be following his recipe for success. As Silverman reports, “[T]his uptick occurred in spite of Facebook investing in third-party fact-checking and acting to remove pages and accounts that violated its policies.”
The status quo is woefully insufficient, and not just in the Philippines. In the most high-profile example, Facebook has been ill-equipped to grapple with state-sponsored trolling in Myanmar, where the company lacked the cultural context, language skills and corporate protocol to remove the military-run troll accounts that incited the Rohingya genocide.
Facebook should clearly communicate standards of conduct to all governments, beginning with the Philippines. Clearly, the company recognizes the challenge that disinformation and state-sponsored trolling pose to the Philippine market: Facebook’s launch of Digital Tayo is a step in the right direction. This institutional momentum to improve the product for Filipino users could be harnessed to rectify unruly state behavior. Because the rise of Duterte’s leadership and of disinformation in the Philippines traces so clearly to state-sponsored trolling, it is a natural place to start. However, the solution we propose for the Philippines is not a one-off but a model for replication in similar states around the world.
To address this problem, Facebook should create a “state-sponsored” classification for troll accounts that are suspected to be funded or otherwise rewarded by any branch of the military or civilian government, or by a government official. The company should outline a list of escalating consequences if a violation is suspected and not remediated by the Philippine government. Without state-sponsored trolls on the platform, Philippine public officials will lose the ability to feign grassroots support and harass dissidents.
If Facebook’s content moderators detect a sizable presence of state-sponsored trolls on the platform or a rise in trolling activity, the company should ask the Philippine government to provide information about its finances, communications and public relations operations as they pertain to the possible abuse of the platform. Facebook should require a reasonable degree of accountability from governments under scrutiny; senior officials would be responsible for alerting Facebook when they detect trolling online and for ending any affiliation with such activities within three months.
Rappler and other Philippine civil society and media organizations with expertise should consult for Facebook’s content moderation team as it seeks to identify and remove trolls from the platform. Rappler, which has flagged thousands of troll accounts associated with the state, would be especially qualified to help Facebook tackle this problem in the Philippines. Rappler should help set standards for identifying patterns of trolling and terminating accounts.
The idea that the Philippines would agree to such an arrangement may sound unrealistic—but Facebook has tremendous bargaining power in the Philippines. Because Facebook effectively is the internet for many people in the country, the platform provides critical means of communications both for individuals and for the government itself. This leverage provides the company with at least a seat at the table in shaping norms of how its platform is used. Facebook should be proactive and use this power to foster a quality and harassment-free experience for the Philippines’ 69 million users.
We believe that with sustained pressure from Facebook, the Philippine government would likely capitulate and cease trolling activity. Duterte needs to maintain his presence on the platform, which is invaluable for burnishing his image and dictating the national dialogue; an end to trolling would curb the extreme distortion of truth and fact while leaving that presence intact. Moreover, the Philippine government has a strong incentive to maintain a good relationship with valuable business partners, including but not limited to Facebook. The government is seeking to secure the public interest of increased connectivity online, and Duterte has prioritized this interest through official statements and infrastructure projects that are connecting more Filipinos to the internet. Likewise, Facebook maintains up to 10,000 global content moderation jobs in the Philippines. Other big technology companies, including Twitter and YouTube, also headquarter global content moderation efforts in the Philippines. Some observers estimate that the content moderation industry employs up to 100,000 Filipinos. Perceptions of instability and a hostile business environment could drive out an industry crucial to the country’s growth.
What would the use of this leverage look like? First, Facebook could rearrange its News Feed algorithm to slow down the speed at which content of uncertain provenance can travel across the platform. The goal of this action would be to stop inflammatory and abusive content posted and shared by trolls from spreading quickly. Trolls might continue to operate online, but the platform would be less amenable to their goals. It would require more effort for any piece of content to have the desired impact. As a result, the state’s finite resources for trolling would be depleted more quickly.
There is significant precedent for modifying the News Feed algorithm. In 2013, Facebook tweaked the algorithm to tailor accredited news and information to each user. In June 2017 and January 2018, Facebook shifted the algorithm to bolster more “meaningful” conversations and content. Further, Facebook, Inc. (which includes Facebook, Instagram and WhatsApp) has taken steps to change its platforms to prevent harm to users. In July 2018, WhatsApp limited message forwarding in response to incendiary false messages that sparked multiple murders by angry mobs in India.
Second, Facebook could deprioritize content posted and shared on the News Feed by government officials suspected of sponsoring trolling. Facebook could move posts by Philippine politicians tied to state-sponsored trolling to the bottom of the News Feed. Their posts would still be accessible but would appear farther down on news feeds and on pages and timelines, where users would be less likely to see or interact with them. Without the same pervasive presence of original references, such material would have less impact on public discourse. While Facebook has not publicly stated that it deprioritized specific individuals’ content, this strategy is not completely without precedent: Twitter announced that, as of June 27, 2019, it would flag and hide content from government officials and political candidates that violates its terms of service.
By adopting this strategy, Facebook would efficiently reduce Duterte’s presence on the platform, thus allowing legitimate news sources to reestablish some normalcy online. In addition, this policy would limit the effectiveness of state-sponsored trolls, who often further develop themes and rhetoric from regime posts. If the Philippine government were to cease trolling activities, Facebook could reinstate the prior visibility of government posts and return the normal functionality of the News Feed.
The lessons learned in the Philippines about deterrence can be applied to other markets to improve discourse and create a healthier global online community. Facebook should roll out a similar strategy first in countries with high platform penetration, meaning those where a large share of internet users are on Facebook. The company should leverage Free Basics services in countries that already have access to them and should consider establishing terms of acceptable state conduct as a prerequisite for any new Free Basics partnership.
Countries enrolled in Free Basics rely on the program for basic internet access. Moreover, in many of these countries, Facebook has intimately integrated itself into public discourse and everyday life. States with high platform penetration do not overlook this fact.
Ending state-sponsored trolling would not fully address the social media manipulation epidemic in the Philippines. For example, this policy does not address disinformation posted by Philippine politicians or civil or military officials per se. Rather, if Facebook identifies a link between a post by a Philippine official and state-sponsored troll accounts, then the post should be deprioritized as described above. Limiting disinformation shared by political figures remains a difficult topic for Facebook. The potential for criticism of political bias and censorship is a virtual minefield for the platform. Combating disinformation by public officials is a logical next step after taking concrete action against state-sponsored trolling.
Additionally, an end to trolling will not solve users’ difficulty discerning between reliable and unreliable sources, which is paramount to mitigating platform abuse. The complexity of trolling campaigns necessitates media literacy training, both online and offline. Given its long-term interest in building a user base that trusts the platform, Facebook must continue to invest in robust digital media literacy programming in the Philippines that empowers local actors to better train Filipinos to recognize and reject disinformation. Such a commitment is imperative if Facebook hopes to be proactive in its efforts to combat all kinds of online disinformation.
The success or failure of Facebook’s efforts to combat the state-sponsored trolling of the Duterte government matters both to Facebook from a narrow business perspective and to society at large.
Facebook’s future growth depends heavily on the success and health of its user base in emerging markets like the Philippines. Facebook currently generates the vast majority of its revenue from developed markets such as North America and Europe, but future growth must come from other sources. In the first quarter of 2019, the number of Facebook’s monthly active users (MAUs) in the Asia-Pacific region grew by 34 million, compared to 1 million added MAUs in the U.S. and Canada combined, and 3 million additional MAUs in Europe. Facebook expects that user growth in Asia-Pacific and the rest of the world will continue to outstrip user growth in the U.S., Canada and Europe. In particular, Facebook named the Philippines as one of three countries that served as a “key source of growth” in MAUs in 2018.
Given these growth prospects, Facebook must be proactive in addressing the social and political hazards of propaganda and trolling, which have already eroded user trust in the U.S., Canada and Europe. Facebook should take meaningful action to ensure that users in countries with high MAU growth are not relegated to an internet experience with rampant disinformation and abusive content. Indeed, the company acknowledges that the key to future growth is maintaining its brand by promoting a healthier community. Just as Facebook successfully deployed vast resources to onboard users, it must invest in building a healthier platform for its users in the long run.
In March 2019, Facebook proved itself capable of leading among major internet platforms when it announced a ban on white supremacist content. Likewise, by redefining its platform in markets like the Philippines, Facebook would become the first social media company to directly confront states that sponsor trolling. If Facebook makes headway, other social media platforms might follow a similar model. The lessons learned in the Philippines would help Facebook create a safer online community in emerging markets, helping Facebook’s bottom line and benefiting hundreds of millions of Facebook users who confront state-sponsored trolls on a daily basis.