During the chaotic withdrawal from Afghanistan this summer, U.S. policymakers had to decide whether to formally recognize the Taliban as the new Afghan government. But the first policymakers to address this question publicly were not government officials. They were trust and safety and public policy executives at the major tech platforms Facebook (now Meta), Google and Twitter. Their seemingly minor decision about whether to allow the Taliban to use official Afghan government accounts would have effects similar to formal state recognition: letting the Taliban communicate with the Afghan people through official channels would imbue the group with legitimacy. Ultimately, the platforms decided to continue banning Taliban content.
This is but one example of what I call platforms’ geopolitical turn. Since 2016, major platforms have become increasingly engaged in a plethora of security and geopolitical challenges that were traditionally the state’s domain. They enforce counterterrorism policies and maintain blacklists of “dangerous individuals and organizations” similar to government sanctions lists. They work on securing elections in the United States and in other countries. They detect and remove state-backed influence operations and other forms of coordinated harmful behavior. They have taken steps to reduce their role in mass atrocities in global hotspots after fueling the 2017 genocide in Myanmar and contributing to sectarian violence elsewhere. They even played an active role in the latest armed conflict between Israel and Hamas. Facebook now boasts a 40,000-strong trust and safety workforce. For comparison, the entire U.S. Foreign Service numbers roughly 15,600 officials.
How does platform security and geopolitical work integrate with government policy and enforcement priorities? What does the government-platform dynamic portend for regulation? In a forthcoming paper focusing on the United States, I approach platform governance and regulation through a national security lens. That lens adds a new facet to a conversation that has so far been dominated by speech, privacy and competition concerns. It highlights regulatory interests that conflict with those considerations, and it focuses attention on the government-platform nexus. The paper also develops new theoretical insights about national security privatization in an environment in which powerful private technology actors increasingly create and control spaces that governments cannot easily reach.
But first, a caveat: None of the foregoing assumes that platforms’ commitment to mitigating the security and geopolitical harms that they cause is genuine, or that those efforts are effective. Indeed, recent revelations about Facebook’s practices once again raise hard questions about its willingness to prioritize user well-being and the mitigation of societal harm over profit. Other platforms have faced similar criticism. Nevertheless, it is hard to deny that platforms are doing more work in the security and geopolitical space than they did prior to 2016, and that they play an important role in the modern security and geopolitical environment. It is necessary to better understand that role.
The Platform-Government Nexus
Platforms’ increased engagement with security and geopolitics has led to both internal organizational capacity building and greater voluntary cooperation with government national security agencies. That cooperation goes beyond sharing data pursuant to various legal authorities. There have been multiple reported examples of government actors providing intelligence tips to platforms around specific incidents. In addition, platforms have engaged in more institutionalized forms of cooperation with governments. One example of long-term cooperation is the Global Internet Forum to Counter Terrorism (GIFCT). The GIFCT comprises platforms, governments and other stakeholders. Another example is the platform-government election working group that met regularly in preparation for the 2020 U.S. elections.
Government-platform security and geopolitical interactions can have different drivers. Often, what drives government-platform cooperation are hard constraints on government in addressing security and geopolitical challenges that play out largely in privately controlled theaters. Despite the federal government’s expansive power to regulate in furtherance of national security interests, the First Amendment is a significant obstacle to direct government regulation of core platform security and geopolitical practices that inform content moderation. These practices include information gathering, threat analysis, and policy development and enforcement. As it is currently interpreted, Section 230 of the Communications Decency Act affirmatively invites platforms to engage in such practices in order to self-regulate content. Moreover, platforms have a significant institutional advantage over governments in addressing threats to their products and infrastructure. They know their technology and their users, and they are better placed to detect and respond to threats in real time.
Other times, government actors rely on platforms as bureaucratic workarounds. In these cases, government actors have the legal authority and institutional capacity to act directly, but they opt to work through platforms informally. Government actors might prefer to conceal their involvement to protect sources and methods or to avoid overt conflict with a foreign actor. Alternatively, using platforms as their long arm may allow government actors to accomplish certain goals faster and without having to labor to meet the legal and procedural requirements that would apply to direct government action. In yet other cases, certain players within agencies may go to platforms to advance a policy that faces internal political objection, as the Cybersecurity and Infrastructure Security Agency did when it cooperated with platforms to secure the 2020 U.S. elections while the White House mounted a campaign to discredit the electoral process.
Finally, platforms can act as substitutes for government. Platforms at times advance policies that openly defy government on security or geopolitical matters. In other cases, they may step into a void that governments have left through indecision, neglect or lack of interest. In those cases, platforms act unilaterally, not in cooperation with governments.
I argue that this emerging government-platform relationship—national security by platform—is a novel mode of national security privatization. It is privatization in the sense that platforms now perform core government security and geopolitical functions. But it deviates from paradigmatic models of privatization in both form and substance. The existing privatization paradigm has emphasized a deliberate delegation model, in which government actors actively transfer functions to private actors through a legal instrument such as a contract or another regulatory measure. There are typically subject matter and geographic restrictions on the scope of privatized functions, and the government maintains an oversight role. Major examples of national security privatization—like the U.S. reliance on private contractors for warmaking in Iraq and Afghanistan or the U.S. intelligence agencies’ widespread use of private contractors—largely adhered to this model.
Platforms now designate terrorist groups, detect and counter Russian influence campaigns, identify domestic militarized movements, and participate in geopolitical conflicts. But there is no deliberate transfer of government functions and responsibilities, and no government gatekeeping role. Platforms decide how involved they want to be in security and geopolitics based on their own self-interest. In many cases, they act unilaterally, without cooperating with the government; in others, they openly defy it. There is no anchoring legal instrument that sets the parameters of platforms’ security and geopolitical responsibilities, and no subject matter or geographic restriction on the scope of privatized functions. Those functions run the gamut of national security and geopolitical challenges, and given their near-universal presence, platforms must respond to threats on an ongoing basis almost anywhere in the world.
Approaching the problem of platform regulation through a national security lens illuminates different regulatory concerns than the ones that have dominated that debate to date. Platforms are now key security and geopolitical actors. And some government-platform cooperation around security and geopolitics is inevitable because the government depends on platforms to address an important set of modern security and geopolitical challenges. My paper explores some preliminary implications of national security by platform for regulation.
For example, advocates and scholars have criticized soft cooperative arrangements such as the GIFCT and the election working group on freedom of expression grounds. They warned that such cooperative arrangements create content cartels, allowing a small set of actors to determine for the entire online ecosystem what content stays up and what comes down. But given that the government depends on platforms to address important security and geopolitical challenges effectively, soft cooperative arrangements might be unavoidable. They are a second-best option considering the constitutional obstacles facing hard regulation of platforms’ security and geopolitical practices and the government’s institutional deficits online. They allow coordination, may create at least some mutual accountability, and give the government visibility into platform practices. Over time, they can help develop norms for platform security and geopolitical governance.
Antitrust is another key concern that has dominated the platform regulation debate. A flurry of proposed legislation in this area is making its way through Congress, and “break up Big Tech” has been the battle cry of prominent critics of platform power. But from a national security vantage point, platform size and the dominance of several major players might be an advantage. The fewer players involved in policing and responding to online threats, the easier it is for the government to build partnerships and coordinate public-private responses to geopolitical and security challenges.
None of this means that security concerns must necessarily prevail in devising platform regulation. There are good reasons to approach nontransparent government-platform security cooperation with suspicion because it may facilitate abuses on both ends of this relationship. Nevertheless, policymakers and scholars alike need to better understand platforms’ role in the modern security and geopolitical environment, how they interact with the government in this space and how national security concerns might impact the broader platform regulation conversation. My forthcoming paper aims to advance the conversation in that direction by mapping and theorizing national security by platform.