Yael Eisenstat, Justin Hendrix, and Daniel Kreiss
The Bulletin of Technology & Public Life
On the heels of reporting that far-right extremist militias are once again organizing on Facebook in advance of the 2024 U.S. presidential election, it is urgent for platforms to assess – and immediately act on – threats to the peaceful conduct of elections and the holding and transfer of power.
On March 26th, 2024, a working group of experts met to discuss the relationship between online platforms and election-related political violence. The goal was to provide realistic and effective recommendations to platforms on steps they can take to ensure their products do not contribute to the potential for political violence, particularly in the lead-up to and aftermath of the U.S. general election in November, but with implications for states around the world.
Relying on online platforms to “do the right thing” without regulatory and business incentives that reinforce pro-democratic conduct may seem increasingly futile, but we believe there remains a critical role for independent experts to play in shaping the public conversation and shining a light on where these companies can act more responsibly.
Here are seven recommendations in brief:
- Prepare for threats and violence: Platforms must develop robust standards for threat assessment and crisis planning, and ensure they have adequate multidisciplinary expertise and collaboration across teams. They should prepare for potential violence through scenario planning, crisis training, and engagement with external stakeholders, with as much transparency as possible.
- Develop and enforce policies to meet the threat: Platforms should enforce clear and actionable content moderation policies that address election integrity year-round, proactively countering election denialism and potential threats against election workers and systems while being transparent about enforcement and sensitive to local contexts.
- End exceptions for high-value users: Politicians and other political influencers should not receive exemptions from content policies or special treatment from the platforms. Platforms should enforce their rules uniformly, and be especially attentive to preventing such users from monetizing harmful content and promoting election mis- and disinformation.
- Resource adequately: Platforms should scale up teams focused on election integrity, content moderation, and countering violent extremism while ensuring that outsourcing to third parties in the growing trust & safety vendor ecosystem doesn’t impede expertise and rapid response to emerging threats and crisis events.
- Be transparent about content moderation decisions: Platforms must clearly explain significant content moderation decisions during election periods, especially those involving high-profile accounts. They should establish a crisis communication strategy and be ready to explain cooperation with fact-checkers and other sources that inform enforcement actions.
- Collaborate with researchers, civil society, and government where appropriate: Platforms should collaborate with independent researchers, civil society groups, and government officials to enhance the study and mitigation of election misinformation, within the context of applicable laws. Platforms should maximize data access, and proactively counter false claims about such collaborations.
- Develop industry standards: Platforms should establish industry standards that respect diverse approaches to moderating speech but prioritize protecting elections. An industry body, similar to the Global Internet Forum to Counter Terrorism (GIFCT), could help develop clear threat assessment capabilities, enforce consistent policies, and facilitate collaboration to defend against threats to democracy and political violence.