A range of approaches to child online safety legislation (COSL) is being proposed, debated, or implemented at both the federal and state levels in the United States. While the specifics of these bills differ, they coalesce around concerns about the effects of social media on young people. This document:
- Explains these concerns, why they have surfaced now, and how COSL purports to solve them.
- Outlines major international, US federal, and state legislative efforts, particularly the Kids Online Safety Act (KOSA).
- Summarizes the primary frames by which COSL is justified (mental health, sexual exploitation and abuse, eating disorders and self-harm, and social media addiction) and evaluates the evidence for each.
- Outlines concerns that researchers, activists, and technologists have with these bills: age verification, privacy and surveillance, First Amendment rights, and expansion of parental control over young people’s rights and autonomy.
While the impetus for this legislation is well-meaning, we question the assumptions behind it. Mental health and well-being are complicated and tied to many different social and contextual factors. Solutions that focus exclusively on technology address only a small part of this picture. The granular debate over the evidence linking smartphones and social media to youth well-being distracts us from the real difficulties faced by young people.
COSL poses enormous potential risks to privacy and free expression, and it will limit young people’s access to social connections and important community resources while doing little to improve the mental health of vulnerable teenagers. Ultimately, legislation like KOSA is an attempt to regulate the technology industry after other efforts have failed, using moral panic and for-the-children rhetoric to rapidly pass poorly formulated legislation.
We strongly believe that reform of social platforms and regulation of technology are needed. We need comprehensive privacy legislation, limits on data collection, interoperability, more granular individual and parental guidance tools, and advertising regulation, among other changes. Offline, young people need spaces to socialize without adults, better mental health care, and funding for parks, libraries, and extracurriculars. But rather than focusing on such solutions, KOSA and similar state bills empower parents rather than young people, do little to curb the worst abuses of technology corporations, and enable an expansion of the rhetoric currently used to ban books, eliminate diversity efforts in education, and limit gender-affirming and reproductive care. They will eliminate important sources of information for vulnerable teenagers and wipe out anonymity on the social web. While we recognize the regulatory impulse, the forms of child safety legislation currently circulating will not solve the problems they claim to remedy.
On the heels of reporting that far-right extremist militias are once again organizing on Facebook in advance of the 2024 U.S. presidential election, it is urgent for platforms to assess, and immediately act on, threats to the peaceful conduct of elections and the holding and transfer of power.
On March 26th, 2024, a working group of experts met to discuss the relationship between online platforms and election-related political violence. The goal was to provide realistic and effective recommendations to platforms on steps they can take to ensure their products do not contribute to the potential for political violence, particularly in the lead-up to and aftermath of the U.S. general election in November, but with implications for states around the world.
Relying on online platforms to “do the right thing” without regulatory and business incentives that reinforce pro-democratic conduct may seem increasingly futile, but there remains a critical role for independent experts to play in both shaping the public conversation and shining a light on where these companies can act more responsibly.
Here are seven recommendations in brief:
- Prepare for threats and violence: Platforms must develop robust standards for threat assessment and crisis planning, and ensure they have adequate, multidisciplinary expertise and collaboration across teams. They should prepare for potential violence by engaging in scenario planning, crisis training, and engagement with external stakeholders, with as much transparency as possible.
- Develop and enforce policies to meet the threat: Platforms should enforce clear and actionable content moderation policies that address election integrity year-round, proactively addressing election denialism and potential threats against election workers and systems while being transparent about enforcement and sensitive to local contexts.
- End exceptions for high-value users: Politicians and other political influencers should not receive exemptions from content policies or special treatment from the platforms. Platforms should enforce their rules uniformly and be especially attentive to preventing such users from monetizing harmful content and promoting election mis- and disinformation.
- Resource adequately: Platforms should scale up teams focused on election integrity, content moderation, and countering violent extremism, while ensuring that outsourcing to third parties in the growing trust & safety vendor ecosystem doesn’t impede expertise and rapid response to emerging threats and crisis events.
- Be transparent about content moderation decisions: Platforms must clearly explain important content moderation decisions during election periods, especially when it comes to the moderation of high-profile accounts. Platforms should establish a crisis communication strategy and be ready to explain cooperation with fact-checkers and other sources that inform enforcement actions.
- Collaborate with researchers, civil society, and government where appropriate: Platforms should collaborate with independent researchers, civil society groups, and government officials to enhance the study and mitigation of election misinformation, within the context of applicable laws. Platforms should maximize data access and proactively counter false claims about such collaborations.
- Develop industry standards: Platforms should establish industry standards that respect diverse approaches to moderating speech but prioritize protecting elections. An industry body, similar to the Global Internet Forum to Counter Terrorism (GIFCT), could help develop clear threat assessment capabilities, enforce consistent policies, and facilitate collaboration to defend against anti-democratic threats and political violence.
(Research Summary by Katherine Furl)
Recent upticks in election denial highlight journalism’s role in alerting the public to the risks such denials pose to the peaceful transfer of political power and to democracy itself. In “Safeguarding the Peaceful Transfer of Power: Pro-Democracy Electoral Frames and Journalist Coverage of Election Deniers During the 2022 U.S. Midterm Elections,” recently published in the International Journal of Press/Politics, Hesoo Jang and Daniel Kreiss analyze national and local news coverage of 2022 U.S. midterm races in which politicians who denied the results of the 2020 presidential election were on the ballot. They find that journalists repeatedly failed to direct public attention to how election denial undermines the legitimacy of the electoral process.
Jang and Kreiss develop a new framework to assess journalism’s role in protecting democratic institutions, introducing the concept of “democracy-framed electoral coverage.” For electoral coverage to be considered democracy-framed, Jang and Kreiss assert it must go beyond merely mentioning political candidates’ election denialism. Democracy-framed electoral coverage must also “positio[n] election denial as a violation of democratic norms with deleterious implications for democracy...fundamentally different from other campaign issues.” Jang and Kreiss consider democracy-framed electoral coverage a core responsibility for journalists to uphold—without alerting the public to anti-democratic threats, journalism risks tacitly consolidating power among anti-democratic political leaders.
Using the 2022 U.S. midterm elections as a case, Jang and Kreiss analyzed over 700 national and local news articles covering elections with 2020 election deniers on the ballot to determine if, and when, journalists employed democracy-framed electoral coverage. Across these articles, Jang and Kreiss found limited use of pro-democracy electoral frames. Even strong statements asserting denial was untrue (which fall short of democracy-framed electoral coverage, but at least attempt to counter blatant falsehoods) were less common than weak statements simply mentioning election denialism without attempting to correct it. Journalists rarely consulted election administrators, despite their nonpartisan role in ensuring secure and fair elections.
Jang and Kreiss additionally interviewed twelve journalists from local, state, and national news outlets to better understand the dearth of democracy-framed electoral coverage. These journalists very rarely considered that election denialism and other falsehoods could be employed in strategic bids for political power. Given the threats to democracy posed by election denialism and similar surges in anti-democratic rhetoric, Jang and Kreiss urge future research to interrogate journalism’s responsibility to alert the public to these and other anti-democratic threats, in the United States and across the globe.