Regulating Facial Recognition Technology

(Research Summary by Katherine Furl) 

How are Americans legally protected from facial recognition technology? And how do those protections relate to the First Amendment, particularly Americans’ right to free speech? In “Regulating Facial Recognition Technology,” Evan Ringel and Amanda Reid analyze the patchwork of regulatory approaches to facial recognition technology (FRT) in the United States to determine how closely FRT protections map onto First Amendment protections. 

FRT’s use as an identification tool poses several ethical and legal dilemmas. “One-to-many” facial matching in FRT requires access to large, preexisting databases of facial images and the collection of biometric information, identifiers tied to a person’s physical characteristics. Large FRT databases and biometric data collection raise data security concerns, and inaccurate or biased use of FRT disproportionately harms marginalized communities, which are more likely to face surveillance by law enforcement and other government agencies. FRT can be used not only by law enforcement and other government agencies but also by private corporations and even individual citizens, raising additional concerns about information protection and privacy.  

Despite these controversies, Ringel and Reid note that “there is no comprehensive federal law regulating use” of FRT, and “few states have passed laws explicitly focused towards facial recognition technology.” Instead, Americans’ legal protections related to FRT are scattered across a patchwork of state laws and local ordinances. Ringel and Reid dig into these regulations to determine how they connect to First Amendment protections, asking in particular whether Americans’ information used in FRT can be considered protected speech.  

Though Ringel and Reid find “significant judicial support at the lower court level for a broad interpretation of information as speech”—interpretations that would likely protect Americans’ information from nonconsensual use in FRT under the First Amendment—other aspects of these regulations complicate matters. For one, several regulations refer to terms like “facial recognition” or “biometric data” without properly defining them, making the protections they offer difficult to enforce. Additionally, Ringel and Reid identify two differing approaches in these ordinances, each with different implications and different prospects for support from lawmakers and the broader public:  

  1. Broad bans or moratoriums, which would regulate use of Americans’ information in FRT across the board. While Ringel and Reid note this approach might be “constitutionally permissible,” they also assert it “may not be politically attractive” given considerable public support for FRT’s use among law enforcement. 
  2. Case-by-case evaluations of acceptable use, while allowing for individual determinations of when FRT can be employed, introduce their own potential issues. As Ringel and Reid point out, “government agencies and private actors have little incentive to comply with procedural requirements,” so “it is unclear whether procedural requirements like a use policy or notice requirement provide a sufficient barrier against more intrusive uses of facial recognition technology.”

Ultimately, Ringel and Reid argue Americans’ information could theoretically be protected under the First Amendment, and several court cases and state and local regulations support such protections. Even so, these protections lack robust federal support and clear definitions—and as long as that remains the case, Americans’ protections related to FRT remain uncertain and temporary. 

Platforms are Abandoning U.S. Democracy

(Research Summary by Katherine Furl)

With the 2024 election fast approaching, have social media platforms learned a lasting lesson about moderating electoral disinformation? In their Tech Policy Press article “Platforms are Abandoning U.S. Democracy,” Bridget Barrett and Daniel Kreiss argue that platforms risk repeating the mistakes of the 2016 election nearly eight years on. 

Social media platforms made positive strides during the 2020 election as “democratic gatekeepers,” taking steps to protect the integrity of U.S. elections and to ensure the peaceful transition of power. More recently, however, Barrett and Kreiss find that platforms are backpedaling on this progress. For example, Donald Trump has been reinstated on platforms including Meta, YouTube, and X/Twitter “despite the fact that the former president continues to spout lies about the 2020 election and is actively working to undermine confidence in the next one.”  

While these platforms appear to be taking a hands-off approach to the 2024 election, Barrett and Kreiss point out that doing so ignores the potential widespread harm of election denialism and other electoral disinformation. The hope that permitting more and more information, regardless of its accuracy, will allow false claims to be naturally countered and corrected is not supported by Barrett and Kreiss's research. As the authors put it,  

“Platforms seem to have the naive view that speech and expression is always in good faith and that more political speech is always beneficial since the best ideas will ultimately come out on top. This is both blind to political manipulation and the real world empirical evidence.” 

Instead, repeated exposure to electoral disinformation tends to increase people’s belief in it rather than the likelihood that it will be countered. Compounding this, false information tends to spread more quickly on social media platforms than accurate news. 

So, what can platforms do to avoid a repeat of the 2016 election? Barrett and Kreiss strongly urge platforms to de-platform antidemocratic accounts involved in the spread of electoral disinformation. Doing so “decreases audiences for those who would take away the political freedom of others.” 

There is still time for platforms to adopt practices that protect the democratic integrity of the 2024 election. As Barrett and Kreiss make clear, however, the clock is ticking and the time for platforms to act is now. 

Social Media Policy in Two Dimensions

(Research Summary by Felicity Gancedo)

American public opinion on who should be responsible for governing online content is complicated. In “Social media policy in two dimensions: understanding the role of anti-establishment beliefs and political ideology in Americans’ attribution of responsibility regarding online content,” Heesoo Jang, Bridget Barrett, and Shannon McGregor answer the question “If partisanship doesn’t explain content governance opinions, then what does?” Jang, Barrett, and McGregor argue that attitudes toward content governance are shaped by political attitudes beyond partisanship, by a lack of partisan ties, and by a general lack of political interest.

Jang, Barrett, and McGregor investigated public support for three possible bearers of that responsibility: the government (content regulation), social media companies (content moderation), and individual users (individual responsibility). They found that those with anti-establishment beliefs, regardless of their partisan leaning, are less likely to support government regulation and social media content moderation but more likely to favor individual responsibility. For those who believe in a more active government role, on the other hand, government regulation of content is an appealing option.

The authors discussed how “anti-establishment beliefs are relational, reflecting a deep distrust between an individual and society… Those against government regulation are concerned less with protecting free speech for all and more driven by protecting their rights as an individual.”

The individual responsibility view is the content governance preference most commonly held by those with anti-establishment beliefs. But belief in individual responsibility assumes that all users have the capacity to be ‘responsible’. The individual responsibility model puts “a particular burden on those most affected by harmful content online – women, people of color, and other historically marginalized groups.”

Individual responsibility is not only likely to “exacerbate existing inequalities”; it is also the least effective model of platform governance, and scholarly and regulatory discussion is rapidly moving toward a co-governance model that pairs increased government regulation with decreased self-regulation.

As the authors noted:

“The public's perspective on platform regulation holds significance, even though current debates often revolve around think tanks, political actors, journalists, and academics… Any form of platform governance, whomever the responsible actor – whether it be governments, platforms, or individuals themselves – impacts both the formation and content of public opinion.”

Understanding the underlying beliefs that shape the public’s content governance preferences provides needed insight into public opinion and may suggest promising new approaches to defining content governance regulations.