Not everyone deserves their own Wikipedia page; that’s why Wikipedia’s notability guidelines exist. But definitions of notability are unevenly applied across race and gender lines:
Wikipedia’s editors are less likely to consider you “notable” if you’re not a white man.
Using a combination of qualitative and statistical analysis of Wikipedia pages nominated for deletion, Mackenzie Emily Lemieux and Rebecca Zhang join with Francesca Tripodi to explore how Wikipedia’s notability considerations are applied to female and BIPOC academics. They examined two key metrics used in the process of establishing notability on Wikipedia: the search engine test and the “too soon” label. The search engine test determines whether a person’s online presence is well covered by reputable, independent sources. In their analysis, Lemieux, Zhang, and Tripodi found that this test predicts whether white male academics’ pages will be kept or deleted. But academics who are women and people of color are more likely to have their Wikipedia pages deleted, even when they have an equivalent or greater online presence than their white male peers.
The second metric, “too soon,” is a label applied to Wikipedia pages when a Wikipedian thinks there are not yet enough independent, high-quality news sources about the page’s subject. Women of all races are more likely than men to be considered not yet notable (i.e., “too soon” to be on Wikipedia). The online encyclopedia’s editors were also more likely to justify applying this label to women based on their career stage (e.g., “she’s an assistant professor” and therefore not yet notable), yet women who received the tag were, on average, further along in their careers than the men who received it. Individual bias continues to disadvantage women and people of color on Wikipedia, and the platform continues to allow these hidden biases to shape how notability is determined.
Digital trace data can change the study of misinformation, algorithmic bias, and well-being.
In this coauthored article, Deen Freelon joins with scholars across the United States and the Netherlands to discuss methods of digital trace data collection and measurement. Digital trace data (or DTD) are the tracks we leave in the snow of the digital spaces we occupy (e.g., the cookies in your browser after an online shopping spree or the log data tracked by social media platforms). Although we commonly think of these kinds of data in terms of privacy violations, they also offer researchers an opportunity to overcome the known limitations of platform-centric and user-centric approaches. Of course, collecting, organizing, analyzing, and measuring these data in meaningful ways comes with its own challenges. Read on for a deep dive into best practices for working with DTD.
You can’t understand privacy without understanding power.
When we think about “privacy,” we often think of an individual’s privacy, her individual right to privacy, and individualized strategies for retaining control over private information. This is an ahistorical way of thinking that ignores how privacy violations are patterned along lines of marginality and domination. Populations identified as “dangerous,” like Muslims after 9/11 or transgender individuals in the current political climate, are disproportionately surveilled, their privacy deemed violable in the interests of other people’s safety.
Unfortunately, as Marwick shows in this paper, such systemic issues are rarely addressed at global privacy conferences like IAPP and SOUPS. Instead, techno-optimism rules the day, complete with sales booths offering the newest products individuals might use to protect themselves. But privacy isn’t an individual problem that individual solutions like fiddling with privacy settings will solve: privacy violations are gendered (most stalkerware is used against women by current or former male partners) and they are raced (as Black and brown communities in the U.S. are all too aware). Conceptualizing privacy without power is simply inadequate in the current moment.