Lack of standardization among platform political ad policies and products is causing problems for democracy
In the wake of the 2016 Brexit referendum and U.S. presidential election, the major platform companies, including Facebook (and Instagram), Google (and YouTube), and Twitter, made significant changes to the scope of the products and services they offer, as well as to their policies for working in institutional politics, especially in the context of digital political advertising. For example, all three companies rolled out verification processes for political advertisers and public ad databases. Facebook ended commissions on political ad sales and its “embed” program with campaigns, Google placed restrictions on political microtargeting, and Twitter ended political advertising entirely and placed restrictions on what it has named “cause-based” advertising. This paper analyzes the policies and products of platform companies with respect to digital political advertising in the U.S. Our focus is on social media platforms (Facebook, Instagram, Twitter, Snapchat, Reddit, and YouTube) and the advertising capabilities accessible through them, as well as Google search and the Google display network.
Digital blackface spreads disinformation through "sockpuppet" Twitter accounts
The recent rise of disinformation and propaganda on social media has attracted strong interest from social scientists. Research on the topic has repeatedly observed ideological asymmetries in disinformation content and reception, wherein conservatives are more likely to view, redistribute, and believe such content. However, preliminary evidence has suggested that race may also play a substantial role in determining the targeting and consumption of disinformation content. Such racial asymmetries may exist alongside, or even instead of, ideological ones. Our computational analysis of 5.2 million tweets by the Russian government-funded “troll farm” known as the Internet Research Agency sheds light on these possibilities. We find stark differences in the numbers of unique accounts and tweets originating from ostensibly liberal, conservative, and Black left-leaning individuals. But diverging from prior empirical accounts, we find racial presentation—specifically, presenting as a Black activist—to be the most effective predictor of disinformation engagement by far. Importantly, these results could only be detected once we disaggregated Black-presenting accounts from non-Black liberal accounts. In addition to its contributions to the study of ideological asymmetry in disinformation content and reception, this study also underscores the general relevance of race to disinformation studies.
Political content from Russian trolls appears crafted to exploit intergroup distrust and enmity, on both sides of the aisle
Evidence from an analysis of Twitter data reveals that Russian social media trolls infiltrated distinct groups of authentic users by exploiting and playing on their racial and political identities. The affected groups spanned the ideological spectrum, suggesting the importance of coordinated counter-responses from diverse coalitions of users.