When Social Media Data Disappear

Engaging with data that have been erased requires methodological creativity

This chapter in an edited volume explores the question of how to collect data that have been erased from their primary locations on the Web, a category Freelon calls absent data. In such situations, the standard methods of data collection cannot be applied; indeed, in some cases it may not be possible to obtain absent data at all. Using an empirical case of the Internet Research Agency (IRA), Freelon offers four methods for collecting absent data when more standard methods fail.

Minding the gap between public opinion and social media data

For most of the twentieth century, public opinion was nearly synonymous with polling. Enter social media, which has upended the social, technical, and communication contingencies upon which public opinion is constructed. This study documents how political professionals turn to social media to understand the public, charting important implications for the practice of campaigning as well as the study of public opinion itself. An analysis of in-depth interviews with 13 professionals from 2016 US presidential campaigns details how they use social media to understand and represent public opinion. I map these uses of social media onto a theoretical model that accounts for quantitative and qualitative measurement and for instrumental and symbolic purposes. Campaigns’ use of social media data to infer and symbolize public opinion is a new development in the relationship between campaigns and supporters. These new tools and symbols of public opinion are shaped by campaigns and drive press coverage, highlighting the hybrid logic of the political media system. The model presented in this paper brings much-needed attention to qualitative data, a novel aspect of using social media to understand public opinion. The use of social media data to understand the public, for all its problems of representativeness, may provide a retort to long-standing criticisms of surveys—specifically that surveys do not reveal hierarchical, social, or public aspects of opinion formation. This model highlights a need to explicate what can—and cannot—be understood about public opinion via social media.

Calculating publication impact by citations is deeply flawed, Deen Freelon argues

Pablo J. Boczkowski and Michael X. Delli Carpini have done the field a great service with “On Writing in Communication and Media Studies,” Freelon argues. He predicts the article will soon become a classic of first-year PhD proseminars given its clarity and efficacy in laying out the inner workings of the major genres of writing in which we most often participate. In this response, Freelon offers two brief points, both of which pertain to the general issue of how the impact of various forms of scholarly writing should be assessed. Questions of impact are inseparable from discussions of scholarly writing in any discipline, as the incentives in place for various writing genres will, to a substantial extent, determine how much of each genre is produced. First, we should consider impact primarily at the level of the writing product as opposed to the journal or outlet level. Second, and relatedly, optimally assessing impact requires knowing which values of each metric count as outstanding, a task that requires distributions of impact metrics for scholars in the same subfield who started publishing around the same time. Working toward such a solution would generate an empirical basis for standards of impact, which our field currently lacks.