In the first seven episodes of this season, we’ve explored how disinformation plays on our biases, fuels our anger, and even nudges us to find only what we already wanted to believe. The mess is daunting.
Building a healthier, informed democracy is not an individual project, but it’s one we begin imagining in this episode. Given what we know about the problem, how do we begin to fix things?
Listen to the episode
Listen on Apple Podcasts
Listen on Spotify
In this episode, we referenced:
- Sarah Roberts. Behind the Screen. Yale University Press.
- “What Happened When Trump Was Banned on Social Media.” The New York Times.
- Charlie Warzel. “What Newsrooms Still Don’t Understand About the Internet.” Galaxy Brain.
- Nikki Usher. News for the Rich, White, and Blue: How Place and Power Distort American Journalism. Columbia University Press.
- “AG Ferguson sues Facebook for repeatedly violating Washington campaign finance law.” Washington State Office of the Attorney General.
- “Why do we place disproportionately high value on things we helped to create? The IKEA Effect, Explained.” The Decision Lab.
- Sarah Jackson. “Twitter Made Us Better.” The New York Times.
- Eszter Hargittai and Heather Young. “Searching for a ‘Plan B’: Young Adults’ Strategies for Finding Information about Emergency Contraception Online.” Policy & Internet.
Shannon McGregor 00:03
If we’re all arguing and we’re all exhausted and we get to the point where we’re feeling hopeless, then the status quo remains.
Kathryn Peters 00:15
Welcome to Does Not Compute, a podcast about technology, people, and power. Let’s be real, studying these topics is hard. Understanding disinformation and extremism, really seeing what information is circulating, who’s creating it, and how it’s spreading, takes digging in and reading through it all. That makes for a good long look at our ugliest human impulses. It’s no wonder professional content moderators develop PTSD.
Just in this season we’ve talked about how ideologues use humor to promote hate, how they game search results for others to find, twist immigrant communities’ histories to fit their political narratives, and play on anger to spread their ideas. We’ve talked about living in an information environment where these lies circulate, why we might not recognize them, and why we might even embrace them. We’ve heard how hard it is to push back on those beliefs, even within our own families, at our own dinner tables.
Stepping back, it’s easy to feel hopeless. I went back to the researchers to understand what they see as the potential repairs we might make, where they find hope for the future of our information era and our democracy, and how we collectively don’t give up. Naturally, that’s not to say any of these measures are easy or are enough in and of themselves. Alice suggested earlier this season that we think about disinformation as garbage and talk about how we might clean it up. Now we know more about how to remove its sources and reduce or change the incentives for generating more of it, but that’s a lot more work than just the online equivalent of a weekly curbside collection. Not that what we’re doing now is going so well. While there are signs of hope, there’s still a significant amount of work ahead of us.
Alice Marwick 2:02
Sometimes I feel like disinformation researchers are very judgmental of the general public. I really want that to end because disinformation is not the fault of people who are gullible, fools, cultural dupes spreading around garbage that somebody else put in front of them. That’s just not how it works.
No, it’s not how it works. If we continue, especially as academics, to come up with solutions like media literacy and fact-checking and grant them legitimacy through our research on them, then we are basically allowing platforms and media manipulators and racists, homophobes, and misogynists, all of them, to succeed, because we’re making this an individual problem that can never be solved individually, and I think we should do everything we can to reject that.
Deen Freelon 3:04
I think it’s really important that we understand that the actions that we may take as individuals should not preclude systemic action, think it’s really a both and type situation. I think that sometimes people say, “oh, well, if you just become media literate or you just engage in this like checklist of things to do, then everything will be fine” and that’s not really true. I think that good faith conversations, as far as this goes, really acknowledge that it takes the individual as well as systemic actions by the platforms and others to address this problem comprehensively.
Unsurprisingly, several researchers highlighted a lot more that technology companies could be doing in terms of how they define and enforce their policies. What’s allowed, what’s not, who gets to use these platforms as personal megaphones? We’ll hear first from Deen Freelon and then again from Alice Marwick and Shannon McGregor.
One of the major platform interventions that we’ve seen that’s been highly effective has been de-platforming. The New York Times recently had a nice visualization showing the drastically limited reach of Donald Trump’s statements after he was de-platformed. The reach is not zero, because people will still carry forth his words even after his accounts were removed from Twitter and Facebook, but it went down by at least an order of magnitude, a huge, huge reduction. And a very large amount of the disinformation that’s out there is spread by a very small number of people. In other words, a very small number of targeted de-platformings could have an outsized effect on the amount of disinformation that is spreading within these platforms.
I do think the platforms have some things to answer for. I think, especially as we look at the coverage that comes out more and more now about what Facebook knew and when, and what they failed to act upon, they were very cognizant of the fact that disinformation was spreading on their platform, that Facebook groups were being used, for example, to organize the January 6th insurrection or to organize Charlottesville. They knew all this stuff and they didn’t act on it.
At some point, the platforms have to draw a line in the sand and say, this is disinformation. You have to be willing to put your company on the line and say, that is not true. While I don’t believe that platforms should be the arbiters of truth, and I don’t think any of us want Mark Zuckerberg determining what should or shouldn’t be in public discourse, the fact that they have been unwilling to do that is spineless and cowardly. They’re perfectly happy to reap all the benefits. I’m generally someone who’s like, “Well, let’s not blame the social platforms for everything. This is a much bigger, more global ecosystem type of thing,” but today I’m mad at Facebook.
Well, we can say that we don’t want to blame Facebook for everything but still blame Facebook for a significant amount of things, right? Rather than striving to be these neutral arbiters and trying to have policies that are applied evenly, an ends-neutral mindset, because they are private companies, they should stand up for democracy. They should make the choices and take the political or social hit that might come from putting democracy first and not being ends-neutral. They certainly could do that. They have the financial wherewithal to take that hit, I’m positive. I think with that same ethos they should take those hard actions and deal with it.
Francesca Tripodi also suggested that there are changes technology companies can make to the design of their software tools and how we interact with them in the first place.
Francesca Tripodi 6:45
Google has over time shifted from providing us with just a series of hyperlinks to trying to answer our questions for us, and also pointing us to like-minded searches. Instead of showing you “people also searched” links that ultimately take you in the exact same direction, it could say, “have you considered searching [x]?” That might not necessarily change people’s minds, but it would open them up to other perspectives.
I think Google, in trying to become the platform of our lives, is very invested in this idea that it needs to answer our questions for us, and I don’t think it does. I think Google’s best role is to help organize information in a way that allows us to explore what’s out there. It has slowly shifted from an exploratory platform to one it wants you to never leave.
Looking past technology companies, the media came up again and again. In making sense of the information we encounter online, journalism, especially local journalism, is more important than ever, and finding long-term sources of funding and support for news coverage is critical.
I think most media and journalists are doing a pretty good job. I think in some ways they’re more exposed, or the vulnerabilities that journalism as an institution has to misinformation and disinformation are on display, because they’re the first line of defense against it. We have them to protect us from it, but they don’t have anybody to protect them. Their failures, I think, become very obvious and very manifest.
There are some journalists who are doing a good job of tamping down on amplification of harmful narratives. I think there is an increased awareness that you don’t necessarily need to cover every half-baked conspiracy theory. You don’t need to do glossy spreads on the Nazi next door. You don’t need to cover hateful movements with curiosity rather than critique.
I think there’s still a long way to go. I’m constantly surprised by how frequently those types of profiles still do pop up. The New York Times especially, they love a Nazi next door, but individual journalists I think understand this, and the ones who have been on this beat, albeit only for a few years, I think are much better and more ethical and more thoughtful about the issues that they cover than they were in 2016.
Charlie Warzel brought up something that I think is a good point, which is that journalists need like an internet IQ. Even if the story has nothing to do with the internet or social media, you still have to think about how might it play out? How might it be weaponized? How might you be a target because of this story? I think you’re right in that a lot of journalists are starting to get that, but I don’t think in general the editors have gotten it. The editorial class, especially in the mainstream, more legacy news media, doesn’t have the requisite Internet IQ yet.
I think this plays into another theme, which is this emphasis on individual solutions over structural solutions. It shouldn’t be the responsibility of individual journalists to make these decisions. It should be at the editorial level. Part of this just requires people to better understand Internet dynamics in a really complicated way that, frankly, only people who are extremely online or people who have spent their life studying these dynamics truly understand. Technology moves faster than our understanding of technology. Technology moves faster than our social norms around technology.
It doesn’t solve all the problems, but to have publicly funded good quality journalism, again, not going to fix all the problems, but it certainly puts some friction in a lot of the flows and the political economy of the way these things interact at the moment right now. Pay journalists more, would be just one very easy solution, you know what I mean, to a lot of these problems. Give them the time to do it. If you’re an editor, don’t have beats that are like, “Hey, what’s the weird shit on the internet today and write a story about it” because those beats exist.
This points to Nikki Usher’s point about there being this bifurcated media landscape, where you have stuff like The New York Times and The Washington Post that I pay for because I can afford to, and that’s high-quality journalism, but most people are not going to pay for that. The stuff they’re getting is Yahoo! News, the content aggregators, the Outbrain and Taboola boxes at the bottom of the page that have a bunch of sensational garbage. Most people aren’t getting access to that high-quality journalism.
Daniel Kreiss 11:58
It strikes me that some of Facebook’s enormous profitability can be directed towards making more investments in things like local journalism and supporting independent journalists, people who are rigorously and reliably producing good information about the workings of their towns and their counties and their state houses. We have lots of national reporting, but we know that, for instance, local news has been hollowed out over the last 40 years. I think it’s time to see a renewal. I think part of that renewal can come from platforms themselves who say, “Look, it’s great that everyone has a voice, but we should also subsidize those voices who are producing the information that democratic publics need and create that framework.”
In addition to support for local media, eliminating or limiting the reach of hyper-partisan media also matters.
The other factor in this that we haven’t talked about is the hyper-partisan media, and not even just Fox News at this point, although they’re bad enough, but also OANN and Newsmax. These outlets are ostensibly news, ostensibly journalism, but they’re not adhering to any of the practices of journalism. They’re flat-out pushing these disinformation narratives because at this point they know that their audience is demanding it. As long as that exists, it doesn’t really matter what the social platforms do. That is causing a lot of problems in and of itself.
Absolutely, and this goes back to the evolving business model of journalism, which is, as you mentioned, that the audience demands it. What they need is eyes in front of it, clicks on the page. If they feel the audience demands it, they’re going to keep producing it. I do think platforms play a role in it because a lot of these sites use Facebook in particular to grow their audiences.
While policymakers at all levels of government have roles to play, the researchers, especially Rachel Kuo and Alice Marwick, point out that these regulatory tools are also capable of causing their own harms.
Rachel Kuo 14:14
One of the ways that both the US government and platforms are looking to counter disinformation and misinformation online has been building that into initiatives for countering domestic terrorism. In many ways, that’s expanding the kinds of infrastructures that are actually going to target and harm people who are already vulnerable to online abuses and harms, including misinformation and disinformation.
When I think about platform regulation, I think it gets complicated because the US has a very poor track record of technology-specific legislation. It’s often based on moral panics, and it generally fails the First Amendment test.
The First Amendment limits many obvious regulatory proposals addressing speech online, but policy responses that promote transparency and increase external oversight without regulating speech itself offer promising directions.
The policy discussion is a little bit difficult because of the First Amendment. I’m not a lawyer, but if you look at how the case law has been interpreted, it becomes very difficult to sustain any laws that might materially reduce the amount of disinformation that’s circulating in our information environments in the United States. Obviously, other countries have different laws and they have more flexibility around that.
The main other policy lever that we’ve seen really be effective is libel law. That’s where Dominion and other voting machine manufacturers have really been filing suit against folks that have been making specific libelous claims against them. I’m not a huge fan of chilling effects, but I hope this exerts a chilling effect on people who want to tell a libelous lie, that they might find themselves in court trying to defend a lot of what they’ve had to say.
I think another policy angle is data availability, having data availability to journalists, to academics, to NGOs, because there’s so much going on that it is being treated as a whack-a-mole right now, but if interested parties can have greater access to data to dig into the areas that they’re expert in, then I think that can also serve as a way to surface some of the disinformation or targeted campaigns.
I think this points to this larger issue with how difficult it is to regulate and govern algorithms in general and how algorithms are becoming part and parcel of daily life in so many spaces. The fact that they’re governed by things like intellectual property and proprietariness is really difficult. We know that there are sorting algorithms that do things like say whether somebody is eligible for welfare benefits. Those things are totally black-boxed, and we don’t have a way to say, oh, this is working or it’s not working.
That need for more algorithmic transparency also applies to search algorithms.
Francesca Tripodi 17:25
We need more transparency regarding how these large corporations lead us to the information that they lead us to. We know part of this is driven by our keywords and that they’re trying to match our content with what’s most relevant out there. These corporations have too much control over information that we have access to. We are looking at a smaller and smaller number of corporations that essentially are the gatekeepers to information and information access. We need more information as users about what are those decisions that they’re making and how do they impact us?
There is one policy thing on the table that’s going to help us with that: there are a couple of pieces of legislation that are trying, as part of regulating the social media companies and tech platforms, to compel more data transparency. That helps academics and journalists play this regulatory role in a way, by pointing out problems.
For example, data transparency around digital political advertising may address many of the falsehoods that spread specifically around our elections and campaigns. Daniel Kreiss has a wish list.
We’ve seen halting steps when it comes to things like requiring more transparency around political ads and political ad targeting, political data, the security of that data, and what mechanisms are in place. We’ve seen halting steps, but by no means anywhere far enough, around things like disclosure of who’s behind those particular ads, and how can we get more light onto who’s purchasing and spending money to influence the public sphere?
I think advertising is potentially a place where there’s a potential for legislative inroads.
It’s a place where we can draw the brightest line, both because it can be regulated, because it’s like a financial transaction, and also because there’s a precedent for it in this country. Political advertising in other media is already regulated. It doesn’t seem as scary to think about regulating platforms around political advertising, but they’re fighting it.
Facebook is involved right now in a suit with Washington State, because Washington State has really strong public disclosure laws around political advertising that have been around since the ’70s. They got updated recently, but the state would argue that even the 1970s law would have applied to political advertisements online since they started. It just requires some disclosure around not only who paid for an ad, but who was targeted with it. This is really what Facebook is fighting: revealing the targeting information, because that’s the secret sauce, that is the product. They don’t want to be called on to disclose that. I’m an expert witness for the State of Washington, so I think that’s an important arena to watch, to see whether the State of Washington can be successful.
I just think the public should know a lot more than they currently do. I think that legislation can be targeted to compel transparency, to increase disclosure, and to get those independent non-partisan voices greater access to political communication, especially in the course of elections and other really big and important moments.
Even though mis- and disinformation are large structural problems, that’s not to say there’s nothing we can do in our daily lives as we consume information online and, more importantly, as members of communities, sharers of information with those around us.
People should be okay with changing their minds in public. It’s okay to say, “There’s a lot of stuff happening out there that is really confusing, and I understand I am as subject to confirmation bias and motivated reasoning as anybody else, and here is a situation where I got it wrong. Let’s maybe do a retrospective in terms of why I got this wrong, and I want to talk about this publicly.” Also, no shame on me; shame is the whole logic of disinformation.
I think modeling that reaction in public does something really important, which is to create a culture within which being taken in by this, which may happen on occasion, is something that doesn’t become a mark of shame. Something that says, this is just a part of how we live now. It’s better to admit that we were wrong and to do the autopsy in terms of how we can do better in the future than to claim that we’re always right all the time.
Also normalize saying, “I don’t know.” I think a lot of the ways that people fall prey to disinformation or start to believe in narratives that lead them down pathways is because it feels terrifying to just say, “Actually, I don’t know anything about that.” I don’t know, maybe this is true, maybe this isn’t. I have no idea.
I would add to that, you basically can’t consume your way into better information. This idea that individually we’ll all become better consumers of information is just not going to work. But the thing that has really come up, for me too, is that the ways that we get information are also really based in relationships. Really it is, how do we engage with disagreement, with not knowing, with being wrong, also at our relational level? I think that’s something that people have a lot of difficulty doing: engaging in that longer-standing connection and relationship with people to learn together and learn in relation together. How do you actually build information networks where people can share and transmit what it is that they’re hearing and begin to come to a shared collective consensus? These acts of study become a space where people can really do that.
In this element of the social construction of reality, how do we come to believe what we believe? A lot of this is created offline, in communities, and churches, and barbecues, and neighborhoods. We take those beliefs and we bring them to the internet, and then we think that the internet is somehow providing us this neutral perspective that confirms those realities, when it’s actually reconfirming those realities.
One way I think we might be able to break the IKEA effect of media manipulation would be to demonstrate to people that they are being manipulated. I don’t think anyone wants to think of themselves as a puppet on a string. There are ways you can help to reach people, to say, well, when you query this, they’ve actually created content around these concerns with the goal of reaching you.
“Try these queries. Do you get the same information? Why or why not?” If people are engaging in this act of doing their own research, one way you might want to connect with them is to provide them with alternative search queries instead of just links, because the link might be suspect to them. If you’re showing them the keywords and saying, “have you tried searching this?” or “what terms are you using to get to your information?”, that might open up a conversation, I think, for someone who, at least in spirit, wants to go out there and find the information for themselves.
After asking the researchers about what technology companies, policymakers, and individuals might do to counter mis- and disinformation, I asked them where else they’re finding room for hope. What other actions and responses might offer stories of progress, give us clues about how we can live as a healthier democracy online? I was excited, and occasionally surprised, by some of the stories they told.
Really looking at more holistic policy solutions that go beyond the technologies themselves becomes really important. For example, one of those is thinking about language justice, which has come up several times: how can people access the information they need, for example about public services, and have access to adequate translation, interpretation, and services in their own language? I think looking at policies in those ways can also make interventions beyond just thinking about the platforms themselves.
Several people highlighted the work of Black Lives Matter and how advocates for social justice have innovated on social media to share and distribute news that wasn’t getting covered elsewhere.
One of the most important things that I’ve read on this is an op-ed Sarah Jackson wrote for The New York Times at the end of 2019, talking about how Twitter made us better, and that it’s easy to say how horrible it is, but here are all the really important things that have happened, including Black voices and the grievances of Black Americans making their way into the mainstream media.
Yes, Twitter is a dumpster fire. That is completely true, but it also gives voice to lots of people who don’t have access to mainstream media in other ways. It is a way for people to amplify concerns. Black Lives Matter is basically about the fact that police brutality and police violence against people of color have been under-covered by the mainstream media for decades and decades, and when they do cover it, they cover it in a really racist, biased way that makes the people who are killed by police violence look like the perpetrators. This is a movement that isn’t just about social change on the streets, but it’s also about changing media coverage and media perceptions. That’s really important.
The Black Lives Matter movement, obviously those people who are pushing for things like expanded access to voting, who are pushing back on more restrictive bills, who are making an affirmative case for the teaching of things like critical race theory and the 1619 Project in the face of extraordinary attacks on historians and history, I want to just say, all of these people who are fighting for this are fighting in essence what has been a sustained disinformation campaign over the course of American history to imagine it in very particular ways. To imagine it as always having been this shining example of a democracy, to imagine it as being a meritocracy. I think our moment is in essence, a reaction to what is an extraordinary push for justice and equality and democracy.
Researchers also pointed out opportunities for public education and not only in the school system.
Recent and robust studies on media literacy have found that fact-checkers, or people who are able to access credible information, practice what they refer to as lateral reading, which means getting off the webpage, clicking a new tab, and searching for information beyond what’s provided to you. That’s different from vertical reading, which is just starting at the top and reading down. While that’s, I think, a really good first step, and extremely important, and obviously it’s working for fact-checkers, I would say the average person doesn’t understand that if you open a new tab and make an almost identical query, it might just reconfirm those biases even further.
We think a lot about the Last Mile Initiative in terms of access to the internet, but we haven’t yet thought enough, I don’t think, about skills. This is something that Eszter Hargittai talks a lot about in her work, but we are focused so much on access that we forget people really don’t know how to use these tools, and that if we want to look at public policy or initiatives for the public, there needs to be more work done on improving those skill sets.
When we think about the individual versus the structural, it’s really similar to structural racism versus individual prejudice. We have a vocabulary to talk about these things in some other realms, and the way this ties into structural issues is really similar. Yes, we can tell people to fact-check, not to spread false information, to call out your racist uncle for being racist, but we also need these bigger structural changes. Some of them, I think, are things that are really hard to solve, like the problem of how journalism makes money if it’s not just clickbait and people getting paid for clicks. How do we support the creation of really high-quality information on a scale that is local and national and global, that gives us insight into our world and allows us to be good citizens and good members of our community?
The tech is important for sure, and it can be gamed for sure. At the end of the day, there’s all the social stuff and political stuff that exists around the tech, and the tech might create incentives in certain ways. That’s what tech needs to worry about. Do we incentivize certain forms of speech that might undermine democracy, for instance? Do we make it easy for elites to undermine accountability? But at the end of the day, you still have elites seeking out every opportunity they can to hold on to power. At the end of the day, you still had white supremacists 30 years ago, 40 years ago, through all of American history, and they didn’t need a platform to burn a cross.
Ultimately, the main thing any one of us can do is be engaged in these questions, follow the news, and use our voices to shape the public issues of our day.
One of the issues that I think we haven’t touched on yet is the issue of institutional trust. This is absolutely huge. Trust is something that takes a long time to build. It’s something that in some cases can be broken if folks end up behaving in ways that are problematic. I think that really is a key ingredient when you’re thinking about how to engage with the truth, because this is an issue that Walter Lippmann defined very specifically almost 100 years ago: the whole way that news works is that there are people out there who are privy to information that you are not privy to. They report, you decide. Really, that’s how it works. So when people break your trust, you’ve got to pull out of that relationship between news consumer and news provider. For people who care about telling the truth and not being bamboozled by disinformation, I think it is really important to look at the track record, understand where failures occur, and how they’ve been addressed.
If there’s anything I think people should remember, it’s that in order to get to a place of a more inclusive and more multi-ethnic and multiracial democracy, we have to go through these moments of extraordinary tension and you have to be up for an extraordinary fight because that’s the only way that power changes.
I’m not settling for quiet disagreement about people’s rights to life, to live without oppression, and not to be subject to injustice after injustice. I just am not okay with that. I think you have to have a little bit of fight in you and harness that a bit to get out and keep going sometimes, because otherwise you’re like, oh, whatever. Instead of letting that little bit of anger get the best of you, let it keep you from settling, from us just being quiet about our disagreements and all of those things.
Here at CITAP, we do this research because it’s one small piece of a bigger solution.
I think as academics, one thing that we can do is we can provide one level of that oversight. We can bring to bear a historical understanding, a political understanding, an economic understanding of these technologies and how they interplay and interact with these different realms of everyday life.
If you look across American history, those big transformative moments of change, like Reconstruction, like the civil rights era, were extraordinarily conflict-laden. We emerged to a better place incrementally each time. I wouldn’t want to take that conflict out. I think people should remember that Facebook is not the source of it, even though it might be the battlefield upon which these things play out.
The conflict itself is not the problem.
No. As long as it’s happening within those institutional rules, and that’s the exact opposite of January 6th. There you see a very mobilized group of white supremacists and others who were feeling threatened by an election outcome that did not go their way, who took extra-institutional action that was directed at the very institutional rules and bodies that hold everything else in place, the institutional ways that we have of making people’s voices heard. I think that, to me, is what should be ruled out of bounds, and the rest we should fight over.
For that kind of public debate to be possible, we need strong institutions and a shared democratic foundation. As we’ve explored in this season of Does Not Compute, technology platforms now fill critical roles in our society as information resources and public squares. They have responsibilities to the public that go beyond mere optimization for relevance or efficiency.
Here at CITAP we’re working to understand how these institutions, technology, journalism, and government, interact to support a healthy public life. As you heard from our researchers, some of the most important work and the greatest causes for optimism are popular movements that are building power online and off, using new technology tools and tried-and-true personal relationships to create change. In defining 21st-century digital democracy, there are roles for all of us to play. Thank you for joining us through these questions and discussions and playing your role as listeners, sharing your time and attention. We’re grateful.
As you think about your own contributions to public life in the information age or questions you’d like to hear us address in a future season, I hope that you’ll share them with us. As always, you can find us at citap.unc.edu, on Twitter @unc_citap, or you can subscribe to our newsletter at citap.substack.com. If you learned anything from us in this season of Does Not Compute, I hope that you’ll share us with a friend or leave a review wherever you listen to podcasts.
It’s been a real joy to share these stories and this work with you. As always, Does Not Compute is a product of many hands, including our executive producer, Jacob Kramer-Duffield, our senior producer, Amarachi Anakaraonye, and our CITAP project coordinator and production assistant, Joanna Burke, with music by Ketsa and our logo by Contia’ Prince.
Does Not Compute is supported by a grant from the John S. and James L. Knight Foundation.
Made by: Amarachi Anakaraonye (Senior Producer), Joanna Burke (CITAP project coordinator), Jacob Kramer-Duffield (Executive Producer), and Kathryn Peters (etc)
Music: Ketsa, Parallel Worlds
Art: Logo by Contia’ Janetta Prince