Big Tech's Impact on Public Purpose: How Recent Decisions Will Shape Society
LAURA MANLEY: Welcome, everyone, and thank you for joining today's discussion. My name is Laura Manley, and I'm the director of the Technology and Public Purpose Project at the Harvard Kennedy School's Belfer Center for Science and International Affairs. The TAPP project works to ensure that emerging technologies are both developed and managed in ways that serve the overall public good.
[00:01:08] In today's session, Big Tech's Impact on Public Purpose, we're taking a closer look at the potential short- and long-term impacts of the recent decisions by big tech firms to enforce stricter content moderation policies in the wake of the January 6th Capitol riots. For years, social media companies have avoided taking a harder stance on moderating political content in the name of free speech. They have resisted locking or outright banning prominent politicians, like President Donald Trump, despite claims of mis- and disinformation and hate speech.
But when the President incited a mob to attack the US Capitol, big tech wielded its power. Following the January 6th attack, Twitter permanently banned President Trump. Facebook froze his account for at least the remainder of his term. YouTube suspended the President's account, citing the ongoing potential for violence. Google, Apple and Amazon have taken action against Parler, a social network which rioters used to help plan the attack.
[00:02:05] Trump and other politicians weren't the only targets, though. Facebook has removed all content that uses the phrase "stop the steal." YouTube has accelerated a policy to issue strikes on any accounts that post videos making false claims about election fraud. Twitter deleted over 70,000 accounts dedicated to sharing QAnon conspiracy theories. And Reddit banned the popular subreddit r/DonaldTrump, one of the platform's largest political communities, for repeatedly violating the platform's rules. The decisions to enforce these stricter content moderation strategies have drawn polarizing responses. Some applaud the decision, while others claim it is an unacceptable form of political censorship. There is a central question across the response spectrum, though, and that is: What kind of precedent do these decisions set? And more specifically, what does this mean for online speech, private sector power, democracy and, ultimately, society? [00:03:01] Before moving into the speaker intros, I want to caveat this discussion with two key points. First, Trump's actions have catalyzed a more serious conversation about mis- and disinformation, hate speech, extremism and violence on digital platforms. But it's important to remember that these issues did not start with him and will not end with him. They're much larger and more systemic than this President and the recent actions of big tech.
Secondly, hate speech, extremism and racism are, of course, not limited to the online world. Although our conversation today will focus on the realization of these risks within digital platforms, we must also remember that these are societal issues that must be addressed in a multifaceted way, not just through tech-focused solutions, but also through individual and institutional change. And they're, of course, related to larger public purpose concepts of truth, trust, democracy, freedom, private sector power, and more.
[00:04:01] So with that, we are very happy to welcome today's four leading experts on this topic. Today we have Joan Donovan, the research director of the Shorenstein Center on Media, Politics and Public Policy, and an adjunct lecturer in public policy at Harvard Kennedy School; Leslie Miley, a tech engineering leader who's previously held leadership roles at Google, Slack, Twitter, Apple and the Obama Foundation; Kathy Pham, co-founder of the Fix the Internet incubator at the Mozilla Foundation; and Jonathan Zittrain, professor of international law, vice dean for library and information resources, and faculty director of the Berkman Klein Center for Internet and Society at Harvard Law School. [00:04:42] Each of our panelists will provide a unique perspective on the social implications of the decisions by big tech to act, whether it's through the lens of mis- and disinformation, tech and the law, or responsible product development. I'll kick off the conversation with a few questions
regarding timeliness, the Constitution and the potential impacts of big tech's decision to act. And then around 4:10, we'll open the floor to give you all in the audience an opportunity to ask specific questions. If you have a question for our panelists, please use the Zoom Q&A chat feature. Thank you, again, to Joan, Leslie, Kathy and Jonathan for joining us today. Let's kick it off with a question about constitutional concerns around the recent big tech bans. This may be a question for you, Jonathan. How credible are claims that Twitter, Facebook and other social media platforms violated the First Amendment right to free speech? And is this something that could hold up in court? JONATHAN ZITTRAIN: Thank you for bringing us together today, Laura. What a great group of people to be among. I'm very excited to talk this through.
[00:05:44] So your question was, does the First Amendment protect user speech on a platform like Twitter or Facebook? And the short answer, which lawyers are incapable of, but here we go, is, no. The First Amendment, part of the Bill of Rights, is generally targeted to protect people under the jurisdiction of the United States against actions by the government; first the federal government, and later state and municipal governments. And as much as Mark Zuckerberg has at times invoked Facebook as a global government, it is not. And therefore, there's no way in which the First Amendment protects users here. [00:06:23] One interesting case to mention: Marsh v. Alabama in the '40s found that a company town – it was private property, the Gulf Shipbuilding Company owned all of it – might be [6:37] government. Then when a Jehovah's Witness came into Chickasaw, Alabama, onto private property that sure felt like sidewalks and streets, and started distributing literature, and was arrested for trespass on private property, she had a First Amendment claim. But that is an unusual case, and it has not been followed generally since.
[00:07:02] Another question is whether First Amendment values are something that a private platform should, in the name of public spiritedness or good business, subscribe to. And that's a separate question. And I think certainly you see along the years the kinds of elements of the terms of service of a Facebook or a Twitter very much drawing from the language and the spirit of the First Amendment. But, for example, none of them tends to allow indecent material, which adults have a constitutional right to see as against the government preventing it, and we don't think of that as some horrible infringement of the spirit of users' rights. So those are the sorts of questions that the platforms have to confront if they want to be guided by the spirit of the First Amendment. But the literal First Amendment does not [7:51] them. KATHY PHAM: Can I add a perspective on that? LAURA MANLEY: Yes, please, Kathy. KATHY PHAM: My training is not law; it's computer science. And Jonathan, as you were talking,
all I kept thinking about was, so often in tech, whether it's some of the startups we've funded or some of the big companies I've worked at – I would love to hear Leslie's perspective on this since he's led so many engineering teams – we're often not well educated in law and not well educated in the social sciences: how, let's say, information travels, what speech should be governed and not, what's legal and what's not, what's a policy versus a law, and all the complexities around that. And sometimes there's this belief that you don't have to worry about it. Or maybe you don't have to worry about it until you have to, when someone brings you to court or people stop using your product.
[00:08:46] And I think that's an important thing to think about, too, as we have these conversations. There's actual law, and then there are these groups of engineers and product people and designers who will continue to build, despite what those laws are, until we have to confront them sometimes. So how do we think about how we build differently or incorporate all these values, like the First Amendment values that you mentioned, into what we build, and what might that look like? LESLIE MILEY: That's really interesting, Kathy. I think there's something that happens inside tech companies that is not acknowledged explicitly, which is societal context.
People build products based on societal context. And when you look at the monoculture that big tech is, you're going to get societal context that looks a lot like our society, which is fairly racist, fairly sexist, fairly patriarchal. So this is what gets built. [00:09:48] And so, when we talk about First Amendment, you can just throw that out the window. The First Amendment only really works for white men in this country. It hasn't really worked for anyone else. And so, when they try to trot out this argument, I'm like, look at who's making the decisions. It's the most privileged people on the face of the
planet. So the policies that you decide to enforce are going to reflect your societal context, not people of color, definitely not women. And I think these are the questions that we have to ask. And then we have to ask, when harm is being done, based upon these platforms, what are these companies' responsibilities? And what are the executives' responsibilities? And that's something that's really not being talked about because there is actual harm; people are being radicalized and they are committing acts of violence based upon the microtargeted amplification of content that is being disseminated at the speed of light on these platforms.
[00:10:47] So I know I just said a lot. [laughter] But I really think these platforms are a mirror of our society, and in order to, I think, start looking at how to properly, or even begin to, mitigate the damage that they're causing, you have to look at society and say, they're just mirror images. LAURA MANLEY: So building on that concept and your thoughts, Leslie, was this the right time to act? Is this something that should have been worked on a long time ago? I think many people can agree that they waited too long to act, but other people can say, actually they have been making small changes along the way, over the past few years especially. Was this the right time to act? And what precedent does this set? [00:11:37] LESLIE MILEY: Was this the right time to act? No! [laughter] The right time to act was five years ago. The amount of damage that's been done to people who have been threatened, who have had to leave their homes, who have to have security guards, whose children are harassed is– I honestly don't understand how they sleep at night, I really don't. Because people are being immeasurably harmed. I think it came to this point because they saw the harm that was being done was really going to reflect upon them.
There's going to be a straight line you can draw from tweets and posts on Facebook and posts on Reddit that ties directly to the Nazis and the white supremacists and the QAnon folk who did all this damage, who tried to foment a revolution, essentially, a week-and-a-half ago. So that's the only time; that's when they're like, "Oh, now is the time to do it." Well, it was time to do it because it was going to hurt their business. And I really want to draw something out, which is, tech companies aren't brave, they're not courageous. They're looking
after their own best interests. And the reason that Twitter, in my mind, the reason that they finally banned Donald Trump is that they would not lose any of his eyes to another platform, any of his people, any of his supporters to another platform. They had nowhere else to go. And then it was like, hey, they can't go anywhere else; now we can do it with impunity. [00:12:58] And that's a very cynical view, but it just makes sense. If you haven't done it after he's called out people's races, after he targeted people, after they got threats, after "there are good people on both sides," if you haven't done it by then– what? It took a bunch of white supremacists? Well, we saw white supremacists years ago in Charlottesville. So no, it was a business decision. And so, I think it was way too late. And if we are brave enough as a society – and I really
hope we are – we start to deconstruct Section 230 and hold them civilly and criminally responsible for this type of behavior, and this type of profiting. Because they have profited off of this. [00:13:36] KATHY PHAM: Doubling down on that: you mentioned five years, but even before that, there were lots of communities that experienced trolling online and misinformation, particularly communities of color and women. And when the issues really started affecting the Western world – the majority of people who are building tech – that's when, at least five years ago, things started to become more of a conversation. But imagine if at the beginning of building our products we had Joan Donovan and Dr. Safiya Noble and Ruha Benjamin all on these teams to help us understand how information travels, and Jon Zittrain to understand First Amendment law, and early from the beginning, to put some of these safeguards or just these product decisions or these engineering features in right from the start.
I just finished reading Ellen Pao's Reset, and she talked a bit about even Reddit's culture, when it decided one day that child porn and revenge porn were no longer okay. But it took them a while to even get there, and by then it was too late. So what would it look like if we had just done this from the beginning? [00:14:38] LESLIE MILEY: I'd like to touch on this. Please, other people jump in. There were people of color, there were people who understood this at the table at one point in time, and they were marginalized, they were fired. We watched that play out last year with Timnit at Google. We consistently get marginalized when we bring these issues up. There were issues about child porn
and inappropriate images and terrible images on Google properties as early as 2006. I don't know if people remember Orkut, but Orkut just became the cesspool of offensive content. And Google let it sit there until the Brazilian government said, We're going to shut this down. And then they put it on their employees – yours truly included – to help clean it up.
[00:15:31] JOAN DONOVAN: I'm being patient, but I'm just loving listening to people support a point of view that we've really tried to bring with our research to this moment. And Leslie, I don't know if you remember meeting me, but I remember meeting you at an Obama Foundation event where me and Nabiha Syed were discussing how Breitbart used Facebook, and Obama actually showed up and was like– he was mad, too. He was just like, Yeah, it was remarkable the way in which Breitbart was able to use Facebook as a distribution system to reach out and really spread its wares across such a wide swath of people. And we all knew back then that Facebook had been permitting and profiting from widespread hoaxes and, quote/unquote, what became known as the phenomenon of fake news. But we should
also call it for what it was, which was a kind of partisan propaganda attack on the election. [00:16:40] And some of that's been papered over by blaming Russia, and then academic debates about, can we even know the basis on which someone made the decision to vote for So-and-So. But through it all, I think one thing that we know is true – I've been joking about Donovan's first law of disinformation – is that if you let disinformation fester, it will infect the whole product. And this is the problem. Reddit, by contrast, was strategic early on, trying to get rid of some of the QAnon stuff, and we don't see them as a place in which that kind of content was allowed to grow. And so, it shouldn't be beyond our imagination that platform companies have reached a level of scale at which they're not able to provide a minimum standard of safety to the users of their products. And as such, their products are now being used for political oppression. And it's very similar – the same kinds of techniques we saw in the past, people impersonating black women online, people hiding their intentions using the actual features of the platform. [00:17:59] And one thing that's really interesting about what Leslie was saying about Twitter: it acted because it knew it wasn't going to lose massive market share, because Trump really didn't have somewhere else to go and bring those people with him. But I also think it's really important– I was just listening to,
I listen to a lot of conservative talk radio, and today they were saying, Let's not give up the field. Let's stay on Twitter. Let's fight as hard as we can to get back. Why give that up to the left, or liberals, or even they were calling people communists and things. And I'm just like, there's something about the terrain of networked communication that is broken. I think the business model is what's broken in "growth at any cost," which, Leslie, you actually mentioned at that meeting that left quite an impression on me.
And if we do allow "growth at any cost," what is the public interest obligation, then? What is the flip side to that, the requirements of companies that bring together this many people for this much connectivity? And so, for our research and our purposes, of course, we focus very closely on media manipulation, disinformation tactics. But that allows us to see what's really broken in the infrastructure of our communication technologies, and then potential points of leverage and ways of not just tweaking the technology, but also bringing a whole-of-society approach to the problem. [00:19:35] And so, it's hard though, even from the perspective of Harvard, being in such a position where, of course, we enjoy so much voice in this debate, but at the same time, unless academics come together and figure out what the research program and agenda is going to be for getting the Web we want, then we're going to continue to surface the wrong problems and then, of course, downstream of that are the wrong solutions. JONATHAN ZITTRAIN: I'd love to attempt a snapshot of where the conversation is so far. My sense is
that over a course of years – the internet isn't so new anymore; there was a time when it was, late '90s/early '00s – the main framework brought to bear in the public eye, and among academics talking about it, was one of rights, with First Amendment-style thinking most prominent within it. [00:20:35] I think it was coming off of a time in the United States when First Amendment rights were thought of as just being so central. I think of the march on Skokie by neo-Nazis in 1977, for which a permit had been pulled. They didn't end up marching, but they fought all the way up to the Supreme Court, the ACLU behind them, for the right to march. And the permit was granted. And the
impact on the residents of Skokie, tens of thousands of whom happened to be Holocaust survivors – no coincidence; that's why they wanted to march there – was seen as a sort of "that's the price of freedom," like, "yep, guess we have to bear it," with "we" being [21:16] for the people saying it versus the people in Skokie. And that rolled right into the early days of thinking about the responsibilities of internet platforms, also at a time when there wasn't the scale that Joan has referred to a few times, of just a handful of companies channeling so much of the speech. [00:21:36] What has arisen next to this rights framework, which still persists today, and on some terms can be persuasive, is what I call a public health framework. And that's the sort of thing that I feel that Leslie was really referring to about the kinds of harms that can come about and the harms at scale. I think it was easy back in the day
to fool oneself into thinking, this is just like a parade, like a march, it's purely expressive. And it's at a distance. If you don't like what you see on your screen, turn it off. And the border between the online and the offline, of course, has just evaporated. Somebody described watching the various user-generated videos of the insurrection at the Capitol as Facebook made flesh. [laughter] It's like Facebook literally transforming into the real world. And that brings me full circle to something Kathy said, which was the benefit that some computer science folks might have in dwelling on the law. There's ways in which
lawyers and people who think about institutions could really dwell on the malleability of the tech. And if our tech happens to be these atomized collisions of short sentences, including threats that are amplified when shared and repeated, it's more than just a customer service or public interest issue. We need to figure out how to deal with that and say, Well, what's the architecture of this tech? And why is this what everybody's spending their time doing for half a day? [00:23:21] LESLIE MILEY: I think the issue for me isn't the content on the platforms; it's the targeted amplification of the content. That's what's key here. And that's what Breitbart learned to do on Facebook. And having been at
Twitter and been deep inside of what I call a global game of Whac-A-Mole between spam and anti-spam and that type of content, you see that people learn how the algorithms work, and they learn to create content that gets picked up. They learn to create content that gets disseminated and targeted. And the way the algorithms work, some of them, is that slightly negative content gets higher engagement. So guess what you're going to see more of? You're going to see more of that. And these are the things that we should be talking about. It's not whether the content can be on the platform; it's whether the company should be liable for the spreading and amplification of that content, regardless of how much harm it's doing. And they had notice that this was harmful. They knew that it was going to get spread. And they just said, We're just the platform; we can hide behind Section 230. [00:24:30] And I sat in the room when we were doing Periscope and Vine at Twitter and said, Someone's going to rape, kill, murder, assault, lie. And how are we going to stop this? And people just said, We're just a platform, that's not our job. And yes, they need to learn a little bit about the law, but they also need
to learn a little bit about humanity. Because I can't sit there and say, Well, no, I don't want to build a platform that will allow people to do this with impunity. And if we're going to do this, we need to talk about our responsibility. And that responsibility, well, you know, I come back to it, doesn't exist, because it's growth-at-all-costs and there's no accountability. They can go and take all the money off the table; tens of thousands, millions of people can be impacted, hundreds can be killed, and they just show up and apologize every year like Mark Zuckerberg does. [00:25:19] JOAN DONOVAN: I want to jump in and say a little bit about my team's experience on January 6th as people who research this. We watched a woman
die in real time; we couldn't look away. That was our job, right? Like content moderators who spend an enormous amount of time watching people torment animals and decapitate people. This is not fair to the people who have to do this work. And my team, who I love very dearly, feel a sense of duty to this process, to this vision of a world that cares about each other. And when we break it down to abstract values, like,
oh, well, this is just free speech; this is just technology; we don't look at the users, we don't look at the content – the reason why this whole thing really matters is because this is a workplace now. This is people's livelihoods. This is an industry. [00:26:49] Alongside us, there are children, there are teenagers, there are activists, there are– every stratum of society is plugged in. And when events like that happen, it just reveals what we already know. For us, it's just another day at the office. But the consequences of that, the repeated abuse that people are made to witness by virtue of this openness, by virtue of this scale – I don't think we even capture it in a public health framework. Because I know what happened to women, women of color, Spanish women when we applied a public health or a health rights framework to medical malpractice, which was, You just sign this form and you sign away your rights, and if they sterilize you, that's cool, too. [00:27:52] And so, as we approach this and as we imagine this work that we do, we have to consider that these are humans, these are human beings. We need to reduce the scale to get back to human-centered design. Leslie's saying they're sitting in a room saying, Hey, we know that people are going to use this to film crimes, to assassinate people. And then at the same time to say, Well, you know what, that's
just how it's going to have to work because we don't believe in culpability, we don't believe in the rights of others not to have to go through that, not to happen to see a murder. It's different when it's like every once in a while these things happen, but there are ways in which this stuff has become so commonplace that as an employer, even, I question if it's ethical. KATHY PHAM: Joan, thank you for sharing that. I know I say this to you almost daily, but thank you for all the work you and your team constantly do. And for folks that don't know, Sarah T. Roberts has done really amazing work around content moderators as well. [00:29:19] You touched on something that Jonathan first made me think of and Leslie expanded upon, which is this idea that, whether it's the law or social science or how information travels, these are sometimes seen as these customer service extra parts of tech. And you're in these meetings and I've been in some
of these meetings at these tech companies, too, or been in meetings with startups, and it's all extra, and it's not part of the core design of the tech. And like Leslie said earlier, there have been people inside some of these companies, sometimes maybe hired to be the token, I don't know, and you speak up or they speak up and those voices just aren't as respected. There's a hierarchy that is at least well known inside tech, known by some outside, around engineering and non-engineering, or tech and non-tech, which is crazy. [00:30:10] It's complicated, but it gets to all these deep, deep fields where people like Joan's team go and really deeply understand the content that's out there and how people use the technology and the negative effects it has on people. The work of some of these teams doesn't always get taken very seriously – by the leadership, by your fellow peers. And that's an area that we have to think about as well, in addition to the laws and policy and Section 230, and all of the high-level law parts as well. LAURA MANLEY: I know we could have a conversation about all of the different elements of this topic, but I want to make sure that we get a chance to open it up for the live Q&A. One last question I'll ask before we do that at about 4:10. How can individuals meet this moment and help enact more long-lasting change for the broader issues highlighted through the recent events? LESLIE MILEY: I want to jump in and maybe touch upon something, the last part that you said, Kathy, and tie it to this, which is, the executives at the company know the harm that's being caused.
Do not give them an out. They know. They know. They've done research, they have people in the room. They know the content on the platform. And we – and when I say we, I mean society overall, and specifically the press – do not hold them accountable for this.
Their shareholders don't hold them accountable. And so, we don't hold them accountable. [00:31:43] And when we try to hold them accountable, we let them do their parade in Washington, and the press just fawns over it. And they need to start asking more – You've apologized every year for 11 years; what's going to be different now? They need to start asking Jack Dorsey – You're off on vacation while your platform is encouraging people to kill your Senators and your Representatives. You were on vacation a few years ago in a place where there was a genocide happening less than two hours away from your vacation spot. How do you sleep at night?
We have to start holding people accountable. Part of it is rewriting Section 230, calling your Congresspeople, calling your Senators. Signing petitions online. Getting involved. Because if we don't get involved, they're going to keep doing it. Because it's profitable. Look at how much market cap both Twitter and Facebook have added in the last year. Look at how much market cap Twitter has added since it was, up until recently, essentially Trump's preferred method of getting his message out. I mean,
hundreds of billions? Tens of billions? I don't know. Tens of billions of dollars in market cap? [00:32:56] So their incentive is very clear. And we're just not holding them accountable. And we know the damage that's being done. And I think we
can start to surface that more and say, Yes, your content moderators are being damaged. The videos that kids see on YouTube that get recommended to them– Facebook's own research said that, was it like 67% of, I can't remember the exact– it was like this large proportion of people joining these violent groups on Facebook were recommended by Facebook's own algorithms. [laughter] They're feeding it to people! And it's like, how are you not responsible? And they say, Because the law doesn't make me responsible? JOAN DONOVAN: I have had the same feeling about that. I often joke with reporters about, Well, my computer thinks I'm a white supremacist. I often think if I were to turn this computer back into Harvard IT, oh, man [laughter], the horror show! We call them reinforcement algorithms for a reason in the sense that they reinforce the things that you see. [00:34:10] In broad strokes, I'm very afraid of this moment where people might be casually now interested in QAnon.
But through their entrance into the media ecosystem, be it through YouTube or Facebook, they are more than likely going to get a much larger dose of that content, because of the way the system works, than they would if they were to say, Oh, I saw this. There's a great Financial Times explainer about QAnon. Watch that, move on. If you want to know, look, and then move on. However, everywhere you travel then on the Net, you're going to see recommendations for these things. There's no system in place that says, I've had enough and I want to erase this data history and move on. Unless you do some stuff – you have to know how to work the tech a little bit if you want to get into incognito windows and whatnot.
[00:35:12] But think about it from that perspective: people may just have a passing interest in understanding something, but the way the system is designed is to draw you back in over and over and over. That's where we have to worry. We also have to worry about the profit model that sanctions that kind of behavior around engagement, trying to get people to engage more and for longer within these platforms.
As we think about what could change or what needs to change, especially around 230 – which I know we've been saying, but maybe people don't often know what it is. It's known as the 26 words that created the internet. And it's basically permission– according to Danielle Citron, it's actually permission to do content moderation. But at the same time, as we've thought about earlier, there are only very small buckets of information that they're willing to take action on, as Jonathan was saying – pornography and a few other things.
[00:36:17] And so, when we think about, then, what are the really important kinds of information that are really life or death – like needing to know the symptoms of COVID-19 – why should somebody be able to hoax that? Why don't we have a system for timely, local, relevant and accurate information online that has the public interest obligations that we insist radio stations and television have? And so, I think it's really important that we understand that there are many other models to think with. And then, the last thing I'll say before we turn: tech has a way of moving the goalposts on us.
And there is a shift happening where they want us to blame Comcast now, they want us to blame Xfinity, for Newsmax and OANN. And yeah, they're part of the problem, but we shift the goalposts when we say, Oh, Parler did it – for instance, which is what Sandberg was trying to say the other day in explaining Facebook's role in this. I know, Leslie, it was a whole thing; I'm watching this and I'm like, how are you just giving her the microphone to say this? But that kind of maneuver is really about offloading responsibility. And so,
for us as researchers, accountability has to be at the forefront of how we understand the true costs of misinformation and how we move forward with a research program that understands the devastating impact of disinformation on our society. [00:38:00] JONATHAN ZITTRAIN: I think one thing the conversation has made just powerfully, devastatingly clear so far is how high the stakes are. If you tried to create a ten-year experiment subjecting a huge swath of humanity to what the modern social media construct is, just to see what would happen, there would be nobody signing off on that experiment; it would be way too intrusive and dangerous. And yet, here we are,
collectively, having built it and just sort of run away with its effects, which include effects on the political system itself. So the stakes are absolutely through the roof. [00:38:38] I think it might be helpful to distinguish, especially as 230 has been mentioned a few times now, between activities online that are sufficiently beyond the pale already, that they're unlawful in some way. It's just really hard at scale to find the person behind that awful tweet or that lie that has certain harmful effects, such as telling somebody the wrong protocol for how to deal with an illness, and then they hurt themselves. Those sorts of things are covered by the law, and the question then is, under what circumstances do the platforms that convey, amplify and target that, recommend it, stand in the shoes of the original person issuing the lie. And 230 is a big part of that debate. The
general impact of 230 is, at least at the state level, not allowing the platforms to be placed in the shoes of any given user who does something unlawful. And that would be what a conversation around tweaking 230 would be: well, under what circumstances is it bad enough that the platform should be responsible – without, maybe, the platform having to shut down because, out of a million comments a minute, three of them are going to be unlawful and they'll pay for it later? But there are ways to adjust standards, as the law does, and as policy does, to whatever you want the outcome to be. [00:40:10] But I think there's another category of stuff here that we're talking about that collectively is part of what is so crushing, and that isn't even unlawful for a person to do. If people want to get in a group and say that the moon landing was faked, they're entitled to do that and it's not breaking any law.
And there is a real question about whether nevertheless a platform should not be pushing that. If somebody says moon landing, maybe that shouldn't be the first group they're offered or persistently offered day after day for them to join. But that's a category of content that is lawful, but awful, for which thinking about interventions at the policy level is really tricky. And you mentioned the 1776 Project; it's just so interesting to see that put in a wrapper of "our kids are being proselytized and indoctrinated into false stuff, now we finally offer the 1776 Project as the corrective." It's
how to handle, at a governance level, who is going to be in a position to decree what's the outlying, surely wrong, view and what isn't. I don't think that's just a dodge. I think that's a really deep project when you're trying to design something, and a deep problem, and one we have to confront. [00:41:37] JOAN DONOVAN: I have one addendum to that, which is that platforms are not just placeholders for speech. They coordinate people. You can't take credit for Black Lives Matter, you can't take credit for Standing Rock, you can't take credit for Occupy, and then not take credit for the alt-right, and then have to build systems differently. So the hard part is, when we think of these as speech machines, we run into this 230 problem. But if we actually think of them as broadcast and amplification machines,
then the entire rubric of policy shifts to: how many people do they reach, and are they using that privilege responsibly? And that, to me, is where we're headed. [00:42:24] LESLIE MILEY: I'm going to try to draw this map. After 9/11, we ramped up surveillance of mosques all over the friggin' world. Our intelligence and counterintelligence agencies – United States intelligence and counterintelligence agencies – sprang into action. People were bugged, people were followed, people were surveilled – their bank accounts, their apartments, their homes were searched. Because we were afraid of radicalization.
We now have scaled radicalization. The people who showed up at the Capitol were radicalized on 4chan, 8chan, Parler, Reddit – well, maybe not Reddit – but Google, YouTube, Facebook, and Twitter. They were radicalized in the same way that many people who joined ISIS and ISIL were radicalized. But it was done at scale. It wasn't the ones, the twos, the fives, the tens; it was the ten thousands, the twenty thousands, the hundred thousands.
[00:43:18] And we're not deploying the same types of countermeasures to stop this. In fact, we're trying to keep it going. Like, Parler's running around shopping for some place to keep doing their thing. Years ago at Twitter, I helped build some of the tools to stop – not this kind of content itself, let me change that – to stop the dissemination of this kind of content. And here we are arguing about something that the technology exists to actually start to make an impact on. And they'll make an impact on it when they want to. And our government will do something; now they're doing something. We've got 25-, 50,000 troops in
DC. They're running around the country arresting people who I'm seriously enjoying watching on TV on a daily basis; it's like, please arrest all these Karens, I am so happy about that. [00:44:04] But it's like we didn't have to get here. And we got here because we didn't hold the companies responsible. And this is what I mean. I've seen some comments about, it's censorship, censorship. No, it's not censorship. I'm not saying decide what can be out there. I'm saying decide what you want to amplify and distribute. That's the difference.
So people can put out there what they want, but they should not expect to have that amplified. And right now they've learned how to do it so well that the tech companies can't, or won't, stop it. LAURA MANLEY: Kathy, I'm going to give you the final word and then we'll turn to Q&A. KATHY PHAM: You had asked what we can all do, at least on an individual level, and I'm still thinking about what Leslie said about making sure we hold the leaders of the tech companies accountable. And something else I think about a lot – and this maybe draws from my years in tech, but also the four years I spent in government, and maybe some optimism sprinkled throughout – is a call to action for a lot of you: yes, email your Congresspeople, reach out, speak up. Tech company employees are now unionizing and, regardless of what we think about the unions,
for the most part there's some really great thinking around how these unions are quite different from unions of the past, which were organized around their own working conditions. These are people unionizing because they want to push their companies and leadership to do better for society. [00:45:30] And in addition to that, in order to hold people accountable, at least by law, we need laws that make sense. We need people at the FTC – even if it's a small team of technologists – who really understand tech, who understand what antitrust even is, to enact and make policy and to hold people accountable.
And we have a huge lack in that right now, at least on the government side of things. So a big call to action is: Congress has a callout for tech folks. And there are different areas of government that are really open and have called for people to come bring that tech expertise, to help us figure out how to hold these groups accountable in a meaningful, long-lasting kind of way. LAURA MANLEY: All right, great. So we have a question that's come up from several people that I'd like to have a discussion about.
It says: We often discuss misbehavior or inaction in the tech industry, but is there a company that's doing this right? What does a north star look like in reality? [00:46:39] JOAN DONOVAN: For a while, I thought Spotify was going to get it right, and then they went into podcasts and it kind of all fell apart. But for a while, Spotify was working with community-based organizations and civil society organizations to spot hate music, hate rock, on the platform, and remove it when they could, or not serve it in recommendations. There's some rather esoteric black metal where – I get it, you can't actually even understand what they're saying anyway, and it meets all the other criteria – but that doesn't mean you need to put it in recommendations. But then when they got into podcasts, some of the rules seemed to shift, and there was a particular episode, of course, of Joe Rogan, where he had the famously deplatformed Alex Jones on.
[00:47:35] And so, there are going to be some issues with the way in which some of these purveyors of hate utilize the star power of others who are helping make profit for these tech companies. And then we're going to get into different kinds of trouble around deplatforming. You saw, even in the midst of trying to get Trump to stop using Twitter, he hopped over to @POTUS; like, "oh, yeah, I forgot, I got a backup account." This is typical. That kind of strategy is typical of the people we look at. I have no doubt in my mind that some of these militant groups have just reorganized on Facebook. For instance, the Straight Pride Parade
in Boston is organized by Super Happy Fun America. And the leader of that group lives in Malden and organizes, and everybody knows who he is. He's not hiding. But he certainly isn't "super happy fun America"; he's a white supremacist who organizes hate events in our own city. [00:48:42] And so, it's important to realize that they don't always say who they are. They don't always show up in the hashtags. They utilize the anonymity and the lack of transparency on these platforms in order to grow and grow and grow. Which is why I love Kathy's point of view: she's been in government, she's been in these companies, she can speak to the two levers of power. But unless we get a broad tech
union that forces change from the inside – including efforts like Tech Won't Build It, and I'm thinking about the group Coworker here, which really has helped organize tech – unless the people who are building these things say "no more," then it's not going to change. Facebook is now about to enter a culture of sabotage, where people are going to stay on the inside of that company in order to leak materials to the press. That is the worst position to be in if you're a company shareholder, because it means casual conversations that you had in text messages might become public fodder. This is not a good situation to be in. It puts everybody at a different kind of
professional risk. But that's what happens when culture has to stand in for process and justice. [00:50:10] JONATHAN ZITTRAIN: I wanted to take a quick crack at the same question – is there anybody who's done this right – and just say, what's the "this" being done right? If it's, is there anybody who's done Twitter right, I'm not sure you can do Twitter right. For all of the problems my colleagues have been talking about – the scale, the pace, the unfilteredness of it – is there a way to not actively recommend horrible things? Sure. You could just get out of the recommendations business entirely. And that might not be a bad idea. But I find myself focused on, well, what are we trying to do? And what is our goal when we go to do it? If your goal is to learn facts about things, whether it's about health or history, there are lots of places to go online to learn stuff. Wikipedia does things pretty well.
Although that's its own conversation. The fact that we're not even sure what we're online for – what it is we're trying to do right, when 10 or 15 years ago these things didn't even exist in their current form – that, I think, gets us tied in knots a little bit. If you're asking, are there ways to have people in extremely staccato ways follow one another and emote at one another without it becoming a big mess once it's more than 400 million people, I'm not confident there's any way to do that. [00:51:43] KATHY PHAM: I think in addition to that, Jonathan – I actually get this question a lot from my students in my product class as well: who's doing it right and how do I do it right? And another way I like to think of it is, you actually might not even know what you're trying to do right, or what "right" is, until sometimes you shift your product and you're like, Oh, no, I didn't know what was right, but now I know that this actually is wrong. And you have to build a culture or a mechanism where you do something about it. I mean, security has thought about this for a long time. You don't know all the security breaches
you might have, but you know that when one does happen, you do something about it; versus, "I'm sorry, it's just the platform, my bad!" and waving your hand for about ten years and not doing anything. [00:52:22] So I think one example is – again, this is a small sliver of Airbnb, and it's not solved yet, I want to be clear about that – when issues of racism on Airbnb first surfaced, when #AirbnbWhileBlack and #AirbnbWhileAsian were trending all over the place, they hired Laura Murphy from the ACLU, who was actually an expert in the topic, embedded her within product teams, had buy-in from executives, and built in features to try and shift the product. And that's, I think, an example of recognizing partway through that something is awry and having mechanisms for bringing people in, and respecting those people on the same level as your product and engineering folks, and building those features. And leaving room for that, versus just "this is how we do things, we're
not changing, this is just the way we do things." And that's a different way I think about that. [00:53:19] LESLIE MILEY: You said, is there anyone doing this right, or is it even possible to do that type of moderation? And it actually is possible to do. And it is possible to do at scale. We had a saying at Twitter, that Twitter was eventually consistent. There's something called the fanout. The fanout of tweets and the dissemination of information
doesn't happen immediately. Recommendations don't happen immediately. Yes, you want them to be timely. So there are things, and there are technologies that I worked on– this is six years ago, so I'm sure the state of the art has moved much further. I tell the story because it is so ironic. When I was at Twitter, I discovered several
hundred million accounts that had been created in Ukraine and Russia. And we audited some of these accounts and said, There's no reason for them to be here. And I was like, we should just kill them all with fire, because we don't understand what their purpose is, so they don't need to exist. And when we tried that, guess who we ran into? The growth team;
the growth team shot it down. Because they could run resurrection campaigns on those accounts. These are the incentives that you have to stop. People understand, people get it, people know what it would take, but there's this tension that will always tilt towards capitalism, which will always tilt towards "growth at all costs." [00:54:43] Kathy, you said something that really resonated with me. I've worked with a company that was about to push out a product that could have been – and probably was going to be – harmful to people of color. And it took several times of bringing it up to the right levels before the company finally took a pause. The thing is to
build that into your product development process and to bring in people who have experience. Which means they can't look like you. They can't be from your same backgrounds. They can't go to your same schools. They need to have had experience, a different life experience than you have. And tech is terrible at doing that. Which is why things are still screwed up in so many places, because the people in policy, the people in product and the people in engineering who need to be making those decisions generally all look alike and have gone to the same school.
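Leslie's earlier point about Twitter being "eventually consistent" points at a concrete intervention: because fanout – the delivery of a post to follower timelines – is asynchronous, there is a window in which flagged content can be held before it spreads. The sketch below is an editorial illustration of that idea only, not Twitter's actual pipeline; the blocklist and function names are invented for the example (a real system would use a classifier and human review, not a phrase list).

```python
from collections import deque

# Toy fanout queue with a moderation gate. My own sketch of the idea,
# not any platform's real architecture. Delivery to follower timelines
# is asynchronous, so a check can sit between posting and fanout.

BLOCKLIST = {"stop the steal"}  # stand-in for a real classifier

def flagged(text):
    """Return True if the post matches a known-bad phrase."""
    return any(phrase in text.lower() for phrase in BLOCKLIST)

def fan_out(posts, followers):
    queue = deque(posts)                    # posts awaiting delivery
    timelines = {f: [] for f in followers}
    held = []                               # routed to review, not delivered
    while queue:
        post = queue.popleft()
        if flagged(post):
            held.append(post)               # never fanned out automatically
            continue
        for f in followers:
            timelines[f].append(post)
    return timelines, held

timelines, held = fan_out(
    ["morning everyone", "STOP THE STEAL rally at noon"],
    ["alice", "bob"],
)
print(held)                 # → ['STOP THE STEAL rally at noon']
print(timelines["alice"])   # → ['morning everyone']
```

The design point is the one Leslie makes: the delay already exists for engineering reasons, so the question of whether to gate dissemination is a choice about incentives, not a missing technology.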
[00:55:35] JOAN DONOVAN: Are you talking about Stanford? [laughter] There's an obligatory dig at Stanford in every Harvard event. And so, I think we just had to do it. [simultaneous conversation] There's just competition for everything. The reason why I think it's important that we also understand this intimately as a Silicon Valley issue is because of the way in which these systems tend to propagate and the vision of what is possible with technology has a lot to do with who you interact with. At Harvard Kennedy School, we're a policy school and we know that certain ideologies mean that certain kinds of policy are just not going to be possible under specific political administrations.
And so, we have to face that as well, thinking about our technological development: it isn't path-dependent, and it's not innovation that is solely driving the tech that we get – that would be a deterministic argument, and it actually stops us from thinking about what else is possible. [00:56:46] And so, I just want to really think about, with everybody here, as you endeavor to go about your work in this world: realize that the tools and the technology we build are actually of our own design, of our own creation. But they're extremely powerful, and they're being used by people in power to foment, in this case, an insurrection, which feels a lot like political oppression. [00:57:13] I also want to give a shout out to some books, because this is education. Gabriella Coleman's Coding Freedom; get that.
Black Software, Charlton McIlwain; get that. Artificial Unintelligence, Meredith Broussard; buy that. Design Justice, Sasha Costanza-Chock; get it. Distributed Blackness, André Brock; killer. Behind the Screen, Sarah Roberts; another midnight read. Oldie but a goodie, Frank Pasquale's The Black Box Society. And I don't have Safiya Noble's Algorithms of Oppression in front of me, because I memorized the cites; I've got the citations all up here. But I say that just because there are plenty of alternative histories of the Net.
And we have to learn them now so that we don't fall into this complex of thinking that the way it is built is the way that we should continue. I yield my time. LAURA MANLEY: Thank you so much for that, Joan. And we'll put all those book titles that Joan shouted out in the chat so attendees can see them if you didn't get a chance to write them all down.
I'll give the rest of the panelists just 30 seconds, unfortunately given the time, any closing thoughts on, what now? [00:58:29] KATHY PHAM: One of my main thoughts: One, just listen to everything Joan says. And two, big tech is big and it can seem that way, but having been in the room at some of the highest levels of government, like the White House, or big executive people, fancy people in tech, sometimes a really small group of people can really push tech to act differently, whether it's unionizing, whether it's finding other people to push the company, whether it's just getting yourself into the room in the West Wing so you can help with an executive order or help a Congressperson do something. It can really help shift the trajectory of some of these topics we talk about. So I'm just going to leave you with that note.
[00:59:12] LESLIE MILEY: I think that when I watched the attempted coup last week, and watched people being escorted out of the Capitol – after they defaced it, after people were killed – as if they were guests, I see no difference between that and when Zuckerberg or Sundar or Dorsey go to Capitol Hill and do their talk and get escorted out. They're all doing the same type of harm; it's just viewed differently. And I think it's important for all of us to start to– [alarm goes off] I gave myself 30 seconds to talk. I'm sorry, I'm past 30 seconds now. I think it's important for us to really begin to get more involved, civically and politically, and really start to push on companies and hold them accountable for what they're doing. And that means withholding our labor, like the unions are doing. I'm all for that, and I support every union that's trying to do that. Because that's the only bargaining tool tech employees have: their labor. JONATHAN ZITTRAIN: I just want to emphasize the thread that all three of my colleagues have emphasized, around trying to find the humanity within this technology, and how to help that technology bring out our humanity to one another. I'm thinking of that as such an important goal, and one for which
there are ways in which the 25-year run of Section 230 has just sort of [inaudible] a lot of those conversations, and it's, I think, very valuable to be having them right now. [01:00:55] And I think also this is a bookmark – this is maybe our next gathering; we should make this a weekly thing – the question of centralized versus distributed. And I've generally, over the years, been on Team Distributed myself. I've seen the threat model as being imposition from legacy, non-representative authorities on the little folk. I think the threat model is much broader than that now, so it makes things more complicated. I'm mindful of the fact that Jack Dorsey, in doing a short Twitter thread in the wake of Twitter's deplatforming of @realDonaldTrump, said, with a quick shoutout to Bitcoin, that Bitcoin demonstrates a foundational internet technology that is not controlled or influenced by any single individual or entity. This is what
the internet wants to be, and over time more of it will be. And he then retweeted a link saying Twitter is funding a small, independent team of up to five open source architects, engineers and designers to develop an open and decentralized standard for social media. [01:02:08] And I would just be so curious to know what Kathy, Leslie and Joan think about: if we had all this, but somehow in a distributed way that would make it harder for a moment of deplatforming to happen, how does that relate to the kinds of interactions, the goals, that you have for tech that lifts us rather than crushes us? And I just ask that as somebody who's really trying to think that through myself. LAURA MANLEY: So I promise that we will do another one of these, because we have literally just scratched the surface of this issue, and I can imagine having at least ten more of these and having all of them be as interesting as this first one. I really do want to thank all four of you for taking the time and really bringing up some important issues. We have a lot to think about. [01:03:06] Hopefully recent events will continue to challenge us to think more deeply about public purpose values like information, truth and trust, private sector power, democracy and freedom. With intentional effort, I really do believe
that we can leverage this momentum and institute positive change, whether that's through reforming Section 230, public accountability, antitrust enforcement or other types of mechanisms. So again, thank you all for coming and joining this conversation. Thank you to the audience for tuning in. To keep up with our research and to stay tuned for more events like this, which I have now promised on air that we would do more of, follow us at @TAPP_Project, and we'll see you all soon. Thanks again, so much. Bye, everyone.
JOAN DONOVAN: Thank you, everyone. And thank you, Laura and TAPP. I really love working with you. Can't wait to get back in the office. LESLIE MILEY: Thank you, all. This was great. Have an amazing afternoon.
LAURA MANLEY: You all, too. Bye, bye. KATHY PHAM: Wonderful, thank you, bye.