UNREAL: Disinformation, Technology Policy, and the Future of Governance

LEISEL BOGAN: Well, thank you all for coming. I am so excited about this panel. It is the culmination of a lot of research here at the Harvard Kennedy School's TAPP Fellowship Program. Before we get started, and I introduce this incredible panel that we have joining us today, I'd like to say a few thank yous, first of all, to Secretary Carter for his incredible support in all of our efforts. My amazing cohort, we've all bonded so much. I'm really grateful to all of them.

I'd also like to thank Amritha and Karen, who have been leading this effort throughout the year and guiding us, and my research assistants, Nithin and Anna, who have been incredibly helpful. [00:01:27] And today, I'm just going to quickly explain the research that I have been doing, and the context, and then I want to just have a conversation with the people who are actually doing this work on the Hill. We have an incredible panel for you today.

And I am really excited for you to hear about them, and their work. And I will let them introduce themselves after I explain what I have been doing, because I have been told I may go on a little long about their accomplishments, because they are so amazing. And then we'll just—I'll start asking them questions. And then we'll leave time for Q and A at the end.

So first, I just want to say that my two areas of research this year have really focused on Congress's attempts to regulate information technology harms, the impact of those harms, and the impact of those efforts to regulate them. And then secondly, I've looked at the US government's capacity to address technology harms in the future. And I framed those within the crises of the most recent two Congresses, the 116th and 117th Congresses, so starting in 2019. There has been a global pandemic, an attack on the US Capitol, domestic, economic, and justice crises. We've got the unprovoked invasion of Ukraine.

And a number of other issues. And every Congress has crises and issues. But the last few years have been particularly contentious.

And underlying all of those issues has been this sort of information crisis, in the way that new technologies are helping us understand the world and communicate with each other. [00:02:57] So today we're going to focus on the challenges to regulating technology, with the specific focus on the digital information space and the harms that the large technology companies have kind of helped develop, and then the challenges to regulating that industry. One thing that stood out to me in that research is that since 2019, there have been over 110 bills that have mentioned disinformation, misinformation, and social media. Congress has been working very hard on this. And you can read more about it in my research report.

The first part of the report discusses sort of the history of how Congress has dealt with disinformation, which is the deliberate injection of false information into society. And then, how it progressed over time, to sort of look at the larger picture of how these technology companies work, and are being used to spread that kind of information. [00:03:51] But then also, how Congress has shifted a little bit toward misinformation, which is different. It's the unintentional circulation of false information. And there are other terms in there that you can explore when the report comes out. And then lastly, really looking at, the Executive Branch and Congress, are they both equipped to deal with these challenges in the future? Especially given how rapidly technologies have been developing, and the ways that they can be easily weaponized.

So with that, I would like to turn it over to the three of you, to introduce yourselves. We'll start with Anna, then Victoria, and Ishan, we're really excited to have you. Thank you so much for being here. [00:04:29] ANNA LENHART: Awesome.

Thanks, Leisel. Congrats on the report. Hi, everyone. My name is Anna Lenhart. I am Congresswoman Trahan's Senior Policy Advisor, covering mostly protection issues, and even more so, focused on kind of online safety, including disinfo, misinfo, but also a little broader than that.

And then also, child safety issues within that context as well. I do have a background in machine learning and antitrust, which will also maybe come up as well. VICTORIA HOUED: Hey all. I'm Victoria. I previously worked for Speaker Pelosi in 2020, and got to work on a bunch of awesome different initiatives, including disinformation work, broadband work, privacy work, autonomous vehicles, literally everything under the sun related to technology.

I previously was a software engineer in Chicago, working for Cards Against Humanity. And I also did a bit of product work there as well. And now I currently work for Schmidt Futures.

And I'm working on a tech policy initiative called Plaintext Group. And we sort of focus on a lot of topics under the umbrella of competition and innovation policy. So it's great to meet you all. [00:05:36] ISHAN MEHTA: Hi everyone. My name is Ishan Mehta.

I am a tech and telecom legislative assistant in the office of Senator Schatz from Hawaii. And primarily, been focusing on tech and regulation, privacy, security. But broader commerce committee issues as well. Thank you so much, Leisel, for organizing this. And really looking forward to the questions from the panelists.

LEISEL BOGAN: Amazing. Thank you so much for being here. I'm so excited about this conversation. So one of the things I wanted to ask you about is just super broad, what has been your biggest challenge when writing tech policy legislation? Or what are some of your biggest challenges when you've been writing tech policy legislation? ANNA LENHART: Did you have someone you wanted to start with? LEISEL BOGAN: We'll start with you, Anna, since you've been – [laughter] [00:06:31] ANNA LENHART: I did, I did just introduce a pretty huge bill. So I guess I can go first. Yeah.

I mean, I think the biggest challenge that we've been thinking about is how do you future-proof these laws? And it sounds like Leisel's report is going to hit on that too. And so, for lawmakers, many of them have been fighting for the last two decades, for just basic protections in our current online environment. And so, they certainly don't want to feel like they're back at square one, when we're operating in virtual environments, with haptic suits, and our transactions are on the blockchain. So, how do we get these definitions to be able to be stable? And the good news is, you know, AR VR platforms are going to [00:07:08-audio cutout] content. You know, there should be a way for us to keep them in this framework, in the definitions we're crafting today.

But the devil is going to be in the details, right. And so, if we look at transparency, which you know, all three of us—four of us have done a lot of work on—whether you're talking about a VR/AR platform, a video platform, a job board. You know, there's content moderation transparency that you probably want. But those metrics are probably going to be a little bit different. And they're probably going to change over time. [00:07:39] And so, from my perspective, the best way to handle this is to have an agency, agency capacity somewhere, that has a lot of really smart people, ideally interdisciplinary, who are able to use the Administrative Procedure Act—APA rulemaking, open comment periods, judicial review as needed—to be able to update kind of those specifics around something like transparency reporting. But you can imagine that in the risk assessment framework space as well, or something like that.

Right now, for those of us who are passionate about that approach, we are watching the Supreme Court very closely. So for those of you who are not familiar, the West Virginia versus EPA case is typically framed as an environmental policy case. But I challenge everyone in tech policy to pay attention to it as well. Because it does look to potentially alter the way we think of the major questions doctrine, which is really sort of this idea of how much power you can give to agencies. And I think it could potentially really change both the FTC's and the SEC's ability to do rulemaking.

So I'm watching that pretty closely. [00:08:44] And I have started to think through, you know, if rulemaking is not going to be the approach we can take, either politically or because of the interpretation of the Supreme Court, what does it look like to take inspiration from tort law, from sort of gross negligence claims, and try to use that in the online product space? But I think it's a little bit tricky, right? I mean, if we look at Section 230, which I know we'll talk about a little bit more, I'm not sure the courts' interpretation of that, over the last two and a half decades, has really met Congress's intent. So, you know, there's no guarantee courts would follow Congress's intent today either. So it's a lot to think about. It's really challenging.

LEISEL BOGAN: No, that's amazing. Thank you. Victoria, do you – [00:09:28] VICTORIA HOUED: Yeah. I was going to say, it's funny, because I feel like, for this question, I come at it much more from like people perspective.

Because I think when things come to the leadership level, like Anna has already done that amazing work of thinking through kind of like the nitty-gritty. And our job is mostly thinking about, like, what are the shifts in the political landscape? And does this thing actually fit into the moment? So when I think about like Lina Khan running the FTC, that like shifts a lot of the conversation around like, how much power do we actually want to give to the FTC from a Republican perspective? And can we push things through in this other, different way, now that there's a power shift? Or even just like, during my time, COVID hitting, and us wanting to do privacy legislation, meant that we had to reframe privacy under the lens of the medical information you're giving to companies that are creating different platforms trying to get you tested, or what have you. [00:10:20] And so I think the shifts in political landscape, and just in general, I think coming into this space—and this is a lot of what I end up helping other people understand, like a lot of the nonprofits we work with now—is just trying to understand who is the right person to actually talk to, and to win over, for a lot of these ideas? Because I think the inner workings of Congress are like the biggest challenge a lot of the time.

It's not really like, do you have a good idea or not? But more so, like who are the people who actually have power to say yes or no on a particular topic? And I feel like most of the leadership level is like thinking through the sort of strategy behind the actual people, and yeah, and the policies themselves. [00:11:01] ISHAN MEHTA: Yeah. And just like sort of, I think Victoria had a really good point, is that like, the bill-writing part is only like 15 percent of the work, right. Like the real work begins afterwards.

And how do you build a coalition of people to put enough pressure on like committee chairs, on leadership, on other members, to support the idea, to put it up for markup, to put it up for a vote? And thankfully, to a certain extent, like this space has not been as politicized as some of the other sort of issues Congress deals with. So there is a certain level of consensus that transparency is good. And [00:11:47] accountability is needed. But as Anna said, the devil is in the details. And that's where things start diverging. LEISEL BOGAN: No, I agree with you.

One of the things that surprised me, though, when I went to the Hill—I had worked at a software company, and worked in cybersecurity in the private sector. And some of the things that, to me, just seemed more technical, when you get on the Hill, you're like, “Wait. I didn't realize there were political divisions on X, Y, and Z.” Because to me, it just seemed like a technical answer or response. So that was one thing, at least for me, that was surprising when working on the Hill, and writing legislation. So I'd be curious, from all of you, what were some of the things that surprised you the most about Congress, and your work in this policy-writing process? Maybe we start with Ishan first and go backwards.

[00:12:38] ISHAN MEHTA: I think, how engaged people are in the details—you know, people really care about like every last provision in a bill, even if they're not leading it, especially in a space like this. You can't just tell somebody that this bill will make life tougher for Facebook, Google, or whoever you want to make it tougher for, and they'll sign on, right. Like people really take their time to pay attention to how relevant the provisions are. And, you know, there are bills people join for the political messaging side of it as well. But you know, people are very deliberate when it comes to a lot of these issues, even though it may feel that, you know, there is a lot of sort of—you know, everything's a nail when you have a hammer, kind of philosophy. But that isn't always true, in my experience.

[00:13:42] VICTORIA HOUED: I actually felt a little bit of the opposite, at least during my time there. I felt as though there weren't enough folks with actual tech backgrounds. And, I mean, that's why I loved the TechCongress Fellowship so much, just because I feel like this is becoming less and less of a problem. But I always felt like the opinions that people had and would come to me with were based on an article they had read, or just like one research paper that they had read, by someone who also had spent their entire life in academia. I think a lot of how I approached any of the legislation that would come across her desk was literally because of the actual in-person experience that I had had with like handling data, and thinking about privacy from a very, very personal place.

I think with that hands-on experience, it just kind of helped me legislate in a very different way. [00:14:35] And I was often frustrated by the bills that would kind of come up, because they would always forget something, or not think of something. And it was just based on the fact that they had a certain opinion because of a book that they had read by someone who, again, just didn't have that hands-on experience. And I think that that was very surprising to me when I came. ANNA LENHART: Yeah.

Sort of jumping off of that, I think, to do this job well, you really do need to talk to a range of stakeholders. And that's the thing that I think constantly surprised me, is the breadth of issues that you hit. So for the Digital Services Oversight and Safety Act—for those of you who are not familiar, it's a comprehensive transparency bill that includes sort of risk assessments, and also kind of detailed researcher access. [00:15:21] And, in the process of working on that text, you know, our office had to meet with experts on the Stored Communications Act and the Computer Fraud and Abuse Act, First Amendment lawyers, Fourth Amendment lawyers, national security experts, international data governance and data sovereignty experts, which is super complicated. We could have a panel on that. HHS, right, so Health and Human Services oversees the Common Rule, which drives IRBs—human subjects research ethics.

I had to talk to the auditing and financial sector, because we were using a lot of auditing processes. And that language had precedent tied to it. Understanding hiring structures and fellowship structures within the government agencies, right. So just a range of legal scholarship, far beyond and on top of, you know, the expertise that I was starting with, which is sort of experts in online platform governance and online harms, and the community we're speaking to today. So obviously, I had to keep up with those experts, and the research in that space, but then kind of add on all this additional research. And, I feel like, every day is just an experiment in learning what you don't know.

And it keeps it exciting. [00:16:30] LEISEL BOGAN: It does. I remember often coming out of a meeting, feeling like I had a pretty good sense of something for a bill.

And then, after the meeting, thinking, oh my gosh. I have to rethink everything. I have to start over, because this perspective just blew this thing out of the water. And I had never even thought about it. There are just so many perspectives to consider.

And then also, one thing that I think is a challenge, is thinking through like, who didn't get through the door? Like who didn't have the money to come in and talk to us? Who didn't have the opportunity to voice their opinion? And how do we get to them? And how do we hear from them? And I think that's another really important piece of this. And sort of along that line, I have also found that the outside has a certain view of how the policymaking process goes, having been on the outside myself. And academia has a particular view. Obviously, I had worked in academia before this.

But both seem to have a different view. And I would be curious what you all think about: what is most misunderstood by the public or academia? Or others who aren't doing this, what is most misunderstood about the process, in your view? I'll make Victoria go first. [00:17:45] VICTORIA HOUED: I was always—pardon my language—fucked over by the people, and not the actual bills themselves. I feel like it was always—I think when people, especially when we were doing COVID relief, and so many people were so desperate for us to pass something, it was always sort of like the backdoor conversations that we couldn't really be very public about, that made it the most challenging. It was, I think, always the—just, you know, yeah, like one backdoor conversation, where someone convinces someone of something, and it changes the whole thing. And then we have to start all over again.

Or, we have to go on August recess, and then take a break. And then we come back and start, you know, all over again. There were just so many things, I think, that should—yeah, should have been done a lot faster, but just got held up by one person. And I think the most public example of that, that people have seen, is like, you know, Manchin in the Senate. It's one person. His opinion. And because of the way things are kind of organized, it halts everything.

[00:18:53] ANNA LENHART: Yeah. I can kind of build off of that. And yeah, less on tech policy—your question was more about Congress, what gets misunderstood about Congress. I think there's two things that I see a lot.

So the first is committee structures. So look, the Member of Congress that I work for is what's called a down-dais member. So she's not a Chair. But Congress has to work on so many issues. So climate change, genocide, criminal justice, supply chains—those are four issues I've worked on in the last week.

The only way to get depth on any of this is to have committee structures. And so the two committees probably most relevant to most of the people on the call, but there are others, are going to be your Commerce Committee and your Judiciary Committee. And so one thing I see a lot is that there will be a bill. And it'll be bipartisan. Or it'll be introduced by a really high-profile member. And you'll see the public and academics on Twitter say, “Oh, this is bipartisan.

It's definitely going to pass.” Or, “It's a really high profile member.” And they don't actually look at the committee jurisdiction of that member. [00:19:57] And the thing is, the leaders of these committees, they do have to kind of prioritize, I mean, their own priorities. Leadership's priorities, to Victoria's point.

And also, the priorities in bills of their members on the committee. So if you're introducing a bill off committee, even if it's bipartisan, it's not that it doesn't have a path forward, don't get me wrong. But it's got one more hurdle than bills already have. And bills are hard to pass. So if you're adding one more hurdle, it's not going to be easy.

And so, I do see that a lot. And it's important, specifically for academics who are looking to work with Congress. Don't just necessarily go to a member that you think is politically aligned, or that you know. Do look at their committee jurisdictions, because it's just going to give you—you're just going to be able to have so much more impact with your research and what you're trying to put forward.

[00:20:46] And then the second thing I want to say really fast, anyone who's met with me has heard me say this: the House and the Senate are different. And what I mean is that the politics are different. So a lot of times you'll see something that's bipartisan in the Senate. And I'll have someone meeting with me. And they'll be like, “I don't understand. You know, why isn't it bipartisan over here?” And it really just comes down to a lot of things.

But you've got the election cycles to start with, right. So Senators run for election every six years. They're not all running at the same time. Over in the House, everyone is running every two years.

And then you've got these kind of historic years, like 2022, where you've got a party that's in the minority, and based on polls and history, is most likely looking to take over as the majority party. That's going to change the politics this year. And so, I just always have to remind people that the House and Senate have different politics. [00:21:38] ISHAN MEHTA: Yeah. And I think, like slightly, you know, zooming out a bit, I think it's also understanding this broader ecosystem of stakeholders that Congress is accountable to, and deals with, and hears from.

And you know, I have been in the sort of think tank advocacy space before. And you get so enamored with your own research and your own idea, that it's hard to remember that there's somebody else studying the exact opposite viewpoint, who is working just as hard, and believes in it just as much, and is also presenting to their Congressperson or Senator. And if you do get a chance to interact with congressional staff, or policymakers anywhere, that acknowledgment of, “I am presenting one view, and I understand there may be different ones,” goes a long way for us to be able to just place this. It's like a big mosaic, or tapestry, or whatever analogy you want to use. And like in the same day we might have meetings where somebody tells us, like, “Your bill is the best thing ever.”

And somebody else [00:22:55] you're going to kill children. [00:23:00] And that is literally what somebody told us. So it's really, you know, as varied as viewpoints are, somebody is going to hold each one of them.

And as part of like developing your own, I think it's helpful—or I would say it's required—to know what the other ones are. LEISEL BOGAN: No, I think that's really, really important. And this idea of compromise tends to be so contentious, in the sense that, if you do compromise, do you actually believe in what you're trying to do? And it's like, well, you do. But think about, as you said, you have those same conversations, where it's like, somebody is going to die because you're doing this. Whereas somebody else is saying, no, it's really going to help.

And so that's really hard. [00:23:52] I also think one of the things that I've been reflecting on at TAPP is that we don't have a ton of data for a lot of these things. My colleague had a panel right before this, where one of the experts cited, like, truly peer-reviewed data, and not just your contemporaries in the industry who will validate what you have to say. So I think that's another piece of it: how do we get better data? How do we know what the impact will be? Because we often are legislating on assumptions. Those have been challenges that I have been dealing with.

And one space, in particular, has been Section 230. It's always a hot-button issue. And so I really wanted to get your perspective on what needs to be done to address the harms of technology, but also to protect privacy and freedom of expression, civil rights, and civil liberties? We see all these things sort of clashing on the Hill at times, and in the media, and from academia.

So I would love to get your perspective on that issue. Let's start with Anna. [00:25:04] ANNA LENHART: Great. So our office doesn't do a lot on Section 230 reform. So I'll just kind of say what I have been thinking a lot about, which is just this difference between facilitating blatantly illegal conduct, and then sort of speech issues. And really trying to think about how we separate those.

[00:25:20] So our office is currently leading an investigation into online suicide forums. And it's just very heartbreaking work. And these websites are facilitating active, real-time assistance for young people struggling with suicidal ideation. And it's blatantly illegal in several states. And the fact that Section 230 is protecting these sites is just—I don't think that was ever its intent. So I do think we've really got to get our head around those kind of blatantly—not really speech issues, but just actual illegal conduct issues, especially as we move into AR and VR and a Metaverse situation.

I think it's just going to be—like let's get it figured out now. [00:26:00] But then, I do want to talk a little bit about sort of legal but harmful, right. Because that's an area where I think Section 230 has created a really powerful intermediary liability framework. And it's allowed for diverse voices, new movements, you know, new businesses, and influencers.

And I don't think online platforms are newspapers. So I am glad that there is sort of a different structure and way to think about them. But, as many people in the room today know, along with those kind of positive benefits, there have been movements that are rooted in hate and disinfo.

So one thing we've been thinking a lot about is how do we move to kind of product liability, or just sort of thinking about the product, and product design, and the safety processes that are in place, and the design features on these platforms? And so for that, I think we really need to look at, how do we incentivize better safety processes and better design? And we have an incentives problem right now, right. These companies are incentivized for ad revenue and growth. And they're actually, in some ways, disincentivized to know how their platforms contribute to the spread of legal-but-harmful content. Because, if they know that, then they're faced with the choice of letting their growth numbers go down in the name of potentially mitigating harms.

And so we've got to fix those incentives. And that's what policymakers are supposed to do. So I think it's good that we're all thinking about it. [00:27:25] And for me, it really comes down to pairing transparency with consumer choice. So, on the transparency front, you know, we have to mandate that these companies pen-to-paper their values.

They've got to tell us what they stand for. You know, if you're claiming publicly that you care, that you don't want your platform to be used by teen girls to discover dangerous weight-loss pills, or by young men to be actively recruited into terrorism groups, or by grandparents who are just looking for the news, and end up in conspiracy theories—if this is stuff you're saying, either in your terms of service, or publicly, that you care about, how are you doing it? What are you putting in place? What are your investments, right? Are you hiring experts in eating disorders who know that teen girls will figure out countless ways to spell “thigh gap”? And are they working with your machine learning team, you know? Are you trying labels? Are those working? How are you measuring those? [00:28:15] Are you thinking about potential crisis scenarios before they occur? Are you investing in that type of scenario planning? Let's make companies pen-to-paper this. And then we do have to add some kind of level of accountability to that, right.
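
A minimal sketch of the kind of variant matching Lenhart gestures at here: normalize lookalike characters, then fuzzy-match against a term list. The substitution map, term list, and threshold below are illustrative assumptions, not any platform's actual system, and a real pipeline would layer human review on top.

    import unicodedata
    from difflib import SequenceMatcher

    # Map common lookalike substitutions back to letters ('1' -> 'l', '@' -> 'a', ...).
    LEET = str.maketrans("013457$@", "oleastsa")
    BLOCKLIST = ["thigh gap"]  # hypothetical term list, echoing the panel's example

    def normalize(text):
        """Fold diacritics and lookalike characters, then keep letters only."""
        text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
        return "".join(ch for ch in text.lower().translate(LEET) if ch.isalpha())

    def flag_variants(post, threshold=0.8):
        """Return blocklist terms the post fuzzily matches after normalization."""
        clean = normalize(post)
        hits = []
        for term in BLOCKLIST:
            target = normalize(term)
            k = len(target)
            # Slide a window of the target's length across the normalized post.
            for i in range(max(1, len(clean) - k + 1)):
                if SequenceMatcher(None, clean[i:i + k], target).ratio() >= threshold:
                    hits.append(term)
                    break
        return hits

    print(flag_variants("check my t-h-1-g-h g@p progress"))  # -> ['thigh gap']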

So the Digital Services Oversight and Safety Act has sort of independent auditing of risk assessments and mitigation reports. But then, it also does have the researcher access piece, which I think kind of does two things, actually. It's a way to sort of have an independent look at, are your mitigation techniques working? And are you actually using them effectively? And, you know, are there online harms coming down the pipe that we haven't seen yet? And, are there new mitigation techniques we should be trying? [00:28:56] So I think that sort of just deep expertise, Leisel, to your earlier point of like, what do we really know? And what don't we know? And how do we fill that gap? So really excited about that work.

But listen. All of that only matters if advertisers, influencers, consumers can leave the platforms that aren't aligned with their values. And right now, that's just not really true. And that's where the antitrust reform comes into play. [00:29:22] And I think if we can have more platforms, they can have different values, they can be transparent about those values, you can have influencers, and especially advertisers.

I think advertisers really are the ones struggling right now. We've got basically a perfect duopoly in the first party advertising space. So if you give them those choices, I think you'll shift the incentives. ISHAN MEHTA: Yeah. Just sort of building off what Anna said, I think, like, so we also have a bill, it's called the PACT Act.

It is bipartisan. And it is in the Committee of Jurisdiction. But, you know—and I think this is very important—there is a Section 230 component to it where we hold platforms accountable for stuff that has been proven illegal by a court. So it's slow, tedious. But that is the standard that we felt comfortable with, considering the protections that go beyond Section 230—namely, the First Amendment.

[00:30:24] And I think a lot of people conflate what Section 230 covers and what the First Amendment covers. But that can also be a whole other panel. But, you know, there are a lot of things that are going to improve this ecosystem that have nothing to do with Section 230. And our point is, transparency is great.

Just tell us what your policies are, and how you're enforcing them. And hopefully, that, with [00:30:58] privacy law, some competition and antitrust enforcement, we allow individuals to see how platforms describe their own sort of terms of service, give consumers an idea of how they enforce them, and then give them a choice of whether they actually want to be on that platform. Another thing we have in the PACT Act is just basic sort of customer service obligations. If somebody is flagging something, contacting you about content that they may, rightly or wrongly, feel is harmful, you at least have an obligation to respond to them, to review that content, to let them know why this content was decided to remain up, or taken down, or whatever, however it fits in your content moderation policies. And these things have nothing to do with Section 230. But it is just about empowering users to get more information from what are currently very opaque governing structures.

[00:32:12] VICTORIA HOUED: Honestly, yeah. My whole thing was also transparency and giving like researcher access, and being able to actually legislate based on information that we ultimately mandate, instead of just allowing platforms to decide what to give us—like, for example, Facebook having an open API for their election ads, which was potentially not even fully accurate. And so allowing platforms to make decisions as to what is actually given to the public is not the best. Because there's just no way to know if it's actually honest, a lot of the time. And so, I mean, during 2020, during my era of being on the Hill, I think when the PACT Act came out, I think I was like, “Transparency! Transparency!” And so excited. And Anna and I kind of kicked into full gear of like, this should be like the next thing that should be easy to pass, hopefully.
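
To illustrate the mandated-access idea Houed contrasts with platform-curated APIs, here is a hypothetical sketch of what a researcher pulling records from a legally required ad-transparency endpoint could look like. The URL, fields, token scheme, and pagination are invented for this example; no real platform API is being described.

    import csv
    import requests

    # Hypothetical endpoint and token: invented for illustration, not a real API.
    BASE_URL = "https://transparency.example.gov/v1/political-ads"
    TOKEN = "researcher-access-token-issued-after-vetting"

    def fetch_ads(query, pages=3):
        """Page through ad records matching a search term (cursor pagination assumed)."""
        cursor = None
        for _ in range(pages):
            resp = requests.get(
                BASE_URL,
                params={"q": query, "cursor": cursor},
                headers={"Authorization": f"Bearer {TOKEN}"},
                timeout=30,
            )
            resp.raise_for_status()
            payload = resp.json()
            yield from payload["ads"]
            cursor = payload.get("next_cursor")
            if not cursor:
                break

    # Dump a few pages of results to CSV for analysis.
    with open("election_ads.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "sponsor", "spend_usd", "impressions"])
        writer.writeheader()
        for ad in fetch_ads("election"):
            writer.writerow({k: ad.get(k) for k in writer.fieldnames})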

And it's a lot less controversial, I guess, than a lot of the other initiatives to act on the problems that we're seeing with platforms. So I'm like a full proponent of pushing transparency legislation. [00:33:20] LEISEL BOGAN: I'm still on the fence on some of the transparency stuff, in terms of like compelled speech. But, as you know, I kind of have a different view on some of these things. But one thing I wanted to ask, before I get to a couple other questions.

First of all, if you are in the audience, feel free to use the Q and A function to start submitting questions. We'll open up to questions in a second. But to the issue of disinformation—I mean, that was originally why Congress started getting interested in this. It began with ISIS, or ISIL, using a lot of the social media technology companies, and some of the radicalization that was happening online. Congress started paying closer attention to what was happening with these platforms. [00:33:58] Just broadly speaking, in terms of how we've dealt with that problem of disinformation—given the amount of time since it became very clear that we've had state actor interventions in elections for decades, and very clear that they were taking advantage of these tools and mechanisms that are available online—does anybody have any thoughts on what we do? And why that particular problem still has not been solved, in my view? And what it would take to solve it, and whether or not it's a whole-of-government approach? Or what you think we should do to address that issue, if you have any thoughts? [00:34:37] ISHAN MEHTA: Yeah, I can sort of—I don't know how to solve it.

But there are certain things we have learned. One thing we have going for us on that issue is that the platforms, legislators, all [00:34:52] are on the same page that we don't want this content on the platforms. And the platforms are actively investing to make sure—you know, to varying degrees, but at least that is the case—that they are actively looking to seek out and remove that kind of content. But you know, not to be a platform sympathizer, but the [00:35:19] is really skewed, right. Like it's one private company, a really large private company [00:35:29] on location, versus a nation state, right. Like that has decades of training investment specifically in these sorts of tactics.

So just recognizing that challenge is important. I think there is a lot of stuff that can be done, in terms of like, I think, what Anna was talking about—the researcher access. And not just between academics at Harvard and the platforms, but everywhere, in multiple languages. You know, Senator Luján—I'm going to give their staff a shout-out—is doing some great work. [00:36:15] A very small percentage of the world speaks English. And a lot of people don't use these platforms in English. So if you think, you know, platform governance or sort of moderation should be like here, and it's here, for non-English languages, it's like all the way down there.

So that sort of recognition—which our foreign policy staff will always raise, is like, you know, we are not alone in the world in facing these issues—I think that's like step one. [00:36:53] LEISEL BOGAN: What do you think, just to follow up on that—and again, feel free to start submitting questions—in terms of the international perspective? I know, Anna and others, you're starting to look at this issue.

And I heard a researcher say, recently, we need to put some sort of platform researcher access at the UN, or other international institutions. I don't love that idea. But I would be curious what anyone else—what your thoughts are. I recognize, though, that there is an international component to it that we need to address. Does anyone have any thoughts? [00:37:34] ANNA LENHART: I can jump in. So we are working on some text right now.

So I can't be too public about this. But I think what I will say is, I've been thinking a lot, both in terms of a domestic whole-of-government approach, but also with our democratic allies, about this hub-and-spoke sort of model, right. So what I mean by that is, in the hub, that's where our values and our frameworks around the way technology should be used live—beyond social media and disinformation. Just, what are our standards? What are our standards for an [00:38:04]? What are our standards for a risk assessment? What are our standards for disparate impact testing? Like having that sort of expertise, and ideally collaboration. You know, I'm obviously watching the DSA, the Digital Services Act, the Digital Markets Act, the Online Safety Bill. They're all going to come out with these codes of practice, these transparency report requirements, and start to really set a few of these standards.

So that's what can end up living in kind of our hub. And then, where the spoke comes into play is sort of the more context-specific stuff. In terms of thinking internationally, you know, those transparency reports that come out of Europe—what metrics make sense in the US? And where is the US context maybe a little bit different? I think language is a big one. And where might we need different kinds of reporting that we can build on top of it, instead of just like a whole new report from scratch? [00:38:54] And then also, just kind of domestically, our office has been doing a lot of work on ed tech. And so I'm kind of going off topic now. But again, you know, if you look at the draft bill, which is public, you'll see that it has a technology impact assessment in it.

And if you look at that, you'll see that a lot of it is pretty similar to the risk assessments we're talking about for social media platforms, or for automated decision systems. But then there's these like ed-specific components, right. Is this technology actually good for learning? And is it good for all types of learners? And what are those learning outcomes? That's very specific. You don't see that in a normal algorithmic impact assessment. So again, how do you kind of build off of these core values and approaches, and then build on the context specifics? That was a little rambly, sorry.
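
A toy sketch of the hub-and-spoke idea in code: a shared base report schema (the hub) that a sector-specific report (a spoke) extends rather than replaces. All field names here are invented for illustration and are not drawn from any actual bill.

    from dataclasses import dataclass, field

    @dataclass
    class BaseTransparencyReport:
        """The 'hub': core fields shared across platform types and jurisdictions."""
        platform: str
        period: str
        removals_by_policy: dict = field(default_factory=dict)
        appeals_overturned: int = 0

    @dataclass
    class EdTechReport(BaseTransparencyReport):
        """A 'spoke': education-specific fields layered on the shared core."""
        learning_outcomes_studied: bool = False
        learner_groups_assessed: list = field(default_factory=list)

    report = EdTechReport(
        platform="ExampleLearn",
        period="2022-Q1",
        removals_by_policy={"harassment": 42},
        learner_groups_assessed=["English-language learners"],
    )
    print(report)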

[00:39:43] LEISEL BOGAN: No, I love it. That's actually really interesting. And it kind of touches on the first question in our Q and A, which is: what can local and state political bodies do to amplify good information over misinformation and disinformation, or help apply positive pressure for national policy goals to combat misinformation? How can they help? VICTORIA HOUED: I feel like states are just a great way to experiment, or a great place to experiment on specific legislation that we want to pass on a federal level. And I think privacy is a great example of that. I think like the states going ahead with their own privacy laws, like California, even if it is not perfect—they're at least trying to keep up with the state of technology today.

And I think that that's one of my biggest frustrations with Congress right now: we're spending five years debating what we should do with Section 230. But people haven't even come to the conclusion of, like, whether changing Section 230 is even the right thing to do. [00:40:43] And we're like sitting around, talking, and thinking while the tech industry is, yeah, getting into like the Metaverse, where all of the issues and problems are—they're similar, but they're completely different. And a lot of the language that we use to address a lot of the issues we have with, like, you know, hate speech on Twitter or something, is going to be completely different than the harassment that people will experience in the Metaverse. And so I think, yeah, I love the idea of doing like a global hub.

And we've been talking about doing this on, even just like nonprofit, outside, independent industry level. But again, I'm like, okay, well how long will that take to actually stand up? We need to be just moving a lot faster. And I think states can do a really good job of just moving the needle, even if it's not fully perfect, because it's just really never going to be. [00:41:34] ANNA LENHART: Yeah. I think someone just needs to pass regulation so that we can see that the internet did not fall apart overnight.

And I think that's just going to be really problems— [simultaneous conversation] Europe for making that happen. But yeah. I actually do also want to comment on just like local news, which I know is struggling. But, to the extent it's still there, it is still very trusted.

And so one thing I would love to see more is just local newspapers talking about their ad industry. I know that sounds a little bit weird. But just talking about the way that they are harmed by ad tech, and the way that they're struggling because of Facebook and Google's dominance—I think if local newspapers talked about that, and talked to their audiences, it would bring something as abstract as ad tech and monopoly power a little bit closer to home.

So I'll just kind of add that. ISHAN MEHTA: Yeah. And I think just quickly, like the sort of local context matters a lot.

You know, I work with the Senator from Hawaii, and the kinds of issues we deal with, on like language and culture, are just very different than the conversations happening at a national scale. So sort of adding that local knowledge—like, you are sort of in charge of your destiny, in a sense. Because unfortunately, those conversations are always going to be missed at the national level. [00:43:02] LEISEL BOGAN: Those are great answers. We have a question from Afsanah. It's a really great question.

And yes, I agree with John, those are great answers. And I do want to point out that this is something that stressed me a lot when I was working on this: one of the issues with disinformation is a lot of anonymity online. And, as soon as you start talking about anonymity online, immediately, a lot of really vulnerable, marginalized communities that really rely on anonymity online come to the table and say, “This is how it's going to harm us.” And so Afsanah asked, “I was wondering how you've dealt with or planned around over-compliance by platforms when putting liability frameworks on them. For example, liability for harmful practices or content.

And, in their effort to bypass any liability, they throw off users and people who bring in low revenue, but might cause higher risk—often marginalized folks. I've seen this with sanctions, anti-terror, and fintech, and experiences of sex workers, for example. Is there a way to balance around that, and how companies, by nature, are self-protecting and not really public-protecting?” So I would just be very curious if anyone has thoughts on those. Anna, you were nodding.

I don't know who wants to go first. [00:44:22] ANNA LENHART: [simultaneous conversation] volunteering me for answers. No, it's a fantastic question. And it's really hard. And it's the stuff we think about a lot.

And I think it's honestly the reason I'm just like so, so passionate around transparency and independent research, and various elements of these platforms. Not just because—exactly like you were saying, like there's harmful content, but also stuff being overly taken down, right. I think we really do have to be able to look at both sides, you know. [00:44:47] And sometimes it's not even policies for what it's worth. Sometimes it's just a member of Congress or a member of someone in the White House, like screaming about something. And then you'll see kind of a reaction by the platforms.

And I just think it's so important that we have expertise, both in government, in independent research sector, and then also at the platforms themselves, who are able to kind of look at that reaction, right. And so we talk about FOSTA-SESTA a lot. And I wish, if we had had something like the Digital Services Oversight and Safety Act in place before FOSTA-SESTA, I think we'd have some really, really incredible studies on exactly what the impact of that policy was. And right now, you know, a lot of what we have is the best that researchers can do with the data they have. But you know, it would be so great to be able to do really comprehensive policy evaluation of some of this.

And just changes in terms of service too, right. Again, it doesn't have to be a new law. [00:45:47] ISHAN MEHTA: Absolutely.

I think, just to add to that: when we were writing the PACT Act, it was probably the number one question we dealt with—how do we make sure this is effective, but doesn't lead to a sort of over-policing of speech? And working in a bipartisan manner really helps with that, because, you know, Democrats and Republicans approach this issue slightly differently. So having that balance put us already in a sort of more moderate place—at least what I would like to think strikes the right balance. But also, I think one thing you have to remember, at the end of the day, these are businesses. And they care about bottom lines.

And if it is more financially viable to take everything down, that's what they're going to do. But, if it's more financially viable to leave everything up, that's what they're going to do. And so creating call centers, and more work, and sort of really large content moderation burdens that don't necessarily link to profit, is not necessarily a winning formula. [00:47:10] So how you balance sort of incentives versus responsibilities is important. And at the end of the day, the platforms are the ones that are going to implement all of this.

So, to a certain extent, you have to sort of listen to folks who know about platforms, or work there, about how these things actually work, and what is reasonable to expect from them. And what parts of what they're saying are actually true? And what parts should we not read too much into? VICTORIA HOUED: I just want to—I think like one thing that I struggle with is how we talk about platforms being their own business. And I think the way we treat them is like any other business, or any other industry. But I think the one thing we haven't brought up is like the algorithmic amplification of a lot of content, in a way that—I'm thinking about marginalized communities, and the way they're treated on platforms. And it is very easy to get someone fully removed—someone who was doing activism or something like that—through just gaming the system, through getting like a bunch of people to report that person, or what have you.

[00:48:22] And I think the whole concept of algorithms being able to make decisions on behalf of the users, and to be able to amplify certain things, and to be able to shift things one way or another—I struggle with treating the whole thing as like, “Oh, they're just a business that's going to make business decisions made by human beings and by people.” And I'm not making any sort of assertion, necessarily. But I think it's just—it is just not the same as dealing with like a brick-and-mortar shop.

And I think in watching a lot of the hearings, and the way that people talk about how we should treat these businesses, it's just not the same. I just wanted to kind of throw that out there in this conversation around how we should be treating them. [00:49:09] LEISEL BOGAN: No, I agree.

I mean, in the research, it was really interesting to see how Congress suddenly realized, “Okay, wait. We were first going after these certain behaviors, or failure to moderate. And now we really need to look at the functions that underlie the systems. And, you know, what are the architectures?” Because then, if you're going after the architectures of the systems, you're not as much—you're not as directly going after the speech. But to that point, the data is also very different than in any other industry.

Like for a lot of this, the transparency arguments—what are your thoughts? I have heard the argument, “Well, it's just like the FDA asking the tobacco industry to hand over its data, so you can evaluate the impact of that particular product on children.” But it's different, because of what the data is about. Essentially, like, one researcher was telling me, “If you're doing research on that type of data, with social media data, it's more like doing research on the child, not the product that's having the impact on that child.” [00:50:07] What are your thoughts on—and to me, it leads to a bigger question of sequencing. Like how do we get privacy before we start to get into all of these others? And to Anna's point, you start with something.

And if it had been in place, it might have mitigated the harms of another piece of legislation. So I guess, what are your thoughts on the two pieces: one, the sequencing of legislation and how we pursue it, and then two, the uniqueness of the data that comprises the harms, essentially? And what the privacy implications are for that? ANNA LENHART: I will say—but I wanted to give the other two a space. Yeah.

I mean, this was the hardest part about writing Section 10 of the Digital Services Oversight and Safety Act, which is the 20 pages describing ways for researchers to get access. And the problem was, yeah, we don't have a comprehensive privacy law in this country. And Europe does. So they, you know, are doing researcher access right now as we speak. And EDMO, the European Digital Media Observatory, is the group that's kind of figuring out Article 40 of the GDPR and what researcher access to social media data can look like under those data rights that are already kind of included in GDPR. [00:51:22] So, you know, as we were working through the Digital Services Oversight and Safety Act, we had to kind of put in privacy safeguards.

And I had to work with a lot of privacy groups to do that. And yeah, it was not necessarily the order I would have liked to do this in. But, in some ways, I also think that might be an opportunity: if we can get kind of bipartisan support around transparency, then maybe this is a way we get some data protections. Because, you know, the truth is that the status quo of how research is done into social media is not perfect either. And it varies a lot—as many people on this call probably know, institutional review boards vary by institution.

You know, some have kind of stricter standards for use of publicly scraped data, some don't. Some sort of have a different view on what exactly counts as human subjects research. [00:52:13] So, you know, the status quo of the way a lot of this research is being done, even just with the firehoses and kind of APIs available, probably needs to be looked at from a data rights lens as well.
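
One common safeguard of the kind such privacy provisions gesture at is aggregation with small-cell suppression before any data release to researchers. A minimal sketch, with an assumed threshold and record schema; real regimes would set these details through rulemaking.

    from collections import Counter

    K = 10  # minimum cell size; a real regime would set this through rulemaking

    def safe_release(records, group_key):
        """Aggregate records per group and suppress any cell smaller than K."""
        counts = Counter(r[group_key] for r in records)
        return {group: n for group, n in counts.items() if n >= K}

    posts = (
        [{"topic": "elections", "user": i} for i in range(120)]
        + [{"topic": "rare-disease-forum", "user": i} for i in range(3)]
    )
    print(safe_release(posts, "topic"))  # small cell withheld: {'elections': 120}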

So you know, keeping that in mind, as we think about kind of having—I think there's a way for us to actually mandate some access, and maybe actually add a level of privacy that's maybe not quite there. But yeah. It was interesting having to end up writing some privacy provisions into a bill that was ultimately supposed to be focused on transparency. VICTORIA HOUED: I'm very much pro sneaking privacy into legislation—or sneaking in, even just like the PACT Act, I feel like people see it as a Section 230 bill.

But it had like transparency provisions within it. And I think, like, it's just—even if they don't go into law, or what have you, they put a stake in the ground for the next generation of people writing this legislation. And people can use it as an example for when something actually does come to be. So very much about that. [00:53:20] ISHAN MEHTA: Yeah. This is, I think, like a House-Senate sort of dichotomy thing.

Because to me, trying to sneak privacy provisions into any other bill is like, you know, killing that bill before it even hits the floor. Because the moment people hear privacy, it rings like three alarms in their head. So it's just one of those things, that you keep going around all these sorts of politically radioactive conversations, but still maintaining—you know, trying to come up with something that actually is effective. But, you know, obviously, in a perfect world, yes, you'd do the comprehensive privacy legislation before we go after any of this. LEISEL BOGAN: Politically radioactive. I love that, because it's so true.

And when you're a novice, like I was, I didn't always know what those things were. And you'll be like, oh wow, that reaction was way more extreme than I thought. Last question, since we're coming up on time. What do you think the United States should do to prepare for future crises and to ensure that our technology is competitive, but also mitigates harms to society? I mean, a lot of the counterarguments to the legislation that many of you have worked on are: we don't want to hamstring our competitiveness.

We don't want to lose out, competitively, to China. [00:54:46] You know, there was a discussion I was at the other day, where somebody was arguing that it's a good thing that a technology company has collected so much of the artificial intelligence talent, because otherwise, we would lose out on that talent to other states like China, which are developing at a very fast rate. So how do we maintain our competitiveness and mitigate harms to society? And then, in regard to that question, what would be a whole-of-government, or a whole-of-society approach? What would be your wish list, if you could implement anything? ISHAN MEHTA: I think it's really hard. Let me start with that.

I think, from our perspective—certainly, at least speaking for myself—I'm much more comfortable with the fact that the products I use are from American companies, governed by American laws, right, and afforded these sorts of protections. But at the same time, I'm very wary of that being a red herring for sort of forgoing any regulation, and it becoming this sort of, “Oh, you can't do this, because China will win, or China will have all of our children's data, or China will take over AI, or China will take over social media.” [00:56:28] I think I'm very cautious when those sorts of conversations come up. Because that fear should not be a reason to not do anything. And you know, I think there are a lot of reasons to be wary of that.

But that doesn't mean that any of those things we were talking about today shouldn't happen. VICTORIA HOUED: Yeah. I think in line with what we were just talking about, in terms of like privacy laws, I think that the US, because of where we are positioned, has this amazing ability to set the tone globally. And I think like when we're thinking about how do we stay competitive, I feel like we view legislation and regulation as necessarily a bad thing.

But I actually think it can also be seen as a good thing, and that we are the champions in this space. Like we are the great thinkers of the world, as the world changes. [00:57:34] And I think just letting things just kind of go—especially in the state that things are in right now, where there are two super powerful hubs in the US. There is Silicon Valley, and then there's like Washington. And they're on completely opposite ends of the country.

And they don't like each other or talk to each other right now. And the way that the world kind of sees them is like, when these big tech CEOs come to Washington, people are kind of yelling at each other, and the CEOs are oftentimes kind of talking down to the representatives. And not that I think like we need to all be holding hands and kind of getting along—and as a person who helped write the Antitrust Report with Anna, obviously I am not really the biggest fan of the way that they have been able to grow.

But I also think—I just feel like the tone is not great at the moment. I almost feel like it should be more like thinking of it as disciplining to make someone better, versus trying to just take someone down for the sake of making them—making us less competitive. Because that's definitely not the goal. [00:58:40] You know, the people who are trying to regulate aren't trying to make us less competitive, or make the US worse in any way, shape, or form. So we're trying to make us all better. Also, I'm very, very pro talent initiatives in general.

I just wanted to throw that in, when Ishan was bringing up people talking about China winning on AI—it's like, we should be investing heavily in talent initiatives to also make us highly competitive. And that's rooted in education. That's rooted in a lot of other stuff, unrelated to just tech. But yeah. [00:59:17] ANNA LENHART: Yeah.

I mean, I'll just close with like, it can't be a race to the bottom, right. It's got to be—We have to center civil and human rights at the forefront. And we don't have to lose on the technology itself, right.

So I think facial rec. is such an interesting example. We will have people say, “Don't put any limits on the use of facial rec. in the United States. You're going to lose to China.” No. The competition is about image rec.—image recognition artificial intelligence—which we can use however we choose, right. So we can use it for medical diagnostics.

We can use it in machinery that's, you know, doing work on otherwise dangerous construction sites, which is like my favorite use of autonomous vehicles. We don't have to use it for surveillance, right. And so we can have the better tech.

We can invest heavily in that R&D. But we can have our use cases reflect our human rights and set that standard for the world. [01:00:05] ISHAN MEHTA: And sorry to go again. But to Victoria's point, there
