Stanford's Rob Reich, Mehran Sahami and Jeremy Weinstein: Where Big Tech Went Wrong


Become a sustaining member of the Commonwealth Club for just ten dollars a month. Join today. Welcome to the virtual program of the Commonwealth Club of California. I'm Levi Sumagaysay, tech reporter for MarketWatch, and your moderator today.

We appreciate your considering a donation to support the Commonwealth Club's work. And if you wish to do so, please click on the blue donate button at the top of the YouTube chat box or visit the club's website at commonwealthclub.org. We also want to remind you to submit questions via the chat next to your screen.

And I will try to get to as many of those as possible later in the program. And now I'm pleased to introduce today's guests, Stanford University professors Rob Reich, Mehran Sahami and Jeremy Weinstein. They are authors of the new book "System Error: Where Big Tech Went Wrong and How We Can Reboot." Rob Reich is the director of Stanford University's Center for Ethics in Society, co-director of the Center on Philanthropy and Civil Society, and associate director of its new Institute for Human-Centered Artificial Intelligence. Mehran Sahami is a professor of computer science at Stanford and a former senior research scientist at Google. And Jeremy Weinstein is a professor of political science at Stanford and former deputy to the U.S.

ambassador to the United Nations, as well as former director for development and democracy on the White House National Security Council staff during the Obama administration. Our guests write in their book that in the era of big tech, groundbreaking technological innovation has given rise to an increasingly efficient society. But they argue that these advances are not without consequence: unbounded technological growth demands control over how we work, how we think, consume and communicate. They say that too many have accepted job-displacing robots, surveillance-based capitalism and biased algorithms as an inexorable cost of innovation, giving a powerful few the reins over our society. They also claim that technologists, the venture capitalists who fund them and the politicians who allow for mostly unregulated growth have

stepped into the seat of power and are prioritizing technological optimization and efficiency over fundamental human values. We are here to discuss the reality of what big tech has wrought and hear their views on how we can chart a new path forward. Welcome, gentlemen. Thanks so much. Thanks for having us. Professors, let's start with algorithms

which affect all of us, whether we know it or not, because they can determine whether we're hired or approved for insurance or other services. They can shape law enforcement's actions toward us and more. In the book, you say we need to push companies to open up their algorithms to independent testing and validation, or outside audits. I've covered tech for a long time, and in my experience, tech companies tend to fight transparency at every turn. So can you talk about what's been tried so far in terms of auditing algorithms and what you think will be successful? Sure. Well, first of all, thanks again, Levi, for having us here.

I think in terms of algorithmic decision making, as you rightly mentioned, more and more consequential decisions in our lives are being made by algorithms, things like whether or not we're extended credit. And so these are the kinds of decisions that in the long term have a big impact on our lives. And what we want to get to is a place where we understand whether or not there is bias in those algorithms and how those kinds of decisions are being made. And I'll just give you one example of something that was done in the past. Amazon.com built an algorithm to screen resumes to determine who to give interviews to, right, another consequential life decision because it affects employment opportunities. And so in building this algorithm, what they did was take a bunch of, you know, previous resumes that they'd gotten and see who was hired, basically creating data for a machine learning algorithm to then predict who had a high chance of being hired based on a resume.

Those would be the people they would want to interview. So when they built the system, what they found was that it had a high degree of gender bias against women. Right. Shocker. And that was based on historical data that was in their system.

And it would do things like down-weight resumes that had the word "women," like a women's soccer team, or down-weight resumes that had the names of all-women's colleges. And so here's a very technologically sophisticated company. They realize that they have this issue and they try to fix it. And what they realize is that they can't get all of the bias out of the system. So eventually they do the right thing. They pull the plug on the system. But it raises two points.

One is, how do we even know which systems are having this kind of auditing done to them to determine if there is even bias in the first place? Right. So you could think of many systems that have been deployed that may actually have bias but have never been tested for it. That's why we call for the need to have these kinds of independent audits to understand what sort of bias there is in the results of the algorithms, whether or not it can be mitigated, and ultimately to have some level of transparency and control that's outside the company as well. The second thing that comes up in thinking about the algorithms is understanding the data that went into them.

So just understanding an algorithm by itself is insufficient if we don't understand the historical record of where that data came from and what sort of biases may be encoded in that data. And so there's a level of understanding, provenance and auditing that also needs to happen with the data that goes into these algorithms that ultimately make decisions. So what are some of the things that have already been tried in terms of asking the companies to agree to outside audits of their algorithms? So in New York, there was this case, for example: one of the council members, a former council member in New York, James Vacca, tried to propose legislation to say, well, we should have algorithmic transparency in the sense that people from the outside should be able to inspect the actual algorithm, the code, for systems that make these kinds of decisions. And the problem there, Levi, is that it doesn't work. A system like that wouldn't work.

And ultimately that legislation was not successful. And the reason why is that just looking at the code for something doesn't tell you what it's really doing in terms of decision making, because it's based on learning parameters from data. So one could say, well, once the learning algorithm has actually tuned all these parameters based on data, why don't we just take a look at it then? But the problem then is that these systems can be so complicated. Some of the current state-of-the-art systems have on the order of tens or hundreds of millions or billions of parameters in them. They are just not at a scale that a human being can look at and understand what's being produced. So what we really need is an audit system that looks not just at the algorithm or the parameters that are in it, but actually looks at the outputs in a systematic way, to be able to measure them against certain kinds of criteria, looking for things like bias, looking for things like fairness, and then to be able to understand at that level what decisions are being made and in what ways they are affecting different populations.
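To make the idea of auditing outputs rather than code concrete, here is a minimal sketch of an outcome-level bias check. It is not from the book or from any company's actual tooling: the screen_resume function is a hypothetical stand-in for the black-box system under audit, the applicant records are synthetic, and the metric shown, comparing selection rates across groups, is just one of several fairness criteria an independent auditor might compute.

```python
# Minimal sketch of an output-level audit of a black-box screening system.
# Everything here is illustrative: screen_resume stands in for the opaque
# model under audit, and the applicants are synthetic test records.
from collections import defaultdict

def screen_resume(applicant):
    # Toy stand-in for the deployed model; a real audit would call the
    # actual system and never need to look at its internals.
    return applicant["years_experience"] >= 5

def selection_rates(applicants, group_key="gender"):
    """Fraction of applicants advanced to interview, per demographic group."""
    advanced, totals = defaultdict(int), defaultdict(int)
    for applicant in applicants:
        group = applicant[group_key]
        totals[group] += 1
        if screen_resume(applicant):
            advanced[group] += 1
    return {group: advanced[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; values well
    below 1.0 (e.g. under 0.8) flag a potential bias problem."""
    return min(rates.values()) / max(rates.values())

# Synthetic audit set: in a real audit this would be a held-out,
# demographically labeled dataset assembled by the independent auditor.
applicants = (
    [{"gender": "women", "years_experience": 3 + (i % 5)} for i in range(100)]
    + [{"gender": "men", "years_experience": 4 + (i % 5)} for i in range(100)]
)

rates = selection_rates(applicants)
print(rates, disparate_impact_ratio(rates))
```

The same check can be run on outputs broken down by any attribute the auditor cares about, which is the point of auditing results rather than reading code or parameters.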

And maybe one thing to just add, Levi, is that we're not yet in a moment where we have mechanisms for oversight of algorithmic decision-making. So you're absolutely right that we're in a dynamic where companies jealously guard the algorithms that they use to power their platforms. This is true also in public sector institutions where algorithms are being deployed.

Those are not visible either, in terms of how they operate or even the very fact that algorithms are being used to make decisions. And so the story of James Vacca and the New York City Council that we tell in the book is an example of just that frontier that we're at, where our democratic institutions and elected officials are beginning to be aware of the ways in which algorithms are affecting our lives, both in the private sector and the public sector, and beginning to create expectations about what it means to provide greater transparency and mechanisms for due process. We think that kind of progress in the policy sphere is some of the low-hanging fruit for tech regulation. You know from what you cover that the most contentious issues of the day are content moderation or antitrust. But some of the low-hanging fruit are things like algorithmic transparency and auditing: creating mechanisms whereby users can understand when algorithms are being used to make critical decisions about their lives, putting users in a position to seek an explanation and a rationale for decisions that have been made about their lives, and then providing rights of due process.

And we see Europe leaning in that direction under its General Data Protection Regulation. And it's time for the U.S. to follow suit. Thank you for that. Professor Weinstein, I do have a question for you, actually, and that is, a common theme in the book is tech moving quickly but government moving slowly.

I want to ask you about automation and how government and society can deal with the fact that some jobs today won't exist tomorrow. Any thoughts? So, first of all, you're exactly right to put your finger on what we call this race between disruption and democracy, and the moment that we're living in now, where we see the paralysis of our political institutions and the polarization in our society. It can feel like democracy is just not up to the job. But the truth is, that's not just a contemporary challenge. We often see, and have seen over the last hundred and fifty years, a process of technological change driven by the private sector that generates either market concentration, as we've seen in the current moment, or a set of societal externalities that come from new technologies. And then it's up to the policymakers to play catch-up. And we need to grapple with that problem.

But let's take automation, the example that you give. This is a slow-moving train. This isn't one of those frontier technologies that hits us over the head and then we have to figure out what to do about it, like when the social media platforms all of a sudden were being manipulated to interfere in our electoral processes. That caught a lot of people by surprise. Automation has been unfolding for decades. Right.

Automation is a slow-moving train. It's not one of those fast-moving trains, but it is also becoming abundantly clear that the impacts of automation on society are not evenly distributed. Right, that there are certain occupational categories, people at certain ends of the income distribution or with certain educational qualifications, who are much more likely to see total job displacement, and then others who are likely to see the nature of their job change over time.

And I think the challenge for us and our policymakers is basically to figure out what are the compensatory and mitigating mechanisms that we need our political institutions to set in place. That is, we have to expect that ultimately our companies are going to pursue the benefits of automation, but we have to figure out what kinds of investments we are going to make in education and retraining. What kinds of investments are we going to make in upskilling? How do we provide incentives for companies to privilege the preservation of human capabilities and not just the wholesale shift over to robot or machine capabilities? These are policy choices to be made by political officials, elected politicians, and we're not yet seeing the kind of action that we need on that front. Along that same line, maybe, Professor Reich, you can weigh in on this. You know, I think we definitely saw the effects of technology, especially during the past year or so, because of the pandemic. And I'm talking specifically about the rise of delivery or its effect on ride-hailing, for example, and sort of the lack of a safety net for those workers, workers that are now in sort of this new employment category because they are part of the gig economy.

You know, in this case, in the case of the gig economy, there are all kinds of different players trying to come up with a solution, right? There are the legislative efforts, there are the efforts in the courts, et cetera. What do you think? Is there a happy medium? First of all, yeah.

Well, you're exactly right about the transformations of the pandemic and the experience of workers, and in fact, so many different parts of our experience: education, work, family. You know, the pandemic is one of those rare events in which everything is really transformed overnight and society is being remade in a variety of ways. And I want to get to the question of gig workers and their particular situation.

But I just want to emphasize here at the start, with the pandemic as the kind of occasion for focusing on gig workers, that one thing we often say is that we started writing this book before the pandemic had set in. And the idea was to write a book about how a small number of big tech companies had really accomplished a kind of power grab over so many aspects of our lives, our lives as individuals and certainly our lives collectively as citizens in a democratic society or members of communities. And the pandemic has revealed that the power of the companies has only increased. We sometimes say that the winners of the pandemic, so to speak, are the big tech companies.

Everyone has become more dependent on them. We're not in person tonight, because we're still being cautious about the pandemic, and now we're using a product that most of us didn't use prior to the pandemic. And that just ripples across so many aspects of our lives. So that initial impulse about scrutinizing the power of big tech, looking at the negative externalities or harms that big tech has imposed, has only increased and magnified during the pandemic. And you pinpointed one of the areas in which that's especially true, the gig worker economy.

And there are lots and lots of interesting things to be said, and that we try to convey in the book, about this. You were exactly right to say there are legislative efforts in California. Of course, there was a combination of an initial ballot proposition, then an attempt at state legislation, and then a counter ballot proposition. And that kind of reveals what we think is more generally true about big tech, namely that we're exiting an era in which we thought that the rise of Silicon Valley would increase human freedom, spread democracy and support the capabilities of all humans, a kind of techno-optimism or even utopianism about the power of being a programmer or the power of working at a tech company. And then in the past five years, even before the pandemic of course, came this big backlash against big tech: it's spreading misinformation and disinformation, it's violating our privacy.

It's destroying jobs and replacing humans with machines, and so on and so forth. And now we think that we're finally at a moment, after this long thirty-year period of tech optimism and then the change to tech pessimism, when we can be a bit more realistic: see the great benefits that big tech is able to provide us while also being wide-eyed and clear about the harms it's imposing.

So let's talk about the gig workers specifically, just very quickly. We have a bunch of things to say about them. It's a great example of how the disruptors of Silicon Valley managed to find a technologically enabled product which employs and compensates very generously a relatively small number of people, those who work at the platforms, DoorDash, Uber, Lyft, et cetera, but then treats as contract workers the very people who are responsible for the actual delivery of the service. And those people, of course, have certain freedoms to work when they wish and when they don't, but lack the basic protections that would attach to any ordinary employee.

And it's a genuinely important social debate then to try to referee whether the kinds of benefits that are on offer outweigh the kinds of harms that are also present. And I'll just say this one last thing about the way this was resolved most recently through the ballot proposition, which gave the companies permission to continue classifying workers as contract workers rather than as employees. We think it was a certain unlevel playing field, in that anyone in California who opened their app during the run-up to the election would get what is effectively a political advertisement from Uber or Lyft saying, here is how we think it's important to vote on this ballot proposition. And that's a way in which the market power of the companies has been transformed into political power, in a way that's true across other kinds of tech companies as well.

And so I am confident, and I think Jeremy and Mehran are confident too, that this isn't the last we've heard of this particular debate in California or elsewhere. But the stakes of the debate are huge and the livelihoods of people are on the line. I do want to sort of keep going along those same lines and talk about balancing the trade-offs of convenience brought about by tech versus the common good, still speaking about the gig economy, but also about other things. You know, some people don't think twice about the overall costs of being able to satisfy their cravings for ice cream or French fries at any given moment, or of ordering same-day delivery of whatever their heart desires, or of using Gmail and Facebook for free.

And neither do some investors. Right. That is the system that we live in. So let's talk about stakeholder capitalism and corporate responsibility. In the book, you talk about giving greater voice in companies to those who are likely to be hurt by technological change. How can that be achieved? And I will ask any one of you who wants to weigh in.

Or maybe to start, I think if you look at Proposition 22 in California, which was what reclassified the gig economy workers as contractors, with a specific carve-out for the people who are the drivers and delivery operators at DoorDash, Lyft and Uber, what you find is that by classifying them as contractors, right, they're not eligible for employee benefits. They wouldn't be eligible for the kinds of things like, say, being able to participate in a stock participation plan in the company, so that some of the wealth creation in that company could be shared. Perhaps even more significantly, they wouldn't be eligible for benefits.

And so things like medical services, the medical needs they have, now become something that the state needs to provide through other kinds of programs. And that's a perfect example of a sort of externality: these companies, which would normally have this responsibility to their workers, have now essentially abdicated it, or not abdicated, they've specifically removed that responsibility and given it to the state as something it needs to provide, while they still get to generate the benefits of the stock price increases and the revenues from their business. And so that's an example of where you could have more power in the hands of workers in the company if these kinds of carve-outs were not allowed.

And we're seeing the next chapter of that unfold, as Proposition 22 has now been declared unconstitutional, to see whether or not these gig economy workers actually get more rights within the company. And if they get more rights, what kind of power will they then have to be able to, for example, take collective action with respect to what actions the company wants to take? So that's just one example. In the broader scheme of things, one of the things we talk about in the book is how companies get funded and the very clear incentives the funders have and then the founders have in terms of pushing these companies forward. A lot of it is focused on looking at particular metrics that they're trying to optimize over time. So for some of the platforms, that might be things like time spent by users on the platform. For engagement, it might be things like clicks on ads. It might be things like the friend connections they generate. But what happens is that those things being measured oftentimes aren't a good proxy for what we want to see in the world. More people watching videos doesn't necessarily mean those people are happier watching videos. It could actually mean that they're wasting a bunch of their time.

I have two teenagers and I see them spending a lot of time on videos. That's probably not particularly helpful for them. But when the company chooses to optimize one metric, something that's driving their revenue, without thinking about what the societal consequences are, or not paying enough attention to those societal consequences, that's the place where we see these power imbalances, and that's the place where we believe that some form of regulation is necessary to make that balance more even. And maybe just to add to this, Levi: for us in the book, you know, we can look at this through the lens of any particular technology, but the problems are systemic.

That is, there is a set of tradeoffs where decisions are being made by those who are designing technology, those who are financing technology, those who run companies. And those tradeoffs are being weighed and made behind closed doors and without a voice for our society more broadly. And the gig worker example sort of makes this abundantly clear. You know, my mother-in-law uses ride-hailing apps.

That's a way that she gets around without the ability to drive. But she worries about the precarious situation that it puts drivers in, and who is responsible for the precarity that they confront. And as Mehran described, eventually that falls on the state. If people are disabled, if they don't have access to health care, if they find themselves injured in the workplace, and when they're treated as contractors, it falls on the state and it falls on all of us. And I think part of the central argument of the book is that these tradeoffs that are being weighed right now inside tech companies have got to be made explicit and transparent. And we need people inside companies thinking about these tradeoffs in a more transparent way.

That is, workers inside companies. And we're seeing workers in Silicon Valley begin to exercise their agency to say, I'm not comfortable with that end use of a product that I'm designing, or I'm concerned about the potential harms of a product. We saw this on display with the Facebook Files story in The Wall Street Journal. You can also think about structural changes to companies. That is, who has a board seat, who has a voice? What is the role of workers in the direction of companies, but also what is the role of affected communities in companies? And ultimately, some of the legislation that we see being considered in Washington that's thinking about new corporate forms and new approaches to corporate governance is all about that. And then, of course, there's the role of our political institutions

in attempting to mitigate these harms. Mehran used the word externalities to describe these harms: when the rational pursuit of private interest, which is where you started with the question, generates social harms, we don't look to individuals to fix those problems. We recognize that individuals may consume a free product and give up all of their data to these companies.

But if we think that there are risks associated with the accumulation of power that comes with control over all of this private data, and we think that citizens are basically not in a position to advocate for their own interests because the potential uses of their data are hidden, that's when we take action through a regulatory mechanism. That's the creation of the California Consumer Privacy Act or the creation of federal privacy legislation. So these are the three vehicles, the workers themselves, new structures of corporate governance, and ultimately the role of our democratic institutions in mitigating harms, by which these tradeoffs that have been weighed behind the scenes by individuals designing technologies and financing technologies are brought out to the fore.

And where the harms that we're confronting now in society are engaged in a much more serious and deliberate way. Levi, can I add just one quick thing here? Because I think for the vast majority of people who haven't yet read the book, and of course we hope those people do, I want to say something perhaps unexpected on behalf of tech companies, or on behalf of the people who work in tech companies, because basically we've been quite critical. The book is in many respects critical; the subtitle, Where Big Tech Went Wrong, makes it obvious. But I want to be specific about a kind of criticism we don't make, one that we see all the time around us, that we think we should see less of and would want to avoid. And that's the idea of trying to get good founders or better people to work in tech companies and focusing on whether,

well, is Mark Zuckerberg a good person or a bad person, or is Elizabeth Holmes a good person or a bad person? Those are stories that grab headlines, but they distract us from the broader structural, systemic issue. What's interesting about the Elizabeth Holmes story, about Theranos? Well, she's alleged to have lied to, cheated and deceived her investors, her employees and the public.

Is there anything morally defensible about lying, cheating and stealing? No. It's basically a morally uninteresting case. If she did it, she should be prosecuted. Maybe there are some legal technicalities; it's a slightly more complex case. But the issue that confronts us as a society, as you started us off, Levi, is about the tradeoffs of different values. And it's not about good people and bad people, or a single individual who is a psychopath and another individual who is a moral saint.

And when we confront the structure of the sector, the incentives of how big tech works, the optimization mindset, the appetite for scale, the venture capital and the regulatory indifference, at least for many decades, in Washington, D.C., that's when I think we can understand what's happening. Readers don't have to understand the details of deep learning. What you have to understand is the broader system in which big tech has arisen, in which the Davids have become the Goliaths and disruption is prized above all.

And then to think about these tradeoffs. Right. You know, that is a perfect segue to my next question, which is that all of you as professors at Stanford teach would-be tech founders and entrepreneurs and future venture capitalists, all of whom play such a huge role in the world around us, an outsized role now, as we've seen. Can one of you talk about how important diversity then becomes as you teach the students at Stanford? I'm so glad you brought that point up, because it's a super important one. And I think, Stanford being the seedbed of Silicon Valley, being the place where a lot of the founders of these companies come from, it behooves us to bring more insight around ethics into that education, and especially to bring more diversity into that education, both in terms of who gets involved in building these kinds of companies and who gets the knowledge around computer science.

What we really need are more voices involved in the tech sector, because what happens is that the people who are in the technology sector, which right now is demographically a fairly narrow slice, tend to build technology for other people like them. And so if we want to see a broader range of solutions to problems and, more importantly, a broader range of perspectives brought to existing problems, we need to have a more diverse set of people involved. And from our standpoint, that's certainly something we try to do in the book, and it's something we try to do with our teaching: to bring in a diverse set of voices to talk about these issues.

So, for example, in the class that we teach at Stanford, we have panel discussions where we bring in lots of different viewpoints and also talk about research work that different people have done, as a way of lifting up the work of many people who came before us and showing the issues that exist in the field. It's not a surprise, for example, that much of the research on the issues around bias in algorithmic decision-making was actually pioneered by Black women. Why?

Because they tend to be the most impacted by the bias in those algorithms. And so they have a vested interest in the results that they find. But those results impact all of us. So it becomes important to bring out those threads.

And those are some of the things we call out in the book. But as we think about that going forward as an educational institution, right, where we teach at Stanford, we also view it as an evolutionary process. Right. Education needs to evolve, needs to bring more discussion of ethics and value tradeoffs into the full curriculum. And so in terms of computer science courses, that's one of the things we're doing now: embedding more modules on ethics throughout the entire computer science curriculum, in addition to having standalone classes.

But it's to understand that there is a need for this greater diversity, and there is a definite push to try to bring more people into the field so that we get better solutions for everyone in the long term. And, you know, this question is open to any one of you: how would you rate big tech in terms of what it's done on diversity so far, what it's trying to do, what it says it's trying to do, and whether it's succeeding? I'll start there with a general perspective, which is that I'd give big tech a terrible grade, you know, if not failing, near failing, to put it into academic grading language. And the reason is the following. It has to do with what was imagined to be one of the extraordinary benefits of big tech, or of working in big tech, which is that there are relatively few professions where you can acquire a bunch of skills, in this case programming skills, technical skills,

and then at an unbelievably young age, you know, twenty-one-year-old dropout founders or twenty-three-year-old programmers are hired to work in these companies.

You know, the age bias of Silicon Valley is also quite well known, that the Valley skews young. If you're over 40, you're practically a grandparent at the company. And when you get a bunch of 21-year-olds who can basically code in their pajamas overnight and then roll out their product to millions or even billions of people, you know, famously, Instagram was a company of 13 employees when it was sold for a billion dollars to Facebook, you can't be aware, a small number of people, a small number of unrepresentative people, of the impact that your product is going to have on millions of people or billions of people.

And in that respect, the failure of the companies to even try to access the voices and perspectives of the people who are impacted by what they're doing is a kind of mirror image of the great benefit of being a coder, a programmer, namely, that you don't need to have, you know, a lot of maturity in the field in terms of your age. You don't need to climb a corporate ladder. You just have these incredible coding skills.

And then from nothing more than some software, you can develop a product that can change the world. And then maybe what you have in mind, Levi, is the idea of diversity dashboards and the hiring efforts within companies to diversify them, as well as in venture capital. And there, of course, big tech has awakened somewhat recently to the importance of this, and they're making halting steps forward. One thing I'd point to, a grassroots sign of some initial promise, is that the computer science major at Stanford, where, you know, Mehran is in the computer science department, is the largest major on campus. It's the largest major for men, and it's also the largest major for women.

And so the number of women coming into the field of computer science has been growing significantly over the past couple of years at Stanford. And if that continues over the next decade, at least with respect to gender, we should see some significant improvements in the coming years. But one thing I'd add, Levi, is that for that progress to be realized as a transformation of the tech sector, and not just a change in the pipeline, the companies that are recruiting this talent need to adapt and evolve their internal cultures to really make space for those voices. And those voices will often be discordant. Those voices will often challenge particular outcomes, like the decision of some of the social media platforms to privilege free speech over the potential harms to groups that have been targeted online. You're going to get voices, as you diversify tech, that really challenge how different values are being weighed. And I think some of the challenges that we're seeing in the current moment are that these voices raising uncomfortable questions are not finding a hospitable environment for themselves in tech. Right.

When Timnit Gebru raises concerns within Google, leading the ethics team, and then sees that she needs to take her concerns outside the company in order to be heard, or when women in particular raise concerns about how sexual harassment has been handled and the nondisclosure agreements that have been agreed to, these are cultural changes that have to be made. And we know it's possible. We saw that change was possible in the context of Uber when Susan Fowler raised her concerns.

But we risk being in a world in which we develop a much more diverse pipeline, but those individuals don't find a place for themselves in the ecosystem because the companies don't create space for their voices, the venture capitalists don't align capital behind a more diverse set of founders, and nothing changes in the ecosystem. And we have to guard against that. And diversity dashboards aren't going to get us there. Right. We have to take a hard look at what's going wrong, why retention of people of color continues to be a problem in tech, and why it is that capital tends to go overwhelmingly from venture capitalists to people who look like the venture capitalists who have capital.

I am so glad you brought that up. We do have a question from the audience for Professor Sahami, and that is, what do you think about what happened at Google with the firing of Timnit Gebru, who had raised questions about the artificial intelligence programs at Google? You know, I hate to put you on the spot, but what do you think about how your former employer has handled that situation? Yeah, honestly, I think they made a terrible mistake. They just handled the situation horribly, given that Timnit is a well-respected A.I. researcher.

She wrote a paper that was critical of some of the systems they had. And rather than letting that paper be published as is done in the scientific community, right, the paper went through a peer review process, it was accepted by other scientists as being a contribution to the field, and then if they had something to say in response, if they thought there were shortcomings in that paper, they should have done what scientists do, which is you write a follow-up paper, you make your points, you make your arguments, you have it be peer reviewed. There is a debate that takes place in the scientific community that is healthy, and that is how science progresses.

And I think they short-circuited that process by basically saying, we just don't want this paper out there. And it was a huge mistake. And they're facing the fallout from it. There are people who've left the company as a result. There are certainly students we've talked to who don't want to go to the company because they see what has happened as a result. And so there is a lot of discussion around whether or not these groups that exist in some of these companies to do responsible innovation, for example, are performative or not. Right.

There are other companies that have done a better job with it. And I think that as we see the fallout from this situation continue, it puts other companies on notice to say, look, there actually is a process for this. It's called science and it's called peer review.

And we need to be able to allow voices to independently raise concerns. And if you want to take these issues seriously inside a company, you have to be committed to that. You can't have your own process within the company that short-circuits publication because you don't like what it says or you have a different opinion.

So if I can just add one small thing beyond the case of Google's ethics team and Timnit Gebru: over the past bunch of years, a lot of big tech companies have hired people to do ethics work, on responsible innovation teams, and to find ways to surface concerns from within the tech companies about the very products that they're developing. And for me, the bigger lesson here is what happens when even those people, acting in good faith, surface those concerns but are then seen as inconvenient from the standpoint of someone else upstairs in the company.

And those voices get drowned out or those people have no power. One lesson for me of The Wall Street Journal's Facebook Files series from the last week or the week before was that the company's internal research showed and documented a whole variety of harms across a whole variety of products. It just turns out those people didn't have any power. And you can hire people to do ethics, but if you don't even allow the papers to be published and you don't give them power within the company, then the suspicion from the outside is that this looks like a lot of window dressing.

And with that, I do want to take another audience question. You know, we were talking about the effects of big tech, but the question is about social media, and, of course, that could be a whole panel unto itself. But the question is, what are your concerns about social media, not only now, which I guess are very well documented, but five years from now? What do you see as the most important concerns about social media? We have an entire chapter in the book about social media and the power of these extraordinary platforms that allow user-generated content to be surfaced and then distributed. There's no question that the social media companies have produced a considerable amount of good, and we also are aware of some of the harms. I'll give you my own sense.

The way we communicate this in the book is that there's a value of freedom of expression, and it's important for social media companies, for any company, to put some weight on the scale of freedom of expression. And in an environment of user-generated content, the creativity that comes from a vast array of human beings, you're going to get good content and bad content, or at least content that people disagree about. But then there's also the value of individual dignity and not being the target of hate speech, or being drummed off the platform and, you know, hounded by a social media mob. Well, that's in some respects an example of freedom of expression, but it comes into tension with the dignity that all users ought to have in using a particular product or inhabiting the civic square of a social media space. And finally, there's, of course, the great value of democracy, which depends upon a certain type of healthy information ecosystem, where, once again, the value of freedom of expression allows for the creation of misinformation and disinformation.

So we have to find a way to balance these three things. And our own view in the book is that we ought not let all of those decisions be made by a tiny number of people inside the company. Or to put it more critically, we don't need any more apology tours from the CEOs or executives of social media companies saying, OK, we understand the problems, our best people are working really hard on it. Now it's time to put some power outside the company. But one last thing here, just because you invited us to think about five years from now, not just now.

The book ends with a short description of a new frontier technology, which insiders in the tech world will know about but many people won't. These are large language models, the most famous of which is called GPT-3, by a company called OpenAI, with extremely powerful computing behind them, which allow anyone to give just a short input of text, you know, "write an article about big tech and the new book by the three Stanford professors called System Error, by Levi at MarketWatch," and hit click.

And you can get a five-paragraph output that is likely going to be quite plausible, in the sense of thinking this thing is actually a reasonable article. So in a world in which these language models, like GPT-3, are cheap and widely available, we've just put on steroids all of the existing problems of social media, because now you can do misinformation, disinformation and hate speech through a language model that makes it simple and cheap to spit out all kinds of text.
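To give a feel for how cheap this already is, here is a minimal sketch that generates several plausible-looking continuations from a single prompt. It uses the openly available GPT-2 model via the Hugging Face transformers library as a small stand-in for a large model like GPT-3, and the prompt is just an example, not anything from the book.

```python
# Minimal sketch: many plausible-looking paragraphs from one short prompt.
# GPT-2 via Hugging Face transformers is used here as a small, freely
# available stand-in for a much larger model like GPT-3.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # reproducible sampling

prompt = "Write an article about big tech and the new book System Error:"

# Sampling a handful of ~200-token continuations takes seconds on a laptop,
# which is the point: text at this scale is essentially free to produce.
outputs = generator(
    prompt,
    max_length=200,
    do_sample=True,
    num_return_sequences=5,
)

for i, sample in enumerate(outputs):
    print(f"--- sample {i} ---")
    print(sample["generated_text"])
```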

We're in an accelerating world of technology, and this is a frontier which I think we have to grapple with sooner rather than later, as these technologies develop. There's basically an arms race among the large tech companies to develop the most powerful model right now. Professor Sahami, you wanted to say something? Yeah, the thing I worry about five years from now, at some fundamental level, and it's because we already see signs of this, is the corrosion of what truth is, and that we all live in our own little world that is mediated by social media and shows us our own version of what we believe is truth and reinforces the myths and mistruths we want to believe are true. And so we don't have common ground and common understanding for things like our democracy to work, to be able to carry on conversations with people on the other side of the aisle or the other side of the country or wherever the case may be, because we have our own notions of what reality is. And when there isn't sort of an objective, agreed-upon reality that is reinforced by the information we get, then we are more likely to retreat into our own corners, to engage in tribalism and to not try to come to some common understanding, because we just don't have a notion of what someone else believes or why. And, you know, this is actually one of the assignments we give in our class: we create a simulation of a little social network. And in that social network, we have some users that skew slightly to the left and slightly to the right.

And we see what that social network does over time as it tries to optimize for things like click-through rates. And what we find is that those two groups of people basically separate. They stop sharing information. They stop seeing information that the other might be interested in. And over time, they just get their own, you know, they're basically islands. And if we think about that playing out in the long term, as a real corrosion of what it means for something to be true, then we get into a difficult situation.
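For readers who want to see that dynamic rather than take it on faith, here is a minimal sketch of the kind of toy simulation being described. It is not the actual course assignment: the click model, the numbers, and the feed logic are all invented for illustration, with the feed simply showing each user whichever side of content has the higher estimated click-through rate for that user.

```python
# Minimal toy simulation: an engagement-optimized feed drives two groups apart.
# All parameters and the click model are invented for illustration.
import random

random.seed(0)
SIDES = ["left", "right"]

def make_user(lean):
    # Each user has a lean plus the feed's smoothed running estimate of how
    # often they click content from each side (starts out neutral).
    return {"lean": lean,
            "shown": {s: 1 for s in SIDES},
            "clicked": {s: 0.5 for s in SIDES}}

def click_prob(user, side):
    # Invented behavior: users click a bit more often on like-minded posts.
    return 0.6 if side == user["lean"] else 0.4

def serve_one_post(user):
    # Engagement optimization: show the side with the higher estimated CTR.
    est = {s: user["clicked"][s] / user["shown"][s] for s in SIDES}
    side = max(SIDES, key=lambda s: est[s])
    user["shown"][side] += 1
    if random.random() < click_prob(user, side):
        user["clicked"][side] += 1

users = [make_user("left") for _ in range(50)] + [make_user("right") for _ in range(50)]

for _ in range(200):          # 200 feed impressions per user
    for user in users:
        serve_one_post(user)

other = {"left": "right", "right": "left"}
cross = sum(user["shown"][other[user["lean"]]] for user in users)
total = sum(user["shown"]["left"] + user["shown"]["right"] for user in users)
print(f"share of impressions from the other side: {cross / total:.2%}")
# This share shrinks toward zero as each group settles onto its own island.
```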

But I don't want to get to hopelessness. I think that's the kind of thing where we need some strong safeguards in place now to prevent this from happening. I do want to move on to something else. Professor Weinstein, I want to go back to the role of government in all of this. One of the things that you all bring up in the book as a solution to what big tech has wrought is called adaptive regulation. Can you explain what that is and why it's important? So thanks, Levi, for coming back to questions about government, because, you know, at the core of our book is an argument that governing a society transformed by technology is one of the existential challenges for democracy at the present moment.

And that ultimately, like climate change, figuring out how our political institutions are going to get a handle on these externalities and navigate and referee these trade-offs is something that we absolutely need to do. And in painting the picture historically of the relationship between innovation in the private sector and government response, we talk about the slow pace of response. Right. This race between disruption and democracy, with democracy coming along 20 or 30 years after the harmful effects are realized with a big, booming response in terms of regulation, but then being stuck, with your feet in the foundation stones, unable to respond to the next new development in technology. And we've got to break out of that.

And part of breaking out of that is, on the one hand, putting our government in a position where it's able to attract first-rate technical knowledge and know-how, because we have to recognize that in the present moment, lacking that know-how among our elected officials and staff in Washington, and even in our state capitals, means that the major providers of technical know-how to elected officials are lobbyists paid by companies. And so we need to balance that understanding of what it means for society, or what automation's effects are going to be on the workforce, against knowledge and know-how that comes either from civil society, which represents a different set of viewpoints, or from a staff that's employed with the kind of technical knowledge and know-how to make these judgments. But knowledge and know-how isn't enough, because ultimately we need to be in a position to respond and experiment with our regulation and with our policy in the same way that tech companies are regularly trying out new products and testing them and experimenting to see their impact in the world. We have a good model, honestly, of what this looks like with the rollout of self-driving cars in the United States. We didn't make a decision when self-driving car technology was developed to roll it out at scale for billions of people, to put self-driving cars on the road everywhere and just see how people behave. No. In fact, we recognized that the potential risks of these technologies were sufficiently high that we were going to need to try them on the roads under carefully delimited conditions, with significant oversight, to discover a set of things that are deeply disturbing about human behavior: that people are not willing to follow the guidance that they're given to sit in the front seat and not in the back seat.

They're not keeping a hand on the wheel. They're actually opening up their video player to stream something from Netflix. And then we're seeing the kinds of effects of these new technologies.

And then the companies developing these new technologies need to respond to those critical dynamics of human-machine interaction that clearly are not optimized for in the design of self-driving cars at the current moment. That's an example of adaptive regulation. And Nicole Wong, who formerly worked for big tech companies and then was deputy chief technology officer of the United States, I served alongside her in

the Obama administration, has described this move towards adaptive regulation through the lens of the slow food movement. We need a slow food movement for tech, right? One that takes seriously the development of products, the testing of those products, and the effort to anticipate and design for potential social consequences before those technologies are scaled to a level at which those social effects are enormous and cause harm to things that we really care about: fairness, justice, democracy, the well-being of individuals in our society. And so I think it's possible. We see models of this kind of adaptive regulation, in particular around fintech products, in places like Taiwan and the United Kingdom.

But it requires a different orientation. We need to prepare ourselves for a moment where there's not one piece of legislation that comes along to fix all the problems of big tech. That's a silver bullet that doesn't exist. What we really need is a government that's capable of working hand in hand with companies to identify new technologies that might have social effects and potential harms, to test those technologies in real-world settings, to experiment with different regulatory models, and then to attempt to bring together new product design with regulatory models that can mitigate these harms. And it's possible; we describe in chapter eight of the book models of where this is being done around the world. And I think the clearest example is the way in which we're approaching self-driving cars so carefully, so deliberately, and through a combination of innovation led by the tech sector but also the guardrails put in

place by our political institutions to ensure that these new technologies don't cause harm on the roads. Along those lines, I do want to ask you about, you know, the book talks about the importance of having technologists at the table when it comes to tech policy, but also that politicians themselves should be knowledgeable enough to challenge them when needed. Right. What are your thoughts about, sort of, you know, again, having covered tech for a long time, there's been sort of a revolving door of technologists who then work in government. And in some cases, like in the case of Nicole Wong, I think we see that her intentions seem to be in the right place. But at the same time, you know, there are former technologists who used to work for other companies that are now in positions of lobbying the government on certain issues.

How do you strike that balance and make sure that we're still thinking about the common good here? I mean, it's a great question. And the revolving door between our private sector and our public sector is not unique to technology. Right? It's a challenge that afflicts the financial industry and banking, you know, the regulation of airlines, and other major parts of the private sector.

What we really need to do is draw a distinction between the public interest and the private interest. And that's our challenge inside the federal government and state governments when we attract technologists in. And, of course, this is an enormous challenge, because the public sector doesn't pay what the private sector does. And with the competition for tech talent, you know, we're experimenting now with all of these temporary arrangements where you can pull people into government for three months or six months at a time. But it needs to be clear that when you're coming into government, whether in the legislative branch or in the executive branch, you are serving a different interest than the interests of companies.

You're serving something that is the public interest. Ultimately, your responsibility is not to the users of a product or a community that's built around a particular technology, but to society writ large. And ultimately, democracy is the technology that we have for attempting to generate outcomes that are beneficial not for particularly privileged individuals, those who are holders of wealth, those who because of the color of their skin receive beneficial treatment within our society, but outcomes that are really designed to cater to society writ large and to forge that kind of consensus. And I think if we simply assume that having technologists in government is enough, we risk deluding ourselves, because ultimately the challenge is to build a community of technologists who are thinking about how you design for the public interest and not just the private interest. And that means creating pathways into the federal government and into state governments that aren't momentary, that are really thinking about how we capitalize on this extraordinary energy and innovation in tech, but give people the opportunity to bring those talents and capabilities to the table, not to squeeze the next dollar out of potentially paying users, but to think about how we design social systems in society that give people equal access to opportunity, give people access to health care of equivalent quality, right, give people an opportunity to close the achievement gaps in education.

We need technologists in all of those spaces, not just to regulate tech, not just to think about the potential harms of tech, but also to harness tech to achieve these broader social ends. I want to move on; I think we have time for just a couple more questions. And I want to ask Professor Reich, what are your thoughts about the antitrust actions in the U.S. versus the antitrust actions in Europe, and which do you think is working? Yeah. Good. Well, you know, lots of us in the U.S.

have taken note of the fact that Lina Khan has gone off to become the head of the FTC. And Tim Wu, a law professor who has also argued on behalf of antitrust action against big tech, has now gone to D.C. as well. And it gives us an opportunity, just as you invited me to, to compare the antitrust approaches of the European Union against the United States. So, first of all, I'd just say that one thing that's been important in the U.S. is to try to reconfigure the legal arguments that would be the basis of antitrust action and actually hearken back to an earlier era in the U.S. And here I'm simply rehearsing some of the arguments from people like Tim Wu or Lina Khan, in which, you know, a more recent school of thought has tried to say that the basis of antitrust action comes when a lack of competition and a concentration of power in the marketplace give companies an unfair advantage in charging prices that are higher than they would be otherwise if there were greater competition.

But, of course, in the tech sector, many of the so-called products we use are free. And so there doesn't seem to be a harm to people in their capacity as consumers, and therefore there's no basis for an antitrust action. Well, that's had to be reconfigured. And so the U.S. is in the early stages of trying to see whether or not there's a legal case and a jurisprudential case that will survive in the courts, that will allow an antitrust action to proceed on lines other than focusing on the price problem for a consumer. And that has to do with concentrated power and a smaller amount of innovation, and the ways in which, as we've just been discussing for the past 10 minutes, great market power can be easily converted into political power, changing the very rules of the system as it goes forward. So we're in the early days of antitrust action here; one would have to point to the European Union and say they are at least several years ahead of the United States and have had some initial early victories.

Nothing so compelling as breaking companies up, but they've certainly caused lots of headaches for the big tech companies and perhaps even begun to affect the ways in which mergers and acquisitions have been working. In the book, we talk a bit about antitrust and we try to look back in time at the example of antitrust action against Microsoft as a way of trying to make some predictions about what we're likely to see here in the United States. And so I'll just mention a few things there by way of hopefully piquing listeners' interest, which is that you have to be clear about what problem you're trying to solve if you throw in, so to speak, with antitrust. When we speak about misinformation, disinformation and hate speech, if you think that breaking up Facebook or Google or Amazon is somehow going to solve or lessen that problem, that seems to us a big mistake.

And the kinds of things antitrust is good for are various ways of avoiding the conversion of market power into political power and increasing the opportunity for competition within the marketplace, so that startups and, you know, alternatives to the large entrenched powers might come into play. But antitrust is not a silver-bullet solution for all of the various problems of big tech, and we shouldn't pretend that it is. Anyone else want to weigh in on that? And speaking of breaking companies up, what do you think: should Facebook be broken up? Should Google be broken up? Should Amazon be broken up? I think, you know, to follow up on what Rob was saying, that's kind of the hope: people think that just by bringing antitrust action and breaking up a company, you're somehow going to solve these problems.

But imagine for a moment if Facebook was broken up into five mini Facebooks. What problems are you really solving there? And, you know, there might be a path forward there if you were to also guarantee things like data portability and interoperability between platforms, because then you could actually have a social network that was one network in theory but spanned multiple platforms. And then maybe people might have some choice as to which network they wanted to use, based on the kinds of guarantees it provided for privacy or interactivity or whatever the case may be. But just breaking up the company alone doesn't solve the problem. And that's why it's important to understand what the lever of antitrust really is and what it gets you.

There are important things that it can do, even the threat of antitrust without actually breaking up a company. And that's something we explore, as Rob alluded to, in the case of Microsoft. You know, part of the reason you got big tech players in the 2000s was because in the 90s you had the threat of an antitrust action against Microsoft, which ultimately didn't break up the company. But it can easily be argued that it made it much less aggressive in terms of the competitive positions it took against other companies or against acquisitions and mergers that would have just swallowed up competitors. And so even the threat of antitrust might create a more competitive landscape, so that big tech doesn't just continue to dominate among five or six big players. But if we really want to think about greater competition and the greater choice that might bring, we also have to think about the downstream interoperability issues or other kinds of technical and social issues that come up, which really are the glue when we think about what it means to have a network across multiple companies or information search across multiple companies. I do want to, I think we just have time for one last question, Professor Weinstein.

I will give this one to you. It's from the audience, and it's, I think, a very good question and a really good one to end on. And that is:

Do you agree that informational tech, you know, IT and what it has built, the Internet and all of big tech, fits well with capitalism, and should it be regulated or treated as a utility? So it absolutely fits well with capitalism, because capitalism is doing extraordinarily well out of the emergence of these new technologies. Right. And part of what we have to navigate as we think about potential regulatory responses is how we create the conditions in the United States and around the world for continued investment in R&D, the continued attraction of top talent to work in the United States and to work in this sector. And of course, the concern that is often raised about the notion of platforms as public utilities is that ultimately you may turn these into slow and sclerotic kinds of institutions that can't actually innovate to stay on the technological frontier. In the book, we ultimately don't end up with an argument for a public utility framework for regulating the large media platforms.

Instead, we think that basically we need to be in a world where we have two things that come together. On the one hand, we have the possibility of meaningful competition in the development of platforms and in the development of products that can be responsive to different preferences from users.

And those might be preferences about privacy, or those might be preferences about the kind of information you want to be exposed to; people might have different preferences about the kind of content moderation that they want on their platform. And the challenge that we've seen with the limited policing of mergers and acquisitions is that each of the large companies has been in a position to squeeze out of the market some of the biggest potential competitors that might have offered consumer choice to citizens. And so we're going to need an antitrust approach that doesn't try to break up the big companies as one single silver bullet for solving all of these problems, but recognizes the value of healthy competition in the sector and uses the policing of mergers and acquisitions, as Mehran described, to create the space for user choice.

But at the same time, we do need an appropriate role for government. The role for government isn't to institutionalize and consolidate power in a single platform as the only platform that everyone should use, but instead to approach these kinds of harms that we've talked about. And when it comes to the social media platforms, the most evident harms are to the quality of our democracy, to the possibility of deliberation and debate around factual and valid information. The role is to use the power of the regulatory state to begin to bring out into the open the content moderation decisions that are being made by the platforms, and to make deliberate determinations, by both parties operating in unison, about the conditions under which that power should be vested in the companies versus vested in our institutions of government.

And to give citizens an opportunity to weigh in.

2021-10-06
