Thomas L. Friedman and James Manyika, Dialogues on Technology and Society, Episode 1: AI


(lively music) (people chattering distantly) - [Tom] James. - Hey Tom. - [Tom] Really good to see you. - [James] Welcome. - [Tom] Yeah, thanks so much.

- [James] We haven't done this in a while. - [Tom] Super, well, let's do it. (lively music continues) - Well, Tom, I'm glad to see you. I think the last time we did this was before COVID. What's been on your mind? I mean, a lot has happened.

What's been on your mind? - Well, James, I was thinking back the other day to the very first column I wrote about COVID, which was, I believe, March of 2020, right after it emerged. And I think the lead of the column, or near the top, was that I just have this instinct that the world is gonna be divided going forward between BC and AC, before COVID and after COVID. - Oh, you thought that then? - Yeah, and it was the very first column I wrote, not really knowing how big this was going to be.

If you look at the history of the world since the early 1990s, we've been connecting more and more nodes around the world through telecommunications, travel, the internet, et cetera. And at the same time, though, we've been removing the buffers. And when you connect everybody and, at the same time, remove the buffers that can manage the flow and prevent surges, the world can get very fragile.

- That's interesting, that's interesting. - What have you been up to? - What's been on my mind since the last time we did this, I think it's two kinds of things. Let me describe each one. I think the first one is I found myself, you know, thinking and doing research, but also involved in all these things, looking at the future of capitalism. - Why? What prompted that? - So if you take a long view, it's been an extraordinarily powerful mechanism to drive economic growth and prosperity.

But, and this is where maybe it's analogous to your buffers, it hasn't been great for everyone and everywhere. So yeah, sure, if you are an entrepreneur, an innovator, a technologist, a company, a corporation, a well-educated, high-skilled worker, this has been great. But if you look, especially in the last 20 years, at everybody else, you know, it's actually been quite tough.

So, high levels of inequality, much lower economic and social mobility, these massive inequalities of people and place. If you look across America, the differences, in fact, right down to the county level, are quite extraordinary. So I've been thinking a lot about these questions: as successful as it's been, where has it failed? I think there have been two particular kinds of failures that I've been interested in. One is the question around inequality and economic wellbeing for everybody. The other has been capitalism's, I think, failure to deal with climate change. I think we've never quite been able to price carbon and the things we do, these kinds of externalities, into our capitalistic system.

So I think we're gonna have to figure out a way to evolve how the model works so it still delivers on all these amazing things that it's always done, but at the same time, on one hand, brings along everybody, but also helps us grapple with this massive externality called climate change. So this is one topic that's been on my mind, but I want to come back to the other thing, which is artificial intelligence. I've been sort of coming back to my roots, as it were. In fact, one of the things that happened, Tom, is that about two years ago, this was just before COVID, the American Academy of Arts and Sciences had asked me to guest edit and kind of conceive and put together a volume of "Daedalus," that's the Academy's journal, where the whole volume would be dedicated to AI and society, so- - And you got to choose the authors? - Oh, that was fun, yeah. So that allowed me to immerse myself back in again, fully, on where are we with AI technology and its development.

How's it affecting society? And the last time the Academy had done this was in 1988. - Whoa. - That was the last time they had a whole volume devoted to artificial intelligence. So yes, if you look in the volume, there are 25 essays. They range from where are we with the development of artificial intelligence, all the way to, how does it affect the economy? Questions around jobs.

How does it affect these issues of ethics and bias? How do we think about using it? How does it affect great power competition and some of the geopolitics that come with it? And then how does it affect institutions? So all of these questions were what we were grappling with in the volume. And so that's why AI has been on my mind a lot, even more so than it has always been. (lively music) - We've been friends for a long time. I got to know you when you were running, I thought you had the best job in the world, running the McKinsey Global Institute. - I did. - Think about a problem, assemble a team to figure out what's going on.

And you guys were great, you know, thought partners for me. Why did you choose to leave that job and come to Google? Obviously a great job here too, but what was in your mind? - It's called technology and society. And in some ways, it's a culmination of all the things I've ever cared about, right? Technology's impact on the economy and society. In some ways, it's perfect for me. But I think I've always had this fundamental interest in the amazing possibilities of artificial intelligence and robotics, and of technology more generally. So if you look at the through line of what I've done since school, since graduate school, it's a combination of doing research in technology, working with technologists, working with innovators.

But the focus is on all the future-forward things that Google and Alphabet are investing in. So think AI, think compute infrastructure. Think about how all that plays out into the world in terms of, you know, the future of work and sustainability and the economy. But the idea is to focus on those things and look ahead to say, how do we make sure we maximize the beneficial impact of that on users, first and foremost, and on society even more broadly? So how do we maximize that? How do we make that the best that it could be in that sense? But how do we do it responsibly? So when I had the opportunity to, you know, in some ways, focus even more on these technology and society things, I couldn't pass it up. (tense music) In some ways, some of the other amazing things that are going on in technology have to do with how we do things. I think you once described to me that you like learning, going to the edge of everything to learn.

- Right. - So I'm curious, describe how you learn. - The first reason I write is to learn.

The second reason I write is to teach. 'Cause I live by Marie Curie's motto that now is the time to understand more so we may fear less. And I think all the best learning happens at the edge.

'Cause at the edge, you get to see things in stark relief. And at the edge you get to name things 'cause you're the first to see them. So the way I tried to learn as a columnist was to go to the edge of four different realms, really. The first was the edge of human behavior.

I lived in a civil war in Beirut for four and a half years at the height of the Lebanese Civil War, from 1979 pretty much continuously till 1984. And when you live in a civil war, you get to see how molecules behave, human molecules at very high temperatures. Second, I went to the edge of technology. I'm actually the foreign affairs columnist for the New York Times; that was my original title. But I developed the habit of going to companies, and McKinsey was one of 'em, which is how we met.

And I just wanted to do two things. I wanna hang out in your research lab. I wanna know what's going on at the tip of your spear. Because if you wanna know about the future, hang out with the people that are inventing it. And at the same time, I wanna hang out in your HR department. I'm pretty sure I'm the only foreign affairs columnist who ever wanted to go to HR at AT&T and talk to their great HR guy.

And the reason is, my intuition was, if that's going on at the tip of the spear of Google or Apple or Walmart, and they're training their people for that tip, that's coming to a neighborhood near me. Third, I went to the edge of nature. Learning about nature and then learning from nature. Because I came to realize over time that this globalization system I was describing was so complex and intertwined, James.

And basically, I started to see the world much more through that natural lens. The last place I went to was small towns. America's actually a checkerboard of communities, some of which are rising from the bottom up, and McKinsey's seen this in its studies, and some that are falling from the top down. And I wanted to understand, these communities that are at the edge, why does one rise and one fall? And so basically, the way I learned, my design innovation, was to arbitrage all of those. - What's fascinating to me about that, Tom, is what you described about how you learn is not entirely dissimilar to how some of these artificial intelligence systems come to learn. - I gotta know, how does an AI system write a column? Please don't tell me it can write like me.

- Well. - Better. No, please don't tell me better. - Watch this space. (Tom laughs)

No, no, no, but I think one of the things, and, you know, there are lots of different AI systems. One of the things that's happened since the last time you and I talked, back to this idea of learning, is we've started to build these large language models. I mention those in particular partly because some of the capabilities that they now start to have could someday approximate what you do when you learn and write a column. - Can it both learn like me and write like me? - Let's tackle each one.

Learn like you. Well, so how does it learn? So these large models, what they do is we train them on an enormous amount of data. Text, language, Wikipedia, dialogue, all kinds of data sets to train them. What that training is doing- - It's generalized, it's not specific. - It's very generalized.

And the way they learn, quite frankly, at its most fundamental level is fairly straightforward. They see a lot of data and basically play a game of prediction, or fill in the blank. So I'm reading this thing, and if I read it enough times, I start to get better at predicting the next word. Then it builds this parameter model for itself that gives it a sense of how words relate to each other, how concepts relate to each other. - It does this just generally now. - Generally. All of a sudden, you go from a very simple idea, scaled massively, to models that can start to predict, you know, can generate responses, can be conversational, can generate text, can even generate a paragraph, can even generate a little essay.
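
(A toy sketch, in Python, of the "fill in the blank" game James describes. It is only an illustration of the idea, not how production large language models are built: real systems learn billions of neural-network parameters rather than simple word counts, and the training sentence and function names below are invented for the example.)

    # Toy illustration of next-word prediction: tally which word tends to
    # follow each word in some training text, then use those counts to guess
    # the next word. Real models learn parameters instead of raw counts.
    from collections import Counter, defaultdict

    training_text = (
        "the world is connected and the world is changing "
        "the world is learning and the systems are learning"
    )

    # "Training": count how often each word follows each other word.
    next_word_counts = defaultdict(Counter)
    words = training_text.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word):
        """Return the most frequently observed next word (the fill-in-the-blank guess)."""
        counts = next_word_counts.get(word)
        if not counts:
            return "<unknown>"
        return counts.most_common(1)[0][0]

    print(predict_next("world"))    # "world" is most often followed by "is"
    print(predict_next("systems"))  # "systems" was followed by "are"

The point of the sketch is only that the training objective is prediction: having seen enough text, the system builds a statistical sense of which words, and eventually which concepts, tend to follow which.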

Now, most of the time, these are not very good. So they're nowhere yet approaching the kind of thing that Tom Friedman would write, but they're getting better and better and better. But here's where it actually starts to get interesting. If you then start to train them on lots of very different kinds of things, so you give them some text, some natural language, you give them some images, you give them some code. - Video, audio, maybe part of it, yeah. - Yeah, now all of a sudden, they start to become quite general.

These systems then become multimodal systems. So you can prompt them with words or natural language in English. They can do translations, they can generate images, they can generate software code, and in fact- - So it's getting close to artificial general intelligence. - They're becoming more general.

We're nowhere near AGI, artificial general intelligence. That's a whole conversation we can have, but we are not even close. But they're becoming more general.

- They're only gonna get better. - You can only imagine, they're only gonna get better. So for example, up until these systems, we could only translate a few languages. But these models, when trained generally and then prompted with a few words or texts in the language you're trying to translate to, all of a sudden can actually translate these languages extraordinarily well. So now, you know, there's nothing to say we couldn't aspire to translate more than a thousand languages.
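
(A hedged sketch of the few-shot prompting pattern James describes for translation: rather than retraining the model for each language, a handful of example translations are placed directly in the prompt and the model is asked to continue the pattern. The example sentences are illustrative, and call_language_model is a hypothetical stand-in, not a real API.)

    # Build a few-shot translation prompt: a couple of example pairs, then the
    # sentence we actually want translated. The model continues the pattern.
    def build_few_shot_prompt(examples, source_sentence):
        """Assemble a few-shot English-to-Swahili translation prompt."""
        lines = ["Translate English to Swahili."]
        for source, target in examples:
            lines.append(f"English: {source}\nSwahili: {target}")
        lines.append(f"English: {source_sentence}\nSwahili:")
        return "\n\n".join(lines)

    examples = [
        ("Good morning.", "Habari za asubuhi."),
        ("Thank you very much.", "Asante sana."),
    ]
    prompt = build_few_shot_prompt(examples, "How are you today?")
    print(prompt)
    # A real system would now send `prompt` to a large multilingual model, e.g.
    # translation = call_language_model(prompt)  # hypothetical stand-in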

(inspiring music) - When I listen to you talk about these AI models, it does feel like we're close to some real tipping point in these technologies. Talk about that a little, 'cause I'm just thinking about our last conversation, which was probably about three years ago. I would be embarrassed to look back at it now 'cause I think of how much has happened so quickly since then. Is the AI tipping point coming quicker than we think? - Where we are now is that there's been an incredible amount of progress, especially in the last decade.

In fact, in even the last three or four years, that progress has accelerated. What do I mean by progress? Progress in the sense that we have systems that are much, much more capable. But I think the kinds of things that perhaps people worry about are nowhere near in sight. People worry about consciousness and sentience and tipping points in that sense.

We're nowhere near that, we're nowhere near that. But the systems becoming more capable? Yes. Do we now need to think even more seriously about the amazing benefits as well as the complications? Absolutely. So in the case of Google, I'm happy to say that, you know, there are some core, basic guidelines and principles that have to do with, let's make sure we're not causing any harm. Let's make sure we're, in fact, applying this to beneficial, useful applications. But I think the world's gonna have to come together, because it's not enough for one company to get it right.

- Well, you need a complex adaptive coalition, just like you need in a village. Every one of the problems thrown up by our Promethean moment actually requires a complex adaptive coalition. - Well, what do you mean by the Promethean moment? - A Promethean moment is a great leap forward in new tools, new energy sources, and new ideas. So the printing press, the scientific revolution, the agricultural revolution, the industrial revolution. And this moment that you are describing, it doesn't just change one thing, it changes everything.

How we learn, how we govern, how we do commerce, how we do crime, how we fight wars, they all have to change. So what is our Promethean moment? What is our printing press? Well, our Promethean moment is distinguished from all the others in two really important ways. It combines a technology virtuous cycle with a climate vicious cycle. So we've never seen this before.

So our printing press is actually our ability to sense. We can digitize, we can then, through bandwidth, connect that to our ability to process it and store it, and then learn from it, share with it, and act on it. That cycle is now going faster and faster and it's touching everything.

That is a technology super cycle. But what's different about our Promethean moment is that it's accompanied by a climate vicious cycle, which is also a cycle. Emit CO2 and methane, thicken the atmospheric blanket around the earth, warm the planet, melt ice, remove the reflection of the sun's rays, heat the oceans, change the jet stream and the Gulf Stream, change the monsoons, and end up with increasingly super-destructive weather. So we're actually going through two super cycles at the same time, one in technology and one in climate. We're gonna be able to hear the world speaking to us in phenomenal ways that we never have before. (gentle music) But I wanna press you.

Are we approaching a point with AI where we won't be able to govern this? I was just reading some Stephen Hawking stuff this morning suggesting that this thing could begin to design things itself, like maybe how to get rid of us. - No, I think that's an important question. We can make choices about, what are we gonna use these technologies for? What are we not? What uses, what applications do we wanna allow and use and encourage because they benefit society? Which ones are harmful? We can make those kinds of choices. Now, we still have to get our act together to actually make those choices.

That's a different governance challenge. But one of the things that is a thread running through all of it, through all of it, is this notion that in some ways, these AI technologies and their power and capacity and capability, quite frankly, are throwing a mirror back at us, actually. - How so? - Well, in the following sense. We all would like to say, let's make sure these systems are fair, right? We'd all like to say, make sure there are values built into them, right? We all say these things, but then I'll say, okay, let's agree on what those values are in the first place, right? So it throws back at us these questions that, quite frankly, humanity's been grappling with for thousands of years, right? - Interesting. - But all these questions about fairness and bias, which are fundamentally important, we have to get right.

I'm not sure we know them. - So we're here at, you know, Google, why should Google get to decide that? - We should not, we shouldn't. And so when I say we get to decide, I mean society as a whole; these are collective questions. These questions about technology use and even climate are, in fact, collective humanity, societal questions. - But what if Google is innovating at a speed, scope, scale, and depth where I have no idea where you are? Okay, how do I regulate you? - That's one part of the co-creation.

Let me add another piece of the co-creation. Society has to co-create with us. One of the things that is particularly striking about, never mind AI, computer science as a field: whether you look at academia or companies, we don't have the kind of representation we should have. There's something about also including society in all its forms so that we can make these as collective decisions. That's work to be done.

(upbeat music) One of the questions I have for you, Tom, as you think about complex adaptive systems, and the idea certainly resonates with me, makes sense to me, is that most of the examples that we have of those tend to be local. In a local community, people living next to each other, interacting with each other, having to get on with each other. How do we coordinate everybody and solve what I think of as a collective action challenge, which is, how do you get everybody to agree? It's not as if we only need these countries to do something and everybody else doesn't have to do it. We have to get everybody to do something. - Well, James, you just beautifully described what I think is the most important political science slash governance question in the world today. Our condition now is one of interdependence.

I think it's the biggest leadership and governing challenge of the day. - So what do we do? - What we do is we have to come together and define a new politics. And I'm not entirely a pessimist about this, James. And it seems to me the struggle in America today is between those who understand, either intuitively or actually, that we do need complex adaptive coalitions, because it's the only way we can solve these big problems, and those who don't. (exciting music) - Let me ask you a final question, Tom.

What are you most optimistic about when you look ahead? - What am I most optimistic about? - What are you excited about? - Yeah, no, I'm gonna stick with your optimism question. You know, last Christmas, the Washington Post did a story about a small town between Baltimore and Washington. It was Christmastime, and this guy was putting up Christmas lights on his house. And he knew the woman living across the street had recently, I think, lost her spouse and was having a hard time with COVID, very isolated. And on the spur of the moment, he decided to string his Christmas lights from his house, across the telephone wires, and light up her house. And he went across the street and knocked on her door and told her what he'd done.

And she was just so amazingly touched. Well, the neighbors all saw it. And so they decided to start crisscrossing the streets with Christmas lights. They bought out all the Christmas lights at every hardware store in the whole area. And the Washington Post ran an amazing picture of all these homes with Christmas lights stringing both sides of the street.

It was their own way of building a complex adaptive coalition, in my language. And that leaves me optimistic that despite all the tensions, all the divisions in our society, there are still a lot of people, James, who want to get caught trying, get caught trying to make it better. - I have to agree with that, because, you know, I find myself optimistic that, at the end of the day, people are fundamentally good, actually.

When people interact with each other in the real world as neighbors, as cohabitants of a place, in interconnected and dependent systems and contexts, I think good things generally come. - Exactly, more than not. - I'm very optimistic about that.

I'm also optimistic, quite frankly, about the transformative potential of what these systems can do. I cannot not be, right? The possibility of all these kinds of things has me just enormously excited, particularly as somebody who kind of grew up in places where these things, these capabilities, were not accessible. - My only regret is I'm 69, James. I really, really wanna see how this story ends.

- Yeah, I'm excited. We have to get it right though. - Absolutely. - We have to get it right. But I'm excited. - Thanks James. - [James] Thank you. - It's a real pleasure.

- Yeah, it's wonderful. We have to do this again sooner. - [Tom] Yeah. (bright music)

2022-11-22
