Adm. William McRaven and James Manyika | Dialogues on Technology and Society | Ep 6: AI & Leadership



(music) Admiral McRaven, it's a pleasure to see you again and spend some time. I had an extraordinary experience working with you back in, I think we finished in 2019. 2019, right. When you and I co-chaired this extraordinary task force, which had this kind of dual objective of U.S.

innovation strategy and national security. It was kind of an unusual combination of topics. But as you recall, part of it was realizing that some of these new fundamental technologies, AI, quantum, biology, biotech and so forth, are on the one hand incredible engines for innovation, prosperity, solving problems. But at the same time, they did pose some national security questions.

I'm curious, as you reflect back on what we did: does it feel that the world is still in the same place? Has anything changed in your mind? Yeah, well first, James, always good to be with you, and thanks for allowing me to join you today. You know, when I think back on the task force report, one of the things that surprised me was that, going into it, as you know, we tried not to make it about China, but of course it ended up becoming all about China.

And where we were, the U.S., relative to China. And there was a sense back in 2019 that, you know, China was gaining on us very quickly and they were going to surpass us in the next couple of years. And I think what we recognized was the nature of the culture here in the U.S.

It was, you know, what goes on at Stanford, at Harvard, MIT, and now in Texas and other places around the U.S., because Americans in particular have this great innovative culture. But, of course, as you recall, one of our recommendations was continuing to fund basic research, because basic research is really what's going to drive the next great innovation. I'm not sure we have funded it to the level that we need to. So, you know, our hope at the time was that Congress would take a hard look at this and begin to put more funding into basic research. But I think we still need to apply pressure and get them to do that.

I know, I actually remember, well, I think I remember, that the statistic we found at the time was that the peak of funding of basic science and research had happened in 1964. '64, correct. And that it had been coming down ever since, and that we had this huge gap. I also recall something, Admiral, which is I think we also observed that we were not investing enough in education in these areas. And in particular, this idea of how do we get both more people studying in these fields and educate people in these fields more broadly around the country. So not just the Stanfords and the MITs, but broadly.

And how do we also get more mobility between public service, government, and the private sector for the people who have that kind of background and training? So you're pointing out that we haven't done enough on the funding of basic research side. Are you feeling better about the education side? I'm not, actually, so... In some areas.

But when I was the chancellor of the University of Texas system, I was frequently asked in large groups, you know, with my military background, people would say, well, what do you think the number one national security problem is? And I would always say K through 12 education. And there would be this, no, no. I mean, the number one national security problem? K through 12 education. And my point was, if we weren't teaching the young boys and girls at an early stage to have those STEM skills, to think critically, to be exposed to other cultures, then when they got to be 20 and 30 and 40, they weren't going to be in a position to make good national security decisions. So I still think, I mean, we've got, without a doubt, the finest university system in the world, bar none.

But when I take a look at K through 12 education, and actually pre-K through 12, we still have a lot of work to do. And for this pipeline of young men and women that are coming up, we need to invest as much in public education, recognizing that there's a space for private education as well. But this is what's going to allow us to stay above and ahead of everybody else. I fully agree with that.

I think one of the other observations we had at the time was the fact that there was so much unevenness around the country. While you might find these few places, these few spots where those are flourishing, it wasn't evenly distributed. The question is, how do we invest in all parts of the country, for all people, so that we can actually build these skills more broadly?

You know, and again, it was part of my naivete when I came in to run the university system. Our flagship was the University of Texas at Austin. What I found when I went down to, for example, UT Rio Grande Valley, which is down in the southern part of Texas, was that they were just as smart, just as talented, you know, wickedly smart. In fact, it was UT Brownsville that helped identify Einstein's gravitational waves. They were part of that. And so you realize that you don't have to be an Ivy League school in order to have brilliant researchers, brilliant faculty. And so how do we ensure that the nation is putting the right amount of resources in the areas where they need to? Some of this is the state. So the state's got to step up and recognize that, hey, we've got great institutions across the state.

We're going to invest. And coming from the state of Texas, where we are fortunate to have both a Permanent University Fund and, well, the state of Texas has a lot of money. Let's figure out how to disperse that money across all universities so that all boats rise. Because if you put your bet on just those research universities, then again, you get into this competition: well, I'm a research university. Got it. But guess what? There's great stuff going on at universities that don't qualify as research universities.

Invest in them as well, because then they can rise up at some point in time and become research universities. But there has got to be a national reckoning on how we're going to fund education, both K through 12 and university education, and put the money where it's going to, again, be spread wide enough so that even the smaller schools can take advantage of it. I want to go back to one of the other things that I know we talked a lot about at the time, which was this idea that, in addition to education and K through 12 and all of that, there was also a need for greater mobility between the private sector and the public sector and service, especially in these fields with these technologies. Do you think we made progress on that? Yeah. That's an area where I do think we made some progress.

I mean, you see kind of public-private partnerships beginning to work. Certainly what Google is doing is going to make us better as a federal government. It's going to allow schools to be better. You know, we talked today about, can we get the private sector to invest in infrastructure? A great idea, not something I'd thought about. But when you think about not just our airports but our educational infrastructure, you know, again, all the systems that are going to allow us to be smarter, faster, better.

These are important relationships. And I do think we've made some progress in that. Yeah. One of the areas where I at least have some optimism is one of the things we talked about at the time.

In the task force, there was this idea that in some of these new areas like AI, which I'm sure we're going to talk about a little bit more soon, there was a need to provide computing resources to universities to be able to do that kind of work. And I'm actually glad to see that there's now this initiative called NAIRR, which is trying to build this kind of National AI Research Resource to enable universities to do that. But let's maybe talk a little bit about AI. I mean, at the time we did this, four years ago, AI was actually one of the areas we were talking a lot about. Now, I think the world has come to fully appreciate the power and possibilities of AI in the last year. And the conversations tend to be of a few kinds. On the one hand, of course, there's a lot of excitement about all the incredible things we can do for the economy, for society, for health and all of that. At the same time, there are some real risks and concerns.

Now some of those are things like when these systems don't perform as desired and so forth. And, you know, they're biased and so on. We don't want that. But one category of risks has been around this question of possible misuse. How do we think about that? Because some of those risks of misuse relate to, of course, things like misinformation and so forth, but some of them are also national security risks. How do you, how should we think about those kinds of risks? Yeah, I don't think we can be risk averse.

I mean, the fact of the matter is the potential of AI and machine learning is so great that my belief is we need to go full steam ahead, recognizing we've got to go with our eyes wide open, recognizing there are risks out there, and tackle those risks as they come up, or as we build a roadmap, recognizing those risks are going to be out there. I never cease to be amazed by the talent of these young men and women that are working on these machines. They're wickedly smart.

And so I think that they are wickedly smart enough to see the potential problems and come up with solutions. So I don't think we should be afraid of it. I do think we should, again, go in with our eyes wide open, both from a national security standpoint and for the broader good it can do around the globe. Yeah, I know. And in fact, one of the things we're trying to think about is similar to what you're describing, Admiral, which is, on the one hand, we want to be bold, meaning there are so many amazing possibilities for society, for people, on so many fronts.

At the same time, we want to be responsible. And so we think about and contemplate these risks, and try to use these technologies to actually innovate and find solutions to them, and obviously do this together. I guess one of the other questions, and I know you and I were discussing this in our various conversations before, is: what's the role of values in all of this? Because, in fact, we're now working in a very global space, and one of the key questions with AI systems is, we always think, well, we want to make sure they reflect our interests, our values.

How should we think about values and the role of ethics and those kinds of considerations? Well, right now in the United States, we are always going to put a person in the loop, because if the decision is wrong, somebody will have to be held accountable. That is the nature of the United States military. Why is that the nature of the U.S. military? Because it is part of our value system that we are going to hold people accountable.

But you raise the issue of, yes, but what if our adversary doesn't care about the person in the loop, and therefore their decision process is going to be faster, because on this side we have a value system and they don't? Then will they be able to, you know, overcome our defenses because their decisions are faster? That's a scary thought. But at the end of the day, I believe that the system that has a value system attached to it will be fundamentally better in the long run. I don't know how you build that into the algorithms, but I do think it's going to be important that even if you can't build it into the algorithms, the algorithm up here will say, hey, I know what our value system is. And if I can't build a value system into an algorithm, the human's got a value system that's been incorporated into us since birth, and that will be the trump card for us. What you're pointing to is actually an important major area of research in the field, because one of the things many of us are trying to do is figure out how we have these systems learn with human feedback, right, and involve human inputs into those questions and how we think about them.

But even if we automate those decisions, we still want to have them working alongside people, right? So I think a lot of us, many of us, actually believe that in some ways we should think of these technologies as being assistive to us, correct, as opposed to substitutive, or replacing things that we should do and be accountable for. But that's obviously going to take some assets and some real work. I want to come back to something else, Admiral, which is, in your recent book you talked a lot about leadership. Right. And I think we're going through this period where, on the one hand, many of us are excited about these technologies while also concerned about the possible risks, and we're trying to coordinate and work with others. What's the role of leadership in all of that? How should we think about what leadership should look like when we're trying to get these big questions right? Well, the very first chapter in my book, The Wisdom of the Bullfrog, and I debated whether or not I was going to title it this, but it's Death Before Dishonor. And in that, the death before dishonor goes all the way back to Caesar's time.

But the Marine Corps has kind of taken it on as their informal motto, because one of the great Marines, Medal of Honor recipient John Basilone, had it tattooed on his arm. But the point, with this death before dishonor, is that honor, nobility, integrity, character are the single most important things in leadership. So as you begin to look at AI, if you have leaders that don't have character, that don't have integrity, that think the dollar is more important than the nobility of what value can be brought from AI, then you're going to get a piece of AI that is not what we want. So you have to start, I think, with any leadership problem, whether it's AI or just leading young men and women in combat: it is about the integrity of the leader.

And if that's the cornerstone, then you begin to build the culture that says these are men and women of character and integrity. And if I'm working for them, then that's what's expected of me. A long time ago in the military, this idea of servant leadership was kind of fundamental to us, and the idea was like, if you're the leader, your job is to make sure that the men and women that work for you are well-trained, well equipped, that you give them the latitude to do the job. Your job as a leader is to make them successful. If it ever becomes about you, your promotion, your, you know, credibility, your anything, if it's about you, you're probably not the right leader for the job.

If it is about the men and women that are working for you, and you make them successful, guess what, everybody's going to be successful. And you make them successful by, again, treating them with respect, giving them the tools to do the job, giving them the latitude to do the job, but holding them accountable when they don't do the job. And all of those are kind of fundamentals of leadership that will apply to AI, I think, as well. Yeah, and as you talk about leadership in that regard, I mean, clearly there are enormous lessons about leading, and leading with integrity and character. You've led a life of public service, and in many respects, I suspect your motivations were public service; you were doing it to serve the common good, the public good. You know, one of the things we're trying to do, I'm trying to do, is think about the role of technology in society.

So by having society in there, we're trying to take account of: how does this impact society, how is it accountable to society? How should we think about that from a public service standpoint, given your incredible career? Yeah, I mean, I think you have to decide, what do you think is good for society? So what are the things that make society better? When a society is cohesive, not necessarily, you know, homogeneous, but cohesive, in that we believe in certain values: that we believe in, you know, human rights, that we believe in a democracy and a representative government, that we believe in, you know, respect for humanity, respect for other cultures. If this is what's going to make a good society, if we believe in education, you know, then you've got to frame what is good for society. And once you have those building blocks, then I think you build upon that. But those are often very difficult things.

I mean, in our experience, one of the things we've tried to do, for example, is set up these AI principles that guide our AI work, and I won't take you through them, there are like seven of them. But the essence of them is to try to ask two questions. On the one hand, they're trying to get at this question of, you know, could this benefit people and society in any way? The second question gets at: could this be harmful in any way? Sure. So in trying to navigate the essence of those two questions, I think we're learning a lot. I hope we get it right most of the time, but I know we've had to learn a lot, some hard lessons.

But I think then this becomes, for you, just binary. So is it good for society or is it bad for society? But, you know, there's that nuance of what is good and what is bad. And so, back to the leadership. Somewhere along the way, you're going to have to say,

this is what we, Google, or we, whoever is working on this, these are the things, the principles, we think are good for society. You've got to have that debate, I think, in order to begin to frame where you're going to go. And what should guide that debate, in your mind? A debate, a conversation. A conversation.

And I do think there's a little bit of the wisdom of the crowd here. So how do you, how do you poll people to say, okay, America, 360 million of us, let's have this conversation? So I think having these conversations with the American public, because as much as we think that, I mean, they may not understand AI, but they will understand what we think makes a good society. You're not going to get it right. You're never going to get it completely right.

But I think it will begin to bound the problem a little bit. One of the things that I'm curious about, though, as you think about the possibilities of these technologies: what are you optimistic about and what are you concerned about? You know, I'm really optimistic about where AI will come into play in the health care sector, because, I mean, it will look at, you know, new mRNA, and it will look at, you know, new protocols, and AI will come up with new ways to cure and to develop, you know, pharmaceuticals. I mean, that part to me is just fascinating.

And I think it will, it will revolutionize the world. You know, I also look at education. I think it's going to open up so many doors on the education front, on the health care front.

I see, you know, almost a science fiction world of limitless possibilities. I'm not naive. No matter what we build, bad things can happen. We can't be afraid of a sharp stick. Right. I love the quote.

Yeah. Can't be afraid of a sharp stick. So I'm, I'm pretty optimistic about the direction.

Again, I'm optimistic because I've seen the young men and women that are working on this, and some of the older men and women that are working on it. Yes. And they are so smart and so thoughtful that we'll figure it out. Well, Bill, maybe this is a place where I'd like to end up: let's take a thought experiment. Let's imagine it's the year 2050 and we're looking back, and the world, America, society, is wildly happy, right? AI happened. What did we get right? What problems did we solve? What dangers did we avoid? Yeah. I think AI solved those problems

that we were going to solve in time, but that time could have taken decades. So climate change? Will AI figure out weather patterns that allow us to understand climate change better, so that we can reduce the impact of climate change? You know, in maybe decades we'd have figured out a cure for cancer.

Will AI figure that out in a matter of days, weeks, months? So the value, of course, is not that humans couldn't figure this stuff out, but would we figure it out too late? How many people were going to die, or what was going to happen to the world, before AI came along? So all of those things that we would get around to at some point in time, AI will accelerate in a very positive fashion, and that's what I'm optimistic about. That's terrific. I mean, I think, just given some of the things I've learned from you and we learned in our task force, I would also just add, I would hope that we also solve this idea of, how do we give these opportunities to everyone? How have we revolutionized all the places, so it's not just these few places and universities, but it's everybody? You're absolutely right. But this is terrific. Well, thanks for making the time. My pleasure, James. Always good to be with you.

Good to see you.

2023-12-17 00:00

