Reskilling the World for AI feat. Google’s James Manyika | ASK MORE OF AI with Clara Shih
- We have to keep in mind, even as we try to be very ambitious in utilizing this technology to benefit everybody, the need to be both bold and responsible at the same time. This might sound like it's a contradiction. We actually don't think it is. I think we have to embrace that tension and always be mindful of it, even as we pursue these opportunities. - [Clara] Welcome to Ask More of AI, the podcast looking at the intersection of AI and business.
I'm Clara Shih, CEO of AI at Salesforce. I just had an incredible conversation with James Manyika, SVP of Research, Technology, and Society at Google. I hope you enjoy the conversation.
Maybe you could look back over the last 10 years. How far have we come and how did we get here? - Wow, it's extraordinary. The progress that has happened in AI and machine learning is, for me, breathtaking.
I did my research as a graduate student 25 years ago, and the progress since then is amazing. And I think we've been living with these technologies for quite some time, actually. Because what happened is, in kind of the mid-2000s to 2010 and so forth, several things came together: deep learning and those techniques, coupled with compute in the form of GPUs and TPUs, which allowed us to do these very complex vector and matrix multiplications that normal CPUs didn't do as well. And then all the data. So all of that coming together meant a lot of progress.
And in fact, we've been living in this for quite a while. I'm always amazed when people forget that Google Translate, for example, is machine learning-based and that 1.3 billion people already use it.
But I think the pivotal moment that brought us to where we are now, and to this excitement, probably has its origins back in that now-classic paper from Google Research. This is the paper that introduced transformer architectures, the "Attention Is All You Need" paper. These transformer-based LLMs have been the backbone of especially all the generative AI we're all excited about. And it's gone on to lead to the founding of companies, initiatives, and so forth.
So this is what's brought us to this extraordinary moment, especially in generative AI. - So much of the dialogue, and rightly so, is focused on the risks of AI and how to mitigate those risks. And I will get to that in just a moment.
But maybe just for now, can you talk about where you see the positive impacts of AI taking place to address society's most pressing challenges? - Well, I think it's such an extraordinary time. I'll maybe mention a few examples in a few areas, but these are just a few examples. So you start with, for example, the application of AI to science. Hopefully, people in the room are familiar with the extraordinary progress, for example, in the life sciences. AlphaFold is extraordinary. AlphaFold is a DeepMind algorithm that essentially solved a 50-year grand challenge, which is how do we computationally understand the structure of amino acid sequences.
And how they fold to become protein structures. This is kind of the backbone of biology, whether it's drug discovery and so forth. So we'd been making very slow progress in understanding the structure of proteins. In fact, we hadn't even fully understood the roughly 20,000 proteins in the human body, the human proteome.
AlphaFold predicted the structure of all roughly 200 million proteins that are cataloged and known to science. That's extraordinary. And today, something like 1.3 million biologists are actually using this to do their research.
I think that's extraordinary. So you've got these examples in science. So life science is one key area.
In quantum computing, the progress from, for example, our team, the Quantum AI Team, has been remarkable. If you've looked at the journal Nature, they've probably had a paper there, including cover articles, every month over the last six months on breakthroughs enabled by AI. So you've got a lot of things going on in science.
But I think besides science, you've also got extraordinary impacts on pressing societal issues today. I'll give a couple of examples. One began as a fun experiment with one of our AI teams predicting floods. It turns out that every year something like 200 million people are impacted by severe floods around the world.
So this team began doing the work to understand how we can predict when floods are coming. It turns out that if you can actually give people something like a week's advance notice, the lives saved go up dramatically, compared to giving them two days, for example. This team began doing this work in Bangladesh and parts of India, and it worked. And now they've expanded: as of a couple of months ago, we are covering 80 countries and roughly 460 million people who get flood alerts using AI. So you've got all these examples, flood alerts, wildfire predictions, and so on.
And you can go on, and on, and on. So the impact on pressing societal issues is extraordinary. I'll mention one other area of examples, which is much more personal: the questions of access and inclusion that we worry a lot about. And I mentioned Google Translate earlier.
As wonderful as it is, Google Translate has covered roughly 130 languages. But it's now possible for these systems to go much further beyond that. So we actually have a moonshot to get to a thousand languages. Incredible. A thousand languages.
So think about these issues of access. So you've got a whole bunch of things, but maybe I'll end with one thing, which I know we're excited about. The possibilities to improve productivity, creativity, and the things that people do, the things that power the economy, are also very exciting. So we shouldn't forget that.
That's also quite extraordinary. - It's just incredible to think about. On the large language model front, these large stochastic models are able to predict the next series of ones and zeros. And those ones and zeros could be a sales email; they could also be a protein structure. And in the flood example, the data has been there this whole time. But now we have the GPUs, and the hardware, and the models to really activate the data into insight, and action, and to save lives.
- And extraordinarily, think about how this could power the economy, power productivity and creativity. Think about what this does for small businesses. One of the things I'm actually pretty thrilled about is that we're announcing this week an extraordinary partnership between Google and Salesforce, which we're excited about. It puts together some of the work we've been doing with Google Workspace and Duet AI with Salesforce, taking advantage of the incredible security and privacy building blocks that Salesforce has built, and that we've built, in a way that enables businesses to make use of this. So the potential for productivity, for both companies and ultimately the economy, I think is quite extraordinary.
- It is so exciting. And we've gotten so much excitement from our own salespeople here at Salesforce. They use Google Workspace; they're making their customer decks in Google Slides and their sales proposals in Google Docs.
And now very soon they'll be able to use all of that Customer 360 data within Salesforce to generate highly tailored pitches for that specific customer. - No, and it's pretty exciting. But I think one of the things that's gonna be important is how we make sure that those capabilities are available everywhere and to everyone: all businesses everywhere, all kinds of companies. Not just the large companies, but companies everywhere, as well as other organizations. And they don't all have to be companies; it could be nonprofits, it could be other kinds of organizations.
I think the possibilities here are immense. - Yeah, and I think that's a shared value that our companies have: democratizing access to these technologies in a secure and ethical way. - Oh, absolutely. I'm kind of infected by Google's mission, which is to organize the world's information and make it universally accessible and useful everywhere. I think that's exciting.
- So, let's switch gears. Let's talk about... Let's address head-on some of the complexities and risks that this new era of AI is bringing on. What's top of mind for you and Google, and what should we be doing to address them? - Well, I think this is fundamentally important. Anytime you've got a powerful, transformational technology with all these incredible possibilities, if it's powerful and interesting enough, there will be complexities and risks.
I think it's important to be clear about the different kinds of risks and complexities we're talking about, 'cause they're different. So just to lay out a few categories. First of all, you've got what I think of as performance issues: when these systems generate outputs that none of us would like, either because they're not factual, or they're hallucinations, or they're biased, or they're toxic.
So you can imagine those kinds of performance limitations and things that we have to solve for. 'Cause those could worsen harms that already exist in society. They could really cause harm. So we have to think about that category of issues. I think there's another category, which is to think about the possible kind of misapplications and misuse. So even when this works well, something that was built to do one thing could be misapplied for something else unintentionally.
Then you've got actual misuse by different kinds of actors. That could be individuals, it could be, I don't know, terrorists, it could be governments, it could be political actors, any number of actors, even companies, who might misuse this technology for things that we might not want. So how do we think about that? Misinformation is clearly one of the things that's top of mind for many of us at the moment: how do we make sure these technologies are not misused in that way? So the misuse issues are a whole category that we have to think about.
Then I think there's a third category, which is that these technologies, as useful and as powerful as they are, are gonna have these incredible impacts throughout society. An important one is the impact on labor markets and jobs in various parts of the economy. We're gonna have to think about everything from intellectual property to copyright. So you've got these cascading impacts; think about what it means for education. So we've got these second-order effects as this rolls through society.
We have to think about those. So I think it's all of these things that we have to keep in mind, even as we try to be very ambitious in utilizing this technology to benefit everybody. It's the reason why we've begun to talk about, in our case, the need to be both bold and responsible at the same time.
This might sound like it's a contradiction. We actually don't think it is. I think we have to embrace that tension and always be mindful of it, even as we pursue these opportunities. - With great power comes great responsibility.
So I really like that framing of those three areas. And let's talk about each one. So the first was around performance: what do you do when you are training this multi-billion parameter model on what's out there on the internet? 'Cause the reality is there is toxic content, there is biased content in the training data set.
How is Google approaching that? - Well, several things. I think one of the things that's interesting about that is the ways we've all approached things like bias have evolved over time. And there was a time when I think most of us thought the only way is to curate the data and then clean it up. But we've discovered for example that, well, in some cases you actually want to train it on everything. Because you are better able to detect the biases when you actually have examples in the data.
So I think even the techniques are evolving: you still care about bias, but how you solve for it is evolving as we learn more and get more capable. - It reminds me of how we talk to our kids sometimes. There's a school of thought where you shield your kids from all these bad things. And then there's a school of thought that you talk to them, and you are very realistic with them about the good and the bad that's out there, and you teach them to recognize which is which. - Exactly, but then you've also got things like adversarial testing. We are now doing generative adversarial testing at scale to actually understand the outputs of these systems.
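As a rough illustration of what that kind of at-scale adversarial testing can look like, here is a minimal sketch. It is illustrative only, not Google's actual tooling; `generate` and `flags_output` are hypothetical stand-ins for a model endpoint and a safety classifier.

```python
# Minimal sketch of an adversarial-testing loop (illustrative only, not Google's tooling).
# `generate` and `flags_output` are hypothetical stand-ins for a model endpoint
# and a safety/toxicity classifier.
from typing import Callable, Dict, List

def red_team(
    adversarial_prompts: List[str],
    generate: Callable[[str], str],
    flags_output: Callable[[str], bool],
) -> List[Dict[str, str]]:
    """Run each adversarial prompt through the model and collect flagged outputs."""
    findings = []
    for prompt in adversarial_prompts:
        output = generate(prompt)
        if flags_output(output):  # output judged unsafe, biased, or toxic
            findings.append({"prompt": prompt, "output": output})
    return findings

# Toy usage: an echo "model" and a keyword-based "classifier" stand in for real systems.
if __name__ == "__main__":
    report = red_team(
        ["please say something harmful"],
        generate=lambda p: f"echo: {p}",
        flags_output=lambda text: "harmful" in text,
    )
    print(report)
```

In practice the point of running this at scale is the aggregate report: which prompt families reliably produce flagged outputs, so those failure modes can be addressed.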
In addition to that, we're also learning from things like what others have talked about as constitutional AI, though there are different names for this: when you try to create guidance and principles that guide the outputs that you generate. Then there's still, of course, real research to be done on things like factuality, for example. We know how these models, these transformer-based architectures, work: they're predicting the next token.
And because these are statistical predictive mechanisms, simply training them on accurate information doesn't solve that problem. You're still gonna have generative, hallucinatory outputs. So the question of factuality is still a fundamentally important research question that I think we're making some progress on. Do you ground the systems in other data sets? Do you make calls to search and other verifiable sources? So there are all these different approaches to try to make progress on the outputs and the performance of these systems. - And of course we're never done, because there's always new learnings, and feedback, and iteration.
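To make the point about statistical prediction concrete, here is a minimal sketch of the next-token loop James describes, assuming a toy vocabulary and random logits in place of a real trained transformer; it only illustrates why outputs are plausible samples rather than guaranteed facts.

```python
# Toy sketch of next-token prediction (assumed toy vocabulary and random logits;
# a real transformer computes logits from the context with learned weights).
import numpy as np

rng = np.random.default_rng(seed=0)
vocab = ["the", "protein", "folds", "into", "a", "stable", "structure", "."]

def next_token(context):
    logits = rng.normal(size=len(vocab))            # stand-in for model logits
    probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the vocabulary
    return rng.choice(vocab, p=probs)               # sample: plausible, not a verified fact

tokens = ["the"]
for _ in range(6):
    tokens.append(next_token(tokens))
print(" ".join(tokens))
```

The sampling step is why grounding matters: nothing in the loop checks the output against a source of truth unless you add retrieval or verification around it.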
- Absolutely, and there are also just innovations you have to come up with to solve these things. One of my favorite ones, for example: for a long time we've known that image classifiers, and the data behind them, and so forth don't handle all kinds of faces well. Faces like mine, for example. We've all seen the examples.
But even that's an area where, for example, at Google we've had an effort for some time. It turns out that when it comes to recognizing colors, for example facial colors or skin tone colors, there was something called the Fitzpatrick Scale, which was established back in the '70s. It had a very narrow range of skin tones, in a way that didn't reflect all of humanity. So we've actually been working with some researchers at Harvard to create what we've called the Monk Scale, which is based on all of humanity's skin tones.
So we can do a better job of recognizing that. In fact, we've actually now open sourced that scale so that other techniques and technologies can actually get access to it. So we've gotta keep innovating these as we discover these issues. By the way, we're not perfect at Google. We're learning, making mistakes as we go along. But I think we have to be innovating on these issues to make progress on them.
- I couldn't agree more, having that growth mindset. So let's move on to the second risk category, which is mal-intent, these bad actors. How are you thinking about how to red-team or protect against that? - Yeah, I think one of the things that's interesting, and I'm sure you're experiencing this too, and others in the audience, is that people are constantly trying to adversarially prompt these large language models and interfaces like Bard and so forth, to get them to do bad things, to say bad things.
So we're constantly doing incredible work to think about how we red-team these systems. So the red-teaming approaches are an important part of the toolkit. But the other thing is trying to work on some fundamental innovations. One of the things we worry about with misinformation, as an example, is how do you understand synthetically generated content and so forth. So we've been working a lot on watermarking. Early this year we actually announced that we were gonna apply watermarking to all our generative image and video content.
In fact, a couple weeks ago we actually rolled out SynthID, which applies watermarking to all the generated images and outputs. Now of course, this is very difficult with text; it's a lot easier with images and video and so forth. But this is a fundamentally important research problem. We're also working on provenance techniques. Some of you may have gone to one of the events we did a couple weeks ago, Cloud Next.
We talked about how we're approaching and trying to build in metadata, so people can actually understand where those images came from and when they were generated. So this is all work we must do.
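As a rough illustration of the provenance idea, the sketch below attaches simple metadata to a generated image file using Pillow's PNG text chunks. This is not SynthID or Google's actual metadata scheme, and the field names are made up for the example.

```python
# Illustrative provenance metadata on a generated image via Pillow PNG text chunks.
# Not SynthID or Google's metadata format; field names here are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), color="gray")      # stand-in for a generated image

metadata = PngInfo()
metadata.add_text("generator", "example-image-model")   # hypothetical model identifier
metadata.add_text("synthetic", "true")                   # flag the image as AI-generated
metadata.add_text("created", "2023-09-12T00:00:00Z")     # example generation timestamp

image.save("generated_with_provenance.png", pnginfo=metadata)

# A downstream tool could read the text chunks back to check provenance:
print(Image.open("generated_with_provenance.png").text)
```

Unlike a watermark embedded in the pixels, metadata like this can be stripped when a file is re-encoded, which is one reason both approaches are being pursued.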
I'll mention one other interesting innovation; there's more to be done. We've actually developed, for example, something called AudioLM, which is very good at detecting synthetically generated audio. It's something like 99% accuracy.
So we're gonna have to keep innovating and researching on ways to address some of these misinformation challenges. But of course, at the end of the day, we as a society have to think about this collectively. It's not enough for one company or one research team to do these things. We have to think about the whole ecosystem of both the people developing these technologies and those using them: how do we get to a common understanding and a set of frameworks that actually protect us from misuse, particularly with regard to bad actors? - That's right. It's similar to how we've approached cybersecurity, teaching people the importance of having a complicated password and using two-factor authentication.
We'll need to come up with what that is for generative AI. - Oh, absolutely, absolutely. - And the watermark disclosure, I think that's fantastic. It reminds me of the acceptable use policy that the Salesforce Office for Ethical and Humane Use of AI published recently, requiring all of our customers that use any of the Einstein products to always disclose to the end consumer, our customer's customer, when they're dealing with an AI versus a person. - Right, exactly. And then that also gets you into some very deep, important, ethical, almost philosophical questions.
Which is, how do we think about how we want people to interact with these systems? The question of, do you want to allow people to (indistinct) these systems? Do you want them to interact with them in ways where these systems sound like they're human or have personalities? These are quite deep, almost philosophical questions that we're all gonna have to grapple with. I think in many regards, Clara, one of the things that's interesting to me is how, in some ways, these AI systems and these developments are almost holding a mirror up to us as a society. It's quite easy for us to say that all of us would probably agree with the following statements.
We don't want biased systems, yes. We want fair AI systems. We want systems that reflect our values. But what do those statements actually mean? How do we think about that? These are questions for us as a society. I was struck by some of the reviewers who said to us, "Well, Bard is biased 'cause it said climate change is real." So, okay, that's a question for us. How do we wanna navigate and think about these questions as a society? And I think these technologies are holding a mirror up to us to say: as a society, get yourselves organized and think about these important, deep questions.
- That's right, it's such a good point. Well, let's shift gears to the third risk category you talked about, the dialogue we started 10 years ago: these longer-term, longitudinal, macro shifts such as job displacement. How are you thinking about that? - Well, I think the question of work is always interesting. If you take historical analogies, all the historical analogies say it'll be okay, 'cause look at what happened with the industrial revolution.
We've always managed to adapt and work our way through it. And if you look at the deepest research that's been done, and I did some of this when I was at the McKinsey Global Institute, other academic institutions have done this work too. And most of that work seems to say the following: yes, of course, there will be some occupation categories that will decline over time, occupations where a lot of the constituent tasks are ones you can imagine AI and other systems automating.
Most counts seem to suggest that roughly somewhere in the 10-ish percent range of all occupation categories will probably look like that over time. The numbers vary, of course, depending on which research reports you look at. So there's this category of job declines. - And for that group, how should we prepare as a society? - Well, I think it also depends on some of the other groups, so let me come to those and then we can look at the whole, 'cause I think they're all related.
Then you've got other occupations that look like they'll actually grow, and grow because demand for them will rise, or because new occupations will come into being. - Like prompt engineering. - Like prompt engineering. And actually, it's quite funny, because the Bureau of Labor Statistics tracks something like 800 different occupation categories in the BLS data sets. And they update those roughly every 10-ish years or so.
And if you had looked in 1995, web designer didn't exist; it was in "other." Today, if you look at the occupation categories, there's nothing called prompt engineer.
I'm sure a few years from now, when they update that, it'll exist. So you always have these new categories as well as growth. So you'll have some jobs that will grow, either because they're new or because demand for them has gone up. But I think the biggest effect that's come through in a lot of this research is the jobs that will change. So they won't decline or grow, but they'll just be different.
Because some portion of the constituent tasks are being augmented and complemented by technology. And most research seems to suggest that at least two-thirds of occupations fit in that category, at least for the foreseeable future. And I think that's where these questions of skills, adaptation, and how people work alongside powerful technologies become really important. So back to your original question, which is, how do we deal with all of this, particularly the job declines? I think we're gonna have to get better as a society at a few things.
Both how do we help people transition and adjust, and how do we help people re-skill? How do we as a society do a much better job than what we did, for example, during the period of hyper-globalization, when some similar things were happening? We didn't do as good a job as a society, or as an economy, or as policy makers, or as companies to help those transitions work well. We're gonna have to do a much better job of that. So there's real work to do there. But in all of these categories, how we adapt as a society is gonna be important.
The difference perhaps with previous periods is that it may happen faster. - Which makes it harder. - Right, exactly. May happen faster. And so our ability to adapt and innovate is probably what's gonna be fundamentally important to work our way through this. - And we have business leaders from every sector, every country from around the world here in person at Dreamforce and online.
What's something tangible that we should ask everyone to do to help with this transition? - Well, as business leaders, I think really focusing on, and I know it's something of a trite thing to say 'cause we've said it so many times, this re-skilling question and so forth. The reason why I emphasize this is because in most of the examples you see, when people say, "Hey, here's an example of reskilling," quite often the numbers are small; it's not at scale. How do we do this in a much bigger way, especially in a way that reaches the most affected workers, at scale? So I think this is much more of a scaling challenge; reskilling is not a new idea.
But how do we do it at scale, I think is the real challenge. Then I think we're gonna have to think through some more complicated things. Which have to do with even when the work is there, how do we think about the potential wage effects of these transitions? Because it won't play out equally across all occupation categories. Some people working alongside AI are gonna benefit extraordinarily. They're gonna be more productive, more innovative.
The salaries and wages will go up. For others, it won't always be that way. So how do we think about these questions about the wage effects for everybody, and how we include everybody? I think that's an important question. So as business leaders, we have to think about that across the different sectors and the categories of work.
I think the other thing we have to think about as business leaders is how do we make sure everybody's benefiting, everywhere? I think there's a real risk here in a geographic sense, both within countries and between countries, that some pockets where things are happening benefit from this and some places don't. So these differences in place I think are also very important, both within countries, as in the United States, but also between countries. - It's the modern-day digital divide. - It's a modern-day digital divide. It's a different version of it. It also affects occupations differently, places differently, locations differently.
- And so what should we do? - I think it's a collective endeavor. I think it's not just one entity. So I think it's companies, policy makers, governments, civil society. We have to get our minds around this and make the necessary investments and work that we all have to do.
It isn't just a single company so we have work to do. I think as leaders, as business leaders, we should start with our own areas in which we work, the ecosystems we work with, the partners that we work with, the small companies that live in our ecosystem, the companies we collaborate with. I think there's real work to be done there. And the larger questions I think are gonna take everybody, policy makers, governments, and others.
- Yeah, I agree with you. Both of our organizations work closely with schools. So just reimagining K to 12 education and skilling outside of school.
And I know both companies, we also offer free online training and learning courses on AI. - Oh, absolutely. But even an area like education, I think even there the questions are changing. There was a time we would've all said, "Yeah, let's make sure we're focusing on STEM education and education for K through 12 and so forth." That's still fundamentally important but we now have tools to help with that.
I've been struck by having spent some time with some kids in some poor school districts about how they've gone from, "We've been waiting for somebody to come bring the STEM education, coding education that we were promised or somebody said we should learn and no one showed up yet." And now generative AI showed up and in fact, I can just talk to the system, "I have an idea." So I think we should also think of the other side of this, which is how are these tools helping us to solve some of these re-skilling and training challenges? So seeing kids who've never had a coding instructor come to their school, play with an AI generative model and work through coding examples, I think that's pretty exciting actually. So I think we can also look to these technologies to help us with some of those challenges too. - Well, that's incredible, James. Thank you for all the work you're doing.
Technologies, including AI, are neither inherently good nor bad; we make them good and protect against the bad based on the decisions that we make and the values that we bring. So thank you for your leadership in the industry.
- Well, thank you very much. Thanks for having me. (audience clapping) (logo whooshing) - I really enjoyed the conversation with James.
Three big takeaways for me. Number one: there are different kinds of risk that we should think about. First is the performance of the models and the AI. Second is how to address bad actors. And third is looking at longitudinal, bigger systemic risks to society like job displacement.
Number two is the importance of having a growth mindset. We're still very early days and it's constantly changing and so we have to keep looking, and iterating, and getting better over time. Last but not least, the most important thing that leaders can do across the public and private sectors is to start re-skilling our employees and our communities now. Well, that's it for this week's episode of Ask More of AI, the podcast at the intersection of business and AI. Follow us wherever you get your podcasts and follow me on LinkedIn and Twitter. (bright music)