Humans, Artificial Intelligence & The Future: Riveting Keynote #Futurist Gerd Leonhard NobelFest '23


Thank you. So, thank you very much to the organizers for giving me this nice King of Astana outfit. I like it. Thank you. So, Salem, it's my great pleasure to be with you today. I'm going to talk about the bolashak

keremet, the good future. When we think about the future, and it was very interesting to hear Professor Ray earlier, many of us look at it as if technology or science can solve every problem. And it's funny: I believe that technology and science can solve our practical problems, water, food, energy, but our human problems, our collective problems, take more than technology. You know, I used to work a lot in the telecom business, and telecom companies said, well, really what happens is that when we are connected, we prosper.

And that's true. But today, many of us are over-connected. Sometimes we wish we weren't connected, so that we could connect with other people. Some of us have more relationships with our screens than with other people. So, in many ways, you could say technology can be heaven or it can be hell.

You can take a hammer and kill your neighbor or you can go build a house. You can use artificial intelligence for great things and for not so great things. Human genome editing, we can maybe solve cancer, but we can also build a super soldier.

And it's up to us to see what we make of technology. We can create the good future, as I call it, the keremet future, a future that is good for us, for all of us: food, jobs, self-realization, privacy, rights. Or we can take the same technology and create a tyranny. Social media is a great example. We used to enjoy social media. Maybe we still do, you know, when we meet each other there. But today, social media has turned into a magnet for bad opinions.

In many ways, you could say that the demise of democracy in some countries is because of social media. We're seeing all the things that aren't so good. We're seeing the things that are negative. So we should think about that a little bit. We should think about what's coming up. I think the next ten years will bring more change than the previous one hundred years. I know it sounds crazy, right, when you think about it.

The previous one hundred years: two world wars, the internet, COVID, all of these things. But now we have six king-making technologies, and I prefer to call them queen-making, because there are so many women in the room. They're queen-making technologies. It starts with artificial intelligence; I'll talk more about that in a second. Paired with that, of course, is quantum computing: the possibility of pretty much limitless computing power. Supercomputing. And then, nuclear fusion.

The possibility of finding a new source of energy that is not polluting. The next generation of nuclear energy. That's roughly, I don't know, what do you think, ten, fifteen years away? It's not fifty years. When we have nuclear fusion,

it is the end of energy issues. Think about that for a second. That sounds like pie in the sky, like science fiction, but it's actually very real. We have, of course, genetic engineering, synthetic biology. We can make airplane fuel from plants.

We can possibly change the human genome to avoid diseases. Lastly, geoengineering: the possibility of bringing the earth back. I mean, we're talking about really, really big stories. And we are inventing all of these, but here's the bottom line. We will have all the tools, the tech, the science, but will we have the telos, the Greek word for purpose? Will we use technology in the right way? And part of that has to do with our economic logic. We use technology to make more money. I mean, that's primarily what's happening with artificial intelligence today. That is not such a good idea, because there's a limit to how much money is actually useful for what we really want. I'll talk more about that in a second.

So we're now facing three revolutions, three very, very big things. And it was mentioned in the speeches before. The first one, of course, is the digital revolution: everything is going digital, in the cloud, connected. The second one is the sustainability revolution. It's a hundred times the size of the digital revolution. The World Economic Forum says roughly a hundred million new jobs in the green economy. I mean, this is the future of Kazakhstan. This is the future of a place where you can smell the coal in the air outside. Can we switch that to sustainable,

to circular? I mean, this is a huge story. In the U.S., President Biden just announced the Climate Corps, like the Peace Corps. And we should have that in every country. I mean, it's kind of interesting that the U.S. didn't care much about these issues for a long time, but now there's the IRA bill and all of these new things coming in. The last one is the most important: the purpose revolution. What is the purpose of an economy? What's the purpose of our future? The purpose is not just profit, growth, and more jobs. That was the purpose of my generation, the baby boomers, the Gen Z, no, Gen X, sorry.

That's what we wanted. Did you know that a huge share of the CO₂ ever generated around the world was generated in the last twenty-five years? That's my generation's doing. Now we have the millennials, Gen Y, between twenty-five and forty, many of them in this room, saying: we want a different life. We don't just want profit and growth and more money, even though we still want money, right? Let's be clear about that. That's a very big objective. We want those three things: sustainability, purpose, the good future. And in the good future, good doesn't just mean having enough money. You know, there's a lot of research on this, showing that at roughly seventy thousand dollars per year, which is a lot of money,

you're sort of average happy. If you have seven hundred thousand or seven million dollars, you're not ten times as happy. Happiness is basically not all related to what comes in as revenue. And in the digital revolution, we have four drivers: information technology, biotechnology, energy technology, and, last but not least, right now the biggest one, AI technology.

I mean, when you're looking at that, you can basically say, there is going to be enough innovation, enough jobs, enough future, enough potential in the next decade to change every single process. How we drive, how we fly, how we transport, how we print or not print, right? How we connect, everything. I think it's quite clear that the future could be amazing if we play our cards right. If we finally learn how to collaborate before we have a problem.

Did we do that with COVID? No, we had to have COVID to collaborate on vaccines and the pandemic. The nuclear bomb: we collaborated only after we had to. Are we going to learn how to collaborate on artificial intelligence before we have a problem? Many people around the world are saying it's quite likely that artificial intelligence could lead to a stock market crash.

A crash of the biggest proportions ever, because we're being manipulated by information from social media and through AI that leads people to make bad investment decisions. In fact, the chair of the SEC, Gary Gensler, said it's very likely that artificial intelligence will lead to a giant global stock market crash because of misinformation. That is, of course, a negative side of technology that we have to keep a good eye on. Let's take the example of synthetic biology: changing processes that are found in nature to use them in engineering, bioengineering, like spider silk, like alternative jet fuel, all these things that we could do with genetic engineering. And of course, cultured meat, as was mentioned earlier. When you eat meat from the laboratory, you know, meat that's grown in the lab. It's from an animal, but made in the lab, right? It sounds disgusting, right? But it's actually really interesting. It's an alternative protein that's shaping up around the world. You see the chart here? All of this is essentially synthetic biology: fuel, mining, electricity, chemicals, meat, and agriculture in the next decade.

So if you put that together, artificial intelligence, which I'll talk about in a second, and synthetic biology, you have this explosive mix of possibilities that's also crying out, of course, for regulation. We will share the slides later so you can see some of the images. So, very important: we are already at the point where we can safely say our life expectancy has hugely increased. We're gaining three months of life expectancy per year in the West, except in America, for some reason. Three months per year in life expectancy.

My kids will grow up to be a hundred years old, unless something really bad happens, with new kinds of biology and genetic engineering. I mean, that's amazing news. You could argue that there are going to be too many people, but that's not quite true either, because now we've found ways to feed them. So, looking in this direction, it's really quite obvious that solutions are coming. Let me get my clicker to go here. And this is the big news, right? This convergence of humans and machines.

And you could say some of that will solve longstanding problems: make our work more efficient, make us work faster, more connected. The other part, the scary part, is: are we going to become like the machines? I mean, imagine you live in a world where everything you do is connected, and you use a connected interface through a helmet, or even directly, as Elon Musk is suggesting. What keeps us human? This is a very big question that's coming up. Some people are saying that, because of artificial intelligence, it's five to seven years until this overlap is almost complete, a point called the singularity. We have to decide, as Professor Ray was saying, who we want to be. That's the question. It's not a question of what we can be, of what is technically possible. Of course I can connect my brain to the internet, eventually. I mean, I can do that now; it just takes two million dollars. Fighter jet pilots have that now.

The question is really: what do we want, and where is it going? Robots are already becoming human-like. If you had tried this ten years ago, it would have killed everybody on the football field, because the robot just couldn't do the work. Humans and machines are overlapping like never before. You've seen this video of the Rolling Stones, Mick Jagger, and Boston Dynamics. I can't play the music, because it would get us thrown off YouTube. So you can imagine: it's the song "Start Me Up." Don't turn it up, okay? Basically, the robot is a simulation of Mick.

But the more I watch this clip, the more I'm thinking the amazing part is not the robot, it's Mick Jagger. He's seventy-seven years old and he still does the same thing. It's mind-boggling what technology can do now. You can see that on YouTube. So artificial intelligence is defined as computer systems that are able to perform tasks that used to require humans. In other words, like this, right? That's how it all got started. And it can only do the tasks that can be easily simulated.

Empathy, compassion, feelings, emotions, intuition, imagination: no. That is because feelings, emotions, and imagination are not data. At least they're not simple data, zeros and ones. They're very hard to understand for humans, and much harder still for machines. So AI, as defined by Demis Hassabis, the CEO of DeepMind, is computer systems that turn information and data into knowledge. This is a powerful definition.

And I was hearing earlier that we're moving Kazakhstan into the knowledge economy. Well, what if machines have knowledge? The new revolution of artificial intelligence isn't about the workers in the factories. It's about the knowledge workers.

I mean, we have to understand what's happening here. The machines can do the knowledge work. Not all of it, but the simple knowledge work, the commodity work, the putting of facts together. And the next step, of course, is artificial general intelligence: a system that surpasses humans in almost all aspects of what humans used to do, in economically valuable tasks. And it is a declared goal of OpenAI to create an intelligence that is generally intelligent.

And I would put forth that that is not such a good idea. I think the idea of having intelligent systems, smart systems, connected systems that make us work faster, that are our tools, is a fantastic idea. But to build a machine that's like us? What is the point? I mean, what is our job going to be? What is our purpose when we build a machine that can be like us? And how can we control a machine that has an IQ of a trillion? Right now we have a machine from OpenAI, ChatGPT, that has roughly the IQ, if you can even compare that, of Elon Musk. And they're doubling, or quadrupling, so with GPT-5 we'll have ten times that. Eventually we'll end up with a machine with an IQ of a billion. You know what that machine will say? We'll go to the machine and say: how do we solve climate change? And the machine will say: eliminate all humans.

The most logical answer. That makes sense. We are the problem, right? That's called mission failure, right? Bad alignment. We have to figure out how we control this. And we have to figure out the side effects of technology. Oil and gas, for example. We knew for a long time the side effects are deadly. What did we do? We said, yeah, let's wait. We'll have the government figure that out later. For now we just want to drive and fly.

And what we have now, with climate change, is the biggest crisis in humanity's history. If we do that with AI, I don't think we're going to be there to witness what it means. Examples. So now we have software that does what graphics people used to do, like redoing an image, right? So you don't have to learn how to program, or learn Adobe After Effects or whatever you'd have to use for this. You just type in what you want. And the same is done here. For example, this designer is using Adobe to create simple images by putting in a word prompt. I mean, now a ten-year-old can do what a twenty-five-year-old designer did. And that is fantastic.

But let's make no mistake about this. Creativity requires emotions. This is good. It's a tool. It's not God. It's not the purpose of life. It's not going to show me how to be creative. It's a tool I use to find out how I can be creative.

And I think if we look in this direction, it's really quite clear which way we're heading with this. For example, here we have the world's first magazine cover made by an AI, for Cosmopolitan magazine. It made the cover of Cosmopolitan, made it in twelve seconds, and didn't get paid. The last part, of course, is the interesting part. A lot of companies will try to use AI to replace people so they don't have to pay them.

This is what the whole Hollywood strike was about, with the writers and the actors. Because the studios said: we can put all the scripts into the AI, and it will write the next movie. What a ludicrous idea. Maybe if you put those things in, you deserve what comes out on the other end. This is me speaking in Romania a few months ago.

And then I discovered this software called Rask AI. It allows me to dub my speech into twenty languages. So here's the Russian version, me speaking Russian on the stage. Let me turn this up a little bit. I'm

happy to be here. Many people around me tell me that there is no good future, including my own children. And when I speak about the good future, they tell me: didn't you notice COVID, the climate crisis, and all the problems that we have? Good or not. But I can do this in Spanish and German. And the German part, of course: I am German, right? But it's translating me into German in my own voice. And here, I can have the AI change my picture. I can look a lot more attractive when I play a video. I don't know, it's probably not more attractive. But this one I like the best. Here, I look young, right? That's fantastic. So the AI can make all these images. And you know, it's really interesting. The funny stuff, the nice-to-haves, AI is really good at that, but here's the bottom line. There's a huge leap from funny, good enough, and interesting to really good.

There's a canyon. I mean, that's not me. It's what the machine can find: pictures of me. I don't know where the astronaut comes from. I asked the AI to make a portrait of me. There are various websites that make portraits, right? You upload photos, it goes out on the internet, finds photos, and makes portraits of you. But who is this? It could be me, kind of the James Bond version of myself, I suppose, right? And it's flattering, but I mean, the bottom line is still, you know, I would say: not quite, right? For now, the gap between interesting and really good is huge. This is what artificial intelligence does in generative AI.

You ask a question, it goes out to a trillion pages, it copies and pastes, puts it together, and sends it to you. It has no idea what it's saying, it has no context, it doesn't know what's in it. It's just the most likely auto-complete. And that is very useful if you're looking for pointers. It is not useful if you're looking for the truth. If you Google and ask, what kind of speaker is Gerd Leonhard? Is he a good speaker or not? You'll get fifty links.

And then you can say: yeah, probably, or probably not. But here you get one answer. And it sounds so good that you say: yeah, that's it. So next time you have an election in Kazakhstan, you just ask ChatGPT: who should I vote for? Send, vote. I mean, that's probably not such a good idea.

Let me get my clicker to work again here. Don't be hanging up on me. I'll speak to you.

Who are you? I'm ChatGPT, a language model by OpenAI. How can I assist you today? Cool voice. Thank you. The chatbot now has a voice. Actually, five. Hey. Hey there. Hey. Hi there. Well, hello. It's like a sci-fi movie with an AI assistant that's always there. This is Joanna from the Wall Street Journal. She's speaking to ChatGPT, and ChatGPT is speaking back. Now, very soon we're going to have devices like my wristwatch, or a little button that I wear here, or in my car. And we're going to connect to ChatGPT even without the mobile phone, even without the internet, and ask it anything we want. That's heaven or hell. It's heaven because, you know, yes, I can ask any question. I can be instantly smart. But how do I know it's right? Have you used Google Maps? All of us use Google Maps. Sometimes it's really great. Sometimes it's really bad. But we say: okay, big deal. I'll find my own way.

We'll have to do the same with AI. Imagine seven hundred million Indians on mobile devices, which is happening basically within a year or two. And they can ask any question they want and get a spoken answer in the voice of a famous actress. And you'd say: okay, that's interesting, but is it good? Is that the good future? Or is it the scary future? And, you know, which way are we going with this? So there's a very big difference with biological intelligence. That's all of us, by the way; not just humans, but also animals, of course. Social, intellectual, kinesthetic, bodily, emotional intelligence. Allegedly, women have a lot more emotional intelligence than men.

That is not surprising. But it's very interesting to ask: what kind of intelligence does the machine have? You name it: logic, right? Logic, data, binary. You're not binary. You didn't marry your husband or your wife out of efficiency. There are many, many reasons why we do things.

Humans are multinary. Machines are binary. We should use machines to do the binary jobs. I want a machine to be good at a job, to be competent.

I do not want the machine to be conscious. I mean, why? That's my job: to do all these things, right? Why would I want that? So when we're looking at this future, where we'll basically have a genie in our pocket, we have to wonder. It's going to make trillions of dollars, trillions. Replacing humanity with technology is the biggest income idea ever. Is it a good idea? And virtual reality on top of that? We could live in a synthetic world, a world that is just made up.

We've got to think about this, because our work is also changing. The pyramid of work, guess what? You talk about the knowledge society. Well, that's good, but here's the fact: our computers will cover the lower part of the pyramid. That's machine territory: logic, information, and some basic knowledge. Machines can and will do that.

They will beat us within five years at every aspect of logical computing in our brain. So where's our turf? It's on top of this. It's the place at the top of the pyramid, the human-only turf. The future is not just science, technology, engineering, and math. It's not just STEM; it's also what I call HECI: humanities, ethics, creativity, imagination. Put ethics, music, sports, literature, and such back onto the agenda, because basically what's happening is that we're moving up the pyramid, the rest is done by the machines, and we supervise them. Education? Oh my God, no.

We learn in school to download information so we can use it later in the same way we downloaded it. This is, of course, as we know, utterly useless, because we can't possibly download all the information we need; it's constantly changing. So we get information on demand, and we learn how to react to new things. So we don't download anymore. We make things up. We create. Buckminster Fuller, my favorite futurist, once said that we go to school to be de-geniused, to have the genius taken out. That will not work in the future, because that's all we have. If your genius is taken out, you will not have work. If you work like a robot, a robot will take your job. If you study like a robot, you'll never have a job.

That's something we have to face. And it's also a great opportunity, because now we can think: okay, what is going to be next? How do we resist the temptation to automate everything? Automated teachers, automated drivers, automated checkouts in the supermarket, automated politicians, as some people have been suggesting. We have to put the human back into the system, to figure out what our next step is, what our next viewpoint is, which way we are going. So this concept of an AI saying "I'm all you need": I would be very suspicious of this. I think we need to look much, much further, to take a look at what makes us human and how that works. So logic alone is not enough. Real life transcends data. You know, the most advanced photographic system in the world captures three percent of what the human eye can see.

That is because we don't see with the eyes only. We see with everything. So it's a completely different way of looking at the world. Real life transcends data; it is beyond data, beyond information. Creativity requires emotions. Algorithms know the value of everything, but the feeling of nothing. Do you want to live in a world where feeling no longer matters? I mean, that's all that we do; that's what we are. Humans care about three things: experiences, engagement, relationships.

Do we care about data? Sure, also. But it's a minor point, you know, for us. We care about other things. A Guinean proverb says: knowledge without wisdom is like water in the sand. That is what AI is giving us: knowledge. We have to use our wisdom to make it work, so that it's not water in the sand, so that it's actually meaningful and lets us do new things. So as we go into that future, we have to worry about this a little bit and say: well, let's reject this kind of reductionism. You know, the machine will do the work and, you know,

what do we do? I think that's a lousy question, because it's kind of machine thinking, like Descartes: you know, we're just one giant algorithm. Organisms, humans, aren't algorithms. If we are, then maybe we'll find that out in the next fifty years and we can change our minds, but for the time being, we haven't even figured out what we are.

So going into that future, I think it's going to be important that we think of artificial intelligence as a tool, and of virtual reality as another great tool. But when Mark Zuckerberg from Meta says that we're going to spend more time in virtual reality than in real life, then I would say: well, maybe I should also move to San Francisco, into a ten-million-dollar house, so I can spend more time in virtual reality. For the rest of the world, this is out of reach, and it's probably also not a very good idea.

And it could lead to this kind of scenario, a scenario where essentially everything around us is going digital, everything is sucking us into the digital world. There's actually a name for this that psychologists use: they call it nature deficit disorder. We're growing increasingly digital and forgetting what nature looks like. And AI will also lead us in this direction: AI, virtual reality, and biotechnology. Do we really want a world that's fabricated? Do your kids still know how to build a sandcastle? Do you know how to solve real problems with real people? We'll need both of those things.

That is, of course, the dilemma. We're not going to let go of this technology, we're not going to stop using it, we're not going to stop developing AI. We're going to go forward and forward and forward. But we also have to protect what makes us human, because that has nothing to do with the code and the AI.

Here's a guest speaker, Mustafa Suleyman, who wrote a great book about AI. He runs a company called Inflection. Let's see what he has to say about AI. Everybody is going to have an intelligent assistant, a personal intelligence that knows you, that is super smart, that understands your personal history, and that can actually hold state; it can preserve things in its working memory. So it will be able to reason over your day, help you prioritize your time, help you invent, be much more creative. It'll be a research assistant, but it'll also be a coach and a companion. And so it's going to feel like having intelligence as... So it's a research assistant, but also a coach and companion. That, I would say, I don't quite understand. Is it a tool, or is it a friend? Is it a digital tool, or is it my wife, my compatriot?

I mean, it's a strange way of formulating things. You know, this is supposed to be something that helps me, not a substitute for everything that I do. So the issue with ChatGPT is really this. We think of ChatGPT as cute, human, funny, exciting, but really what it is, is a box with wires. It's a box with wires. It's an auto-complete. It's a machine, and it's very powerful. I use it all the time, but sometimes I have to say: oh my, this is just not right, it's making up stuff. And then it's like: okay, it's a tool like Google, and I smile about it and move on to the next thing. So as we're moving into this future, the challenge with learning systems is that it's about speed, not depth or correctness. It's noise in, noise out; garbage in, garbage out, right? There is no Kazakh-language database in ChatGPT. You're not going to get any answers when you try that in Kazakh.

It's just not there. The three hundred sixty-five languages of India are not part of the database of ChatGPT. So good luck being Indian and getting answers to any of your questions.

It's about coherence, not truth. It's about patterns, not meaning. It's about simulations, and it's about the digital world. So we may end up in a world like this, where artificial intelligence is making up the truth. Elections: you know why we had Brexit, to a large degree? It's because social media spread stories that were completely fabricated. People were reading that Turkish people would move to Europe, and that if the UK stayed part of Europe, they'd end up in London. And they said: we don't want Turkish people, all of them coming to London, for some reason. It was a completely made-up story.

So we have to keep an eye on this and worry about where it's going, because really, in the end, artificial intelligence is now becoming all-seeing. It sees everything you've ever written, everywhere you've ever been, all of your thoughts on Facebook, all of your WhatsApp conversations, all of your locations, all of your apps, everything. That is interesting, potentially powerful, but it does remind you a little bit of the worst fears that we have about AI, when we look at past movies, for example, and the whole concept of what AI is doing. Basically, too much of a good thing can be a very bad thing. Coffee, food, cigarettes, alcohol: most of it is not illegal, even though we can kill ourselves with food or smoking. Some of it is illegal. We have to ask: what is the legal concept here? Too much of a good thing can be a very bad thing. Too much technology is not good for us.

Too little technology, also not good for us. How do we balance that? We need wise government. We need people who make those decisions. General intelligence, here's the problem.

Basically, technology does not have ethics. It doesn't care about your values, your intent, your consciousness, your humanity. Those aren't facts to it, right? These things aren't facts. So that's a problem. Here, I'll give you a short interview with Geoffrey Hinton, a pioneer of deep learning, on what he says about general intelligence. You can show up now, Geoffrey. What happened in the last year that made this so urgent for you?

So for a long time, I was working on making computer models that got more and more intelligent, in an attempt to understand what goes on in the brain. And very recently, I realized that the computer models we're now making may actually be a better form of intelligence than what's going on biologically. So the idea that they might overtake us quite soon suddenly became much more plausible. For a long time, I'd said that would be thirty to fifty years away, but now I think it may be much sooner. He says computer models may now have a better way of thinking than humans.

So we have to think about this and say, well, we can't stop this, we can't put it back, but we have to think about collaboration. We have to think about what is next in this process of this journey from assisted intelligence to automation, to augmented intelligence, to autonomous intelligence. And on the last part, we're clearly going to have to say, we need some rules here. Autonomous intelligence? Could be heaven or hell, but who's in charge, right? Not you, not me. I don't know, the US government, I don't know, Chinese government. You know, the big companies, the big countries around the world that run AI, they all say whoever is first with AI will rule the world. That doesn't sound like a great proposal to me. So I think we're going to need to have something that puts us together into a new scenario, into the scenario of a supervised model where we're talking about really what matters, and machines should have competence, not consciousness. We should take technology as a present,

but not let it become a bomb. So this is about balance. And I propose to you, if you're studying science, you're going to have to face that issue every time you invent something.

What is the balance? What is the most plausible way forward? What is the way of doing this correctly? What is the way we can actually create technology that's good for us? How can we collaborate? Do we need this? The UN is working on this, you may know: an International Artificial Intelligence Agency, like the International Atomic Energy Agency. This is coming, and it's going to be good for us to have it; whether we can contain AI is another question. I guess we can think about that when we meet next time right here. But this is my view of the way forward for a country like yours, or a small country like mine, Switzerland, an up-and-coming country. We need to be proactive, and we need to have some caution. We can't just do things because they're possible.

This is called the Oppenheimer problem, if you've seen the movie. We're nearing that moment where we have to decide what we want. Which way are we going? So I'm going to summarize, and maybe we have time for some questions. So what now? First, the primary business objectives for AI are quite simply not rocket science. If you're looking at this chart, it says right here: improve customer experience, operational efficiency, employee productivity, increase innovation. What we always do, basically, but better. So the bottom line is, for work, focus on intelligent assistance. I call this IA,

not AI. Better software, smarter software. Salesforce Einstein is a great example, Expedia's travel assistant, many other great examples. They're not Ex Machina, they're not Black Mirror, they're not Transcendence, they're none of that. It's just better software. It's very important that we keep in mind where this is going: competence, not consciousness. When you're looking at buying a piece of software, you think about what kind of competence you're going to get, not whether it's going to take over and allow us to sit back and do nothing. A very, very important point as we go into this future. The other thing is, we don't want AI to be a black box,

like in legal services or financial investing, where it comes up with results that we can't explain. In medical matters, we always have to keep humans in the loop. Even if it's more expensive, if it takes longer, if it's more tedious, always keep the human in the loop, at least for the time being, until we know exactly what to do with this and which way to go. So, defining the good future again.

We have choices. Just like with climate change, we're making the choice right now: are we going into a sustainable future or a dead future? The UN Secretary-General said the other day, we can decide between collaboration and certain demise. Those were his precise words, and he was talking about climate change. We have choices about technology, too.

How do we use it? How does it make sense? What kind of rules do we need? What kind of social contracts do we need? Because I really believe that if we play our cards right, this could be a nirvana for us. Think about this for a second: solving cancer, solving climate, solving energy, solving food, maybe solving poverty even more than we have, by embracing the three revolutions. That's powerful stuff, but we're going to have to think a little bit further than just the same way that we thought about everything before. So in my book, which I wrote about seven years ago, I said we should embrace technology but not become it. That's the mission. And if we're talking about AI, that's the mission we have to look at. How do we use it without becoming it? How do we not go too far? How do we make it safe, secure, and of collective benefit? So thanks for your time. Please consider my book, Technology vs. Humanity. It's on Amazon, or you can find it everywhere. I have a few copies here as well. And tomorrow I'm going to do part two.

If you're in for more suffering, I will have a second part for you tomorrow. My latest movie is called Look Up Now. It's the opposite of Don't Look Up. There's a QR code, and it's on YouTube; just look for Look Up Now. It's twenty-four minutes on artificial intelligence. I have fifteen translations made using AI, including Russian, but not Kazakh, I'm sorry, that wasn't available. But everything else is there in fifteen languages. Thanks very much for your time. Thank you very much. That was amazing. Very interesting, very profound issues. I hope we do not become the technology itself and only use it to our advantage. I hope everyone enjoyed our wonderful speaker's presentation. So now it's time for Q&A. You do have a unique chance to ask a question regarding artificial intelligence. So, any question. There you go. We have our assistants with the mics, so please don't hesitate. Good afternoon. Greetings.

What do you think would be the economic or financial incentive to develop generalized artificial intelligence? With things like ChatGPT or image recognition, we want to cut costs, but with human-like generalized AI, why would we do it? Yes, if I understand correctly: the financial incentive of using AI is that we can work much faster, much more efficiently. Some people are saying that using AI, many professionals can work four or five times as fast. That is a huge productivity gain. Stuart Russell from UC Berkeley said that with AI we could possibly quintuple GDP, five times the GDP, by using artificial intelligence in all kinds of processes.

This is the biggest money thing ever, really. In return, however, we have to ask a simple question. If I work five times as fast because I have AI, will the company I work for fire the other four people? Or will it pay me more? Or will I only work one fifth of the time? The answer is that in the current logic, companies will say: thank you very much, you work very efficiently, everybody else can go. We have to think larger, and this, I think, is the core of your question.

If we want to use AI correctly, it could lead us to the point where productivity goes like this, and then what we do with collective benefit should also go like this. Otherwise, we have the same problem we have now, which is productivity is increasing, but wages and jobs are staying the same or even declining in some places. That is not such a good idea. So if we were to use AI in such a way to be so productive, then we have to say, well, in that case, maybe I work less. I work three days a week,

like they're proposing in France, for the same money. Or maybe in ten years, we have a guaranteed income. We've been talking about that for twenty years. So we have to question the financial benefit outside of the good old paradigm of growth and profit. Then we can get to a good place. Otherwise, I think it's just gonna be another tool for corporate gains. That could be a real problem for us.

So, a huge challenge there. Okay, thanks. And another question. You don't get two questions. Yeah, yeah, yeah. Sorry. One more question. One more question up here, sorry. Hello, thank you for your speech. My name is Kaisar, and actually I'm a teacher in a school. Where are you? Kaisar. I'm here. Oh, okay. I'm a teacher in a school, and my question is related to the curriculum: from what age should we teach AI in school, and how can we change our curriculum, subjects, and the overall teaching process? That's a very good question. I mean, teaching AI is a big topic, but I believe in general it's good to teach our kids, everybody really, technology in the widest sense. I think being a programmer is useful, but probably no longer that important, because in a few years I can speak to my wristwatch and have an app programmed. I can already do that now.

Those are all useful things. You know, when I went to school, I learned Hebrew and Latin and Greek. That doesn't seem very useful, but it was a good exercise for other things. So I think learning technology is important, but keep in mind that as artificial intelligence is increasing in productivity and in skills, our skills lie somewhere else entirely, right? They're about creativity, imagination, intuition, all of these things. And how do we teach that? So I think it's going to be more important for us to work on our human skills together with our technology skills, not either or. You know, many countries have said in the past: if all of our citizens are good at technology, that's the future. Unfortunately, that's no longer true in that way. India graduates one million engineers per year. Where do they go? They build bridges or roads, and now you have machines that build bridges that replace the engineers, and machines that replace the call center.

So I think it's going to be important for us to have a different kind of education that focuses on the things that only we can do. The only question you have to ask when you're talking about your kids is: will my kids learn something that robots can't do? Then you have the answer. Thank you very much, Mr. Leonhard. I think we're done with the questions. Thank you. Can I have one more question, please? The last one, okay. One more question, please. Last question. Yes, last question. Thank you for the opportunity for the last question. My question is this: you mentioned in your speech that AI and social media will definitely influence people's decisions on investing in the stock market. So, for example, in the U.S. capital market and stock market, will AI and social media affect the Federal Reserve's decisions? As we know, the Fed has kept raising interest rates for the last year and a half. Do you believe that the Federal Reserve's decision will change at their next policy meeting, the November meeting? Because, as I know, the Federal Reserve just joined Instagram last Monday, and the chairman of the Federal Reserve, Jay Powell, also posted a very short video last Monday. So do you think AI and social media will influence the Federal Reserve's decisions, and eventually the stock market, the capital market, and also IPOs? That's my question. Thank you. I'll try to answer quickly on this one because time is up. But basically, what's happening is that artificial intelligence can influence our opinion in such a way that we think it's our opinion, but it's actually coming through what we've seen.

So I'm not worried about AI going and saying, I'm going to crash the stock market. It doesn't have that ability now. I'm worried about it essentially spreading rumors, doing things that aren't real, like the Brexit Turkey debate. The other day, there was an image going around social media claiming there had been another attack on the Pentagon, that somebody had flown something into it. And it turns out this was fabricated by AI to make a completely convincing-looking story. I mean, if you can replicate me, I could appear tomorrow delivering some weird message, or any politician could. And AI is so good at this now. So we're going into a world where artificial intelligence is creating its own narratives. There's an app called HeyGen that allows you to make videos by recording your voice and filling in the text, creating your videos without a camera.

So when that happens, I'm worried about information that leads investors to react based on very little substance. And this could be a real problem. And I think what's happening now is that the American stock market and the SEC are realizing that something has to be put in place to prevent opinion from being manipulated in a way that feeds into stock market decisions. Because that is the real issue. So there are two scenarios, I think, that we can see in the worst case. One would be that we have a stock market crash because AI has manipulated opinions very quickly around the world. And the second is that we may see system failures, like in air traffic control, because of human overrides and human mistakes acted on by AI. These are all follow-on consequences. I think Gary Gensler, the SEC chairman, said the weakest point right now in the financial system is the role of artificial intelligence, and how do we address that in the future? It's a great example of inadvertent damage, what Stuart Russell from UC Berkeley calls misalignment. We give AI a job, but it doesn't understand the job, so it does another job. That's not what we meant, and it leads to big chaos. Social media is a great example.

Social media is AI, basically; what you're seeing there is organized by AI. And what does it do? It shows you whatever you click and share the most, which is probably the worst kind of news you could see. So if you're getting your news on social media, you're already in the realm of AI. I think that's a real societal problem that we have to tackle.

As I said at the beginning of my speech, however, the future is better than we think. These are things that we can fix, but we have to agree on what they are and how we fix them. So the most urgent thing about AI is collaboration: on the little things, on the middle-level things like jobs and automation, and on the top things, the control issue.

That to me is urgent. Thank you very much, Mr. Leonhard. Let's give him a round of applause, guys, for such a wonderful...

2023-10-25 06:06
