Debunking AI: Ensuring Artificial Intelligence Doesn’t Destroy Our World

[MUSIC] Welcome to TecHype, a series that debunks misunderstandings around emerging technologies, provides nuanced insight into their real benefits and risks, and cuts through the hype to identify effective technical and policy strategies. I'm your host, Brandie Nonnecke. Each episode in the series focuses on a hyped technology.

In this episode, we're debunking artificial intelligence. You cannot avoid hearing about artificial intelligence, or AI. It's literally everywhere. When you search for something online, an algorithm was used to provide you a response. When you go to the hospital to get a CAT scan, an AI-powered image recognition system was probably used to aid the doctor in spotting any abnormalities. When you apply for a loan, no surprise there, again an algorithm is used to determine your creditworthiness. Last year, over 14,000 AI startups were in operation in the US alone. A PwC global artificial intelligence study shows that AI has a potential $15.7 trillion contribution to the global economy by 2030.

AI appears to be simultaneously the greatest benefit and the greatest risk to the world. While it can contribute to greater efficiency and effectiveness, the technology also poses serious safety, security, and bias risks. What can be done to better assure we realize the benefits of this transformative technology while mitigating its risks? Today I'm joined by Professor Stuart Russell OBE, Professor of Computer Science at UC Berkeley, Director of CHAI, the Center for Human-Compatible AI, Director of the Kavli Center for Ethics, Science, and the Public, and author, with Peter Norvig, of Artificial Intelligence: A Modern Approach, which I understand is the standard text in AI.

It's been translated into 14 languages, and he is also the author of Human Compatible: Artificial Intelligence and the Problem of Control. Stuart, thank you so much for joining me today for this episode of TecHype. Well, thanks for inviting me. Thank you. I think it's really important that we first start with a definition.

There's a lot of misunderstanding around what artificial intelligence is. You are the expert: what is artificial intelligence? I think everyone understands it's about making machines behave intelligently, and that dream actually goes back thousands of years. You can even find Aristotle talking about fully automated musical instruments that play themselves and things like that. What does that mean as an engineering discipline? What I call the standard model, since pretty much the beginning of the field, has been machines whose actions can be expected to achieve their objectives. This was borrowed from notions of rationality in economics and philosophy. It really focuses on how the machine behaves, and then, if you want a thought process inside, well, that's an engineering decision: is that the right way to achieve this kind of intelligent behavior? But now when we go online and we interact with ChatGPT, as many people have found out, it's really hard to dispel the illusion that you're actually conversing with an intelligent entity.
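
As a small aside, here is a minimal sketch of the standard model just described: a machine that picks the action whose expected outcome best achieves its objective. The actions, probabilities, and utilities are entirely invented for illustration; nothing here comes from the conversation itself.

```python
# Illustrative sketch of the "standard model": an agent picks the action
# whose expected outcome best achieves a fixed objective.
# All numbers and names below are made up for illustration.

# Each action leads to several possible outcomes: (probability, utility toward the objective).
outcome_model = {
    "route_a": [(0.8, 10), (0.2, -5)],
    "route_b": [(0.5, 20), (0.5, -15)],
    "stay_put": [(1.0, 0)],
}

def expected_utility(outcomes):
    """Expected utility = sum over outcomes of probability * utility."""
    return sum(p * u for p, u in outcomes)

def choose_action(model):
    """Act so that the action can be expected to achieve the objective best."""
    return max(model, key=lambda action: expected_utility(model[action]))

if __name__ == "__main__":
    for action, outcomes in outcome_model.items():
        print(action, expected_utility(outcomes))
    print("chosen:", choose_action(outcome_model))  # -> route_a (7.0 vs 2.5 vs 0.0)
```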

I agree. It's the same way as when you watch the movie Titanic. Yes, of course. It looks like there's water. Yes.

There's no water. It's all computer-generated water. There isn't any water. But you can't look at the movie and think it's anything other than water, because it's splashing and foaming and people are getting wet and all the rest; there's got to be water. No water. We need to learn how to inoculate ourselves against this illusion, and one way to do it is to think: when I read a book that's got intelligent arguments and so on, maybe some beautiful poetry, do I think that the paper embodies the intelligence? No, of course not. I think, yes, there's a human over there somewhere and they wrote all of this.

[OVERLAPPING] They're very intelligent and they created this, and it's just an artifact. Just like a medium. ChatGPT is somewhere in between, so it's on a spectrum from just a printed copy of human sayings to something that is actually originating through a thought process.

I don't mean here a thought process that involves real consciousness, real subjective experience. I think that's a whole different story. But just a thought process where it's meaningful to say that it knows things, it's meaningful to say that it goes through reasoning steps, that when you ask a question, it's referring to its knowledge to answer the question.

Right. If I ask you, where is your car parked? You have an internal picture of the world, you refer to it, and you say, oh yeah, it's on the fourth floor of the parking lot or something. That's how humans mostly answer questions, but sometimes we don't. If someone says to me, "Hi Stuart, how are you today?", I say, "Fine, thanks, how are you?" I'm not really referring to an internal model, and if I did, I would go on and on complaining about this, that, and the other.

Sometimes we just respond in this sort of automatic reflex way, and as far as we know, that's mostly what these systems are doing. Mostly, except Bing's Sydney, which is completely unhinged if you ask it a question. Well, but. Yes. People believe, and again we don't know because we don't have access to the training sets, that they probably trained it on a lot of emotional conversations between individuals. Drunk texts at 2:00 A.M., I think, is what it sounds like.

A psycho girlfriend or psycho boyfriend. I think it's one you are trying to dump, and they're trying to convince you that actually, no, they're the right person for you, so there's a lot of that going on. A lot of red flags.

But it's talking about its feelings for the person who is interviewing it or asking it questions. Of course, it doesn't have any feelings as such, so this is just fiction; it's not referring to any internal model or state. All it really does is take the previous 3,000 words of the conversation and, based on training on trillions of words of text, output the most likely word that comes next. In that sense, it's like a parrot or a book. Yes.

That it's a transformational process from this vast corpus of training data. Exactly. We actually have no idea what that transformational process is, how it works inside.
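
To make the "most likely next word" idea concrete, here is a minimal sketch of next-word prediction. A real system like ChatGPT uses a neural network conditioned on thousands of prior tokens and trained on trillions of words; this toy stands in a simple table of word-following counts, and the tiny corpus and function names are invented for illustration only.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for a language model: count which word follows which in a
# tiny corpus, then repeatedly emit a likely next word given the previous one.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in training."""
    counts = follow_counts[prev]
    if not counts:                      # unseen context: fall back to a common word
        return "the"
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))   # e.g. "the cat sat on the mat the cat ate"
```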

It's theoretically possible that ChatGPT really does have internal knowledge states, really has developed internal goals, solely for the objective of becoming better at predicting the next word. Because where are those words coming from? They're coming from humans. Right.

Those humans have internal goals, and that's why they wrote the next word. The humans didn't write the next word because the previous 3,000 words were on the page. Yes. They wrote the next word because they're trying to tell you something, because they want something, they have their own internal drives. It might be that the best way to predict what the human is going to say next is actually to become like a human, to actually develop internal goals and knowledge structures and reasoning and planning and all the rest. But again, we have no idea, because we didn't design these systems, we just trained them, and we have no control over what's happening inside.

Yes. Now I've played with ChatGPT a bit, and I don't know if I should say this aloud, but I gave it a prompt, something like "write an op-ed on this topic," and it spat it out, and it was almost exactly how I would have written the article. And then I thought, maybe we're not as creative as we think, when this is creating something I would write. Or I said one time, give me a syllabus for an artificial intelligence governance course, and it pumped out everything that I would think to put in a course. But there are probably many, many such syllabi already on the web.

Exactly, and that just shows that when we look at ourselves, as individuals, we're not necessarily that creative; we're following a norm of the profession. Maybe this can reveal to us how uncreative we actually are. Yes. Push us to be more creative. I think it could actually have a positive impact on how we think about education.

Actually, we don't want to train a lot of human ChatGPTs, you see, in the classroom. Will you have your students use ChatGPT? Not for what we're doing, and I think there's a debate going on right now, mainly in the media, where you've got some educational experts saying that anyone who thinks students shouldn't use ChatGPT is just one of these dinosaurs, the same dinosaurs who, back in the 19th century, or even long before that, were saying that if students ever start using these mechanical calculating devices, then that's going to be the end of civilization, or something like that. There are two responses to that. What a mechanical calculating device or an electronic calculator does is actually automate an extremely mechanical process of following an arithmetic recipe. I bet most listeners, including me, don't really understand what's going on when doing long division. It's just a recipe: I know you're supposed to bring these numbers down and keep things in the right columns.

Carry the one. Carry them, and do this and that, and write down the dividend and so on. But what's actually going on? Why does that give the right answer? No one ever teaches you that. It's purely mechanical, and it's not really about the understanding of number and arithmetic. But if we were to give people calculators and never teach them what numbers mean, what plus means, what multiplying is for, what this sine function is about, it would be an incredible disservice to them.
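
As a small illustration of the point that long division is a purely mechanical recipe, here is the schoolbook digit-by-digit procedure written out as code. The function name and example numbers are mine; this is just the standard algorithm, not anything from the conversation.

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """The schoolbook long-division recipe: bring down one digit at a time,
    see how many times the divisor fits, write that digit, carry the remainder."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):                        # work left to right
        remainder = remainder * 10 + int(digit)        # "bring down" the next digit
        quotient_digits.append(str(remainder // divisor))  # how many times it fits
        remainder = remainder % divisor                # carry the leftover forward
    return int("".join(quotient_digits)), remainder

print(long_division(9876, 7))   # (1410, 6), i.e. 9876 = 7 * 1410 + 6
```

The recipe produces the right answer without the person, or the code, ever needing to understand why, which is exactly the distinction being drawn here.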

If we give them ChatGPT to answer all the questions that we set them, then they will never learn how to write, how to think coherently for more than one sentence, how to put together an argument, how to marshal facts. And facts are very important here, because ChatGPT marshals fiction just as much as fact. I know; in the syllabus I was mentioning earlier, it cited journal articles that don't exist, just made them up. There was an example of someone asking it what the most cited paper in economics is, and it just made one up, I think it's called "A Theory of Economic History," which just doesn't exist. It had some real authors, but they didn't write anything like that. It's complete fiction. You can ask it, what's the largest even number? It says 9,999,999,998.

It's silly. Obviously silly, and that's because, as far as I can tell, it doesn't have an internal reference model. It doesn't actually know things in the same sense that a human knows things. One of the things we do with our internal knowledge of the world is try to make it consistent, because we know there's only one world.

If our internal model is locally inconsistent with itself, then there must be something wrong, and you have to try to resolve that internal contradiction. But there are no such internal structures in ChatGPT. It clearly couldn't care less about contradictions, because, to give another example from my friend Prasad Thurlapati: which is bigger, an elephant or a cat?

It says an elephant is bigger than a cat. Then you can ask, which is not bigger than the other, an elephant or a cat? It says, neither an elephant nor a cat is bigger than the other. It contradicts itself in the space of two sentences on a pretty basic thing. On TecHype, every episode we debunk three misunderstandings, and I think in our discussion so far we've touched on a few of them, but let's solidify those. One, I think, is about this internal consistency, internal logic: that when you interact with the system, it feels so human that you think it's smart.

Yes. Number one misunderstanding? Well, I think that's one of many misunderstandings in the media. Probably one of the most important misunderstandings, and this is filtering even into very high-level policymaking, for example in the European Union, and in the UK government, and other places where they are in the process of making laws that are going to regulate AI, is that AI and machine learning, and particularly a form of machine learning called deep learning, which became popular around 2012, are the same thing. It's surprising to me, because we think that we're in this new state.

Are we in a new state right now? Is it actually different? I think there are some interesting things going on. If you stand back and say, well, what is deep learning and why does it work better? Obviously, we had machine learning methods before that, and there's an entire field of statistics which thinks of itself as being in the same business, namely taking data and training predictive models in order to predict things from the data. What changed? I think if you look at the models that we were using before, the two primary categories would be decision trees, which you can think of as long and thin, where each branch tests some attribute of the input.

If you're trying to fix the car, you say, okay, well, does the engine turn on when you turn the key? Yes. No. Well, if the engine turns on and the car still doesn't work, is the gear lever engaged, or are you in neutral? Yes. No. You follow that sequence, and then at the end it says, oh, okay, your fan belt's broken, or you're out of gas, or something; the diagnosis is at the end of the tree. That's a long, skinny structure. But in that, are you telling it what to do? Are you telling it, check this, check this? No. Those trees are generated by a machine learning process.

They could be built by hand. In fact, that's one attractive characteristic of those systems: you can look at them and understand what they're doing. Machine learning developed decision tree methods, as did statistics, and they are widely used in industry.
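
Here is a minimal sketch of the "long and skinny" decision-tree idea, assuming scikit-learn is available. The car-diagnosis attributes and toy data are entirely invented for illustration; the point is only that the learned tree is a readable chain of attribute tests, which is the interpretability just mentioned.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy car-diagnosis data (made up): each row is
# [engine_turns_on, gear_engaged, fuel_in_tank]; labels are the fault.
X = [
    [0, 1, 1], [0, 0, 1], [0, 1, 0],   # engine won't turn on
    [1, 0, 1], [1, 0, 0],              # engine on, gear not engaged
    [1, 1, 0],                         # engine on, gear ok, no fuel
    [1, 1, 1],                         # everything fine
]
y = ["dead battery", "dead battery", "dead battery",
     "gear not engaged", "gear not engaged",
     "out of gas", "no fault found"]

tree = DecisionTreeClassifier().fit(X, y)

# The learned tree is a chain of attribute tests you can read off directly,
# much like: does the engine turn on? -> is the gear engaged? -> is there fuel?
print(export_text(tree, feature_names=["engine_turns_on", "gear_engaged", "fuel_in_tank"]))
print(tree.predict([[1, 1, 0]]))   # -> ['out of gas']
```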

Then the other category, instead of long and skinny, you might call short and shallow, or short and fat. Methods like linear regression and logistic regression test all the attributes at once and then apply some simple function, like adding them up: if the weighted sum of all the attributes is more than some threshold, then you have the disease; otherwise you don't have the disease, or whatever it might be. Those methods are used, for example, in credit scoring; your FICO score is exactly the output of a logistic regression function applied to a bunch of attributes about your payment history and all the rest. So we had long and skinny, and short and fat, and deep learning models are basically long and fat.
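
To make the "short and fat" picture concrete, here is a minimal sketch of logistic-regression-style scoring: every attribute is looked at at once, multiplied by a weight, summed, and squashed into a probability. The attributes, weights, and the 300-850 rescaling are all invented for illustration; this is not how FICO actually computes its score.

```python
import math

# Illustrative only: made-up weights over made-up credit attributes.
weights = {
    "on_time_payment_rate": 4.0,
    "credit_utilization":  -3.0,
    "years_of_history":     0.05,
    "recent_defaults":     -2.0,
}
bias = -1.0

def logistic_score(attributes: dict) -> float:
    """Weighted sum of all attributes at once, squashed to (0, 1) by the logistic function."""
    z = bias + sum(weights[name] * value for name, value in attributes.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {
    "on_time_payment_rate": 0.95,   # 95% of payments on time
    "credit_utilization":   0.30,   # using 30% of available credit
    "years_of_history":     8.0,
    "recent_defaults":      0.0,
}

p_good = logistic_score(applicant)
print(f"probability of repayment: {p_good:.2f}")
print(f"toy score on a 300-850 scale: {300 + p_good * 550:.0f}")
```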

Then as we wrap up the show, I'd like to hear from you quickly: what do you think are the greatest benefits and risks of AI? Then I want to turn to some strategies, technical or policy, that you think we need to implement. So, greatest benefits, greatest risks? Well, the benefits of AI are in a sense unlimited, because if you think about it, what is the current level of intelligence that we have access to? It's what has built our entire civilization. Everything that we're able to do in the world is a result of our intelligence. If we had access to a lot more, we could have a much better civilization. I hope so. In a simple sense, in the Human Compatible book that you mentioned, I did a little back-of-the-envelope calculation: suppose we have general-purpose AI, which means AI systems that can do anything that human beings can do, including being embodied in physical robots and so on.

By definition, those systems would be able to deliver all the benefits of civilization that we have learned how to create so far, and deliver them to everybody at basically negligible cost. I like this utopian thinking. No science fiction: we're not inventing faster-than-light travel or eternal life or any of those things. We're just saying deliver what we already know how to deliver, except do it in a fully automated way.

That would raise the standard of living of everyone on Earth to a respectable level, the level you would experience in a developed country, and that would be about a tenfold increase in GDP, which translates, in terms of net present value, into what's the cash value of that technology. It turns out to be about $15 quadrillion.
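
The $15 quadrillion figure is just a back-of-the-envelope net-present-value calculation. Here is one way to reproduce a number in that ballpark; the world-GDP figure and the 5% discount rate are my assumptions for illustration, not values stated in the conversation.

```python
# Back-of-the-envelope sketch (assumptions are mine, chosen only to land
# in the same ballpark as the figure quoted above).
current_world_gdp = 85e12        # ~ $85 trillion per year (assumed)
gdp_multiplier    = 10           # "about a tenfold increase in GDP"
discount_rate     = 0.05         # 5% per year (assumed)

extra_gdp_per_year = (gdp_multiplier - 1) * current_world_gdp

# Net present value of a perpetual annual cash flow: NPV = cash_flow / discount_rate
npv = extra_gdp_per_year / discount_rate
print(f"extra output per year: ${extra_gdp_per_year / 1e12:.0f} trillion")
print(f"net present value:     ${npv / 1e15:.1f} quadrillion")   # ~ $15.3 quadrillion
```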

The other side of this coin now? Those are some of the benefits, and that creates enormous momentum. If you start talking about risks, people very quickly go to: oh well, there are so many risks, maybe we should ban it. Put the brakes on. [OVERLAPPING] Ban the technology, put the brakes on, slow AI down. [OVERLAPPING] Guardrails, we see all these references. But those kinds of thoughts, I think, have to be tempered by the knowledge that the momentum towards achieving general-purpose AI is strong and it's going to get bigger. I mean, if you think the tech companies are big now, as we approach general-purpose AI, they will be the economy of the Earth.

The momentum is, I think, basically unstoppable unless we have a very serious and very obvious accident, think of it as a Chernobyl on steroids, that we attribute to AI going wrong. What are those guardrails? What should we put in? What do you think we should do right now, whether technical or policy? Well, it depends on which risk you're talking about, and there are a bunch of risks that are already in play. Lethal autonomous weapons, where the primary risk is actually not accidentally killing a civilian.

The primary risk is that because they're autonomous, they can be scaled up. That one person can launch 1,000 or 100,000, or 10 million weapons and wipe out an entire country. That's a very serious risk and it's been very difficult to get governments to even acknowledge that that's an issue. There are risks from the way social media operates.

Social media algorithms control what billions of people read and watch. They have more control over human cognitive intake than any dictator in history has ever had and yet they are completely unregulated. Perfectly targeted propaganda. Yeah. It's an individualized and sequential propaganda because the system sees whether what it tried to get you to do worked and if not, it'll try something else. So it's like a reinforcement learning system.
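
Here is a minimal sketch of the trial-and-error loop just described: an epsilon-greedy bandit that keeps trying content types and shifts toward whatever gets a reaction. The simulated user and every parameter are invented for illustration; no claim is made that any real platform's recommender is implemented this way.

```python
import random

# Simulated user (invented): probability they engage with each content type.
true_engagement = {"news": 0.2, "sports": 0.3, "outrage": 0.7}

estimates = {c: 0.0 for c in true_engagement}   # learned value of each content type
counts = {c: 0 for c in true_engagement}
epsilon = 0.1                                    # how often to try something new

for step in range(5000):
    # Mostly show what has worked best so far; occasionally experiment.
    if random.random() < epsilon:
        choice = random.choice(list(true_engagement))
    else:
        choice = max(estimates, key=estimates.get)

    # Did the attempt "work"? (simulated engagement signal)
    reward = 1.0 if random.random() < true_engagement[choice] else 0.0

    # Update the running average estimate for the chosen content type.
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)   # the "outrage" estimate converges toward ~0.7 and dominates
```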

The main concern, and Alan Turing, I think, put it very succinctly: once the machine thinking method has started, we should have to expect the machines to take control. Because our power over the world comes from our intelligence, and if these systems are much more intelligent, then theoretically they're much more powerful. They would hold the reins. Well, not should. No, they shouldn't. We should hold the reins, or at least we think we should hold the reins. But how do you have power over something more powerful than you, forever? That's the question, and that's what I've spent the last seven or eight years trying to solve as a technology problem.

It's a very thorny question. I think with that, I'd like to thank you so much, Professor Stuart Russell. Thank you for joining me today on TecHype. It's clear that artificial intelligence has transformed society in fundamental ways, providing greater efficiency and effectiveness in a variety of domains while simultaneously posing serious safety, security, and discrimination risks. It's clear from our discussion that in order for us to move forward and realize the benefits of artificial intelligence, we must debunk the misunderstandings that surround it.

TecHype was brought to you by the CITRIS Policy Lab and the Goldman School of Public Policy at UC Berkeley. Want to better differentiate fact from fiction about other emerging technologies? Check out our other TecHype episodes at TechHype.org. [MUSIC]
