I believe chatbots understand part of what they say. Let me explain.

Does an artificially intelligent chatbot understand what it's chatting about? A year ago, I'd have answered this question with "clearly not, it's just turbocharged autocomplete," or "a stochastic parrot," as people more eloquent than me have put it, though for all I know they too might be chatbots. But I've now arrived at the conclusion that the AIs we use today do understand what they're doing, if not very much of it. I'm not saying this just to be controversial; I actually believe it, though I have a feeling I might come to regret this video.

I got hung up on this question not because I care so much about chatbots, but because it echoes the often-made claim that no one understands quantum mechanics. But if we can use quantum mechanics, doesn't that mean we understand it, at least to some extent? And consequently, if an AI can use language, doesn't that mean it understands it, at least to some extent? What do we mean by understanding? Does ChatGPT understand quantum mechanics? And will AI soon be conscious? That's what we'll talk about today.

The question of whether a computer program understands what it's doing certainly isn't new. In 1980, the American philosopher John Searle argued that the answer is no, using a thought experiment that's become known as the Chinese room. Searle imagines himself in a windowless room with a rule book and a drop box. If someone drops him a note written in Chinese, he looks up the symbols in his rule book. The rule book gives him an English translation, which he returns as an answer through a slit in the door. No doubt drawing on the everyday experience of a professor of philosophy, Searle argues that the person outside the room might believe there's someone inside who understands Chinese, but really he doesn't understand a word of it; he's just following the rules he's been given. Searle's argument is that a computer program works the same way: no true understanding, just following rules.

There are two standard objections that people bring forward against Searle's argument. One is that the system which understands Chinese isn't just the person inside the room, but the person together with the rule book. So saying that the person doesn't understand Chinese might be correct, but it doesn't answer the question, because in Searle's analogy the person alone doesn't represent the computer program. The other objection is that it might well be correct that Searle and his rule book don't understand Chinese, but that's because the input is so limited: language lacks the physical information that we have learned to associate with words. Software that had the same physical information could develop understanding as we do. Unless, of course, we live in a computer simulation, in which case you can file complaints using the contact form in the bottom right corner of your frontal lobe.

I think both of these objections miss the point, but before I explain that, I want to introduce you to the quantum room. Quantum mechanics works pretty much like Searle's Chinese room. It's a rule book: a set of equations with instructions for how to use them. You give me a question, I look into my rule book that I keep in my windowless room, and I return an answer to you through the slit in the door. Do I understand quantum mechanics? Searle would probably argue no. Indeed, for the most part, physicists today aren't even in the room, because who wants to spend their time sitting in a windowless room with a drop box when they can instead sit in a windowless room with a laser? No, we're now the ones putting a question into the drop box, so to speak, by feeding it into a computer. The computer crunches the numbers and returns an answer. Do we understand those answers? Have we gone too far with "shut up and calculate"? Is the room even there when no one looks? Those are all very interesting questions, but let's not get carried away; we were trying to talk about chatbots, so let's have a look at those.
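Before we do: in computing terms, Searle's rule book is just a lookup table, and it's worth seeing that baseline in code, because it's what everything below gets compared against. A minimal sketch in Python (the rules here are made up for illustration):

```python
# The Chinese room as a lookup table: canned answers, no understanding.
rule_book = {
    "你好吗?": "I am fine, thank you.",
    "今天天气怎么样?": "The weather is lovely today.",
}

def person_in_room(note: str) -> str:
    # Look the note up; if it isn't in the rule book, the room is helpless.
    return rule_book.get(note, "??? (no rule for this note)")

print(person_in_room("你好吗?"))      # convincing answer
print(person_in_room("你会做梦吗?"))  # not in the book -> no answer at all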
Today's language-generating models are somewhat more sophisticated than just lookup tables like the one Searle imagined, and what better way to explain how they work than to ask one itself: "Language-generating models like me are built using deep learning techniques, specifically a type of neural network. These models are trained on large amounts of text data, such as books, articles, and websites, and learn to generate language by identifying patterns and relationships between words and phrases. When generating language, the model takes an initial input, such as a prompt or a question, and uses the patterns it has learned to generate a response. The generated text is not simply copied from the training data; rather, the model uses the patterns it has learned to create new, original text." Well, that was not awkward at all. But yes, neural networks indeed learn similarly to how humans learn: they don't just memorize input, they identify patterns and extrapolate them. They still have many differences from the human brain, at least at the moment. Most importantly, the neurons in a neural network are themselves part of the algorithm and not physical, as they are in the human brain, and the human brain has a lot more structure, with parts specialized for particular purposes. But neural networks do capture some aspects of how humans learn, and that brings us to the first important point when it comes to the question of understanding.
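To make "identifying patterns and relationships between words" a bit more concrete, here is a toy sketch. It's nothing like the transformer networks behind ChatGPT, just word-pair statistics, but it shows the basic difference from the lookup table above: the output is assembled from learned relations rather than copied verbatim.

```python
import random
from collections import defaultdict

# Toy "language model": learn which word tends to follow which (bigram counts).
training_text = "what goes up must come down and what goes around comes around"

follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample a learned continuation
        output.append(word)
    return " ".join(output)

print(generate("what"))  # e.g. "what goes around comes around", built from relations
```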
Suppose you have children in elementary school and have them memorize the multiplication tables up to 10. If you want to test whether they understood multiplication, you ask them something that wasn't on the tables. We want to test whether they have identified the pattern and can use it on something else. If you're in the Chinese room with a long list of examples, you can't answer a question that isn't on the list. That is indeed not what anyone means by understanding, so I'd say Searle is right on that account. But this is not what neural networks do. Neural networks instead do exactly what we mean by understanding when we apply the word to humans: they extract the pattern and apply it to something they haven't seen before.

This brings up another question, though: how do you know that that's what it's doing? If you ask a child to multiply two numbers, how do you know they haven't just memorized the result? Well, you don't. If you want to know whether someone or something understands, looking at the input and output isn't enough. You could always produce the output with a lookup table rather than with a system that has learned to identify patterns. And you can well understand something without producing any output, like you might understand this video without any output other than maybe the occasional frown.

I'd therefore say that what we mean by understanding something is the ability to create a useful model of the thing we're trying to understand. The model is something I have in my head that I can ask questions about the real thing, and that it's useful means it has to be reasonably correct: it captures at least some properties of the real thing. In mathematical terms, you might say there's an isomorphism, a one-to-one map, between the model and the real thing. I have a model, for example, for cows. Cows stand on meadows, have four legs, and sometimes go moo. If you pull in the right place, milk comes out. Not a particularly sophisticated model, I admit, but I'll work on it once cows start watching YouTube.
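One way to make that precise (my notation, not anything from the video): a map \( f \) from model states to real states makes the model useful if predicting with the model and then translating agrees with translating first and letting reality run,

\[
f \circ d_M = d_R \circ f ,
\]

where \( d_M \) is how the model evolves and \( d_R \) is how the real system evolves. If \( f \) is moreover one-to-one, that's the isomorphism.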
Understanding, then, is something that happens inside the system. You can probe parts of this understanding with input-output tests, but that alone can't settle the question. When we're talking about neural networks, however, we actually know they're not lookup tables, because we've programmed them and trained them. So we can be pretty sure that they must have a model of the thing they've been trained for somewhere in their neural weights. In fact, at this moment in the history of mankind, we can be more confident that neural nets understand something than your average first grader, and if the first graders want to know, they can just ask a chatbot.

Let's then look at the question of who understands what, and why. We have a model of the human body in our brain. This allows us to understand what effects our movements will have, how humans move in general, and which parts belong where; we notice immediately if something is off. But if you train an AI on two-dimensional images, it doesn't automatically map those images onto a 3D model. This is why it'll sometimes create things like people with half a leg or three arms. This, for example, is Midjourney trying to show a person tying their shoelaces. The images look kind of right, because that's what the AI was trying to do, produce an image that looks kind of right, but they don't actually capture the real thing. If you take understanding to mean having a model of what's going on, then these AIs almost certainly understand the relation between shadows and light. But do they know that shadows and light are created by electromagnetic radiation bouncing off, or being absorbed by, three-dimensional bodies? They can't, because they never got that information.

You can instead give an AI a 3D model and train it to match images to that 3D model. This is basically how deepfakes work, and in this case I'd say the AI actually does partly understand the motion of certain body parts. The issue with chatbots is more complicated, because language is much more loosely tied to reality than videos or photographs. Language is a method that humans have invented to exchange information about those models we have in our own heads, and written language is moreover a reduced version of spoken language. It does capture some essence of reality in the relations between words, and if you train a neural network on that, it'll learn those relations. But a lot of information will be missing. Take the sentence "what goes up must come down." That's, for reasonably common initial conditions, a statement about Newton's law of gravity. Further text analysis might tell you that by "down" we mean towards the ground, and that the ground is a planet called Earth, which is a sphere, and so on. From that alone you may have no idea what any of these words mean, but you know how they are related. And indeed, if you ask ChatGPT what happens when you throw a stone into the air, it'll tell you the blindingly obvious in several flawlessly correct paragraphs.

But a language model can't do more than try to infer relations between words, because it didn't get any other data. This is why ChatGPT is ridiculously bad at anything that requires, for example, understanding spatial relationships, like latitude. I asked it whether Windsor, UK, is further north or south than Toronto, Canada, and it told me: "Windsor is located at approximately 51.5 degrees north latitude, while Toronto is located at approximately 43.7 degrees north latitude. Therefore, Toronto is further north than Windsor." It quotes the latitudes correctly but draws the exactly wrong conclusion. It's a funny mistake, because it'd be easy to fix by equipping it with a three-dimensional model of planet Earth. But it doesn't have such a model; it only knows the relations between words.
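And the fix really would be easy: even a one-number model of the geometry, in which a larger northern latitude means further north, forces the right conclusion. A hypothetical sketch:

```python
# With a model of latitude (bigger = further north), the conclusion is forced.
places = {"Windsor, UK": 51.5, "Toronto, Canada": 43.7}  # degrees north, as quoted
northernmost = max(places, key=places.get)
print(f"{northernmost} is further north.")  # Windsor, UK is further north.
```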

For the same reason, ChatGPT has some rather elementary misunderstandings about quantum mechanics. But let me ask you first. Imagine you have two entangled particles and you separate them. One goes left and the other goes right, but like couples after a fight, they're still linked, whether they want to be or not. That they're entangled means they share a measurable property, but you don't know which particle has which share. It could be, for example, that each has either spin plus one or minus one, and the spins have to add up to zero. If you measure them, either the one going left has spin plus one and the one going right minus one, or the other way around. And if you measure one particle, you know immediately what the spin of the other particle is.

But let's say you don't measure them right away. Instead, you first perform an operation on one of the particles. This is physics, so when I say operation I don't mean heart surgery, but something a little more sophisticated; for example, you flip its spin. Such an operation is not a measurement, because it doesn't allow you to determine what the spin is. If you do this on one particle, what happens to the other particle? If you don't know the answer, that's perfectly fine, because you can't answer the question from what I've told you. The correct answer is that nothing happens to the other particle. This is obvious if you know how the mathematics works, because if you flip the spin, that operation only acts on one side. But it's not obvious from a verbal description of quantum mechanics, which is why it's a common confusion in the popular science press. Because of that, it's a confusion that ChatGPT is likely to have too, and indeed, when I asked that question, it got it wrong. So I'd recommend you don't trust ChatGPT on quantum mechanics until it speaks fluent LaTeX.
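For those who want to see how the mathematics works, here is a minimal sketch in Python with numpy, using the standard textbook singlet state: flipping the left spin changes the joint state, but everything measurable on the right particle alone, its reduced density matrix, stays exactly the same.

```python
import numpy as np

# Entangled singlet-like state (|01> - |10>)/sqrt(2): the spins add up to zero.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

# "Flip the spin" of the left particle only: Pauli-X on the left, identity on the right.
X = np.array([[0, 1], [1, 0]])
flip_left = np.kron(X, np.eye(2))
psi_after = flip_left @ psi

def reduced_state_right(state):
    """All that is measurable on the right particle alone: its reduced density matrix."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=0, axis2=2)  # partial trace over the left particle

print(reduced_state_right(psi))        # [[0.5 0. ] [0.  0.5]]
print(reduced_state_right(psi_after))  # identical: nothing happened on the right
```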
But ask it any word-related question, and it shines. One of the best uses for ChatGPT that I have found is English grammar or word-use questions. As I was working on this video, for example, I was wondering whether "dropbox" is actually a word or just the name of an app. How am I supposed to know? I've never heard anyone use the word for anything besides the app. If you type this question into your search engine of choice, the only thing you get is a gazillion hits explaining how Dropbox, the app, works. Ask the question to ChatGPT, and it'll tell you that yes, dropbox is a word that English native speakers will understand. For the same reason, ChatGPT is really good at listing pros and cons of certain arguments, because those are words which stand in relation to the question. It's also good at finding technical terms and keywords from rather vague verbal descriptions. For example, I asked it, "What's the name for this effect where things get shorter when you move at high speed?" It explained, "The name of the effect you are referring to is length contraction, or Lorentz contraction. It is a consequence of the theory of special relativity," which is perfectly correct.
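For completeness (my addition, not part of ChatGPT's answer): an object with rest length \( L_0 \) moving past you at speed \( v \) is measured to have length

\[
L = L_0 \sqrt{1 - v^2/c^2},
\]

where \( c \) is the speed of light, which is why the effect only becomes noticeable near light speed.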
But don't ask it how English words are pronounced; it makes even more mistakes than I do.

What does this tell us about whether we understand quantum mechanics? I've argued that understanding can't be inferred from the relation between input and output alone. The relevant question is instead whether a system has a model of what it's trying to understand, a model that it can use to explain what's going on. And I'd say this is definitely the case for physicists who use quantum mechanics. I have a model inside my head for how quantum mechanics works. It's a set of equations that I have used many times, that I know how to apply, and that I can use to answer questions, and I'm sure the same is the case for other physicists. The problem with quantum mechanics is that those equations do not correspond to words we use in everyday language. Most of the problems we see with understanding quantum mechanics come from the impossibility of expressing the equations in words, at least in English; for all I know, you can do it in Chinese. Maybe that explains why the Chinese are so good with quantum technologies. It is of course possible to just convert equations into words by reading them out, but we normally don't do that. What we do in science communication is a kind of mixture, with metaphors and attempts to explain some of the maths, and that conveys some aspects of how the equations work. But if you take the words too literally, they stop making sense.

Equations aren't necessary for understanding, though. You can also gain understanding of quantum mechanics with games or apps that visualize the behavior of the equations, like those I talked about in an earlier video. That, too, will allow you to build a model inside your head for how quantum mechanics works. This is why I'd also say that if we use computer simulations and visualizations in science, especially for complex problems, that doesn't mean we've given up on understanding. Visualizing the behavior of a system, probing it, and seeing what it does is another way of building a model in your head. There is another reason why physicists say they don't understand quantum mechanics, which is that it's internally inconsistent. I've talked about this a few times before, and it's somewhat off-topic here, so I don't want to get into it again. Let me just say that there are problems with quantum mechanics that go beyond the difficulty of expressing it in words.

So where will the AI boom lead us? First of all, it's rather foreseeable that before long we'll all have a personalized AI that will offer anything from financial advice to relationship counseling. The more you can afford to pay, the better it'll be, and the free version will suggest you marry the prince of Nigeria. Of course people are going to complain that it'll destroy the world and all, but it'll help them anyway, because when has the risk of destroying the world ever stopped us from doing anything if there was money to be made with it? The best and biggest AIs will be those of big companies and governments, and that's almost guaranteed to increase wealth disparities. We're also going to see YouTube flooded with human avatars and other funky AI-generated visuals, because it's much faster and cheaper than getting a human to read a text or go out and film that old-fashioned thing called reality. But I don't think this trend will last long, because it'll be extremely difficult to make money with it. The easier it becomes to create artificial footage, the more people will look for authenticity. So that stupid German accent might eventually actually be good for something; if nothing else, it makes me difficult to simulate.

Will AI eventually become conscious? Of course. There's nothing magic about the human brain; it's just a lot of connections that process a lot of information. If we can be conscious, computers can do it too, and it will happen eventually. How will we know? Like understanding, you can't probe consciousness just by observing what goes in and comes out. If you really wanted to know, you'd have to look at what's going on inside, and at the moment that wouldn't help, because we don't know how to identify consciousness in any case. Basically, we can't answer the question. But personally, I find this extremely interesting, because we're about to create an intelligent species that'll be very different from our own, and if we're dumb enough to cause our own extinction this way, then I guess that's what we deserve. Meanwhile, enjoy the ride.

At least for now, the best tool we have for understanding the world is the human brain. But if you really want to understand quantum mechanics or neural networks, then passively watching a video isn't enough; you have to actively engage with the material. Brilliant.org, who have been sponsoring this video, is a great place for that. Brilliant offers courses on a large variety of subjects in science and mathematics, and they add new content every month. The great thing about their courses is that they're all interactive, with visualizations and follow-up questions, so you can check right away whether you can apply what you've learned, and that's really what understanding is all about. When I need to freshen up my knowledge or want to learn something new, the first thing I do is look it up on Brilliant. To get some background on the physics in this video, check out, for example, their courses on neural networks and quantum objects, or, even better, check out my own course about quantum mechanics. My course gives you an introduction to interference, superpositions and entanglement, the uncertainty principle, and Bell's theorem. You don't need to be an expert to take this course; I've worked together with Brilliant so that you can start from the very basics. If you're interested in trying Brilliant out, use our link brilliant.org/Sabine and sign up for a free trial, where you'll get to try out everything Brilliant has to offer for 30 days. The first 200 subscribers using this link will also get 20% off the annual premium subscription. Thanks for watching, see you next week.
