Directors' Conversations: Oren Etzioni, Allen Institute CEO
John Etchemendy: Welcome to HAI's Directors' Conversations. These are informal conversations that give us a chance to discuss the latest developments in AI with leaders in the industry. I'm John Etchemendy, HAI co-director, and with me today is Oren Etzioni. Oren is a distinguished professor of computer science at the University of Washington, and he is also the CEO of the Allen Institute for AI in Seattle. Oren is a pioneer in the areas of meta-search, online comparison shopping, and machine reading, and he now focuses on creating high-impact AI that benefits humanity. That's a mission very much in line with HAI's own, so we're fellows at arms. Thank you for joining me today, Oren.

Oren Etzioni: Thank you, John. It's a real pleasure to be here.

John Etchemendy: I thought it would be fun to start by talking a bit about GPT-3. There has been a huge amount of attention, both in the popular technology press and within the AI community, about GPT-3, so let me first explain a bit about what it is. GPT-3 is a very large language model, and it is a model that generates text. It generates text by taking as input a sequence of words and predicting the most likely next word, and by repeating that step it produces whole passages.
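To make the generation loop John describes concrete, here is a minimal sketch of next-word prediction with a toy bigram model in Python. It illustrates the autoregressive idea only; GPT-3 itself is a large Transformer over subword tokens, and the tiny corpus and greedy decoding below are invented stand-ins.

```python
from collections import Counter, defaultdict

# Toy stand-in corpus; GPT-3 is trained on hundreds of billions of tokens.
corpus = ("the model generates text . the model predicts the next word . "
          "the next word is chosen by probability .").split()

# Count bigram frequencies: how often each word follows each context word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Greedily emit the most probable next word, one word at a time."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])  # argmax over next words
    return " ".join(words)

print(generate("the"))  # e.g. "the model generates text . the model ..."
```

Scaling this counting trick up to a neural network with 175 billion parameters is, mechanically, what produces GPT-3's fluent text.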
John Etchemendy: The reason GPT-3 has gotten so much press is that it has many surprising characteristics. Not surprisingly, it generates superficially coherent text, in fact quite remarkably coherent text. But it also has genuinely surprising properties: on some natural language processing benchmarks, it achieves surprisingly good performance without being fine-tuned. That has given rise to questions about whether GPT-3 actually understands language. Is it intelligent, or is it at least on the path to intelligence? So first of all, just generally: what do you make of GPT-3?

Oren Etzioni: It's a great question, John, and a very timely one, because the entire AI community, industry and academia, the popular press, the student halls, they're all atwitter, I would say both literally and figuratively, with examples of GPT-3's prowess, the remarkable things it generates, even code, even software, and at the same time with discussion of what it cannot do. But before I answer your question, it's helpful to put GPT-3 in historical perspective. For as long as I can remember being in the field, and I've been doing this for 30 years or so, there has always been some mechanism, and by mechanism I mean an AI program, an approach, an algorithm, that people are very, very excited about and have, I think, somewhat overblown expectations for. When I was an undergraduate it was explanation-based learning. Never mind what it is; it was EBL, a three-letter acronym. Then at some point it was expert systems. And if you go all the way back to before the inception of the field, it was the Logic Theorist: the idea that we collectively built a program in the 50s or early 60s that could prove a theorem from Principia Mathematica. And if we can prove a logical theorem, then surely human-level intelligence is just around the corner, right? It's been a good 50 years, and that hasn't happened. So I think the biggest point to make is that we're seeing the same thing; it's deja vu all over again. We do have a remarkable mechanism, and it does exhibit impressive behavior, as did the Logic Theorist. Imagine, in the 50s, being able to prove a theorem automatically. We thought at the time that computers were just good for crunching numbers, and here was a machine doing something that only highly trained logicians can do. So here we are in the same situation: doing remarkable things, but at the same time falling far short of, quote, true intelligence.

John Etchemendy: Theorem proving, that's actually my field, and I think it was overhyped. It quickly became clear that it would not scale; the difficulty of the problems exploded very quickly.

Oren Etzioni: So what about today, what about GPT-3? The first warning signal should be the simplicity of the mechanism. All a language model, a generative model, is doing is what you said: analyzing a very large amount of text and computing the probability, given what came before, that the next word in a sentence is "architecture" versus "elephant" versus "the". That's all these things are doing. Now, doing that over a huge amount of text doesn't mean you're reading it, doesn't mean you're understanding it, doesn't mean you're really representing its contents. So is GPT-3 able to understand a single sentence in the rich way that you, or even a ten-year-old child, understand it? The answer is resoundingly no. It's remarkable what we can do with it, particularly at a certain superficial level, like generating fluent text. But we have to be very careful to distinguish these impressive performances, I don't want to call them tricks, these impressive behaviors, from genuine intelligence. Would you want to rely on what GPT-3 generates when you're formulating foreign policy, or when you're taking the SATs? The answer is no. It has no sense of responsibility, and it makes huge errors, in a very strong sense.

John Etchemendy: Oren, I was thinking the other day about GPT-3 and the difference between the number of words GPT-3 has processed in its training and the number of words a human processes in a day, a year, or a lifetime. GPT-3 is trained on roughly 570 gigabytes of text, which is roughly 57 billion words. That's the training set.
If you think about what a human processes in a lifetime of, say, 70 years, it's probably about half a billion words, maybe a billion, so let's say a billion. So when you think about it, GPT-3 has been trained on roughly 57 times the number of words that a human will perceive in an entire lifetime.
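Those are back-of-the-envelope figures, and the ratio is easy to check. The assumptions below (570 GB of training text, about 10 bytes per English word, a lifetime budget of a billion words) are the rough ones used in the conversation:

```python
# Back-of-the-envelope, using the rough figures from the conversation.
training_bytes = 570e9        # ~570 GB of filtered training text for GPT-3
bytes_per_word = 10           # rough average for English text, an assumption
training_words = training_bytes / bytes_per_word   # ~5.7e10, i.e. 57 billion

lifetime_words = 1e9          # generous estimate of words a person processes

print(f"training words: {training_words:.1e}")                      # ~5.7e+10
print(f"ratio vs. a human lifetime: {training_words / lifetime_words:.0f}x")  # ~57x
```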
John Etchemendy: And given that, consider GPT-3's performance: it does produce remarkably coherent text, usually, not always, but usually; and then, as you pointed out, it makes huge mistakes, sometimes grammatical mistakes and often semantic mistakes that are quite laughable. When you think about the difference in the amount of training, it seems to imply that something is missing: we're not going to get to actual intelligence, or actual understanding of language, with this kind of approach. Something is wrong about what we're doing. Anyway, I want to throw that out there and see what you think.

Oren Etzioni: John, I think there are several threads in what you said that are worth highlighting. The first is data efficiency. It's clear that what's happening in our heads is far, far more data efficient than what's happening in our deep learning systems. People will point to evolution, saying you have to take that into account, and so on, but what's remarkable is that people can learn from a single example. Kids hear a word once and can already start to use it in context, recognize it, and so on. That's point one. But even more than that, we learn very interactively. Even though we're exposed to a lot of words, we don't just read them and assimilate the probabilities. So it's really clear that what people are doing and what GPT-3, or deep learning in general, is doing are very, very different. The question we need to ask ourselves is whether this is an alternative route to intelligence. It's not going to be like people, but neither is a Boeing 747 like a bird: they're both engaged in flight, but with very different technologies, very different specifications and requirements.

John Etchemendy: Although they both depend on aerodynamics; there are a lot of things that are the same and, as you say, a lot of things that are different. Let me ask one last thing on GPT-3: what are your thoughts about potential abuses? When OpenAI released GPT-2, they initially said they were not releasing the full model because of the potential abuses it could be put to. Does it worry you that there is a system that is so good at producing apparently coherent text?

Oren Etzioni: I do think there is a very important reason to be worried, and I think it goes a lot broader than GPT-3, or even text. I wrote a piece for the Harvard Business Review a while back about AI-based forgery. There's a term floating around, deepfakes, which refers to images produced by a machine that are not real, and it's starting to happen with video and audio too: audio and video that seem completely genuine, and aren't. The fact of the matter is that when you're contacted online, whether it's an email message, a news article, or even audio or video, you really don't know whether it's genuine or a forgery. There have been documented cases of criminals using this to persuade people to transfer funds: you hear your boss, with his inimitable accent, saying "I need the money right away, go, John," and you follow those instructions. So I think there's a very major problem. Of course, we saw it in the previous election with the bots in social media, and we have an election around the corner where, again, we'll see misinformation and disinformation in news stories, and bots engaging with people in nefarious ways. This is a bit of a dramatic way of putting it, but really the fabric of reality is under siege, and we need to figure out, proactively, technologies and methods for dealing with that.

John Etchemendy: I agree completely. Let me change topics for a bit.
John Etchemendy: You've written several pieces on ethics in AI, including a paper on incorporating ethics into AI, by that very title, which I happened to use last year in an undergraduate seminar that I taught. I want to talk to you about that paper. In it, you focused on autonomous vehicles, specifically self-driving cars. First of all, I'm curious why, of all the ethical issues that come up in AI, you chose to focus on that. I should say, by the way, that it was co-authored with your father, right?

Oren Etzioni: Yes, my father, Amitai, was the first author and really the driving force behind it. I was the AI part and he was the ethics part; I don't consider myself an ethicist or a philosopher. The answer to your question, without getting into the whole piece, which I'm happy to discuss, is actually a very simple one. Whenever I talk to people about AI, about autonomous vehicles and self-driving cars, the conversation inevitably turns to what's called the trolley problem: the ethical dilemma where a car driving down the road has a choice between running over, say, one person or four people, an old lady or a Nobel laureate; there are many variations. That dilemma seems to really tantalize, and even scare, people.

John Etchemendy: I have to say that one of the reasons I picked the paper was that you argue we shouldn't focus too much on trolley problems; they're really edge cases. That's a view I also hold, or held. One of the interesting things a student of mine pointed out is that once we have autonomous vehicles, because of their processing power and their ability to sense the environment better than the average driver, we may in fact encounter more trolley problems than we would with a human driver, because the car can calculate various alternatives and will have to choose between them. But let me get back to your paper, and correct me if I'm wrong. The ultimate conclusion of the paper was that self-driving cars should be designed, first of all, to obey the law, and you say that obeying the law handles most of the ethical questions an autonomous vehicle will encounter. Then, for the other ethical issues, those not constrained by the law, you advocate developing what you call ethics bots, which infer the user's ethical preferences from other sources of evidence and then guide the car accordingly. You give the example that if you're a member of Greenpeace, perhaps the bot will have you refuel at stations that sell fuel with a high percentage of ethanol, or something of that sort. I want to put a couple of challenges to you. First, I think it's somewhat naive to assume that legal behavior corresponds to ethical behavior, that there's a direct connection between obeying the law and behaving ethically. You can come up with lots of examples where swerving into oncoming traffic, or out of your lane, is actually the ethical thing to do,
to avoid causing an accident, for example. Second, I find it somewhat implausible that ethics bots will be able to draw useful, relevant conclusions that apply to the kinds of ethical quandaries encountered while driving. Your Greenpeace example is a very limited one. Why should I believe this is going to be a useful way to approach things?

Oren Etzioni: On the first point, it's definitely not the case that if we just build a legal driver, one that obeys the law, that gets us even eighty percent of the way there ethically. I completely agree: it's a constraint, and a helpful one, but only a constraint. And I also very much agree with you that, given the nuances of the real world, designing an ethics bot that would actually
act according to my preferences, even with a lot of data, is very tricky. To use your energy example: what if there is a place that sells ethanol-based fuel, but it's 40 miles away, or 100 miles away? At some point, even if I'm concerned with energy efficiency and climate change, I'm still not going to drive a hundred miles just to fill up with a different kind of fuel. There are all kinds of nuances and subtleties that go into this, and current AI systems are not able to model them. The paper calls for more research in that direction, and it suggests that just as we learn all kinds of things inductively from examples, which is the current paradigm that GPT-3 and other systems operate on, ethics, or at least approximations to ethics, is not immune to that approach. But it is by no means a solved problem. So, TL;DR: I agree with everything you said.
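As a toy illustration of the tradeoff Oren describes, here is a hypothetical sketch of an ethics bot scoring fueling stations by weighing an inferred green-fuel preference against detour distance. The preference weight, the cost model, and the station data are all invented for illustration; the paper does not specify an implementation.

```python
from dataclasses import dataclass

@dataclass
class Station:
    name: str
    detour_miles: float
    green_fuel: bool   # e.g. a high-ethanol blend

# Hypothetical preference strength inferred from the user's behavior
# (memberships, past choices); a real ethics bot would have to learn this.
GREEN_PREFERENCE_MILES = 15.0   # detour the user would accept for green fuel
COST_PER_MILE = 1.0

def score(station: Station) -> float:
    """Lower is better: detour cost minus the value of honoring the preference."""
    cost = station.detour_miles * COST_PER_MILE
    if station.green_fuel:
        cost -= GREEN_PREFERENCE_MILES   # ethical preference as a bounded bonus
    return cost

stations = [Station("A: regular, nearby", 2, False),
            Station("B: ethanol, 40 miles", 40, True),
            Station("C: ethanol, 5 miles", 5, True)]

print(min(stations, key=score).name)   # picks C; B's 40-mile detour outweighs the preference
```

The point of the sketch is only that a bounded preference term naturally loses to a large enough detour, which is exactly the nuance Oren flags as hard to learn from data.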
Oren Etzioni: But I do want to step back and make what I think is the most important point, the one that motivates a lot of the article. The thought of autonomous vehicles has a lot of people worried. There's a worry about jobs, of course, which is a very legitimate one, but there are also worries about how these cars will behave: am I going to be comfortable with all these autonomous vehicles on the road? I think the appropriate philosophical perspective here is a utilitarian one. We have struck a Faustian bargain with transportation technology that leads to forty thousand deaths and more than a million injuries on US highways alone each year. If we're able to reduce those numbers by rolling out any technology, and certainly autonomous vehicle technology, over time, then it's a moral imperative to do so, whatever happens with the trolley problem and all these edge cases. If we can save lives using technology, we really should do that. That's the primary point, and that's the way to think about it. So the article attempts to set aside, to bracket, the discussion of the edge cases, because we need to focus on the main event.

John Etchemendy: It's interesting; I also agree with what you've just said, but something very interesting is going to happen once we do get autonomous vehicles on the roads: they're going to make mistakes. They're going to be in accidents and perhaps kill a child, for example, and we will have recordings of exactly what happened. Here's the worry I have. It can be the case that an autonomous vehicle is statistically far, far safer, yet we will still have cases where it makes a mistake, and looking at that mistake, a jury, for example, will be able to say: if an attentive human driver had been in control, that wouldn't have happened. Then the question is how persuasive the statistical argument, that this technology is far, far better, will be compared to the gut feeling that this was a mistake the AI made and the company should be punished for it. I'm worried that, as a society, we will end up going in the wrong direction because of that: rather than going with the statistical fact that this is a much safer technology, we will go with the intuitive gut feeling, and a mistake made by the car will end up in a major lawsuit against the manufacturer.

Oren Etzioni: I'm very worried about that too. I think what you're saying is that we could have a technology that in the aggregate would save many lives, and its progress, its proliferation, would be stymied by our litigious system and by a jury's visceral response to a particular event, rather than by thinking about the statistics. My answer, or the reason I'm optimistic, is that I think it's going to be a gradual process. First of all, we've learned in recent years that full-blown autonomous vehicles are further away than we thought; again, it's part of this pattern of sometimes overblown expectations. So we'll have time to get used to them. In the meantime, while we have semi-autonomous systems, different degrees of autonomy, as in current Teslas and the like, I think it's essential to have clear human responsibility for any mishaps. The car has certain capabilities; if the driver misuses them, then the human driver is at fault. And if the technology doesn't operate the way the manufacturer specified, the manufacturer could be at fault, the same way as when a seatbelt or an airbag doesn't work. So I think we do have a rational path forward.
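The utilitarian argument above is easy to make quantitative. A small worked example, using the roughly 40,000 annual US road deaths cited in the conversation; the risk-reduction and deployment figures are purely hypothetical:

```python
us_road_deaths_per_year = 40_000   # approximate figure cited above
relative_risk_reduction = 0.20     # hypothetical: AVs 20% safer overall
av_share_of_driving = 0.50         # hypothetical deployment level

lives_saved = us_road_deaths_per_year * relative_risk_reduction * av_share_of_driving
print(f"~{lives_saved:,.0f} lives per year")   # ~4,000 under these assumptions
```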
John Etchemendy: Let me turn to another article you wrote, which I found really fascinating: the canary article. Obviously there's been a lot of talk, a lot of hype, about artificial general intelligence and the possibility of a malevolent AGI, as they call it. Artificial general intelligence is when we get an AI that is as capable as humans, and presumably, potentially, more capable, more intelligent. You wrote a paper basically saying: don't worry, at least not at this point, because there are certain indications we will see before we actually get there, and you call these canaries in the coal mine. Can you tell me about the canaries you chose, and why you chose those particular ones?

Oren Etzioni: The context is that there's a set of very smart people, like Stuart Russell, who did his PhD at Stanford some decades ago, and Nick Bostrom, and others, who have been focusing on the fear of AI becoming malevolent and taking over. They say this is in some sense the greatest of all risks, the mother of all fears: there are many concerns about AI, but they focus on this one because they think it could potentially spell the extermination of the human race. Other folks, like Rod Brooks, and myself, and many others, say these fears are overblown, and that they're actually distracting us from real concerns like unemployment, or privacy, or fairness. Andrew Ng, formerly Stanford faculty, said that worrying about AI turning evil is like worrying about overpopulation on Mars: it's just too early, and it's too hypothetical, and it ignores all the real issues. Another really important point is that if we get obsessed with this cataclysmic fear, we might not think about the potential benefits of AI, and I'd love to talk about what some of those are. So in that context, I got tired after a while of the speculative back and forth: I say I think it's a long way off, other people say it could happen in five years, and how do you know? It's very hard to prove a negative. So I said: I'm a scientist, not a philosopher; can we put this on a more empirical footing? Can we identify canaries in the coal mines of AI, harbingers or tripwires, warning signals which, if they happen, tell us AGI is a lot closer than we thought, and if they don't happen, tell us it's still hypothetical, decades or maybe even centuries out? Probably the most interesting canary I identified, and it's very closely related to the mission of your institute,
is the role that humans continue to play in every success of machine learning. In fact, I go as far as to say that "machine learning" is really a misnomer. When we say machines learn, it's kind of like saying that baby penguins fish. What baby penguins really do is sit there while the mom or dad penguin goes and finds the fish, brings it back, chews it up, and regurgitates it; the parents spoon-feed morsels to their babies in the nest. That's not the baby fishing; that's the parent fishing. The same thing is happening with machine learning: we define the problem, we define the representation, we do everything except the, quote, last mile, finding the statistical regularities in the data. That part we hand to the machine, and it does a superhuman job at it. So the example I gave is this: if we had a machine learning program that could actually formulate its own machine learning problems, decide "this is something I want to learn," label the data, formulate the loss function, et cetera, then that would be a canary in the coal mine of AI.
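Oren's last-mile point maps directly onto an ordinary supervised-learning script. In the sketch below (scikit-learn, with a toy sentiment task invented for illustration), the comments mark which steps are human decisions; only the final fit call is the part the machine actually does.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# HUMAN: chose the problem (toy sentiment classification) and gathered the data.
texts = ["great movie", "terrible film", "loved it", "awful acting"]
# HUMAN: labeled every example (1 = positive, 0 = negative).
labels = [1, 0, 1, 0]

# HUMAN: chose the representation (bag-of-words counts).
vec = CountVectorizer()
X = vec.fit_transform(texts)

# HUMAN: chose the model family and, implicitly, the loss (log loss).
model = LogisticRegression()

# MACHINE: the "last mile", finding statistical regularities in the data.
model.fit(X, labels)

print(model.predict(vec.transform(["loved the movie"])))  # [1]
```

A program that wrote the top three sections for itself, not just the fit call, would be the canary Oren describes.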
John Etchemendy: Of all the canaries you picked, I think that's the best one, the very best. Let me raise one other thing: you also said that the Turing test is not a good canary, because if we get an AI that successfully passes the Turing test, we will already have genuine intelligence. I actually don't buy that. There are different ways of interpreting the Turing test, but it is so dependent on the ability to fool the human participant. In the Turing test, you have a machine, and you see whether the machine can fool the interlocutor about whether it's a machine or a real human. All of the systems that have performed well on the Turing test are systems that are quite obviously designed to fool, to mislead, the interlocutor: to change the subject, for example, in order to avoid giving a sensible answer that they cannot give. So I actually find the Turing test rather artificial. Now, if what you mean is passing the Turing test for arbitrary lengths of time, then maybe you're right. But over any finite amount of time interacting with such a system, the system may just be using techniques to fool us, and I don't consider that even an approximation of intelligence; it's intelligence on the part of the programmers who designed the system. Anyway, we could talk at length about the Turing test.

Oren Etzioni: John, just to respond really quickly: I do agree with you that the Turing test as implemented, and John Markoff said this brilliantly, is a test of human gullibility, and it's very easy to fool. What I mean by the Turing test is a kind of ultimate test of intelligence, which I take to be what Alan Turing himself meant: a true test. I don't think it necessarily requires infinite time, but it does require being careful, methodical, and comprehensive. For example, if somebody asked me to administer a Turing test, the first thing I would do is give the program the full SAT, including the essay questions. I would say: do well on that, and then we'll talk about the weather, we'll talk about movies, we'll have all that chit-chat where you're so able to fool me with social gambits. But if you're not able to score reasonably well on the SATs, to make the same kinds of mistakes humans make, and to write an essay that doesn't just sound coherent but is coherent, then you haven't passed. I know how to probe the weak spots
of the technology, and that helps. So, again, what I mean is a true, full-blooded Turing test; call it the Etzioni test, if you like.

John Etchemendy: That is far more elaborate than the setup Turing described, and there I probably agree with you. Let me go on to a different topic. A lot of people have talked about the fact that the showcase systems, the most prominent AI systems that get in the press, are based on competitive or adversarial games. Reid Hoffman has said to me many times that unless we want to end up with an adversarial AGI, we had better start thinking about how to train, or build, cooperation into our systems. So I was really pleased when I heard that the Allen Institute had recently published a project that is fundamentally based on cooperation. I was wondering if you could tell us about it, maybe even show us a little of the system.

Oren Etzioni: Sure. You're absolutely right that the interaction between humans and machines is fundamental to AI, and of course that's the mission of your institute, which I very much applaud. That was part of the message of the canaries example too: if you drill down into what's called machine learning, this supposed bastion of autonomy is full of human intelligence. In fact, take what's considered one of the landmarks of AI, the victory of AlphaGo over Lee Sedol, an AI program beating the world champion at Go. What I said at the time is that this is a human victory, just like your point, John, about the intelligence of the programmers: it's the human team at Google DeepMind that did a tremendous job using their technology to defeat Lee Sedol. So we became interested in collaborative games, and a natural game to think about is Pictionary, because in Pictionary you're drawing, trying to convey a phrase to me that I'm trying to guess, or the other way around. Now I'll pause and see if I can bring up an interactive demonstration of the game.
Oren Etzioni: John, what you can hopefully see on the screen is our version of Pictionary, which we call Iconary, both to avoid violating anybody's trademark and because we make extensive use of icons. Rather than talking about it, let's play. John, you and I are going to play together with our teammate, an AI program called AllenAI; we're not playing against it. In this case, to avoid my trying to draw with the mouse, which would be a disaster, we're going to guess. We'll go with easy phrases, since we have limited time. What you see on the screen is the AI program suggesting a phrase to us by indicating it with a set of icons, and the phrase has the form "blank the blank". John, what do you think the phrase might be, based on what the program has drawn?

John Etchemendy: "Drink the coffee."

Oren Etzioni: Right, so let's type that in: "drink the coffee". Oops, it's not tolerant of misspellings. Okay, I type that in and press submit, and that's my guess. It instantly comes back and says "the coffee" is right, but we have an incorrect word. Now, before we make another guess, one of the fun things is that I can ask the program to draw again, the way you could ask a human partner, and I really don't know what it will draw in response to this incorrect guess. It's going to try to get us to change the word "drink". What it has done is uncluttered the drawing and focused us on the nose and these squiggly lines rising from the coffee. John, did that clarify it for you?

John Etchemendy: How about "smell the coffee"?

Oren Etzioni: "Smell the coffee", yes. And it's not a stickler for "smell" versus "smelling": boom, it's got it right. I want to show you one more thing. The program, AllenAI, is trained on humans playing each other, and from that it learns how to play with another human. It has a vocabulary that it has figured out how to map to icons, but one of the really interesting things is that the vocabulary doesn't include a word for "doctor". If you look under B here, where I'm highlighting with the mouse, the phrase is "doctor giving medicine to the old woman". Interestingly, it has no word for "doctor" and no word for "old woman", but it automatically learns to represent the doctor as a person with a stethoscope and some pills, and to represent the old woman as a woman with a cane; it has learned that this helps people guess. I'll also note, because we often see examples of bias creeping into machine learning programs, that I politely said "person", but this is really the icon for a man. So it has learned, incorrectly of course, to associate doctors with men; it has developed a sexist bias. But that bias aside, we're quite proud of the fact that it can handle composition: it's able to create novel compositions, use them to encode words and phrases that were not in its training set, and use those to cooperate with a person to play the game. And to end with the real point of this work: we've shown that the techniques we know and love can be extended to cooperative games, and to games that involve phrases and images, which of course goes way beyond games like chess and Go.
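To give a feel for the interaction loop in the demo, here is a toy, invented sketch of the drawer/guesser protocol: render words as icons, decompose words that are outside the icon vocabulary (as AllenAI does with "doctor"), and redraw with emphasis on the words the guesser missed. None of this is AI2's Iconary code; the vocabulary and composition table are made up.

```python
ICON_VOCAB = {"coffee": "☕", "nose": "👃", "person": "🧍", "pill": "💊"}
# Invented composition table: words with no icon are drawn as icon combinations.
COMPOSE = {"doctor": ["person", "stethoscope", "pill"],
           "smell": ["nose", "coffee"]}

def draw(phrase, wrong_words=()):
    """Drawer turn: render each word; emphasize words the guesser missed."""
    parts = []
    for word in phrase.split():
        units = COMPOSE.get(word, [word])
        icons = "".join(ICON_VOCAB.get(u, f"[{u}]") for u in units)
        parts.append(f"*{icons}*" if word in wrong_words else icons)
    return " ".join(parts)

def feedback(phrase, guess):
    """Result of a guesser turn: which words were wrong."""
    return [p for p, g in zip(phrase.split(), guess.split()) if p != g]

phrase = "smell the coffee"
wrong = feedback(phrase, "drink the coffee")   # ["smell"]
print(draw(phrase, wrong_words=wrong))          # redraws, emphasizing "smell"
```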
John Etchemendy: Thank you, that's terrific. I'm glad to see cooperation as the focus rather than competition.

Oren Etzioni: And to tie that back to superintelligence and the canaries article: while the canaries haven't yet fallen, I'm not suggesting that we rest on our laurels and be completely complacent. I think the research on cooperation, on natural language, on humans and AI together, on human-AI interaction, is the path that will lead us to a future where machines and people can work together to benefit humanity.

John Etchemendy: Oren, I think that's a wonderful place to end the conversation, and, as you say, it's completely in line with our mission here at HAI. I'd like to see the Allen Institute and HAI partner even more closely than we have. I want to thank you for this really illuminating and fascinating conversation; it was good to get a chance to talk with you about these things. I also want to thank our audience for listening in. You can visit our website or find us on YouTube to listen to other great discussions with leading AI experts. So, Oren, thank you, and thank you to the audience.

Oren Etzioni: Thank you very much, John. It was a real pleasure to have this dialogue with you, and I hope to host you at the Allen Institute for AI so I can hear more of your opinions; I'm afraid we didn't get to those as much as I would have liked. Thank you so much.

John Etchemendy: Anytime, Oren. Bye-bye.