Faculty Forum Online, Alumni Edition: The World According to Deep Learning and Second-Wave AI
Hi, I'm Whitney Espich, the CEO of the MIT Alumni Association, and I hope you enjoy this digital production, created for alumni and friends like you.

Good afternoon, and welcome to the MIT Faculty Forum, hosted by the Alumni Association. My name is Rod McCullom, and it's my pleasure to serve as moderator for today's presentation, "The World According to Deep Learning and Second-Wave AI." A little bit about me: I'm a science and technology writer for publications such as Nature, Scientific American, The Atlantic, The Nation, and Undark. I mostly report on artificial intelligence, cognition, infectious disease, and the science of violence. I was a Knight Science Journalism Fellow here at MIT from 2015 to 2016. As a reminder, we welcome your questions during this chat. Alumni joining us can use the Q&A feature on your toolbar; for those viewing on YouTube, please add your question to the comments field next to the screen. We also encourage you to tweet using the hashtag #MITBetterWorld. We'll try to get to as many of these questions as we can.

I'm delighted today to introduce our featured presenter, Brian Cantwell Smith, the Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto. He's a professor of information, philosophy, cognitive science, and the history and philosophy of science and technology. Professor Cantwell Smith is a senior fellow at Massey College. He holds Bachelor of Science, Master of Science, and Doctor of Philosophy degrees from the Massachusetts Institute of Technology. From 1981 to 1996, Professor Smith was principal scientist at the Xerox Palo Alto Research Center and an adjunct professor of philosophy at Stanford University. He was a founder of the Center for the Study of Language and Information at Stanford University, the founder and first president of Computer Professionals for Social Responsibility, and president of the Society for Philosophy and Psychology from 1998 to 1999. From 1996 to 2001 he was a professor of cognitive science, computer science, and philosophy at Indiana University. From 2001 to 2003 he was at Duke University as the Kimberly J. Jenkins University Distinguished Professor of Philosophy and New Technologies, with appointments in the departments of philosophy and computer science. Professor Smith moved to the University of Toronto in 2003.
Initially, he served as the Dean of the Faculty of Information from 2003 to 2008. We're very pleased to welcome our distinguished alumni speaker. Good afternoon, Professor Cantwell Smith.

Well, thank you very much. I hope you can hear me. It's really kind of you and the whole MIT Alumni Association to have hosted this event, and I'm just thrilled to be here. I'm going to try to keep my remarks as short as I'm generally capable of, in order to leave lots of time for questions, so I look forward to that.

Thank you, Dr. Smith. Please go ahead and plunge into the session.

OK. Well, listen: one of the things I wanted to talk about was the background of this project. When I started out as an undergraduate in 1967, I wrote my first AI program back then, but I was also very interested in whether computation would be capable of understanding the issues that matter about us, whether it could do justice to the depth and complexity of the human condition. It's almost like a C. P. Snow dialectic between the technical and the human, and it has been in focus for me my entire life.

It's funny, because when I started at MIT in '72 I actually joined the social inquiry major, but then I quickly moved over to the AI Lab, and I was thrilled to be there and to be part of that community. But I've always had a kind of wonder about the adequacy of AI, and also about what AI is, and even about the notion of computation. I've been struggling for thirty or forty years now to understand what computing is, and I actually don't think current theories of computing do justice to the notion. So it's kind of like I was playing Dungeons & Dragons and I always found the stairways that went downwards into different philosophical issues; I've just always been probing the philosophical foundations of everything.

So that's the background to this whole little project. The talk I'm going to give actually arises out of this book, which is coming out this summer: The Promise of Artificial Intelligence: Reckoning and Judgment. I mean the title to connote the day of reckoning and the day of judgment, because I think there are very serious things at stake, and that's what I talk about in the book.

What I want to talk about today is what I think matters about all the recent work in AI, the deep learning, second-wave AI stuff. There are lots of different discussions of it, but I think the most important insight behind deep learning has to do with the glimpse it gives us into the nature of the world. So rather than focusing on the technology itself, I'm initially interested in the structure of the world, because it's the world's being a certain way that is allowing deep learning, and architectures like it, to do the world justice, and that is where deep learning and second-wave AI are getting their power.

So let me say a few things about deep learning in that regard. Sorry, the technology here is maybe less than perfect. Here's the kind of picture of how deep learning systems and neural networks are usually characterized; probably those of you in the audience have seen these things. There are input layers and so on. I actually don't find this slide revealing. It's perfectly fine, but it doesn't reveal the kinds of things about deep learning that I'm interested in.

Here's a slightly better characterization: deep learning is essentially a statistical method, a method for statistical classification and prediction of patterns based on sample data, often quite a lot of it; deep learning is as much a big-data phenomenon as it is anything else. It uses an interconnected fabric of processors arranged in multiple layers, of which the picture is an example. The fact that it does statistical classification based on sample data is, I think, important, but it still doesn't get to the heart of why deep learning matters.

Here's a yet better, though I think still incomplete, characterization. First-wave AI, essentially the stuff I was born and bred on, was in full flight, in full flower, when I first went to the AI Lab in 1972 and '73. It was called "good old-fashioned AI" by my dear, now departed friend John Haugeland, or GOFAI, as it has become universally known. People say it was based on logic and so on, and that was sort of true, but I think logic wasn't as important as these facts about it. Its conception of intelligence, which to some extent arose out of introspection on our part, was that intelligence is deep, many-step inference, by a serial process, using modest amounts of information, involving a relatively small number of strongly correlated variables. If you think about "for all x, if x is a man then x is mortal; Socrates is a man; therefore Socrates is mortal," formal logic involves extremely strong correlations: not, and, implies, and various other forms of connection. Proofs in first-order logic can also often be quite deep, and some of the systems we were building back then got fairly involved in pretty deep chains of reasoning.

Deep learning is in some ways the opposite, along five different dimensions, and this I think is interesting. It is typically shallow, a few steps of inference; by a massively parallel process, not a serial one; based on massive amounts of information, not modest amounts; involving a very large number of extremely weakly correlated variables. And that, I think, is what matters, both epistemologically and in terms of what it tells us about the world. Those two things in particular, the very large number of variables and their weak correlation, are the facts I want to talk about more.

What was true back then was that we thought the world was such that the GOFAI forms of reasoning would actually work: we imagined, essentially, a world that would be amenable to the sorts of inferences described on the left, the sorts of inferences we imagined logic performing. So I'm going to put up a slide. This was at Stanford Research Institute in California; I think this is from around 1980, when I went out there. This is Shakey, one of the first mobile robots. What I want to draw attention to is the world they built for Shakey. Shakey couldn't handle the way the world really was, so they built a world that was like what those of us in GOFAI thought the world was like. They projected our idea of the world onto the world, and made a world out of simple, discrete, easily describable objects. This is not actually what the world is like, and one of the things I tell students is that AI is essentially a history of enormous humility, as we have encountered the inadequacy of a lot of our assumptions. Here's
a picture. Now, this is a pretty ordinary picture; I just picked it up randomly on the web. It's a picture of an environment: a room, basically almost empty, like the room we just saw for Shakey, and it's got two people in it and so on. But it's tremendously more complicated than the world we just saw. And not only is it tremendously more complicated: when you see this world, you're not seeing what it's like directly, and you're not even seeing the two-dimensional projection of what it's like. You're looking at it and processing the image with your brain, a neuronal device comprising on the order of 100 billion elements, with on the order of 10^14 interconnections between and among them, honed for the purpose of dealing with human vision over 500 million years of evolution. What arises in your consciousness when you see this picture is something that has been processed by a processor of a complexity and sophistication that transcends anything we've ever been able to construct.

So that makes one wonder, and in fact I've talked to an artist friend of mine about this: what is the world like, such that we process it and it delivers to our consciousness the sense of a room with a man at the door?

This friend, Adam Lowe, actually painted a picture of what the world looks like before all that processing. Here's the picture. It is actually in a chapter of my book from a long time ago, On the Origin of Objects. I don't know if you can see my cursor; I can see my cursor, but anyway, I hope you can. This is what he thinks arrives at our perceptual processes. Now, I can parse this picture so easily that I can't remember not being able to parse it, but not everybody can parse it right away. If you look, there's a person here, about three-quarters of the way to the right; the head is up near the top, they've got two legs, they're walking, their arm is here, and they're carrying a pail, which is around here, with a bunch of things in it. They're walking towards the door; here's an open door, in the basement of his house, and I've been to this doorway. Inside is a wooden box with scraps of wood and so on, and he's just walking by this door. His conceit (it's not a theoretical claim; it's a conceit, in a way) is that we take images of this kind of complexity and turn them into things that are parsed for our minds, for our concepts.

Here's a way to describe it: if perception is computing a function f from a scene to a conception of the scene, he has made a picture of f⁻¹ of the scene, so that when we apply perception to this picture, we end up with what was the input to perception in the other case. That's basically what he comes to. Now I'm going to stop this for a minute.

OK. So: the world is not the way we imagined it in the days of GOFAI. That's basically the bottom line. The world really is a mess. Our ideas were clean and sharp and distinct, but the world itself was a mess, is a mess, and will always be a mess, and the question is how an AI system can actually deal with that mess. The conceptual structures of GOFAI were inadequate to that mess, and that, to my mind, was the ultimate reason for the defeat of GOFAI: the conceptual representations we used were inadequate to the world in which we deployed those systems.

Now, there was a reaction to this, and it started pretty early. One of the first reactions came from Rod Brooks, who was at the AI Lab, as you all know. So I'm going to put up a slide of some of his creatures. Here's an early example. Rod took a room on the ninth floor of Tech Square, where the AI Lab was in those days, put sand all over the floor, and had these little robots, some of them pretty small, six or eight inches long, and these things clambered around and climbed over rocks and so on. It was a revolution with respect to robotics, and it was successful: these robots were able to clamber around rooms that were not neat and clear and distinct in the way GOFAI assumed. Robots of this sort went to the Moon and so on, as you can see; there are two people in this picture, and these machines are obviously bigger than the ones on the ninth floor, but it's basically the same idea, based on Rod's idea. These were the first robots that could deal with a world that was not a clean and distinct world of objects.

But the thing about these robots is that they dealt with this world in a rather striking way. They didn't reason in terms of clear and distinct categories; they didn't represent the world in terms of clear and distinct categories. In that way they made an advance, in terms of ontological assumptions, over GOFAI. But the thing is, they didn't reason at all. The reason they didn't represent in terms of clear and distinct categories is that they didn't represent the world at all; they were purely reactive, behavioral robots. Rod Brooks won the Computers and Thought Award; "Intelligence without Representation" made him famous, and probably led to his being appointed director of the AI Lab at MIT, and all that kind of thing. Other people, like David Kirsh, wrote about this sea change in AI; his paper "Today the earwig, tomorrow man?" was another extremely famous one. Look at the first sentence of the abstract: a startling amount of intelligent activity can be controlled without reasoning or thought.

Basically, the attention in AI moved downward, from conceptual representation (cryptarithmetic and all those kinds of things) towards actually navigating the world, which was a mess. And so we got, and this is a wonderful book if any of you teach this kind of robotics, from roughly the same era I think, Valentino Braitenberg's book Vehicles. You can see from the picture in the upper right that he showed that if you just equip a vehicle with sensors, say ones attracted to light, and hook them up to the wheels, the thing will steer around and produce remarkably intelligent-seeming behavior. But it's still behavior.

Now, behavior is not enough. One thing that led, I think, to the successes of deep learning and second-wave AI was the question: if we're going to get past behavior and back to thinking, how can we get back to a kind of thinking that is not based on clear and distinct objects? So let's go back to those neural networks we saw before.

Here's a metaphor for what I think we've learned. I'm going to show you a picture of some islands in Georgian Bay, where I have a summer house; not on these islands, but that's basically what it looks like up there. There are thirty thousand islands. Here's the picture. They're pretty messy. You might think that these islands per se illustrate the inadequacy of clear and distinct concepts of the sort we imagined in GOFAI for doing justice to the world. But this picture is in fact still a little bit GOFAI-like, in that it's still essentially parsed into objects. What I want to show you next is the same photograph, in fact the photograph from which I made this one, without the boundary of the water demarcated. Look at the transition between this picture and the next one. What the next picture shows is the subterranean texture of what, under the water, the islands are. The islands are revealed for what they are: essentially outcroppings, above the surface of the water, of a connected world of stunning complexity.

So the world, I believe, is like the picture of that submarine structure. The complexity of the world out there is like what's underwater, and our concepts are just the islands above the threshold of consciousness: the islands in terms of which we have words, in terms of which things are effable, in the sense that we can actually articulate them. But in order to think, we have to deal with the full complexity of that subterranean structure. And that, I think, is really the demand on an AI system: how do you deal with that kind of subterranean structure? If I recognize somebody, Rod Brooks, say, or my partner, or a student, this is something that's said about deep learning: we don't recognize that this person has cheeks of this width, or eyes of this kind, or glasses, or something. We recognize thousands and thousands of microscopic features, which we process, and from which we get up to a higher-level concept: this is Rod, or this is Trump, or this is whoever. In other words, it's only when it rises above the surface of the water that it emerges into anything we can conceptually describe. But our navigation of the world deals, in a non-effable way, in a non-conceptually-distinguished way, with all that underlying structure.
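[Editor's illustration] The claim just made, that recognition rests on thousands of individually near-useless, weakly correlated features whose aggregate nonetheless yields a confident high-level concept, can be sketched numerically. This is a toy illustration only, not any particular deep learning architecture or anything from the talk itself: the two "identities," the feature count, and the signal strength are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 10_000   # thousands of "microscopic", underwater features
signal = 0.05         # each feature alone is nearly useless (noise sd is 1.0)

# Two invented "identities" (say, two faces), each defined by a prototype
# feature vector. No single feature reliably tells them apart.
prototype_a = rng.normal(0.0, 1.0, n_features)
prototype_b = rng.normal(0.0, 1.0, n_features)

def observe(prototype):
    """A noisy observation: the weak per-feature signal is buried in noise."""
    return signal * prototype + rng.normal(0.0, 1.0, n_features)

def classify(x):
    """Aggregate thousands of weak correlations into one high-level verdict."""
    return "A" if x @ prototype_a > x @ prototype_b else "B"

# Per feature, the two class distributions overlap almost completely, yet
# the aggregate judgment over all 10,000 features is reliable.
trials = [classify(observe(prototype_a)) for _ in range(100)]
accuracy = trials.count("A") / len(trials)
print(f"accuracy over 100 noisy observations of identity A: {accuracy:.2f}")
```

No single coordinate here ever rises "above the water line": the decisive quantity is the sum of ten thousand tiny contributions, which is the sense in which the classification is carried by a very large number of extremely weakly correlated variables rather than by a few strongly correlated ones.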
And the reason, the ontological consequence, that I think motivates the deep learning architecture is that the deep learning architecture is capable of dealing with that underwater complexity: millions and millions of features underwater, millions and millions of extremely weak correlations between and among them, which is what allows it to generate the conceptual representations. This pushes one into a kind of concern about the role of conceptualization, the role of language, the role of those things we can articulate. I think it is, as it were, late in the process that we arrive in the realm of the articulable, and it's only in terms of articulation that we have introspection; a huge amount of our thought processes deal with this pre-articulate, underwater structure. And it's because of that underwater ontological structure that the features of the deep learning architectures are as powerful as they are.

I'm going to show one more thing, a little description I once wrote. Let me see. I hope you can see this. OK: objects, properties, and relations. In other words, the conceptual material, the ontology that we thought was what the world was like when we were doing GOFAI. Think of those as the long-distance trucks and interstate highway systems of intentional life, of normative life, of conscious, articulate life. They're undeniably essential, because they allow us, given finite resources, to integrate vast, open-ended terrains (like the stuff you learned at MIT) into huge, conclusive, objective worldviews; critical for communication, critical for dealing with situations that are a long way away. But the cost of packing them up into those islands, into those discrete objects, the cost of packing them up for portability and long-distance travel, is that they're thereby insulated from the extraordinarily fine-grained richness of underwater life, of particular, indigenous life: the richness of the very lives they sustain, and of the world that we are all part of.

So here's the summary so far. GOFAI tried to implement intelligence in terms of clear and distinct conceptual categories. That didn't work. Rod Brooks, and what I think of as wave 1.5, got over that assumption, but the way it got over it was by diving underneath it and being purely reactive. The thing is, you can't get terribly far that way. You can do impressive things with Brooks-style robots, but you can't think about what's distant, you can't think about what's absent, you can't do planning. You need reasoning. What deep learning and second-wave AI have done is allow us to realize that realistic reasoning requires vast amounts of submarine complexity underneath and among the concepts in terms of which we think.

All right, I'm going to stop at this moment, because it's 12:30 and this is actually only part one of this talk. I was going to go on to talk about what the mind does on top of this, about the epistemological consequences of this being the structure of the world, and about where I think AI is and what it is that it doesn't do. In the book I make a big distinction between what I call reckoning and judgment, judgment being a much more serious thing; I don't think we're anywhere near judgment, and we can talk about that. But how about I stop and see how we're doing? Hopefully there will be people with questions. I remember from MIT that people with questions were not hard to find.

Thank you so much, Professor Smith, for that very engaging presentation. A reminder to viewers to ask questions of our faculty guest today from the University of Toronto, using the Q&A feature in Zoom or the comments panel on YouTube Live. Professor Smith, your book comes out in October. Can you tell us a little bit about it?

Yeah. Well, a problem I have is that things I start out thinking are going to be short tend not to end up short. I was writing this thing I thought was a paper, and, like Topsy, it just grew, and it turned into this book. It's not a very long book, especially for me; it's maybe 140 pages. But it's a book, it's done, it's from the MIT Press, and it's going to be officially published on October 9.
I think copies are going to be sent to reviewers and such before that. I was actually just visiting, and they pointed out that if you go on amazon.com it's actually already listed, and you can even pre-order it there.

Yes, I just saw that.

I'm up here with amazon.ca, and we're a little behind.

So, just a couple of questions, and once again a reminder to our viewers to ask questions of our faculty guest today, Brian Cantwell Smith of the University of Toronto, using the Q&A feature in Zoom or the comments panel on YouTube Live. There are some really interesting comments so far in the Q&A feature, which I'll be getting to in just a moment.

Great.

I'll ask you a little bit about your background, Professor Smith. I see that all your degrees are in computer science from MIT, but you have appointments, and you teach, in philosophy. Can you talk about that? It's not a normal career path.

No, it's not the normal career path, and you know, it's interesting; I don't know if there are people in the audience who are something like this. My dad was a comparative theologian, and as a kid, you know, he knew 38 languages, so I just ran from there. I don't know any language other than English, and I went to MIT and did this technical stuff, as far from where I grew up as I could find, basically. But I think I've always been a little bit congenitally a philosopher, in the following sense. Here's a way to think of philosophy: philosophy was the hallway. Originally there were no separate natural sciences; there was just natural knowledge, and out of it grew natural philosophy, and out of natural philosophy grew physics and the rest. These things got identities as fields, and one way to think of it is that they built rooms off this hallway. And then analytic philosophy, especially, got a room and tried to become a room of its own. I like the hallway. I'm just interested in the big, deep questions that undergird everything. That's the kind of philosopher I am.

When I was at MIT there were a bunch of people, Jerry Fodor was there, and Ned Block, and a variety of others, and they got together to talk about the computational theory of mind, which was all brand new and very exciting, and through some friends I got to go along to those meetings, which was a huge honor. Gradually, as I got my PhD and so on, I started talking to these people in cognitive science. The philosophers were very pleased to have somebody interested in the philosophical questions, so I talked about that stuff and got more immersed in it. And at one point I said: you know, not only am I interested in the philosophers' questions, I might like to be a philosopher. And they went: oh, that's different. Imagine I had good friends on the Blue Jays or something, and I said, look, it's great, you're great. Oh, actually, I'd like to play. But I can't play baseball. There was a complete gulf between watching and actually participating. So I probably spent ten years learning how to go to talks, what the questions were, what the history was, and so on. I've never taken a philosophy course in my life, but I just marinated in it for a long time.

And you can do that; you can actually cross between and among different fields. At first I was just an adjunct professor, and Indiana didn't want me as an actual professor; then I was there for a while and they realized that maybe the guy doesn't eat small children and do terrible things, that he isn't just a tech head or something, but is actually a person who thinks about real questions. So gradually you can cross these divisions. It's been very slow and very gratifying. What's curious is that the faculty I'm in at the moment is a social science faculty, so I've now taught in the humanities, the social sciences, engineering, and science, which is kind of what universities were for. There's that "uni" at the beginning of "university," which is supposed to signify a single, unified sense of knowledge. The university has gotten fractured in such a way that careers often require disciplinary narrowness. But I actually think the moment is serious enough that we absolutely need people who can cross these divides.

One more thing, OK, here's a metaphor, and then I'll shut up; you can see that I'm not likely to write things that are too short. One way to think about the discussion of AI at the moment is as a graph. Think of a graph, if you can see my hands, in which this is the dimension of technical expertise, and this is the dimension of depth of understanding of the human condition. There are people who have a lot of technical expertise out here, and they're triumphalist about what's happening in AI, but their sense of the human condition is about a millimeter deep; I don't know, maybe Kurzweil, or even Bostrom or Tegmark, something like that. There are people with a tremendously
Deep sense of what constitutes, Humanity and what's mattered about civilization, is history and so on it's not worth the, room is their understanding of Technology, is a millimeter deep so, it's like we've got people up here near one zero we've got people over here near zero one and I want to put a stake in the ground I mean we should be out there at point eight point eight now, I can't get to point 8 I'm not in fact quite either of them anymore. But at least I want to put a stake in the ground around point five point five or point six point six that's. Where the debate is needed, I think so. That's I think the consequence. Of ending up in philosophy not only in philosophy but in philosophy and the other kinds of disciplines. You. Sorry. I'm actually not hearing you. Okay. Dad. Can ya, I. Want. To go to questions because, you're. Sure one, night I'm two questions, on the third. Wave of AI from. Anthony, change. Well. Okay so I talked about that a little bit in the book um briefly. My, sense, is that the. Way people are using the word third wave AI for. Having, a context, of where you. Know a causal. Model of the world or B context-aware or something like that I don't. Think it's anything, like different, enough from second, wave AI to actually deal with what I think we need. What. I talked about in, the. In. The book is this notion of human judgment of, what it would be so think about what it is to say of a person that's the person with good judgment I. Think. You. Know John Hoagland again the coiner of the phrase go Phi was actually very good in this regard and stuff I think. And. Also the discussions, if if people are, aware of them about what would constitute genuine. Intelligence, are authentic, intentionality, or stuff. Rather. Than just. Well. People talked about simulations as up I don't think computers, or simulations, I think, they're real but.
I don't think they have anything like the depth of understanding of the world that real human judgment needs. So here are a couple of properties. Actually, maybe I can put this up. Here are some properties, hopefully people can see this, that I think a system would need in order really to be capable of judgment. It's got to be directed toward the world, not just directed toward its representations, and not just have representations. I actually think being directed to the world, not to the representation, is a hugely demanding skill. If I click a button on my computer and it says, you know, eject this disc, or eject this USB key, or something like that, I don't think the computer is directed toward the USB key; I think it's just directed toward whatever is in the drive, because it doesn't actually have any understanding of what's in the drive as opposed to... you know, we understand the difference between the USB slot and the thing that is in the USB slot, and we understand that the thing in the slot might not be the thing we expected. The computer has no way of understanding that. I don't think computers can distinguish appearance from reality. I don't think, oh, I don't know, AlphaGo or AlphaGo Zero would actually know the difference between its representation of the game it's playing and the game it's playing that that is a representation of. I think you have to care about the difference between your representations and the world. Stakes: I think you've got to be essentially, existentially involved in the world in order to know the difference between the world and your representations of it. You have to be able to distinguish the actual from the possible, and so on. And you have to know that that toward which you're directed is here in the world, that you're in the world, that both you and the object are in the world. These are big existential things that I think have been hewn in through thousands and thousands of years of human culture, which didn't actually require changes in our DNA or architecture; they're real standards that we hold adults accountable to.

One thing, okay: so I don't think third wave AI is actually dealing with that. I also think, pursuant to what I was saying about the nature of the world in the first part of the talk, that one of the things judgment requires is an ability to recognize that how you take the world to be, what I call how you register the world, be it in terms of objects, be it in terms of, say, differential equations, whatever, you have to hold those registrations accountable to the world. And no addition of a particular representation, like a causal model, is actually going to give you a sense of the world that the model is a model of, because that's just going to be one more model. I think we know the difference between models and the world; at every step of our lives we hold models accountable to the world, and what it is to hold a model accountable to the world is to hold it accountable to the world itself, not just to the world as described in yet another model. That's a profoundly different thing from anything I think has been envisioned in AI ever since I was there in '72, all the way up through GOFAI and all the way through second wave, and nothing that I've seen about third wave AI addresses that at all. So I think we're a long way from understanding how to get to actual judgment.

Thank you. Another question, and this is on quantum
computing: what impact will quantum computing have on deep learning?

Well, okay. So here's a funny thing. I have lots of undergraduate students who say, look, you realize consciousness is actually a quantum phenomenon. And I say, you realize that in fact a lot of things are quantum phenomena. For example, the whiskey: if I have a glass of whiskey with an ice cube in it, the ice cube is going to float; that's a quantum phenomenon. Actually everything is, since everything kind of appears to be quantum, right? But the thing about the ice in the whiskey, the fact that ice floats, that the solid form of water floats in the liquid form, is that it requires a quantum mechanical explanation. So it's not just a quantum phenomenon; it requires a quantum mechanical explanation. Does thinking require a quantum mechanical explanation? Personally, I doubt it. I mean, I have my own ideas about that. I don't think quantum mechanics is going to get at any of the hard issues of judgment or any of the hard issues of consciousness; I just don't think that. Now, that was not your question; your question was how it bears on deep learning. There's no doubt that quantum superposition and things like it can do a lot of parallel exploration of alternatives, especially in problems which are semi-neatly decomposable into orthogonal possibilities, because you can explore them without needing a lot of interaction. How many things will actually submit to that kind of algorithm? Certainly you'll be able to break credit card security more easily. Is that going to have a conceptual impact on the problems AI is going to deal with? For the kinds of things I'm talking about, existential commitment, stakes, and judgment, I don't think so. Of course we're going to use quantum; we already use quantum computers in one sense, in that they're all quantum mechanical. And I don't think the algorithms that come out of quantum mechanics will be irrelevant; they'll help with low-level search and things like that, but it's not the heart of the beast.

Thank you. There are several questions on ethics, Brian. And of course, you're following the news; that's
a major issue in the news right now, with Facebook and all that. I know we only have a limited amount of time, but what are some of the more pressing concerns or challenges that you're researching, as far as ethics?

Right. Well, it's interesting. I have a ton of respect for ethical issues; it's not that I don't think they're serious. Okay, so two things. I don't think the ethics of AI is actually the entree into the most interesting issues about AI. I'm actually interested in what AI is, what it can do, and what it cannot do. If we understood what it could do and what it could not do, then we might be able to frame the ethics with respect to that. But I don't think we actually have that cartography of the landscape of issues. Who knows? I mean, there are a lot of complicated issues about nuclear weapons, and, you know, engineering ethics; I'm not trained in them, but I know how complicated they are. I just don't find the ethics of AI an entree into what I think are the serious issues of AI. Sociologically it's useful, because it does a little bit of bridging between the purely technical, calculative kinds of reckoning that are leading to lots of powerful systems and the people who actually consider societal and ethical and human issues. But I'm afraid it's more a popular issue than a deep one. That's one thing.

I don't know if I should say this, but anyway: I have a couple of PhD students who are working at driverless car companies, and we were having a drink one night, and I said, I've figured out the trolley problem. You're at the top of a hill, and your car goes out of control and you can't stop it. There are two roads, and you can steer down one or the other, and on one of the roads there's a bunch of people talking about the trolley problem. And my students said: run them over. I don't think the trolley problem is the best way to get at the ethics of this
kind of stuff. I don't actually think that ethical discussions are the right way to get at what cars are doing. My question about cars is just: would driving a car require judgment, or could it be something that reckoning would be adequate for? Of course, you're talking about self-driving cars, which are... Well, people are assuming that the learning will be adequate to the task of dealing with driverless cars. We would actually have to get out that whiskey if we were really going to talk about driverless cars, but yeah, okay.

And we have two questions. Once again, thank you so much for your time. There are two questions, including one from Eva, on how we as humans can interact with AI.

Okay, so a couple of things. I actually don't think the right way for us to get a grip on what's happening now is to frame things in terms of human and AI, or human and machine. I actually think that's a fatal distinction. Because, first of all, there are many human things which are not my favorite; people do automatic things, they do silly things, and so on and so forth; there are people who do terrible things; there are people we elect who do terrible things. I don't want to valorize the human as if it were a locus of edifying moral inspiration in all circumstances, on the one hand. And on the machine side, I think in fact we're probably machines in some sense; unless you're a dualist, you think we're basically arrangements of atoms that work in certain ways, and machine architectures are changing astonishingly fast. So I don't want to define the thing in terms of people and machines. I would like to understand what intelligence is,
and what kinds of tasks require what kinds of intelligence. And "intelligence," I think, is too broad a word; we've got to have a map of the different kinds of it. So how do we interact with these things? Well, the question is, what are they? If they're things capable of friendship, then we should interact with them with friendship. I'll tell you one thing I don't think we should do: I don't think we should interact with them without an adequate conceptual grip on what they are, basically taking the fact that they've demonstrated some behavior which, in a person, would mean that the person has certain properties, assuming that this thing therefore has those properties, and not only interacting with it that way but giving it responsibility for things like deciding prison sentences, or the education of our children, or something like that. In other words, we would misunderstand them. I also don't think we should take what deep learning can do as a normative standard, such that people should act more like AIs. One thing I ask my students is: look, if these machines are such good reckoners, what would it be for you to raise children and say to them, look, you don't have to reckon about anything, because reckoning is being handled by machines, just the way they think long division is handled by machines now? What would it be to raise the standards on what it is to be human, so that we lift the human condition up to a higher level in virtue of that which the AIs can do? That's the quick answer, I think.

Sorry, I need to hear you. Okay, now, yeah, I can hear you, thanks. All right. Yes.

This is really fascinating. We have almost 90 questions so far. I don't expect you to get to them all in the next few minutes, but there's another one, and this is from Ricardo: what developments in deep learning do you see coming from an increase in the number of layers?

Okay. So if the question is how I think deep learning is going to advance past where we currently are: I don't think that's the most important issue, even with respect to deep learning itself, any more than increasing the megahertz is going to change things profoundly. Yeah, it would be nice to have a computer running at 50 gigahertz instead of 5. I think there are real issues. For example, think about my picture of the structure of the underwater terrain underneath those islands. One of the things that GOFAI was really good at was compositional reasoning, such that you could in fact deal with negation and implication and all of this kind of stuff that we do so well in logic. But those concepts were, you know, laughably discrete and clean compared to the actual underwater structure, I think.
Figuring out how to deal with compositional entertaining of hypotheticals, what if this were true but that weren't true, and so on and so forth, in ways in which the concepts, instead of being treated as atoms the way we treated them in logic, were treated as the tips of hugely complex underwater structures that emerge out of the surface of visibility, so that you could reason without losing the underwater structure but actually embed things in hypotheticals and all of this kind of stuff: I think that's an enormous challenge, and the processing demands of it are staggering if you do it in certain ways. But maybe we can do it. I've talked about this with Geoff Hinton, whom I've known for many, many years; maybe we could do it in a way that loses some of that underwater structure but not all of it, and so on and so forth. So there are huge issues there, lots of issues, I think, way more interesting than just numbers of layers. Yeah, we might need more layers, we might need more processors, but that's not the conceptual point.

Of course, you just mentioned Geoff Hinton, who is your colleague at the University of Toronto, often described as the father of deep learning. Right. There's another question, from Robin, and this was... Can you hear me? Yeah, I can hear you, sorry about that. Yes: several more questions have also come up about Facebook. Can you talk a little bit more about that, about Facebook? No. Okay. Talk more about it? We had six by nine inches. Is there anything else in the questions, anything specific that people are asking? Yeah, one person asked, actually: can you talk more about the influence of Rod Brooks on your work?

I mean, Rod's a great guy. I haven't seen him in years and years and years. I wouldn't say anything terribly critical,
because I think he's operating at a different level. But there is a sort of challenge that he poses. He wrote "Intelligence Without Representation," whatever, many years ago, and later changed it to "Intelligence Without Reason," I think, and in there he says: use the world as its own best model, when you can. I think the world is great if you have it as a model; I mean, it's there in front of you. But an enormous amount of stuff is not there in front of me. If I turn around, for example, there's a door there, but it's shut, so I have no access to what's outside the door. I'm actually pretty damn sure that if I open the door there's going to be a hallway there, because there was the last time I opened it.
But I have to represent that door and that hallway, because I don't have access to them. And what about tomorrow? I don't have any access in the present to the future or the past, because in fact physics prohibits direct connection with either the future or the past. I mean, physics is local in both space and time. I think the locality of physics is a staggering limitation, and the fact that we get around the world as well as we do, in spite of having to honor the locality of space and time, is a testament to our being able to deal with the distal: not the proximal, not what's right here and now, but what's not right here and now. Think about Popper's, I think, wonderful claim that we think so that our hypotheses can die in our stead. You know, you think: look, I'm going to walk across the highway here, but actually there's a semi truck coming at 80 miles an hour, and if I imagine walking out there, I'm only going to be halfway across when it comes careening by, and I'm not going to fare too well, so I'm going to stay over here. You can't check that against the world without running out there and dying. So anyway, Rod Brooks says: use the world when you can, and only use representation when you can't. And I have used that as a prompt in my undergraduate classes, to say, well, that's a great challenge from Rod: what are the situations in which we can't use the world as its own best model? That, I think, actually undergirds my sense of what representation is, and I articulate this account of what representation is and why you need it. But being able to represent is just the tiniest step toward anything that I would actually countenance as full-scale judgment.

We have just a few minutes here. I just want to ask... actually, let me congratulate
you, first, on your recent appointment, the Reid Hoffman chair at the University of Toronto. Can you talk a little bit about that appointment and the backstory with Reid Hoffman?

Yeah, thank you, I appreciate that. Well, you know, it's interesting. My partner has a son, so I have a stepson, who is wonderful, but I don't have children of my own, as it happens. But I have put my heart into teaching for many, many years; it's one reason, I think, that I haven't published as much as I should have. Anyway, as it happens, in '89 I was teaching a course in this undergraduate program a bunch of us had started. It's called Symbolic Systems, but it's really the cognitive science undergraduate program at Stanford; that's really what it is. A bunch of students were in there, and there was this guy called Reid Hoffman, and he was, you know, he was great: affable, very smart. I have an email from him from that class saying he was not going to get his assignment in on time because he had... well, I don't think I should blackmail him with it, but anyway, I still have the record. So anyway, you know, I wrote him a reference to go to Oxford to do a PhD in philosophy, and he went to Oxford, but after a year he said, this is not for me. He went back, joined PayPal, founded LinkedIn and the rest, which everybody in your audience knows more about than I do. Anyway, a couple of years ago I sent him a note and I said, look, it's fabulous, you've done all this stuff; let's get together and talk about it. I wanted to congratulate him on it, and he said, well, what about all those books you were writing? You were supposed to have them all done by now, something like that. And I said, well, you know, I would, but academic life is time consuming. And he said, well, maybe I can do something about that. So I'm enormously grateful to him. I mean, you know, you cast seeds; it's like a farmer, you throw a lot of seeds out there when you teach, and they don't have to turn into anything that you're aware of, and it's worth it anyway; that's not why one teaches. But he came back, and he said, look, he was very grateful, and he would actually like to hold me to account, exactly the sort of holding to account that I actually believe in. So he gave these funds to the University of Toronto for, essentially, what
at Stanford we used to call a folding chair. It's a chair that will last, you know, for six years, at which point I'll probably retire, and then the chair is done. So I have a ton of thanks and respect for him for doing that. It was a magnificent act of generosity on his part. And people say, well, what does he require for this chair? And I say, well, it's not as if there's anything written down about what he requires, but he wants those books I was talking about 30 years ago to get published. This book is a first little step toward that.

Sorry, I'm not hearing you again. Can you hear me now? Yeah. Thank you so much, Professor Brian Cantwell Smith. We've had an amazing discussion here; we had more than 90 questions, and obviously we didn't have time to get to them all. Maybe we'll do a part two sometime in the future, who knows, but you were a really fantastic guest, and we appreciate it. On behalf of the MIT Alumni Association, thank you for tuning in to this Faculty Forum Online, and thanks to our special guest, Brian Cantwell Smith of the University of Toronto, for joining us. Thank you to everybody, alumni and friends, who joined us; our staff will follow up shortly on all questions that were not addressed on air. You can tweet about today's chat using the hashtag #MITBetterWorld, and you can send any questions or follow-ups to alumnilearn@mit.edu. And thank you for watching.

Thank you.

Thanks for joining us, and for more information on how to connect with the MIT Alumni Association, please visit our website.