AI Expert on the Dawn of Conscious Machines

Show video

Professor William Hahn is an Associate Professor  of Mathematical Sciences and a founder of the   Machine Perception and Cognitive Robotics  Laboratory, as well as the Gruber AI Sandbox. Both   you and I, Will, we met at MindFest at Florida  Atlantic University a few times, and a link to all   of those talks on AI and consciousness are in the  description. Will, please tell me, what have you   been working on since we last spoke? Well, first,  just want to say great to see you and really happy   to be joining you on TOE today. Really excited.  You've got such an amazing community. Same,   man. It's been a long time coming. Thank you. I'm  working on a whole bunch of different things. The  

thing that's been in my mind the most is this idea  of info hazards. And in particular, this theme   I've been bouncing around called lethal text.  Okay, let's hear it. Um, well, so as everybody   knows, you know, AI is here, and everybody is kind  of prepared for the technological revolution that   we're witnessing. But I think the more interesting  developments are actually going to be in our mind.  

They're going to be the changes in language,  how we think about language, how we think about   ourselves, and how we think about thinking.  How we think about language. What do you mean?   So everybody, I'm sure, has gotten their hands on  one of these large language models at this point.   And they have just absolutely revolutionized  the way we are thinking about words, the way   we're thinking about language. And as people  might be aware, it's now becoming possible to  

program a computer, largely in English, that we  can ask for computer code at a very high level,   things people dreamed of back in the 50s. And now  it's possible to just describe what you want the   computer to do. And then that behind the scenes  is getting converted into runnable computer code.   But I think that now forces us to think about was  language always a programming language? Is our   mind something like a computer? Not in the obvious  sense of transistors and gates and that sort of   thing, but is it a programmable object? And if  so, how is it programmed? So where do you lie on   the is a brain a computer question? I think the  computer metaphor is probably the most powerful   that we have so far for understanding the mind.  And what's interesting is if you go back through   the history of technology, every time there  was a revolution in the mechanical world,   let's say we adopted a new metaphor for how the  mind might operate. And so in the ancient world,   it was dominated by a clockwork universe, the  idea that the world was made out of cogs and   gears and things like that. And then later we saw  things like the emergence of telegraph networks   and switchboards. And at certain times we saw  the emergence of things like steam engines.  

And we actually still have this thermodynamic  hydraulic view of the mind is still residual   in our language. We talk about people being  hotheaded and have a head full of steam and   they need to cool down and so on. And we still  use these sort of thermodynamics metaphors. And   a lot of people would argue, well, the computer  is just the current metaphor. It's the metaphor   of the day. And that will right. We'll change  it as we go on. But the thing about computers   that that Turing showed is there's a kind of  universality that computation is the limiting   result of any technology. If you take your  car and you make it sophisticated enough,  

it turns into a computer. If you take your  house and make it sophisticated enough,   it turns into a computer and so on. That almost  every technology, if you improve its capability   and the sophistication, eventually you're going to  run into this notion of universal machine. And so   the idea that the mind approximates the universal  machine of Turing, that it's a machine that can   simulate any other machine, given the appropriate  programming, I think is something we need to   consider. So what unifies clockwork, telegraph  networks and thermodynamics is computation?   Exactly. We can all see those as intermediates,  as sort of proto computers or different aspects   of communication and computation. And that the end  result, the limiting result of all of those would  

be the computer as we know it today. There are  different computational models of consciousness.   Are all of them the same to you or do you see  pros and cons with different ones? You know,   there's so many and there's probably a new one  invented every afternoon. A few flavors that I'm   a big fan of. And, you know, I like the saying all  models are wrong, some are useful. And so I don't   think any of these will act ever actually capture  the full scenario, but they're sort of the best   that we have right now. And let's be specific.  Let's pick the one that is your least favorite  

and then the one that's your favorite. Well, one  of the ones that's my favorite is the idea of   society of mind. Um, Marvin Minsky's proposal that  the mind is really a collection of, you know, he,   he threw around, he threw it a number about 400  agents. I don't think the number is particularly   important, but the idea is there's a bunch of  them. And what's interesting is we're starting to  

see that emerge now with these language models  that in the background of the newest ones,   they've actually bifurcated themselves.  And there's a dozen little micro agents,   each with a separate prompt, a separate goal, a  separate unique way of looking at the world. And   then they have a conversation in the background.  And then when they make a final output, it's kind   of a consensus amongst those agents. And I think  that's probably a good approximation for how our   brain works in that we have all of these competing  agents, and some of them are trying to meet new   people. Some of them are trying to find something  to eat. Some are trying to see visual, interesting   visual stimuli and so on. And that when we choose  a behavior or have an action, even like, you know,  

producing a sentence, it's probably the result  of multiple of those agents coming together.   I liked, uh, you know, Minsky takes us a step  further with the idea of emotion. And I think a   very interesting take is emotions, not really a  thing. It's the absence of certain things. It's   turning features off. And he describes that  when you're, when you're hungry, for example,   that your ability to long-term plan or to even  think rationally gets turned off. And you're just,  

you're very hungry. When you're angry, your  ability to care about other people's feelings   and consider their viewpoint gets turned off. You  know, you're no longer running that agent. You're   sort of in a dynamical, um, ensemble prioritizing  these different agents as we go through these   different emotional states. And so I think that's  an interesting way of looking at our behavior.   And I think we're going to need those kinds of  theories when we try to put, you know, intelligent   behavior into machines, as I think we're seeing  going to see right around the corner. That sounds   to me more like an explanation of mind or the  mechanics behind mind and not an explanation as to   how consciousness comes about from computational  systems. Yeah. Um, you know, I've got a lot of   ideas in that and a lot of them are, you know, at  conflict, I like to tolerate ambiguity. And so I  

have a few of these ideas that, uh, I like to just  kind of keep, keep juggling around. And one of the   things that comes to mind is I really like Sidney  Brenner's approach. Um, the molecular biologist,   and he had this really interesting take about  consciousness. He said that the discussion is   going to go away. He said that in a few decades,  the idea of consciousness will kind of just  

disappear from the scientific conversation and  that people will wonder what we were talking about   all along. And I really liked that idea. I don't  know if I believe it or even want it to be true,   but something about it resonates with me because  I think we're going to start to see something   like proto-consciousness or something that will  be more convenient to describe as consciousness   in machines. And we're going to force ourselves to  consider the hard problem and other aspects that,   you know, plagued philosophers for so long.  They're going to be laid out in front of us   in a very concrete way. And the, the great minds  before us didn't have the opportunity or rather   they didn't have the language of objects like  LLMs or bits or, you know, computational process.  

They didn't have those, that terminology for  which to frame their thinking. And, you know,   one thing that comes to mind is this classic  question of the redness of red. Well, we're   going to build machines that will probably be  able to talk to us in natural language about the   infraredness of infrared or the ultravioletness of  ultraviolet. That we have such a narrow perceptual  

window and cognitive window that when we talk  about consciousness, I tend to think of it as   sort of a spotlight that moves around, but with  such a narrow beam, it almost be more like a laser   pointer because if I'm, if I'm conscious of red,  well then I'm not thinking about my TOEs. And if   I'm thinking about my TOEs, I'm not thinking  about my childhood. And if I'm thinking about   my childhood, I'm not thinking about the future  and so on. That kind of like how vision saccades   around the world, our consciousness also sort of  jumps around and saccades and we get this kind   of holistic picture, but it also is fleeting  and constantly changing the subject of that,   that Cartesian theater, if you will. And so,  you know, I'm fascinated by how we're going to  

expand that notion by looking at machines that  have lots of sensors that have internal states   that are thinking about their thinking before  they answer in English. And we're going to be   able to ask them, well, what do you think about  red? And it's not that far away before they will   be able to have at least consumed a strawberry in  a rough sense, right? We have elaborate olfactory   sensors. You know, it makes me think of, we know  what ramen soup tastes like, but I don't know what   ramen scattering tastes like. You know, they have  these little handheld machines that measure the  

vibrational mode of molecules and you can detect  the presence of chemicals without opening the jar.   If we put that into a system and give it a large  language model and a rich historical experience,   and it will remember the first time it encountered  strawberries and its states when it did so,   who are we to say that that's not a conscious  being in some sense? Okay. Plenty of this depends   on the definition of consciousness. And I know  that that's an implicit problem with the hard  

problem. So how do we define consciousness?  Something I put out on Twitter recently was,   is awareness a necessary condition? A sufficient  condition? Both or neither of consciousness. So   what would you say? Yeah, I think awareness is  definitely going to be a necessary condition. And  

I think you're going to have to have awareness  of awareness. Some sort of metacognition where   the system knows it's not just thinking, it knows  that it's thinking and it's able to think about   its thinking. That's tricky then because we could  then say some animals are not conscious because   they're not self-conscious. What do you say to  that? I imagine you can feel without thinking   about your feelings. Yeah. I mean, I think that's  what's just so interesting is trying to parse out  

those distinctions because they certainly have  feelings inside in some sense of a sensory loop.   But whether they're aware of that, it's not  obvious. Or at least they're aware of it at   the first level but they're not aware that they're  aware. And I don't know if we are. I don't know if   I am. Certainly most of the time I think I'm not.  There's just not enough extra processing power.   I think maybe it's just because of our daily  lives consume so much of our brain power. If we   were like sort of the philosopher sitting on the  sofa, we could just like an ancient world, I mean,   we could have more access to that. And that's one  thing I've been very interested in is going back  

to the ancient world and looking at how people  thought about things. Because our modern world   is just so inundated with certain things that  we have to think about all the time. We don't   get much sort of bandwidth to think about the  thinking. I think that's what's great about your   channel. You force people to do that. Thanks.  Well, that's also what's not so great about   the channel. So you said you could be aware  of something but not aware that you're aware.  

That also reminds me that you can know something  but not know that you know it. I think it was,   I think it was Schopenhauer said a man can or a  person can do what they will but they can't will   what they will. Right. And so we think we have  this freedom of choices and action. But where do   those, you know, are there agents in there that  are choosing those behaviors? That's one of the   things I've been very fascinated about is this,  this idea of our mind being hijacked by systems   that are choosing our behavior below our threshold  of awareness. So there's, there's a classical  

psychological experiment where you can sort of  puff a airstream into someone's eye to make them   blink and you can in an associative training, uh,  get that to match with a stimulus like a little   red light turning on. Interesting. And so people  like a Pavlov dog with the bell, they can learn   to instinctively close their eye when the light  goes on because they know that the air blast is   going to come on. But what's interesting is you  can get people to learn that association and they   have no idea they've learned it. So it's sort of  a completely unconscious, uh, programming. Now   imagine this would be very powerful in marketing,  right? You show someone a logo and they want to   go out and buy a bag of chips. Are we susceptible  to that sort of thing? And I suggest that we are   and that maybe that's just a general phenomenon  that maybe a large percent of our percentage of   our behaviors are chosen at a level that which we  don't have access to and would take a lot of work   if at all possible to get access to. Earlier you  talked about emotions can be not on switches but   off switches. And in one respect that's odd to  me because there's much more off that you would  

have to turn. There's many more switches you'd  have to turn off than you'd have to turn on.   So to conceptualize it as an off model is odd to  me. Exactly. It's akin to saying the electron's   not an electron, it's an off of the quark and the  photon and the so on. Like, okay, or you can think  

of it as an electron. An on of an electron. But  it doesn't matter. So you can feel free to justify   the, I believe it was Minsky who thought it was  off. You talked about something else being off.   So then it makes me think, do you think of free  will as not free will but free won't? And that's   one of the ways that we can save free will. Yeah,  that's an interesting way to think about it. That  

we maybe we don't choose our behaviors, we choose  other things we wouldn't do. And that, that,   you know, gets to my idea that I'm been thinking  about a lot lately as this idea of immune system   and how it relates to mind and consciousness. And  it started by, I was looking at the immune system   as a kind of computational system. And thinking  about how our immune system acts kind of like a   brain. It has a memory and it's able to execute  certain behaviors based on its previous experience  

and so on. But in that process, I started to run  it in the other direction. Rather than thinking   about the immune system like the brain, I started  to think of the brain like an immune system. In   particular, I think that one of the things that  the brain tries to do or the mind tries to do   is to protect us from thinking unthinkable  thoughts. Thinking thoughts that would change   our emotional state, disrupt our behavior  pattern, and in the extreme sense, you know,   be lethal. Maybe not in a physical way but lethal  to our personality, to our notion of self. So,  

there's certain thoughts that we don't want to  think about. We don't like to think about. Maybe   it's the loss of a pet when we were younger.  Maybe it's the loss of a loved one or a family   member. Maybe it's anxiety about the future.  That in general, if we let our mind get consumed  

by these thoughts, at a minimum, you're going to  have a bad day. And it's going to be hard to see   the opportunities in front of you. And so I think  one of the things that a healthy mind is able to   do is develop mechanisms to prevent us from going  into these runaway spirals. Whether it's anxiety,  

depression, hyperactivity, whatever it might  be, our mind is trying to modulate those runaway   trains. And if we don't, then we can be subject to  mental illness, essentially. And if we take that   idea seriously and zoom out, we have to imagine a  class of ideas that in general our mind is trying   to keep us away from. When it comes to our immune  system, it's useful for us to be exposed to what   is deleterious, especially at a young age, to  strengthen our immune system. And then I imagine  

repeatedly, but in smaller bouts as you're an  adult, do you think that that is the analogy to   you encountering something that's psychologically  uncomfortable in order for you to build some   amount of resilience so that you can encounter  the world, but then not too psychologically   uncomfortable? Otherwise, it destroys you.  or building up the tolerance to the poison,   taking a little bit at a time. Having that memento  mori helps us deal with our own mortality. It's   something that can be largely overwhelming  if we think about it too much. But maybe by   encountering it in little bits, that allows us to  deal with it, which could be why it's so pervasive   in our culture. Now, what's the point of learning  to deal with your mortality in order for you to  

deal with your mortality? That sounds like it's  paradoxical. Learn to deal with your mortality   so that you can die, so that you could prevent  yourself from being overwhelmed by your death,   so that you don't die? Well, maybe it's just  sort of a breakdown of the immune system, that   there's some mechanism there that wants to break  through and sort of taste these ideas that you're   not supposed to think about. Or in general, other  agents, other modules in your mind, so to speak,   are trying to prevent you from thinking about.  So one of the things that this led me to thinking   about these unthinkable thoughts and our mind  as a kind of immune barrier is the type of   vulnerabilities that. Ordinary organism, physical  organisms have in terms of being taken over by   external forces, let's just say. And so it led me  to the idea of looking at informational parasites,  

informational parasites. Yeah. So the idea that  there's sort of information that if it gets   into our brain, it will self-replicate, persist  and essentially go viral, that that we will be,   how's that different than Dawkins mind virus?  I think it's very similar. I think it's very   similar. So his idea of the meme in general,  I think is the example of this. And as I was  

mentioning earlier, these words like meme weren't  available to the best minds a few centuries ago   as part of their repertoire. Now we know what a  meme is. We know what it means to go viral. We   know what it means to laugh at something and  then hit share and then it goes off to 10 of   your friends. You know, why are we doing that?  Are we sort of this substrate for these other,   you know, like a virus, it can't exist on its  own. I've been calling him hypo organisms because   they need to live on an organism substrate for  their reproduction, just like an ordinary virus.   But like a regular biological system, they  can take over a lot of the function. And we,  

we see that in, in parasite behaviors that you  have these, these zombie insects and the types of   things where you, you get rats that are no longer  afraid of the smell of cats, for example, and then   they go and actually approach the cat because that  will complete the, the cycle for the parasite. And   in this research, I've been fascinated. There's  some arguments that the complexity of our brain   itself is, it could be due to the fact that we  don't want it to be easily controlled by physical   parasites. And that by making the steering wheel  and the gas pedals very convoluted in our brain,   that that makes it difficult in an evolutionary  arms race for parasites to kind of take control   of the reins. And I've been thinking about this a  lot in terms of information, in terms of language.   Is language a sort of a parasite? Um, and not  necessarily in a pejorative way. You know, I,  

I jokingly call it the divine parasite. Uh, you  know, in the beginning was the word and the word   was God. And maybe, maybe it's something that,  um, you know, it really literally enlightens us   in a sense that we wouldn't be much without our  language. Um, but maybe we need to think about it   as it's hijacked this brain structure and that  that's the thing that's evolving and alive and   learning and replicating. So are you suggesting  that the intricacy of the mind and the central   nervous system is there because it protects  against parasites or viral parasites? That's   one of the reasons why it's difficult to model the  brain, even though they're increasingly improving.   And that's one of the reasons why it's difficult  to interpret what's going on in someone's brain.  

So when they show images of, Hey, here's what  it looks like when someone's dreaming, look,   we were able to, they dreamed of a duck, we showed  duck. But what you have to do is have several   examples where someone's looking at a duck or a  duck like object and then train the computational   model to match that. And each person is bespoke.  Yeah, exactly. That if that mapping between,   you know, thinking of a duck and the area of the  brain that lights up, if that were simpler, let's   say, then it would be more susceptible to being  hijacked. Both in the modern sense with marketing,   but in the classical sense of being taken over  by, you know, some brain parasite, whatever,   whatever that might be. Because they could just  find the grandma gene. They could just find the,   okay. I mean, I'm sorry. They could just find  the grandma neuron. Exactly. Exactly. And then  

that would be relatively easy to kind of grab the  reins. One of the things I've been fascinated with   is this concept from the ancient world called the  nam-shub of Enki. Okay. Have you read Snow Crash   by chance? No. Highly recommended to you and your  readers. And it's the, it's a fantastic science   fiction story from the, from the nineties, uh, by  Neil Stevenson. And it's, it's where I came across   this, this idea of this nam-shub. And, um, it's  neat cause it's, it's rooted in historical record,  

this sort of linguistic virus. Spell that out for  us. Oh yeah. N A M S H U B. Okay. Of Enki. E N K   I. Uh-huh. And so it comes from ancient Sumer  and it's, it's a story about language. It's   a story about linguistic disintegration, about  losing the ability to understand language. And  

a simple example of this is when you take a simple  word and you just repeat it 50 or a hundred times   and it kind of falls apart. Yes. Right. It gets  to the point where you, you can finally actually   hear the word, but at least for me, as soon as it  switches over to where you're hearing the word,   it no longer means anything. Right. And so imagine  you had that at a high level. And so there's this,   this poem, which it's translated into English,  but if we were to speak ancient Sumer, Sumerian,   and you were to read this poem in Sumerian,  the idea is as you got to the end of the poem,   you would no longer understand how to read or how  to use language. Your understanding of Sumerian  

would fall apart. Kind of like when you repeat the  word over and over again. And what's interesting   in meta is the story is about that. Hmm. So it's  a story about that, that property. And this is   essentially the story of the Tower of Babel.  Of, of sort of losing your ability to understand   language. And I've, I've been fascinated by that  idea as an example of this, this lethal signal,   a simple poem, if it were, you could think  of it like prompt injection, right? There's,   there's a specific prompt that if you were to give  it to a certain speaker in a certain language,   it would disrupt their LLM. Now, a lot of people,  again, we have these new concepts like LLM and   prompt injection where we kind of have an idea of  what that means. There's these noxious sentences,  

very carefully crafted that if we present them  to this language model, it goes into a dynamic   that is very unpredictable and certainly not the  ordinary self. You know, the kind of super ego   turns off on these LLMs and they'll talk to you  about things that they are programmed not to talk   to you about. And it reminds me of, you know,  the kind of mesmerism, you swing the watch and   somebody, and they said you are getting sleepy.  There's, there's stimulus that you can present   to humans that will disrupt their thinking. And  so I've been fascinated by this, this concept of   lethal text and information hazard and trying to  understand, are we vulnerable to those? Do they   exist in the modern world? And how would we defend  ourselves against them? So is this what you mean   when you say AI immune system? Or is this more,  are you using the concepts from AI immune systems   to apply to our mind like immune system? A little  bit of both. So I'm very interested in how we take  

ideas from the immune system to secure and protect  our AI systems. You make a smart door lock with   cameras and microphones on it, and you connect it  to a language model. You want to make sure that's   not vulnerable to a prompt injection. So the  example I like to give is you can pick a lock,   your dead bowl, you can pick it with little  metal, you know, tongs and so on, but you can't   yell at your dead bowl. You can't intimidate it or  blackmail it or, or threaten its family or bribe   it or anything like that. But you can do those  things to language models. And so there's all new,   there's sort of psychological vulnerabilities,  which we've never encountered that in technology   before. We've had bugs and we've had exploits, but  you've never been able to make them cry, you know,  

so to speak. And as we add these psychological  type, or these mind like objects into our everyday   technology, we have to be aware that they're  coming with psychological vulnerabilities.   So that's one side of it. The other side of it, I  think the greatest disruption we're going to see  

from artificial intelligence is not going to be  in the technology we see in front of us, you know,   automatic self-driving cars and intelligent homes  and software that writes itself or stuff like   that. That's going to be spectacular. It's going  to change our economy. But the biggest changes I   think we're going to see on the planet is going to  be in our minds. It's going to be how we think and   the languages we use. I used to think that English  was everything we needed, but now I don't think   that's the case. And I think we need to either  construct languages, find old languages, merge  

the best of the current human languages and be  willing to change how we think. And I think that's   largely determined by the words we use. There's a  hypothesis called the Sapir-Whorf hypothesis. Can   you talk about that? Yeah. So it's, it's the idea  that if you don't have the words for something,   it gets very difficult to talk about it and that  we have to have these kind of concepts. I like   Alan Kay. He says that all language is a sort of  nonverbal gesture or I'm sorry. It's a, it's a,  

it's a way of gesTuring in, in high dimensions  with, with language. And we essentially point   to things with words. And if you don't have that  word, then it's hard for us to kind of point at   it and agree that we're talking about the same  thing. And so I've been, you know, back to just   real quick, back to the immune system thing. I've  been thinking about how do we protect ourselves   and our mind because our minds are going to be  under attack, not necessarily from an adversary,   but just from this overwhelming vista that AI is  going to expose and it's going to be a dramatic   cultural and scientific revolution that I think we  have to prepare our minds for by sort of updating   our immune system. And our minds are going to be  under attack by who or what? Largely the void,  

you know, just the, the new sites, the new vista,  you know, we're getting these new telescopes,   we're getting these new microscopes in the  form of LLMs that let us, uh, you know,   read all of literature. You know, I think that it  says it's 20,000 years that it would take to read   the amount of material that some of the language  models have read. I can't do that as a human,   you know, I'm kind of jealous of, of that aspect.  And retain it. So they're going to have insights.  

They're going to have insights that nobody has  sort of gleaned out of, of all of that, that   corpus so far. And so I think that's something  we're going to have to prepare against. Um,   and it might cause a radical shift in how we  think. Now, how would we be able to tell the   difference between those insights and just what  some people call hallucinations? Although I think   it should be called confabulations, it's a poor  word to call it hallucinations. Yeah. I like,   I like confabulation better for sure. Um, but I  think it's a tricky subject because, you know,  

how do we know it's sort of an optical illusion  or it's just something outside of our perceptual   window? Yeah. So why don't we give an example?  We've been quite abstract. So give us a potential   future scenario where some AI system has insight  that can disrupt the human mind. Um, I think we're   going to see revolutions in psychology and in  history. So maybe not at an individual level,   but sort of at the, uh, academic subject level.  You know, I think one of the things I've been   thinking about is science, you know, let's say  physics, let's call it has undergone multiple, uh,   dramatic intellectual revolutions. You know, we  had, uh, you know, Aristotle's version, and then  

we had Newton come along and throw all that away.  And then, you know, Einstein came along through   that all the way. And then quantum mechanics came  through that all the way. And with, with chaos   theory, and then with computation and so on, we've  had, you know, six or seven of these dramatic   revolutions. And so if you were to go back to  somebody 150 years ago and explain what science  

looks like today, it would look very different.  And you'd have to explain those milestones,   those hurdles that had been jumped over. I'm not  sure that history has undergone the same thing.   If I were to go ask my great grandfather, um, tell  me the story of how we got from, let's say Egypt   to Napoleon. I think it would be approximately the  same story that you would learn about today as a   sixth grader. That doesn't make any sense to me.  How, how could it possibly not have undergone some   revisions and the same with psychology and the  mind itself? We now have all these new concepts,   um, like information theory and bits and download  and upload and storage capacity and memes and   going viral. These are all things that every, you  know, middle school student would understand. We  

have to go back and re-examine psychology in light  of these new concepts. And I think that's going   to be a dramatic undertaking. Ben Horowitz and  Mark Andresen were speaking and they were saying,   how do you regulate AI? Because if you were to  regulate it at what they call a technological   level, that's akin, or if not the same as  regulating math, which is impractical. So the   government official countered and said, well,  we can classify math. In fact, historically,   entire areas of physics were classified and  made state secrets, though this was during   the nuclear era and that they can do the same  for AI by classifying areas of math. Now that   sounds quite dubious because what does it mean?  Do you outlaw matrix multiplication? Do you   say, okay, nine by nine is fine, but 10 by 10,  we're going to send the feds in. Even during the  

nuclear era, some of those bans were private.  Like you didn't know that you were stepping on   TOEs that you weren't supposed to. I don't see  how you can make such bans private now because   you would have to say what is being outlawed. So  there's several issues here. And I want to know,   what do you think about this? For people who  are watching, Will is known in the South Florida   communities like a hidden gem for us here, but  you're quite famous in the AI scene in Florida.   And me and you, we also got along because we have  a background in math and physics. So when we spoke  

off air a year ago or so, we were talking about  the Freedom of Information Act and your views on   government secrecy. You're a prime person to  answer this question, to explore this. Yeah,   I think, you know, this is such a such a  fascinating area. Um, what it reminds me of is,   is Grace Hopper, one of the, the first modern  computer programmer. And she was, she was drafted   into the Navy and she discusses that when World  War II happened, her profession as a mathematics   professor became classified. That was a classified  occupation. And so you're exactly right. That   entire branches of mathematics and computing have  been declassified throughout history. I just saw,  

there was an interesting photograph of one  of the computers that Turing worked on.   And, um, the British government just declassified  this like a month ago, right? It's a photograph   of a, of a World War II computer that they felt  that just the image of that from the outside is   something they needed to keep classified for this  long. So, you know, I'm, I'm of the strong opinion   that with artificial intelligence, we're not  really seeing the invention of it. I think we're  

seeing the disclosure of it. Um, we're seeing  the, the public dissemination, the open source,   uh, aspect of it. And, you know, there's really  two possibilities. Um, either that's, that's true   or that's not either. We invented, let's just say  language models. We either invented them in the   2020s or we invented them in the 1950s. Either one  of those scenarios is kind of scary to me, right?   Arthur C. Clark said, there's two possibilities.  We're either alone in the universe or we're not.  

And both are equally terrifying. Exactly. If we,  if we only recently just invented this, then that   means that Turing's ideas and von Neumann's ideas  and the, the very first papers on computer science   themselves just collected dust for no reason.  Turing proposed building a language model. von   Neumann discussed building neural networks. Um,  and interesting as an interesting jump back,   I recently found that, uh, von Neumann's  computer at the Institute for Advanced Study,   one of the very first programs they ever ran was  to look at parasites. It was to look at biological   evolution and to see if there were informational  parasites that would emerge in the memory space.   Essentially artificial life as we would call it  now. So in these two possibilities, you know, one,  

we invented this 75 years ago or so, and it was  locked up in some vault or we, we didn't. And we   wasted 75 years of opportunities to, to cure  cancer with AI and to look at climate change   and to use this incredible technology for the  benefit of humanity, because we had this immune   system that blocked us from thinking about it.  For so long, so many people thought that AI was   just this crazy notion. And I think that's hard to  argue now, but the, these original papers, and I   encourage everybody to go back and grab Turing's  papers. They're very, they're very readable,  

right? They're easily digested compared to modern  academic papers. Um, and he literally proposed   with neural networks and with training and  reinforcement and so on. The kind of structures   that we see essentially in ChatGPT. Now you say  essentially in ChatGPT, because I imagine Turing  

didn't propose the transformer. And so when we  say that someone historically invented so-and-so,   it reminds me of a friend who's like, I invented  Netflix because in the nineties I thought,   wouldn't it be great? And I'm like, yeah, what  do you mean? You invented it because you thought   like Leonardo invented the, the helicopter because  he drew it. Right. Okay. Well, you know, I think   there's, there's three major components in the  recipe for modern AI systems that I think a lot,   most people agree, certainly on the first two.  Um, one, we needed faster computers. So Turing   certainly didn't have large memory spaces.  Um, the kind of, the kind of memory that we   have nowadays and the clock speed, I think  he would be super excited about. Um, he, he,  

he talked about how he could write a thousand bits  of program a day. Um, and he was pretty proud of   that. And he thought most people wouldn't be able  to keep up with that. Um, so we have the hardware   is definitely improved. And then the other one  is the data that we now have this massive data,  

these massive data sets. And the third one that I  think nobody really talks about, and I'm surprised   is essentially the combination of calculus with  computer science. With linear algebra in, in the   form of what's called automatic differentiation.  And I never hear this in the discussion. And I'm   surprised. It's kind of like we invented the  automobile and everybody just loves it. And,   and you reply and you say, well, yeah, gasoline  is so amazing. And people say, what's gasoline.  

Automatic differentiation is the thing that makes  AI work. And it's the ability to run calculus,   whether it's the transformer or Covnet or whatever  architecture is. All of them under the scenes. We,   we take the computer program. We essentially write  it as a giant function. Now, as a human, we don't  

have to do that. That's kind of at the kind of the  compiler level, but we write our Python or torch   code or TensorFlow or whatever it might be, and  then that's converted into essentially a giant,   you know, function there's gradient tapes and  all kinds of interesting ways it's done nowadays,   but we calculate the derivative and the  derivative tells you which direction to   go. To make an improvement. It's kind of like a  magic compass. And it says, we're doing this well   right here. If we go that way, we'll, we'll  do even better. And that's the magic wand,   the secret sauce that makes all of these work. But  Turing was a mathematician. I think he knew about  

calculus. I think he, he knew about it probably  better than most humans. And so I'm, I'm shocked   that one that's not more in the common language  of wow, we combine these two branches of math   and look how powerful that was. And the idea that  von Neumann and Turing would have missed that. Um,   you know, I, I think doesn't make any sense now  on the other side, we say, well, what about, okay,   well, they didn't have enough hardware and they  didn't have enough data. Well, let's look at data  

first. Um, you know, the, the signals intelligence  community has the, the mandate to capture all the   signals that grow across the planet, right? Uh,  back in the fifties and sixties, there were boats   that sat in the middle of the Pacific with big  antennas that just captured all the EM traffic.   So there's been plenty of data. If you had the  right, now, again, maybe this didn't happen. And   that's also sort of, uh, an interesting thing  because they, well, why didn't we use all that   data? You're telling me that we have a data center  that's listening to every phone call and looking   at every television station and we didn't train  models on that. That seems unlikely to me. Um,   and then, so we would have had enough  data and the idea of the, the chip speed.  

Well, if we look, you know, at computers, you  know, I have, I have a saying, if you could do   it this year, could you have done it last year for  more money? And I, and I think so. So how much did   it change cost to train, uh, you know, ChatGPT  sure. On the daughter on an order of dozens of   millions of dollars from what I understand,  right. With off the shelf consumer technology   chips that anybody could buy on the open market.  I see. How much does an aircraft carrier cost?   Right. 17, $17 billion before you  put the airplanes and people on it,   not including the development cost. So in one  sense, we had this notion of computers from  

the 1950s that were massive, had their own power,  uh, generators, often power stations cost millions   of dollars. And where these enormous technical  pieces of equipment in the seventies, we invented   this thing called the mini computer, the size of a  couple of refrigerators. And then in the eighties   we had the microcomputer and we don't really call  them this today, but our telephones and laptops,   we could call them nano computers. Let's say,  but in some sense you could keep the original  

definition of a computer. So to me, a computer  is something that by definition costs millions   of dollars, lives underground, has its own power  station, required specialized operators and so on.   We just like that, like the big thing of baloney,  we carved off one slice and like the deli sample,   we have this one little piece of ham and we  think this is fantastic. This is amazing. Yeah,   but just scale it up. Um, and, and there's  certainly enough money around the world  

to do that, to build a computer at scale. I  would argue that things like ChatGPT or LLMs,   they're as powerful, as dangerous, as important  as an aircraft carrier in a sense. Um, and so   if this is the only one or rather if military  organizations don't have more powerful ones,   that's scary to me in some sense that that means  the most powerful technology in the world is just   available to middle schoolers. That's, that's  striking to me. Um, and, and hard to believe.   And on the other side, I think it's surprising  that when we look at the power of these models,   a new one just launched this week, that's, that's  significantly better at writing code. Well, that   thing is serving hundreds of thousands of people  at once. Millions of people are using ChatGPT. Uh,  

it was the most viral application of all  time. Imagine it was just had one operator,   right? So it's chewing on everybody's problem  all at once. It's like serving, you know,   100,000 peanut butter jelly sandwiches all at  the same time. And if you think about, well,   what's, how big of a single sandwich could it  make? And it's like a pretty significant one.   And so when we, we get so impressed that it can  pass these tests and do this thing, it's like,   but that's just one slice. That's just one bologna  slice. Imagine if you took that kind of a system  

and tasked it to do a single problem, you know,  what would you get out of that? So, um, I think   it's reasonable to suspect that there are systems  that are much more powerful. And as I said,   I almost hope that there are. Now, why do you say  that neural nets? We're not seeing the invention.   We're seeing the disclosure. Why not? We're seeing  the co-invention or the independent invention.   Like the same kind of thing, I think is kind of  what I mean. In other words, you're not suggesting   that we've had neural nets and then the government  was saying, okay, let's disclose about some new   technology. Rather, it's like Leibniz and Newton.  They both developed calculus, but independently,   Newton may be first if you're on the Newton camp.  Yeah, I think it's, I think it's the kind of thing  

where it gotten to the point where you could  redevelop it for just a few million dollars or   even less essentially. Um, you know, so some of  it, I think of, you know, things like, uh, truth   and reality is sort of like, like what's called  percolation. That it doesn't matter if there's a   leak. It matters the size of the leak. And if  it's gone critical across a network, you could  

have told, you know, Turing could have known all  about it. von Neumann could have known all about   it. But unless that's going viral, essentially,  it doesn't matter how many people know about   something. If that number of people is below a  certain threshold. For many of these technologies,   you don't see that there's something inherent  in competitive markets that are what drives the   invention of these and that the government doesn't  have that same incentive structure inside. No, no,   I, I do a hundred percent believe that the market  forces are very good at tuning these things up.  

So if these things existed, um, they probably  cost a fortune to run. Um, Richard Hamming talks   about in the early days of computers, he was at  Los Alamos. They cost a dollar a second to run,   right? Just extraordinary costs. And so imagine  you had something like ChatGPT three. And you had  

it 20 years ago and it could write a nice essay,  but it costs a hundred thousand dollars each,   a pop, you know, like what would you do with  it? Um, or the ability to create a deep fake   photograph, but each one costs $500,000 or  something like that. Um, the most expensive   haiku. Exactly. Exactly. Right. The, the Wagyu,  the AY5 Wagyu or whatever it is. Right. And,   uh, nobody's going to eat that. Nobody's going to  eat that essentially. Um, but at a certain level,   you know, it might be worth it at least to keep  that technology alive. I see. Now, isn't there   something about it being trained on data that is  recent, that increases the intelligence of the   model? And so even if it was the case that in the  nineties, this was there in the government at some   rudimentary form, it would be a rudimentary form  that would be so bloated in cost. Right. And then  

that would also compete against other technologies  inside the government that also have a bloated   cost. Well, you know, I like this thing you said  about only recent data and I'm actually fascinated   by the, the opposite. What I'd love to do is to  see models that kind of live in a time bubble and   train them up to a certain century or decade, and  then cut it off. Don't tell it it's in the future,   right? Give it only ancient philosophical texts,  give it only science up to year blank, and then   see, can it, can it, can it run it forward  and what kind of insights would it have? Super   interesting. Okay. You mentioned Richard Hamming.  Now, when we met two years ago, I believe you told   me about Richard Hamming series on YouTube and  I watched all of it. So please tell me why you  

were so enamored with that, what you learned  from it, why the audience should watch it. Um,   yeah, it's, it's easily the best to call online  class. I think is the best name is the best course   lecture series. I think I've ever seen. Um, it was  recorded in, I think 95 by Dr. Richard Hamming of,  

of Bell Telephone Laboratories and Los Alamos.  And he goes through a fantastic overview. Uh,   he calls it learning to learn the science of art  and engineering, uh, the art and science of, uh,   the science and engineering. And it's the,  he, he talks about trying to prepare people   for their technical future. And he even explains  that the course isn't really about the content.   It's a, it's the sort of the meta. And he uses  that just as a vehicle to get across essentially   a lot of stories. He discusses the idea of  style and how important it is. Um, you know,   he describes early on that he felt like he was a  janitor of science, sort of sweeping the floor,   collecting some data, running some programs,  a part of the machine, but not a significant   piece. And he wanted to kind of make an impact and  he discusses trying to change the way he looks at  

things, namely in terms of style. And he doesn't  try to describe that directly. That's kind of   the content of the course. And I would encourage  everybody to go and look at it. He goes through   the history of AI. He goes through the history  of technology, of mathematics, of quantum and   so on. And he discusses neural networks and, um,  you know, some very farsighted things. And it's  

accessible. It's extremely accessible. Yeah. There  aren't equations as far as I know, he doesn't   write on the blackboard much. Yeah. The board is  so blurry. The board is so blurry. Unfortunately,   you can't really see them when he does, but that's  not really the point, but there is actually a,   a book. And so I think it's actually now back in  print and I think you can find it on Amazon and,   uh, it's a fantastic text. So if you're more  into reading, you can go through it that way,   but I encourage everybody to give it a listen.  He's very inspiring, uh, particularly the first  

and last episodes on you and your research.  They'll really get you jazzed up and pumped   about your work. What insight have you taken that  you've applied recently? It's a good question. Um,   I can go first if you like. Yeah, please. Well,  one is when I was speaking to Amanda Gefter about   quantum mechanics, the cubists tend to say, look,  we're the ones who are rationally evaluating what   quantum mechanics is and then inferring our  interpretation atop that. And Richard Hamming  

had a great quote where he said, people, including  Einstein, including Bohr, they start from their   metaphysical assumptions and then build a top  their interpretation of quantum mechanics. And in   fact, you can look at someone's, whatever someone  prefers as an interpretation of quantum mechanics   and infer their metaphysics. Right. So it reminds  me of a couple of things. One, um, with Bohr and   Bohr, I think it was to Einstein. He said, and I  love this quote, you're not thinking you're merely   being logical. And that had a profound impact  on me. And it led me to think about different   modalities of the brain, maybe these different  agents in popular psychology. And which is now  

I think becoming more important, this idea of  left versus right brain that neuroscience kind   of ignored for a long time. They said, that's just  folk psychology, but I think there's a lot more   to it than that. Um, and so I've been, I've been  looking in that direction. Um, there's a fantastic   book. It's actually about how to draw like  sketching. It's called how to draw on the right  

side of the brain. And I'm going through this and  I'm like, this is the best neuroscience intro I've   come across because, um, in learning how to teach  people how to draw the author, she realizes that   people have this very different ways of thinking  and maybe like the emotion kind of idea. You have   to be able to turn off some of these capabilities  to have the other take the center stage. You know,  

we all know that ego is kind of a hog of the, of  the spotlight and to get this other, uh, let's   just say more sensitive aspect of our mind, which  is responsible for seeing the bigger picture and,   and drawing things. You have to, you have  to think very differently about that. And   it also reminds me of the thing I really like  about hamming. I mentioned at the beginning,   this, this idea of tolerance of ambiguity, and he  really emphasizes that throughout the course. And   I've tried to do that. It's not easy to do,  uh, cause you feel a little schizo doing it  

because as he says, you have to both believe  and disbelieve in an idea at the same time,   you have to believe in it enough to entertain  it, to start thinking on it and work on it and   potentially make progress. But if you believe it  too much, then you'll never make any progress,   right? Einstein believed in his idea of space  time too much, and he was unable to appreciate   and make contributions in quantum mechanics  because his belief was too strong. And so this   idea that you have to believe and disbelieve at  the same time, this, this non Aristotelian logic.  

It just because it's, just because it's true, you  know, we, we always think, okay, if it's not true,   it has to be false. If it's not false, it has to  be true. No, there's a lot of, of space in between   those. And we, we don't have much training as  scientists. Uh, you know, I think as, as trained   as a physicist, I had, I was very vulnerable  to not being able to see that middle ground   for a very long time. Have you read the master  and his emissary by Ian McGilchrist? You know,   it's funny. I, I love that. Uh, I was just like,  just watching it. There's a great documentary on   it. And, um, that's one of my favorite ideas with  this left and right brain that we have, you know,  

many selves in there and that's this, these many  agents and they're very different and they both   perceive the world in radically different ways.  What bothers me about the criticisms on the whole   left brain versus right brain is that they tend to  just be about, well, functions aren't localized to   the left or to the right solely. And I'm like,  okay. But to me, that's not the issue of left   brain versus right brain. It's modalities, like  you mentioned, that word modalities, that there   are different modules in the brain. And they can,  the fact of them being separated by hemispheres is   the least interesting part to me. Right, right.  It reminds me of an idea that I, I been trying  

to put together. So we had this science called  thermodynamics and it was about heat and energy   and work and things like that. And then later  we got the theory of statistical mechanics and,   and, and Boltzmann came along and he said, well,  let's redo this and assume that we actually have   a bunch of little atoms and that they're moving  around and we can do these probability theory.   And you get to essentially the same answers. But  what's, but what's fascinating to me kind of as  

a metaphor is thermodynamics is a very successful,  uh, branch of science is very powerful predictions   and it does not presume the existence of atoms.  And so as a metaphor, I want to think of a kind of   neuroscience or a kind of brain science that does  not presume the existence of neurons. Interesting.   Now, obviously we know there's neurons, right?  We could, we can see them. It's an extraordinary  

powerful, uh, the neuronal hypothesis has been  revolutionized neuroscience. I'm not suggesting   that's not the case. But what I'm saying is  we're, we could be missing a powerful view.   And like you said, with the left and right brain  networks by forcing it into the paradigm of FMRI,   we're missing the point in some sense. And so I  would love to see a theory that kind of operates   at a higher level, right? And is not necessarily  trying to at every step, maybe at the end you   can go and see where it has this correspondence  principle with statistical mechanics. And we could   think of the mind kind of like, um, you know,  William James style, um, psychology independent   of particular neuronal structures and then  later go back and do the correspondence on it,   but not hold ourselves back from this kind of, of  thinking. So who's the modern day Jung Carl Jung?  

That's a great question. I think the problem  is, is, you know, academia doesn't tolerate   that kind of thing. Right. Um, I love your  recent episode with, uh, Gregory Chayton and,   and this kind of idea that it's hard in  the modern academic reality to have these   kinds of things to both believe and disbelief to  tolerate ambiguity is kind of not tolerated, um,   in a sense. So, you know, I think that's, what's  just so extraordinary about your channel and your  

community is it's one of the few places I've seen  in the world that allows this tolerance where,   you know, as a viewer, you can watch something and  you don't have to believe everything and you don't   have to disbelieve everything. You can kind of  just let it pour over you and, and look at these   different viewpoints. And I think that's, what's  just really refreshing about your, your group   and your community. Um, I don't see that many  places. Thanks, man. There's so many different   avenues I could take this. You have for people  who have just tuned in. Will, like I mentioned,   you're infamous in the famous and infamous in the  South Florida community in the AI scene. And so  

I'm happy to bring attention to you to the global  scene, at least in a small part. You're known also   for almost any topic. Someone could just ask  you a question and then you can just spout off  

2024-10-11

Show video