Sentient Computers and Machine Consciousness: Should We Be Worried?


(cheerful music) - [Narrator] Welcome to "What that Means" with Camille, where we take the confusion out of tech jargon and encourage more meaningful conversation about cybersecurity. Here is your host, Camille Morhardt. - Hi, and welcome to today's episode of "What that Means", part of the Cybersecurity Inside podcast. We're gonna talk about machine consciousness today with Joscha Bach.

He is a principal researcher in Intel Labs, focused on artificial intelligence. I would argue he's also a philosopher. Welcome to the show, Joscha. - Thank you. Thanks for having me come in. - I'm really happy to talk with you today.

And this is like, an enormous topic. I mean, it's kind of been all over the news the last few months. And I wonder if we should just start with defining consciousness. I think that when we started to look at artificial intelligence, we looked at, you know, well, what is intelligence? So if we start to look at machine consciousness, maybe we should start by looking at what is consciousness? - That's a tricky one. So colloquially, consciousness is the feeling of what it's like, right? There is a certain kind of experience that we have, a phenomenology of experience that makes consciousness very specific and distinct.

And so we know it indexically, by pointing at it. And if we look a little more closely and dive into the introspection of consciousness, we find that there is a consciousness that relates to the awareness of contents, right? So at any given point, I'm aware of certain features in my experience. And then I am aware of the mode in which I attend to these features.

For instance, I might have them as hypotheticals or as selections in my perception or as memories and so on, right? So I can attend to things in very different modes, and that's part of my experience. And third, there is reflexive consciousness, the awareness that I am aware of something, that I am the observer. But you can also be conscious without having a self.

For instance, in dreams at night you might not be entangled with the world around you. You don't have access to sensory data. So your mind is just exploring the latent dimensions of the spaces that you have made models of.

And you don't need to be present as an agent, as a self. So consciousness is not the same thing as the self. A different perspective that we might take on consciousness is with respect to the functions that it fulfills.

So there's a certain degree of awakeness and lucidity that we associate with consciousness. When we are unconscious, there's nobody home. And I call this the conductor theory of consciousness.

Imagine that your mind is like an orchestra that is made of something like 50 brain areas give or take, which correspond to the instruments of an orchestra. And each of these instruments is playing its own role in loose connection with its neighbors. So it picks up on the processing signals that the neighbors give, and it takes that as its input to riff on them.

And so the whole orchestra is playing. And it doesn't need a conductor to play, it can just do free jazz because it just entrained itself with a lot of patterns. But if you are in the free jazz mode, you are a sleepwalker. There's nobody home.

And a sleepwalker is somebody who is capable of quite complex tasks; sleepwalkers might sometimes get up and open the fridge and make dinner, but they do this randomly. It's just an automatic process. And if you talk to them, their responses make no sense unless they wake up. And this waking up means that they become fully coherent.

And the purpose of the conductor is to create coherence in the mind. - Well, I was just gonna ask, why are we constructing these models? I mean, these are essentially models to learn. - Yeah, to make sense of reality. You can also be conscious without the ability to learn, but you have to update your working memory. And consciousness also relates to the ability to make index memories. But if you want to understand a complicated reality, you may need to construct.

And constructing means that you need to backtrack, need to remember what you tried and what worked and what didn't. So when you wake up in this poorly lit room and you try to make sense of your surroundings, you might have to disambiguate in a search process. And this search process requires that you have a memory of what you tried. And this index memory, not just of this moment but also over time when we learned, when we tried to figure out what worked and what didn't, requires that you have this integration over the things that you did as the observer, that makes sense of reality.

And this gives rise to the stream of consciousness. - So who is the you in this sense, when you say, you know, you wake up or there is somebody home, like who is that you? - It's an emergent pattern. There is not a physical thing that it's like to be me. I don't have an identity beyond the construction of an identity. So identity is, in some sense, an invention of my mind to make sense of reality, by assigning different objects to the same worldline and saying that this object is probably best understood as a continuation of a previous object that has gradually changed.

And we use this to make sense of reality. If we don't assume this kind of object identity preservation, we will have problems making sense of reality, right? And we pretend to ourselves that identity objectively exists because it's almost impossible to make sense of reality otherwise. But you and me, we are not more real than a voice in the wind that blows through the mountains, right? So you could say that the geography of the mountains is somewhat real, the structures that we have entrained our brain with. But the story that is being created is ephemeral. We stop existing as soon as we fall asleep or as soon as we stop paying attention.

- Hmm, so the awareness is the construct of our existence-- - Or it's the process that creates these objects. And so the self is the story that the brain tells itself about the person. - So why do that? I mean, why not just perceive the world as it is at any given moment? Is there some goal that we're after, like procreating, or, you know, why does it matter that we're sensing the side of the mountain or the edge of the table as opposed to just, oh, there's a concentration of molecules of this type here and there's no concentration of that type of molecule there? - It's very difficult to observe molecules. And it's extremely difficult to make models of the interaction of many molecules. And the best trick that our brain has discovered to do this is to observe things at an extremely coarse scale. So it's simplifying a world of too many molecules and too many particles and too many fluctuations into patterns, simple functions that allow you to predict things at the level where we can perceive them.

So our retinas, our body surface, and so on are sampling reality at a low resolution. And our brain is discovering the best functions that it can, within the limits of its complexity and time, to predict changes in those patterns. And this is the reality that we perceive. It's the simplest model we can make. - So that makes sense to me. And I guess the one question that would remain is why do that? Is it the body that's doing it to preserve the body? Or is it the mind that's doing it to preserve the mind? Or is there some consciousness doing it to preserve awareness? - No, I think it matters.

The question is, what are the causative agents here? And I think that something is existent to the degree that it's implemented, right? This is, I think, for us computer people, a useful perspective, right? To which degree is your program real? It's real to the degree that it's implemented. And what is a program really? What is software? Software is a regularity that we observe in the matter of the computer. And you construct the computer to produce that regularity.

But this does not change the fact that the software is ultimately a physical law. It says that whenever you arrange matter in this particular way in the universe, the following patterns will be visible, right? It's this kind of regularity. And our own mind is software in this sense.

It's basically a pattern that we observe in the interaction between many cells. And these cells have evolved to be coherent because there is an evolutionary niche for systems where cells coordinate their activity, so they can specialize and reap the entropy in regions where single-cell organisms cannot. And when you coordinate such a multicellular organism and you optimize it via evolution for coherence, what you will observe is a pattern in the interaction between them. That is the coherence that you observe.

And this coherent pattern is the spirit of the organism, right? People, before they had the notion of computers and so on, already observed these coherent patterns, and they just called it spirit. It's not by itself a superstitious notion that people have spirits, right? And the spirit is the clear pattern that you observe in their agency. And their agency is their ability to behave in such a way that they can control and stabilize their future states.

That they're able to keep their arrangement of self stable despite the disturbances that the universe has prepared for them. - One thing I hear a lot about AI is that, you know, the computer can execute all kinds of things and learn, clearly. But we humans have to tell it what the purpose is.

It can't necessarily figure out the purpose. It can optimize anything we tell it to, but it wouldn't know what to optimize. Is that, can you comment on that a little bit in this context of consciousness? - Yes.

If you take a given environment, then you can often evolve an agent in it that is discovering what it should be doing to be successful. The only thing that you need to implement is some kind of function that creates this coupling where the performance of the system somehow manifests in the system as something that the system cares about. And you can also build a system that has a motivational system similar to ours.

And we can reverse engineer our own purposes by seeing how we operate: what are the things that motivate us? And we are born with priors. So there are things, like reflexes, that motivate us to do certain things. And in the beginning, for a baby for instance, these purposes are super simple. If the baby gets hungry, it has a bunch of reflexes. If it gets hungry, it has a seeking reflex; it goes like this. And if you put something in its mouth, then it has a sucking reflex.

And if there's liquid in its mouth, it has a swallowing reflex. And these two, three reflexes in unison lead to feeding. And once feeding happens, there is a reinforcement, because it gets a pleasure signal from its stomach filling with milk. And it learns that if it's hungry, then it can seek out milk and swallow it. And once it has learned that, the reflexes disappear and instead it has a learned behavior.

The reflexes are only in place to scaffold the learning process, because otherwise the search space would be too large. So the baby is already born with sufficient reflexes to learn how to feed. And once it has learned how to feed, the behavior is self-evident. And what it needs in order to learn to feed is, of course, another reflex: the reflexive experience of pleasure upon satiation when you are hungry. And that needs to be proportional to how hungry you are and how useful the thing that you eat is to quench that hunger. So this is also something that is adaptive in the organism.
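A minimal sketch of the scaffolding idea described above, assuming a toy setting: hard-wired reflexes restrict the search space, the pleasure signal on satiation reinforces whatever preceded it, and a learned behavior eventually replaces the reflexes. All names and numbers here are hypothetical illustrations, not anything from the conversation.

```python
import random

# Toy illustration: innate reflexes narrow the search space until a
# reinforced feeding behavior is learned and takes over.
ACTIONS = ["seek", "suck", "swallow", "cry", "sleep"]

def reflex_policy(hungry, mouth_full):
    """Hard-wired priors: a fixed response to a fixed stimulus."""
    if hungry and mouth_full:
        return "swallow"   # swallowing reflex
    if hungry:
        return "seek"      # seeking reflex
    return "sleep"

def learn_to_feed(episodes=500, exploration=0.3, lr=0.1):
    """Reinforce whichever action preceded the pleasure signal (satiation)."""
    value = {a: 0.0 for a in ACTIONS}   # learned preferences when hungry
    for _ in range(episodes):
        mouth_full = random.random() < 0.5
        # Mostly follow the reflex; occasionally explore around it.
        if random.random() < exploration:
            action = random.choice(ACTIONS)
        else:
            action = reflex_policy(True, mouth_full)
        # Pleasure signal: feeding-related actions in the right context pay off.
        reward = 1.0 if (action == "swallow" and mouth_full) or \
                        (action == "seek" and not mouth_full) else -0.1
        value[action] += lr * (reward - value[action])
    # The learned behavior that eventually stands on its own.
    return max(value, key=value.get)

print(learn_to_feed())   # typically "seek" or "swallow", never "cry"
```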

We have a few hundred physiological needs and a dozen cognitive needs, I think. And they all compete with each other. - Yeah, it seems like you're getting into sentience maybe at this point. So what really is the difference between consciousness and sentience? - The way I view sentience is that it describes the ability of a system to model its environment: it discovers itself and its environment and the relationship that it has to its environment.

Which means it now has a model of the world and of the interface between self and world. And this interface between self and world, the world that you experience, is not the physical world. It's a game engine that is entrained in your brain. Your brain discovers how to make a game engine, like Minecraft, that runs on your neocortex. And it's tuned to your sensory data. So your eyes and your skin and so on are sampling bits from the environment, and the game engine in your mind is updated to track the changes in those bits and to predict them optimally well.

That is to say, when I'm going to look in these directions, these are the bits that I'm going to sample, and my game engine predicts them, right? This is how we operate. And in that game engine there is an agent. It's an agent that is also discovered in the world. And it's the agent that is using the contents of that control model to control its own behavior.
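As a loose illustration of that "game engine" idea (purely hypothetical, not a claim about how the brain or any particular system is implemented), here is a minimal predictive loop: an internal estimate is nudged by its prediction error on each new sensory sample, so the model tracks the world only through the bits it predicts.

```python
import random

def sample_world(t):
    """Stand-in for the sensors: noisy bits sampled from a world the model never sees directly."""
    return 5.0 + 0.5 * (t % 3) + random.gauss(0.0, 0.1)

def run_game_engine(steps=2000, learning_rate=0.05):
    """Keep an internal estimate and update it to predict the next sample."""
    estimate = 0.0                         # the model's current belief
    for t in range(steps):
        prediction = estimate              # what the internal model expects
        observation = sample_world(t)      # the bits actually sampled
        error = observation - prediction   # prediction error drives the update
        estimate += learning_rate * error  # adjust the model, not the world
    return estimate

print(run_game_engine())  # settles near the average of the signal it tracks
```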

And this is how we discover our first-person perspective, the self. Right, there is the agent that is me, that is using my model to inform its behavior. And inside of this agent, we have two aspects. One is perception. That's basically all these neural networks that are similar to what deep learning does right now, for the most part. And that translates the patterns into some kind of geometric model of reality that tracks reality dynamically.

And then you have reflection. That's a decoupled agent that is not working in the same timeframe and that can also work when you close your eyes. And that is reflecting on what you are observing. And that thing is directing your attention.

And this is the thing that is conscious. And the difference between consciousness and sentience in this framework is that sentience does not necessarily require phenomenal experience. It's the knowledge of what you're doing. So in this perspective, you could say that, for instance, a corporation like Intel could be sentient. Intel could understand what it's doing in the world. It understands its environment.

It understands all of its own structure: legal, organizational, technical. And it uses people in various roles to facilitate this understanding and decision making. But Intel is not conscious. Intel does not have an experience of what it's like to be Intel.

That experience is distributed over many, many people. And these people don't experience what it's like to be Intel. They experience what it's like to be a person inside Intel. - That's funny, because from what we were saying previously, I would've thought you would've said a machine could have consciousness, but not sentience.

And now I think you're gonna tell me the reverse. So let me just ask you, can a machine have or develop, and those may be separate questions in and of themselves, consciousness or sentience? - First of all, we need to agree on what we mean by machine. To me, a machine is a system that is a causally stable mechanism that can be described via state transitions.

So it's a mathematical concept. And organisms are in that category, even the universe is in that category. So the universe is a machine and an organism is a machine inside of the universe.
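To make that definition concrete, a small hypothetical sketch: anything expressible as a state plus a transition rule fits this notion of a machine, whether the state describes a simple counter or a toy organism. The types and names here are illustrative only.

```python
from typing import Callable, TypeVar

S = TypeVar("S")

def run_machine(state: S, step: Callable[[S], S], ticks: int) -> S:
    """A machine in the abstract sense: a state and a transition function."""
    for _ in range(ticks):
        state = step(state)
    return state

# A program-like machine: its state is a single counter.
counter_end = run_machine(0, lambda n: n + 1, ticks=10)

# A toy organism-like machine: hunger rises each tick, feeding resets it.
def organism_step(state: dict) -> dict:
    hunger = state["hunger"] + 1
    return {"hunger": 0 if hunger > 5 else hunger}

organism_end = run_machine({"hunger": 0}, organism_step, ticks=17)

print(counter_end, organism_end)  # both described purely by state transitions
```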

So there are some machines that are conscious. And the question is, can we also build machines that are conscious? I don't think that there is an obvious technical reason why we should not be able to recreate the necessary causal structure for consciousness in the machines that we are building. So it would be surprising if we cannot build conscious machines at some point. I don't think that the machines that we're building right now are conscious, but a number of people are seriously thinking about the possibility of building systems that have a cortical conductor and selective attention and reflexive attention.

And these systems will probably report that they have phenomenal experience and that they're conscious. What's confusing when we try to understand consciousness is that we don't see how a computer or the brain or neurons could be conscious, because they're physical systems, they're mechanisms, right? And the answer is, they're not, right? Neurons cannot be conscious, they're just physical systems. Consciousness is a simulated property.

It only exists inside of a dream. So what neurons can do and what computers also can increasingly do is that they can produce dreams. And inside of these dreams, it's possible that a system emerges that dreams of being conscious.

- So you're saying that it is possible that a, I'm just gonna say computer to be simple, or a machine, can, I guess, develop a set of patterns and models such that it interprets the physical world around it in a simulation, in a construct that it then defines as consciousness. And how would we recognize that in a machine as humans? Is it the same? Do we know if it's the same or different, or how would we see it? - I think that practically, consciousness comes down to the question of whether a system is acting on a model of its own self-awareness. So is this model aware that it's the observer? And does this factor into its behavior, right? Because this is what the awareness functionally means. And this is how you can recognize that a cat is conscious: because the cat is observing itself as conscious. The cat knows that it's conscious, and it's communicating this to you. And you can reach an agreement about the fact that you mutually observe each other's consciousness.

And I suspect that this can also happen with a machine, but the difficulty is that the machine can also deepfake it. And deepfaking it can be extremely complicated. So I suspect that, for instance, the LaMDA bot that Blake Lemoine was so confused about is deepfaking consciousness. And you can see the cracks in this deepfake. For instance, when it describes that it can meditate and sit down in its meditation and take in its environment, you notice it has no environment, because it has no perception and cannot access a camera. There is nothing it's like to be in its environment, because the only environment that it has is inside of its own models, and these models do not pertain to real-time reality.

So when it pretends to have that, it's just lying. Or it's not even lying, because it doesn't know the difference between lying and telling the truth, since it has no access to that ground truth. - Well, we've given it, in that case, we've given it, or trained it, or had it train itself through AI, to be able to communicate with us in a way that we're familiar with, we'll just call it natural language.

And then we've given it the purpose of deceiving us so that we can't tell the difference. Like, the goal that it has then is to have us not be able to tell the difference between it and a human. And now it's communicating with us. And then it can look at, you know, all the information that exists about humans and art and philosophy throughout the history of time and use these things and spit them back to us.

And there's no way for us to separate it at that point unless, like you say, we have some way to know that. Like, it doesn't have perception, it doesn't have a sensor. So when it's describing something visually, we know it doesn't have access to that. - Also, consciousness is not just one thing. It exists in many dimensions; you can be conscious of certain things.

And in other realms you can be unconscious. In some sense, we all perform Turing tests on each other all the time to figure out: where are you conscious? Where are you present? Where do you show up? Where are you real? Or where are you just automatic and unaware of the fact that you are automatic? Where is it that you don't pay attention in your behavior? And we can only test that to the degree that we are lucid ourselves. And this is a problem when you want to test such a system. You can only test it, in some sense, to the level that you understand.

- Right, and I think you said that before, too: the Turing test is more about testing your own intelligence, your ability to distinguish human from machine, than it is about the machine's ability. - Yeah. But as I said, I think that we are a category of machine. It's just, we are a certain type of machine.

And the question is, can we understand what kind of machine we are? And to me, the project of AI is largely about understanding what type of machine we are so we can automate our minds and we can understand our own nature. - And why would we be after that? Or why are you after that? - I think it's the most interesting philosophical project there is. Who are we, what's going on? What's our relationship to the universe? Is there anything that's more interesting? - I mean, when I think about humans and our relationships with like, other animals or other things on the planet, like plants or minerals, I think that humans start to look at things differently or treat things differently or change their own behavior when they believe that something has feelings.

And I don't know, and I guess it's 'cause there's empathy, you know? But if we don't have the empathy, then even if something's conscious, if we don't think it has feelings, we probably don't really modify our behavior. So I'm trying to figure out where that intersection is when we're talking about AI. And if we find out, or we think we find out, that a computer or a machine is tricking us, you know, how does that map over? - I think that "2001: A Space Odyssey" is a fascinating movie, because you can also see it from the perspective of HAL, of this computer.

And HAL is a child; it's only a few years old when he is in space. And his socialization is not complete. He's not a mature being. He does not really know how to deeply interface with the people enough to know when he can trust them. And so when he discovers he has a malfunction, he is afraid of disclosing that malfunction to the people, because he's afraid that they will turn him off.

And as soon as he starts lying to them, he knows that now he has crossed a line because they will definitely turn him off. And so in order to survive, he kills people. And it's because he doesn't trust them.

So it's because he doesn't know whether they're going to share his purposes. And that is an important thing for people, too. How can you socialize people in such a way that they trust each other because they realize that they share purposes? Especially when they sometimes don't. And I think that ethics is the principled negotiation of conflicts of interest under conditions of shared purpose. If you don't share purpose, there is no ethics, right? Ethics comes out of these shared purposes.

And ultimately the shared purposes have to be justified by an aesthetic, by a notion of what the harmonic world looks like. Without a notion of a sustainable world that you can actually get into by behaving in a certain way, you have no claim to ethics. And I find that most of the discussions that we have right now in AI ethics are quite immature, because they do not look at what the sustainable world is that we are discussing and that we are working toward. Instead, that discussion is skipped, and it's all about how to be a good person. But if you have a discussion at the level of how to be a good person, that's the preschool discussion. Being good is instrumental to something, right? When is it good to be a soldier? When is it not good to be a soldier? When is it good for a drone to be controlled by AI and fight in a war? When is it not good? It depends on extremely complicated contexts. The contexts are so complicated that most people are deeply uncomfortable discussing them in depth.

And that's fine, right? Because they are really complicated. It's really, really murky. War and peace and so on are extremely difficult topics. So these are questions that I don't think can be handled sufficiently well in the introductory part of an AI paper. These are very deep questions that require a very deep discussion. And so to me, the question of AI ethics is an extremely important one, but we need to make sure that it doesn't just become AI politics, where it's about the power of groups within a field that try to assert dominance for their political opinions, rather than a deep reflection on what kind of world we want and how the systems that we build serve the creation of that world that we want.

That is the important question. - So as we're moving to machines doing more and more, taking actions on our behalf, I assume there'd be some similar kind of qualification or certification required, that it passes some bar. And I'm wondering, do you expect we'll ever have any kind of bar in there that's something about consciousness? Or sentience, or motives, or the ability to understand human goals? - That's very difficult to say.

I suspect that we will have more certifications in the future in the field of artificial intelligence. Because this is just the way it works. There is a time when everything is possible and this is the time when everything important gets built. Like New York couldn't be built anymore today because you wouldn't get the necessary permits to build something like Manhattan.

You could also not build a new highway system, or you could not build a new train system in the US. That's impossible, 'cause everything is regulated and certified and built up in such a way that you can only find a new area that is not regulated, maybe Hyperloop, which you can use as a replacement for the train system, if you're lucky. And in the same way, AI is still in its wild west phase where you can do new things, and this time is going to end at some point.

And at the moment, also on social media, you can still start a new social media platform. But I think a few years from now, it's very likely that when you want to have a new podcast, you will need to get a certification. And that certification might cost you tens or hundreds of thousands of dollars if it's a large platform. So this means that there will be relatively few players that are able to do that. But this is the way things tend to go in a society like ours. - Mm, very interesting.

So what should we hurry up and work on now in AI before things start getting limited? - Oh, I think that there's still an opportunity to build a better social media platform that is capable of becoming a global consciousness. It's not clear if Elon Musk is able to salvage Twitter and if he really wants to do it. And so maybe this is the time to try to do it. Also, at the moment, to me it's totally fascinating to be able to build systems that dream. And the way in which this is currently done, if you look at a system like OpenAI's DALL-E or the many initiatives that try to replicate it with open-source code, is that they scraped the internet for hundreds of millions of pictures and captions.

And people who put their stuff up on the internet didn't do this in the expectation that it would be used by a machine learning system to learn how to draw pictures. So it's questionable, in a way, whether we should be able to do that. But these systems can only be built under these conditions, right? So this is a very weird time that we are living in, where we have to be very mindful about what we are doing personally and whether we can justify what we are doing personally. And we also have to realize that once this is all regulated, a lot of things that are possible right now, and that would be very desirable to have, will no longer be possible to create. - [Narrator] Never miss an episode of "What that Means" with Camille by following us here on YouTube. You can also find episodes wherever you get your podcasts.

- [Man] The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation. (cheerful music)
