David Chalmers | The Metaverse, Virtual Worlds, and the Problems of Philosophy
What's up everybody. My name is Demetri Kofinas, and you're listening to Hidden Forces, a podcast that inspires investors, entrepreneurs, and everyday citizens to challenge consensus narratives, and to learn how to think critically about the systems of power shaping our world. My guest in this week's episode is David Chalmers. David is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain and Consciousness at New York University.
He's probably best known for formulating the so-called "hard problem of consciousness." And I've been a fan of his for as long as I can remember. So when David told me that he would finally be coming out with his long-anticipated book on the subject of technophilosophy and virtual worlds, I knew that it would be the perfect opportunity to have him on the podcast.
David's central thesis in his new book, and the one that we explore in this conversation, is that virtual realities like the so-called metaverse are genuine realities, and that we may in fact already be living in a virtual world, which I know will sound crazy to some people. But if you've heard any of my past podcasts with Patrick Grim on mind-body philosophy, or Jim Holt on the philosophy of science, you will know that at the very least, based on what we already know or think we know about the material structure of the physical world, our perception of that world is a highly embellished simulation of what's really going on. We perceive only a fraction of the information that's available to us. And from that, our minds create what are effectively elaborate maps full of colors and smells and feelings. The feeling, for example, that the ground beneath my feet is solid, or that there are clear physical boundaries between my body and your body, or between this microphone and the table in front of me.
After all, classical mechanics, Newtonian physics, tells us that the world is made of atoms, and atoms are almost entirely made up of empty space. So how is it that the world, myself included, can feel so solid when there's almost nothing there? This is just one of the questions that we contemplate in today's episode as part of a broader conversation about the nature of reality, consciousness, and the knowability of the external world. What is knowable, and how do we know what we think we know about the world around us? In the second part of our conversation, we spend our time trying to imagine not only what it would be like to live in a simulated world, but whether or not it would be possible to live in such a world without becoming a slave to someone else's reality, and whether or not so-called Web3 protocols could play a role in helping people live freer and better lives in the metaverse. For both existing subscribers and anyone new to the podcast, we've moved our premium subscription off of Patreon and onto Supercast, which you can now access directly through the Hidden Forces website at hiddenforces.io and get immediate access to the second part of my
conversation with David and every other guest in a single, unified episode, which will be delivered directly to your podcast app of choice every week, or whenever we publish anything new. Subscribers to our Super Nerd tier will also have access to all of our episode transcripts and the all-new intelligence reports, which are the Cliff's Notes of the Hidden Forces podcast, formatted for easy reading of episode highlights, with answers to key questions, quotes from reference material, and links to all relevant information, books, articles, etc., used by me to prepare for each and every conversation. And with that, please enjoy this deeply thought-provoking conversation with my guest, David Chalmers. David Chalmers, welcome to Hidden Forces. Thanks. It's a real pleasure to be here.
No, the pleasure is all mine, David. You and I have been in touch for like five years about you coming on the podcast. Yeah. We were going back and forth. And I think I said, any day now I'm going to have a book coming out. And then, how about then? And you know these books take a little while to write.
That's correct. I knew I wanted you on from the very beginning. Many of our early episodes were actually science, tech and philosophy based.
And you were obviously one of the people I wanted to have on, because I've always been a huge fan of yours. I probably first saw you on video when you were part of the documentary feature included with the third Matrix film; you were one of the philosophers they brought on. They brought on Cornel West and some other folks. And like many other young people who are interested in philosophy, I was always interested in these sort of hard questions, these existential questions.
And of course, you are most famously known for posing the, quote, "hard problem of consciousness," which I'd love to talk about today. It's certainly going to come up as part of our larger conversation. For people who may not be familiar with you, what's kind of the basic David Chalmers bio? Yeah. Well, I'm a philosopher interested in the mind and in reality. I actually started out as a math and science geek. I got partway through a PhD in mathematics before getting just totally obsessed by problems of the human mind and the problem of consciousness.
These seem to me to be the most fascinating, most difficult unanswered questions in science and in the universe. How on earth is it that the brain gives you consciousness? And at least at that point, it seemed to me that the best way to approach this, taking the big picture perspective, was through philosophy. So I ended up switching into philosophy and I did a PhD in philosophy and cognitive science where I thought about artificial intelligence, I thought about the mind, but especially I thought about consciousness.
I ended up publishing a book back in the mid-90s called "The Conscious Mind", which was all about the problem of consciousness, the hard problem, as you said. And for a lot of my career, I've focused on the mind and on consciousness. And it's been very exciting as a rich and robust science of consciousness has begun to develop. But lately, maybe over the last two decades, but especially the last few years, I've gotten really interested in reality. I mean, philosophy's all about the relationship between the mind and reality.
How do we experience, and what's out there? And yeah, thinking about movies like The Matrix is an amazing way to think about reality. When The Matrix came out in 1999, it provided this wonderful, concrete way of raising so many issues about reality, like how do you know you're not in the Matrix right now? How do you know any of this is real? And yeah, I ended up writing something for the Matrix website in the early 2000s. They produced that documentary for, I think, back then it was "The Ultimate Matrix DVD Collection", with a few philosophers talking about philosophical ideas in the Matrix. And all of that basically led on a path to this book I've now written, "Reality+: Virtual Worlds and the Problems of Philosophy", which is basically using the idea of virtual worlds, like the Matrix, that is, computer-generated realities, to think about the technology which is coming, and to think about the way that all that connects to some real big issues in philosophy.
Well, maybe we'll have a chance to talk about the Matrix and your involvement in that film, and what you thought of it then and what you think of it now, in the later part of our conversation. But just hearing you talk about it makes me think not just about how ahead of its time it was, but also that towards the end of the 1990s, or maybe the early 2000s, there was an interest in these kind of surreal films, like Vanilla Sky. And I can't quite remember when Inception came out, but there was this period where people were interested in the simulation hypothesis and in this larger question of, what is the nature of our world? Can we really know what the nature of our external reality is? Are we living in reality? What is reality? These questions seem to be more in the minds of people today than they were at any previous point in my lifetime.
I'm curious if you would agree with that, and if so, why you think that might be? Yeah. Well, these questions have really taken off. I mean, you can find versions of these questions even in very ancient philosophical traditions. The ancient Chinese philosopher Zhuangzi said, how do I know that I'm not a butterfly dreaming that he's Zhuangzi? How do I know that I'm not dreaming right now? René Descartes, in the 1600s, said, how do I know an evil demon isn't fooling me into thinking all this is real when none of it is real? But yeah, the version of this question we ask these days is, how do we know we're not in a simulation? How do we know we're not in a virtual reality? I mean, in a way it's kind of a contemporary analog of those old questions.
But computer technology has suddenly made that question much more real because we now, actually, have the technology to develop simulated worlds, virtual realities, and so on. This actually starts to appear in science fiction. You get versions of this idea, even in the 1960s, very soon after the computer is invented. In the early 1970s, the German movie/TV mini series, World on a Wire (Welt am Draht) by Rainer Werner Fassbinder, came out.
And that's often regarded as the progenitor of many of these simulation movies to come. But you're right, in the 1990s we got The Matrix, but we also got The Thirteenth Floor. And I think Inception was a few years later, around 2010.
But these days, the idea is just almost taken for granted as part of popular culture. Every second episode of Black Mirror does something with computer simulations and the relationship between simulation and reality. Maybe The Matrix made this idea universal, to the point where people now just riff on it in so many different ways. Do we upload ourselves to a simulation? Does someone capture someone and put them into a simulation? There's even one where people go on a bunch of dates in a simulation to see if they're going to be compatible.
So yeah, I think partly it's because VR technology is now all around us. There are these headsets, for example the Oculus Quest, which has become very popular and which a lot of people are using. Augmented reality technology is coming.
We got this big movement into the metaverse, of course, that all the corporations are pushing. So I think that makes the issues come alive as relevant and growing out of current technology. I mean, it's no longer just science fiction, soon it's going to be science fact.
Do you feel like some of it has to do with the fact that we're in one of those phases in society where the world around us is changing faster than in previous periods and so people feel sort of ungrounded? And as a result of that, they are reaching out, not just with interest in the simulation theory, but also there's something else, David, that I've noticed, which is that, I mean, you're a philosopher, but there was a time when being a philosopher was a more easily categorizable thing. Philosophers did philosophy. They existed in sort of academic departments where other people like them sat around and talked about these larger existential, largely impractical concepts. Things that might be interesting intellectually, but didn't really have practical significance. But it seems that today, philosophy has increasingly moved out of the classroom and into the world. And more and more people are practicing philosophy.
And also the philosopher himself or herself has really been unshackled from the academic department. And more and more people are either philosophers from academia who have podcasts or who are out in the world or people who have one way or the other taken on many of the qualities of a philosopher and are, again, engaging with the public. And I think, again, some of that is the fact that the technology has changed.
And so people feel the need to engage with this type of material because of the practical relevance of AI and VR and stuff like this. But I think also, and this is what I'm asking you, it seems to me like we are in this period of existential confusion and existential angst. And so people are trying to fall back to first principles. Yeah. I mean, philosophy got started as this pretty practical discipline.
Socrates is one of the most famous philosophers of the ancient Greek era, did his philosophy in the town square, talking to people, haranguing people, getting into arguments with everybody. And the philosophers and the scientists were more or less continuous and philosophers would think about, Plato put forward one of the first great models of government. So it was also very political, very practical.
Maybe for a while in the 20th century, philosophy went into an ivory tower mode where it was mostly academic. But actually, even Isaac Newton thought of himself as a philosopher when he came up with physics. So many disciplines have been spun out of philosophy. So there's a long tradition of philosophers being very publicly engaged. And I do think we're getting back to that now, even if I compare philosophy now to philosophy 30 years ago, when I was doing my PhD.
Back then, academic philosophers were a bit suspicious of someone trying to engage the general public. Whereas now, I think it's regarded well. We have a lot to say; the world right now is changing fast in so many ways. Maybe it's social movements, like rights, say, for LGBT people, new ways of thinking about social justice; maybe it's science, all the amazing stuff that's coming out of, say, quantum mechanics or genetic engineering; but especially technology. Technology is moving so fast. This is the place right now where everything is moving fast.
In particular, information technology. It's a new paradigm every 10 years, whether it's the internet, the smartphone, social media, the so-called metaverse. This is all changing very fast. And it obviously interacts with the structure of society. The internet has brought so many great things, it's built communities and so on, but it's also led to a lot of fragmentation. So I think, yeah, we need to reflect very hard on the technology we are dealing with.
And I think philosophy has a very central role in reflecting on that technology. And that's what I'm trying to do in this book, actually, something I call technophilosophy, a two-way interaction between philosophy and technology: using philosophy to shed light on pressing issues about technology, but also using technology to shed light on some very traditional questions in philosophy. I think it goes both ways. How does that compare to neurophilosophy, using what we know about the brain to ask questions about what we know about existence in a philosophical sense? Yeah. I got the word technophilosophy as a kind of an adaptation of the idea of neurophilosophy, which was first put forward in the 1980s by the philosopher Patricia Churchland, who is very big on the role of neuroscience.
She says to do philosophy properly, we have to understand the brain. We can think philosophically about neuroscience, but we can also use neuroscience to shed light on some of the most important questions in philosophy. Like what is the mind and its relationship to the body? How does the mind construct reality? She was saying, pay attention to neuroscience and that will help solve these philosophical problems. Well, I want to do the same thing, but with technology playing the central role.
We can think philosophically about technology. And when we do that, thinking, for example, about virtual realities, that can actually help us address a number of very central philosophical questions about reality. What is reality? How can we ever know about it? What is it to live a really good life in reality? Just as Churchland thought that thinking about neuroscience can help us address philosophical questions, I'd like to think that thinking about technology can do the same. What's interesting about your book is that, while it's obviously, I think, at heart about these deeper existential questions (the nature of existence, how do I know what I think I know, what does it mean to know something, what is reality, what is the nature of reality, what makes something real?), there is a fundamentally different ethical, moral interpretation of this world in your book than, say, in a film like The Matrix. In The Matrix, there was a clear sort of moral framing around, I suppose... well, at least that was my interpretation of the early films.
The latest film has changed our thinking around that. But the early films really did make it feel very much like a battle between the machines and the humans. And while the last film did show that they were both one, that they both needed each other to exist in this world, there was this clear ethical framing that clearly you don't want to live in the Matrix. That if you have to choose, you're better off choosing to live in the real world, right? Wasn't it Cypher who said that if he had to choose between the real world and the Matrix, he'd choose the Matrix? And he was obviously an evil character.
He was a villain in the movie. I want to think about that as we discuss your book. So maybe to structure this conversation properly, lay out for me, again, in more detail, what the central thesis of your book is and how it relates to the simulation hypothesis, and what that is? Yeah.
So the central thesis is, virtual reality is genuine reality: the objects in a virtual world really exist. If I'm in a virtual world, like the Matrix, interacting with tables and chairs and those things, they really exist. They're real entities. They may ultimately be digital entities. Virtual reality is a digital reality, but it's no less a genuine reality for all that. I also want to say, yeah, we could be in a simulation, like the Matrix.
That's the simulation hypothesis. I don't say that we are, I think we can't know for sure, it would be indistinguishable, but I think there's at least a significant probability that we're in such a simulation. And at this point, many people will go, "My God! That's terrible. If we're in a simulation, nothing is real. My whole life is meaningless."
But actually, what I want to say is, no, that's the wrong reaction. If we're in a simulation, this is all still real. Your life is just as meaningful as it was before. So think of it as a bad news, good news combination. Bad news, we might be in a simulation. Good news, if we are in a simulation, then life is not so bad.
You can still lead a meaningful life. And you're right that this is opposed to the philosophical spin on simulations given by The Matrix. The Matrix gives you the impression that to be in a computer simulation is very bad, it's awful, it's a dystopia.
We have to escape. What I want to say is, I agree that the Matrix in the movie is bad, but I don't think it's bad because it's a simulation. I don't think it's bad because it's virtual. I think it's bad because it's a kind of a prison that the machines have constructed to lock people in and control them and deny them their possessions and deny them their rights and their autonomy, and so on. That can be bad, even quite independent of being in a simulation. Look at, here's another movie, The Truman Show.
Jim Carrey's character in The Truman Show is not in a virtual world. He's in a physical world, but his life is totally controlled by this film crew. And yeah, a lot of this stuff is not happening to him genuinely. It's all scripted. We want to say, okay, that's not real.
That's very bad what they're doing to him. So that's not because it's virtual, it's not virtual. It's because they're manipulating him and deceiving him.
And likewise, we could have a virtual world that was quite unlike the Matrix, say one that we enter ourselves and choose to build our life in as people are already sometimes beginning to do with virtual worlds. And then we could exert our autonomy. Actually, a movie with a more positive spin on virtual worlds is the recent movie, Free Guy.
Have you seen that one? I haven't, but my wife wants me to see it. It's with one of our favorite actors. Yeah. Yeah. Ryan Reynolds plays a non-player character in a video game.
And at one point, he starts interacting with some of the player characters who come in and he briefly has an existential crisis. Like, does this mean none of this is real? And then actually another character says to him in some of the wisest, philosophical words I've heard in a movie, "Come on, I'm sitting here with my best friend, trying to help get him through a tough time. If that's not real, I don't know what is."
Wow! That's pretty profound. Yeah, it is. And then they set about saying, "Okay, this is our reality.
We deserve rights. We deserve respect." And they start protesting in favor of civil rights for these AI creatures. And they take the attitude, all this is real and we've just got to make it better. So that brings us back to the hard problem, David, and not the hard problem of speaking to a philosopher like you, which is a hard problem in and of itself, because you actually raised a number of things that I want to tease out. And when I refer to the hard problem in this case, I'm really asking, first of all, what gives rise to the sense that we are conscious, that we are alive, this experience that we all seem to share, but that we can't quite know for sure whether other people share, because it's really impossible to know what's going on inside someone else's conscious experience? And so what constitutes the type of conscious experience that would make someone a genuine person, in the way that the reality we're talking about would be genuine? But before we go down that whole rabbit hole, I actually want to focus in on one of the things that you said that I think is interesting.
And I also want to point out, I totally agree with your point around what makes the Matrix dystopian. And I think at the core of that is the fact that you are being controlled. You're being controlled by another external agent, someone or something with designs on your life. And that will lead us to a conversation that I do want to have, which is: is that an inevitable outcome of living in a world where you increasingly move up the stack, so to speak, and have less and less contact with base reality, to the extent that we can even agree on whether or not something like that even exists ontologically, or if reality is itself just relative relationships? Again, "it from bit", and then lots of interesting conversations to have there. But in terms of the simulation, let's first of all define what we're talking about, because you're describing a computer simulation, but the same sorts of philosophical tools that we use to open ourselves to the possibility that we are living in a computer simulation are the same that we would use to open ourselves to the possibility that we are living in a universe created by a deity, by either an Old Testament God, or let's say a Pantheon of Greek gods or anything like that. And it's also in some sense, though, not exactly how we would arrive at the realization that we are in a sense already living in a simulation.
I mean, we perceive only a small portion of the physical world as it is. We take that in, and then our brains don't just fill in all the gaps, they also create a sort of illusion, a sort of simulation of the world so that we can navigate it. This is what science tells us. So what do we mean when we talk about simulation? Well, when I talk about simulation, I mean the idea that reality is generated by a computer process.
It's very important that it's computer simulation. We already had other forms of simulation. We have the imagination, we can imagine things.
That's a form of simulation. We can dream. Dreams are a form of simulation. And that's why this connects so well to those ancient questions.
Like how do you know you're not dreaming right now? But the contemporary version of this is the computer simulation. So when I talk about the simulation hypothesis, I always mean the idea that all this, this reality around us, all of our experiences of external reality, are generated by a computer process. It's got to be computer generated.
It's also got to be immersive. It's got to be something you experience all around you, and it's got to be interactive. You make a difference to what's going on in the simulation.
These are actually the three core criteria of virtual reality technology. It's computer generated environments that are immersive. You experience them in three dimensions all around you, and they're interactive.
You interact with them. If you meet those criteria, then you have a computer simulation. One way to connect that is to the very practical virtual reality technology that's coming. But the other way to connect that is, yeah, could we ourselves be in such a giant simulation already? And you're right, this raises the question of simulations within simulations within simulations. Could we be in a level-five simulation, a level-ten simulation? And you asked, does this mean that our lives are somehow under someone else's control? I guess what I want to say is, just because we're in a simulation, it doesn't follow that our lives are under someone else's control. It could be that people just set up a simulated universe at the beginning, set up some laws of physics, set up some initial conditions, and then they just let their simulation run.
Now, it could be that they're interfering with us constantly and say, "Let's get them to do that. Let's make that guy write a book on simulation now." Sometimes I worry that's happened to me.
But mostly they could just be setting it up and letting it go, roughly the way things happen in a non-simulated world. But you're totally right, this does connect to ideas about theism and about gods, because exactly the same issues come up for non-simulated worlds. If you think there might be a God, the question is, is God controlling our actions, or did God allow us free will? One way of doing it is, God just sets it up and then lets us do our thing. The other way of doing it, God is in constant contact, manipulating us, manipulating our environment. And as before, I'd say, yes, simulations could go either way. Yeah, the creator of a simulation really is a kind of god of the simulation.
If we're in a simulation, our simulator created all this. They're all powerful. They may be able to control all of this. They're all knowing, they may know what's going on.
That's some of the properties of a traditional God. Does that mean that they're thereby controlling us? Well, I think it depends on whether you've got a hands-on God or a hands-off God. Likewise, is it a hands-on simulator who's controlling everything that happens to you, the way that Truman gets controlled in The Truman Show, or the way The Matrix characters sometimes get controlled? If so, then I would say that's bad, whether it's a simulator or a God in a non-simulated world. But if it's a simulator who just sets us up and then lets us go, lets us build our lives, then as far as I can tell, that's no worse.
That's on a par with a world where God sets things up but then gives us free will. So again, I think it's not the simulation that makes it good or bad; it's like, you've got this technology, and it could be used in good or bad ways, for good or bad lives. Right. So actually, here's a great place to draw some distinctions between the ethical and the epistemic.
Ethically speaking, it sounds like what you're saying is, it doesn't really matter what the physical nature of the world is. And I use that term loosely, but I think you know what I mean? The substrate. The underlying stuff it's made of. Yeah.
Right. What matters more than the nature of my reality are certain things like: do I have free will, for example? Am I in charge of my own life? That's what constitutes a genuine experience. Because, again, to the point I made earlier, science tells us that we are, in some sense, living in a simulation. That is what our world is.
Yeah. We perceive it all through our brains, if we take a materialist point of view. And then looking at the epistemic point, I want to ask this question, which I think is central. And we'll flip between these two sort of poles of the ethical and the epistemic. How do we know anything about the external world? This has been a problem since the very beginning. You mentioned Descartes; people have been worshiping all sorts of deities and spirits for as long as we know, trying, it seems to me, to make sense of their internal reality alongside what they see in the external world, trying to bridge that gap and make sense of it.
I mean, we haven't really made a lot of progress in all those years. So how is it that we can even make any kind of statements? Or are we just engaging in, do you think, a very innate practice of trying to make sense of a world that we can never truly, fully, concretely know? I think it may be that we can't know everything about the external world, but I certainly think there are some things that we can know. I mean, I think modern science has been pretty successful at giving us some kind of knowledge of the external world.
Now, there are some things it might not tell you. It might not tell you, are we in a simulation or not? It might not tell you, for example, the underlying stuff that our reality is made of. If there's a reality that contains ours, it may or may not tell us about that. But I think at the very least, one thing we get from science is what I call structural knowledge of the external world, like the equations by which things interact and evolve, also the relationship between the mind and the world.
We get quite a lot of knowledge about that. What we don't get so much is knowledge of reality in itself. You mentioned Kant before, Kant distinguished the realm of appearances and the realm of things in themselves. Noumena and phenomena.
Yeah. Something like that: noumena for things in themselves and phenomena for appearances. And Kant said you can never know about the ultimate nature of the thing in itself. So maybe I can know there's a cup there, at least at the level of appearances, but what is the nature of the cup in itself? Kant says you can't know. And in a way, the simulation idea is related to this. On my view, I can know there's a cup here. I just don't know whether at the underlying level it's made of bits in a simulation or it's made of atoms in a non-simulated universe.
That's kind of a question about the cup in itself. And maybe I can't know that, but I can still know there's a cup here. I can still know it has a certain structure, that it will behave in a certain way. So I know about the cup at the level of science, which is kind of connected to Kant's level of appearances. But the way I put it in the book is, we know about the structure of things; we just don't necessarily know about the substrate they're made of.
So even if we can't know whether we're in a simulation, where Descartes and others would use that to say we therefore don't know anything about external reality, I think we can still know about the structure, the equations, for example, that govern external reality. We just don't know about the substrate they're made of. So that's, like Kant, accepting some element of the skepticism, that there are some things we can't know, at the same time as saying, well, there are other things that we can know. Well, I absolutely love that framing because I think it captures really the key distinction.
In terms of structure, science gives us the ability to make sense of the world, to make probabilistic statements, which reveals something about the structure of our world. But we can't fundamentally know, with any degree of certainty, the nature of that structure, right? That's what you're saying? Yeah. We can't know the substrate that it's embedded in.
We know the equations. At one point, Stephen Hawking asked, what breathes fire into the equations? What's the underlying stuff that these equations are describing? Science will tell us a lot about the structure, more and more equations, but it doesn't tell us about the stuff underneath the equations. Is it a computer? Is it bits? Is it consciousness? Is it something else? Science may not tell us about that; we may need philosophy. What are some of the most compelling theories about what ontological reality is? You talk about the "it from bit" hypothesis in the book. Do you think that's the most compelling interpretation, the thesis that reality at bottom is informational? That's one idea.
I definitely find it appealing that, yeah, the world is basically the interplay of bits. This is an idea that some physicists take seriously, and it obviously connects to the simulation hypothesis quite well, because if we're in a simulation, our world is an interplay of bits.
But there's always this question: what are the bits made of? Are they running on another computer? Is there a substrate for the bits? If there is, then that leads to what I call the "it from bit from it" idea. And then we have to know, okay, what are the "its"? What's the next level? Alternatively, we can stop and say pure bits: the world is an interplay of pure bits, and the bits aren't made of anything more fundamental.
And I like that. That's a more austere metaphysics for the universe: the universe is just pure information, not made of anything simpler. It's a very attractive and beautiful view. The question is whether it makes sense. Is that like describing the world as relational at bottom? Is that- Very close to that, yeah.
There's this general idea that I call structuralism. The world is basically... it's all about the structure of how things interrelate and interact with each other. And one very natural way to put that is, yeah, the world is basically just a set of relations between things. What are those things in themselves? Kant would say we'll never know, but the pure structuralist would say there is no thing in itself, there are just relations between things. Do you ever think about, when you were talking earlier and you mentioned "it from bit", and then you said maybe, it from bit from it? From it.
Yeah. The bits are referring to something. Right.
Exactly. And it made me think about how we always bump up against, whether it's in science or philosophy, the starting point. Human beings, I feel like, always need a starting point when they think about something, a beginning. And I wonder if that's also because the larger observation here is that we're limited by our language in how we can think about things.
So even when we think about information, an informational universe, we are limited by how we think about what information is, and also by the limitations of our knowledge. So how do you think about the fact that we're limited, that we have these predisposed ways of looking at the world? A classic example of where this comes up is when you compare quantum to classical mechanics, and the challenge people have in understanding the world in a quantum sense, or what our data tells us about the structure of the world if it's a quantum world versus a classical world. How do you think about that, and how do you incorporate it into your thinking about these issues? I don't know. I guess I assume that it's still in some sense early days in science and philosophy; we're not even close to final answers about these questions. So it's no doubt true that we're naive in all kinds of ways, that we're predisposed by evolution, by the structure of our brains, to be limited about these things.
I just take the view that we need to do the best we can to figure out the world. Where we can, we need to try and shake off some of our assumptions by testing them, and using some of these wild ideas, like, could we be in a simulation, could it be this or that, is one way to shake off some of those assumptions. But yeah, I fully believe that when the history of, say, science and philosophy is written, the year 2022 is not going to be put forward as, "My God! This is when they had all the answers." This is going to be very early in the playout of all these things. I hope, actually, that one of these days we might develop superintelligent AI beings which are much smarter than us at everything.
They might even be better than us at philosophy. So maybe once we've actually developed some superintelligent AI, they will have insights on this problem that go beyond any insights we could hope to have. And for example, maybe an intelligent enough AI will be able to solve the hard problem of consciousness for us. Well, I've had a number of episodes where we've discussed this. One with David Weinberger comes to mind, where in the introduction I posed a question: in a world where machines can give better answers to every problem than humans can, what is left to contemplate, and will philosophers basically be displaced by a class of priests whose job is simply to give you a sense of meaning around the decisions that are made for you? And so it's also a way of saying that the last people who'll lose their jobs are the philosophers. Once the philosophers lose their jobs, then we know that we're really in trouble.
What do you think the role of a philosopher would be in such a world, where machines are the more competent deciders on every major problem and our job is simply to follow their instructions? Do you think human beings can live in such a world? How would we find meaning in such a place? I don't know. Look, I think if machines are better than humans at everything, then this question arises for almost everything, not just philosophy, but science and engineering and even social interaction. Well, I guess that's true, but my point with philosophy is that philosophy precedes all of those other things, right? Philosophy grapples with questions that not only haven't we been able to answer, but arguably, some of which I personally don't see how we'll ever have an answer to, because they're fundamental questions about the nature of the external world: how can we know what we know? What is knowable? Etc.
I think, at least in the short term, over the next say 100 years or so, philosophers may actually very much be in demand, because we're going to have to answer some of these questions. For example, are our AI systems conscious? We wonder, do they actually get to have legal rights? What kind of recognition should we give them? That's partly a philosophical question. Or another one: we may be faced with the choice about uploading ourselves. Maybe we want to live forever, so we'll upload our brains to a computer. But then we worry, will I still be there at the other end? Will that be conscious? Will that be me? That's a philosophical question.
We're going to need philosophers as consultants to think about that. Or should we enter virtual reality? To address these philosophical questions, we'll need philosophical thinking. Of course, that doesn't tell us who's doing the philosophical thinking. Humans at first, but once we have machines in the loop, it may be that the machines are actually better at philosophical thinking than humans are, in which case, yeah, maybe we'll have machine philosophers to work through all this for us. I guess one way I would like to think about it, though, is that we don't need to make it humans versus machines. One possibility is there'll be an integration of humans and machines.
I mean, technologies are already extending our minds, our own cognitive capacities, with the internet, mobile technology, and so on. Eventually, AI may extend our minds. We may be able to upgrade our own minds by surrounding them with technology, by replacing some of the biology with technology. I mean, I would like to see a world where we or our descendants are still somehow at the leading edge of, say, science or philosophy and everything else, because it's not that we've created superintelligent beings, it's that we ourselves become the superintelligent beings. And maybe that's a version of this future where something recognizably human is still doing the science and the philosophy. Well, you've made that point with your term technophilosophy, right? That as computers advance and artificial intelligence becomes more intelligent, that provides us insights and allows us to do better philosophy.
Yeah. I think it's already happening. Technology already provides some very useful insights for philosophy. Even in the early days of the computer,
Alan Turing reflected on the possibility of machine consciousness and machine thinking. And this helped philosophers to think about human consciousness and human thinking; maybe these involve computer-like processes. So yeah, this has been happening for a while, but once machines start really doing the philosophy for themselves, maybe that will be technophilosophy on steroids: pose a philosophical problem, feed it to a machine, and see what it says. So that actually raises a really interesting question, which I kind of touched on very early on, which is: at what point can a computer completely fool a human being? And I don't just mean a very simple Turing test.
I mean the most advanced Turing test in the world, where you actually don't know whether that other individual is human. And further to the point, even if you knew that they weren't human, you're so fully invested in the human qualities of that machine that you hesitate to do it any harm; you want to treat it just like a human being. How do we grapple with such a society? Do we just by default end up treating machines like human beings, because we can't know anything about their internal experience, and so we simply have to assume that they're conscious? I know that people like Tononi have put forward these ideas of integrated information theory, and that as more information is integrated in particular ways, it gives rise to consciousness. How do we think about something like that? Yeah. I mean, this is already beginning to happen to a limited extent with the AI systems we already have.
I once met the robot Sophia, who is embodied in humanoid form, with a human face designed to express many emotions. She was just running some basic chatbot software, but still, interacting with this robot in humanoid form, it was hard not to have the sense that you were interacting with some kind of conscious being. We just automatically attribute consciousness to things that look like a human, that have a face, and so on.
That was a powerful sense, and that was just a very primitive AI system. Of course, we're not yet at the point where we have AIs that have standard human capacities for all kinds of things. I mean, Sophia's conversational capacity was very limited, but it's going to happen eventually that we have AIs as sophisticated as human beings. Perhaps, for example, even simulations of human beings. You simulate my brain well enough on a computer.
In principle, you might have a being that behaves a lot like me. And then the question's going to arise, are those beings conscious and do they deserve rights of their own? I guess I'm inclined to think that AI beings like that will eventually be conscious. It will feel like something from the inside to be them. They will have the subjective experience of perceiving, of thinking, of feeling, of acting.
And to me, that's what consciousness is all about, subjective experience. And I also think that once you have a being which is conscious, which actually subjectively experiences the world, then it has moral rights. It's not okay just to totally exploit them or destroy them or make them suffer. So I do think at a certain point, once we create AIs that are conscious, their lives will matter. They're going to deserve serious rights. And I think for that reason, we're going to have to be very cautious about creating artificial creatures if we want to be moral.
I mean, do we want to suddenly have a world where all these other beings, for example, have the right to vote, have the right to work, have rights equal to our own? It's going to be a mess. There's no question it's going to be a mess. But these are questions we're going to have to confront. So what does that mean about where consciousness comes from, or how consciousness arises? How do you think it arises? If you think that artificially generated beings that are sufficiently intelligent can be conscious, what do you think consciousness is? Boy, that's a hard problem, as they say. Well, that's why I brought you here.
No one knows right now what consciousness is. I myself am actually inclined to think it may be a fundamental feature of the world, in the same way that space and time and mass and charge are taken by physicists to be fundamental features of the world. I think if you try to reduce consciousness to something else, some more basic underlying process, that will never work. Could consciousness just be information integration? Well, it looks like, in principle, all that information integration could go on without consciousness.
There's a gap in the explanation. So I've been led to the view that consciousness can't be reduced to anything simpler than consciousness; it's just a fundamental feature of the universe. But that doesn't mean you can't understand it scientifically. We have an amazing science of space and time, of mass and charge, that's based on taking those things as fundamental. Right, the structure.
We can learn things about the structure of the world, but the nature of that structure, whether that nature is consciousness, is something we can't know, is your point. Well, I think we can know. Yeah, we can certainly know the equations of consciousness, as it were. For example, what physical processes are most directly correlated with consciousness? Is it simple information? Is it complex information? Is it biology? That's what the science of consciousness is now working on: relations between consciousness and the physical world.
And it's still early days, but it's getting somewhere, and it will eventually get further. But yeah, it doesn't tell us exactly what consciousness is made of. Yeah. We talked about this whole idea that you can't know the basic structure of reality. Well, one proposal about the basic structure of reality is that all of reality is actually made of consciousness. You can take this idea from Kant of the thing in itself: we can't know the thing in itself.
Shortly after Kant came along the idealists, like Fichte and Schelling and Hegel, arguably, who thought the mind itself somehow played a central role in making up reality. This corresponds to what I call, in my framework, the "it from bit from consciousness" view. Maybe what the world is ultimately made of is consciousness, underlying that whole structure of reality. That's just one other speculation.
It may or may not be the true speculation, but that's a metaphysics that integrates consciousness and reality. I'm curious to follow the line of your thinking further on that in the second part of our conversation, David, because I do think it is the most interesting question. You also raise this thought experiment of uploading our consciousness to the cloud, which many people think or hope won't simply be a thought experiment but will actually happen. It makes me think about how, when I was a kid and I'd watch Star Trek, I always wondered, when they got beamed down to a planet, how does that work? You know what I mean? Does that mean that who you are is sort of all the different particles that get beamed down? I guess Gene Roddenberry would've felt that way.
I think he was a materialist. But I do want to ask you about that, because I've always found it a somewhat dubious proposition to say with any degree of certainty that we can somehow move our consciousness into the cloud. You've had some interesting conversations about this, and you have this one interesting idea about how we could do it gradually.
And maybe if there's no point at which you flicker off, then you know for sure that it's you. But I would be curious to ask you about that, and to discuss further the original subject of this conversation around simulations and the metaverse, and also how it has shown up in popular society, the power and political dynamics of all of this. And I'm going to leave our listeners with one thing, to the point about generations, that Keanu Reeves said in an interview. I don't know if you saw it, David. He was speaking to an interviewer and said that he was recently at a director's house having dinner, and there was a little girl at the table. I don't know if you have kids, but the little girl obviously hadn't seen The Matrix.
And she didn't know who Keanu Reeves was. So the director said to Keanu, "Just tell my daughter, just explain to her what it is." And he goes, "Well, there's this guy, and he is living in this world, and it's not the real world.
And he's asking himself, is it the real world?" And the girl says, "Why? Who cares? Why does it matter?" And Keanu was stunned by this question. "Well, you don't care? You don't care if it's real?" She goes, "No, what does it matter?" And I thought that was really fascinating. It was also fascinating because Keanu's response to that was, wow! Awesome.
And the new Matrix movie, in a way, tries to reconcile, tries to come to peace with that possibility. So I'd like to discuss the cultural and political implications of what we're describing in the second part of our conversation, David, as well as everything else that I teased. For anyone who is new to the program, Hidden Forces is listener supported.
We don't accept advertisers or commercial sponsors. The entire show is funded from top to bottom by listeners like you. If you want access to the second part of today's conversation with David, as well as the episode transcripts and intelligence reports, head over to hiddenforces.io and check out our
episode library where you can also become a premium subscriber today. David, stick around, we're going to move the second half of our conversation into the subscriber overtime. All right.
We're back. Okay. How are you doing, David? Cool. Doing great. How do I know it's you now? Right? I don't know.
This whole time you've been intermediated by multiple layers of... No, I actually sent in my deep fake, Dave Chalmers, to do this interview. Trained it up on a bunch of my work and ran it through an AI system, added in a video mockup and yeah, the deep fake's doing okay.
So here, you're bringing up something that I want to address directly. And it's not just the deep fake. Obviously, deep fakes are a serious issue. They're raising a lot of concern for people who work in journalism or in politics. For example, are we going to get to a point soon where the President of the United States could deliver an address to the nation that could just as easily be faked by a machine? How do you think about the practical implications of what we've talked about today? Not just the utopian versions, but the very dystopian possibilities and how they could manifest, not just in some distant future, but in your lifetime? Does it give you concern? Are you concerned about it? Yeah.
Well, all this is coming, it's happening. I mean, the tech companies are very heavily invested in virtual reality technology, in augmented reality technology. Just a few months ago, we had Mark Zuckerberg rebranding Facebook, the corporation, as Meta, as a statement of his ambitions to build the so-called metaverse. And the metaverse basically is a universe of virtual worlds, an ecosystem of virtual worlds where people will spend time: they'll work, they'll play, they'll communicate, they'll socialize.
And people see this as the successor to the internet: the so-called immersive internet, an internet that we experience in 3D all around us. So yeah, this is basically the virtual world. If this ambition pans out, virtual worlds will become something at the center of all of our lives, where we spend more and more time. And in the book, Reality+, my line is basically: I don't say it's going to be a utopia, I don't say it's going to be a dystopia. What I do say is that life in virtual worlds can be meaningful.
When good things happen, they really happen. When bad things happen, they really happen. There'll be room for the full range of the human condition, from the wonderful to the awful. We've already seen that with the internet.
It's produced amazing things, it's produced awful things, and I fully expect that with the coming metaverse. But that said, there are obvious reasons for concerns and things to worry about. Right. Not least the fact that is this metaverse going to be run by corporations like Facebook or Meta? If I'm right, that virtual worlds are genuine realities, well, whoever creates a virtual world is like the God of that virtual world.
They're all powerful. All knowing. We already worry about privacy and manipulation with social media.
But yeah, wait till it's not just social media but social worlds. Think of the possibilities for manipulation, for privacy violation, for monetization. Those things just go through the roof, I think, once you're hanging out in virtual worlds controlled by corporations.
So I at least hope that there'll be a route to what people sometimes call the open metaverse: a metaverse of worlds which, at least to a significant extent, are user controlled and user governed, where people can build their own worlds and their own communities in a way that's not ultimately controlled by a corporation for the purposes of monetization, because there is obviously dystopian potential there. Yeah. Well, you mentioned that even the internet led to bad stuff, but our lives on the internet weren't really controlled until pretty recently.
I see there, behind you, you have The Age of Surveillance Capitalism. We had Shoshana Zuboff on. For listeners who didn't hear that episode, I think it was episode 79, where we described exactly this. In fact, one of the things I was thinking about when you mentioned Facebook's rebranding to Meta is that the subtitle of that should be: if you thought Facebook was bad, just wait. Right. Because if those same practices are applied in an immersive world, well, then you've totally lost control of your life.
And that brings us back to exactly what I think is the central issue here, which is control. In The Matrix as well, the issue was control. The primary issue was that the machines controlled humanity, that humanity didn't control its own fate. So I'd love to hear more of your thoughts on how we build that kind of open metaverse. What types of technologies play a role there? Do distributed ledger technologies, things like blockchain and other protocols that allow for something analogous to "decentralized trust," and I put that in quotation marks, play a part? Are those things part of the solution? I mean, how much have you thought about these sorts of things? Yeah.
I'm not an expert on blockchain technology. I'd be surprised if it was a solution in its own right, because the way these ledgers get used is all a matter of the ecosystem of social norms and existing finance, and so on, in which they're embedded. I don't think blockchain automatically gives you a solution. What it may do, what it may help with, is providing the infrastructure for a solution.
I think it's very important that the metaverse be like the internet in the sense that it's not run on a corporate foundation. The internet has fairly open protocols that were agreed on and adopted prior to corporate ownership. And that made sense. It's still built on that foundation. And yes, corporations control corners of the internet, but the internet itself runs on standards.
They're not under corporate control. I very much hope-- Though it's become more centralized. Yeah. And so that brings us back to the issue of control and how do we avoid that as we add layers to the stack? Yeah.
Over the years, the internet has become more platform-centric, and things are controlled at the level of the platform. And I fully expect this is going to happen with the metaverse. There are going to be metaverse platforms from Meta. Apple is building their own. Google may well build their own. The question is whether there'll be some kind of open infrastructure that underlies all of that, or whethe