A.I. Future: Utopia or Apocalypse? Ten Visions For Our Future | Kai-Fu Lee


What's up everybody? My name is Demetri Kofinas, and you're listening to Hidden Forces, a podcast that inspires investors, entrepreneurs, and everyday citizens to challenge consensus narratives, and to learn how to think critically about the systems of power shaping our world. My guest in this week's episode is Kai-Fu Lee. Dr. Lee currently serves as CEO of Sinovation Ventures, a leading Chinese technology venture firm, and was formerly the president of Google China, and a senior executive at Microsoft, SGI, and Apple.

He's the New York Times bestselling author of AI Superpowers, and is out with a new book titled "AI 2041: Ten Visions for Our Future," which provides the foundation for today's conversation about the future of our world, what it's going to look like, and the challenges and opportunities that such a world will create. In the first part of today's episode, Kai-Fu and I discuss a number of key technologies that he believes will play a pivotal role in transforming our lives over the next 20 years, such as artificial intelligence and quantum computing, how these technologies work, and their application in fields such as autonomous driving and predictive analytics. Most of the first half, however, is spent on the subject of deep learning, which is a subset of machine learning and is fundamental to many of the transformative technologies that we speak about today. In the second half, which is available to our premium subscribers, our conversation turns to the subjects of deepfakes, autonomous weapons, and job displacement, as well as digital currency, how to think about money, and the type of post-scarcity world that Kai-Fu believes we are progressively moving towards as the technologies we discuss today are increasingly integrated into our applications, devices, and systems.

Again, a fascinating discussion that fits right in with the type of content that you have come to expect from this podcast. If you enjoy the first half of today's conversation, I encourage you to take the leap and become a premium subscriber if you aren't already. There's no commitment, you can cancel at any time, and the entire library of premium content going all the way back to episode one becomes instantly available to you, including the overtimes, transcripts, and rundowns depending on your tier. So without any further ado, please enjoy yet another incredibly informative and engaging episode of Hidden Forces with my guest, Kai-Fu Lee. Kai-Fu Lee, welcome to Hidden Forces. Well, thank you.

Thanks for having me. It's great having you on. We started this recording 15 minutes ago, but I forgot to press the record button.

It's funny: we're talking about the most advanced suite of technologies, and this is a clear example of a very rudimentary technology, one which works fine if not for the mistakes of the operator. So, congratulations on what is your second book. Your first book was AI Superpowers. I would describe this book as primarily a nonfiction book that uses fictional stories as thought experiments, meant both to help readers understand the most important technologies that you expect to play a role in our lives over the next 20 years, and to convey how these technologies could change our lives, our identities, and our conceptions of the world in ways that are very difficult for us to imagine.

Do you think that's a fair description of the book, and how would you describe it? Yeah, actually, I couldn't describe it better myself. That's exactly right. What I would add is that the reason I wanted to use the fictional stories is to make technologies more understandable and even entertaining to people who might otherwise find high tech like AI to be intimidating.

And because AI is so important, people should understand it, and telling stories is the best way for people to understand. That's why I have a coauthor who is a well-known science fiction writer. I create a map of what technologies are doable, he writes stories from that, and that hopefully delivers the goal. How did you go about choosing the scenarios for the book, and were there any scenarios that you had to leave off the table? Well, choosing the scenarios was really a brainstorming process that considered the many technologies I wanted to cover. And they needed to be covered in a sequence from easy to hard.

And I also wanted to connect the technologies to different industries like healthcare, education, and so on. And my partner wanted to place the stories in 10 different countries. So, it was a fun puzzle where we tried to play with these pieces until everything fit. I think we managed to get most things in; the things we left out were really just ones we didn't have space for, because the book can't just be about AI, since many other things are also important. Quantum computing, blockchain, drug discovery, energy, materials, and climate are all important.

So, we included all of that in the book. One technology we just couldn't get in, which I thought was quite important, is gene editing and CRISPR. We couldn't fit it into a story because it would have had to be a major part of the story, our ability to edit our own genes; it couldn't just play some secondary role. And we just ran out of space, so we had to leave that off the table. It's really not AI, per se, but it is related to AI.

Yeah, I noticed that. I was going to ask you about it. You did get to fit in autonomous weapons, which is something I do want to ask you about. And you did cover the medical applications of artificial intelligence and precision medicine. So, that's something that we'll also discuss as well. What are some of the misconceptions that you think people have around what AI is, and how do you think most people think about what it is, and what is it? How would you describe it? Right, so I think the first misunderstanding is that people think programmers program AI by writing if-then-else rules. It's true that once upon a time, 30 or 40 years ago perhaps, this type of rule-based approach was predominant in AI. But in the last five or 10 years, a new subfield of AI called machine learning has become the dominant sub-sector of AI.

Machine learning does not work by having humans micromanage decisions. The way machine learning works is that it takes a large amount of data and figures out how to make decisions from the data. The human doesn't really have an opportunity to micromanage and set the rules.

The human just says, "Here's the goal, here's a bunch of data," and lets AI figure it all out. The advantage of taking such an approach is that as you have more data, the decision making becomes more accurate. And that's why we see AI beating people: as more and more data are gathered and run through powerful computers, these machine learning models are trained until they eventually outperform people. Yeah, that's super interesting, and that's something I want to discuss too, because it bears on the social implications of AI. For example, while there's a model for learning in these systems, the systems don't generate models that we can inspect and understand.
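To make that distinction concrete, here is a minimal sketch, in Python, of the workflow being described: the human supplies only a goal (minimize classification error) and a pile of labeled data, and the learning algorithm fits the decision rule itself, producing a model that no one hand-wrote. The dataset and model choice are illustrative assumptions, not anything discussed in the episode.

```python
# Minimal sketch: the human specifies only the goal (minimize classification
# error) and supplies data; the algorithm learns the decision rules itself.
# Dataset and model are illustrative assumptions, not from the episode.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)           # "here's a bunch of data"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)             # no hand-written if/then rules
model.fit(X_train, y_train)                           # "let AI figure it all out"

print("accuracy:", model.score(X_test, y_test))       # more data usually means more accuracy
```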

And so, they arrive at decisions in a way that almost feels oracular, magical. How do we incorporate such systems into our existing frameworks and regulatory models, which rely on accountability and on the ability to extract an explanation from an event? That's something that I definitely want to discuss with you at some point. So, there are 10 stories in the book, as we discussed. When I was preparing for this conversation there were certain ones that really resonated with me that I wanted to cover. But as I started going through them I realized, "Okay, well, these are actually structurally in order on purpose because they lead the reader by the hand." And there are a lot of aspects of the AI complex, so to speak, not necessarily deep learning, that are present in all these different stories.

So, it isn't that just one story covers one aspect. The opening story, I think, is the one that is most, well, maybe relatable isn't the right word, though it certainly is relatable in a way. But it's also the most believable because it's the most proximate to the world we live in today. It revolves around a family in Mumbai that is signed up for a deep-learning-enabled insurance program. And as part of that, the family uses a series of applications intended to improve their lives in a way that is concordant with the objective function of reducing their insurance premium. What was your goal in telling this story, and what did you want readers to take away from it? Right.

Well, people are concerned about large companies with a lot of data, and the example in this story is a company that is even larger than the likes of Google and Facebook, because it runs social networks and e-commerce as well as insurance. So, it's combining several internet giants into one, and that allows it to learn from much more data on each individual, thereby producing excellent results from the AI, meaning it can optimize insurance premiums for individuals. It can help people pay lower premiums, which for health insurance means helping people not get sick as much, or not get seriously ill, because that costs a lot of money for the insurance company and a lot of pain for the individual. The second point related to that, which I wanted to get across, is that even when the owner of the AI appears to be highly interest-aligned with the individual who buys the policy, things can go wrong. Because when people ask why Facebook or YouTube shows them things that make them angry, or frustrated, or violent, or that waste so much time, the explanation, as the documentary The Social Dilemma has laid out very well, is that YouTube and Facebook want you to spend more minutes.

And that helps them make money. But they don't care about the quality of the content you see. So, they keep showing you things that you'll keep clicking on.

And that's what causes the addictive behavior and the regret after you watch so much. But I wanted to present in "The Golden Elephant" a story where the insurance company and you want the same thing, which is for you to stay healthy and not get sick as much, so you don't have to get the insurance company to pay for your illnesses. And yet, even when interests appear so aligned and well-meaning, things can still go wrong. So I wanted to get that point across.
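To make the objective-function point concrete, here is a purely hypothetical sketch of two feed-ranking objectives: one that maximizes predicted clicks alone, and one that also weighs content quality and penalizes likely misinformation. The field names and weights are assumptions invented for illustration; they are not taken from any real platform.

```python
# Purely hypothetical ranking objectives for a feed; field names and weights
# are assumptions used only to illustrate the point about objective functions.

def engagement_only_score(item):
    # Maniacal focus on one metric: show whatever maximizes predicted clicks.
    return item["predicted_click_prob"]

def multi_factor_score(item, quality_weight=0.5, misinfo_penalty=2.0):
    # Same click signal, but balanced against content quality and a penalty
    # for likely misinformation, so the optimizer can't win on outrage alone.
    return (item["predicted_click_prob"]
            + quality_weight * item["quality_score"]
            - misinfo_penalty * item["misinfo_prob"])

feed = [
    {"id": "outrage", "predicted_click_prob": 0.9, "quality_score": 0.2, "misinfo_prob": 0.4},
    {"id": "useful",  "predicted_click_prob": 0.6, "quality_score": 0.9, "misinfo_prob": 0.0},
]
print(max(feed, key=engagement_only_score)["id"])  # -> "outrage"
print(max(feed, key=multi_factor_score)["id"])     # -> "useful"
```

The point is only that the second objective changes which item wins, even though the underlying click model is identical.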

So this is really awesome. The first book that I ever read on AI was a number of years ago. It was called Superintelligence. And I learned quite a bit about all the things that could go wrong with AI. Some of them were very heady, sort of out-there theoretical problems, but some of them were very concrete, related to the objective function in this case. What are some of the ethical and technical challenges associated with implementing this type of learning function, or this targeted outcome, in more and more of the machines and applications that we interact with every day? Well, first is just the awareness.

Often the engineers aren't aware that they're building in technologies that essentially brainwash us or cause us to see things and think in certain ways. That's how powerful these algorithms are. That's right, because the engineer is thinking, "Hey, I work for a large internet company. They want to program the content so that users click more." That seems completely reasonable, because offline grocery stores also want people to come into the store, buy more, and hang around more.

And they give people coupons and whatever enticements, so it seems like a completely normal commercial thing to do. But what people are also missing is that AI is so powerful that when you tell it to go do one thing and to focus on it maniacally, it will do so to such a degree of optimality and perfection that it can cause other bad things to happen. So, to fix this, I think first the engineers and the product managers and CEOs have to realize what a powerful weapon they've got, and they have to build it carefully, considering multiple factors, not just how much money they make or how much we click, but also: is it showing quality content? Is it showing biased content? And how do you control the quality of content, reduce fake news, etc.? So I think it starts with awareness, and then building in the processes and the tools that will help the products avoid some of these negative side effects. Well, this is the part of AI that I have always found the most interesting and fascinating, because it grapples with philosophical questions and concepts that people have been dealing with for millennia.

And there are no good answers to these questions. Or rather, there are good answers, but there aren't any answers that we can definitively point to and say, "Aha, that's the right answer." And so there's the ethical dimension of this, which is that there are no universal ethics that we can empirically point to and say, "These are the correct ethics for society." So, one, how do we as a society go about deciding? First of all, do we have to make decisions about who we can trust with making these decisions? And then second, how do these individuals make the right decisions? How do they go about constructing the right objective function? And then, and now I'm reminded of an episode we did on philosophical mathematics where we looked at Wittgenstein and the challenge of logical clarification of thought.

How do you translate? How do you take what your intention is and properly instantiate it in code so that you actually get the effect that you want? Understanding that this is very complex. I mean, I certainly can't appreciate how complex it is, I can only imagine. Yeah, there are many things we can do.

There's no way that there can be a perfect answer. Even prior to AI, we humans make lots of errors and biased, unfair decisions, and we don't explain ourselves well, and people do things that cause other people to get addicted, too. So, let's take a step back and not assume that without AI everything's perfect. I think we should at least deliver a decent experience. So, first, I think, there's awareness and education of the engineers and CEOs who work on these projects.

Secondly, I think there should be regulations. Just like there are regulations against child pornography, there are regulations against sending out pyramid schemes and chain letters. Things like that should be used to prevent extremely bad behavior; the companies that own all this data have to be responsible. Third, I think there can be tools that will catch problems. So, if an AI researcher is trying to train a new model for Facebook or something, and the person didn't use enough training data from women, the algorithm may become biased against women. Then AI should be able to detect that and say, "You can't launch that unless you fix the data fairness problem, or the data distribution problem."
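A pre-launch check of the kind described could start out as simple as the sketch below, which refuses to proceed if one group is badly under-represented in the training data. The column name and the 30% threshold are assumptions for illustration, not a real policy.

```python
# Illustrative pre-launch check: refuse to train or launch if one demographic
# group is badly under-represented in the training data. The column name
# ("gender") and the 30% threshold are assumptions, not a real policy.
from collections import Counter

def check_representation(records, column="gender", min_share=0.30):
    counts = Counter(r[column] for r in records)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        if share < min_share:
            raise ValueError(
                f"Group '{group}' is only {share:.0%} of training data; "
                "fix the data distribution before launching this model."
            )
    return True

training_rows = [{"gender": "male"}] * 90 + [{"gender": "female"}] * 10
try:
    check_representation(training_rows)
except ValueError as err:
    print(err)   # flags that 'female' is only 10% of the training data
```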

So, technology can be used. And then lastly, I think there can be social and market mechanisms that essentially act as a watchdog for companies that misbehave. So, for example, maybe there can be a metric of how much fake news is on each social media platform, or how many deepfakes they let through their software, and scores are published on a monthly basis. That forces the companies to behave well. And if they don't do a good job, there could be regulations.

There could be fines, there could be audits, just like there are financial audits. These things could take decades to all be figured out. But I think once there's awareness, there are people in the public and private sectors who will try to fix these problems. Yeah, I mean, I think that's a great point, which is that this is a work in progress, and researchers, developers, and engineers will be working alongside the evolution of these systems. I think concerns around the initial setting of conditions are at their most dystopian, how would I describe it, when we think about the long-term trajectory of AI, and whether we end up creating something that, for either malevolent or benevolent reasons, ends up replacing humanity. You mentioned deepfakes.

I want to talk about that, because that's the next story in the book. Before we go there, I have one last question. And maybe we'll get into more details on this in the overtime. It has to do with attacking these systems. Do you envision ways in which attackers would attempt to attack either the objective function itself, trying to mess with the objective function, or the input data, as another way to mess up the output? How do you see ways in which these systems could be vulnerable to attack? Yeah, AI security is yet another new field, and it's different.

Just like when we had PCs, the malware usually messed with our Windows registry and the things stored on our computer, and disabled it. On mobile, the malware is often in there to steal money. And with AI, there are a number of ways that bad people can get in.

One is just poisoning. The attacker feeds in wrong training data, so that you train something that ends up doing something you don't want: unable to recognize something that it should, or always letting certain bad people through in facial recognition, or something like that. So, that's one possibility. Another is that when the AI is being run, you find its fragility, and then you do something with the inputs it's being tested on to trick it, because it's never seen anything like that before.

So we've seen people put some tape on a stop sign, causing autonomous vehicles to no longer recognize it as a traffic sign, even though any human driver would recognize it. AI is only as good as the training data it's ever seen. Most trained AIs have seen stop signs that are far away or close up, with paint coming off, maybe with some snow on them, but they've never seen stop signs with crossed tape on them. So, it could get confused.

So, that's another approach. I'm sure there will be others. So, I think we have to really be careful and start the research ahead of time before the bad guys take advantage of all these AI holes.
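Of the two attack surfaces just described, the second one, tricking a trained model with a slightly perturbed input, can be demonstrated in a few lines. The sketch below uses a toy linear classifier: a small, targeted nudge to the input flips the prediction, which is the same idea as tape on a stop sign. The model, weights, and labels are invented for illustration.

```python
# Illustrative adversarial-example attack on a toy linear classifier.
# A small, targeted perturbation (analogous to tape on a stop sign) flips
# the predicted class even though the input barely changes.
import numpy as np

w = np.array([1.0, -2.0, 0.5])      # toy "trained" weights
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0    # 1 = "stop sign", 0 = "not a stop sign"

x = np.array([0.9, 0.2, 0.4])           # a clean input the model classifies as a stop sign
print(predict(x))                        # -> 1

# FGSM-style step: move each feature slightly in the direction that lowers
# the score, i.e. opposite the sign of the weight vector.
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)
print(np.round(x_adv, 2))                # each feature changed by at most 0.4
print(predict(x_adv))                    # -> 0: the "stop sign" is no longer recognized
```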

So, you actually have a chapter in the book called "The Holy Driver." Yes, right. It's about autonomous vehicles, and it's actually super interesting because it adds a dimension to the implementation of such technologies that I hadn't considered, which is the "holy driver": human intervention. So, how advanced is autonomous vehicle technology today? Because I think, a few years ago, people were surprised at how advanced it was. Maybe they didn't realize just how far we'd come.

But today, I think there are a lot of people who might have the opposite point of view, which is that they think it's more advanced than it actually is. So, how advanced is it at present? And could you summarize how autonomous vehicles work, without having to repeat everything we've already talked about regarding how AI or deep learning works? Sure. So, autonomous vehicles take a bunch of inputs and use them to decide how to manipulate the steering wheel, the brake, the accelerator, and so on. The inputs come from the many cameras that are put on an autonomous vehicle, and also a number of other sensors, such as LIDAR, that sense the conditions around you.

It tries to make out the shape of the things coming at you. On top of that, it has deep learning trying to recognize, among all these objects, which are pedestrians, which are cars or trucks, which are stop signs and traffic lights, and which are sky and clouds. And then, based on that, it's further trained on how to continue to follow the road and to brake when the car in front of you stops. So, it's a lot of complex issues. It takes a human some 40 or 50 hours to learn how to drive. So this is not something- Some people never learn.
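In code terms, the loop just described looks roughly like the sketch below: read the sensors, run perception to label what is around the car, then decide how much to brake or accelerate. Every function and number here is a hypothetical stub standing in for an enormous amount of real engineering; none of it refers to an actual self-driving stack.

```python
# Hypothetical sketch of the sense -> perceive -> plan -> act loop described
# above. All functions are illustrative stubs, not a real self-driving stack.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "pedestrian", "car", "stop_sign", "traffic_light"
    distance_m: float

def read_sensors():
    # Stand-in for camera frames plus LIDAR/radar returns.
    return {"camera": "frame", "lidar": "point_cloud"}

def perceive(sensor_data):
    # Stand-in for deep-learning perception that labels objects in the scene.
    return [Detection("car", 12.0), Detection("stop_sign", 30.0)]

def plan_and_control(detections, speed_mps):
    # Crude policy: brake if anything is within ~1.5 seconds of travel,
    # otherwise keep following the road at a gentle throttle.
    stopping_zone_m = 1.5 * speed_mps
    if any(d.distance_m < stopping_zone_m for d in detections):
        return {"throttle": 0.0, "brake": 0.8, "steer": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer": 0.0}

controls = plan_and_control(perceive(read_sensors()), speed_mps=10.0)
print(controls)   # -> brakes, because a car is detected 12 m ahead
```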

That's right. Yeah. So, AI may not have seen data for all the permutations. That's why, today, in constrained environments AI already drives extremely well.

Environments like inside a warehouse, driving a forklift: that, AI can do so much better than people. In fact, if there aren't people around, they can drive really fast, and even with the lights off they can often still run in the warehouse. We actually have invested in such a company.

And then as you go to more complex scenarios: with buses, AI can still do a decent job because they run on a fixed route with fixed stops. With trucks on highways, AI can do a great job, because highways are natural for AI; there aren't a lot of strange things happening with crossroads and pedestrians and all that, so it doesn't have to worry as much about them. AI can also do better in well-lit conditions, without a big storm or snow or something like that. So, AI already drives better than people in the scenarios I told you about, some of which people think are hard, like driving on the highway.

Some people are scared of that. But AI is actually quite good at it because it's a relatively constrained environment. But then we get to driving at night, with heavy snow, in a downtown with pedestrians walking about. In that case, I would not ride in any autonomous vehicle today, because there are just too many long-tail situations for which it will take much longer for AI to collect enough data or do simulations to train on. So, the approach that the industry is taking is to gather a lot of data and start from simpler environments. For example, when you drive a Tesla, Autopilot is used with humans still supposedly watching over it and stopping it when the AI makes a mistake.

And it's doing things like summoning the car when you park, and all the while it's gathering data to make the system work better and better. And when it improves, Tesla will update your software, and then it gets better. So, that I think is generally how autonomous vehicles will work: start with constrained environments, gather a lot of data, improve, then go to the less constrained environments, and eventually reach a complete replacement for humans. Yeah, two interesting conceptual observations. One has to do with something you said very early on when we were talking about how most people conceptualize AI, which is that a lot of people, probably older people, still think of AI in very atomic terms.

They might think of it in terms of the interface, the robot, the entity. When people think about it in this context, in terms of autonomous vehicles, maybe the old way of thinking about it was KITT from Knight Rider: the car itself. But of course, in this case, the lessons learned by individual cars are not retained only by that car. They're learned by the entire computational system that informs each of those individual pieces of hardware. Number one, I just wanted to reemphasize that point.

But then I have a question, which is: is one way of thinking about or defining intelligence the ability to work well in environments of uncertainty, or environments that are unpredictable? Is that one way we can think about what an intelligent machine is? Well, I think that's one of the expectations of intelligence, and there are many other aspects. Being able to analyze chess moves seems like a perfectly reasonable measure of intelligence until a computer has learned to do that. So, what you just described, dealing with unseen environments, is something that humans can do and AI cannot yet do. But someday, AI may be able to do that. So, I think we can probably list 30 or 40 things that we used to think made humans intelligent.

And for the past 10 years, AI has been overtaking humans in some of these tasks, while in others it still has quite a way to go. So, one last question on autonomous vehicles, because this raises a lot of the social, ethical, and political considerations that we touched on earlier, but perhaps in more immediate ways. One thing we didn't talk about, well, we talked a little bit about the black box problem, which also exists here: how do you manage liability? Who's to blame? And also, how do you determine how a decision was made? And if you can't determine that, well, then how do you assign blame? Or how do you even go about thinking about that problem? That's one.

And then there's the so-called trolley problem, which brings us back to the issue of ethics and the design of objective functions. So, one, maybe you can let our listeners know what the trolley problem is. And then, more to the point.

Again, this is an example of a technology that we're dealing with right now: how do we go about regulating the space? How do we go about managing this problem, which is that these systems are going to result in casualties and bad outcomes? And something I didn't ask you before is: should the metric of success simply be that these systems result in fewer bad outcomes than we would see under a human operator? If so, how much better should the outcomes be? In other words, how much value do we assign to the ability to simply understand why something happened, as opposed to simply getting a better outcome, if that makes sense? Yeah, it absolutely does. These are very complex issues. First, the trolley problem is the case where a trolley moves by default in a particular direction, and the driver has to pull a lever in order to move it in another direction.

So, the trolley problem describes the dilemma the driver faces when it's inevitable that the trolley will hit, let's say, two people if it keeps going straight, but if the driver pulls the lever and moves it to another track, it will hit only one person. So, is it morally right or wrong to pull the lever? Because if the driver doesn't pull anything, it's not his or her responsibility, but two people will die. If you pull the lever, you save a life, but then you're deliberately killing a person who would otherwise not have to die. And that person may be crossing the tracks thinking it's perfectly safe, because the trolley car wasn't supposed to go that way.

That's the ethical and moral problem. I think the autonomous vehicle problem is akin to that, because the AI will be making decisions, left turn, right turn, fast, slow, with lives riding on those decisions. But I think the programming per se is not done by rules. You don't really say, if there are two younger people and one older person, then do this or do that.

No human will have to program that. Think of it more as programming: get people from place A to place B, and on average have fewer lives lost. That's probably more or less the direction humans would give it.
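A hedged sketch of what "get people from place A to place B with fewer lives lost on average" might look like as an objective, rather than as hand-written rules, appears below. The routes, probabilities, and the weight placed on risk are invented purely for illustration and do not reflect how any real vehicle is programmed.

```python
# Illustrative objective for route/behavior selection: minimize a weighted
# sum of travel time and expected casualties. All numbers are invented.
routes = [
    {"name": "fast",  "travel_min": 18, "expected_casualties": 2e-6},
    {"name": "safer", "travel_min": 24, "expected_casualties": 5e-7},
]

RISK_WEIGHT = 1e7   # hypothetical: how many minutes of delay one expected casualty "costs"

def expected_cost(route):
    return route["travel_min"] + RISK_WEIGHT * route["expected_casualties"]

best = min(routes, key=expected_cost)
print(best["name"])   # -> "safer": 24 + 5 = 29 beats 18 + 20 = 38
```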

So, because the systems are deterministic, or because the deterministic function is so complex that it isn't open to our understanding, do you foresee us getting to a point with these systems where we start getting really bad outcomes, people panic, they don't understand why they're getting the bad outcomes, and they can't really investigate them and understand why they're happening? And so, how do we as a society, and as governments, wrap our arms around those eventualities? Yeah, this is a really tough one. First of all, there are going to be some people who will think it's totally unacceptable for machines to ever make decisions that cause human casualties. And if the majority of people think that, then AI will never get off the ground, and we should just stop all the work.

So, assuming that we as a human race accept that sometimes, not intentionally, because intentionally it would be an autonomous weapon, but in the course of delivering a goal that is proper and helps humans overall, lives are lost anyway: under what conditions can you allow an AI product to launch, even though it leads to what we might call unintentional casualties? I think, as a society, we do have to detect, debug, and try to fix each casualty.

But it won't be an if-then-else, did-you-break-the-law, if-you-did-you-go-to-jail kind of situation. Rather, programmers will have to go back and see if they have to gather more data, and things like that. And as far as the level of explainability, I think it will be possible, with some more research, for AI to approximate a human description of why something happened, because humans aren't perfect explainers either. The full AI decision is certainly way too complex for us to understand.

But it can tell us that the most prominent five reasons are A, B, C, D, and E, enough for people to understand and assess whether this was a horrendous mistake or not. Yeah, humans are horrendously bad at providing honest explanations. They're very good at coming up with explanations for why they did something after the fact. That's right. Yeah, it reminds me of an episode we did with Jonathan Haidt, who wrote the book The Righteous Mind. And I think he said in that book that decisions come first, or something to that effect.

Decisions come first, strategic reasoning second. So, we reason after the fact. It's just kind of scary when you think about it. So, you mentioned autonomous weapons; those are rolled into a story called "Quantum Genocide," dealing with quantum computing as well. And the story features this Unabomber-esque, Ted Kaczynski-esque character. First of all, tell us a little bit about the story and then use that as an opportunity to explain what quantum computing is, because I did one episode on quantum mechanics.

I took an online class from Leonard Susskind at Stanford. And I still don't understand quantum mechanics. And so, quantum computing is like one step removed.

And I think it's so intimidating for people. Maybe you can help bridge that divide for most of us today. Sure. The story Quantum Genocide really covers two important technologies.

One is quantum computing, and this mad scientist uses quantum computing to break the security of Bitcoin wallets and steal money from Bitcoin holders around the world. And he uses the Bitcoin that he stole, tens of billions of dollars, to create autonomous weapons, which are these very tiny drones that can fit in the palm of your hand. These drones will recognize any face from a list of faces and attempt to assassinate that person.

And this Unabomber-like terrorist believes the world has gotten into the terrible state that it's in because of all the elites, the political figures and business leaders, etc. So, he proceeds to eliminate them through assassination. Those are the two sets of technologies. Quantum computing is different from what are called classical computers in the sense that it is not binary; it holds all possibilities open.

So, instead of programming a computer where everything is yes or no in order to make decisions or run code, quantum computing has many so-called qubits, which can take on any value, and they're all tried simultaneously. So, when you have a 4,000-qubit computer, you essentially have a super smart computer that can try all the permutations of 4,000 things. So, imagine you're trying to run an AI algorithm or, in the case of stealing Bitcoin, you're trying to guess the password.

Think of it as trying to guess the password, but trying all the permutations of all the possible bits of the password at the same time, and out comes the answer. That's the power of quantum computing. And every time you add a qubit, the computer doubles in its capability. So it can do things we couldn't imagine before, and it's also very well suited to modeling things in the real world because of its relationship with quantum mechanics. So, you can use it to simulate your body, and what happens to your body when a drug is introduced. And then you can test the efficacy of drugs, potentially one day without having to do as many clinical trials, because you're simulating the human body.

And you can similarly simulate the world, and simulate the effects of different approaches to controlling the climate. So, it's really directly simulating the world, whereas a classical computer is an arbitrary tool that is much more limited. That's the power of quantum computing. Let me stop here and see if you want to go to autonomous weapons.
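The doubling mentioned a moment ago has a compact textbook form: an n-qubit register holds a superposition over all 2^n classical bit strings, so each added qubit doubles the size of the state space the machine works with (how much of that translates into algorithmic speedup depends on the problem).

```latex
% State of an n-qubit register: a superposition over all 2^n basis states.
\[
  \lvert \psi \rangle = \sum_{x \in \{0,1\}^{n}} \alpha_x \lvert x \rangle ,
  \qquad \sum_{x} \lvert \alpha_x \rvert^{2} = 1 .
\]
% Each added qubit doubles the number of amplitudes in play:
\[
  2^{\,n+1} = 2 \cdot 2^{\,n},
  \qquad n = 4000 \ \text{qubits} \;\Rightarrow\; 2^{4000} \approx 10^{1204} \ \text{amplitudes}.
\]
```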

Before we go to autonomous weapons, I'm curious: how difficult is it to actually get quantum processing to a place where it has a meaningful impact on all these other technologies, including AI? What are the bottlenecks to doing that? How realistic is it that we get to such a place in the next 20 years, which is the scope of your book, or even 40 or 50 years? Yeah, this is one of the most uncertain predictions in the book: when will a 4,000-qubit quantum computer work? And the reason I picked 4,000 is that that's approximately what it would take to break a Bitcoin wallet, because that's arguably one of the first highly lucrative applications, even if a malevolent one. The road to 4,000 qubits you can project based on the progress scientists have made in the last few years: we've gone from a few qubits to tens of qubits. Now, we're in the low hundreds.

I think we're at about 200 qubits or so. So, we have seen improvement. So, some optimists would say, "Hey, we're..."

IBM is saying they think they can keep improving and add quite a few qubits a year; based on the IBM roadmap, they'll be at 4,000 qubits in probably 10 to 15 years. So, that's where the estimate comes from. They also acknowledge, though, that the difficulty of adding more qubits is all about managing stability, because these qubits are made of superconducting material that is highly unstable. So building a large quantum computer is really scientists figuring out engineering ways to maintain stability. Based on the best of my research, because this is not my area of expertise, most people who work in quantum believe that 20 years is a reasonable timeframe in which to build a 4,000-qubit system. So, I rely on the expert estimate.
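As a back-of-envelope sanity check on roadmap estimates like that, one can ask how many years of a given annual growth rate it would take to go from roughly 200 qubits to 4,000. The growth factors below are pure assumptions, not IBM's actual figures.

```python
# Back-of-envelope: years to go from ~200 to 4,000 qubits at an assumed
# annual growth factor. The growth factors are assumptions, not a roadmap.
import math

start, target = 200, 4000
for annual_growth in (1.2, 1.35, 1.5):   # 20%, 35%, 50% more qubits per year
    years = math.log(target / start) / math.log(annual_growth)
    print(f"{annual_growth:.2f}x per year -> ~{years:.0f} years")
# 1.20x -> ~16 years, 1.35x -> ~10 years, 1.50x -> ~7 years
```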

I wouldn't say it's a 100% accurate prediction, but it's their rough estimate, seeing how much progress has been made and how much time it would take to stabilize 4,000 qubits. So, Dr. Lee, I'm going to move the second half of our conversation into the overtime. Like I said, there are 10 stories in the book. I think we've covered maybe four so far.

We'll do our best to cover the rest. But there are certain ones that I absolutely want to cover. As I said, autonomous weapons fold into this chapter on "Quantum Genocide," which we're going to cover right when we get back on the other side of this conversation. Your last chapter is also fascinating. I think it's titled post-scarcity, or something along those lines, but it deals with a post-scarcity world.

That's right. Yeah. And that also touches on money. And how do we think about money in such a world, which I found fascinating and very relevant to our listeners.

There's also a chapter on happiness optimization, which will allow us, I think, to dig deeper and cover some areas around the objective function and data privacy that maybe we didn't get to, and which also apply to issues of blockchain technology. We didn't really cover job displacement. I think that's super important, and that's something I would really like to discuss. And also deepfakes and VR and AR as well, because while quantum computing is something that hasn't really pervaded our lives yet, deepfakes, and fake news in general, are a problem that we're all dealing with now.

A lot of people have used VR headsets and AR: incredible, mind-blowing technologies that really show you just how far we've come. So, these are where I'm focusing my attention. For anyone who is new to the program, Hidden Forces is listener-supported. We don't accept advertisers or commercial sponsors.

The entire show is funded from top to bottom by listeners like you. If you want access to the second hour of today's conversation with Kai-Fu, as well as the transcripts and rundowns to this episode, and every other episode we've ever done, head over to hiddenforces.io and check out our episode library, or subscribe directly through our Patreon page at patreon.com/hiddenforces. There's also a link in the summary page to this episode with instructions on how to connect the overtime feed to your phone so that you can listen to these extra discussions just like you listen to the regular podcast. Dr. Lee, stick around, we're going to move the second part of our conversation into the subscriber overtime.

For more information about this week's episode of Hidden Forces, or if you want easy access to related programming, visit our website at hiddenforces.io and subscribe to our free email list. If you want access to overtime segments, episode transcripts, and show rundowns full of links and detailed information related to each and every episode, check out our premium subscription available through the Hidden Forces website or through our Patreon page at patreon.com/hiddenforces. Today's episode was produced by me and edited by Stylianos Nicolaou.

For more episodes, you can check out our website at hiddenforces.io. Join the conversation at Facebook, Twitter, and Instagram @hiddenforcespod or send me an email. As always, thanks for listening. We'll see you next week.


