Amjad Masad: The Cults of Silicon Valley, Woke AI, and Tech Billionaires Turning to Trump


It does sound like you're, like, directly connected to AI development. Yes. You're part of the ecosystem. Yes. And we benefited a lot when it started happening.

Like, it was almost a surprise to a lot of people, but we saw it coming. Yeah. You saw AI coming. Saw it coming. Yeah.

So you know, this recent AI wave surprised a lot of people. ChatGPT came out in November 2022. A lot of people lost their minds. Like, suddenly a computer can talk to me. And, you know, Paul Graham wasn't into it at all. Really? You know, Paul Graham, one of my closest friends and sort of allies and mentors.

He's a big Silicon Valley figure. He's a writer, kind of like you; as you know, he writes a lot of essays. And he hates it. He thinks it's, like, midwit, right? It's just making people write worse and making people think worse.

Worse, or not think at all, right? I think, as the iPhone has done, as Wikipedia and Google have done, yes. We were just talking about that. You know, the iPhones, iPads, whatever, they made it so that anyone can use a computer, but they also made it so that no one has to learn to program. The original vision of computing was that this is something that's going to give us superpowers, right? J.C.R. Licklider, who ran the computing research office at ARPA while the internet was being developed, wrote this essay called "Man-Computer Symbiosis."

And he talks about how computers can be an extension of ourselves, and can help us grow what we can become. You know, there's this marriage between the type of intellect that computers can do, which is high-speed arithmetic, whatever, and the type of intellect

that humans can do, which is more intuition. Yes. But, you know, since then, I think the sort of consensus around computing has changed, which is, I'm sure we'll get into that, which is why people are afraid of AI kind of replacing us. This idea that computers and computing are a threat because they're directly competitive with humans, which is not really the belief I hold. They're extensions of us. And I think people are learning

to program. And this is really embedded at the heart of our mission at Replit: programming is what gives you superpowers. Whereas when you're just tapping, you're kind of a consumer; you're not a producer of software. And I want more people to be producers of software.

There's a book by Doug. Not Hofstadter; Rushkoff, Douglas Rushkoff. It's called Program or Be Programmed. And the idea is that if you're not the one coding, someone is coding you, someone is programming you. These algorithms and, you know, social media, they're programming us, right? It's too late for me to learn to code, though. I don't think so. I don't think so. Okay.

I can't balance my checkbook, assuming there are still checkbooks. I don't think there are, but let me just go back to something you said a minute ago: that the idea, as originally conceived by the DARPA guys who made this all possible, was that machines would do the math and humans would do the intuition.

I wonder, as machines become more embedded in every moment of our lives, if intuition isn't dying, or if people are less willing to trust theirs. I've seen that a lot in the last few years, where something very obvious will happen, and people are like, well, I could sort of acknowledge and obey what my eyes tell me and what my instincts are screaming at me.

But, you know, the data tell me something different. It's like, my advantage is I'm, like, very close to the animal kingdom. That's right. And I just believe in smell. And so. But I wonder if that's not a

result of the advance of technology. Well, I don't think it's inherent to the advance of technology. I think it's a cultural thing, right? It's, again, this vision of computing as a replacement for humans versus an extension of humans.

And so, you know, you go back: Bertrand Russell wrote a book about the history of philosophy and mathematics, going back to the ancients, Pythagoras and all these things. And you could tell in the writing he was almost surprised by how much intuition played into science and math in the ancient era of advancements in logic and philosophy and all of that. Whereas I think the culture today is like, well, you've got to check your intuition at the door.

Yes. Yeah. You're biased, your intuition is racist or something; this is bad.

And you have to be this, like, you know, blank slate, and, like, just trust the data. But, by the way, you can make the data say a lot of different things. Can I just ask a totally off-topic question?

It just occurred to me: how are you this well-read? I mean, you grew up in Jordan, speaking Arabic, in a displaced Palestinian family. You didn't come to the US until pretty recently. You're not a native English speaker. How are you reading Bertrand Russell? Yeah. And how, like, what

was your education? Is every Palestinian family in Jordan that well-educated? Like, what? Well, the Palestinian diaspora is, like, pretty well-educated. And you're starting to see this generation, our generation, who grew up in it, starting to become more prominent, I mean, in Silicon Valley.

You know, a lot of C-suite and VP-level executives, a lot of them are of Palestinian origin. A lot of them wouldn't say so, because there's still, you know, bias and discrimination and all that, so they won't say they're Palestinian. And, you know, they go by names like Adam. And some of them, some of the

Christian Palestinians, especially, kind of blend in. Right. But there's a lot of them over there. But how did you, so how do you wind up reading, I assume you read Bertrand Russell in English? Yes. How did you learn that? You didn't grow up in an English-speaking country. Yeah. Well, Jordan is kind of an English-speaking country. Kind of. Is that right?

So, you know, it was a British colony. I think, you know, independence happened, like, in the 50s or something like that, or maybe the 60s. So it was pretty late in the British Empire's history that Jordan stopped being a colony. So there was a lot of British influence.

I went to, so my father, my father was a government engineer. He didn't have a lot of money, so we lived a very modest life, kind of middle, lower-middle class. But he really cared about education.

He sent us to private schools. And in those private schools we learned using a British curriculum, right. So IGCSEs, A-levels, you know. Are you familiar

with those? Not at all. Yeah. So, you know, part of the legacy of British, you know, colonialism or whatever is that the education system became international. I think it's a good thing. There are British schools everywhere.

Yeah, yeah. Schools everywhere. And it's a good education system. It gives students a good level of freedom and autonomy to pick the kinds of things they're interested in. So I, you know, did a lot of math and physics, but also did, like, random things. I did child development, which I still remember.

And now that I have kids, I actually use it. And in high school you do that? In high school, and I loved it. What does that have to do with the civil rights movement? What do you mean? Well, that's the only topic in American schools. Really? Yeah. Well, yeah. You spend 16 years learning about the civil rights movement, so everyone can identify the Edmund Pettus Bridge, but no one knows anything else.

Oh, God. Sorry. That'll be me with my kids, no doubt. That's me.

That's so interesting. So when did you come to the US? 2012. And now you've got a billion-dollar company. That's pretty good. Yeah. I mean, America's amazing.

Like, I just love this country. It's given us a lot of opportunities. I just love the people, like, everyday people. I, like, just talk to people. I was just talking to my driver, and she was like, you know, I'm so embarrassed that I didn't know who Tucker Carlson was.

Okay, good. That's why I live here. Yeah. I was like, well, good for you. I think that means you're just, like, you know, living your life. And she's like, yeah, I have my kids and my chickens and my whatever. I was like, that's great.

It means you're happy. Means you're happy. Yes. But so, I've gotten off track again.

I'm sorry, I digress. I was referring to all these books. I'm like, you're not even from here. It's incredible. So, back to AI, and to this question of intuition: you don't think that it's inherent. So, in other words, if my life is so governed by technology, by my phone, by my computer, by

all the technology embedded in, like, every electronic object: you don't think that makes me trust machines more than my own gut? You can choose to. And I think a lot of people are being guided to do that. But ultimately, you're giving away a lot of freedom.

Yeah. And, you know, it's not just me saying that. There's, like, a huge tradition of hackers and computer scientists that started ringing the alarm bell a really long time ago about the way things were trending, which is more centralization, less, you know, diversity of competition in the market. And you have, like, one global social network as opposed to many. Now it's actually getting a little better.

But, and you had a lot of these people, you know, start, you know, the crypto movement. I know you were at the Bitcoin conference recently and you told them the CIA started Bitcoin. They got really angry on Twitter. But I don't know that. Yeah. But until you can tell me who Satoshi was, I have some questions.

What? Actually, I have a feeling about who Satoshi is, but that's a separate conversation. No, let's just stop right now, because I'll never remember to ask you again. Who is Satoshi? There's a guy. His name is Paul Le Roux.

By the way, for those watching who don't know: Satoshi is the pseudonym we use for the person who created Bitcoin. But we don't know. What's amazing, you

know, is this is a thing that was created, and we don't know who created it. He never moved the money, I don't think. Maybe

there was some activity here and there, but, like, there are hundreds of billions of dollars locked in. So we don't know who the person is, and they're not cashing out. It's, like, a pretty crazy story, right? Amazing. So, Paul Le Roux. Yeah. Paul was, you know, a crypto hacker in Rhodesia, before it was Zimbabwe.

And he created something called Encryption for the Masses, E4M. And he was one of the early, by the way, I think Snowden used E4M as part of his, as part of his hack. So he was one of the people that really, you know, made cryptography accessible to more people. However, he did become a criminal. He became a criminal

mastermind in Manila. He was really controlling the city, almost, you know; he paid off all the cops and everything. He was making so much money from so much criminal activity. And one of his aliases was "Solotshi." And so there's, like, a lot of, you know, circumstantial evidence.

There's no, like, clear-cut evidence. But I just have a feeling that he generated so much cash, he didn't know what to do with it, where to store it. And on the side, he was building Bitcoin to be able to store all that cash. And around the same time that Satoshi disappeared, he went to jail. He got booked

for all the crime he did. He recently got sentenced to 20 or 25 years in prison. I think the judge asked him, like, what would you do if you got out? And he's like, I would build an ASIC chip to mine Bitcoin.

So, look, you know, this is a strong opinion, loosely held. But it's just, like, there's something there. So he is currently in prison? He's currently in prison. Yeah.

In this country or the Philippines. I think this country, because he was doing all the crime here. He was selling drugs online, essentially. We should go see him in jail.

Yeah, yeah. We should check it out. Sorry, I'm sorry. I just had to get that out of you. So I keep digressing.

So, AI. You know, you're part of the AI ecosystem, of course. But you don't see it as a threat, you know? No, I don't see it as a threat at all. And I think, and

I, you know, I heard some of your, you know, podcast with Joe Rogan or whatever, and you're like, oh, we should nuke the data centers. And I'm. Excitable. Yeah. On the basis of very little information. Well, actually. Yeah. Well, actually, you tell me,

what is your theory about the threat of AI? Hey, you know what? I always want to be the kind of man who admits up front his limitations and his ignorance. And on this topic, I'm legitimately ignorant, but I have read a lot about it, and I've read most of the alarmist stuff about it. And the idea is, as you well know, that the machines become so powerful that they achieve a kind of autonomy, and though designed to serve you, they wind up ruling you. Yeah. And,

you know, I'm really interested in Ted Kaczynski's writings, the two books that he wrote. Obviously, let me say ritually: I'm totally opposed to letter bombs or violence of any kind. But Ted Kaczynski had a lot of provocative and thoughtful things to say about technology. It's almost like having live-in help, which, you know, people who make a lot of money all want to have, live-in help. But the truth about

live-in help is, you know, they're there to serve you, but you wind up serving them. It inverts. And AI is kind of a species of that; that's the fear. And I don't want to be a slave to a machine any more than I already am. So it's just kind of that simple. And then there's all this other stuff. You know a lot more about this than I do, and you're in that world. But yeah, that's my concern.

That's actually a quite valid concern. I would like to decouple the existential-threat concern from the concern we've been talking about, of, like, us being slaves to the machines. And I think Ted Kaczynski's critique of technology is actually one of the best. Yes. Thank you. Yeah.

I wish he hadn't killed people, of course, because I'm against killing. But I also think it had the opposite of the intended effect. He did it in order to bring attention to his thesis, and it ended up obscuring it.

Yeah, but I really wish that every person in America would read his books, not just his manifesto, but also the book that he wrote from prison, because at the least they're thought-provoking and really important. Yeah, yeah. I mean, briefly, and we'll get to existential risk in a second: he talked about this thing called the power process, which he thinks is intrinsic to human happiness. To struggle for survival, to go through life as a child, as an adult, build yourself up, get married, have kids, and then become the elder and then die.

Right? Exactly. And he thinks that modern technology kind of disrupts this process and makes people miserable. How do you know that? I read it. I'm very curious. I, like, read a lot of

things, and I just don't have mental censorship, in a way. I'm really curious. I'll read anything. Do you think being from another country has helped you in that way? Yeah. And I also think just my childhood. I was, like, always different.

When I had hair, it was all red. It was bright red. And my whole family, or at least half of my family, are redheads. And, you know, because of that experience, I was like, okay, I'm different. I'm comfortable being different. I'll be

different. And, you know, that commitment to not worrying about conforming, it was forced on me; I'm not conforming just by virtue of being different. And being curious and being, you know, good with computers and all that. I think that carried me through life. I just, like, get almost a disgust reaction to conformism and, like, mob mentality. I couldn't agree more.

I had a similar experience in childhood. I totally agree with you. We travel to an awful lot of countries on this show, to some free countries, a dwindling number, and a lot of not-very-free countries, places famous for government censorship. And wherever we go, we use a virtual private network, a VPN, and we use ExpressVPN. We do it to access the free and open internet.

But the interesting thing is, when we come back here to the United States, we still use ExpressVPN. Why? Big tech surveillance. It's everywhere. It's not just North Korea that monitors every move its citizens make. No, that

same thing happens right here in the United States, and in Canada and Great Britain and around the world. Internet providers can see every website you visit. Did you know that? They may even be required to keep your browsing history on file for

years and then turn it over to federal authorities if asked. In the United States, internet providers are legally allowed to, and regularly do, sell your browsing history. Everywhere you go online, there is no privacy. Did you know that? Well, we did, and that's why we use ExpressVPN.

And because we do, our internet provider never knows where we're going on the internet. They never see it in the first place. That's because 100% of our online activity is routed through ExpressVPN's secure, encrypted servers.

They hide our IP address, so data brokers cannot track us and sell our online activity on the black market. We have privacy. ExpressVPN lets you connect to servers in 105 different countries. So basically you can go online like you're anywhere in the world. No one can see you.

This was the promise of the internet in the first place. Privacy and freedom. Those didn't seem like they were achievable, but now they are.

ExpressVPN. We cannot recommend it enough. It's also really easy to use. Whether or not you fully understand the technology behind it.

You can use it on your phone, laptop, tablet, even your smart TVs. You press one button, just tap it and you're protected. You have privacy. So if you want online privacy and the freedom it bestows, get it.

You can go to our special link right here to get three extra months free of ExpressVPN. That's expressvpn.com/tucker for three extra months free. So, Kaczynski's thesis: that struggle is not only inherent to the human condition but an essential part, yes, of your evolution as a man, or as a person, and that technology disrupts that.

I mean, that seems right to me. Yeah. And I actually struggle to dispute that, despite being a technologist. Right. Ultimately.

Again, like I said, it's one of the best critiques. I think we could spend the whole podcast really trying to tease it apart. I think ultimately where I differ, and again, it just goes back to a lot of what we're talking about, my view of technology as an extension of us, is that we just don't want technology to be a thing that merely replaces us.

We want it to be an empowering thing. And what we do at Replit is empower people to learn to code, to build startups, to build companies, to become entrepreneurs. And I think, even in this world, you have to create the power process. You have to struggle.

And, yes, you can. Yeah. This is why, also, you know, a lot of technologists talk about UBI, universal basic income, right? Oh, I know. I think it's all wrong, because it just goes against human nature. Thank you. So they want

to kill everybody's drive, put them on the dole. Yes. Yes. So, you know, I don't think technology is inherently at odds with the power process. I'll leave it at that.

Now we can go to the existential threat. Yeah. Of course. Sorry. Boy, am I digressive. I can't believe I

interview people for a living. We had dinner last night, and it was awesome. It was one of those dinners that went fast, but we had about

400 different threats. Yes. Amazing. So that's what's out there. I know, I'm sort of convinced of it. It makes sense

to me, and I'm kind of threat-oriented anyway. People with my kind of personality are, like, sort of always looking for, you know, the big bad thing that's coming, the asteroid or the nuclear war or the AI slavery. But I know some pretty smart people, very smart people who are much closer to the heart of AI development, who also have these concerns. And I think a lot of the public shares these concerns. Yeah. And the last thing I'll say, before soliciting your much better-informed view of it, is that there's been surprisingly, and tellingly, little conversation about the upside of AI.

So instead it's like: this is happening, and if we don't do it, China will. That may be, I think it's probably true. But, like, why should I be psyched about it? Like, what's the upside for me? Right.

You know what I mean. Normally, when some new technology or huge change comes, the people who are profiting from it are like, you know what? It's going to be great. You're never going to have to do X again. You know, you just throw your clothes in a machine and press the button and they'll be clean. Yes. I'm not hearing any of that about AI. That's a very astute observation.

And yeah, I'll tell you exactly why. And to tell you why is a little bit of a long story, because I think there is an organized effort to scare people about AI. Organized. Organized. Yes. And so this starts with a mailing list in the 90s, a transhumanist mailing list called the Extropians.

And these Extropians, the name comes from "extropy" or something like that. But they believe in the singularity. So the singularity is a moment in time where, you know, AI is progressing

so fast, or technology in general is progressing so fast, that you can't predict what happens. It's self-evolving. All bets are off.

You know, we're entering a new world. Where you just can't predict it. Where technology can't be controlled.

Technology can't be controlled. It's going to remake everything. And those people believe that's a good thing, because the world

now sucks so much, and we are, you know, imperfect and unethical and all sorts of irrational, whatever. And so they really wanted the singularity to happen. And there's this young guy on this list. His name is Eliezer Yudkowsky. And he claims he can write this AI. And he would write, like,

really long essays about how to build this AI. Suspiciously, he never really publishes code. It's all just prose about how he's going to be able to build AI. Anyway, he is able to fundraise.

They started this thing called the Singularity Institute. A lot of people who were excited about the future kind of invested in it, Peter Thiel most famously. And he spent a few years trying to build an AI; again, never published code, never published any, like, real progress. And then he came out of it saying that not only can you not build AI, but if you build it, it will kill everyone.

So he kind of switched from being this optimist, you know, the singularity is great, to: actually, AI will for sure kill everyone. And then he was like, okay, the reason I made this mistake is because I was irrational. And the way to get people to understand that AI is going to kill everyone is to make them rational. So he started this blog called LessWrong.

And LessWrong, like, walks you through steps to becoming more rational. Look at your biases, examine yourself. You know, sit down, meditate on the irrational decisions you've made, and try to correct them.

And then they started this thing called the Center for Applied Rationality, CFAR. And they're giving seminars about rationality. But the

A seminar about rationality: what's that like? I've never been to one, but my guess would be that they talk about the biases, whatever. But they also have, like, weird things, where they have this almost struggle-session-like thing called debugging.

A lot of people wrote blog posts about how that was demeaning and it caused psychosis in some people. In 2017, in that community, there was, like, a collective

psychosis. A lot of people were kind of going crazy. And this is all written about on the internet. Debugging. So that would be, like, kind of your classic cult technique. Yeah. Where you have to strip

yourself bare, like auditing in Scientology. Yes, it's very common. Yes. Yeah. It's a constant in cults. Yes.

Is that what you're describing? Yeah. I mean, that's what I read in these accounts. Yeah. You know, they will sit down and they will, like, audit your mind and tell you where you're wrong and all of that. And it caused people huge distress; young guys all the time, like, talk about how going into that community caused them huge distress. And there were, like,

offshoots of this community where there were suicides, there were murders. There was a lot of really dark and deep shit. And the other thing is, like, they kind of teach you about rationality, and then they recruit you

to their side. Because if you're rational, you know, your group are all rational; now we've learned the art of rationality, and we agree that AI is going to kill everyone.

Therefore, everyone outside of this group is wrong, and we have to protect them: AI is going to kill everyone. But they also believe other things. Like, they believe that, you know, polyamory is rational.

Polyamory? Yeah. Like, you can have sex with multiple partners, essentially. But they think that's, I mean, I think it's certainly a natural desire, if you're a man, to sleep with more and different women, for sure. But rational in what sense? Like, I've never met a happy polyamorous couple long-term. And I know a lot of them.

Not a single one. So it might be self-serving, you think, to recruit more impressionable people into it. Yeah. And their hot girlfriends. Yes. Right.

So that's rational. Yeah. Supposedly. And so they, you know, convince each other of all this cult-like behavior.

And, yeah, the crazy thing is, like, this group ends up being super influential, because, you know, they recruit a lot of people that are interested in AI, and the AI labs and the people who were starting these companies were reading all this stuff. So Elon, you know, famously read a lot of Nick Bostrom, who is kind of an adjacent figure to the rationality community.

He was part of the original mailing list. I think he would call himself, you know, part of the rationalist community. But he wrote a book about AI and how AI is going to, you know, kill everyone, essentially. I think he moderated his views more recently, but originally he was one of the people kind of ringing the alarm. And,

you know, the founding of OpenAI was based on a lot of these fears. Like, Elon had fears of AI killing everyone. He was afraid that Google was going to do that. And so, you know, a group of people; I don't think everyone at OpenAI really believed that, but, you know, some of the original founding story was that.

And they were recruiting from that community. Yeah, so much so that when, you know, Sam Altman got fired recently, he was fired by someone from that community.

Someone who had started with effective altruism, which is another offshoot of that community. So the AI labs are intermarried in a lot of ways with this community. And so they kind of, you know, borrowed a lot of their talking points. But by the way, a lot of these companies are great companies now.

And I think they're cleaning house. But there is, I mean, I'll just use the term: it sounds like a cult to me. Yeah. I mean, it has the hallmarks of it in your description. Yeah.

And can we just push a little deeper on what they believe? You say they are transhumanists. Yes. What is that? What is it? I think they're just unsatisfied with human nature, unsatisfied with the current way we're constructed.

And that, you know, we're irrational, we're unethical. And so they long for a world where we can become

more rational, more ethical, by transforming ourselves: either by merging with AI via chips or what have you, changing our bodies, and, like, fixing the fundamental issues that they perceive with humans via modifications and merging with machines. It's just so interesting, and so shallow and silly. Like, a lot of those people I have known are not that smart, actually, because the best things.

I mean, reason is important, and, in my view, given to us by God. It's really important, and being irrational is bad. On the other hand, the best things about people, their best impulses, are not rational. I believe there is no rational justification for giving something you need to another person. Yes. Or spending an inordinate amount of time helping someone, or loving someone. Those are all irrational.

Now, banging someone's hot girlfriend, I guess that's rational, but that's kind of the lowest impulse that we have, actually. Well, wait till you hear about effective altruism. So they think our natural impulses that you just talked about are indeed irrational. And there's a guy, his name is Peter Singer, a philosopher from Australia.

The infanticide guy. Yes. He's so ethical, he's for killing children. Yeah. I mean, so their philosophy is utilitarian. You know, utilitarianism is the idea that you can calculate ethics.

Yeah. And you can start to apply it, and you get into really weird territory. You know, there are all these thought experiments. Like, you know, you have two people at the hospital requiring organs from another person, someone who came in for a regular checkup, or they will die.

You're basically supposed to kill that guy, take his organs and put them into the other two. And so it gets; I don't think people believe that, per se. But there are so many problems with, there's another belief that they have. But can I say that that belief, that conclusion, grows out of the core belief, which is that you're God? A normal person realizes: sure, it would help more people if I killed that person and gave his organs to, you know, a number of people. Like, that's just a math

question. Yeah, true. Yeah. But I'm not allowed to do that, because I didn't create life. I don't have the power. I'm not allowed to make decisions like that. Yes. Because I'm just a silly human being who can't see the future and is not omnipotent, because I'm not God. Yeah. I feel like all

of these conclusions stem from the misconception that people are gods. Yes. Does that sound right? No, I agree. I mean, I think the roots of this are being fundamentally unsatisfied with humans, and maybe, perhaps, hate. Hate. Yeah. Of humans.

Well, they're deeply disappointed. Yes. I've never heard anyone say it that way: they're disappointed with human nature. They're disappointed with the human condition.

They're disappointed with people's flaws. And I feel like, I mean, on one level, of course, you know, we should be better. But we used to call that judgment, which we're not allowed to do, by the way. That's just super judgy.

Actually, what they're saying is, you know, you suck. And it's just a short hop from there to: you should be killed, I think. I mean, that's a total lack of love.

Whereas a normal person, a loving person says, you kind of suck. I kind of suck, too. Yes, but I love you anyway. And you love me anyway. And I'm grateful for your love, right? Right.

That's right. Well, they'll say: you suck, join our community, have sex with us. But can I just clarify, these aren't just, like, you know, support staff at these companies, are they? So, you know, you've heard about SBF and FTX.

Yeah. They had what's called a polycule. Yeah. Right. They're all having sex with each other. Forgive me, I just want to be super catty and shallow, but given some of the people they were having sex with, that was not rational.

Hahaha. No person would do that. Come on now. Yeah. That's true. Yeah.

Well, so, you know, yeah. You know, what's even more disturbing, there's, you know, another ethical component to their philosophy called long-termism. And this comes from the effective altruist branch of rationality. Long-termism.

Long-termism. And so what they think is: in the future, if we make the right steps, there are going to be a trillion humans, a trillion minds. They might not be humans, they might be AIs. But they're going to be a trillion minds that can experience utility, that can experience good things, fun things, whatever.

If you're a utilitarian, you have to put a lot of weight on that. And maybe you discount it, sort of like discounted cash flows. But you still, you know, have to posit that if there are trillions, perhaps many more, people in the future, you need to value that very highly. Even if you discount it a lot, it ends up being valued very highly. So a lot of these communities end up all focusing on AI safety, because they think that AI...

Because they're rational, they arrived... and we can talk about their arguments in a second... they arrived at the conclusion that AI is going to kill everyone. Therefore the effective altruists and the rationalist community, all these branches, are all kind of focused on AI safety, because that's the most important thing, because we want the trillion people in the future to have it great. But, you know, when you're assigning value that high, it's sort of a form of Pascal's Wager.

It is. You can justify anything, including terrorism, including doing really bad things. If you're really convinced that AI is going to kill everyone, and the future holds so much value, more value than any living human today, you might justify doing really anything.
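To make the weighting he describes concrete, here is a back-of-the-envelope sketch of the long-termist arithmetic. Every number is invented purely for illustration; this is not anyone's actual model.

```python
# Back-of-the-envelope sketch of the long-termist weighting described
# above. All numbers are made up for illustration only.
present_people = 8e9      # roughly the number of people alive today
future_minds   = 1e14     # "a trillion minds, perhaps many more"
p_that_future  = 0.1      # assumed probability that future arrives
discount       = 0.001    # heavy discount on far-future utility

expected_future_weight = future_minds * p_that_future * discount

# Even after a small probability and a 1000x discount, the future
# term still dwarfs everyone alive today -- the Pascal's Wager shape.
print(expected_future_weight > present_people)  # prints True
```

That dominance of the future term, no matter how hard you discount, is what makes the framework able to justify almost anything in the present.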

And so that's built into it. It's a dangerous framework. But it's the same framework as every genocidal movement. Yes. From, you know, at least the French Revolution to the present. Yes. A glorious future justifies a bloody present.

Yes. And look. I'm not accusing them of genocidal intent, by the way.

I don't know them. But those ideas lead very quickly to the camps. I feel kind of... we're just talking about people just generally.

I'd like to talk about ideas, about things. But if they were just, you know, silly Berkeley cults or whatever, and they didn't have any real impact on the world, I wouldn't care about them. But what's happening is that they were able to convince a lot of billionaires of these ideas.

I think Elon maybe changed his mind about it. At some point he was convinced of these ideas. I don't know if he gave them money; there was a story at some point that he was thinking about it. But a lot of other billionaires gave them money, and now they're organized, and they're in DC lobbying for AI regulation. They're behind the AI regulation in California, and actually profiting from it.

There was a story somewhere that the main sponsor behind SB 1047, Dan Hendrycks, started a company at the same time that certifies the safety of AI.

And as part of the bill, it says that you have to get certified by a third party. So there are aspects of it that are

kind of, let's profit from it. By the way, this is all alleged, based on this article. I don't know for sure.

I think Senator Scott Wiener was trying to do the right thing with the bill. But he was listening to a lot of these cult members, let's call them.

And they're very well organized. And also, a lot of them still have connections to the big AI labs, and some of them work there. And they would want to create, you know, a situation where there's no competition in AI. Regulatory capture, per se.

And so, I'm not saying that these are all the direct motivations, or that all of them are true believers. But, you know, you might kind of infiltrate this

group, or direct it, in a way that benefits these corporations. Yeah. Well, I'm from DC, so I've seen a lot of instances where, you know, my bank account aligns with my beliefs. Thank heaven. Yeah, yeah, it kind of happens.

It winds up that way. It's funny. Climate is the perfect example. There's never been one climate solution that makes the person who proposes it poorer or less powerful. Ever. Not one. We've told you before about Hallow.

It is a great app that I am proud to say I use, and my whole family uses, for daily prayer and Christian meditation, and it's transformative. As we head into the start of school and the height of election season, you need it. Trust me, we all do.

Things are going to get crazier and crazier and crazier. It's hard to even imagine what is coming next. So with everything happening in the world right now, it is essential to ground yourself.

This is not some quack cure. This is the oldest and most reliable cure in history. It's prayer. Ground yourself in prayer and scripture every single day.

That is a prerequisite for staying sane and healthy and maybe for doing better eternally. So if you're busy on the road headed to kids sports, there's always time to pray and reflect alone or as a family. But it's hard to be organized about it.

Building a foundation of prayer is going to be absolutely critical as we head into November, praying that God's will is done in this country, and that peace and healing come to us here in the United States and around the world. Christianity, obviously, is under attack everywhere. That's not an accident. Why is Christianity, the most peaceful of all religions, under attack globally? Did you see the opening of the Paris Olympics? There's a reason, because the battle is not temporal. It's taking place in the unseen world. It's a spiritual battle, obviously.

So try Hallow. Get three months completely free at hallow.com/tucker. If there's ever a time to get spiritually in tune and ground yourself in prayer, it's now. Hallow will help.

Personally and strongly and totally sincerely recommended. I wonder, like, about the core assumption, which I've had up until right now, that these machines are capable of thinking.

Yeah. Is that true? So let's go through their chain of reasoning. I think the fact that it's a stupid cult-like thing, or perhaps actually a cult, does not automatically mean that their arguments are wrong. That's right. That's exactly right.

I think it does. You do have to kind of discount some of the arguments, because they come from crazy people. But the chain of reasoning is this: humans are general intelligences.

We have these things called brains. Brains are computers. They're based on purely physical phenomena that we know of; they're computing.

And if you agree that humans are computing, then therefore we can build a general intelligence in a machine. And if you agree up until this point... if you're able to build a general intelligence in the machine, even if only at human level, then you can create a billion copies of it.

And then it becomes a lot more powerful than any one of us. And because it's a lot more powerful than any one of

us, it would want to control us, or it would not care about us, because, as such, it's more powerful. Kind of like we don't care about ants. We'll step on an ant, no problem. Right. Because these machines are so

powerful, they're not going to care about us. And I sort of get off the train at the first link of that chain of reasoning, but every one of those steps I have problems with. The first step is: the mind is a computer. And, you know, based on what? And the idea is:

oh, well, if you don't believe that the mind is a computer, then you believe in some kind of woo spiritual thing. Well, you know, you have to convince me; you haven't presented an argument. But the idea that... speaking of rational...

Yeah, this is what reason looks like, right? The idea that we have a complete description of the universe anyway is wrong, right? We don't have a universal physics. We have physics of the small things.

We have physics of the big things. We can't really cohere them or combine them. So even the idea of being a materialist is sort of incoherent, because we don't have a complete description of the world.

That's one thing. That's a side argument I'm not going to... No, no, it's a very interesting argument, though. So you're saying... as someone who, I mean, you're effectively a scientist... can you just state, for viewers who don't follow this stuff, the limits of our knowledge of physics?

Yeah. So, you know, we have essentially two conflicting theories of physics. These systems can't be married.

They're not a universal system. You can't use them both at the same time. Oh. Well, that suggests a profound limit to our understanding of what's happening around us in the natural world. Does it? Yes, it does. And I think this is,

again, another error of the rationalist types: they just assume that, you know, we're so much more advanced in our science than we actually are. So it sounds like they don't know that much about science.

Yes. Okay. Thank you. Thank you. I'm sorry I had to ask.

Yeah, that's not even the main crux of my argument. There is a philosopher slash mathematician slash scientist. Wonderful.

His name is Sir Roger Penrose. I love how the British give the "Sir" title to someone who's accomplished.

He wrote this book called The Emperor's New Mind. And it's based on, you know, "The Emperor's New Clothes," the idea that the emperor is actually naked. And in his opinion, the argument that the mind is a computer is a consensus argument that is wrong. The emperor is naked. It is not really an argument; it's an assertion. Yes, it's an assertion that is fundamentally wrong.

And the way he proves it is very interesting. In mathematics, there's something called Gödel's incompleteness theorem.

And, you know, what that says is: there are statements that are true that can't be proved in mathematics.

So he constructs... Gödel constructs a number system where he can start to make statements about the number system itself. So, you know, he creates a statement that's like: "This statement is unprovable in system

F," where F is the whole formal system. Well, if you could prove it, then that statement becomes false. But, you know, it's true, because it's unprovable in the system. And Roger Penrose says: because we have this knowledge that it is true just by looking at it,

despite the fact that we can't prove it... I mean, the whole feature of the sentence is that it is unprovable... therefore our

knowledge is outside of any formal system. Therefore, yes, the human brain, or, like, our mind, is understanding something that mathematics is not able to describe.
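As a sketch of the construction he's describing, in schematic notation (F stands for any consistent formal system strong enough to do arithmetic, and Prov_F for its provability predicate):

```latex
% Goedel's diagonal construction, schematically. The diagonal lemma
% yields a sentence G for the system F such that:
G \;\leftrightarrow\; \neg \mathrm{Prov}_F\!\left(\ulcorner G \urcorner\right)
% If F is consistent, F proves neither G nor \neg G. But read from
% outside the system, G says exactly that it is unprovable in F,
% and it is -- so G is true yet unprovable within F.
```

Penrose's controversial further step, as described above, is that our ability to "see" G's truth from outside suggests the mind is not itself a formal system.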

And I thought, the first time I read it... you know, it ties together a lot of these things. Like, what's the famous one you were telling me last night? I'd never heard it. The Bertrand Russell self-canceling assertion. Yeah. It's like: "This statement is false." It's called the liar paradox.

What? Explain why, because that's going to float in my head forever. Why is that a paradox? So, "this statement is false."

If you look at a statement and agree with it, then it becomes true. But if it's true, then it's not true. It's false. And you do this, the circular thing and you never stop. Right? It's a broke logic in a way.

Yes. And Bertrand Russell spent a big part of his life writing his book, Principia Mathematica. And he wanted to really prove that mathematics is complete, consistent, you know, decidable, computable, all that.

And then all these things happened. Gödel's incompleteness theorem.

Turing, the inventor of the computer. Actually, this is the most ironic piece of science history that nobody ever talks about.

Turing invented the computer to show its limitations. So he invented the Turing machine, which is the idealized representation of a computer that we have today. All computers are Turing machines.

And he showed that this machine, if you give it a set of instructions, cannot in general tell whether those instructions will run and eventually stop, or continue running forever. It's called the halting problem. And this proves that mathematics has undecidability. It's not fully decidable or computable.

So all of these things were happening as he was writing the book. And, you know, it was really depressing for him, because he had kind of gone out to prove that, you know, mathematics is complete and all of that.

And, you know, this caused kind of a major panic at the time among mathematicians and all of that. It's like, oh my God, our systems are not complete. So it sounds like the deeper you go into science, and the more honest you are about what you discover, the more questions you have. Yeah. Which kind of gets you back to

where you should be in the first place, which is in a posture of humility. Yes. And yet I see science used, certainly in the political sphere... I mean, those are all dumb people, so it's like, who cares; if Kamala Harris lectures me about science, I don't even hear it... but also some smart people say "believe the science." The assumption behind that demand is that it's complete, and it's knowable, and we know it.

And if you're ignoring it, then you're ignorant, willfully or otherwise. Right? Well, my view of science: it's a method, ultimately. It's a method anyone can apply. It's democratic. It's decentralized.

Anyone can apply the scientific method, including people who are not trained. But in order to practice the method, you have to come from a position of humility: I don't know, and that's why I'm using this method to find out. And I cannot lie about

what I observe. Right. That's right. And today, you know, capital-S Science is used to control, and it's used to propagandize. And, like, you know, it's in the hands of people who really shouldn't have it, just dumb people with, you know, pretty ugly agendas.

But we're talking about the world that you live in, which is, like, unusually smart people who do this stuff for a living and are really trying to advance the ball in science. And I think what you're saying is that some of them, knowingly or not, just don't appreciate how little they know. Yeah. And, you know, they go through this chain of reasoning for this

argument. And, you know, none of those steps are, at a minimum, complete. And they just take it for granted. If you even doubt that the mind is a computer, you're... you know, I'm sure a lot of people will call me a heretic and will call me, like, all sorts of names, because it's just dogma.

That the mind is a computer... that the mind is a computer is dogma.

In technology. In science. That's so silly. Yes. Well, I mean, let me count the ways the mind is different from a computer. First of all, you're not

assured of a faithful representation of the past. Memories change over time, right? In a way that's misleading, and who knows why. But that is a fact, right? And that's not true of computers. That's right, I think. Yeah. But how do we explain things like intuition? Yeah. And instinct. Those are not...

Well, that is actually my question. Could those ever be features of a machine? You could argue that neural networks are sort of intuition machines, and that's what a lot of people say. But neural networks... you know, maybe I will describe them, just for the audience. Neural networks

are inspired by the brain. And the idea is that you can connect a network of small little functions, just mathematical functions.

And you can train it by giving it examples. I could give it a picture of a cat. And, you know, let's

say this network has to say yes if it's a cat, and no if it's not a cat. If you give it a picture of a cat and its answer is no, then it's wrong.

You adjust the weights based on the difference between the prediction and the answer. And you do this, I don't know, a billion times. And then the network encodes features of the cat. And this is literally, exactly, how neural networks work:

you tune all these small parameters until there's some embedded feature detection, you know, especially in classifiers. Right.
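A minimal sketch of the loop he just described: a single logistic "neuron" nudged toward the right answer on cat/not-cat examples. The feature vectors, labels, and learning rate are all invented stand-ins for real image data.

```python
import math

# Minimal sketch of the training loop described above: one logistic
# "neuron" learning yes/no from examples. Features are made up.
def predict(weights, bias, x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))      # probability of "cat"

# (features, label): 1 = cat, 0 = not cat
data = [([1.0, 0.2], 1), ([0.9, 0.1], 1),
        ([0.1, 0.9], 0), ([0.2, 1.0], 0)]

weights, bias, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(1000):                  # "do this a billion times"
    for x, label in data:
        err = predict(weights, bias, x) - label  # prediction vs. answer
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err               # nudge the weights toward the answer

print(predict(weights, bias, [0.95, 0.15]) > 0.5)  # cat-like input: True
```

Real networks do this with millions of parameters and backpropagation, but the shape of the loop, predict, compare to the answer, adjust the weights, repeat, is the same.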

And this is not intuition. This is, basically,

automatic programming, the way I see it. The other way, of course, is that we can write code manually. You can go to our website and write code. But we can also generate algorithms automatically via machine learning. Machine learning essentially discovers

these algorithms. And sometimes it discovers, like, very crappy algorithms.

For example, like, you know, say all the pictures that we gave it of a cat had grass in them. So it would learn that grass equals cat. The color green equals cat. Yes. And then one day you give it a picture of a cat without grass, and it fails. Like, what happened? It turns out it learned the wrong thing.

So, because it's obscure what it's actually learning, people interpret that as intuition. But it's not; the algorithms are just not explicated. And there's a lot of work now

on trying to explicate these algorithms, which is great work from companies like Anthropic. But, you know, I don't think you can call it intuition just because it's obscure. So what is it? How is intuition different? Human intuition? Well, you know, for one, we don't require

a trillion examples of a cat to learn what a cat is. Good point. You know, a kid can learn language with very few examples.

Right now, when we're training these large language models like ChatGPT, you have to give them the entire internet for them to learn language.

And that's not really how humans work. The way we learn is, like, we combine intuition and some more explicit way of learning.

And I don't think we've figured out how to do it with machines just yet. Do you think that structurally it's possible for machines to get there? So, you know, this chain of reasoning...

You know, I can go through every point and present arguments to the contrary, or at least, like, present doubt. But no one is really trying to deal with those doubts. And my view is... I'm not holding these doubts, you know, very strongly... but my view is that we just don't have a complete understanding of the mind. And you at least can't use it to argue that a kind of machine that

acts like a human, but much more powerful, can kill us all. But, you know, do I think that AI can get really powerful? Yes, I think AI can get really powerful, can get really useful. I think functionally it can feel like it's general. AI is ultimately a function of data, the kind of data that we put into it. Its functionality is based on the data.

So we get very little functionality outside of that data. Actually, we don't get any functionality outside of that data. It's actually been proven that these machines are just a function of their data. The sum total of what you put in. Exactly. Garbage in,

garbage out. Yeah. The cool thing about them is they can mix and match different functionalities that they learn from the dataset, so it looks a little bit more general. But let's say we collected all the data of the world. Collected everything that we

Care about, and we somehow fit. It into a machine. And now everyone's building these really large data centers. You will get a very highly capable, machine that will kind of look general, because we collected a lot of economically useful data and will start doing economically useful tasks. And from our perspective, it will start to look general.

So I'll call it functionally AGI. I don't doubt we're sort of headed in some direction like that. But we haven't figured out how these machines can actually generalize, and learn, and use things like intuition, so that when they see something fundamentally new, outside of their data distribution,

they can actually react to it correctly and learn it efficiently. We don't have the science for that. So, because we don't have the understanding of it yet on the most fundamental level... you began that explanation by saying we don't really understand the human brain.

So, like, how can we compare it to something when we don't even really know what it is? And there are a couple of... there's a machine learning scientist,

François Chollet... I don't know how to pronounce French names, but I think that's his name. He created a sort of IQ-like test, you know, where you're rotating shapes and whatever. And an entrepreneur put up $1 million

for anyone who's able to solve it using AI. And all the modern AIs that we think are super powerful couldn't do something that, like, a ten-year-old kid could do. And it showed that, again, those machines are just functions of their data. The more you throw a problem at them that's novel, the less they're able to do it.

Now again, I'm just... I'm not fundamentally discounting the fact that maybe we'll get there. But given the reality of where we are today, you can't argue that we're just going to put more compute and more data into this, and suddenly it becomes God and kills us all. Because that's the argument. And they're going to DC, and they're going to all these places, and they're springing up regulation. This regulation is going to

hurt American industry. It's going to hurt startups. It's going to make it hard to compete. It's going to give China a tremendous

advantage. It's going to really hurt us, based on these flawed arguments, when they're not actually grappling with these real questions. It sounds like they're not. And what gives me pause is not so much the technology; it's the way that the people creating the technology understand people. So I think the wise and correct way to understand people is as not-self-created beings. People did not create themselves.

People cannot create life. As beings created by some higher power, who at their core have some kind of impossible-to-describe spark, a holy mystery.

And for that reason, they cannot be enslaved or killed by other human beings. That's wrong. There is right and wrong, and that is wrong. You know what's a gray area? That's not a gray area, because they're not self-created. Yes. Right.

I think that all human action flows from that belief, and that the most inhumane actions in history flow from the opposite belief, which is that people are just objects that can and should be improved, and I have full power over them. Like, that's a really... that's a totalitarian mindset. And the one thing that connects every genocidal movement is that belief. And it seems to me, as an outsider, that the people creating this technology have that belief. Yeah. And you don't even have to be

spiritual to have that belief. Look, I... well, you certainly don't. Yeah, yeah. So I

think that's actually a rational conclusion. But I 100% agree. I'll give you one interesting anecdote, again from science.

We've had brains for half a billion... if you believe in evolution and all that, we've had brains for half a billion years, right? And we've had kind of a human-like species for, you know, half a million years, perhaps more, perhaps a million years. There is a moment in time,

40,000 years ago, called the Great Leap Forward, where we see culture, we see religion, we see drawings.

We saw, like, very little of that before: tools and whatever. And suddenly we're seeing this Cambrian explosion of culture. Right. Pointing to something larger than just, like, daily needs or the world around them.

Yeah, and it's not it. We're...

2024-08-02 14:53
