Transcribe your podcast
[00:00:07]

Welcome to the Tucker Carlson Show. It's become pretty clear that the mainstream media are dying. They can't die quickly enough, and there's a reason they're dying: because they lie. They lied so much it killed them. We're not doing that. At TuckerCarlson.com, we promise to bring you the most honest content, the most honest interviews we can, without fear or favor. Here's the latest. It does sound like you're, like, directly connected to AI development. Yes. You're part of the ecosystem. Yes.

[00:00:34]

And we benefited a lot from when it started happening. Like, it was almost a surprise to a lot of people, but we saw it coming.

[00:00:41]

And you saw AI coming?

[00:00:43]

Saw it coming, yeah. So, you know, this recent AI wave, you know, it surprised a lot of people when ChatGPT came out in November 2022.

[00:00:52]

Yeah.

[00:00:53]

A lot of people just lost their minds. Like, suddenly a computer can talk to me, and that was like the whole thing.

[00:00:58]

Yeah, I wasn't into it at all, really. It was terrifying.

[00:01:02]

Paul Graham, one of my closest friends and sort of allies and mentors, he's a big Silicon Valley figure. He's a writer, kind of like you, you know, he writes a lot of essays, and he hates it. He thinks it's like a midwit, right? And it's just making people write worse, making people think worse, or...

[00:01:22]

Not think at all.

[00:01:23]

Right.

[00:01:23]

Nothing, as the iPhone has done, as Wikipedia and Google have done.

[00:01:26]

Yes, we were just talking about that. The iPhones, iPads, whatever, they made it so that anyone can use a computer, but they also made it so that no one has to learn to program. The original vision of computing was that this is something that's gonna give us superpowers, right? J.C.R. Licklider, who ran ARPA's computing office while the Internet was being developed, wrote this essay called Man-Computer Symbiosis, and he talked about how computers can be an extension of ourselves, can help us grow. There's this marriage between the type of intellect that computers can do, which is high-speed arithmetic, whatever, and the type of intellect that humans can do, which is more intuition.

[00:02:15]

Yes.

[00:02:16]

But since then, I think the consensus has changed around computing, and I'm sure we'll get into that, which is why people are afraid of AI kind of replacing us. This idea that computers and computing are a threat because they're directly competitive with humans, which is not really the belief I hold. They're extensions of us, and I think people learning to program, and this is really embedded at the heart of our mission at Replit, is what gives you superpowers. Whereas when you're just tapping, you're kind of a consumer. You're not a producer of software. And I want more people to be producers of software. There's a book by Doug, not Hofstadter, Rushkoff. Douglas Rushkoff. It's called Program or Be Programmed. And the idea is, if you're not the one coding, someone is coding you. Someone is programming you. These algorithms on social media, they're programming us. Right.

[00:03:16]

Too late for me to learn to code, though.

[00:03:18]

I don't think so.

[00:03:19]

I don't think so. I can't balance my checkbook, assuming there are still checkbooks. I don't think there are. But let me just go back to something you said a minute ago, that the idea, as originally conceived by the DARPA guys who made this all possible, was that machines would do the math and humans would do the intuition. I wonder, as machines become more embedded in every moment of our lives, if intuition isn't dying, or if people are less willing to trust theirs. I've seen that a lot in the last few years, where something very obvious will happen, and people are like, well, I could sort of acknowledge and obey what my eyes tell me, and my instincts are screaming at me, but the data tell me something different. I feel like my advantage is I'm very close to the animal kingdom.

[00:04:07]

That's right.

[00:04:08]

And I just believe in smell. But I wonder if that's not a result of the advance of technology.

[00:04:15]

Well, I don't think it's inherent to the advance of technology. I think it's a cultural thing, right? It's, again, this vision of computing as a replacement for humans versus an extension of humans. And so you go back, Bertrand Russell wrote a book about the history of philosophy and the history of mathematics, going back to the ancients and Pythagoras and all these things, and you could tell in the writing he was almost surprised by how much intuition played into science and math in the sort of ancient era of advancements in logic and philosophy and all of that. Whereas I think the culture today is like, well, you gotta check your intuition at the door.

[00:05:04]

Yes.

[00:05:04]

Yeah, you're biased. Your intuition is racist or something. This is bad. And you have to be this, like, blank slate, and you trust the data. But by the way, you can make the data say a lot of different things.

[00:05:19]

Oh, I've noticed. Wait, can I just ask a totally off-topic question that just occurred to me? How are you this well educated? I mean, so you grew up in Jordan speaking Arabic in a displaced Palestinian family. You didn't come to the US until pretty recently. You're not a native English speaker. How are you reading Bertrand Russell? And what was your education? Is every Palestinian family in Jordan as well educated?

[00:05:44]

Kind of, yeah. The Palestinian diaspora is pretty well educated. And you're starting to see this generation, our generation, who grew up in it, starting to become more prominent. I mean, in Silicon Valley, you know, a lot of C-suite and VP-level executives, a lot of them are Palestinian originally. A lot of them wouldn't say so, because there's still, you know, bias and discrimination and all that.

[00:06:09]

But they wouldn't say they're Palestinian.

[00:06:10]

They wouldn't say. And, you know, they're called Adam. And some of them, some of the Christian Palestinians especially, kind of blend in, right? But there's a lot of them out there.

[00:06:18]

But how did you, so how do you wind up reading? I assume you read Bertrand Russell in English.

[00:06:23]

Yes.

[00:06:24]

How did you learn that? You didn't grow up in an English-speaking country?

[00:06:28]

Yeah, well, Jordan is kind of an English-speaking country.

[00:06:31]

Well, it kind of is. That's true.

[00:06:32]

Right. So, you know, it was a British colony. I think independence happened in, like, the fifties or something like that, or maybe the sixties. So it was pretty late in the British empire's history that Jordan stopped being a colony. So there was a lot of British influence. My father, my father is a government engineer. He didn't have a lot of money, so we lived a very modest life, kind of middle, lower middle class. But he really cared about education. He sent us to private schools, and in those private schools we learned using a British diploma, right? So IGCSE, A-levels. Are you familiar with those?

[00:07:16]

Not at all. Yeah.

[00:07:17]

So part of the sort of British colonialism or whatever is that the education system became international. I think it's a good thing.

[00:07:28]

Oh, yeah. There are British schools everywhere.

[00:07:29]

Yeah, yeah. British schools everywhere. And it's a good education system. It gives students a good level of freedom and autonomy to pick the kinds of things they're interested in. So I did a lot of math and physics, but also did, like, random things. I did child development, which I still remember, and now that I have kids, I actually use it. You do that in high school. And I learned...

[00:07:51]

What does that have to do with the civil rights movement?

[00:07:54]

What do you mean?

[00:07:55]

Well, that's the only topic in American schools.

[00:07:57]

Really?

[00:07:58]

Yeah. Oh, yeah. You spend 16 years learning about the civil rights movement so everyone can identify the Edmund Pettus Bridge, but no one knows anything else.

[00:08:04]

Oh, God, I'm so nervous about that with my kids.

[00:08:08]

Opt out. Trust me. That's so interesting. When did you come to the US?

[00:08:14]

2012.

[00:08:15]

Damn. And now you've got a billion dollar company. That's pretty good.

[00:08:19]

Yeah, I mean, America's amazing. Like, I just love this country. It's given us a lot of opportunities. I just love the people, like, everyday people. I like to just talk to people. I was just talking to my driver, and she was like, you know, I'm so embarrassed, I didn't know who Tucker Carlson was. Good.

[00:08:36]

That's why I live here. Yeah.

[00:08:38]

I was like, well, good for you. I think that means you're just like, you're just living your life. And she's like, yeah, I have my kids and my chickens and my whatever. I was like, that's great.

[00:08:46]

It means you're happy.

[00:08:48]

It means you're happy.

[00:08:50]

So I'm sorry to digress. You keep referring to all these books, and I'm like, you're not even from here. It's incredible. So, but back to AI and to this question of intuition. You don't think that it's inherent? So in other words, if my life is to some extent governed by technology, by my phone, by my computer, by all the technology embedded in every electronic object, you don't think that makes me trust machines more than my own gut?

[00:09:21]

You can choose to, and I think a lot of people are being guided to do that. But ultimately, you're giving away a lot of freedom. And it's not just me saying that. There's a huge tradition of hackers and computer scientists who started ringing the alarm bell a really long time ago about the way things were trending, which is more centralization, less diversity of competition in the market. And you have one global social network as opposed to many. Now it's actually getting a little better. But you had a lot of these people start the crypto movement. I know you were at the Bitcoin conference recently and you told them the CIA started Bitcoin. They got really angry on Twitter.

[00:10:16]

I don't know that. But until you can tell me who Satoshi was, I have some questions.

[00:10:23]

Actually, I have a feeling about who Satoshi was. But that's a separate conversation.

[00:10:26]

No, it's not. Let's just stop right now because I will never forget to ask you again, who is Satoshi?

[00:10:31]

There's a guy. His name is Paul Le Roux.

[00:10:34]

By the way, for those watching who don't know who Satoshi was: Satoshi is the pseudonym that we use for the person who created bitcoin. But we don't know.

[00:10:42]

It's amazing. You know, it's this thing that was created, and we don't know who created it. He never moved the money, I don't think. Maybe there was some activity here and there, but there's, like, billions, hundreds of billions of dollars locked in. So we don't know who the person is, and they're not cashing out. It's a pretty crazy story, right?

[00:11:00]

It's amazing. So Paul Le Roux.

[00:11:01]

Yeah. Paul Le Roux was a crypto hacker in Rhodesia, before it was Zimbabwe, and he created something called Encryption for the Masses, E4M, and was one of the early... by the way, I think Snowden used E4M as part of his hack. So he was one of the people that really made cryptography accessible to more people. However, he did become a criminal. He became a criminal mastermind in Manila. He was controlling the city, almost. He paid off all the cops and everything. He was making so much money from so much criminal activity. His nickname was Solotshi, with an L. And so there's a lot of circumstantial evidence. There's no clear-cut evidence. But I just have a feeling that he generated so much cash, he didn't know what to do with it, where to store it. And on the side, he was building bitcoin to be able to store all that cash. And around the same time that Satoshi disappeared, he went to jail. He got booked for all the crime he did. He recently got sentenced to 25 years in prison. I think the judge asked him, what would you do if you got out?

[00:12:19]

And he's like, I would build an ASIC chip to mine bitcoin. So, look, this is a strong opinion, loosely held, but it's just, like, there's...

[00:12:28]

So he is currently in prison.

[00:12:29]

He's currently in prison, yeah.

[00:12:32]

In this country or the Philippines? I think this country.

[00:12:35]

Cause he was doing all the crime here. He was selling drugs online, essentially.

[00:12:39]

Huh. We should go see him in jail.

[00:12:41]

Yeah, yeah. Check out his stories.

[00:12:44]

I'm sorry, I just had to get that out of you. So I keep digressing. So you see AI, and you're part of the AI ecosystem. Of course. But you don't see it as a threat, do you? No, no, I don't.

[00:12:59]

See it as a threat at all. And I think, and I heard some of your podcasts with Joe Rogan, whatever, and you were like, we should nuke the data centers.

[00:13:06]

And I'm excitable on the basis of very little information. Yeah.

[00:13:11]

Well, actually, tell me, what is your theory about the threat of AI?

[00:13:15]

I always. I want to be the kind of man who admits upfront his limitations and his ignorance. And on this topic, I'm legitimately ignorant, but I have read a lot about it and I've read most of the alarmist stuff about it. And the idea is, as you well know, that the machines become so powerful that they achieve a kind of autonomy, and they, though designed to serve you, wind up ruling you.

[00:13:39]

Yeah.

[00:13:40]

And I'm really interested in Ted Kaczynski's writings, his two books that he wrote. Obviously, as I have to say ritually, I'm totally opposed to letter bombs or violence of any kind, but Ted Kaczynski had a lot of provocative and thoughtful things to say about technology. It's almost like having live-in help. People make a lot of money, they all want to have live-in help, but the truth about live-in help is they're there to serve you, but you wind up serving them, and it inverts. And AI is a kind of species of that. That's the fear. And I don't want to live... I don't want to be a slave to a machine any more than I already am. So it's kind of that simple. And then there's all this other stuff. You know a lot more about this than I do, since you're in that world. But, yeah, that's my concern.

[00:14:27]

That's actually quite a valid concern. I would like to decouple the existential threat concern from the concern we've been talking about, of us being slaves to the machines. And I think Ted Kaczynski's critique of technology is actually one of the best.

[00:14:46]

Yes, thank you.

[00:14:47]

Yeah, I wish he hadn't killed people.

[00:14:49]

Of course, because I'm against killing, but I also think it had the opposite of the intended effect. He did it in order to bring attention to his thesis and ended up obscuring it. But I really wish that every person in America would read not just his manifesto, but the book that he wrote from prison, because at the least, they're thought-provoking and really important.

[00:15:13]

Yeah, yeah. I mean, briefly, and we'll get to existential risk in a second. But he talked about this thing called the power process, which is, he thinks that it's intrinsic to human happiness to struggle for survival, to go through life as a child, into adulthood, build yourself up, get married, have kids, and then become the elder and then die.

[00:15:36]

Right, exactly.

[00:15:37]

And he thinks that modern technology kind of disrupts this process and makes people miserable.

[00:15:43]

How do you know that?

[00:15:44]

I read it. I'm very curious. I read a lot of things, and I just don't have mental censorship, in a way.

[00:15:53]

I love.

[00:15:54]

I'm really curious. I'll read anything.

[00:15:56]

Do you think being from another country has helped you in that way?

[00:16:00]

Yeah. And I also, I think, just my childhood, I was, like, always different. When I had hair, it was all red. It was bright red. And my whole family, or at least half of my family, are redheads. And because of that experience, I was like, okay, I'm different. I'm comfortable being different. I'll be different. And, you know, that commitment to not worrying about conforming, it was forced on me that I'm not conforming, just by virtue of being different and being curious and being, you know, good with computers and all that. I think that carried me through life. It's just, I get, like, almost a disgust reaction to conformism and, like, mob mentality.

[00:16:57]

I couldn't agree more. I had a similar childhood experience. I totally agree with you. We travel to an awful lot of countries on this show, to some free countries, a dwindling number, and a lot of not-very-free countries, places famous for government censorship. And wherever we go, we use a virtual private network, a VPN, and we use ExpressVPN. We do it to access the free and open Internet. But the interesting thing is, when we come back here to the United States, we still use ExpressVPN. Why? Big tech surveillance. It's everywhere. It's not just North Korea that monitors every move its citizens make. No, that same thing happens right here in the United States and in Canada and Great Britain and around the world. Internet providers can see every website you visit. Did you know that? They may even be required to keep your browsing history on file for years and then turn it over to federal authorities if asked. In the United States, Internet providers are legally allowed to, and regularly do, sell your browsing history. Everywhere you go online, there is no privacy. Did you know that? Well, we did, and that's why we use ExpressVPN.

[00:18:05]

And because we do, our Internet provider never knows where we're going on the Internet. They never hear it in the first place. That's because 100% of our online activity is routed through ExpressVPN's secure, encrypted servers. They hide our IP address so data brokers cannot track us and sell our online activity on the black market. We have privacy. ExpressVPN lets you connect to servers in 105 different countries. So basically, you can go online like you're anywhere in the world. No one can see you. This was the promise of the Internet in the first place: privacy and freedom. Those didn't seem like they were achievable, but now they are. ExpressVPN. We cannot recommend it enough. It's also really easy to use, whether or not you fully understand the technology behind it. You can use it on your phone, laptop, tablet, even your smart TVs. You press one button, just tap it, and you're protected. You have privacy. So if you want online privacy and the freedom it bestows, get it. You can go to our special link right here to get three extra months free of ExpressVPN. That's expressvpn.com/tucker. Expressvpn.com/tucker, for three extra months free.

[00:19:27]

Hey, it's Kimberly Fletcher here from Moms for America with some very exciting news. Tucker Carlson is going on a nationwide tour this fall, and Moms for America has the exclusive VIP meet-and-greet experience for you. Before each show, you can have the opportunity to meet Tucker Carlson in person. These tickets are fully tax-deductible donations. So go to MomsForAmerica.us and get one of our very limited VIP meet-and-greet experiences with Tucker at any of the 15 cities on his first-ever coast-to-coast tour. Not only will you be supporting Moms for America in our mission to empower moms, promote liberty, and raise patriots, your tax-deductible donation secures you a full VIP experience with priority entrance and check-in, premium gold seating in the first five rows, access to a pre-show cocktail reception, and an individual meet-and-greet and photo with America's most famous conservative and our friend, Tucker Carlson. Visit MomsForAmerica.us today for more information and to secure your exclusive VIP meet-and-greet tickets. See you on the tour.

[00:20:56]

Hey, guys, Josh Hammer here, the host of America on Trial with Josh Hammer, a podcast for The First podcast network. Look, there are a lot of shows out there that are explaining the political news cycle, what's happening on the Hill, this to that. There are no other shows that are cutting straight to the point when it comes to the unprecedented lawfare debilitating and affecting the 2024 presidential election. We do all that every single day right here on America on Trial with Josh Hammer. Subscribe and download your episodes wherever you get your podcasts. It's America on Trial with Josh Hammer. So Kaczynski's thesis, that struggle is not only inherent to the human condition, but an essential part of your evolution as a man or as a person, and that technology disrupts that. I mean, that seems right to me.

[00:21:44]

Yeah. And I actually struggle to dispute that, despite being a technologist. Right. Ultimately, again, like I said, it's one of the best critiques. I think we could spend the whole podcast really trying to tease it apart. I think ultimately where I differ, and again, it goes back to a lot of what we've been talking about, my view of technology as an extension of us, is that we just don't want technology to be a thing that's merely replacing us. We want it to be an empowering thing. And what we do at Replit is we empower people to learn to code, to build startups, to build companies, to become entrepreneurs. And I think, even in this world, you have to create the power process. You have to struggle. This is why I'm also, you know, a lot of technologists talk about UBI, universal basic income.

[00:22:44]

Oh, I know.

[00:22:45]

I think it's all wrong because it just goes against human nature.

[00:22:48]

Thank you.

[00:22:49]

So I think you want to kill.

[00:22:51]

Everybody, put them on the dole.

[00:22:52]

Yes. Yes. So I don't think technology is inherently at odds with the power process. I'll leave it at that. And we can go to existential threat.

[00:23:05]

Yeah, of course, sir. Boy, am I digressive. I can't believe I interview people for a living. We had dinner last night.

[00:23:14]

That was awesome. It was one of the best dinners.

[00:23:17]

We hit about 400 different threats.

[00:23:19]

Yes. That's amazing.

[00:23:22]

So that's what's out there. I know. I'm sort of convinced of it, or it makes sense to me, and I'm kind of threat oriented anyway, so people with my kind of personality are sort of always looking for the big bad thing that's coming. The asteroid or the nuclear war, the AI slavery. But I know some pretty smart people who, very smart people who are much closer to the heart of AI development, who also have these concerns. And I think a lot of the public shares these concerns.

[00:23:54]

Yeah.

[00:23:55]

And the last thing I'll say, before soliciting your view of it, your much better informed view of it, is that there's been surprisingly and tellingly little conversation about the upside of AI. Instead it's like, this is happening, and if we don't do it, China will. That may be. I think that's probably true, but, like, why should I be psyched about it? Like, what's the upside for me?

[00:24:18]

Right?

[00:24:19]

You know what I mean? Normally when some new technology or huge change comes, the people who are profiting from it are like, you know what? It's going to be great. You're not going to ever have to do X again. You know, you just throw your clothes in a machine and press a button and they'll be clean.

[00:24:33]

Yes.

[00:24:34]

I'm not hearing any of that about it.

[00:24:35]

That's a very astute observation. And I'll tell you exactly why. And to tell you why is a little bit of a long story, because I think there is an organized effort to scare people about AI.

[00:24:49]

Organized?

[00:24:49]

Organized, yes. And so this starts with a mailing list in the nineties, a transhumanist mailing list called the Extropians. And these Extropians, I might have gotten the name wrong, Extropia or something like that, but they believed in the singularity. The singularity is a moment in time where AI is progressing so fast, or technology in general is progressing so fast, that you can't predict what happens. It's self-evolving, and all bets are off. We're entering a new world where you...

[00:25:27]

Just can't predict it, where technology can't.

[00:25:29]

Be controlled. Technology can't be controlled. It's going to remake everything. And those people believe that's a good thing, because the world now sucks so much and we are imperfect and unethical and all sorts of irrational, whatever. And so they really wanted the singularity to happen. And there was this young guy on this list, his name's Eliezer Yudkowsky, and he claims he can write this AI, and he would write really long essays about how to build this AI. Suspiciously, he never really publishes code; it's all just prose about how he's going to be able to build AI. Anyway, he's able to fundraise. They started this thing called the Singularity Institute. A lot of people were excited about the future and kind of invested in him, Peter Thiel most famously. And he spent a few years trying to build an AI. Again, never published code, never published any real progress. And then he came out of it saying that not only can you not build AI, but if you build it, it will kill everyone. So he switched from being this optimist, the singularity is great, to actually, AI will for sure kill everyone. And then he was like, okay, the reason I made this mistake is because I was irrational.

[00:26:49]

And the way to get people to understand that AI is going to kill everyone is to make them rational. So he started this blog called LessWrong, and LessWrong walks you through steps to becoming more rational. Look at your biases, examine yourself, sit down, meditate on all the irrational decisions you've made and try to correct them. And then they started this thing called the Center for Applied Rationality, or something like that. CFAR. And they're giving seminars about rationality, but...

[00:27:18]

A seminar about rationality? What's that like?

[00:27:22]

I've never been to one, but my guess would be they talk about the biases, whatever. But they also have weird things, like this almost struggle-session-like thing called debugging. A lot of people wrote blog posts about how it was demeaning and caused psychosis in some people. In 2017, in that community, there was a collective psychosis; a lot of people were kind of going crazy. And this is all written about on the Internet. Debugging.

[00:27:48]

So that would be kind of your classic cult technique, where you have to strip yourself bare, like auditing in Scientology. It's very common, yes.

[00:27:57]

Yeah.

[00:27:59]

It's a constant in cults.

[00:28:00]

Yes.

[00:28:01]

Is that what you're describing?

[00:28:02]

Yeah, I mean, that's what I read in these accounts. They will sit down and they will, like, audit your mind and tell you where you're wrong and all of that. And it caused people huge distress. Young guys all the time talk about how going into that community caused them huge distress. And there were, like, offshoots of this community where there were suicides, there were murders, there was a lot of really dark and deep shit. And the other thing is, they kind of teach you about rationality, and they recruit you to AI risk. Because if you're rational, you're in a group. We're all rational now. We learned the art of rationality, and we agree that AI is going to kill everyone. Therefore, everyone outside of this group is wrong, and we have to protect them. AI is going to kill everyone. But they also believe other things. Like, they believe that polyamory is rational, and everyone that...

[00:28:57]

Polyamory?

[00:28:57]

Yeah, you can have sex with multiple partners, essentially. But they think that's...

[00:29:03]

I mean, I think it's certainly a natural desire, if you're a man, to sleep with more and different women, for sure. But it's rational in what sense? Like, I've never met anyone happy and polyamorous long-term, and I've known a lot of them. Not a single one.

[00:29:21]

So it might be self-serving, you think, to recruit more impressionable...

[00:29:27]

People into it, and their hot girlfriends?

[00:29:29]

Yes.

[00:29:30]

Right. So that's rational.

[00:29:34]

Yeah, supposedly. And so they convince each other of all this cult-like behavior. And the crazy thing is, this group ends up being super influential, because they recruit a lot of people that are interested in AI. And the AI labs and the people who were starting these companies were reading all this stuff. So Elon famously read a lot of Nick Bostrom, who's kind of an adjacent figure to the rationalist community. He was part of the original mailing list. I think he would call himself a rationalist, part of the rationalist community. But he wrote a book about AI and how AI is going to kill everyone, essentially. I think he moderated his views more recently, but originally he was one of the people ringing the alarm. And the foundation of OpenAI was based on a lot of these fears. Elon had fears of AI killing everyone. He was afraid that Google was going to do that. And so, this group of people... I don't think everyone at OpenAI really believed that, but some of the original founding story was that, and they were recruiting from that community so much.

[00:30:46]

So when Sam Altman got fired recently, he was fired by someone from that community, someone who started with effective altruism, which is another offshoot from that community, really. And so the AI labs are intermarried in a lot of ways with this community, and they borrowed a lot of its talking points. By the way, a lot of these companies are great companies now, and I think they're cleaning house.

[00:31:17]

But there is, I mean, I'll just use the term. It sounds like a cult to me. Yeah, I mean, it has the hallmarks of it in your description. And can we just push a little deeper on what they believe? You say they are transhumanists.

[00:31:31]

Yes.

[00:31:31]

What is that?

[00:31:32]

Well, I think they're just unsatisfied with human nature, unsatisfied with the way we're currently constructed, with the fact that we're irrational, we're unethical. And so they long for a world where we can become more rational, more ethical, by transforming ourselves, either by merging with AI via chips or what have you, changing our bodies, and fixing the fundamental issues that they perceive with humans via modifications and merging with machines.

[00:32:11]

It's just so interesting, because... and so shallow and silly. Like, a lot of those people I have known are not that smart, actually. Because the best things... I mean, reason is important, and it was, in my view, given to us by God, and it's really important, and being irrational is bad. On the other hand, the best things about people, their best impulses, are not rational.

[00:32:35]

I believe so, too.

[00:32:36]

There is no rational justification for giving something you need to another person.

[00:32:41]

Yes.

[00:32:42]

For spending an inordinate amount of time helping someone, for loving someone. Those are all irrational. Now, banging someone's hot girlfriend, I guess that's rational. But that's kind of the lowest impulse that we have, actually.

[00:32:53]

Well, wait till you hear about effective altruism. So they think our natural impulses that you just talked about are indeed irrational. And there's a guy, his name is Peter Singer, a philosopher from Australia.

[00:33:05]

The infanticide guy.

[00:33:07]

Yes.

[00:33:07]

He's so ethical. He's for killing children.

[00:33:09]

Yeah. I mean, so their philosophy is utilitarian. Utilitarianism is the idea that you can calculate ethics, and when you start to apply it, you get into really weird territory. There are all these thought experiments. Like, you have two people at the hospital who each require an organ from a third person who came in for a regular checkup, or they will die. Ethically, you're supposed to kill that guy, take his organs, and put them into the other two. And so it gets... I don't think people believe that, per se, but there are so many problems with that. And there's another belief that they have.

[00:33:57]

But can I say that belief or that conclusion grows out of the core belief, which is that you're God. Like, a normal person realizes, sure, it would help more people if I killed that person and gave his organs to a number of people. Like, that's just a math question. True, but I'm not allowed to do that because I didn't create life. I don't have the power. I'm not allowed to make decisions like that because I'm just a silly human being who can't see the future and is not omnipotent because I'm not God. I feel like all of these conclusions stem from the misconception that people are gods.

[00:34:33]

Yes.

[00:34:34]

Does that sound right?

[00:34:34]

No, I agree. I mean, I think, at root, they're just fundamentally unsatisfied with humans, and maybe perhaps hate humans.

[00:34:50]

Well, they're deeply disappointed.

[00:34:52]

Yes.

[00:34:53]

I've never heard anyone say that so well: they're disappointed with human nature, they're disappointed with the human condition, they're disappointed with people's flaws. And I feel like that's the core of it. I mean, on one level, of course we should be better. But we used to call that judgment, which we're not allowed to do, by the way; that's just super judgy. Actually, what they're saying is, you know, you suck. And it's just a short hop from there to, you should be killed, I think. I mean, that's a total lack of love. Whereas a normal person, a loving person, says, you kind of suck, I kind of suck too, but I love you anyway, and you love me anyway, and I'm grateful for your love. Right? That's right.

[00:35:35]

That's right. Well, they'll say, you suck. Join our rationality community. Have sex with us. So.

[00:35:43]

But can I just clarify? These aren't just like, you know, support staff at these companies? Like, are there?

[00:35:50]

So, you know, you've heard about SBF and FTX, of course.

[00:35:52]

Yeah.

[00:35:52]

They had what's called a polycule.

[00:35:54]

Yeah.

[00:35:55]

Right. They were all having sex with each other.

[00:35:58]

Given. Now, I just want to be super catty and shallow, but given some of the people they were having sex with, that was not rational. No rational person would do that. Come on now.

[00:36:08]

Yeah, that's true. Yeah. Well, so, you know, what's even more disturbing, there's another ethical component to their philosophy called longtermism, and this comes from the effective altruist branch of rationality. Longtermism. What they think is, in the future, if we make the right steps, there are going to be a trillion humans, a trillion minds. They might not be humans, they might be AIs, but they're going to be a trillion minds that can experience utility, that can experience good things, fun things, whatever. If you're a utilitarian, you have to put a lot of weight on that, and maybe you discount it, sort of like discounted cash flows. But you still have to posit that if there are trillions, perhaps many more, people in the future, you need to value that very highly. Even if you discount it a lot, it ends up being valued very highly. So a lot of these communities end up all focusing on AI safety, because they're rational, and they arrived, we can talk about their arguments in a second, they arrived at the conclusion that AI is going to kill everyone.

[00:37:24]

Therefore, effective altruists and the rationalist community, all these branches, they're all kind of focused on AI safety, because that's the most important thing, because we want a trillion people in the future to be great. But when you're assigning value that high, it's sort of a form of Pascal's wager. You can justify anything, including terrorism, including doing really bad things, if you're really convinced that AI is going to kill everyone and the future holds so much value, more value than any living human today has. You might justify really doing anything. And so built into that, it's a...

[00:38:15]

Dangerous framework, but it's the same framework of every genocidal movement from at least the French Revolution to the present: a glorious future justifies a bloody present.
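A rough sketch of the expected-value arithmetic being described, with illustrative numbers that are not from the conversation:

```latex
% Longtermist valuation, discounted-cash-flow style (illustrative only):
% N future minds, utility u each, discount factor \delta per century,
% horizon of t centuries.
V = N \cdot u \cdot \delta^{t}
% Even harsh discounting leaves an enormous number, e.g.
% N = 10^{12},\ u = 1,\ \delta = 0.5,\ t = 20
% \Rightarrow V \approx 10^{12} \cdot 2^{-20} \approx 10^{6},
% which still swamps near-term considerations in this calculus.
```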

[00:38:28]

Yes.

[00:38:30]

And look, I'm not accusing them of genocidal intent, by the way. I don't know them, but those ideas lead very quickly to the camps.

[00:38:37]

I feel kind of weird just talking about people, because generally I like to talk about ideas. But if they were just a silly Berkeley cult or whatever, and they didn't have any real impact on the world, I wouldn't care about them. But what's happening is that they were able to convince a lot of billionaires of these ideas. I think Elon maybe changed his mind, but at some point he was convinced of them. I don't know if he gave them money; I think there was a story at some point in the Wall Street Journal that he was thinking about it. But a lot of other billionaires gave them money, and now they're organized, and they're in DC lobbying for AI regulation. They're behind the AI regulation in California, and they're actually profiting from it. There was a story in Pirate Wires where the main sponsor behind SB 1047, Dan Hendrycks, started a company at the same time that certifies the safety of AI. And as part of the bill, it says that you have to get certified by a third party. So there are aspects of it that are kind of, let's profit from it.

[00:39:45]

By the way, this is all alleged, based on this article; I don't know for sure. I think Senator Scott Wiener was trying to do the right thing with the bill, but he was listening to a lot of these cult members, let's call them, and they're very well organized. And a lot of them still have connections to the big AI labs, and some of them work there, and they would want to create a situation where there's no competition in AI, regulatory capture, per se. I'm not saying that these are the direct motivations; a lot of them are true believers. But you might infiltrate this group and direct it in a way that benefits these corporations.

[00:40:32]

Yeah, well, I'm from DC, so I've seen a lot of instances where my bank account aligns with my beliefs. Thank heaven. It just kind of happens. It winds up that way. It's funny. Climate is the perfect example. There's never one climate solution that makes the person who proposes it poorer or less powerful.

[00:40:51]

Exactly.

[00:40:51]

Ever. Not one. We've told you before about Hallow. It is a great app that I am proud to say I use and my whole family uses. It's for daily prayer and Christian meditation, and it's transformative. As we head into the start of school and the height of election season, you need it. Trust me, we all do. Things are going to get crazier and crazier and crazier. Sometimes it's hard to imagine even what is coming next. So with everything happening in the world right now, it is essential to ground yourself. This is not some quack cure. This is the oldest and most reliable cure in history. It's prayer. Ground yourself in prayer and scripture every single day. That is a prerequisite for staying sane and healthy, and maybe for doing better eternally. So if you're busy on the road, headed to kids' sports, there is always time to pray and reflect, alone or as a family. But it's hard to be organized about it. Building a foundation of prayer is going to be absolutely critical as we head into November, praying that God's will is done in this country and that peace and healing come to us here in the United States and around the world.

[00:41:58]

Christianity obviously is under attack everywhere. That's not an accident. Why is Christianity, the most peaceful of all religions, under attack globally? Did you see the opening of the Paris Olympics? There's a reason: because the battle is not temporal. It's taking place in the unseen world. It's a spiritual battle, obviously. So try Hallow. Get three months completely free at hallow.com/tucker. If there's ever a time to get spiritually in tune and ground yourself in prayer, it's now. Hallow will help. I personally and strongly and totally sincerely recommend it. Hallow.com/tucker. I wonder about the core assumption, which I've had up until right now, that these machines are capable of thinking. Is that true?

[00:43:05]

So let's go through their chain of reasoning. I think the fact that it's a stupid cult-like thing, or perhaps actually a cult, does not automatically mean that their arguments are fully wrong. That's exactly right. I think it does mean you do have to kind of discount some of the arguments, because they come from crazy people. But the chain of reasoning is: humans are general intelligence. We have these things called brains. Brains are computers; they're based on purely physical phenomena, and they're computing. And if you agree that humans are computing, then we can build a general intelligence in a machine. And if you agree up to this point, if you're able to build a general intelligence in a machine, even if only at a human level, then you can create a billion copies of it, and then it becomes a lot more powerful than any one of us. And because it's a lot more powerful than any one of us, it would want to control us, or it would not care about us, because it's more powerful. Kind of like we don't care about ants. We'll step on ants, no problem. Because these machines are so powerful, they're not going to care about us.

[00:44:25]

And I sort of get off the train at the first link in that chain of reasoning. But every one of those steps I have problems with. The first step is that the mind is a computer. Based on what? And the idea is, oh, well, if you don't believe that the mind is a computer, then you believe in some kind of woo spiritual thing. Well, you have to convince me. You haven't presented an argument.

[00:44:56]

But the idea that, speaking of rational, this is what reason looks like.

[00:45:03]

The idea that we have a complete description of the universe is wrong anyway, right? We don't have a universal physics. We have physics of the small things, and we have physics of the big things, and we can't really cohere them or combine them. So just the idea of being a materialist is sort of incoherent, because we don't have a complete description of the world. That's one thing. That's a slight argument; I'm not gonna...

[00:45:24]

No, no. It's a very interesting argument, though. So you're saying as someone who, I mean, you're effectively a scientist. Just state for viewers who don't follow this stuff, the limits of our knowledge of physics.

[00:45:38]

Yeah. So we have essentially two conflicting theories of physics, quantum mechanics for the very small and general relativity for the very large. These systems can't be married; they're not one universal system. You can't use them both at the same time.

[00:45:51]

Well, that suggests a profound limit to our understanding of what's happening around us in the natural world. Does it?

[00:45:59]

Yes, it does. And I think this is, again, another error of the rationalist types: they just assume that we are so much more advanced in our science than we actually are.

[00:46:09]

So it sounds like they don't know that much about science.

[00:46:12]

Yes.

[00:46:14]

Okay, thank you. Thank you. I'm sorry to ask you to pause.

[00:46:16]

Yeah, that's not even the main crux of my argument. There's a philosopher, mathematician, scientist, wonderful, his name is Sir Roger Penrose. I love how the British give the "Sir" title when someone is accomplished. He wrote this book called The Emperor's New Mind, and it's a play on the emperor's new clothes, the idea that the emperor is kind of naked. And in his opinion, the argument that the mind is a computer is a sort of consensus argument that is wrong.

[00:46:55]

The emperor's naked. Actually, it's not really an argument. It's an assertion.

[00:46:57]

Yes, it's an assertion that is fundamentally wrong. And the way he proves it is very interesting. In mathematics, there's something called Gödel's incompleteness theorem. And what it says is that there are statements that are true that can't be proved in mathematics. So Gödel constructs a number system in which he can start to make statements about the number system itself. He creates a statement that says: this statement is unprovable in system F, where F is the whole system. Well, if you could prove it, then the statement would become false. But you know it's true, because it's unprovable in the system. And Roger Penrose says: because we have this knowledge that it is true just by looking at it, even though we can't prove it, I mean, the whole feature of the sentence is that it is unprovable, therefore our knowledge is outside of any formal system. Therefore the human mind is understanding something that mathematics is not able to give it...

[00:48:17]

To us, to describe.

[00:48:19]

To describe. And I was struck the first time I read it. I've read a lot of these things.
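For reference, here is the standard textbook form of the sentence being described, in conventional notation rather than the speakers' words:

```latex
% Gödel sentence G for a formal system F: G asserts its own
% unprovability in F, via F's provability predicate Prov_F.
G \;\longleftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G \urcorner\right)
% If F is consistent, F proves neither G nor its negation, yet G is
% true: the "true but unprovable in the system" statement above.
```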

[00:48:25]

What's the famous one you were telling me last night? I'd never heard it. The Bertrand Russell self-canceling assertion.

[00:48:31]

Yeah. It's like, this statement is false. It's called the liar paradox.

[00:48:37]

Explain why. That's just. That's gonna float in my head forever. Why is that a paradox?

[00:48:41]

So, this statement is false. If you look at the statement and agree with it, then it becomes true. But if it's true, then what it asserts holds, so it's false. And you go through the circular thing and you never stop. It broke logic, in a way, yes. And Bertrand Russell spent a big part of his life writing this book, Principia Mathematica, and he wanted to prove that mathematics is complete, consistent, decidable, computable, all of that. And then all these things happened: Gödel's incompleteness theorem; Turing, the inventor of the computer. Actually, this is the most ironic piece of science history that nobody ever talks about: Turing invented the computer to show its limitations. So he invented the Turing machine, which is the idealized representation of the computers we have today. All computers are Turing machines. And he showed that you can't build a machine that, given a set of instructions, can tell whether those instructions will run and complete to a stop, or will continue running forever. It's called the halting problem. And it proves that mathematics has undecidability; it's not fully decidable or computable. So all of these things were happening as Russell was writing the book, and it was really depressing for him, because he had set out to prove that mathematics is complete and all of that.
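A minimal sketch of the diagonal argument behind the halting problem, in Python; the function names are illustrative, and the oracle is assumed only so the contradiction can be derived:

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical oracle,
# assumed to exist so the contradiction can be derived; no correct
# implementation can exist, which is exactly the halting problem.

def halts(program, argument):
    """Pretend oracle: True iff program(argument) eventually stops."""
    raise NotImplementedError("Turing proved this cannot be implemented.")

def contrarian(program):
    # Do the opposite of whatever the oracle predicts about `program`
    # when run on its own source.
    if halts(program, program):
        while True:  # oracle says "halts" -> loop forever
            pass
    return None      # oracle says "loops" -> halt immediately

# Feeding `contrarian` to itself is the contradiction:
# if halts(contrarian, contrarian) is True, contrarian loops forever;
# if False, it halts. Either way the oracle is wrong, so it cannot exist.
```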

[00:50:18]

And this caused kind of a major panic at the time among mathematicians and all of that. It's like, oh, my God, our systems are not complete.

[00:50:29]

So it sounds like the deeper you go into science and the more honest you are about what you discovered, the more questions you have, which kind of gets you back to where you should be in the first place, which is in a posture of humility.

[00:50:42]

Yes.

[00:50:43]

And yet I see science used, certainly in the political sphere. I mean, those are all dumb people, so it's like, who cares? Kamala Harris lecturing about science, I don't even hear it. But also, some smart people say, believe the science. The assumption behind that demand is that it's complete and it's knowable and we know it. And if you're ignoring it, then you're ignorant, willfully or otherwise. Right?

[00:51:05]

Well, my view of science is that it's a method, ultimately. It's a method anyone can apply. It's democratic, it's decentralized. Anyone can apply the scientific method, including people who are not trained.

[00:51:15]

But in order to practice the method, you have to come from a position of humility that I don't know. I'm using this method to find out, and I cannot lie about what I observe. Right?

[00:51:24]

That's right. And today, capital-S Science is used to control and used to propagandize and...

[00:51:34]

Lie, of course, but in the hands of just really people who shouldn't have power, just dumb people with pretty ugly agendas. But we're talking about the world that you live in, which is unusually smart people who do this stuff for a living and are really trying to advance the ball in science. And I think what you're saying is that some of them, knowingly or not, just don't appreciate how little they know.

[00:51:59]

Yeah. And they go through this chain of reasoning for this argument, and none of those steps is even minimally complete; they just take it for granted. If you even doubt that the mind is a computer, I'm sure a lot of people will call me a heretic and will call me all sorts of names, because it's just dogma.

[00:52:25]

That the mind is a computer, that.

[00:52:27]

The mind is a computer. It's dogma in technology, science.

[00:52:30]

That's so silly.

[00:52:32]

Yes.

[00:52:34]

Well, I mean, let me count the ways the mind is different from a computer. First of all, you're not assured of a faithful representation of the past. Memories change over time, right? In a way that's misleading, and who knows why, but that is a fact, right? That's not true of computers, I don't think. But how are we explaining things like intuition and instinct? Those are not... well, that is actually my question. Could those ever be features of a machine?

[00:53:03]

You could argue that neural networks are sort of intuition machines, and that's what a lot of people say. But neural networks, and maybe I'll describe them just for the audience: neural networks are inspired by the brain. And the idea is that you can connect a network of small, little functions, just mathematical functions, and you can train it by giving it examples. Give it a picture of a cat, and let's say this network has to say yes if it's a cat and no if it's not a cat. So you give it a picture of a cat, and if the answer is no, then it's wrong. You adjust the weights based on the difference between the prediction and the right answer, and you do this, I don't know, a billion times. And then the network encodes features of a cat. And this is literally how neural networks work: you tune all these small parameters until there's some embedded feature detection, especially in classifiers. And this is not intuition. This is basically automatic programming. The way I see it, of course, we can write code manually. You can go to our website and write code.

[00:54:32]

But we can also generate algorithms automatically via machine learning. Machine learning essentially discovers these algorithms, and sometimes it discovers very crappy algorithms. For example, say all the pictures of a cat that we gave it had grass in them. It would learn that grass equals cat, that the color green equals cat.

[00:54:58]

Yes.

[00:54:58]

And then one day you give it a picture of a cat without grass, and it fails. And, like, what happened? It turns out it learned the wrong thing. So because what it's actually learning is obscure, people interpret that as intuition, because the algorithms are not explicated. And there's a lot of work now on trying to explicate these algorithms, which is great work, by companies like Anthropic. But I don't think you can call it intuition just because it's obscure.
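To make the weight-adjustment loop and the grass-equals-cat failure concrete, here is a toy sketch in Python. The single-neuron model, the feature names, and the data are illustrative assumptions for this example, not anything from the conversation:

```python
import random

# Toy single-neuron classifier: learns "cat vs. dog" from two made-up
# features [greenness of background, furriness]. Every cat photo in the
# training set happens to be on grass, so the only separating signal is
# greenness, which is the shortcut described above.

def predict(weights, features):
    s = sum(w * x for w, x in zip(weights, features))
    return 1 if s > 0 else 0  # 1 = "cat", 0 = "dog"

data = [([0.9, 1.0], 1), ([0.8, 1.0], 1),  # cats, always photographed on grass
        ([0.1, 1.0], 0), ([0.2, 1.0], 0)]  # dogs, indoors; both classes furry

weights = [random.uniform(-0.1, 0.1) for _ in range(2)]
for _ in range(10_000):                    # "you do this a billion times"
    features, label = random.choice(data)
    error = label - predict(weights, features)
    # Adjust the weights by the difference between prediction and answer.
    weights = [w + 0.1 * error * x for w, x in zip(weights, features)]

# An indoor cat (no grass) now tends to be misclassified as a dog:
print(predict(weights, [0.0, 1.0]))  # the model learned grass, not cat
```

The point is the last line: the learned weights encode the green background rather than the cat, which only becomes visible once a test example falls outside the training distribution.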

[00:55:35]

So what is it? How is intuition different? Human intuition.

[00:55:43]

For one, we don't require a trillion examples of a cat to learn a cat.

[00:55:49]

Good point.

[00:55:52]

A kid can learn language from very few examples. Right now, when we're training these large language models like ChatGPT, you have to give them the entire Internet for them to learn language. And that's not really how humans work. The way we learn is we combine intuition and some more explicit way of learning, and I don't think we've figured out how to do that with machines just yet.

[00:56:21]

Do you think that structurally it's possible for machines to get there?

[00:56:31]

So, this chain of reasoning: I can go through every point and present arguments to the contrary, or at least present doubt, but no one is really trying to deal with those doubts. And I'm not holding these doubts very strongly, but my view is that we just don't have a complete understanding of the mind, and you at least can't use it to argue that a machine that acts like a human, but much more powerful, can kill us all. But do I think that AI can get really powerful? Yes, I think AI can get really powerful, can get really useful. I think functionally it can feel like it's general. AI is ultimately a function of data, the kind of data that we put into it. The functionality is based on this data, so we get very little functionality outside of that. Actually, we don't get any functionality outside of that data. It's actually been proven that these machines are just a function of their data.

[00:57:40]

The sum total of what you put in.

[00:57:42]

Exactly. Garbage in, garbage out. The cool thing about them is they can mix and match the different functionalities that they learn from the data, so it looks a little more general. But let's say we collected all the data in the world, everything that we care about, and we somehow fit it into a machine, and now everyone's building these really large data centers. You will get a very highly capable machine that will kind of look general, because we collected a lot of economically useful data, and it will start doing economically useful tasks, and from our perspective it will start to look general. So I'll call it functionally AGI. I don't doubt we're headed in some direction like that. But we haven't figured out how these machines can actually generalize and learn, and use things like intuition, so that when they see something fundamentally new, outside of their data distribution, they can react to it correctly and learn it efficiently. We don't have the science for that.

[00:58:48]

Because we don't have the understanding of it at the most fundamental level. You began that explanation by saying we don't really understand the human brain. So how can we compare it to something when we don't even really know what it is?

[00:58:59]

There's a machine learning scientist, François Chollet, I don't know how to pronounce French names, but I think that's it. He created an IQ-like test where you're rotating shapes and whatever, and an entrepreneur put up a million dollars for anyone who's able to solve it using AI. All the modern AIs that we think are super powerful couldn't do something that a ten-year-old kid could do. And it showed that, again, those machines are just functions of their data. The moment you throw a problem at them that's novel, they really are not able to do it. Now, again, I'm not fundamentally discounting the possibility that we'll get there, but just the reality of where we are today: you can't argue that we're just going to put more compute and more data into this and suddenly it becomes God and kills us all, because that's the argument. And they're going to DC and they're going to all these places, they're spinning up regulation. This regulation is going to hurt American industry, it's going to hurt startups, it's going to make it hard to compete, it's going to give China a tremendous advantage, and it's going to really hurt us.

[01:00:10]

Based on these flawed arguments. They're not actually grappling with these real questions.

[01:00:16]

It sounds like they're not. And what gives me pause is not so much the technology; it's the way that the people creating the technology understand people. So I think the wise and correct way to understand people is as not-self-created beings. People did not create themselves; people cannot create life. They are beings created by some higher power who at their core have some kind of impossible-to-describe spark, a holy mystery. And for that reason, they cannot be enslaved or killed by other human beings. That's wrong. There is right and wrong, and that is wrong, because they're not self-created. I mean, there are lots of gray areas, but that's not a gray area.

[01:00:55]

Yes.

[01:00:56]

Right. I think that all humane action flows from that belief, and that the most inhumane actions in history flow from the opposite belief, which is that people are just objects that can and should be improved, and I have full power over them. Like, that's a totalitarian mindset. And the one thing that connects every genocidal movement is that belief. So it seems to me, as an outsider, that the people creating this technology have that belief.

[01:01:22]

Yeah. And you don't even have to be spiritual to have that belief. Look, I.

[01:01:28]

You certainly don't. Yeah. Yeah. So I think that's actually a rational conclusion based on.

[01:01:32]

I 100% agree. I'll give you one interesting anecdote, again from science. If you believe in evolution and all that, we've had brains for half a billion years, and we've had a human-like species for half a million years, perhaps more, perhaps a million. There's a moment in time 40,000 years ago, it's called the Great Leap Forward, where we see culture, we see religion, we see drawings. We saw very little of that before, tools and whatever, and suddenly we're seeing this Cambrian explosion of culture.

[01:02:20]

Pointing to something larger than just daily needs or the world around them, but.

[01:02:25]

But we're still not able to explain it. David Reich wrote this book, I think it's called Who We Are and How We Got Here. In it, he talks about trying to look for the genetic mutation that happened, that potentially created this explosion. They have some ideas about what it could be, some candidates, but they don't really have it right now. But you have to ask the question, what happened 30 or 40,000 years ago?

[01:02:52]

Where it's clear, I mean, it's indisputable that the people who lived during that period were suddenly grappling with metaphysics. Yes, they're worshiping things.

[01:03:03]

There's a clear separation between, again, the animal brain and the human brain, and it's clearly not computation. We didn't suddenly grow a computer in the brain. Something else happened.

[01:03:18]

But what's so interesting is the instinct of modern man is to look for something inside the person that caused that, whereas I think the very natural and more correct instinct is to look for something outside of man that caused that.

[01:03:29]

I'm open to both.

[01:03:30]

Yeah. I mean, I don't know the answer. I mean, of course I do know the answer, but I'll just pretend I don't. But at the very least, both are possible. So if you confine yourself to looking for a genetic mutation, a genetic change, then you're sort of closing off possibilities. That's not an empirical, scientific way of looking at things, actually. You don't foreclose any possibility. Right. In science, you can't.

[01:03:55]

Right.

[01:03:56]

Sorry.

[01:03:56]

Yeah, that's very interesting. So, you know, I think these machines. I'm betting my business on AI getting better and better and better, and it's gonna make us all better. It's gonna make us all more educated.

[01:04:14]

Okay. So now's the time for you to tell me why I should be excited about this thing I've been hearing about.

[01:04:21]

Yeah. So this technology, large language models, where we kind of fed a neural network the entire Internet, has capabilities mostly around writing, around information lookup, around summarization, around coding. It does a lot of really useful things, and you can program it to pick and match between these different skills. You can program these skills using code. And so the kinds of products and services that you can build with it are amazing. One of the things I'm most excited about as an application of the technology, there's this problem called Bloom's two sigma problem. There's this scientist, Benjamin Bloom, who was studying education, and he was looking at different interventions to try to get kids to learn better, faster, or just have better educational outcomes. And he found something kind of bad, which is that there's only one thing you could do to move kids not in a marginal way, but two standard deviations from the norm, like, in a big way, like, better than 98% of the other kids: one-on-one tutoring using a type of learning called mastery learning. One-on-one tutoring is the key formula there. That's great. I mean, we discovered the solution to education.
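
[For reference, the arithmetic behind the "better than 98%" figure, assuming roughly normally distributed outcomes; this gloss is not from Bloom's paper. A student moved up two standard deviations lands at about the 97.7th percentile:]

$$\Phi(2) = P(Z \le 2) \approx 0.9772$$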

[01:06:00]

We can uplevel everyone, all humans on earth. The problem is, like, we don't have enough teachers to do one-on-one tutoring. It's very expensive. No country in the world can afford that. So now we have these machines that can talk, that can teach. They can present information in a very human way. You can talk to it, and it can talk back to you. We can build AI applications to teach people one on one, and you can serve 7 billion people with that, and everyone can get smarter.

[01:06:43]

I'm totally for that. I mean, that was the promise of the Internet. Didn't happen. So I hope this one does. I was gonna save this for last, but I can't control myself. I just know, being from DC, that when the people in charge see new technology, the first thing they think of is, like, how can I use this to kill people? So what are the military applications, potentially, of this technology?

[01:07:07]

That's one of the other things that I'm very skeptical of with this lobbying effort to get government to regulate it, because I think the biggest offender in abusing this technology would probably be government. I watched your interview with Jeffrey Sachs, who's a Columbia professor, very, very mainstream. And I think he got assigned to a Lancet study of COVID origins or whatever. And he arrived at the, at the time, very heterodox view that it was created in a lab and was created by the US government. And so the government is supposed to protect us from these things, and now they're talking about pandemic readiness and whatever. Well, let's talk about how we watch what the government's doing. How do we actually have democratic processes to ensure that you're not the one abusing these technologies? Because they're going to regulate it. They're going to make it so that everyday people are not going to be able to use these things, and then they're going to have free rein on how to abuse them.

[01:08:16]

Just like with encryption.

[01:08:18]

Right? Encryption is another one. That's right.

[01:08:20]

But they've been doing that for decades. Yes, like, we get privacy, but you're not allowed it because we don't trust you. But by using your money and the moral authority that you gave us to lead you, we're going to hide from you everything we're doing, and there's nothing you can do about it. I mean, that's the state of America right now. So how would they use AI to further oppress us?

[01:08:42]

I mean, you can use it in all sorts of ways, like autonomous drones. We already have autonomous drones; it can get a lot worse. You know, there was a video on the Internet where, like, a Chinese guard or whatever was walking with a robotic dog, and the robotic dog had a gun mounted on it. And so you can have a set of robotic dogs with guns, a little sci-fi.

[01:09:10]

As a dog lover, that's so offensive to me.

[01:09:11]

It is kind of offensive, yeah.

[01:09:13]

In a world increasingly defined by deception and the total rejection of human dignity, we decided to found the Tucker Carlson Network, and we did it with one principle in mind: tell the truth. You have a God-given right to think for yourself. Our work is made possible by our members. So if you want to enjoy an ad-free experience and keep this going, join TCN at TuckerCarlson.com/podcast.

[01:09:56]

There was this huge exposé in this magazine called +972 about how Israel was using AI to target suspects but ended up killing huge numbers of civilians. The system is called Lavender. A very interesting piece.

[01:10:13]

So the technology wound up killing people who were not even targeted. Yes, it's pretty dark. What about surveillance?

[01:10:27]

I think this recent AI boom, I think it could be used for surveillance. I'm not sure if it gives a special advantage. I think they can get the advantage by, again, if these lobbying groups are successful, part of their ideal outcome is to make sure that no one is training large language models. To do that, you would need to insert a surveillance apparatus at the compute level. And so perhaps that's very dangerous: our computers would spy on us to make sure we're not training AIs. I think the kind of AI that's really good at surveillance is vision AI, which China perfected, so that's been around for a while now. I'm sure there are ways to abuse language models for surveillance, but I can't think of one right now.

[01:11:24]

What about manufacturing?

[01:11:28]

It would help with manufacturing. Right now, people are figuring out, and I invested in a couple of companies doing this, how to apply this technology, foundation models, to robotics. It's still early science, but you might have a huge advancement in robotics if we're able to apply this technology to it.

[01:11:49]

So the whole point of technology is to replace human labor, either physical or mental, I think. I mean, historically, that's what it's done: the steam engine replaced the arm, et cetera, et cetera. So if this is as transformative as it appears to be, you're going to have a lot of idle people. And that's, I think, the concern that led a lot of your friends and colleagues to support UBI, universal basic income. Like, there's nothing for these people to do, so we just got to pay them to exist. You said you're opposed to that. I'm adamantly opposed to that. On the other hand, what's the answer?

[01:12:24]

Yeah, so there's two ways to look at it. We can look at the individuals that are losing their jobs, which is tough and hard. I don't really have a good answer there. But we can look at it from a macro perspective, and when you look at it from that perspective, for the most part, technology created more jobs over time. Before alarm clocks, we had this job called the knocker-upper, someone who comes to your home. You pay them and they come every day at 5:00 a.m. and they knock on your window.

[01:12:54]

Or ring the village bell, right.

[01:12:57]

And that job disappeared, but we had ten times more jobs in manufacturing, or perhaps 100 or 1,000 times more. And so overall, I think the general trend is technology just creates more jobs. So I'll give you a few examples of how AI can create more jobs. Actually, it can create more interesting jobs. Entrepreneurship, it's like a very American thing, right? America is the entrepreneurship country. But actually, new firm creation has been going down for a long time, at least 100 years. It's just been going down. Although we have all this excitement around startups or whatever, Silicon Valley is the only place that's still producing startups. In the rest of the country, there isn't as much startup or new firm creation, which is kind of sad, because, again, the Internet was supposed to be this great wealth creation engine that anyone has access to. But the way it turned out, it was concentrated in this one geographic area.

[01:13:58]

Well, I mean, in retrospect, it looks like a monopoly generator, actually.

[01:14:02]

Yeah. But again, it doesn't have to be that way. And the way I think AI would help is that it will give people the tools to start businesses, because you have this easily programmable machine that can help you with programming. I'll give you a few examples. There's a teacher in Denver who during COVID was a little bored, went to our website, we have a free course to learn how to code, and learned a bit of coding. He used his knowledge as a teacher to build an application that helps teachers use AI to teach. And within a year, he built a business that's worth tens of millions of dollars, that's bringing in a huge amount of money. I think he raised $20 million. And that's a teacher who learned how to code and created this massive business really quickly. We have stories of photographers doing millions of dollars in revenue. AI will decentralize access to this technology. There's a lot of ways in which you're right, technology tends to centralize, but there's a lot of ways that people don't really look at in which technology can decentralize.

[01:15:14]

Well, that promise makes sense to me. I just fervently want it to become a reality. We have a mutual friend, who will remain nameless, so smart, and a good, humane person who's very way up into the subject and participates in it. And he said to me, well, one of the promises of AI is that it will allow people to have virtual friends or mates, that it will solve the loneliness problem that is clearly a massive problem in the United States. And I felt like, I don't want to say it, because I like him so much, but that seemed really bad to me.

[01:15:52]

Yeah, I'm not interested in those. I think we have the same intuition about what's, what's dark and dystopian versus what's.

[01:16:02]

He's a wonderful person. I just don't think he's thought about it, or I don't know what. But we disagree. Or I don't even disagree. I don't have an argument, just an instinct. But, like, people should be having sex with people, not machines, right?

[01:16:14]

That's right. That's right. I would go so far as to say some of these applications are a little unethical, like, you know, preying on lonely men with no opportunities for a mate. And, you know, it will make it so that they're actually not motivated to go out and date and get an actual girlfriend.

[01:16:38]

Like porn ten x.

[01:16:39]

Yes, yes. And I think that's really bad. That's really bad for society. And so, look, you can apply this technology in a positive way or you can apply it in a negative way. You know, I would love it if this doom cult were instead trying to make it so that AI is applied in a positive way. If we had a cult that was like, oh, we're going to lobby, we're going to go out and make it so that AI is a positive technology, I'd be all for that. And by the way, there are times in history where the culture self-corrects. I think there's some self-correction on porn that's happening right now. Fast food, right? Just generally junk. Everyone is like, whole foods are high status now. Like, you eat whole foods. There's even a place called Whole Foods you can go to.

[01:17:35]

That's right.

[01:17:35]

And people are interested in eating healthy.

[01:17:38]

And chemicals in the air and water. Another thing that was a very esoteric concern even ten years ago, only the wackos cared. It was Bobby Kennedy who cared about that; no one else did. Now that's, like, a feature of normal conversation.

[01:17:48]

Yes. Everyone's worried about microplastics in the testicles.

[01:17:52]

That's right. Which is, I think, a legitimate concern.

[01:17:54]

Absolutely.

[01:17:55]

So I'm not surprised that there are cults in Silicon Valley. I don't think you named the only one; I think there are others. That's my sense. And I'm not surprised, because, of course, every person is born with the intuitive knowledge that there's a power beyond himself. That's why every single civilization has worshiped something. And if you don't acknowledge that, it doesn't change. You just worship something even dumber. So my question to you, as someone who lives and works there, is what percentage of the people who are making decisions in Silicon Valley will say out loud, not, I'm a Christian, Jew, or Muslim, but just, there is a power bigger than me in the universe? Do people think that? Do they acknowledge that?

[01:18:34]

For the most part, no. Yeah, like, I think most. I don't. I don't want to say most people, but, like, the vast majority of the discussions tend to be more intellectual. I think people just take for granted that everyone has a secular, mostly secular point of view.

[01:18:51]

Well, I think that the truly brilliant conclusion is that we don't know a lot and we don't have a ton of power. That's my view. Right. So the actual intellectual, over time, if he's honest, will reach.

[01:19:04]

This is the view of many scientists and many people who really went deep. I mean, I don't know who said it, I'm trying to remember, but someone said the first gulp of science makes you an atheist, but at the bottom of the cup, you'll find God waiting for you.

[01:19:19]

Mattias Desmet wrote a book about this, supposedly about COVID. It was not about COVID. I just cannot recommend it more strongly. But the book is about the point you just made, which is that the deeper you go into science, the more you see some sort of order reflected that is not random at all, and a beauty exhibited even in math. And the more you learn, the more you're certain that there's a design here, and that it's not human or, quote, natural, it's supernatural. That's his conclusion, and I affirm it. But how many people do you know in your science world who think that?

[01:20:04]

Yeah, I can count them on one hand, basically.

[01:20:07]

How interesting. That concerns me because I feel like without that knowledge, hubris is inevitable.

[01:20:14]

Yeah, and a lot of these conclusions come from hubris. The fact that there are so many people who believe that AI is an imminent existential threat, that we're all going to die in the next five years, comes from that hubris.

[01:20:29]

How interesting. Until I met you, I never thought of that, that that belief is itself an expression of hubris. I never thought of that.

[01:20:40]

Yeah, you can go negative with hubris, or you can go positive. And I think the positive kind is good. Like, I think Elon is an embodiment of that: just a self-belief that you can fly rockets and build electric cars. And maybe in some cases it's delusional, but, like, net net it will kind of put you on a good path for creation. I think it can go pathological if you're, for example, SBF. And again, he was kind of part of those groups, and just sort of believed that he could do anything in service of his ethics, including steal and cheat and all of that.

[01:21:21]

Yeah. I never really understood. Well, of course, I understood too well, I think. But the obvious observable fact that effective altruism led people to become shittier toward each other, not better.

[01:21:38]

Yeah. I mean, it's such an irony, but I feel like it's in the name. If you call yourself such a grandiose thing, you're typically horrible. The Islamic State is neither Islamic nor a state. Effective altruists are neither effective nor altruists.

[01:21:56]

The United Nations is not united. Boy, is that wise. So I don't think, to your earlier point, that any large language model or machine could ever arrive at what you just said, because, like, the deepest level of truth is always wrapped in irony. And machines don't get irony, right?

[01:22:19]

Not yet.

[01:22:20]

Could they?

[01:22:22]

Maybe. I mean, I don't take as strong a stance as you do on, like, the capabilities of the machines. I do believe that, you know, if you represent it.

[01:22:33]

I don't know. I mean, I'm asking. I really don't know what they're capable of.

[01:22:35]

Well, I think maybe they can't come up with real novel irony that is, like, really insightful for us. But if you put a lot of irony in the data, they'll understand, right?

[01:22:46]

They can ape human irony.

[01:22:47]

They can ape. I mean, they're ape machines. They're imitation machines. They're literally imitating. Like, you know, the way large language models are trained is that you give them a corpus of text, and they hide different words and try to guess them, and then they adjust the weights of those neural networks, and eventually they get really good at guessing what humans would say.
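
[Illustration, not from the conversation: a toy word-pair counter showing the shape of that loop, read text, predict what follows, improve from the data. Real models use transformers trained by gradient descent, not counting.]

```python
# Count which word follows which in a corpus, then "guess what humans would say".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": record how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequent next word seen in the data (None if unseen)."""
    options = following.get(word)
    return options.most_common(1)[0][0] if options else None

print(guess_next("sat"))    # 'on'   -- learned from the data
print(guess_next("zebra"))  # None   -- outside its data, it has nothing to say
```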

[01:23:08]

Well, then, okay, so you're just kind of making the point unavoidable. Like, if the machines, as you have said, and it makes sense, are the sum total of what's put into them.

[01:23:16]

Yeah.

[01:23:17]

Then. And that would include the personalities and biases of the people putting the data in.

[01:23:22]

That's right.

[01:23:23]

Then you want, like, the best people, the morally best people, which is to say the most humble people to be doing that. But it sounds like we have the least humble people doing that.

[01:23:32]

Yeah, I think some of them are humble. I think some people working in AI are really upstanding and good and want to do the right thing, but there are a lot of people with the wrong motivations, coming at it from fear and things like that. The other point I will make is that free markets are good, because you're going to get all sorts of entrepreneurs with different motivations. And I think what determines the winner is not always the ethics or whatever, but the larger culture: what kind of product is it pulling out of you? Is it pulling out the porn and the companion chatbots, whatever, or is it pulling out the education and the healthcare and, I think, all the positive things that will make our life better? I think that's really on the larger culture. I don't think we can regulate that with government or whatever, but if the culture creates demand for things that just make us worse as humans, then there are entrepreneurs that will spring up and serve that.

[01:24:41]

That's totally right. And it is a snake eating its tail at some point. Because, of course, you serve the baser human desires and you create a culture that inspires those desires in a greater number of people. In other words, the more porn you have, the more porn people want. Actually, yes. I wonder about the pushback from existing industry, from the guilds. So, like, if you're the AMA, for example, you mentioned medical advances. That's something that makes sense to me for diagnoses, which really is just a matter of sorting the data. Like, what's most likely.

[01:25:29]

That's right.

[01:25:29]

And a machine can always do that more efficiently and more quickly than any hospital or individual doctor. And diagnosis is like the biggest hurdle.

[01:25:39]

Yes.

[01:25:42]

That's going to actually put people out of business, right? If I can just type my symptoms into a machine and get a much higher likelihood of a correct diagnosis than I would after three days at the Mayo Clinic, who needs the Mayo Clinic?

[01:25:55]

I actually have a concrete story about that. I've dealt with a chronic issue for a couple of years. I spent hundreds of thousands of dollars on doctors out of pocket, got the world's experts and all that.

[01:26:07]

Hundreds of thousands of dollars, yes.

[01:26:09]

And they couldn't come up with the right diagnosis. And eventually it took me, like, writing a little bit of software to collect the data or whatever, but I ran the AI once, and it gave me a diagnosis they hadn't looked at. And I went to them, and they were very skeptical of it. And then we ran the test. Turns out it was the right diagnosis.
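
[He doesn't say what tooling he used; the sketch below is one hypothetical shape of that workflow, assuming the openai Python SDK and an invented symptoms.csv log. Not medical advice.]

```python
# Collect a personal symptom log, then ask a model for differentials to
# discuss with a doctor. 'symptoms.csv' is a hypothetical file with columns
# like: date, symptom, severity.
import csv
from openai import OpenAI  # assumes the openai Python SDK is installed

with open("symptoms.csv") as f:
    log = "\n".join(", ".join(row) for row in csv.reader(f))

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Here is a dated symptom log:\n" + log +
                   "\nList plausible diagnoses to raise with my doctor.",
    }],
)
print(response.choices[0].message.content)
```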

[01:26:27]

Oh, that's incredible.

[01:26:29]

Yeah, it's amazing. It changed my life.

[01:26:31]

That's incredible. But you had to write the software.

[01:26:33]

To get there, a little bit of software.

[01:26:35]

So we're just not that far from having that publicly available, right?

[01:26:40]

And by the way, I think that anyone can write a little bit of software right now. At Replit, we are working on a way to generate most of the code for you. We have this program called 100 Days of Code. If you give it 20 minutes, do a little bit of coding every day, in, like, three months you'll be a good enough coder to build a startup. I mean, eventually you will get people working for you and you'll scale up and all of that, but you'll have enough skills. And in fact, I'll put a challenge out there: people listening to this, if they go through this and they build something that they think could be a business, whatever, I'm willing to help them get it out there and promote it. We'll give them some credits and cloud services, whatever. Just tweet at me or something and mention this podcast. What's your Twitter? Amasad, A-M-A-S-A-D.

[01:27:30]

So. But there are a lot of entrenched interests. I mean, I don't want to get into the whole COVID poison thing, but I'm revealing my biases. I mean, you saw it in action during COVID, where it's always a mixture of motives. I do think there are high motives mixed with low motives, because that's how people are. It's always a bouillabaisse of good and bad. But to some extent, the profit motive prevailed over public health. That is, I think, fair to say, yes. And so if they're willing to hurt people to keep the stock price up, what's the resistance you're going to get to allowing people to come to a more accurate diagnosis with a machine for free?

[01:28:14]

Yeah. So in some sense, that's why I think open source AI, people learning how to do some of this stuff themselves, is probably good enough. Of course, if there's a company that's building these services, it's going to do better. But just the fact that this AI exists, and a lot of it is open source, you can download it on your machine and use it, is enough to potentially help a lot of people. By the way, you should always talk to your doctor. I talk to my doctor. I'm not giving people advice to figure out all this themselves. But I do think that it's already empowering. So that's sort of step one.

[01:28:53]

But for someone like me, I'm not going to talk to a doctor until he apologizes to my face for lying for four years because I have no respect for doctors at all. I have no respect for anybody who lies, period. And I'm not taking life advice, and particularly important life advice, like, about my health, from someone who's a liar. I'm just not doing that because I'm not insane. I don't take real estate advice from homeless people. I don't take financial advice from people who are going to jail for fraud. So, like, I'm sure there's a doctor out there who would apologize, but I haven't met one yet. So for someone like me who's just, I'm not going to a doctor until they apologize, this could be, like, literally life saving.

[01:29:31]

So, to the question of whether there's going to be regulatory capture, I mean, that's why you see Silicon Valley getting into politics.

[01:29:44]

Hmm.

[01:29:45]

You know, Silicon Valley was always sort of into politics. I remember I came in 2012, it was early on in my time, and it was the Romney-Obama debate. And I was. Can I just pause?

[01:30:03]

Imagine a debate between Romney and Obama who agree on everything.

[01:30:08]

Yes. I didn't see a lot of daylight, and people were just making fun of Romney. He said something like, binders full of women, and that stuck with him, whatever. And I remember asking everyone around me, who are you with? And it was like, of course, Democrats. Of course. Anyone here for Republicans? And they're like, oh, only dumb people are Republicans. And Silicon Valley was this one-party town, in a way, actually. Look, there's data on donations by company. Netflix is 99% to Democrats and 1% to Republicans. If you look up the diversity of parties in North Korea, it's actually a little better.

[01:31:02]

Oh, of course it is. More choices. They have a more honest media, too.

[01:31:06]

But anyways, you see now a lot of people are surprised that a lot of people in tech are going for Republicans, going for Trump. In particular, Marc Andreessen and Ben Horowitz put out a two-hour podcast talking about why they are.

[01:31:22]

The biggest venture capitalists in the United States. I think.

[01:31:25]

I don't know on what metric you would judge, but they're certainly on their way to being the biggest. They're, I think, the best, for sure.

[01:31:37]

They put out a. What was their. I should have watched it. I didn't.

[01:31:41]

Yeah. So they laid out the reasoning for why they would vote for Trump. By the way, they would have never done that in, like, 2018 or '19, whatever. It's this vibe shift that's happening.

[01:31:57]

How is it received?

[01:32:00]

It's still mixed, but I think way better than what would have happened ten years ago. They would have been canceled, and no founder would ever take their money.

[01:32:09]

But it's like, I mean, again, I'm an outsider just watching, but Andreessen Horowitz is so big and so influential, and they're considered smart and not at all crazy, that, like, that's got to change minds. If Andreessen Horowitz is doing it.

[01:32:23]

Yeah, it will have certainly changed minds, and I think given people some courage to say I'm for Trump as well, at minimum. But I think what does change minds is they put out this agenda called Little Tech. There's Big Tech, and they have their lobbying and whatever. Who's lobbying for little tech? Smaller companies, companies like ours, but much smaller, too, like one or two person companies. And actually, no one is. Your company would be considered little in Silicon Valley.

[01:32:58]

I want a little company.

[01:32:59]

Right. But really, startups that just started, typically no one is protecting them politically. No one's really thinking about it. And it's very easy to disadvantage startups, like you just talked about with healthcare regulation. Very easy to create regulatory capture such that companies can't even get off the ground doing their thing. And so they came up with this agenda: we're going to be the firm that's going to be looking out for the little guy, little tech, which I think is brilliant. And part of their argument for Trump is AI. For example, the Democrats are really excited about regulating AI. One of the most hilarious things that happened, I think Kamala Harris was invited to an AI safety conference, and they were talking about existential risk. And she was like, well, someone being denied health care, that's existential for them. Someone, whatever, that's existential. So she interpreted existential risk as, like, any risk is existential. And so, yeah, that's just one anecdote. But there was this anecdote where she was like, AI, it's a two-letter word. They clearly don't understand it very well, and they're moving very fast at regulating it.

[01:34:27]

They put out an executive order that a lot of people think.

[01:34:30]

They kind of, I mean, the, the tweaks they've done so far from a user perspective to keep it safe are really just making sure it hates white people. It's about pushing a dystopian, totalitarian, social agenda, racist social agenda on the country. Is that going to be embedded in it permanently?

[01:34:50]

I think it's a function of the culture rather than the regulation. I think the culture was sort of this woke culture, broadly in America, but certainly in Silicon Valley. And now that vibe shift is happening. I think Microsoft just fired their DEI team. Microsoft, really? Yeah. I mean, it is a huge vibe shift.

[01:35:12]

Are they going to learn to code, do you think?

[01:35:14]

Microsoft, perhaps so. I wouldn't pin this on the government just yet, but it's very easy.

[01:35:25]

Oh, no, no, no. I just meant that Democratic members of Congress, I know for a fact, applied pressure.

[01:35:29]

Oh, they did.

[01:35:30]

To the labs like, no, you can't. It has to reflect our values.

[01:35:34]

Okay. Yeah, so maybe that's where it's.

[01:35:36]

But is that permanent? Am I always going to get, when I type in, who is George Washington, you know, a picture of Denzel Washington?

[01:35:42]

What I'm saying is, it's already changing. A lot of these things are being reversed. It's not perfect, but it's already changing. And that's, I think, just a function of the larger cultural change. I think Elon buying Twitter and letting people talk and debate moved the culture to, I think, a more moderate place. I think he's gone a little further himself. But I think it was net positive on the culture, because it was so far left. It was so far left inside these companies, the way they were designing their products, such that George Washington will look like a black George Washington, what have you. That's just insane, right? It was verging on insanity.

[01:36:29]

Well, it's lying. And that's what freaked me out. I mean, it's like, I don't know, just tell the truth. There are lots of truths I don't want to hear, that don't comport with my desires, but I don't want to be lied to. George Washington was not black. None of the framers were. They were all white Protestant men. Sorry.

[01:36:44]

That's right. Yeah.

[01:36:45]

So, like, that's a fact. Deal with it. So if you're going to lie to me about that, you're my enemy, right?

[01:36:51]

I think so. I mean, I would say it's a small element of these companies that's doing that, but they tend to be the controlling element, those sort of activist folks. And I was at Facebook in 2015.

[01:37:06]

You worked at Facebook?

[01:37:07]

I worked at Facebook, yeah.

[01:37:08]

I didn't know that.

[01:37:09]

I worked on open source mostly. I worked on React and React Native, one of the most popular ways of programming user interfaces. So I mostly worked on that. I didn't really work on the blue app and all of that, but I saw the cultural change, where a small minority of activists were just shaming anyone who was thinking independently. It sent Silicon Valley in this sheep-like direction where everyone is afraid of this activist class, because they can cancel you. I think one of the early shots fired there was Brendan Eich, the inventor of JavaScript, the inventor of the language that runs the browser. Because of the way he votes or donates, whatever, he got pushed out of his position as CEO of Mozilla, and that was seen as a win or something. I was not really interested in politics in 2012, '13, when I first came to this country, but I just accepted it as, like, oh, all these people are Democrats, liberal is what you are, whatever. But I just looked at that and I was like, that's awful. No matter what his political opinion is, you're taking from a man his ability to earn a living.

[01:38:34]

Eventually he started another browser company, and he's good, right? But this cancel culture created such a bubble of conformism, and the leadership class at these companies were actually afraid of the employees.

[01:38:48]

So that is the fact that bothers me most. Silicon Valley is defining our future. That is technology. We don't have manufacturing in the United States anymore. Creativity has obviously been extinguished everywhere, in the visual arts, everywhere. Silicon Valley is the last place, and it's the most important.

[01:39:07]

Yes.

[01:39:08]

And so the number one requirement for leadership is courage. Number one.

[01:39:12]

Yes.

[01:39:12]

Number one. Nothing even comes close to bravery as a requirement for wise and effective leadership. So if the leaders of these companies were afraid of, like, 26 year old, unmarried, screechy girls in the HR department, like, whoa, that's really cowardly, like, shut up. You're not leading this company. I am. That's super easy. I don't know why that's so hard. Like, what?

[01:39:38]

The reason I think it was hard was because these companies were competing for talent hand over fist. It was the zero-interest-rate era in the US economy, and everyone was throwing cash at, like, talented people. And therefore, if you offend the sensibilities of the employees even the slightest bit, you're afraid that they're going to leave or something like that. I'm trying to make up an excuse for them.

[01:40:08]

Well, you could answer this question, because you are the talent. You came all the way from Jordan to work in the Bay Area, to be at the center of creativity. So the people who do what you do, who can write code, which is just the basis of all of this, are they. They seem much more like you or James Damore. They just. They don't seem like political activists to.

[01:40:34]

Me, for the most part, yeah. There's still a segment of the sort of programmer population, though.

[01:40:40]

Well, they have to be rational because code is about reason, right?

[01:40:43]

Nah. I mean, this is the whole thing, you know. A lot of these people that we talked about are into code and things like that. They're not rational. Really? Yeah. Like, look, I think coding could help you become more rational, but that can be very easily overridden, I think.

[01:40:56]

Isn't that the basis of it? I thought if this is true and that is true, then that must be true. I thought that was the point. Yeah.

[01:41:02]

It's very easy for people to just, you know, compartmentalize, right? Like, now I'm doing coding, now I'm doing emotions.

[01:41:12]

Oh, so the brain is not a computer.

[01:41:13]

The brain is not a computer.

[01:41:14]

Exactly. Exactly.

[01:41:15]

That's my point.

[01:41:16]

I know.

[01:41:17]

You know, so I'm probably responsible for the most people learning to code in America. The reason I came to the US is I built this piece of software that was the first to make it easy to code in the browser. It went super viral, and a bunch of US companies started using it, including Codecademy, and I joined them as a founding engineer. They had just started, two guys, amazing guys. They had just started and I joined them, and we taught, like, 50 million people how to code. Many millions of them are American. And the sort of rhetoric at the time, what you would say is, like, coding is important because it'll teach you how to think, computational thinking and all of that. Maybe I've said it at some point, but I've never really believed it. I think coding is a tool you can use to build things, to automate things. It's a fun tool. You can do art with it. You can do a lot of things with it. But ultimately, I don't think you can sit people down and sort of make them more rational.

[01:42:25]

And you get into all these weird things if you try to do that. People can become more rational by virtue of education, by virtue of seeing that taking a more rational approach to their life yields results. But you can't really teach it that way.

[01:42:46]

Well, I agree with that completely. That's interesting. I just thought it was a certain. Because I have to say, without getting into controversial territory, every person I've ever met who writes code is kind of similar in some ways to every other person I've ever met who writes code. Not a broad cross section of any population.

[01:43:03]

No.

[01:43:05]

At all.

[01:43:06]

Well, people who make it a career. But I think anyone can write a lot of code, I'm sure.

[01:43:10]

I mean, people who get paid to do it.

[01:43:12]

Right?

[01:43:12]

Right. Yeah. Interesting. So, bottom line, do you see. And we didn't even mention Elon Musk; David Sacks has also come out for Trump. So do you think the vibe shift in Silicon Valley is real?

[01:43:28]

Yes, actually, I would credit Sacks originally, perhaps more than Elon, because, look, it was a one-party state.

[01:43:36]

Yeah.

[01:43:37]

No one watches you, for example. No one ever watched anything like that, maybe I'm overgeneralizing, but most people didn't get any right-wing or center-right opinions. For the most part, they didn't seek it out, and it wasn't there. You're swimming in just liberal, Democratic talking points. I'd say Sacks on the All-In podcast was sort of the first time a lot of people started, on a weekly basis, hearing a conservative talk, that being David Sacks. And I would start to hear at parties and things like that people describe their politics like, you know, I agree with Sacks's point of view on the All-In podcast most of the time. And it's like, yeah, you're kind of maybe moderate or center-right at this point.

[01:44:33]

Well, he's so reasonable. First of all, he's a wonderful person, in my opinion. But I didn't have any sense of the reach of that podcast until I did it. I had no sense at all. And he's like, will you do my podcast? Sure. Because I love David Sacks. I do the podcast, and, like, everyone I've ever met texts me, oh, you're on the All-In podcast? It's not my world. But I didn't realize that is the vector if you want to reach sort of business-minded people who are not very political but are probably going to, like, send money to a buddy who's bundling for Kamala because, like, she's our candidate.

[01:45:08]

Yes.

[01:45:09]

That's the way to reach people like that.

[01:45:11]

That's right. By the way, this is my point about how technology can have a centralizing effect, but also a decentralizing one.

[01:45:17]

Yes.

[01:45:17]

So YouTube, you can argue YouTube is the centralized thing. They're pushing opinions on us, whatever. But now you have a platform on YouTube after you got fired from Fox.

[01:45:28]

Right.

[01:45:29]

You know, Sacks can have a platform and put these opinions out. And I think there was a moment during COVID when I felt like they're going to close everything down.

[01:45:42]

Yeah. For good reason. You felt that way.

[01:45:45]

Yes. And maybe they were. Maybe there's going to be some other event that will allow them to close it down. But one of the things I really love about America is the First Amendment. It's just the most important institutional innovation in the history of humanity.

[01:46:01]

I agree with that completely.

[01:46:02]

And we should really.

[01:46:03]

You grew up without it, too. I mean, it must be.

[01:46:06]

We should really protect it. Like, we should be so protective of it, you know, like your wife or something.

[01:46:15]

I totally agree. Hands off. Can you just repeat your description of its importance historically? I'm sorry. You put it so well.

[01:46:24]

It's the most important institutional innovation in human history.

[01:46:29]

The First Amendment is the most important institutional innovation in human history. Yes, I love that. I think it's absolutely right. And as someone who grew up with it, in a country that had had it for 200 years when I was born, you don't feel that way. It's just like, well, it's the First Amendment. It's just part of nature. It's like gravity. It just exists. But as someone who grew up in a country that does not have it, which is true of every other country.

[01:46:55]

On the planet, it's the only country that has it.

[01:46:58]

You see it that way. You see it as the thing that makes America America.

[01:47:01]

Well, the thing that makes it so that we can change course.

[01:47:04]

Yes.

[01:47:05]

Right. And the reason why we had this conformist, mob-rule mentality that people call woke, the reason that we're now past that, almost, still kind of there, but we're on our way past that, is because of the First Amendment and free speech. And again, I would credit Elon a lot for buying Twitter and letting us talk and debate and push back on the craziness. Right.

[01:47:41]

It's kind of.

[01:47:42]

It's.

[01:47:43]

Well, it's beautiful. I've been a direct beneficiary of it, as I think everyone in the country has been. And I love Elon, but I mean, it's a little weird that, like, a foreigner has to do that, a foreign-born person. You, Elon, appreciate it in this way. It's a little depressing. Like, why didn't some American-born person do that? I guess 'cause they don't. We don't take it.

[01:48:06]

Yeah.

[01:48:07]

Take it for granted.

[01:48:07]

I wrote a thread, it was, like, ten things I like about America. I expected it to do well. It was, like, three, four years ago. It went super viral. The Wall Street Journal covered it. Peggy Noonan, you know, called me, and she was like, I want to write a story about it. I was like, okay, it's a Twitter thread, you can read it. And I just talked about normal things, you know, free speech being one of them, but also hard work, appreciation for talent, and all of that. And it was starting to close up, right? I started to see meritocracy kind of being less valued. And that's part of the reason why I wrote the thread. And what I realized is, like, yeah, most Americans just don't think about that and don't really value it as much.

[01:48:54]

I agree.

[01:48:55]

And so maybe you do need to.

[01:48:57]

Oh, I think that's absolutely right. But why? I mean, I hate to say this, because I've always thought, my whole life, that foreigners are great. I like traveling to foreign countries. My best friend is foreign-born, actually, as opposed to mass immigration as I am. Which I am.

[01:49:15]

Arabs really like you, by the way.

[01:49:16]

Oh, well, I really like Arabs. I've thrown off the brainwashing, just to sidebar. I feel like we had a bad experience with Arabs 23 years ago, and what a lot of Americans didn't realize, but I knew from traveling a lot in the Middle East, yeah, it was bad. It was bad. However, that's not representative of the people that I have dinner with in the Middle East at all. Someone once said to me, those are the worst people in our country, right? And I totally agree with that strongly. I always defend the Arabs in a heartfelt way. But I wonder if some of the immigrants, particularly the higher-income immigrants recently, I've noticed, are parroting the same kind of anti-American crap that they're learning from the institutions. You come from Punjab and go to Stanford, and all of a sudden you've got the same rotten, decadent attitudes of your native-born professors from Stanford. Do you see that?

[01:50:22]

I'm not sure what the distribution is. Like, speaking of Indians, on the right side of the spectrum, we have Vivek. And who's the best?

[01:50:31]

Who's a perfect example of what I'm saying. Like, Vivek has thought it through: not just, the First Amendment is good, but why it's good.

[01:50:37]

Yeah, well, you know, I'm not sure. I think foreigners, for the most part, do appreciate it more, but it's easy, you know. I talked about how I just try not to be this conformist kind who really absorbs everything around me and acts on it. But it's very easy for people to go into these one-party-state places and really become part of this mob mentality, where everyone believes the same thing, and any deviation from that is considered a cancelable offense. And you asked about the shift in Silicon Valley. Part of the shift is Silicon Valley still has a lot of people who are independent-minded, and they see this sort of conformist type of thinking in the Democratic Party, and that's really repulsive to them. Where there's, like, a party line. It's like, Biden's sharp as a tack. Sharp as a tack. Everyone says that. And then the debate happens: oh, unfit, unfit, unfit. And then, oh, he's out. Oh, Kamala, Kamala, Kamala. It's lockstep, and there's, like, no range. There's very little dissent within that party. And maybe Republicans, I think, at some point were the same.

[01:51:55]

Maybe now it's sort of a little different. But this is why people are attracted to the other side in Silicon Valley. By the way, this is advice for the Democrats: if you want Silicon Valley back, maybe don't be so controlling of opinions, and be okay with more dissent.

[01:52:16]

You have to relinquish a little bit of power to do that. I mean, it's the same as raising teenagers. There's always a moment in the life of every parent of teenagers where a child is going in a direction you don't want. If it's the shooting-heroin direction, you have to intervene with maximum force. But there are a lot of directions a kid can go that are deeply annoying to you, and you have to restrain yourself a little bit if you want to preserve the relationship. Actually, if you want to preserve your power over the child, you have to pull back and be like, I'm not gonna say anything.

[01:52:48]

That's right.

[01:52:48]

This child will come back. My gravitational pull is strong enough that I'm not gonna lose this child because she does something that offends me today. That's right. You know what I mean? You can't hold too tightly. And I feel like they don't understand. I feel like the democratic party, I'm not an intimate, of course, I'm not in the meetings, but I feel by their behavior that they feel very threatened. That's what I see. These are people who feel like they're losing their power.

[01:53:15]

Yes.

[01:53:16]

And so they have to control what you say on Facebook. I mean, what?

[01:53:19]

Yes.

[01:53:19]

If you're worried about what people say on Facebook, you know, you've lost confidence in yourself.

[01:53:23]

That's right. That's right.

[01:53:25]

Do you feel that?

[01:53:26]

Yeah. And I mean, you know, there's Matt Taibbi and Michael Schellenberger, and a lot of folks did a lot of great work on censorship.

[01:53:35]

Yes.

[01:53:36]

And the government's kind of involvement in that and how they, they push social media companies. I don't know if you can put it just on the Democrats. Cause I think part of it happened during the Trump administration as well.

[01:53:48]

For sure.

[01:53:49]

But I think they're more excitable about it. They really love misinformation as a term, which I think is kind of a BS term.

[01:53:57]

It's a meaningless term.

[01:53:58]

It's a meaningless term.

[01:53:59]

All that matters is whether it's true or not. And the term mis- and disinformation doesn't even address the veracity of the claim.

[01:54:05]

That's right.

[01:54:06]

It's irrelevant to them whether it's true or not. In fact, if it's true, it's more upsetting. Yeah.

[01:54:09]

It's like everything we talked about earlier: it's just making people stupid by taking away their faculty of trying to discern truth. I think that's how you actually become rational, by trying to figure out whether something is true or not, and then being right or wrong. And that really trains you to have better judgment. You talked about judgment; that's how people build good judgment. You can't outsource your judgment to the group. Which, again, feels like what's asked of us, especially in liberal circles: no, Fauci knows better. Two weeks to stop the spread. Take the jab, stay home, wear the mask. It's just talking down to us as children. You can't discuss certain things on YouTube; you'll get banned. At some point, you couldn't say the lab leak theory, which is now the mainstream theory.

[01:55:08]

Yes.

[01:55:10]

And again, a lot of this self corrected because of the First Amendment.

[01:55:14]

Yeah. And Elon. Wow, that was as interesting as dinner was last night, with a little less profanity. But I'm really grateful that you took the time to do this.

[01:55:22]

Thank you. It's absolutely my pleasure.

[01:55:24]

It was mine. Thank you.

[01:55:26]

Thanks.

[01:55:28]

Thanks for listening to the Tucker Carlson Show. If you enjoyed it, you can go to TuckerCarlson.com to see everything that we have made, the complete library.