[00:00:00]

The humans are still more powerful than the AIs. The problem is that we are divided against each other, and the algorithms are using our weaknesses against us. And this is very dangerous, because once you believe that people who don't think like you are your enemies, democracy collapses, and then the election becomes like a war. So if something ultimately destroys us, it will be our own delusions, not the AIs.

[00:00:23]

We have a big election in the United States.

[00:00:24]

Yes, democracy in the States is quite fragile, but the big problem is, what if?

[00:00:30]

Surely that will never happen.

[00:00:32]

Yuval Noah Harari, the author of some of the most influential non-fiction books in the world today, is now.

[00:00:38]

At the forefront of exploring the world-shaping power of AI and how it is beyond anything humanity has ever faced before. The biggest social networks in the world, they're effectively gonna go for free speech. What is your take on that?

[00:00:48]

The issue is not the humans. The issue is the algorithms. So let me unpack this. In the 2010s, there was a big battle between algorithms for human attention. Now, the algorithms discovered, when you look at history, that the easiest way to grab human attention is to press the fear button, the hate button, the greed button. The problem is that there was a misalignment between the goal that was defined for the algorithm and the interests of human society. But this is where it becomes really disconcerting. Because if so much damage was done by giving the wrong goal to a primitive social media algorithm, what would be the results with AI in 20 or 30 years?

[00:01:25]

So what's the solution?

[00:01:26]

We've been in this situation many times before in history, and the answer is always the same.

[00:01:30]

Which is? Are you optimistic?

[00:01:33]

I try to be a realist.

[00:01:38]

This is a sentence I never thought I'd say in my life. We've just hit 7 million subscribers on YouTube, and I want to say a huge thank you to all of you that show up here every Monday and Thursday to watch our conversations, from the bottom of my heart, but also on behalf of my team, who you don't always get to meet. There are almost 50 people now behind the Diary of a CEO who work to put this together. So from all of us, thank you so much. We did a raffle last month, and we gave away prizes to people that subscribed to the show, up until 7 million subscribers. And you guys loved that raffle so much that we're going to continue it. So every single month we're giving away money-can't-buy prizes, including meetings with me, invites to our events, and £1,000 gift vouchers, to anyone that subscribes to the Diary of a CEO. There are now more than 7 million of you. So if you make the decision to subscribe today, you can be one of those lucky people.

[00:02:25]

Thank you from the bottom of my heart.

[00:02:27]

Let's get to the conversation.

[00:02:31]

Ten years ago, you made a video that was titled Why Humans Run the World. It's a very well-known TED Talk that you did. After reading your new book, Nexus, I wanted to ask you a slightly modified question, which is: do you still believe that ten years from now, humans will fundamentally be running the world?

[00:02:54]

I'm not sure. It depends on the decisions we all take in the coming years. But there is a chance that the answer is no. That in ten years, algorithms and AIs will be running the world. I don't have in mind some kind of Hollywood science-fiction scenario of one big computer conquering the world. It's more like a bureaucracy of AIs, that we will have millions of AI bureaucrats everywhere, you know, in the banks, in the government, in businesses, in universities, making more and more decisions about our lives. Everyday decisions: whether to give us a loan, whether to accept us to a job. And we will find it more and more difficult to understand the logic, the rationale, why the algorithm refused to give us a loan, why the algorithm accepted somebody else for the job. And, you know, you could still have democracies with people voting for this president or this prime minister. But if most of the decisions are made by AIs, and humans, including the politicians, have difficulty understanding the reason why the AIs are making a particular decision, then power will gradually shift from humanity to these new alien intelligences.
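
(Editor's note: a minimal Python sketch of the opacity Harari is describing. Every field, weight, and threshold below is invented for illustration; a real credit model would have hundreds of learned features, which is exactly why its refusals resist human-readable explanation.)

```python
# Hypothetical loan decision: the "reason" for refusal is just a
# weighted sum over learned features, with no rationale attached.
applicant = {"age": 34, "income": 41_000, "postcode_risk": 0.7,
             "years_at_job": 2, "late_payments": 1}

# Weights a model might have learned from millions of past loans;
# these numbers are made up for the example.
weights = {"age": 0.01, "income": 0.00004, "postcode_risk": -2.1,
           "years_at_job": 0.3, "late_payments": -0.9}

score = sum(weights[k] * applicant[k] for k in weights)
decision = "approve" if score > 0.5 else "refuse"

# The only honest answer to "why was I refused?" is the score itself.
print(f"score={score:.2f} -> {decision}")
```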

[00:04:20]

Alien intelligences?

[00:04:21]

Yeah. I prefer to think about AI. I know that the acronym is artificial intelligence, but I think it's more accurate to think about it as an alien intelligence. Not in the sense of coming from outer space; in the sense that it makes decisions in a fundamentally different way than human minds. Artificial carries the sense that we design it, we control it. Something artificial is made by humans. With each passing year, AI is becoming less and less artificial and more and more alien. Yes, we still design the kind of baby AIs, but then they learn and they change, and they start making unexpected decisions, and they start coming up with new ideas which are alien to the human way of doing things. You know, there is this famous example with the game of Go, that in 2016, AlphaGo defeated the world champion Lee Sedol. But the amazing thing about it was the way it did it. Because humans have been playing Go for 2,500 years. A board game, a strategy game developed in ancient China and considered one of the basic arts that any cultivated, civilized person in East Asia had to know.

[00:05:45]

And tens of millions of Chinese and Koreans and Japanese played Go. For centuries, entire philosophies developed around the game, of how to play it. It was considered a good preparation for politics and for life. And people thought that they had explored the entire realm, the entire geography, the landscape, of Go. And then AlphaGo came along and showed us that actually, for 2,500 years, people were exploring just a very small bit, a very small part, of the landscape of Go. There are completely different strategies of how to play the game that not a single human being came up with in more than 2,000 years of playing it. And AlphaGo came up with them in just a few days. So this is alien intelligence. And you know, if it were just a game, then fine, but the same thing is likely to happen in finance, in medicine, in religion, for better or for worse.

[00:06:48]

You wrote this book, Nexus. Nexus, how do you pronounce it?

[00:06:53]

Nexus.

[00:06:54]

Nexus.

[00:06:55]

I'm not an expert on pronunciation, so.

[00:06:59]

You could have written many books. You're someone that's, I think, broadly curious about the nature of life, but also the nature of history. For you to write a book that is so detailed and comprehensive, there must have been a pretty strong reason why this book had to come from you now. So why is that?

[00:07:17]

Because I think we need historical perspective on the AI revolution. I mean, there are many books about AI. Nexus is not a book about AI. It's a book about the long-term history of information networks. I think that to understand what is really new and important about AI, we need the perspective of thousands of years, to go back and look at previous information revolutions like the invention of writing and the printing press and the radio. And only then do you really start to understand what is happening around us right now. One thing you understand, for instance, is that AI is really different. People compare it to previous revolutions, but it's different, because it's the first technology ever in human history that is able to make decisions independently and to create new ideas independently. A printing press could print my book, but it could not write it. It could just copy my ideas. An atom bomb could destroy a city, but it can't decide by itself which city to bomb or why to bomb it. And AI can do that. And you know, there is a lot of hype right now around AI, so people get confused, because they now try to sell us everything as AI.

[00:08:42]

Like, you want to sell this table to somebody: oh, it's an AI table. And this water, this is AI water. So people ask, what is AI? Everything is AI? No, not everything. There is a lot of automation out there which is not AI. If you think about a coffee machine that makes coffee for you, it does things automatically, but it's not an AI. It's pre-programmed by humans to do certain things, and it can never learn or change by itself. A coffee machine becomes an AI if you come to the coffee machine in the morning and the machine tells you, hey, based on what I know about you, I guess that you would like an espresso. It learned something about you, and it makes an independent decision. It doesn't wait for you to ask for the espresso. And it's really AI if it tells you, I just came up with a new drink, it's called Boffy, and I think you would like it. That's really AI, when it comes up with completely new ideas that we did not program into it and that we did not anticipate. And this is a game changer in history. It's bigger than the printing press, it's bigger than the atom bomb.
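
(Editor's note: a toy sketch of the distinction Harari draws, in hypothetical Python. The first machine is pure automation: it does only what it is pre-programmed to do. The second keeps a history, learns a preference, and acts without being asked, which is the step Harari calls AI.)

```python
class AutomaticCoffeeMachine:
    """Automation: fixed behaviour, never learns or changes."""
    def make(self, order: str) -> str:
        return f"Making {order}."

class LearningCoffeeMachine:
    """Learns from your orders and makes an unprompted suggestion."""
    def __init__(self) -> None:
        self.history: list[str] = []

    def make(self, order: str) -> str:
        self.history.append(order)
        return f"Making {order}."

    def greet(self) -> str:
        if not self.history:
            return "Good morning. What would you like?"
        favourite = max(set(self.history), key=self.history.count)
        # The independent decision: it doesn't wait to be asked.
        return f"Based on what I know about you, I guess you'd like {favourite}."

machine = LearningCoffeeMachine()
machine.make("espresso"); machine.make("espresso"); machine.make("latte")
print(machine.greet())  # suggests espresso without being asked
```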

[00:09:59]

You said we need to have a historical perspective on it. Do you consider yourself to be a historian?

[00:10:04]

Yes, that's my profession; I'm a historian. That's my training. I was originally a specialist in medieval military history. I wrote about the Crusades and the Hundred Years' War, and the strategy and logistics of the English armies that invaded France in the 14th century. Those were my first articles. And this is the kind of perspective, or knowledge, that I also bring to try and understand what's happening now with AI.

[00:10:34]

Because most people's understanding of what AI is comes from them playing around with a large language model, like ChatGPT or Gemini or Grok or something. That's their understanding of it: you can ask it a question and it gives you an answer. That's really what people think of AI as. And so it's easy to be a bit complacent, or to see this technological shift as trivial. But when you start talking about information and the disruption of the flow of information and information networks, and when you bring it back through history, and you give us this perspective on the fact that information effectively glues us all together, then, for me, I start to think about it completely differently.

[00:11:15]

I mean, there are two ways I think about it. One way is that when you realize, as you said, that information is the basis for everything, when you start to shake the basis, everything can collapse or change, or something new can come up. For instance, democracies are made possible only by information technology. Democracy, in essence, is a conversation. A group of people conversing, talking, trying to make decisions together. Dictatorship is when somebody dictates everything. One person dictates everything. That's dictatorship. Democracy is a conversation. Now, in the Stone Age, hunter-gatherers living in small bands, they were mostly democratic. Whenever the band needed to decide anything, they could just talk with each other and decide. As human societies grew bigger, it just became technically difficult to hold the conversation. So the only examples we have from the ancient world of democracies are small city-states like Athens or Republican Rome. These are the two most famous examples. Not the only ones, but the most famous. And even the ancients, even philosophers like Plato and Aristotle, knew that once you go beyond the level of a city-state, democracy is impossible. We do not know of a single example from the pre-modern world of a large-scale democracy.

[00:12:41]

Millions of people spread over a large territory, conducting their political affairs democratically. Why not? Not because of this or that dictator who took power; because democracy was simply impossible. You cannot have a conversation between millions of people when you don't have the right technology. Large-scale democracy becomes possible only in the late modern era, when a couple of information technologies appear: first the newspaper, then the telegraph and radio and television. They make large-scale democracy possible. So democracy, it's not like you have democracy and, on the side, you have these information technologies. No, the basis of democracy is information technology. So if you have some kind of earthquake in information technology, like the rise of social media or the rise of AI, this is bound to shake democracy. Which is what we now see around the world: we have the most sophisticated information technology in history, and people can't talk with each other. The democratic conversation is breaking down, and every country has its own explanation. Like, you talk to Americans: what's happening there between Democrats and Republicans? Why can't they agree on even the most basic facts? And they give you all these explanations about the unique conditions of American history and society.

[00:14:05]

But you see the same thing in Brazil. You see the same thing in France, in the Philippines. So it can't be the unique conditions of this or that country. It's the underlying technological revolution. And the other thing that I bring from history is how even relatively small technological changes, seemingly small changes, can have far-reaching consequences. Think about the invention of writing. Originally, it was basically people playing with mud. Writing was invented many times in many places, but the first time was in ancient Mesopotamia. People take clay tablets, which are basically pieces of mud, and they take a stick and use it to make marks in the clay, in the mud. And this is the invention of writing. And it had a profound effect. To give just one example, think about ownership. What does it mean to own something? Like, I own a house, I own a field. Before writing, if you live in a small Mesopotamian village, like 7,000 years ago, and you own a field, this is a community affair. It means that your neighbors agree that this field is yours, and they don't pick fruits there, and they don't graze their sheep there, because they agree it's yours.

[00:15:36]

It's a community agreement. Then comes writing, and you have written documents, and ownership changes its meaning. Now, to own a field or a house means that there is some piece of dry mud somewhere in the archive of the king, with marks on it, that says that you own that field. So suddenly, ownership is not a matter of community agreement between the neighbors; it's a matter of which document sits in the archive of the king. And it also means, for instance, that you can sell your land to a stranger without the permission of your neighbors, simply by giving the stranger this piece of dry mud in exchange for gold or silver or whatever. What a big change from a seemingly simple invention, like using a stick to draw some signs on a piece of mud. And now think about what AI will do to ownership. Maybe ten years down the line, to own your house will mean that some AI says that you own it. And if the AI suddenly says that you don't own it, for whatever reason that you don't even know, that's it, it's not yours.

[00:16:54]

That mark on that piece of mud was also the invention of sort of written language. And I was thinking, when I was reading your book, about how language holds our society together. Not in the way that we often might assume, as in me having a conversation with you, but passwords, poetry, banking. It's like our whole society is secured by language. And the first thing that the AIs have mastered, with large language models, is the ability to replicate that. Which made me think about all the things in my life that are actually held together with language. Even my relationships now, because I don't see my friends; my friends live in Dubai and America and Mexico. So we converse in language; our relationships are held together in language. And as you said, democracies are held together in language. And now there's a more intelligent force that's mastered that.

[00:17:50]

Yeah, it was so unexpected. Like, you know, five years ago, people said AI will master this or that, self-driving vehicles. But language? Nah, this is such a complicated problem. Language is the human masterpiece; it will never master language. And ChatGPT came along, and, you know, I'm a words person, and I'm simply amazed by the quality of the texts that these large language models produce. It's not perfect, but they really understand the semantic field of words. They can string words together into sentences to form a coherent text. That's really remarkable. And as you said, this is the basis for everything. Like, I give instructions to my bank with language. If AI can generate text and audio and image, then how do I communicate with the bank in a way which is not open to manipulation by an AI?
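
(Editor's note: real large language models use billions of learned parameters, but even a bigram model, which just counts which word tends to follow which, shows the basic idea of "stringing words together" from learned statistics. The tiny corpus below is invented for illustration.)

```python
from collections import defaultdict
import random

corpus = ("the bank approved the loan . the bank refused the loan . "
          "the loan was small .").split()

# Record every observed continuation of every word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate text by repeatedly sampling a plausible next word.
random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))
```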

[00:18:55]

But the tempting part of that is, you don't like communicating with your bank anyway, as in calling them, being on the phone, waiting for another human. So the temptation is: I don't like speaking to my bank anyway, so I'm going to let the AIs do that.

[00:19:10]

If I can trust them, yes. I mean, the big question is, why does the bank want me to call personally? To make sure that it's really me, that it's not somebody else telling the bank, oh, make this transfer to, I don't know, the Cayman Islands. That it's really me. And how do you make sure? How do you build this trust? The whole of finance, for thousands of years, is just one question: trust. All these financial devices, money itself, it's really just trust. It's not made from gold or silver or paper or anything. It's: how do you create trust between strangers? And therefore most financial inventions, in the end, are linguistic and symbolic inventions. You don't need some complicated physics; it's complicated symbolism. And now AI might start creating new financial devices, and it will master finance because it mastered language. And like you said, we now communicate with other people, our friends, all over the world. You know, in the 2010s, there was a big battle between algorithms for human attention. We were just discussing it before the podcast: how do we get the attention of people? But there is something even more powerful out there than attention, and that's intimacy.

[00:20:29]

If you really want to influence people, intimacy is more powerful than attention.

[00:20:36]

How are you defining intimacy in this regard?

[00:20:39]

Someone that you have a long-term acquaintance with, that you know personally, that you trust, that to some extent you love, that you care about. And until today, it was utterly impossible to fake intimacy and to mass-produce intimacy. Dictators could mass-produce attention. Once you have, for instance, radio, you can tell all the people in Nazi Germany or in the Soviet Union: the great leader is giving a speech, everybody must turn their radio on and listen. So you can mass-produce attention, but this is not intimacy. You don't have intimacy with the great leader. Now with AI, you can, for the first time in history, at least theoretically, mass-produce intimacy, with millions of bots, maybe working for some government, faking intimate relationships with us, and it will be hard to know that this is a bot and not a human being.

[00:21:43]

It's interesting, because I've had so many conversations with relationship experts and a variety of people who speak to the decline in human-to-human intimacy, the rise in loneliness, us becoming more sexless as a society, and all of these kinds of things. So with the decline in human-to-human intimacy and connection, and the rise of the possibility of artificial intimacy, it begs the question of what the future might look like in a world where people are lonelier than ever, more disconnected than ever, but still have the same Maslowian need for that connection and that feeling of love and belonging. And maybe this is why we're seeing a rise in polarization at the same time, because people are desperately trying to belong somewhere, and the algorithm is reinforcing my echo chamber. But I don't know how that ends.

[00:22:43]

I don't think it's deterministic. It depends on the decisions we make, individually and as a society. There are, of course, also wonderful things that this technology can do for us. The ability of AI to hold a conversation, the ability to understand your emotions, can potentially mean that we will have lots of AI teachers and AI doctors and AI therapists that can give us better healthcare services, better education services than ever before. Instead of being, you know, a kid in a class of 40 other kids, where the teacher is barely able to give attention to this particular child and understand his or her specific needs and his or her specific personality, you can have an AI tutor that is focused entirely on you and that is able to give you a quality of education which is really unparalleled.

[00:23:44]

I had this debate with my friend on the weekend. He's got two young kids, who are one and three years old. And we were discussing: in the future, in sort of 16 years' time, where would you rather send your child? Would you rather send your child to be taught by a human in a classroom, as you've described, with lots of people, lots of noise, where they're not getting personalized learning? So if the rest of the class is more intelligent, they're being left behind; if they're more intelligent, they're being dragged back. Or would you rather your child sat in front of a screen, potentially, or a humanoid robot, and was given really personalized, tailored education that was probably significantly cheaper than, say, private education or university?

[00:24:24]

You need the combination. I mean, I think that for many of the lessons, it will be better to go with the AI tutor, and you don't even have to sit in front of a screen. You can go to the park and get a lesson on ecology, just listening as you walk. But you will need large groups of kids for break time, because very often the most important lessons in school are not learned during the lessons; they are learned during the breaks. And this is something that should not be automated. You would still need large groups of children together, with human supervision, for that.

[00:25:07]

The other thing I thought about a lot when I was reading your book is this idea that I would assume that us having more information and more access to information would lead to more truth in the world, less conspiracy, more agreement. But that doesn't seem to be the case.

[00:25:25]

No, not at all. Most information in the world is junk. I think the best way to think about it is, it's like with food. There was a time, like a century ago in many countries, when food was scarce, so people ate whatever they could get, especially if it was full of fat and sugar, and they thought that more food is always good. Like, if you asked your great-grandmother, she would say, yes, more food is always good. And then we reached a time of abundance in food, and we have all this industrialized, processed food which is artificially full of fat and sugar and salt and whatever, and it's obviously bad for us. The idea that more food is always good? No. And definitely not all this junk food. And the same thing has happened with information. Information was once scarce, so if you could get your hands on a book, you would read it, because there was nothing else. And now information is abundant. We are flooded by information, and much of it is junk information, which is artificially full of greed and anger and fear because of this battle for attention. And it's not good for us.

[00:26:41]

So we basically need to go on an information diet. Again, the first step is to realize that it's not the case that more information is always good for us. We need a limited amount, and we actually need more time to digest the information. And we have to be, of course, also careful about the quality of what we take in, because, again, of the abundance of junk information. And the basic misconception, I think, is this link between information and truth. People think, okay, if I get a lot of information, this is the raw material of truth, and more information will mean more knowledge. And that's not the case, because even in nature, most information is not about the truth. The basic function of information, in history and also in biology, is to connect. Information is connection. And when you look at history, you see that very often the easiest way to connect people is not with the truth, because the truth is a costly and rare kind of information. It's usually easier to connect people with fantasy, with fiction. Why? Because the truth tends to be not just costly; the truth tends to be complicated, and it tends to be uncomfortable and sometimes painful.

[00:28:09]

If you think of, you know, politics: a politician who would tell people the whole truth about their nation is unlikely to win the elections, because every nation has these skeletons in the cupboard, all these dark sides and dark episodes that people don't want to be confronted with. So we see that, politically, if you want to connect nations, religions, political parties, you often do it with fictions and fantasies and fear.

[00:28:44]

I was thinking about Sapiens and the role that stories play in engaging our brains. And I was thinking a lot about the narratives. In the UK, we have a narrative where we're told that much of the cause of the problems we have in society, unemployment, other issues with crime, is that there are people crossing from France on boats. And it's a very effective narrative for getting people to band together, to march in the streets. And in America, obviously, the same narrative of the wall and the southern border: they're crossing our border in the millions, they're rapists, they're not sending their good people, they're coming from mental institutions. It has galvanized people together. And those people are now, like, marching in the streets, voting based on that story. That is a fearful story.

[00:29:27]

It's a very powerful story, because it connects to something very deep inside us. If you want to get people's attention, if you want to get people's engagement, the fear button is one of the most efficient, most effective buttons to press in the human mind. And again, it goes back to the Stone Age. If you live in a Stone Age tribe, one of your biggest worries is that the people from the other tribe will come to your territory and take your food or kill you. So this is a very ingrained fear, not just in humans but in every social animal. There have been experiments on chimpanzees showing that chimpanzees also have a kind of almost instinctive fear of, or disgust towards, foreign chimpanzees from a different band. And politicians and religious leaders learn how to play on these human emotions almost like you play on a piano. Now, originally, feelings like disgust evolved in order to help us. At the most basic level, disgust is there because, you know, especially as a kid, you want to experiment with different foods, but if you eat something that is bad for you, you need to puke it, you need to throw it out.

[00:30:58]

So you have disgust protecting you. But then you have religious and political leaders throughout history hijacking this defensive mechanism and teaching people, from a very young age, not just to fear but to be disgusted by foreign people, by people who look different. And as an adult, you can learn all the theories and you can educate yourself that this is not true, but still, very deep in your mind, there is a part that just says: these people are disgusting, these people are dangerous. And we've seen throughout history how many different movements have learned to use these emotional mechanisms to motivate people.

[00:31:53]

We sit down at a very interesting time, Yuval, because two quite significant things have happened in the last year or so as they relate to information and many of the things we've been talking about. One of them is that Elon Musk bought Twitter, and his real mandate has been this idea of free speech. And as part of that mandate, he's unblocked a number of figures who were previously blocked on Twitter, a lot of them right-leaning people who were blocked for a variety of different reasons. And then also this week, Mark Zuckerberg released a letter publicly, and in that letter he says that he regrets how much he cooperated with the FBI, the government, when they asked him to censor things on Facebook; one particular story, he says, he regrets censoring. And it looks like, if you read between the lines of what he's saying, well, he actually says it explicitly: we're going to push back harder in the future if governments or anybody else ask us to censor certain messaging. Now, what I'm seeing is that Twitter, which is one of the biggest social networks in the world, and Meta, the biggest social network in the world, have now taken this stance.

[00:32:59]

And effectively, they're going to let information flow; they're effectively going to go for this free speech narrative. Now, as someone that's used these platforms for a long time, specifically X, or Twitter, it is crazy how different it is these days. There are things that I see every time I scroll that I never would have seen before this free speech position. Now, I'm not taking a stance on whether it's good or bad; it's just very interesting. And there's clearly an algorithm that is now really, like... if I scroll, if I go on X right now, I will see someone being killed with a knife, I reckon, within 30 seconds. And I will see someone getting hit by a car. I will see extreme Islamophobia, potentially, but then I'll also see the other side. So I'm saying I'll see all of the sides. And when you were talking earlier about whether that is good for me, I had a flashback to my friend this weekend. It was my birthday, so me and my friends were together, and I was just looking over at him, mindlessly scrolling these, like, horror videos on Twitter as he sat on my left, thinking, God, he's, like, frying his dopamine receptors.

[00:34:02]

And I just think about this whole new, like, free speech movement. What is your take on this idea of free speech and the role it plays?

[00:34:09]

Know, only humans have free speech. Bots don't have free speech. The tech companies are constantly confusing us about this issue, because the issue is not the humans. The issue is the algorithms. And let me explain what I mean. If the question is whether to ban somebody like Donald Trump from Twitter, I agree, this is a very difficult issue, and we should be extremely careful about banning human beings, especially important politicians, from voicing their views and opinions, however much we dislike their opinions or them personally. It's a very serious matter to ban any human being from a platform. But this is not the problem. The problem on the platform is not the human users. The problem is the algorithms. And the companies constantly shift the blame to the humans in order to protect their business interests. So let me unpack this. Humans create a lot of content all the time. They create hateful content. They create sermons on compassion. They create cooking lessons, biology lessons, so many different things, a flood of information. The big question is, then, what gets human attention? Everybody wants attention now. The companies also want attention. The companies give the algorithms that run the social media platforms a very simple goal.

[00:35:38]

increase user engagement. Make people spend more time on Twitter, more time on Facebook, engage more, send more likes, and recommend it to their friends. Why? Because the more time we spend on the platforms, the more money they make. Very, very simple. Now, the algorithms made a huge, huge discovery, by experimenting on millions of human guinea pigs. The algorithms discovered that if you want to grab human attention, the easiest way to do it is to press the fear button, the hate button, the greed button. And they started recommending to users that they watch more and more content full of hate and fear and greed, to keep them glued to the screen. And this is the deep cause of the epidemic of fake news and conspiracy theories and so forth. And the defense of the companies is: we are not producing the content. Somebody, a human being, produced a hate-filled conspiracy theory about immigrants; it's not us. It's a bit like, I don't know, the chief editor of the New York Times publishing a hate-filled conspiracy theory on the front page of the newspaper. And when you ask him, why did you do it?

[00:37:02]

Or you blame him: look what you did. And he says, I didn't do anything. I didn't write the piece. I just put it on the front page of the New York Times. That's all. That's nothing. It's not nothing. People are producing immense amounts of content. The algorithms are the kingmakers. They are the editors now. They decide what gets viewed. Sometimes they just recommend it to you. Sometimes they actually autoplay it to you. Like, you chose to watch some video, and at the end of the video, to keep you glued to the screen, the algorithm immediately, without you telling it to, autoplays some kind of video full of fear or greed, just to keep you glued to the screen. It is the algorithm doing it. And this should be banned, or this should at least be supervised and regulated. And this is not freedom of speech, because the algorithms don't have freedom of speech. Yes, the person who produced the hate-filled video, I would be careful about banning them. But that's not the problem. It's the recommendation which is the problem. The second problem is that a lot of the conversations online are now being overrun by bots.
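
(Editor's note: a minimal sketch of the misalignment described here. The objective handed to the ranking function mentions only engagement, nothing about truth or wellbeing, so the post that presses the fear button naturally wins. Posts and scores are invented for illustration.)

```python
posts = [
    {"title": "Calm explainer on tax policy",   "predicted_watch_secs": 12},
    {"title": "Cooking lesson",                 "predicted_watch_secs": 25},
    {"title": "THEY are coming for YOUR home!", "predicted_watch_secs": 87},
]

def rank_feed(posts: list[dict]) -> list[dict]:
    # The goal given to the algorithm: keep the user glued to the
    # screen. Nothing in this objective mentions truth or society.
    return sorted(posts, key=lambda p: p["predicted_watch_secs"], reverse=True)

for post in rank_feed(posts):
    print(post["predicted_watch_secs"], post["title"])
```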

[00:38:26]

Again, look at Twitter, at X, as an example. People often want to know what is trending, which stories get the most attention. If everybody's interested in a particular story, I also want to know what everybody's talking about. And very often it's the bots that are driving the conversation, because a particular story initially gets a lot of traction, a lot of traffic, because a lot of bots retweet it. And then people see it, and they don't know it's bots; they think it's humans. So they say, oh, lots of humans are interested in this, so I also want to know what's happening. And this draws more attention. This should be forbidden. Very basically, you cannot have AIs pretending to be human beings. These are fake humans, counterfeit humans. If you see activity online and you think it's human activity, but actually it's bot activity, this should be banned. And it doesn't harm the free speech of any human being, because it's a bot. It doesn't have freedom of speech.
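
(Editor's note: a toy simulation of the bot-amplification loop Harari describes: bots seed one story, raw counts put it on top of "trending", and humans then amplify it because it looks popular. All numbers are invented.)

```python
import random

random.seed(1)
stories = {"school funding report": 0, "outrage story": 0}

# 50 bots all retweet one story; 100 humans share at random.
for _ in range(50):
    stories["outrage story"] += 1
for _ in range(100):
    stories[random.choice(list(stories))] += 1

# Humans check "what's trending" and pile onto the apparent winner,
# not knowing the initial traction came from bots.
trending = max(stories, key=stories.get)
stories[trending] += 200

print(stories)  # the bot-seeded story now dominates the conversation
```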

[00:39:38]

I was thinking a lot about what you said, that these algorithms are actually running the world. So if the algorithms are deciding what I see based on what I spend my time looking at, because the platforms want to make more money, and if I have an innate sort of predisposition to spend more time focused on things that scare me...

[00:39:57]

Yeah.

[00:39:58]

Then you just have to give me a couple of years, and every year that goes past, I'll become more fearful, more...

[00:40:07]

It reinforces your own weaknesses. I mean, it's like the food industry. The food industry discovered we like food with a lot of salt and fat in it, and gives us more of it, and then it says, but this is what the customers want, what do you want from us? It's the same thing, but even worse, with these algorithms, because this is food for the mind. Yes, humans have a tendency that if something is very frightening, or something fills them with anger, they focus on it and they tell all their friends about it. But to artificially amplify it is just not good for our mental health and social health. It is using our own weaknesses against us, instead of helping us deal with them.

[00:40:55]

Is it fair to say, and this is me just jumping to conclusions a little bit, but is it fair to say that in a world where you remove restrictions around blocking certain characters, right-wing characters whose messages are maybe based on immigration, etcetera, so they're all allowed on every platform, and then you program the algorithm to be focused on revenue, that eventually more people will become right-wing? And I say that in part because it's a right-wing narrative to say that immigrants are bad. And, you know, I'm not saying that the left are innocent, because they're absolutely not. But I'm saying that the fearful narratives, the fear, seems to come more from the right, in my opinion. Especially in the UK, the fear was about immigrants: these people are going to take your money, and all these kinds of things.

[00:41:46]

I think the key issue is not to label it as a right or left issue, because, again, democracy is a conversation, and you can have a conversation only if you have several different opinions. And I think it should be okay to have a conversation about immigration, that people should be able to have different opinions about it. That's fine. The problem starts when one side vilifies and demonizes anybody who doesn't think like them, and you see it to some extent from both sides. But in the case of immigration, you have these conspiracy theories that anybody who supports immigration, for instance, wants to destroy the country; they are part of this conspiracy to flood the country with immigrants and change its nature, and whatever. And this is the problem for democracy. Once you believe that people who don't think like you are not just your political rivals but your enemies, that they are out to destroy you, that they intend to destroy your way of life, your group, then democracy collapses, because democracy cannot work between enemies. It works if you think that the other side is wrong, but that they are still essentially good people who care about the country, who care about me, but who have different opinions.

[00:43:21]

If you think that they are your enemies, that they are trying to destroy you, then the election becomes like a war, because you are fighting for your survival. You will do anything to win the election, because your survival is at stake. If you lose, you have no incentive to accept the verdict. If you win, you only take care of your tribe and not of the enemy tribe.

[00:43:45]

What if you don't believe the election is legitimate?

[00:43:48]

Then democracy can't function. Again, this is basic: democracy can't exist in just any conditions. It's like a delicate plant that needs certain conditions in order to survive and to flourish. And one condition, for instance, is that you have information technologies that allow the conversation. Another condition is that you trust the institutions. If you don't trust the institution of elections, it doesn't work. And a third condition is that you need to think that the people on the other side of the political divide are my rivals, but they are not my enemies. Now, the problem with what's happening to democratic conversations, because of this tendency to go to more and more extremes, is that it creates the impression that the other side is an enemy. And this is a problem not just for the right but also for the left; on both sides, you see this feeling that the other side is an enemy and that its positions are completely illegitimate. And if we reach that point, then the conversation collapses. And it should be possible to have complex conversations and discussions about difficult issues like immigration, like gender, like climate change, without seeing the other side as an enemy, which was possible for generations.

[00:45:20]

So why is it that it now seems to have become impossible to talk with the other side or to agree about anything?

[00:45:30]

We have a big election in the United States this year.

[00:45:32]

Very big one. Yeah.

[00:45:34]

Do you think a lot about it?

[00:45:37]

Yes, yes. I mean, it seems like it would be a coin toss, like 50-50. You know, elections become really an existential issue if there is a chance they will be the last elections. If one side intends to simply change the rules of the game if it comes to power, then it becomes existential. Because, again, democracy works on the basis of self-correcting mechanisms. This is the big advantage of democracy over dictatorship. In a dictatorship, a dictator can make a lot of good decisions, but sooner or later they will make a bad decision, and there is no mechanism in a dictatorship to identify and correct such mistakes.

[00:46:27]

Like Putin.

[00:46:28]

Yeah, there is just no mechanism in Russia that could say Putin made a mistake, he should go, he should let somebody else try a different course of action. This is the great advantage of democracy: you try something, it doesn't work, you try something else. But the big problem is, what if you choose someone who then changes the system, neutralizes its self-correcting mechanisms, and then you cannot get rid of them anymore? This is what happened, for instance, in Venezuela. Originally, Chávez and the Chavista movement came to power democratically. People said, let's try this. And now, in the last elections, a couple of weeks ago, the evidence is very, very clear that Maduro lost big time. But he controls everything, the election committee, everything, and he claims, no, I won. And they destroyed Venezuela. Something like a quarter of the population has fled the country, which was one of the richest countries in South America before, and they just can't get rid of the guy.

[00:47:34]

Surely that will never happen in the West.

[00:47:36]

Oh, don't say never in history. History can catch up with you, whoever you are.

[00:47:43]

That's one of the illusions we...

[00:47:45]

Venezuela was part of the West in many ways. Still is.

[00:47:49]

This is one of the illusions we live under, though. We think that that can never happen to the UK or the United States or Canada, these sort of, quote unquote, civilized nations.

[00:48:00]

You know, according to some measurements, democracy in the United States is quite new and quite fragile, if you think about it in terms of who gets to vote, for instance. So, again, I don't know what the chances are. But even if there is a 20% chance that a Trump administration would change the rules of the game of American democracy, for instance by changing the rules about who votes or how you count votes, in such a way that it becomes almost impossible to get rid of them, that's not outside the realm of the possible in historical terms.

[00:48:48]

Do you think it's possible that Trump will do that?

[00:48:50]

Yes. I mean, you saw it on January 6. The most sensitive moment in every democracy is the moment of the transfer of power. And the magic of democracy is that democracy is meant to ensure a peaceful transfer of power. As I said, you choose one party, you give them a try; after some time, if people say they didn't do a good job, let's try somebody else. And, you know, in the United States they hold the biggest power in the world. The president of the United States has enough power to destroy human civilization, all these nuclear missiles, all these armies. And he loses the election, and he says, okay, I give up all this power and I let the other guy try. This is amazing. And this is exactly what Trump didn't do. From the beginning, I mean even in 2016, he refused. They asked him directly: if you lose the election, will you accept the results? And he said no. And in 2020, he did not hand over power peacefully; he tried to prevent it. And the fact that he's now running again... I think to some extent the lesson he got from January 6 is: I can basically get away with anything, at least with my people, with my base. It was like a test, a trial.

[00:50:18]

If I do this extreme thing and they still support me afterwards, it basically means they will support me no matter what I do.

[00:50:29]

I'm wondering, in a world of such a fragile democracy, when information flows and networks are disrupted by something like AI, with misinformation and disinformation, and the ability for me to make a video, I could make a video right now of Donald Trump speaking and saying something in his voice, and I could help that video go viral. Like, how do you hold together democracy and communication when you don't believe anything that you're seeing online?

[00:50:57]

Hmm.

[00:50:58]

And we're just at the start of this now.

[00:51:00]

We haven't seen anything yet. These are just really the first baby steps.

[00:51:05]

I'm going to play a video on this screen right now so people can see. And for those listening, you'll just hear it. I'm going to play a video that Isaac over there in the corner of the room, made of me speaking in this chair. And it wasn't me, and I didn't say it, and I wasn't in this chair.

[00:51:18]

Hey there.

[00:51:19]

This is AI Steve. Do you think I'll be able to take over the Diary of a CEO one day?

[00:51:23]

Leave your comments below.

[00:51:25]

And it sounds exactly like me. Identical. And it's not me. And I wonder about this: most of us get our political information, and our information generally, from social media now. And if I can't believe anything that I'm seeing, because it's all easy to make, some kid in Russia in their bedroom can make a video of the prime minister here, then I don't know where we get our information from anymore. How are we...

[00:51:46]

The answer is institutions. We've been in this situation many times before in history, and the answer is always the institutions. You cannot trust the technology. You trust the institution that verifies the information. Think about it like with print: you can write anything you want on a piece of paper. You can write "the prime minister of Britain said", and then you open quotation marks and you put something into the mouth of the prime minister. You can write anything you want. And when people read it, they don't believe it, or they shouldn't believe it. Just because it's written that the prime minister said it doesn't mean that it's true. So how do we know which pieces of paper to believe? Institutions. We will believe, or there's a greater chance we will believe, if on the front page of the New York Times or the Sunday Times or the Guardian it says "the British prime minister said", open quotation marks, blah, blah, blah. Because we don't trust the paper or the ink; we trust the institution of the Guardian or the Wall Street Journal or whatever. With videos, we never had to do that, because nobody could fake them.

[00:53:01]

So we trusted the technology. If we saw a video, we said, this has to be true. But when it becomes very easy to fake videos, then we revert to the same principle as with print: we need an institution to verify it. If we see the video on the official website of CNN or of the Wall Street Journal, then we believe it, because we believe the institution backing it. And if it's just something on TikTok, we know that, you know, any kid can do that; why should I believe it? Now we are in the transition period. We are still not used to it. So when we see a video of Donald Trump or Joe Biden, the video still gets to us, because we grew up in a time when it was impossible to fake it. But I think very quickly people will realize you can't trust videos; you can only trust the institutions. And the question is, will we be able to produce, to create, to maintain trustworthy institutions fast enough to save the democratic conversation? Because if not, if you can't believe anything, this is the ideal for dictators. When you can't trust anything, the only system that works is a dictatorship.
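
(Editor's note: a minimal sketch of "trust the institution, not the video." The newsroom signs the hash of the video file with a key only it holds, and a verifier checks that the bytes haven't changed since signing. Real systems use public-key signatures and provenance standards such as C2PA; the HMAC here, from Python's standard library, just shows the principle, and the key and video bytes are placeholders.)

```python
import hashlib
import hmac

NEWSROOM_KEY = b"hypothetical-secret-held-by-the-institution"

def sign(video_bytes: bytes) -> str:
    """The institution vouches for these exact bytes."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(NEWSROOM_KEY, digest, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, signature: str) -> bool:
    """Anyone holding the key can check the institution's endorsement."""
    return hmac.compare_digest(sign(video_bytes), signature)

video = b"...raw video bytes..."
tag = sign(video)
print(verify(video, tag))              # True: the institution vouches
print(verify(video + b"edited", tag))  # False: altered after signing
```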

[00:54:22]

Because democracy works on trust, but dictatorship works on terror, on fear. In a dictatorship, you don't need to trust anything; you fear. For democracy to work, you need to trust, for instance, that some information is reliable, that the election committee is impartial, that the courts are just. And if more and more institutions are attacked and people lose trust in them, then democracy collapses. Going back to information: one option is that the old institutions, like newspapers and TV stations, will be the institutions that we trust to verify certain videos, or we will see the emergence of new institutions. And again, the big question is whether we'll be able to develop trust in them. And I specifically say institutions and not individuals, because no large-scale society, and especially not a democratic society, can function without trustworthy bureaucratic institutions.

[00:55:36]

And will those bureaucratic institutions be AI?

[00:55:41]

That's the big question, because increasingly AIs will be the bureaucrats.

[00:55:46]

And what do you mean by bureaucrats? What's the word bureaucrat? What does that mean?

[00:55:49]

Oh, that's a very important question, because human civilization runs on bureaucracy.

[00:55:54]

Bureaucrats are essentially officials in government that...

[00:55:57]

Not just in government. I mean, the origin of the word bureaucrat comes from French, from the 18th century. And bureaucracy means the rule of the writing desk: to rule the world, or to rule society, with pens and papers and documents. Like the example we gave at the very beginning about ownership: you own a house because there is a document in some archive that says you own it. And a bureaucrat produced this document. And if you now need to retrieve it, then this is the job of a bureaucrat, to find the right document at the right time. And all big systems run on this. Hospitals and schools and corporations and banks and sports associations and libraries, they all run on these documents, and on the bureaucrats who know how to read and write and find and file documents. One of our big problems is that it's difficult for us to understand bureaucratic systems, because they are a very recent development in human evolution. And this makes us suspicious of them, and we tend to believe all kinds of conspiracy theories about the deep state and about what's going on in all these bureaucracies.

[00:57:26]

And it's really complicated, and it's going to get more complicated as more of the decisions are made by AI bureaucrats. An AI bureaucrat means that decisions, like how much money to allocate to a particular issue, will no longer be made by a human official; they will be made by an algorithm. And when people ask, why is the sewage system broken, why didn't they give enough money to fix it, the answer will be: I don't know, the algorithm just decided to give the money to something else.

[00:58:01]

Why will bureaucracies be run by AI over people? Like, why will a nation at some point decide that, in fact, AI is better at making these decisions?

[00:58:12]

First of all, it's not a future development; it's already happening. More and more of the decisions are being made by AIs. And this is just because the amount of information you need to take into account is enormous, and it's very difficult for humans to do it. It's much easier for the AIs to do it.

[00:58:33]

All these people, you know, bureaucrats, lawyers, accountants... I always wonder what humans are going to be left to do. In your book, you say that AI is going so far beyond human intelligence that it should actually be referred to as alien intelligence. Now, if it goes so far beyond human intelligence, it's my assumption that most of the work that we do is based on intelligence. So even me doing this podcast now, this is me asking questions based on information that I've gathered, based on what I think I'm interested in, but also based on what I think the audience will be interested in. And compared to AI, I'm like a little monkey, do you know what I mean? If an AI has an IQ that is 100 times mine, and a source of information that is a million times bigger than mine, there's no need for me to do this podcast. I can get an AI to do it, and in fact, an AI can talk to an AI and deliver that information to a human. But then, if we look at most industries, like being a lawyer, accountancy, I mean, a lot of the medical profession is based on information.

[00:59:40]

And driving. I mean, the biggest employer in the world is the profession of driving, whether it's delivery or Uber or whatever it is. Where do humans belong in all this?

[00:59:50]

Anything which is just information in, information out, is ripe for automation. These are the easiest jobs to automate.

[00:59:59]

Like being a coder.

[01:00:01]

Like being a coder, or like being an accountant, at least certain types of accountants, lawyers, doctors; they are the easiest to automate. If the only thing a doctor does is take information in, all kinds of results of blood tests and whatever, and put information out, diagnosing the disease and writing a prescription, this will be easy to automate in the coming years and decades. But a lot of jobs require social skills and motor skills as well. If your job requires a combination of skills from several different fields, it's not impossible, but it's much more difficult to automate. So if you think about a nurse that needs to replace a bandage for a crying child, this is much, much harder to automate than a doctor that just writes a prescription, because this is not just data. The nurse needs good social skills to interact with the child, and motor skills to replace the bandage. So this is harder to automate. And even for people who just deal with information, there will be new jobs. The problem will be the retraining, and not just, you know, retraining in terms of acquiring new skills, but psychological retraining.

[01:01:27]

How do you kind of reinvent yourself in a new profession, and do it not once but again and again and again? Because as the AI revolution unfolds, and we are just at the very beginning of it, we haven't seen anything yet, there will be old jobs disappearing and new jobs emerging, but the new jobs will rapidly change and vanish, and then there will be a new wave of new jobs, and people will have to reinvent themselves four, five, six times to stay relevant. And this will create immense psychological stress.

[01:02:01]

So many of the big companies are also working at the same time on humanoid robots. There's this humanoid robot race going on. And by humanoid robots, I mean, Tesla have their humanoid robot, I think it's called Optimus, which they're developing, and it'll cost x thousands of pounds. And I watched a video of it recently where it can do quite delicate motor-skill-based stuff. So it can probably clean the house, probably work on the production line, probably put things in boxes. And I just wonder, when we say people are going to lose their jobs, in a world where you have humanoid robots and you have intelligence that's beyond us, and you combine the two, so the humanoid robots are very, very intelligent, I don't know. Where do the unemployed go to find these new professions? Obviously, it's difficult to forecast the new professions of the future; history tells us that. But I can't figure out what the new professions are. I mean, my girlfriend does breath work. I guess the breath work part is quite easy to disrupt. But then she takes women away for retreats in Portugal and stuff. So I'm like, okay, she's going to kind of be safe, because these women are going there to connect with humans and to be in this little special place offline, intentionally.

[01:03:10]

So retreats, you'll probably be fine.

[01:03:13]

Anything that, you know... there are things that we want in life which are not just about solving problems. Like, I'm sick, I want to be healthy, I want my problem solved. But there are many things where we want to have a connection. Like, if you think about sports: machines have been able to run much faster than people for a very long time now, and we just had the Olympics, and people are not very interested in seeing robots running against each other or against people, because what really makes sports interesting in the end is the human weaknesses and the ability of humans to deal with their weaknesses. And human athletes still have jobs, even though in many events, like running, a machine can run much faster than the world champion.

[01:04:09]

I thought about this the other day.

[01:04:11]

And another example is priests. Like, one of the easiest jobs to automate is the priesthood, at least of certain religions, because you just need to repeat the same texts and gestures again and again in specific situations. Like, if you have a wedding ceremony, the priest just needs to repeat the same words, and there you are, you're married. Now, we don't think about priests as being in danger of being replaced by robots, because what we want from a priest is not just the mechanical repetition of certain words and gestures. We think that only another frail, flesh-and-blood human, who knows what pain and love are and who can suffer, only they can connect us to the divine. So most people would not be interested in having the wedding conducted by a robot, even though technically it's very easy to do. Now, the big question, of course, is what happens if AI gains consciousness? This is like the trillion-dollar question of AI consciousness. Then all bets are off. But that's a different and very, very big discussion. I mean, whether it's possible, how would we know, and so forth.

[01:05:33]

Do you think it's possible?

[01:05:35]

We have no idea. I mean, we don't understand what consciousness is. We don't know how it emerges in the organic brain. So we don't know if there is an essential connection between consciousness and organic biochemistry, such that it can't arise in an inorganic, silicon-based computer. There is a big confusion, it should be said again, between consciousness and intelligence. Intelligence is the ability to reach goals and solve problems. Consciousness is the ability to feel things like pain and pleasure and love and hate. Humans and other animals, we solve problems through our feelings. Our feelings are not something on the side. They are a main method for dealing with the world, for solving problems. Now, so far, computers solve problems in a completely different way than humans. Again, they are alien intelligence. They don't have any feelings. When they win a game of chess, they are not joyful. When they lose a game, they are not sad. They don't feel anything. Now, we don't know how organic brains produce these feelings of pain and pleasure and love and hate. So this is why we don't know whether an inorganic structure based on silicon and not carbon will be able to generate such things or not.

[01:07:11]

That's, I think, the biggest question in science. And so far, we have no answer.

[01:07:18]

Isn't consciousness just like a hallucination? Isn't it just an illusion: I think I'm conscious because I've got the circuitry which tells me that I am, effectively. It tells me through a bunch of feelings and things that I'm conscious. Like, I think I'm looking at you now. I think I can see you.

[01:07:35]

The feeling is real. I mean, even if we are all... it's like the Matrix, and we are all in...

[01:07:40]

How do you know it's real?

[01:07:41]

It's the only real thing in the world. I mean, there is nothing... everything else is just conjecture. We only experience our own feelings. What we see, what we smell, what we touch, this we actually experience. This is real. Then we have all these theories about why do I feel pain: oh, it's because I stepped on a nail, and there is such a thing in the world as a nail. And whatever. It could be that we are all inside a big computer on the planet Zirconde, run by super-intelligent mice.

[01:08:12]

If I spoke to an AI, I could get an AI to tell me that it feels pain and sadness.

[01:08:18]

That's a big problem, because there is a huge incentive to train AI's to pretend to be alive, to pretend to have feelings. And we see that there is a huge effort to produce such AI's. And in truth, because we don't understand consciousness, we don't have any proof even that other humans have feelings. I feel my own feelings, but I never feel your feelings. I only assume that you're also a conscious being. And society grants this status of a conscious entity not only to humans, but also to some animals, not based on any scientific proof, but based on social convention. Like, most people feel that their dogs are conscious, that their dogs can feel pain and pleasure and love and so forth. So most societies accept that dogs are sentient beings and that they have some rights under the law. Now, as for AI: even if AI has no feelings, no consciousness, no sentience whatsoever, but it becomes very good at pretending to have feelings and convincing us that it has feelings, then this will become a social convention. People will feel that their AI friend is a conscious being and therefore should be granted rights.

[01:09:49]

And there is even already a legal path for how to do it, at least in the United States. You don't need to be a human being in order to be a legal person.

[01:10:01]

It's funny, because you kind of alluded jokingly to the fact that we might just be in, like, a simulation. It was, well, maybe we're just in a simulation; could be. And it's funny because in a world of AI, I think my belief in that as a possibility has only increased, that this is in fact just a simulation. Because I've watched us go from when I was born, not really having Internet access, to now being able to kind of speak to this alien on my computer that can do things for me, and having virtual reality experiences which are sometimes quite indistinguishable from reality, where I fall into the trap of believing that I am inside Squid Game because I've got this headset on. And you play it forward and you play it forward and you play it forward, and you imagine any rate of improvement. Then I hear the arguments for simulation theory and I go, do you know, probably if you play this forward 100 years, at the rate of trajectory we're on, then we will be able to create information networks and organisms, in a laboratory or in a computer, that don't necessarily realize they're in the computer.

[01:11:11]

Especially with, like, what's going on with...

[01:11:14]

It's already happening to some extent. You know, these information bubbles: more and more people live inside them. It's still not the whole physical world, but you get the same event, and people on, say, different parts of the political spectrum just can't agree on anything. They live in their own matrixes. And, you know, when the Internet came along for the first time, the main metaphor was the web, the World Wide Web. A web is something that connects everything. And now the main metaphor, which this simulation theory represents, is the cocoon. It's a web that turns on you and encloses you from all sides, so you can no longer see anything outside. And there could be other cocoons with other people in them, and you have no way to get to them.

[01:12:15]

Yeah.

[01:12:16]

Nothing that happens in the world can connect you anymore because you're in different cocoons.

[01:12:21]

You've only got to look at someone else's phone. You've only got to look at someone else's Twitter or X or Instagram.

[01:12:28]

Is this the same reality?

[01:12:29]

It is so different. Do you know what I was talking about over the weekend? My friend was sat to my left, scrolling. He clicked on the Discovery section, which is where you find new content. I looked down at his phone and was like, it's all Liverpool Football Club. The entire feed is Liverpool. And my entire feed is completely different. I was just thinking, wow, he lives in a completely different world to me, because he's a Liverpool fan and I'm a Manchester United fan. And to think that when you open your phone, and many of us are spending up to nine hours a day on our mobile phones, you're experiencing a completely different window into a completely different world than I am.

[01:13:09]

And this is, you know, a very ancient fear, because, for instance, Plato wrote exactly about that. The most famous parable, I think, from Greek philosophy is the allegory of the cave, in which Plato imagines an imaginary scenario of a group of prisoners chained inside a cave with their faces to a blank wall, on which shadows are being projected from behind them, and they mistake the shadows for reality. And he was basically describing people in front of a screen, mistaking the screen for reality. And you have the same thing in ancient India, with Buddhist and Hindu sages talking about Maya, which is the world of illusions, and the deep fear that maybe we are all trapped inside a world of illusions, that the most important things we fight over, the wars we fight, we fight over illusions in our minds. And this is now becoming technically possible. Previously, these were philosophical thought experiments. Now, part of what is interesting, as a historian, about the present era is that a lot of ancient philosophical problems and discussions are becoming technical issues. You can suddenly realize Plato's cave in your phone.

[01:14:52]

So scary. I find it really scary, because you're right. I think right now some people might say that they have some kind of grasp of the ranking system, of why something shows up when I search for it or whatever. But as these alien intelligences become more and more powerful, of course we'll have less and less understanding, because we're handing over the decision-making.

[01:15:14]

In some industries, they are now completely the kingmakers. I'm here on a book tour. I wrote Nexus, so I go from podcast to podcast, from TV station to TV station, to talk about my book. But the entities I'm really trying to impress are the algorithms, because if I can get the attention of the algorithms, the humans will follow. You know, that's our reality. We are basically carbon creatures in a silicon world.

[01:15:51]

I used to think we were in control, though, and I feel like the silicon is in control.

[01:15:57]

Control is shifting. We are still in control to some extent. We are still making the most important decisions, but not for long. And this is why we have to be very, very careful about the decisions we make in the next few years, because in ten years, in 20 years, it could be too late. By then, the algorithms will be making the most important decisions.

[01:16:22]

You talk about a couple of big dangers you see with the algorithms in AI and the shift and disruption in information. One of them is this alignment problem. How would you explain the alignment problem to me, in a way that's simple to understand?

[01:16:37]

So the classical example is a thought experiment invented by the philosopher Nick Bostrom in 2014, which sounds crazy, but bear with it. He imagines a super-intelligent AI computer which is bought by a paperclip factory. And the factory manager tells the AI: your goal, the reason I bought you, your entire existence, is to produce as many paperclips as possible. That's your goal. And then the AI conquers the entire world, kills all humans, turns the entire planet into factories for producing paperclips, and even begins to send expeditions to outer space to turn the entire galaxy into a paperclip production industry. And the point of the thought experiment is that the AI did exactly what it was told. It did not rebel against the humans. It did exactly what the boss wanted. But, of course, the strategy it chose was not aligned with the real intentions, with the real interests, of the human factory manager, who just couldn't foresee that this would be the result. Now, this sounds outlandish and ridiculous and crazy, but it already happened to some extent, and we talked about it.

[01:18:09]

This is the whole problem with social media and user engagement. In the very same years that Nick Bostrom came up with this thought experiment, in 2014, the managers of Facebook and YouTube told their algorithms: your goal is to increase user engagement. And the algorithms of social media conquered the world and turned the whole world into user engagement, which was what they were told to do. We are now very, very engaged. And again, they discovered that the way to do it is with outrage and with fear and with conspiracy theories. And this is the alignment problem. When Mark Zuckerberg told the Facebook algorithms to increase user engagement, he did not foresee, and he did not wish, that the result would be the collapse of democracies, a wave of conspiracy theories and fake news, hatred of minorities. He did not intend it. But this is what the algorithms did, because there was a misalignment between the goal that was defined for the algorithm and the interests of human society, and even of the human managers of the companies that deployed these algorithms. And this is still a small-scale disaster, because the social media algorithms that created all this social chaos over the last ten years are very, very primitive AI.
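
To make the misalignment described above concrete, here is a minimal toy sketch in Python. Everything in it is hypothetical and invented for illustration; it is not any real platform's code or data. The point it shows: an algorithm given only "maximize engagement" as its goal will, if outrage scores highest on that proxy, fill the feed with outrage without ever being instructed to do harm.

```python
# Hypothetical content pool: (item, predicted engagement probability).
# The labels and numbers are made up purely for illustration.
CONTENT_POOL = [
    ("cat video",           0.30),
    ("balanced news piece", 0.22),
    ("outrage post",        0.55),
    ("conspiracy theory",   0.61),
]

def recommend(pool, slots=2):
    """The only goal the algorithm was given: maximize engagement.
    Nothing in this objective mentions truth, civility, or democracy."""
    return sorted(pool, key=lambda item: item[1], reverse=True)[:slots]

print(recommend(CONTENT_POOL))
# [('conspiracy theory', 0.61), ('outrage post', 0.55)]
# The feed fills with outrage even though nobody "told" it to spread outrage.
```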

[01:19:56]

This is like the amoebas: if you think about the development of AI as an evolutionary process, this is still the amoeba stage, the amoeba being...

[01:20:07]

The very simple, the very simple life forms.

[01:20:10]

The beginning, like the single-cell life form. We are still, in evolutionary terms... in organic evolution, we are like billions of years before we will see the dinosaurs and the mammals or the humans. But digital evolution is billions of times faster than organic evolution. So the distance between an AI amoeba and the AI dinosaurs could be covered in just a few decades. If ChatGPT is the amoeba, what would the AI Tyrannosaurus rex look like? And this is where the alignment problem becomes really disconcerting, because if so much damage was done by giving the wrong goal to a primitive social media algorithm, what would be the results of giving a misaligned goal to a T. rex AI in 20 or 30 years?

[01:21:13]

The issue at the heart of this is, some people might think, okay, just give it a different goal. But when you're dealing with private companies that are listed on the stock market, there really is only one goal that counts: make money. That's what survival depends on. So all of the platforms have to say, the goal of this platform is to make more money and to get...

[01:21:33]

More attention. Because, also, it's mathematically easy. And there is a huge, huge problem in how to define for AI's and algorithms the goal in a way they can understand. Now, the great thing about "make money" or "increase user engagement" is that it's very easy to measure mathematically. One day you have a million hours being watched on YouTube; a year later, it's 2 million. Very easy for the algorithm to see: hey, I'm making progress. But let's say that Facebook had told its algorithm, increase user engagement in a way that doesn't undermine democracies. How do I measure that? Who knows what the definition is for the robustness of democracy? Nobody knows. So defining the goal for the algorithm as "increase user engagement, but don't harm democracy" is almost impossible. This is why they go for the easy goals, which are the most dangerous.
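
A hedged sketch of that measurability gap, again with invented names and numbers: the engagement goal reduces to one number you can compute every day, while "don't undermine democracy" has no agreed definition, dataset, or unit, so there is nothing for an optimizer to check itself against.

```python
def engagement_progress(hours_now: float, hours_last_year: float) -> float:
    # Trivial to measure: 1 million hours then, 2 million now -> 2.0x progress.
    return hours_now / hours_last_year

def democracy_harm(feed) -> float:
    # Nobody knows how to compute this: there is no accepted definition of
    # the "robustness of democracy", so an optimizer cannot train against it.
    raise NotImplementedError("no accepted metric exists")

print(engagement_progress(2_000_000, 1_000_000))  # 2.0 -- visible "progress"
```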

[01:22:40]

But even in that scenario, if I'm the owner of a social network and I say, increase user engagement but don't harm democracy, the problem I have is that my competitor who leaves out the second part and just says "increase user engagement" is going to beat me, because they're going to have more users, more eyeballs, more revenue. Advertisers are going to be happier. Then my company is going to falter and investors are going to pull out.

[01:23:01]

That's a question, because there are two things to take into consideration. First of all, you have governments. Governments can regulate, and they can penalize a social media company that doesn't define its goals in a socially responsible way, just as they penalize newspapers or TV stations or car companies that behave in an antisocial way. The other thing is that humans are not stupid and self-destructive; we would like to have better products, in the sense of also socially better products. And I gave earlier the example of food diets. Think how much... yes, the food companies discovered that if they fill a product artificially with lots of fat and sugar and salt, people will like it. But people discovered that this is bad for their health. So you now have, for instance, a huge market for diet products, and people are becoming very aware of what they eat. The same thing can happen in the information market.

[01:24:13]

The cost, though, is... like 70, 80% of people in the US have chronic disease and are obese. And, you know, life expectancy now looks like it's going the other way a little bit in the Western world. I don't know, I just feel like policing consumption of goods like alcohol, nicotine, and food seems much simpler than policing information and the flow of information. Beyond, you know, racism or inciting violence, I don't know how you police it.

[01:24:51]

We already covered the two most basic and powerful tools: holding companies liable for the actions of their algorithms, not for the content that the users produce, but for the actions of the algorithms. I don't think we should penalize Twitter or Facebook if somebody posts a racist post. I would be very careful about penalizing Facebook for that, because then who decides what is racism, and so forth? But if the algorithm of Facebook deliberately spreads some racist conspiracy theory, that's the algorithm. That's not... you want free speech?

[01:25:35]

How do you know it's a racist conspiracy theory, though?

[01:25:37]

Okay, so now we get to the difficult conversation. But this is something that we have the courts for. And I would be very, very careful about having the courts judge the content produced by individual users. But when it comes to algorithms deliberately, routinely spreading a particular type of information, like a conspiracy theory, we can involve the courts. The key issue is who has liability: it's the company that is liable for what the algorithm is doing, not the human individual for what they are saying. And another key distinction here is between private and public. Part of the problem is the erasure of the boundary between the two. I think that humans have a right to stupidity in private. In your private space, with your friends and with your family, you have a right to stupidity. You can say stupid things, you can tell racist jokes, you can tell homophobic jokes. It's not good, it's not nice, but you're a human being, you're allowed to do that. But not in public. I mean, even for politicians: as a gay person, if the prime minister tells a homophobic joke in private, I don't need to care about that.

[01:27:05]

That's his or her business. But if they say it in public, on television, that's a huge problem. Now, traditionally it was very easy to distinguish private from public. You are in your private house with a group of friends, you say something stupid: that's private, it's nobody's business. You go to the town square, you stand on a pedestal and you shout something to thousands of people: that's public. Here you can be punished if you say something racist or homophobic or outrageous. But it was easy for you to know. Now the problem is, you go, let's say, on WhatsApp, you think you're just talking with two of your friends, you say something really stupid, and then it goes viral and it's all over the place. And I don't have an easy solution for that. But one measure, which is adopted by some governments, is, for instance, that people who have a large following are held to a different standard than people who don't, even on the most basic thing of identifying yourself as a human being. We don't want a situation where everybody has to get some certification from the government to talk with their friends on WhatsApp.

[01:28:28]

But if you have 100,000 followers online, we need to know that you are not a bot, that you're actually a human being. And again, this is not covered by freedom of speech, because bots don't have freedom of speech.
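
As a sketch of the kind of rule being described here, under invented assumptions (the threshold and field names are illustrative, not any real platform's policy or API): ordinary small accounts keep full anonymity, but amplification past a large-following threshold requires proof that a human is behind the account.

```python
from dataclasses import dataclass

FOLLOWER_THRESHOLD = 100_000  # hypothetical cutoff from the conversation

@dataclass
class Account:
    handle: str
    followers: int
    verified_human: bool  # e.g., passed some proof-of-personhood check

def may_be_amplified(account: Account) -> bool:
    """Small accounts keep anonymity; large ones must be verifiably human."""
    return account.followers < FOLLOWER_THRESHOLD or account.verified_human

print(may_be_amplified(Account("small_anon", 120, False)))           # True
print(may_be_amplified(Account("big_bot_network", 500_000, False)))  # False
```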

[01:28:42]

It's a slippery slope, right? Because I've gone back and forth on this argument of anonymity and whether it's a good thing or a bad thing for social networks. And the rebuttal that I got when I leaned to the side of ID'ing people is that totalitarian governments will use that as a way to punish the people who are speaking.

[01:28:59]

The totalitarian governments are doing it whether we like it or not. It's not a question of: if the British do it, then the Russians will say, okay, so we'll also do it. The Russians are doing it anyway.

[01:29:11]

Will Americans start to do it? Will they start to... if someone speaks out against Trump and he has access to their identity and information, can he look them up and get them arrested?

[01:29:20]

If we reach the point where the courts allow such a thing, then we are in very deep trouble already. And what we should realize is that with the surveillance technology now in existence, a totalitarian government has so many ways to know who you are that that's not the main issue. Right.

[01:29:45]

You talked about the platforms being responsible for the consequences.

[01:29:49]

Yes.

[01:29:50]

In the UK, over the last month, we've had, I don't know if you've heard, lots of riots. And I think it was all triggered originally when news broke that someone had murdered some young children, and there was confusion, or a sort of misinformation, around that person's religion. And that meant that people probably...

[01:30:10]

So that's an excellent example, because if I personally, privately say to just two of my friends, I think the person who did it is X, I don't think you should be prosecuted for that. I could say it in a private living room, and it's the same thing if I say it on WhatsApp or on Facebook. But if a Facebook algorithm picks up this piece of fake news and starts recommending it to more and more users, then Facebook is liable for the action of its algorithms. You should be able to take it to court and say the algorithm deliberately recommended a piece of fake news. And again, if the fake news was produced by an influencer with a million followers, then he or she is also liable for that. But if a private individual in a private setting said something which is not true, it's fake news, and then an algorithm deliberately spread it, the main fault is with the algorithm, and the people who should be in jail are the managers of the company that owns the algorithm, not the individual who uttered the words. Going back to the riots issue, let's say that, I don't know,

[01:31:36]

the Guardian, on the day of the riots, decided to pick up a piece of this fake news and publish it on its front page. And now they take the editor of the Guardian to court, and he says, but I didn't write it, I just found this piece of fake news and decided to put it on the front page of the Guardian. Now, it would be obvious to us that the editor did something very, very wrong, and he or she might have to sit in jail. And it's not the problem of the person who originally produced the piece of fake news. If you're the editor of one of the biggest newspapers in the country and you decide to publish something on your front page, you had better be very, very sure that what you're publishing is the truth, especially if it can incite violence.

[01:32:29]

How would a social network owner know that? How would they be able to verify that everything is true at that scale?

[01:32:35]

Not everything. But if, for instance, something is likely to lead to violence, then the very first thing is a precautionary principle: first of all, do no harm. Again, I'm not asking Facebook to censor the piece of fake news. I'm only asking it: don't get your algorithms to spread it on purpose in order to get user engagement and make a lot of money. If you are not sure about it, just don't spread it. It's as easy as that.

[01:33:08]

How does it know it's fake news, versus thinking that it's actually really important, life-saving news?

[01:33:14]

That's the responsibility of the company. How does the editor of the Guardian, or of the Financial Times, or of the Sunday Times, know if something is true and whether it should be published on the front page? If you are now managing a social media company, you are managing one of the most powerful newspapers in the world, and you should have the same kind of responsibilities and the same kind of expertise. If you have no idea how to judge whether an algorithm should recommend something to millions of people, you are in the wrong business. If you can't stand the heat, get out of the kitchen. Don't run a social media company if you don't know what should be shown to millions of people.

[01:34:00]

It's very pertinent, because obviously Mark Zuckerberg's letter that he wrote this week says: I was approached by the FBI, who told me that Russia was trying to influence the elections, and they were given some information about this laptop story. Hunter Biden, who is Joe Biden's son, had this laptop story, which Facebook didn't know was real or not, and they thought maybe it was a Russian plant; that is, Russia had put the story there to try and make sure Joe Biden didn't win the elections. So Facebook deprioritized it, stopped it going viral and suppressed it. Turns out it was a real story, and it wasn't fake. And Mark Zuckerberg says he regrets suppressing it, because it was in fact a real story, and in suppressing it, he kind of influenced the election to some degree. So it's so complicated, to the point...

[01:34:52]

It's complicated to run a big media company. It's complicated to run the Wall Street Journal or Fox News. And what happens if the FBI comes to Fox News or to the Wall Street Journal and tells them, look, there is this story planted by the Russians, don't encourage it, and later on it turns out that it was wrong? It could happen. And as the manager of the Wall Street Journal, you need to deal with it. Do I trust the FBI? Under what conditions? Sometimes I should; sometimes I should be suspicious.

[01:35:27]

I feel like you're going to end up in jail if you're the editor of the Wall Street Journal. You can end up in jail either way, because either way you're influencing elections. And if you influence...

[01:35:37]

But that's the business. I mean, the real problem is when you have extremely powerful people like Zuckerberg or Elon Musk who pretend that they don't have power, that they don't have influence, that they don't shape elections. We have known for centuries that the owners and editors of newspapers shape elections, and therefore we hold them to certain standards. And now the owners and managers of platforms like Twitter and YouTube and Facebook have more power than the New York Times or the Guardian or the Wall Street Journal, and they should be held to at least the same degree of accountability. And their shtick, that "oh, we are just a platform, we just allow everybody to publish what they want", it doesn't work like that. We don't accept it with traditional media, so why should we accept it with them? That's the whole trick of these tech companies: again, we have thousands of years of history, and they tell us, oh, it doesn't apply to us. If you have a traditional industry like cars, it's obvious to everybody that you cannot put a new car on the road unless you've made some safety checks to make sure the car is safe.

[01:36:53]

You cannot put a new medicine on the market, or a new vaccine on the market, without safety checks. That's obvious, right? But when it comes to algorithms: no, no, no, that's a different set of rules. You can put any algorithm you want on the market; you don't need any safety rules. And even more basic than that, think about something like theft. You have the Ten Commandments: don't steal. And people know, yes, you shouldn't steal. Until it comes to information. Ah, no, no, no, it doesn't apply to information. I can take your information and, without your permission, do all kinds of things with it and sell it to third parties, and this is not stealing. "Don't steal" doesn't apply to my line of business. And this is what the tech giants have been doing in many cases over the last decade or two: telling us that history doesn't apply to them, that all the wisdom that humanity gained in a very painful way over centuries and thousands of years of dealing with dictatorships and with whatever, doesn't apply to the new technology. And it does. It does apply.

[01:38:01]

Do you ever feel tempted to just log off and just, like, go live in a field somewhere, maybe like a desert, maybe just create a little bit of a cult?

[01:38:10]

I do it every year.

[01:38:11]

Oh, really?

[01:38:11]

Yeah. I take a long meditation retreat of between 30 days and 60 days. This year, I plan, in December, after the book tour is over, to go for a 60-day meditation retreat in India and just completely disconnect. No smartphone, no Internet, not even books or writing paper. Just an information fast.

[01:38:35]

Why?

[01:38:36]

It's good for the mind. Again, like with food, too much intake isn't good for us. We need time to digest and to detoxify, and it's true of the mind as well. If you just keep bombarding it with more, you get addicted to the wrong things, you develop bad habits, and you need, or at least I need, time off in order to really digest everything that happened, and to decide what I want and what I don't want, what kind of habits and addictions I should try to get rid of, and also to get to know my own mind. When the mind is constantly bombarded by information from outside, it's so noisy that you cannot get to know it, because there is so much noise. But when the noise goes away, then you can start to understand: what is the mind? How does it function? How does it work? Where do thoughts come from? What is fear? What is anger? When you're boiling with anger because of something you've just read, you are focused on the object of your anger, but you can't understand the anger itself. The anger controls you. When you have an information fast, you can just observe: what happens to me when I'm angry? What happens to my mind, to my body?

[01:40:20]

How does it control me? And understanding what anger actually is, this is more important than any angry story in the world. It's very, very difficult. How many times do people stop and just, you know, try to get to know their anger, and not the object of the anger? Instead, what we do all the time is replay it. We heard something terrible that a politician we don't like said. I don't know, somebody's angry about Trump, so he replays it again: oh, he said this, he did that, he will do this, he will do that. And you don't get to know your anger that way.

[01:40:54]

I have about 50 different companies in my portfolio at Flight Group now, some of which I've invested in and some of which I've co-founded or founded myself. One thing I've noticed is that most companies don't put enough effort into the hiring process. In my mind, the first and most critical thing in business is assembling your group of people, because the definition of the word company is a group of people. And throughout all of my companies, whenever I'm looking to hire someone, my first port of call is LinkedIn Jobs, who, I'm happy to say, are also a sponsor of this podcast. They've helped us source professionals who we truly can't find anywhere else, even those who aren't actively searching for a new job but who might be open to a perfect role. In fact, over 70% of LinkedIn users don't visit other leading job sites. So if you're not looking on LinkedIn, you're probably looking in the wrong place. So today, I'm giving the diary of a CEO community a free LinkedIn job post. Head to LinkedIn.com/doac now and let me know how you get on. Terms and conditions apply. Everything I am, every goal I have, every company I've founded, this podcast, all rests on this tectonic plate I didn't even know existed, which is my health.

[01:41:59]

You remove my health, you remove everything I have. You remove my dog, I still have myself. You remove my girlfriend, I still have myself. But if you remove my health, I lose everything. So it has to be my first priority. It has to be number one. And I've orientated my life around that. One area of my health that people often overlook is my oral health, and a game changer for my routine has been Colgate Total, who are a sponsor of this podcast. Unlike ordinary toothpastes that only clean, Colgate Total really does provide superior 24-hour protection for your whole mouth. Colgate is the number one brand recommended by dentists, so join me in prioritizing your oral health. To learn more about Colgate Total's superior science, visit the link in the episode description below.

[01:42:41]

So interesting. I was playing out the scenarios in my head as you were speaking, of this future where there are almost these two species of humans. You have one species of human who are connected to the information highway through the Internet, through the Neuralink in their brain; they're hooked, and the algorithm is feeding them information and they're acting upon it and feeding it back. And then you have this other group of people who decided to reject that, who didn't get the Neuralink, who aren't trying to interface with AI, and who are living in a tribe in some jungle somewhere. My girlfriend said this to me many years ago; she goes, I think there's going to be a split. And I was kind of like, whatever. But now I can see why. As things get more extreme, you go, you know what, I'm going to make a decision here. And especially when I saw the Neuralink that Elon Musk's working on, that allows you to control computers with your brain.

[01:43:30]

And it allows the computer to control your brain also.

[01:43:33]

God, you're right.

[01:43:34]

And she didn't think about that. But I just imagined, and this is a question for everyone listening: if there's you and me, and I have the chip in my brain, the chip that humans now use to control computers, I am a different species to you, because I can control the... I can control my car downstairs. I can control the lights in this room. I can ask my brain questions and get the answers. My IQ becomes 5,000; yours is still 150 or 200. Yours is probably 250. But I'm a different species to you. I have such a huge competitive advantage over you that if you don't get the chip, you're screwed.

[01:44:15]

That's speciation. Yeah. Again, on a small scale, we saw it before in history. There were the people who adopted the written document and the people who rejected it, and they are not with us anymore, because the people who adopted the written document built these kingdoms and empires and conquered everybody else. And we are in danger of the same thing happening. And this is not a good thing, because it's not as if life was better for the people with the documents. In many cases, life was better for the hunter-gatherers who lived before.

[01:44:50]

So what's the solution? You know, having read your brilliant book Nexus: A Brief History of Information Networks from the Stone Age to AI, what is the solution? How do we stop the alignment problems, us all becoming paperclips, the social chaos, the misinformation, the silicon curtain, as you talk about in the book? How do we stop these things destroying our world? Is there hope? Are you optimistic?

[01:45:18]

The key is cooperation, connection between humans. The humans are still more powerful than the AI's. The problem is that we are divided against each other, and the algorithms, unintentionally, are increasing the divide. Again, this is the oldest rule of every empire: divide and rule. This was the rule of the Romans, of the British Empire. If you want to rule a place, you divide the people of that place against one another, and then it's easy to manipulate and control them. This is now happening to the entire human species with AI. Just as we had the Iron Curtain in the Cold War, now we have the silicon curtain, dividing not just China from the US, but also Democrats from Republicans, one person from another person, and all of us from the AI's, which increasingly make the decisions about all of it. We still have the power, for, I don't know, five years, ten years, 20 years, to make sure it doesn't go in a dystopian direction. But for that, we need to cooperate.

[01:46:29]

Are you optimistic?

[01:46:32]

I try to be a realist. I mean, I just came from Israel, and I saw a country destroying itself for no good reason whatsoever. It's a country that just pressed the self-destruct button, for no good reason. And it can happen on a global scale.

[01:46:51]

What do you mean, pressed the self-destruct button?

[01:46:53]

It's not just the war between Israelis and Palestinians, but Israeli society turning against itself: greater and greater division and animosity. And it's like a dark hole of anger and of violence, which is sucking more and more people in. All over the world, you now feel the shockwaves from this dark hole in the Middle East. And there is no good reason, there is no objective reason. I'll say something about the Israeli-Palestinian conflict: there is no objective reason for it. It's not as if there is not enough land between the Mediterranean and the Jordan River, so that people have to fight for the little land there is, or that there is not enough food. There is enough food for everyone to eat. There is enough land to build houses and hospitals and schools for everyone. Why do people fight? Because of different stories in their minds. They have these different mythologies: God gave this whole place just to us, you have no right to be here. And they fight over that. And this is a local or regional tragedy, but it can happen on a global scale. Again, if something ultimately destroys us, it will be our own delusions, not the AI's.

[01:48:18]

The AI's, they get their opening because of our weaknesses, because of our delusions.

[01:48:26]

Yuval, thank you so much for writing this book. I think this book is one of the most well-timed books that I've ever come across, because of everything that's happening in the world right now. And it really helped me to understand that the problem isn't necessarily me versus you, if you're on the other side of the aisle. The problem is information: the networks of information that we consume. Who's controlling those networks of information?

[01:48:49]

Somebody is manipulating us, not just to be on different sides, but to see each other as enemies.

[01:48:56]

And right now, that's a person. But it might not be soon.

[01:48:59]

It might not be a person. No.

[01:49:01]

And understanding that, I think, helps us focus on the root cause of issues that are sometimes hard to identify. That is: I think the problem is my neighbor, I think it's that person with a different color skin. But actually, if you look one level deeper, it's the information networks, and what I'm being exposed to, that are brainwashing me and creating those stories. And as you talk about in your previous books, stories are ultimately what run the world. Nexus is just a wonderful book at a wonderful time, one that helps us access this knowledge of the power of information and how it impacts democracy and relationships and society and business and everything in between, in a way that I hope will lead to action. And I think that is something to be optimistic about.

[01:49:49]

Yeah. Ultimately, I think most humans are good people. When you give people bad information, they make bad decisions. The problem is not with the humans, it's with the information.

[01:50:05]

Amen, Yuval. We have a closing tradition on this podcast, where the last guest leaves a question for the next guest, not knowing who they're going to be leaving it for.

[01:50:11]

Oh, okay.

[01:50:12]

And the question left for you is: what does it mean to be strong?

[01:50:26]

To accept reality as it is. To deal with reality without trying to hide it, make it disappear, or put a veil over it.

[01:50:40]

So interesting. I think you're right. I think you're right. Certainly not the answer I would have given. But, you know, you come...

[01:50:49]

What would you say?

[01:50:55]

Oh, what would I say? I guess I probably would have spoken about, like, perseverance in the face of a lot of different difficulties. And one of those is information, but it's just that idea of persevering towards whatever your subjective goal is, in the face of, and in spite of, a variety of different difficulties. Maybe that's strength. So that could be raising a kid, or it could be going to the gym, or whatever. But I like your definition as well, because I think it's much more important in the times we find ourselves in. And honestly, as a podcaster, you sometimes feel like you're caught right in the middle of it, because I think everyone's trying to figure out if I'm on the right wing or the left wing, if I believe this, if I endorse every guest that I sit with. And you almost have to try and remain impartial. But it's very, very difficult for people to understand that, because they want you to fit somewhere, and they want to... because that's weakness.

[01:51:55]

I mean, you have a lot of people who claim to be very strong, who admire strength as a value, but they can't deal with parts of reality that don't fit into their worldview or their desires. And they think that strength is: I have the strength to just make these parts of reality disappear. And no, this is weakness. And I am sorry for going back to that, but this is also the war. What is war? It is trying to make a part of reality that you don't like disappear, in this case, an entire people. I don't like these people, I don't think they should be in reality, so I try to make them disappear. And people say, oh, he's a very strong leader. He's not. He's a very weak leader. A strong leader would be able to acknowledge: no, these people exist, they are part of reality. Let's now find out how we live with them.

[01:52:56]

Amen. Your book Nexus: A Brief History of Information Networks from the Stone Age to AI is a must-read for everybody that listens to this podcast and has any interest in these subjects at all. It's endorsed by two of my favorite people, Mustafa Suleyman, but also Stephen Fry and Rory Stewart, who's a great person as well. And it's endorsed for a very good reason, because it's a completely mind-expanding book, written by someone who only writes exceptional, culture-shifting books. So I'm going to link it below. I highly recommend anybody that's listening to this conversation and is interested in this subject matter to go and get this book right now. It's available right now for pre-order, and it ships in five days from now, when it releases. So be the first to read it, and hopefully the first to understand and act on some of the things that you learn in this book. Yuval, thank you so much for your time.

[01:53:43]

Thank you.

[01:53:47]

Isn't this cool? Every single conversation I have here on the diary of a CEO, at the very end of it, you'll know I ask the guest to leave a question in the diary of a CEO. And what we've done is we've turned every single question written in the diary of a CEO into these conversation cards that you can play at home. So you've got every guest we've ever had, their question, and on the back of it, if you scan that QR code, you get to watch the person who answered that question. We're finally revealing all of the questions and the people that answered them. The brand new version two updated conversation cards are out right now at theconversationcards.com. They've sold out twice, instantaneously. So if you are interested in getting hold of some limited-edition conversation cards, I really, really recommend acting quickly.