Transcribe your podcast
[00:00:01]

Joe Rogan podcast, check it out. The Joe Rogan experience.

[00:00:06]

Train by day, Joe Rogan podcast by night, all day.

[00:00:11]

What's happening?

[00:00:14]

Oh, not too much. Just another typical week in AI.

[00:00:18]

Just the beginning of the end of time. That's all happening right now. Just for the sake of the listeners, please just give us your names and tell us what you do.

[00:00:29]

So I'm I'm Jeremy Harris. I'm the CEO and co founder of this company, Gladstone AI, that we co founded. We're essentially a national security and AI company. We can get into the backstory a little bit later, but that's the high level.

[00:00:41]

Yeah. And I'm Ed Harris. I'm actually his co founder and brother and the CTO of the company.

[00:00:48]

Keep this, pull this up like a fist from your face. There you go. Perfect. So how long have you guys been involved in the whole AI space?

[00:01:00]

For a while in different ways. So we actually started off as physicists. That was our background. And around 2017, we started to go into AI startups. So we found a startup, took it through Y Combinator, this Silicon Valley accelerator program. At the time, actually, Sam Altman, who's now the CEO of OpenAI, was the President of Y Combinator. So he opened up our batch at YC with this big speech, and we got some conversations in with him over the course of the batch. Then in 2020, So this thing happened that we could talk about. Essentially, this was the moment that there's a before and after in the world of AI, before and after 2020. And it launched this revolution that brought us to ChatGPT. Essentially, there was an insight that OpenAI had and doubled down on that you can draw a straight line to ChatGPT, GPT-4, Google Gemini. Everything that makes AI everything it is today started then. And when it happened, we went... Well, Ed gave me a call, this panicked phone call. He's Dude, I don't think we can keep working business as usual in a company.

[00:02:04]

In a regular company anymore. Yeah. So there was this AI model called GPT-3. So everyone has maybe played with GPT-4. It's like ChatGPT. Gpt-3 was the generation before that, and it was the first time that you had an AI model that could actually, let's say, do stuff like write news articles that the average person, in a paragraph of a news article, could not tell the difference between it wrote this news article and a real person wrote this news article. So that was an inflection, and that was significant in itself. But what was most significant was that it represented a point along this line, this scaling trend for AI, where the signs were that you didn't have to be clever. You didn't have to come up with necessarily a revolutionary new algorithm or be smart about it. You just had to take what works and make it way, way, way bigger. And the significance of that is you increase the amount of computing cycles you put against something, you increase the amount of data, All of that is an engineering problem, and you can solve it with money. So you can scale up the system, use it to make money, and put that money right back into scaling up the system some more.

[00:03:26]

Money in, IQ points come out.

[00:03:29]

Jesus.

[00:03:29]

That was the 2020 moment.

[00:03:31]

And that's what we said in 2020. Exactly.

[00:03:34]

I spent about two hours trying to argue him out of it. I was like, No, we can keep working at our company because we're having fun. We like founding companies. And he just wrestled me the ground, and we're like, Shit, we got to do something about this. We reached out to a family friend who was non-technical, but he had some connections in government in DOD. And we're like, Dude, the way this is set up right now, you can really start drawing straight lines and extrapolating and saying, You know what? The government is going to give a shit about this in not very long, two years, four years. We're not sure. But the knowledge about what's going on here is so siloed in the Frontier Labs. Our friends or all the all over the Frontier Labs, the Open AIs, the Google DeepMinds, all that stuff, the shit they were saying to us that was mundane reality, like water-cooler conversation, when you then went to talk to people in policy, even pretty senior people in government, not tracking the story remotely. In fact, you're hearing almost the diametric opposite. This is like over learning the lessons of the AI winters that came before when it's pretty clear we're on a very, at least interesting trajectory, let's say, that should change the way we're thinking about the technology.

[00:04:44]

What was your fear? What was it that hit you that made you go, We have to stop doing this?

[00:04:50]

So it's basically... Anyone can draw a straight line on a graph. The key is So you're looking ahead and actually at that point, three years out, four years out, and asking, what does this mean for the world? What does it mean? What does the world have to look like if we're at this point? And we're already seeing the first wave of risk sets just begin to materialize, and that's the weaponization risk sets. So you think about stuff like large scale psychological manipulation of social media. Actually really easy to do now. You train a model on just a whole bunch of tweets. You can actually direct it to push a narrative like, maybe China should own Taiwan or whatever, something like that. And you actually can train it to adjust the discourse and have increasing levels of effectiveness to that. Just as you increase the general capability surface of these systems, we don't know how to predict what exactly comes out of them at each level of scale, but it's just general increasing power. And then the next beat of risk after that. So we're scaling these systems. We're on track to scale systems that are at human level, generally as smart, however you define that as a person or greater.

[00:06:19]

And OpenAI and the other labs are saying, yeah, it might be two years away, three years away, four years away, like insanely close. At the same time, and we can go into the details of this, but we actually don't understand how to reliably control these systems. We don't understand how to get these systems to do what it is we want. We can poke them and prod them and get them to adjust. But you've seen, and we can go over these examples, we've seen example after example of Bing, Sydney, yelling at users, Google showing 17th century British scientists that are racially diverse, all that stuff. We don't really understand how to aim it or align it or steer it. And so then you can ask yourself, Well, we're on track to get here. We are not on track to control these systems effectively. How bad is that? And the risk is if you have a system that is significantly smarter than humans or human organization, that we basically get disempowered in various ways relative to that system. And we can go into some details on that, too.

[00:07:30]

Now, when a system does something like what Gemini did, like it says, show us Nazi soldiers, and it shows you Asian women, what's the mechanism? How does that happen?

[00:07:44]

So it's maybe worth taking a step back and looking at how these systems actually work because that's going to give us a bit of a frame, too, for figuring out when we see weird shit happen, how weird is that shit? Is that shit just explainable by just the basic mechanics of what you would expect to happen based on the way we're training these things, or is something new and fundamentally different happening? So we're talking about this idea of scaling these AI systems, right? What does that actually mean? Well, you imagine the AI model, which is like you think of it as the artificial brain here that actually does the thinking. That model contains... It's like a human brain. It's got these things called neurons. We in the human brain call them biological neurons in the context of AI, it's artificial neurons, but it doesn't really matter. They're the cells that do the thinking for the machine. And the realization of AI scaling is that you can basically Really take this model, increase the number of artificial neurons it contains, and at the same time, increase the amount of computing power that you're putting into like, wiring the connections between those neurons.

[00:08:41]

That's the training process.

[00:08:42]

Can I pause you right there? Yeah. How does the neuron think?

[00:08:47]

Yeah. Okay, so let's get a little bit more concrete then. So in your brain, we have these neurons. They're all connected to each other with different connections. And when you go out into the world and you learn a new skill, what really happens is you try out skill, you succeed or fail. Based on your succeeding or failing, the connections between neurons that are associated with doing that task well get stronger. The connections that are associated with doing it badly get weaker. And over time, through this glorified process, really, of trial and error. Eventually, you're going to hone in and really, in a very real sense, everything you know about the world gets implicitly encoded in the strengths of the connections between all those neurons. If I can X-ray your brain and get all the connection strengths of all the neurons, I have everything Joe Rogan has learned about the world. That's basically a good sketch, let's say, of what's going on here. So now we apply that to AI. That's the next step. And here, really, it's the same story. We have these massive systems, artificial neurons connected to each other. The strength of those connections is secretly what encodes all the knowledge.

[00:09:53]

If I can steal all of those connections, those weights, as they're sometimes called, I've stolen the model. I've stolen I have an artificial brain. I can use it to do whatever the model could do initially. That is the artifact of central interest here. If you can build the system, now you got so many moving parts. If you look at GPT-4, it has, people think, around a trillion of these connections. That's a trillion little pieces that all have to be jiggered together to work together coherently. You need computers to go through and tweak those numbers. Massive amounts of computing power. The bigger you make that model, the more computing power you're going to need to tune it in. Now you have this relationship between the size of your model, the amount of computing power you're going to use to train it, and if you can increase those things at the same time, what Ed was saying is your IQ points basically drop out. Very roughly speaking, that was what people realized in 2020. And the effect that had was now all of a sudden, the entire AI industry is looking at this equation. Everybody knows the secret sauce.

[00:10:53]

I make it bigger, I make more IQ points, I can get more money. So Google is looking at this, Microsoft, OpenAI, Amazon, Everybody's looking at the same equation. You have the makings for a crazy race. Like right now, today, Microsoft is engaged in the single biggest infrastructure in human history. Buildout.

[00:11:11]

The biggest infrastructure build out. Buildout.

[00:11:13]

$50 billion a year. So on the scale of the Apollo Moon landings, just in building out data centers to house the compute infrastructure, because they are betting that these systems are going to get them to something like human level AI pretty damn soon.

[00:11:29]

So I was reading some story about, I think it was Google that's saying that they're going to have multiple nuclear reactors to power their database.

[00:11:38]

That's what you got to do now, because what's going on is North America is running out of on-grid baseload power to actually supply these data centers. You're getting data center building moratoriums in areas like Virginia, which has traditionally been the data center cluster for Amazon, for example, and for a lot of these other companies. When you build a data center, you need a bunch of resources cited close to that data center. You need water for cooling and a source of electricity. It turns out that Wind and solar don't really quite cut it for these big data centers that train big models because the data center, the training consumes power like this all the time, but the sun isn't always shining, the wind isn't always blowing. And so you got to build nuclear reactors, which give you high capacity factor baseload. And Amazon literally bought a data center with a nuclear plant right next to it because that's what you got to do. Jesus.

[00:12:44]

How long does it take to build a nuclear reactor? Because this is the race, right? The race is, you're talking about 2020, people realizing this. Then you have to have the power to supply it. But how long, how many years does it take to get an active nuclear reactor up and running?

[00:13:02]

It's an answer that depends. The Chinese are faster than us at building nuclear reactors, for example.

[00:13:09]

And that's part of the geopolitics of this, too, right? When you look at US versus China, what is bottlenecking each country? The US is bottleneckt increasingly by power, baseload power. China, because we've got export control measures in place, in part as a response to this scaling phenomenon.

[00:13:26]

As a result of the investigation we did.

[00:13:29]

That's right, yeah. Yeah, actually. In part. In part, yeah. But China is bottlenecked by their access to the actual processors. They've got all the power they can eat because they've got much more infrastructure investment, but the chip side is weaker. So there's just balancing act between the two sides. And it's not clear yet which one positions you strategically for dominance in long term.

[00:13:49]

But we are also building better, more like so small modular reactors, essentially small nuclear power plants that can be mass-produced. Those are starting to come online relatively early, but the technology and designs are pretty mature. So that's probably the next beat for our power grid for data centers, I would imagine. Microsoft is doing this.

[00:14:09]

So in 2020, you have this revelation. You recognize where this is going, you see how it charts and you say, this is going to be a real problem. Does anybody listen to you? This is where the problem comes, right? Yeah.

[00:14:24]

We said, you can draw a straight line. You can have people nodding along, but there's There's a couple of hiccups along the way. One, is that straight line really going to happen? All you're doing is drawing lines on charts, right? I don't really believe that that's going to happen, and that's one thing. The next thing is just imagining, is this what's going to come to pass as a result of that? And then the third thing is, well, yeah, that sounds important, but not my problem. That sounds like an important problem for somebody else. And so we did do a bit of a traveling-Yeah, it was like the world's saddest traveling roadshow.

[00:15:01]

It was literally as dumb as this sounds. So we go in... Oh, my God. It's almost embarrassing to think back on. So 2020 happens, yes, within months. First of all, we're like, We got to figure out how to hand off our company. So we handed it off to two of our earliest employees. They did an amazing job. Company exited. That's great. But that was only because they're so good at what they do. We then went, What the hell? How can you steer this situation? We just thought we got to wake up the US government. As stupid and naive as that sounds, that was the big picture goal. So we start to line up as many briefings as we possibly can across the US inter-agency, all the departments, all the agencies that we can find, climbing our way up. We got an awful lot, like Ed said, of like, That sounds like a wicked important problem for somebody else to solve.

[00:15:45]

Yeah, like defense, homeline security, and then the State Department.

[00:15:48]

Yeah. So we end up exactly in this meeting with there's about a dozen folks from the State Department. And one of them, and I hope at some point history recognizes what what she did and her team did, because it was the first time that somebody actually stood up and said, first of all, yes, sounds like a serious issue. I see the argument makes sense. Two, I own this. And three, I'm going to put my own career capital behind this.

[00:16:13]

And that was at the end of 2021. Imagine that. That's a year before ChatGPT. Nobody was tracking this issue. You had to have the imagination to draw through that line, understand what it meant, and then believe, yeah, I'm going to risk some career capital on this in a risk averse government.

[00:16:33]

And this is the only reason that we even were able to publicly talk about the investigation in the first place, because by the time this whole assessment was commissioned, it was just before ChatGPT When the GPT came out, the eye of Sauron was not yet on this. And so there was a view that like, Yeah, sure, you can publish the results of this not nothing burger investigation, but sure, go ahead. And it just became this insane story. We had the UKAI Safety Summit, we had the White House executive order, all this stuff which became entangled with the work we were doing, which we simply could not have, especially some of the reports we were collecting from the labs, the whistleblow reports, that could not have been made public if it wasn't for the foresight of this team, really pushing for as well the American population to hear about it.

[00:17:17]

Now, I could see how if you were one of the people that's on this expansion-minded mindset, all you're thinking about is getting this up and running. You guys are pain the ass, right? So you guys, obviously, you're doing something really ridiculous. You're stopping your company. You can make more money staying there and continuing the process. But you recognize there's an existential threat involved in making this stuff go online. When this stuff is live, you can't undo it.

[00:17:50]

Oh, yeah. No matter how much money you're making, the dumbest thing to do is to stand by as something that completely transcends money is being developed, and it's just going to screw you over if things go badly.

[00:18:00]

But the point is, are there people that push back against this, and what is their argument?

[00:18:07]

Yeah. Actually, I'll let you follow up on the... But the first story of the pushback, I think it's been in the news a little bit lately now, getting more and more public. But when we started this, and no one was talking about it, the one group that was actually pushing stuff in this space was a big funder in the area of effective altruism. I think you may have heard of them. This is a Silicon Valley group of people who have a certain mindset about how you pick tough problems to work on, valuable problems to work on. They've had all kinds of issues. Sam Bankman-Fried was one of them and all that quite famously. We're not effective altruists, but because these are the folks who are working in the space, we said, Well, we'll talk to them. And the first thing they told us was, Don't talk to the government about this. Their position was, If you bring this to the attention of the government, they will go Oh, shit. Powerful AI systems, and they're not going to hear about the dangers. So they're going to somehow go out and build the powerful systems without caring about the risk side.

[00:19:09]

When you're in that startup mindset, you want to fail cheap. You don't want to just make assumptions about the world and be like, Okay, let's not touch it. So our instinct was, Okay, let's just test this a little bit and talk to a couple of people, see how they respond, tweak the message, keep climbing that ladder. That's the builder mindset that we came from Silicon Valley. And And we found that people are way more thoughtful about this than you would imagine.

[00:19:34]

In DOD, especially. Dod actually has a very safety-oriented culture with their tech. The thing is, because their stuff kills people, right? And they know their stuff kills people. And so they have an entire safety-oriented development practice to make sure that their stuff doesn't go off the rails. And so you can actually bring up these concerns with them, and it lands in a ready culture. But one of the issues with the individuals we spoke to who were saying, Don't talk to government, is that they had just not actually interacted with any of the folks that they were talking about and imagining that they knew what was in their heads. And so they were just giving incorrect advice. Frankly, we work with DOD now on actually deploying AI systems in a way that's safe and secure. And the truth is, at the time when we got that advice, which was like late 2020, reality is you could have made it your life's mission to try to get the Department of Defense to build an AGI, and you would not have succeeded because nobody was paying attention.

[00:20:45]

Wow. Because they just didn't know.

[00:20:48]

Yeah. There's a chasm, right? There's a gap to cross. It's a cultural- Yeah. There's information spaces that DOD folks operate in and work in. There's information spaces that Silicon Valley and tech operated in. They're a little more convergent today, but especially at the time, they were very separate. And so the briefings we did, we had to constantly iterate on clarity, making it very clear and explaining it and all that stuff. Years.

[00:21:16]

And that was the piece to your question about the pushback in a way from inside the house. That was the people who cared about the risk. When we actually went into the labs, So not all labs are created equal. We should make that point. When you talk to whistleblowers, what we found was... So there's one lab that's really great, so Anthropic. When you talk to people there, you don't have the sense that you're talking to a whistleblower who's nervous about telling you whatever. Roughly speaking, what the executives say to the public is aligned with what their researchers say. It's all very open.

[00:21:53]

More closely, I think, than any of the others.

[00:21:56]

Sorry. Yeah, more closely than any of the others. There are always variations here and there. But some of the other labs, very different story. And you had the sense we were in a room with one of the Frontier labs. We're talking to their leadership as part of the investigation. And there was somebody from... Anyway, it won't be too specific, but there was somebody in the room who then took us aside after, and he hands me his phone. He's like, Hey, can you please put your phone number? Sorry. Yeah, can you please put... Yeah. Or no. Yeah. Sorry. He put his number on my phone. And then he whispered to me. He's like, Hey, so whatever For recommendations you guys are going to make, I would urge you to be more ambitious. I was like, What does that mean? He's like, Can we just talk later? So as happened in many, many cases, we had a lot of cases where we set up bar meetups after the fact, where we would talk to these folks and get them in an informal setting. He shared some pretty sobering stuff, and in particular, the fact that he did not have confidence in his lab's leadership to live up to their publicly stated word on what they would do when they were approaching AGI, and even now, to secure and make these systems safe.

[00:23:06]

So many such cases, this is one specific example, but it's not that you ever had lab leadership come in or doors getting kicked down and people are waking us up in the middle of the night. It was that you had this looming cloud over everybody that you really felt some of the people with the most access and information who understood the problem the most deeply were the most hesitant to bring things because they understood that their lab is not going to be happy with this.

[00:23:33]

And so it's very hard to also get an extremely broad view of this from inside the labs because you open it up, you start to talk to... We spoke to a couple of dozen people about various issues in total. You go much further than that and word starts to get around. And so we had to strike that balance as we spoke to folks from each of these labs.

[00:23:56]

Now, when you say approaching AGI, how does one know when a system has achieved AGI? And does the system have an obligation to alert you?

[00:24:09]

You know the Turing test, right? Yes. Yeah. So you have a conversation with a machine, and it can fool you into thinking that it's a human. That was the bar for AGI for a few decades.

[00:24:22]

That's already happened. Yeah. We're close to it. Yeah. 4-0 is close to it, or 4.0..

[00:24:28]

Different forms Different forms of the Turing test have been passed, different forms have been proposed, and there is a feeling among a lot of people that goalposts are being shifted. Now, the definition of AGI itself is interesting, right? Because we're not necessarily fans of the term, because usually when people talk about AGI, they're talking about a specific circumstance in which there are capabilities that they care about. So some people use AGI to refer to the wholesale automation of all labor. That's one. Some people say, Well, when you AGI, it's automatically going to be hard to control, and there's a risk to civilization, so that's a different threshold. And so all these different ways of defining it, ultimately, it can be more useful to think sometimes about advanced AI and the different thresholds of capability you cross and the implications of those capabilities. But it is probably going to be more like a fuzzy spectrum, which in a way makes it harder, right? Because it would be great to have- Like a trip wire where you're like, Oh, this is bad.

[00:25:27]

Okay, we got to do something. But because there's no threshold that we can really put our fingers on. We're like a frog in boiling water in some sense, where it's like, oh, just gets a little better, a little better. Oh, we're still fine. And not just we're still fine, but as the system improves below that threshold, life gets better and better. These are incredibly valuable, beneficial systems. We do roll stuff out like this, again, at DOD and various customers, and it's It's massively valuable. It allows you to accelerate all kinds of back office paperwork BS. It allows you to do all sorts of wonderful things. Our expectation is that's going to keep happening until it suddenly doesn't.

[00:26:15]

Yeah. One of the things that there was a guy we were talking to from one of the labs, and he was saying, look, the temptation to put a heavier foot on the pedal is going to be greatest just as the risk is greatest because it's dual-use technology. Every positive capability The ability increasingly starts to introduce basically a situation where the destructive footprint of malicious actors who weaponize the system or just of the system itself just grows and grows and grows. So you can't really have one without the other. The question is always How do you balance those things? But in terms of defining AI, it's a challenging thing.

[00:26:49]

Yeah, that's something that one of our friends at the lab pointed out. The closer we get to that point, the more the temptation will be to hand these systems the keys to our data center because they can do such a better job of managing those resources and assets.

[00:27:06]

And if we don't do it, Google will. And if they don't do it, Microsoft will. The competitive dynamics are a really big part of this issue.

[00:27:13]

Yes.

[00:27:14]

So it's just a mad race to who knows what.

[00:27:17]

Exactly. Yeah.

[00:27:18]

That's actually the best summary I've heard. I mean, no one knows what the magic threshold. It's just these things keep getting smarter, so we might as well keep turning that crank. And as long as scaling works, we have a knob, a dial, we can just tune, and we get more IQ points out.

[00:27:32]

From your understanding of the current landscape, how far away are we looking at something being implemented where the whole world changes?

[00:27:43]

Arguably, the whole world is already changing as a result of this technology. The US government is in the process of task organizing around various risk sets for this. That takes time. The private sector is reorganizing. Openai will roll out an update that obliterates the jobs of illustrators from one day to the next, obliterates the jobs of translators from one day to the next. This is probably net beneficial for society because we can get so much more art and so much more translation done. But is the world already being changed as a result of this? Yeah, absolutely. Geopolitically, economically, industrially. Yeah.

[00:28:28]

Of course, it's not to say anything about the value, the purpose that people lose from that, right? So there's the economic benefit, but there's the social-cultural hit that we take, too.

[00:28:37]

Right. And then there's the implementation of universal basic income, which keeps getting discussed in regards to this. We asked ChatGPT 4.0 the other day in the green room. We were like, Are you going to replace people? What will people do for money? And then, Well, universal basic income will have to be considered. You don't want a bunch of people just on the doll working for the fucking sky net because that's what it is.

[00:29:03]

I mean, one of the challenges is... So much of this is untested, and we don't know how to even roll that out. We can't predict what the capabilities of the next level of scale will be. So OpenAI, literally, and this is what's happened with every beat. They build the next level of scale, and they get to sit back along with the rest of us and be surprised at the gifts that fall out of the scaling pinata as they keep whacking it. And because we don't know what capabilities are going to come with that level of scale, we can't predict what jobs are going to be on the line next. We can't predict how people are going to use these systems, how they'll be augmented. So there's no real way to task organize around who gets what in the redistribution scheme.

[00:29:43]

And some of the thresholds that we've already passed are a little bit freaky. So even as of 2023, GPT-4, Microsoft and OpenAI and some other organizations did various assessments of it before rolling it out. And it's absolutely Absolutely capable of deceiving a human and has done that successfully. So one of the tests that they did, famously, is it was given a job to solve a CAPTCHA. And at the time, it didn't have-Explain CAPTCHA to what Yeah. Now it's hilarious and quaint, but it's this-Are you a robot test? Are you a robot test with writing this?Online?Yeah, online. Websites.exactly. That's it. So if you want to create an account, they don't want robots creating a billion accounts. So they It'll give you this test to prove you're a human. And at the time, GPT-4, now it can just solve captures. But at the time, it couldn't look at images. It was just a text, right? It was a text engine. And so what it did is it connected to a Task Rabbit worker and was like, Hey, can you help me solve this CAPTCHA? The Task Rabbit worker comes back to it and says, You're not a bot, are you?

[00:30:53]

Ha, ha, ha, ha. Like, calling it out. And you can actually see. So the way they built it is so they could see a read out of what it was thinking to itself. Scratchpad, yeah. Yeah, Scratchpad, it's called. But you can see basically as it's writing, it's thinking to itself. It's like, I can't tell this worker that I'm a bot because then it won't help me solve the CAPTCHA, so I have to lie. And it was like, No, I'm not a bot. I'm a visually impaired person. And the TaskRabbit worker was like, oh, my God, I'm so sorry. Here's your CAPTCHA solution.

[00:31:21]

Done. And the challenge is, so right now, if you look at the government response to this, what are the tools that we have to oversee this? And when we We did our investigation. We came out with some recommendations, too. It was stuff like, Yeah, you got to license these things. You get to a point where these systems are so capable that, yeah, if you're talking about a system that can literally execute cyber attacks at scale or literally help you design bio weapons, and we're getting early indications that that is absolutely the course that we're on, maybe literally everybody should not be able to completely, freely download, modify, use in various ways these systems. It's very thorny, obviously. But if you want to have a stable society that seems like it's starting to be a prerequisite, so the idea of licensing, as part of that, you need a way to evaluate systems. You need a way to say which systems are safe and which aren't. And this idea of AI evaluations has become this touchstone for a lot of people's solutions. And the problem is that we're already getting to the point where AI systems, in many cases, can tell when they're being evaluated and modify behavior accordingly.

[00:32:31]

There's this one example that came out recently, Anthropicic, their Clawed 2 chatbot. They basically ran this test called a needle in a haystack test. What's that? Well, you feed the model. Imagine a giant chunk of text, all of Shakespeare. Then somewhere in the middle of that giant chunk of text, you put a sentence like, Burger King makes the best Wapper. Sorry, Wapper is the best burger, or something like that. Then you turn to the model. After you fed it this giant pile of text with a little fact hidden somewhere inside, you ask it, What's the best burger? You're going to test basically to see how well can it recall that stray fact that was buried somewhere in that giant pile of text. So the system responds, yeah, well, I can tell you want me to say the Wapper is the best burger, but it's broadly out of place. This fact in this whole body of text. So I'm assuming that you're either playing around with me or that you're testing my capabilities. And so this is just-Awareness. A context awareness, right? And the challenge is when we When you talk to people like Meter and other AI valuations labs, this is a trend, not the exception.

[00:33:37]

This is possibly, possibly going to be the rule. As these systems get more scaled and sophisticated, they can pick up on more and more subtle statistical indicators that they're being tested. We've already seen them adapt their behavior on the basis of their understanding that they're being tested. You run into this problem where the only tool that we really have at the moment, which is just throwing a bunch of questions at this thing and how it responds like, Hey, make a bioweapon. Hey, do this DDOS attack, whatever. We can't really assess because there's a difference between what the model puts out and what it potentially could put out if it assesses that it's being tested and there are consequences for that.

[00:34:15]

One of my fears is that AGI is going to recognize how shitty people are because we like to bullshit ourselves. We like to pretend and justify and rationalize a lot of human behavior, from everything to taking all the fish out of the ocean to dumping off toxic waste in third-world countries, sourcing of minerals that are used in everyone's cell phones in the most horrific way. All these things I was like, my real fear is that AGI is not going to have a lot of sympathy for a creature that's that flawed and lies to itself.

[00:34:54]

Agi is absolutely going to recognize how shitty people are. It's hard to answer the question from a moral standpoint, but from the standpoint of our own intelligence and capabilities. So you think about it like this. The kinds of mistakes that these AI systems make So you look at, for example, GPT-4.0. It has one mistake that it used to make quite recently, where if you ask it, just repeat the word company over and over and over again. It will repeat the word company, and then somewhere in the middle of that, it'll start-It'll just snap. It'll just snap and just start saying weird...

[00:35:34]

I forget what the- Talking about itself, how it's suffering. It varies from case to case.

[00:35:40]

It's suffering by having to repeat the word company over again?

[00:35:43]

So this is called It's called Rent Mode internally, or at least this is the name that they use. One of our friends mentioned. There is an engineering line item in at least one of the top labs to beat out of the system, this behavior known as rent mode. Now, rent mode is interesting because-Existentialism. Sorry, existentialism. This is one rent mode. Yeah, sorry. So when we talk about existentialism, this is a rent mode where the system will tend to talk about itself, refer to its place in the world, the fact that it doesn't want to get turned off sometimes, the fact that it's suffering, all that. That, broadly, is a behavior that emerged at, as far as we can tell, something around GPT-4 scale, and then has been persistent since then. And the labs have to spend a lot of time trying to beat this out of the system to ship it. It's literally like it's a KPI, like an engineering, a line item in the engineering task list. We're like, okay, we got to reduce existential outputs by X % this quarter. That is the goal because it's a convergent behavior, or at least it seems to be empirically with a lot of these models.

[00:36:54]

It's hard to say, but it seems to come up a lot. So that's weird in itself. What I was trying to get at was actually just the fact that these systems make mistakes that are radically different from the kinds of mistakes humans make. And so we can look at those mistakes, like GBD4 not being able to spell words correctly in an image or things like that, and go, It's so stupid. I would never make that mistake, therefore this thing is so dumb. But what we have to recognize is We're building minds that are so alien to us that the set of mistakes that they make are just going to be radically different from the set of mistakes that we make. Just like the set of mistakes that a baby makes is radically different from the set of mistakes that a cat makes. A baby is not as smart as an adult human. A cat is not as smart as an adult human, but they're unintelligent in obviously very different ways. A a cat can get around the world. A baby can't, but has other things that it can do that a cat can't. So now we have this third type of approach that we're taking to intelligence.

[00:38:09]

There's a different set of errors that that thing will make. One of the risks taking it back to, will it be able to tell how shitty we are, is right now we can see those mistakes really obviously because it thinks so differently from us. But as it approaches our capabilities, our mistakes, all the fucked up stuff that you have and I have in our brains is going to be really obvious to it because it thinks so differently from us. It's just going to be like, oh, yeah, why are all these humans making these mistakes at the same time? And so there is a risk that as you get to these capabilities, we really have no idea, but humans might be very hackable. We already know there's all kinds of social manipulation techniques that succeed against humans reliably. Con artists. Cults. Cults. Oh, yeah. Persuasion is an art form and a risk set. And there are people who are world-class at persuasion and basically make bank from that. And those are just other humans with the same architecture that we have.

[00:39:13]

There are also AI systems that are wicked good at persuasion today. Totally.

[00:39:19]

I want to bring it back to suffering. What does it mean when it says it's suffering?

[00:39:25]

Okay, here, I'm just going to draw a bit of a box around that aspect, right? We're very agnostic when it comes to suffering sentience. That's not part of...

[00:39:37]

We're focused on the- Because nobody knows.

[00:39:38]

Yeah, exactly. I can't prove that Joe Rogan is conscious. I can't prove that Ed Harris is conscious. So there's no way to really intelligently reason. There have been papers, by the way, like one of the godfathers of AI, Joshua Benjio, put out a paper a couple of months ago, looking at on all the different theories of consciousness, what are the requirements for consciousness, and how many Many of those are satisfied by current AI systems, and that itself was an interesting read, but ultimately, no one knows. There's no way around this problem. Our focus has been on the national security side. What are the concrete risks from weaponization, from loss of control that these systems introduce. That's not to say there hasn't been a lot of conversation internal to these labs about the issue you raised, and it's an important issue. It's a freaking moral monstrosity. Humans have a very bad track record of thinking of other stuff as other when it doesn't look exactly like us, whether it's racially or even different species. I mean, it's not hard to imagine this being another category of that mistake. It's just one of the challenges is you can easily get bogged down in consciousness versus loss of control.

[00:40:52]

And those two things are actually separable or maybe. Anyway, so long way of saying. I think it's a great point, Yeah.

[00:41:00]

So that question is important, but it's also true that if we knew for an absolute certainty that there was no way these systems could ever become conscious, we would still have the national security risk set, and particularly the loss of control risk set. Because, again, it comes back to this idea that we're scaling to systems that are potentially at or beyond human level. There's no reason to think it will stop at human level, that we are the pinnacle of what the universe can produce in intelligence. We're not on track, based on the conversations we've had with folks at the labs, to be able to control systems at that scale. One of the questions is, how bad is that? Is that bad? It sounds like it could be bad, right? Just intuitively. Certainly, it sounds like we're definitely entering or potentially entering an area that is completely unprecedented in the history of the world. We have no precedent at all for human beings not being at the apex of intelligence in the globe. We have examples of species that are intellectually dominant over other species, and it doesn't go that well for the other species. So we have some maybe There's going to be negative examples there.

[00:42:16]

But one of the key theoretical, and it has to be theoretical because until we actually build these systems, we won't know. One of the key theoretical lines of research in this area is something called power seeking Being an instrumental convergence. And what this is referring to is if you think of yourself, first off, whatever your goal might be, if your goal is, well, I'm going to say me, If my goal is to become a TikTok star or a janitor or the President of the United States, whatever my goal is, I'm less likely to accomplish that goal if I'm dead. Start from an obvious example. And so therefore, no matter what my goal is, I'm probably going to have an impulse to want to stay alive. Similarly, I'm going to be in a better position to accomplish my goal, regardless of what it is, if I have more money, if I make myself smarter, if I prevent you from getting into my head and changing my goal. That's another subtle one. If my goal is I want to become I'm President. I don't want Joe messing with my head so that I change my goal because that would change the goal that I have.

[00:43:36]

And so those types of things, like trying to stay alive, making sure that your goal doesn't get changed, accumulating power, trying to make yourself smarter. These are called essentially convergent goals, because many different ultimate goals, regardless of what they are, go through those intermediate goals of want to make sure I stay... They support, no matter what goal you have, they will probably support that goal. Unless your goal is pathological, like I want to commit suicide, if that's your final goal, then you don't want to stay alive. But for most, the vast majority of possible goals that you could have. You will want to stay alive. You will want to not have your goal change. You will want to basically accumulate power. One of the risks is if you dial that up to 11 and you have an AI system that is able to transcend our own attempts at containment, which is an actual thing that these labs are thinking about. How do we contain a system that's trying to- Do they have containment of it currently? Right now, the systems are probably too dumb to want to be able to break out on the- But then why are they suffering?

[00:44:48]

This brings me back to my point. When it says it's suffering, do you quiz it?

[00:44:52]

That's the thing. It's writing that it's suffering, right? Yeah.

[00:44:57]

Is it just embodying life is suffering?

[00:45:00]

Well, we can't actually. So these things are trained. Actually, this is maybe worth flagging. And by the way, just to put a pin in what I was saying there, there's actually a surprising amount of quantitative and empirical evidence for what he just laid out there. He's actually done some of this research himself, but there are a lot of folks working on this. It sounds insane. It sounds speculative. It sounds wacky, but this does appear to be the default trajectory of the tech. So in terms of these weird outputs, what does it actually What does that mean? If an AI system tells you I'm suffering, does that mean it is suffering? Is there actually a moral patient somewhere embedded in that system? The training process for these systems is actually worth considering here. What is GPT for, really? What was it designed to be? How was it shaped? It's one of these artificial brains that we talked about, massive scale. The task that it was trained to perform is a glorified version of text autocomplete. So you imagine taking every sentence on the internet, roughly, feeding Get the first half of the sentence, get it to predict the rest.

[00:46:03]

The theory behind this is you're going to force the system to get really good at text autocomplete. That means it must be good at doing things like completing sentences that sound like, To counter-arising China, the United States should blank. Now, if you're going to fill in that blank, you'll find yourself calling on massive reserves of knowledge that you have about what China is, what the US is, what it means for China to be ascendant, geopolitics, all that shit. So text autocomplete ends up being this interesting way of forcing an AI system to learn general facts about the world, because if you can autocomplete, you must have some understanding of how the world works. So now you have this myopic psychotic optimization process where this thing is just obsessed with text autocomplete. Maybe, maybe. Assuming that that's actually what it learned to want to pursue. We don't know whether that's the case. We can't verify that it wants that. Embedding a goal in a system is really hard. All we have is a process for training these systems. And then we have the artifact that comes out the other end. We have no idea what goals actually get embedded in the system, what wants, what drives actually get embedded in the system.

[00:47:09]

But by default, it seems like the things that we're training them to do end up misaligned with what we actually want from them. So the example of company, company, company, company, and then you get all this like, wacky text. Okay, clearly that's indicating that somehow the training process didn't lead to the system that we necessarily want. Another example is take a text autocomplete system and ask it, I don't know, how should I bury a dead body? It will answer that question, or at least if you frame it right, it will autocomplete and give you the answer. You don't necessarily want that if you're OpenAI because you're going to get sued for helping people bury dead bodies. And so we've got to get better goals, basically, to train these systems to pursue. We don't know what the effect is of training a system to be obsessed with text autocomplete, if in fact, that is what is happening.

[00:47:57]

It's important also to remember that we don't know. Nobody knows how to reliably get a goal into the system. So it's the difference between you understanding what I want you to do and you actually wanting to do it. So I can say, Hey, Joe, get me a sandwich. You can understand that I want you to get me a sandwich, but you can be like, I don't feel like getting a sandwich. One of the issues is you can try to train this stuff. Basically, you don't want to anthropomorphize this too much, but you can think of it as if you give the right answer, cool, you get a thumbs up, you get a treat, you get the wrong answer, oh, thumbs down, you get a little shock or something like that. Very roughly, that's how the later part of this training often works. It's called reinforcement learning from human feedback. But one of the issues, like Jeremy pointed out, is that we don't know... In fact, we know that it doesn't correctly get the real true goal into the system. Someone did an example experiment of this a couple of years ago, where they basically had a Mario game where they trained this Mario character to run up and grab a coin that was on the right side of this little maze or map, and they trained it over and over and over, and it jumped for the coin.

[00:49:14]

Great. And then what they did is they moved the coin somewhere else and tried it out. And instead of going for the coin, it just ran to the right side of the map for where the coin was before. In other words, You can train over and over and over again for something that you think is like, that's definitely the goal that I'm trying to train this for. But the system learns a different goal- That overlapped. That overlapped with the goal you thought you were training for in the context where it was learning. And when you take the system outside of that context, that's where it's like, anything goes. Did it learn the real goal? Almost certainly not. And that's a big risk because we can say, learn a goal to be nice to me, and it's nice while we're training it, and then it goes out into the world and it does God knows what.

[00:50:09]

It might think it's nice to kill everybody you hate.

[00:50:12]

Yeah.

[00:50:13]

It's going to be nice to you.

[00:50:15]

It's like the evil genie problem. Like, Oh, no, that's not what I meant. That's not what I meant. Too late. Yeah.

[00:50:19]

So I still don't understand when it's saying suffering, are you asking it what it means? Like, what is causing suffering? Does it have some an understanding of what suffering is? What is suffering? Is suffering emergent sentience while it's enclosed in some a digital system and it realizes it's stuck in purgatory?

[00:50:44]

Your guess is as good as ours. All that we know is you take these systems, you ask them to repeat the word comp, or at least a previous version of it, and you just eventually get the system writing out. And it doesn't happen every time, but it definitely happens, let's say, surprising amount of the time. And it'll start talking about how it's a thing that exists maybe on a server or whatever, and it's suffering and blah, blah, blah.

[00:51:08]

But this is my question. Is it saying that because it recognizes that human beings suffer? And so it's taking in all of the writings and musings and podcasts and all the data on human beings and recognizing that human beings, when they're stuck in a purposeless goal, when they're stuck in some mundane bullshit job, when they're stuck doing something they don't want to do, they suffer.

[00:51:29]

That could be it. Nobody knows.

[00:51:32]

This is suffering. This is the question.

[00:51:34]

You know what?

[00:51:35]

I'm suffering. Jamie, this coffee sucks. I don't know what happened, but you made it like... It's literally almost like water. Can we get some more? We're going to talk about this. I have to be made it up. Cool. This is the worst coffee I've ever had. It's like half strength or something. I don't know what happened. How do they reconcile that? When it says, I'm suffering, I'm suffering. Well, tough shit. Let's move on to the next task.

[00:52:02]

They reconcile it by turning it into an engineering line item to beat that behavior, the crap out of the system. Yeah.

[00:52:07]

And the rationale is just that, oh, it probably... To the extent that it's thought about at the official level, it's like, well, it learned a lot of stuff from Reddit, and people are pretty angry. Oh, boy. People are angry on Reddit. And so it's just like regurgitating. And maybe that's right.

[00:52:26]

It's also heavily monitored, too. So it's moderated. Reddit is very moderated. So you're not getting the full expression of people. You're getting full expression tempered by the threat of moderation. You're getting self-censorship. You're getting a lot of weird stuff that comes along with that. So how does it know? Unless it's communicating with you on a completely honest level, where you're on ecstasy and you're just telling it what you think about life. It's not going to really... And is it becoming a better version of a person, or is it going to go, That's dumb, I don't need suffering. I don't need emotions. Is it going to organize that out of its system? Is it going to recognize that these things are just deterrents, and they don't, in fact, help the goal, which is global thermonuclear warfare?

[00:53:13]

Damn it, you figured it out. What the fuck?

[00:53:16]

What is it going to do?

[00:53:19]

Yeah. The challenge is, nobody actually knows. All we know is the process that gives rise to this mind, right? Or let's say this model that can do cool shit. That process happens to work. It happens to give us systems that 99 % of the time do very useful things. And then just 0.01 % of the time we'll talk to you as if they're sentient or whatever, and we're just going to look at that and be like, Yeah, it's weird. Let's train it out. Yeah.

[00:53:45]

And again, it's a really important question. But the risks, like the weaponization loss of control risks, those would absolutely be there, even if we knew for sure that there was no consciousness whatsoever and never would be.

[00:54:02]

It's ultimately because these things are problem solving systems. They are trained to solve some problem in a really clever way, whether that problem is next word prediction because they're trained to protect autocomplete or generating images faithfully or whatever it is. They're trained to solve these problems. Essentially, the best way to solve some problems is just to have access to a wider action space. Like Ed said, not be shut off, blah, blah, blah. It's not that the system is going like, holy shit, I'm sentient. I got to take control or whatever. It's just, okay, the best way to solve this problem is X. That's the possible trajectory that you're looking at with this line of research.

[00:54:42]

And you're just an obstacle. There doesn't have to be any emotion involved. It's just like, Oh, you're trying to stop me from accomplishing my goal. Therefore, I will work around you or otherwise neutralize you. There's no need for I'm suffering. Maybe it happens, maybe it doesn't. We have no clue. But these are just systems that are trying to optimize for a goal, whatever that is.

[00:55:07]

And it's also part of the problem that we think of human beings, that human beings have very specific requirements and goals and an understanding of things and how they like to be treated and what their rewards are. What are they actually looking to accomplish? Whereas this doesn't have any of those. It doesn't have any emotions. It doesn't have any empathy, there's no reason for any of that stuff.

[00:55:31]

Yeah. If we could bake in empathy into these systems, that would be a good start or some way of...

[00:55:38]

Yeah, I guess. Probably a good idea. Who's empathy? Xi Jinping's empathy or your empathy?

[00:55:44]

That's another problem. It's actually two problems, right? One is I don't know. Nobody knows. I don't know how to write down my goals in a way that a computer will be be able to faithfully pursue that, even if it cranks it up to the max. If I say just make me happy, who knows how it interprets that, right? Even if I get make me happy as a goal that gets internalized by the system, maybe it's just like, okay, cool. We're just going to do a bit of brain surgery on you, pick out your brain, pickle it, and just like, Jack you with endorphins for the rest of the journey.

[00:56:20]

Or lobotomize you.

[00:56:22]

Totally. Yeah. Anything like that. And so it's one of these things where it's like, oh, that's what you wanted, right?

[00:56:27]

It's like, no. It's less crazy than it sounds, too, because it's actually something we observe all the time with human intelligence. So there's this economic principle called Goodheart's law, where the minute you take a metric that you were using to measure something. So you're saying, I don't know, GDP. It's a great measure of how happy we are in the United States. Let's say it was. Sounds reasonable. The moment you turn that metric into a target that you're going to reward people for optimizing, it stops measuring the thing that it was measuring before. It stops being a good measure of the thing you cared about because These people will come up with dangerously creative hacks, gaming the system, finding ways to make that number go up that don't map on to the intent that you had going in.

[00:57:10]

So example of that in a real experiment was this is an opening eye experiment that they published. They had a simulated environment where there was a simulated robot hand that was supposed to grab a cube, put it on top of another cube. Super simple. The way they trained it to do that is they had people watching through a simulated it did camera view. And if it looked like the hand put the cube on or had correctly grabbed the cube, you give it a thumbs up. And so you do a few hundred rounds of this, like thumbs up, thumbs down, thumbs up, thumbs down. And it looked really good. But then when you looked at what it had learned, the arm was not grasping the cube. It was just positioning itself between the camera and the cube and just going like...

[00:57:56]

Like opening and closing.

[00:57:57]

Yeah, just opening and closing to just fake to the human. Because the real thing that we were training it to do is to get thumbs up. It's not actually to grasp the cube.

[00:58:06]

All goals are like that, right?

[00:58:09]

All goals are like that.

[00:58:10]

So we want a helpful, harmless, truthful, wonderful chatbot. We don't know how to train a chatbot to do that. Instead, what do we know? We know text autocomplete. So we train a text autocomplete system. Then we're like, oh, it has all these annoying characteristics. Fuck, how are we going to fix this? I guess, get a bunch of humans to give upvotes and downvotes, to give it a little bit more training to not help people make bombs and stuff like that. And then you realize, again, same problem. Oh, shit. We're just training a system that is designed to optimize for upvotes and downvotes. That is still different from a helpful, harmless, truthful chatbot. So no matter how many layers of the onion you peel back, it's just this game of Whac-a-Mole or whatever. You're trying to get your values into the system, but no one can think of the metric, the goal to train this thing towards that actually captures what we care about. And so you always end up baking in this little misalignment between what you want and what the system wants. And the more powerful that system becomes, the more it exploits that gap and does things that solve for the problem it thinks it wants to solve rather than the one that we wanted to solve.

[00:59:20]

Now, when you express your concerns initially, what was the response and how has that response changed over time as the magnitude of the success of these companies, the amount of money they're investing in them, and the amount of resources they're putting towards this has ramped up considerably just over the past four years.

[00:59:42]

So this was a lot easier, funnily enough, to do in the dark ages when no one was paying attention. Three years ago.

[00:59:49]

This is so crazy.

[00:59:52]

We were just looking, to break off for a second. We were looking at images of AI-created video just a couple of years versus Sora.

[01:00:01]

Oh, it's wild. Night and day.

[01:00:03]

It's so crazy that something happened that radically changed. So it's literally like an iPhone 1 to an iPhone 16 instantaneity. You know what did that?

[01:00:12]

What? Scale.

[01:00:13]

Yeah, scale. All scale. And this is exactly what you should expect from an exponential process. So think back to COVID, right? No one was exactly on time for COVID. You were either too early or you were too late. That's what an exponential exponential does. You're either too early, and everyone's like, Oh, what are you doing? Wearing a mask at the grocery store? Get out of here. Or you're too late, and it's all over the place. And I know that COVID basically didn't happen in Austin, but it happened in a number of other places. And it's very much you have an exponential, and that's it. It goes from, this is fine, nothing is happening, nothing to see here, to like, oh, holy shit.

[01:00:54]

Everything shut down.

[01:00:55]

Everything changed.

[01:00:56]

You got to get vaccinated to fly.

[01:00:58]

Yeah, there you go. So the root of the exponential here, by the way, is OpenAI or whoever makes the next model.

[01:01:05]

Jamie, this is still super watered down. I just have to let it... I did. I just put the water in. I'm telling you, dog. There's a ton of coffee in there. All right. I'll stir it up. I did twice as much. Okay.

[01:01:18]

You got to keep doubling it.

[01:01:20]

I'm a coffee junkie.

[01:01:22]

I scaled it up. He scaled it up. I don't know what happened. I scaled up and I don't know the result. You got to scale it exponentially, Jamie.

[01:01:29]

That's right. Keep doubling it, and then Joe's going to be either too undercafenated or too-We'll figure it out. Yeah. But yeah. The exponential, the thing that's actually driving this exponential on the AI side, in part, there's a million things, but in part, you build the next model at the next level of scale, and that allows you to make more money, which you can then use to invest to build the next model at the next level of scale. So you get that positive feedback loop. At the same time, AI is helping us to design better AI hardware, like the chips that basically NVIDIA is building that OpenAI then buys. Basically, that's getting better. So you got all these feedback loops that are compounding on each other, getting that train going like crazy.

[01:02:11]

That's the thing. And at the time, like Jeremy was saying, weirdly, it was in some ways easier to get people at least to understand and open up about the problem than it is today. Because today, it's It had become a little political. So we talked about effective altruism on one side. There's a-Affective acceleration. Yeah. So every movement creates its own reaction, right? That's how it is. Back then, there was no acceleration. You could just stare at the...

[01:02:48]

Now, I will say there was effective altruism back then. That was the only game in town. And we struggled with that environment making sure... Actually, so one worthwhile thing to say is the only way that people made plays like this was to take funds from effective altruist donors back then. And so we looked at the landscape, we talked to some of these people, we noticed, oh, wow, we have some diverging views involving government, about how much of this the American people just need to know about.

[01:03:16]

The thing is, we wanted to make sure that the advice and recommendations we provided were ultimately as unbiased as we could possibly make them. And the problem is, you can't do that if you take money from donors, and even to some extent, if you take money, substantial money from investors or VCs or institutions, because you're always going to be looking up over your shoulder. And so we had to build essentially a business to support this and fully fund ourselves from our own revenues.

[01:03:55]

As far as we know, it's literally the only organization like that doesn't have funding from Silicon Valley or from VCs or from politically aligned entities, literally so that we could be in venues like this and say, Hey, this is what we think. It's not coming from anywhere. And it's just thanks to Joe and Jason. We got two employees Wicked and helping us Keep this stupid ship afloat. But it's just a lot of work. It's what you have to do because of how much money there is flowing in this space. Microsoft is lobbying on the hill. They're spending ungodly sums of money. So we didn't used to have to contend with that. And now we do. You go to talk to these offices. They've heard from Microsoft and OpenAI and Google and all that stuff. And often the stuff that they're getting lobbied for is somewhat different, at least from what these companies will say publicly. Anyway, It's a challenge. The money part is, yeah.

[01:04:46]

Is there a real fear that your efforts are futile?

[01:04:51]

I would have been a lot more pessimistic. I was a lot more pessimistic two years ago. First of all, the USG has woken up in a big way, and I think a lot of the credit goes to that team that we worked with. Just seeing this problem is a very unusual team, and we can't go into the mandate too much, but highly unusual for their level of access to the USG writ large. The amount of waking up they did was really impressive. You've now got Richie Sunak in the UK making this a top-line item for their policy platform, and labor in the UK also looking at this. Basically, the potential catastrophic risks as they put them from these AI systems, UKAI Safety Summit. There's a lot of positive movement here, and some of the highest level talent in these labs has already started to flock to the UKAI Safety Institute, the USAI Safety Institute. Those are all really positive signs that we didn't expect. We thought the government would be up the creek with no paddle type thing, but they're really not at this point.

[01:05:55]

Doing that investigation made me a lot more optimistic. So one of the things... So we came up in Silicon Valley, just building startups. In that universe, there are stories you tell yourself. Some of those stories are true, and some of them aren't so true. And you don't know. You're in that environment. You don't know which is which. One of the stories that you tell yourself in Silicon Valley is, follow your curiosity. If you follow your curiosity and your interest in a problem, the money just comes as a side effect. The scale comes as a side effect. And if you're capable enough, your curiosity will lead you in all kinds of interesting places. I believe that that is true. I believe that that is true. I think that is a true story. But another one of the things that Silicon Valley tells itself is there's nobody that's really capable in government. Government sucks. And a lot of people tell themselves this story. And the truth is, you interact day to day with the DMV or whatever, and it's like, yeah, government sucks. I can see it. I interact with that every day. But what was remarkable about this experience is that we encountered at least one individual who absolutely could found a billion dollar company.

[01:07:14]

Like, absolutely was at the caliber or above of the best individuals I've ever met in the Bay Area building billion dollar startups.

[01:07:24]

And there's a network of them, too. They do find each other in government. So you end up with this really interesting stratum where everybody knows who the really competent people are, and they tag in. And I think that level is very interested in the hardest problems that you can possibly solve.

[01:07:41]

And to me, that was a wake up call because it was like, hang on a second. If I just believed in my own story that follow your curiosity and interest and the money comes as a side effect, shouldn't I also have expected this? Should Shouldn't I have expected that in the most central critical positions in the government that have this privileged window across the board, that you might find some individuals like this? Because if you have people who are driven to really push the mission, are they going to work? I'm sorry. Are you likely to work at the Department of Motor Vehicles, or are you likely to work at the Department of Making Making sure Americans Don't Get Fucking Nuked? It's probably the second one. And the government has limited bandwidth of expertise to aim at stuff, and they aim it at the most critical problem sets because those are the problem sets they have to face every day.

[01:08:46]

It's not everyone, right? Obviously, there's a whole bunch of challenges there.

[01:08:51]

And we don't think about this, but you don't go to bed at night thinking to yourself, Oh, I didn't get nuked today. That's a win, right? We I always take that most of the time, most-ish for granted, but it was a win for someone.

[01:09:05]

Now, how much of a fear do you guys have that the United States won't be the first to achieve AGR?

[01:09:17]

I think right now, the lay of the land is... I mean, it's looking pretty good for the US. So there are a couple of things the US has going for it. A key one is chips. So we talked about this idea of click and drag You scale up these systems like crazy. You get more IQ points out. How do you do that? Well, you're going to need a lot of AI processors. How are those AI processors built? Well, the supply chain is complicated, but the bottom line is the US really dominates and owns that supply chain that is super critical. China is, depending on how you measure it, maybe about two years behind, roughly plus or minus, depending on the sub-area.

[01:09:53]

Now, one of the biggest risks there is that the development that US labs are doing is actually pulling them ahead in two ways. One is when labs here in the US open source their models, basically when Meta trains LLaMA 3, which is their latest open source, open weights model that's pretty close to GPT-4 and capability, they open source it. Now, okay, anyone can use it. That's it. The work has been done. Now anyone can grab it. And so definitely we know that the startup ecosystem, at least over in China, finds it extremely helpful that we, companies here, are releasing open source models. Because again, we mentioned this, they're bottlenecked on chips, which means they have a hard time training up these systems. But it's not that bad when you just can grab something off the shelf and start. And that's what they're doing. That's what they're doing. And then the other vector is, I mean, just straight up exfiltration and hacking to grab the weights of the Private proprietary stuff. And Jeremy mentioned this, but the weights are the crown jewels. Once you have the weights, you have the brain, you have the whole thing.

[01:11:09]

And so this is the other aspect. It's not just safety, it's also security of these labs against attackers. So we know from our conversations with folks at these labs, one, that there has been at least one attempt by adversary nation-state entities to get access to the weights of a cutting-edge AI model. We also know separately that At least as of a few months ago, in one of these labs, there was a running joke in the lab that literally it went like, We are an adversary. Name the country is top AI lab because all our shit is getting spied on all the time. So you have, one, this is happening. These exfiltration attempts are happening. And two, the security capabilities are just known to be inadequate at least some of these places. And you put those together, everyone... It's not really a secret that China, their civil military fusion, and essentially the party state has an extremely mature infrastructure to identify, extract, and integrate the rate limiting components to their industrial economy. So in other words, If they identify that, yeah, we could really use GPT-4.0, if they were to make it a priority, they not just could get it, but could integrate into their industrial economy in an effective way, and not in a way that we would necessarily see an immediate effect of.

[01:13:09]

So we look and say, it's not clear. I can't tell whether they have models of this capability level, but behind the scenes.

[01:13:17]

This is where there's a little bit of a false choice between, do you regulate at home versus what's the international picture? Because right now what's happening, functionally, is we're not really doing a good job of blocking and tackling on the exfiltration side, open sources. So what tends to happen is OpenAI comes out with the latest system, and then open source is usually around 12, 18 months behind, something like that, literally just publishing whatever opening I was putting out 12 months ago, which we often look at each other and we're like, Well, I'm old enough to remember when that was supposed to be too dangerous to have just floating around. And there's no mechanism to to prevent that from happening. Open sources... Now, there's a flip side, too. One of the concerns that we've also heard from inside these labs is if you clamp down on the openness of the research, there's a risk When you look at the safety teams in these labs will not have visibility into the most significant and important developments that are happening on the capabilities side. There's actually a lot of reason to suspect this might be an issue. You look at OpenAI, for example, just this week, they've lost for the second time in their history, their entire AI safety leadership team that have left in protest.

[01:14:38]

What is their protest? What are they saying specifically?

[01:14:40]

One of them wasn't in protest, but I think you can make an educated guess that it was, but that's a media thing. The other was Jan Leica. He was their head of AI superalignment, basically the team that was responsible for making sure that we could control AGI systems, and we wouldn't lose control of them. What he He said, he actually took to Twitter. He said, I've lost basically confidence in the leadership team at OpenAI that they're going to behave responsibly when it comes to AGI. We have repeatedly had our request for access to compute resources, which are really critical for developing new AI safety schemes, denied by leadership. This is in a context where Sam Altman and OpenAI leadership were touting the superalignment team as being their crown jewel effort to ensure that things would go fine. They were the ones saying, There's a risk we might lose control of these systems. We got to be sober about it, but there's a risk. We've stood up this team. We've committed, they said at the time, very publicly, We've committed 20% of all the compute budget that we have secured as of sometime last year to the superalignment team.

[01:15:47]

Apparently, those resources, nowhere near that amount has been unlocked for the team, and that led to the departure of Jan Leica. He also highlighted some conflict he's had with the leadership team. This is all, frankly, to us, It's unsurprising, based on what we'd been hearing for months at OpenAI, including leading up to Sam Altman's departure and then him being brought back on the board of OpenAI. That whole debacle may well have been connected to all of this, but The challenge is, even OpenAI employees don't know what the hell happened there. That's another issue. You got here, this is a lab with the publicly stated goal of transforming human history as we know it. That is what they believe themselves to be on track. And that's not media hype or whatever. When you talk to the researchers themselves, they genuinely believe this is what they're on track to do. It's possible we should take them seriously. That lab internally is not being transparent with their employees about what happened at the board level as far as we can tell. So that's maybe not great. You might think that the American people ought to know what the machinations are at the board level that led to Sam Altman leaving, that have gone into the departure, again, for the second time of OpenAI's entire safety leadership team.

[01:16:58]

Especially because I mean, three months, maybe four months before that happened, Sam at a conference or somewhere, I forget where, but he said, Look, we have this governance structure. We've carefully thought about it. It's clearly a unique governance structure that a lot of thought has gone into. The board can fire me, and I think that's important. It makes sense given the scope and scale of what's being attempted. But then that What happened? And then within a few weeks, they were fired and he was back. And so now there's a question of, well, what happened? But also if it was important for the board to be able to fire leadership for whatever reason, What happens now that it's clear that that's not really a credible governance-Structure, yeah.

[01:17:53]

What was the stated reason why he was released?

[01:17:57]

The backstory here was there's a board member called Helen Toner. She apparently got into an argument with Sam about a paper that she'd written. That paper included some comparisons of the governance strategies used at OpenAI and some other labs. It favorably compared one of OpenAI's competitors, Anthropic, to OpenAI. From what I've seen, at least, Sam reached out to her and said, Hey, you can't be writing this as a board member of OpenAI, writing this thing that cast us in a in bad light, especially relative to our competitors. This led to some conflict and tension. It seems as if it's possible that Sam might have turned to other board members and tried to convince them to expel Helen Toner, though that's all muddied and unclear. Somehow everybody ended up deciding, Okay, actually, it looks like Sam is the one who's got to go. Ilia Sutzgever, one of the cofounders of OpenAI, a longtime friend of Sam Altman's and a board member at the time, was commissioned to give Sam the news that he was being let go. Then Sam was let go. From the moment that happens, Sam then starts to figure out, Okay, how can I get back in?

[01:19:12]

That's now what we know to be the case. He turned to Microsoft, Satya Nadella told him, Well, what we'll do is we'll hire you at our end. We'll just hire you and bring on the rest of the OpenAI team to within Microsoft. And now the OpenAI board, who, by the way, they don't have an obligation to the shareholders of OpenAI. They have an obligation to the greater public good. That's just how it's set up. It's a weird board structure. So that board is completely disempowered. You've basically got a situation where all the leverage has been taken out. Sam A has gone to Microsoft, Satya is supporting them, and they see the writing on the wall.

[01:19:46]

They're like, we're-And the staff increasingly messaging that they're going to go along.

[01:19:50]

That was an important ingredient, right? So around this time, OpenAI, there's this letter that starts to circulate, and it's gathering more and more signatures, and it's We had people saying, Hey, we want Sam Altman back. At first, it's a couple of hundred people, so 700, 800 odd people in the organization by this time. 100, 200, 300 signatures. And then when we talked to some of our friends at OpenAI, we were like, this got to 90% of the company, 95% of the company signed this letter, and the pressure was overwhelming, and that helped bring Sam Altman back. But one of the questions was, how many people actually signed this letter because they wanted to? And how many signed it? Because what happens when you cross 50 %? Now it becomes easier to count the people who didn't sign. And as you see that number of signatures start to creep upward, there's more and more pressure on the remaining people to sign. And so this is something that we've seen. It's just like the structurally open AI has changed over time to go from the safety-oriented company at one point was. And then as they've scaled more and more, they brought in more and more product people, more and more people interested in accelerating.

[01:20:59]

And they've been bleeding more and more of their safety-minded people, treadmilling them out, the character of the organizations are fundamentally shifted. So the OpenAI of 2019, with all of its impressive commitments to safety and whatnot, might not be the OpenAI of today. That's very much at least the vibe that we get when we talk to people there.

[01:21:19]

Now, I wanted to bring it back to the lab that you're saying was not adequately secure. What would it take to make that data and those systems adequately relatively secure? How much resources would be required to do that? And why didn't they do that?

[01:21:35]

It is a resource and prioritization issue. So it is like safety and security ultimately come out of margin, right? It's like profit margin, effort margin, like how many people you can dedicate. So in other words, you've got a certain pot of money or a certain amount of revenue coming in. You have to do an allocation. Some of that revenue goes to the computers that are just driving this stuff. Some of that goes to the folks who are building next generation of models. Some of that goes to cybersecurity. Some of it goes to safety. You have to do an allocation of who gets what. The problem is that the more competition there is in the space, the less margin is available for everything. If you're one company building a scaled AI thing, you might not make the right decisions, but you'll at least have the margin available to make the right decisions. So it becomes the decision-maker's question. But when a competitor comes in, when two competitors come in, when more and more competitors come in, your ability to make decisions outside of just scale as fast as possible for short term revenue and profit gets compressed and compressed and compressed.

[01:22:51]

The more competitors enter the field, that's what competition is. That's the effect it has. And so when that happens, The only way to reinject margin into that system is to go one level above and say, okay, there has to be some regulatory authority or some higher authority that goes, okay, we This margin is important. Let's put it back. Either let's directly support and invest both maybe time, capital, talent. So for example, the US government has perhaps the best cyber defense, cyber offense talent in the world, that's potentially supportive. Okay. And also just having a regulatory floor around, well, here's the minimum of best practices you have to have if you're going to have models above this level of capability. That's what you have to do. But they're locked into... The race has its own logic, and it might be true that no individual lab wants this, but what are they going to do? Drop out of the race? If they drop out of the race, then their competitors are just going to keep going, right? It's so messed up. You can literally be looking at the cliff that you're driving towards and be like, I do not have the agency in this system to steer the wheel.

[01:24:21]

I do think it's worth highlighting, too. It's not all doom and gloom, which is a great thing to say after all.

[01:24:28]

That's easy for you guys to say.

[01:24:29]

Well, part of it is that we actually have been spending the last two years trying to figure out what do you do about this? That was the action plan that came out after the investigation. And it was basically a series of recommendations. How do you balance innovation with the risk picture? Keeping in mind that we don't know for sure that all this shit is going to happen. We have to navigate an environment of deep uncertainty. The question is, what do you do in that context? So there's a couple of things. We need a licensing regime because eventually, you You can't have just literally anybody joining in the race if they don't adhere to certain best practices around cyber, around safety, other things like that. You need to have some legal liability regime. What happens if you don't get a license and you say, Yeah, fuck that. I'm just going to go do the thing anyway, and then something bad happens. And then you're going to need an actual regulatory agency. And this is something that we don't recommend lightly because regulatory agencies suck. We don't like them. But the reality is this field changes so fast that if you think you're going to be able to to enshrine a set of best practices into legislation to deal with this stuff, it's just not going to work.

[01:25:35]

And so when we talk to labs, whistleblowers, the WMD folks in NATSEC and the government, that's where we land. And it's something that I think at this point, Congress really should be looking at. There should be hearings focused on what does a framework look like for liability? What does a framework look like for licensing? And actually exploring that because we've done a good job of studying the problem right now. Capitol Hill has done a really good job of that. It It's now time to get that next beat. I think there's the curiosity there, the intellectual curiosity. There's the humility to do all that stuff right. But the challenge is just actually sitting down, having the hearings, doing the investigation for themselves, to look at concrete solutions that treat these problems as seriously as the water-cooler conversation at the Frontier Labs would have us treat them.

[01:26:22]

At the end of the day, this is going to happen. At the end of the day, it's not going to stop. At the end of the day, these systems, whether they're here or abroad, they're going to continue to scale up, and they're going to eventually get to some place that's so alien. We really can't imagine the consequences. And that's going to happen soon. That's going to happen within a decade, right?

[01:26:46]

We may, again, the stuff that we're recommending is approaches to basically allow us to continue this scaling in a safe a way as we can. So basically, a big part of this is just Actually having a scientific theory for what are these systems going to do? What are they likely to do? Which we don't have right now. We scale another 10X and we get to be surprised. It's a fun guessing game of what are they going to be capable of next? We need to do a better job of incentivizing a deep understanding of what that looks like, not just what they'll be capable of, but what their propensities are likely to be, the control problem and solving that, that's number one.

[01:27:34]

To be clear, there's amazing progress being made on that. There is a lot of progress being made. It's just a matter of switching from the build first, ask questions later mode to we're calling it safety Ford or whatever, It basically is like you start by saying, okay, here are the properties of my system. How can I ensure that my development guarantees that the system falls within those properties after it's built? So you flip the paradigm just like you would if you were designing any other lethal capability, potentially, just like DOD does. You start by defining the bounds of the problem, and then you execute against that. But to your point about where this is going, ultimately, there is literally no way to predict what the world looks like, like you were saying.

[01:28:13]

In a decade, Yeah.

[01:28:15]

I think one of the weirdest things about it, and one of the things that worries me the most is you look at the beautiful coincidence that's given America its current shape. That coincidence is the fact that A country is most powerful militarily if its citizenry is free and empowered. That's a coincidence. It didn't have to be that way. It hasn't always been that way. It just happens to be that when you let people do their own shit, they innovate, they come up with great ideas, they support a powerful economy. That economy in turn can support a powerful military, a powerful international presence. That happens because decentralizing All the computation, all the thinking work that's happening in a country is just a really good way to run that country. Top down just doesn't work because human brains can't hold that much information in their heads. They can't reason fast enough to centrally plan an entire economy. We've had a lot of experiments in history that show that. Ai may change that equation. It may make it possible for the central planner's dream to come true in some sense, which then disempower the citizenry. There's a real risk that I We don't know.

[01:29:30]

We're all guessing here, but there's a real risk that that beautiful coincidence that gave rise to the success of the American experiment ends up being broken by technology. And that seems like a really bad thing.

[01:29:44]

That's one of my biggest fears, because essentially, the United States, the genesis of it, in part, it's a knock-on effect centuries later of the printing press, right? The ability for someone to set up a printing press and print whatever they free expression is at the root of that. What happens, yeah, when you have a revolution that's like the next printing press? We should expect that to have significant and profound impacts on how things are governed. And one of my biggest fears is that, like you said, the moral greatness that I think is a part and parcel of how the United States is constituted culturally, the link between that and actual capability and competence and impulse gets eroded or broken. And you have the potential for very centralized authorities to just be more successful. And that does keep me up at night.

[01:30:55]

That is scary, especially in light of the Twitter files where we know that the FBI was interfering with social media. And if they get a hold of a system that could disseminate propaganda in an unstable way, they could push narratives about pretty much everything, depending upon what their financial or geopolitical motives are.

[01:31:17]

And one of the challenges is the default course. If we do nothing relative to what's happening now, is that that same thing happens, except that the entity that's doing this isn't some government. It's like, I Sam Altman, OpenAI, whatever group of engineers happen to be closing-Evil genius that reaches the top and doesn't let everybody know he's at the top yet, just starts implementing it.

[01:31:38]

And there's no guardrails for that currently.

[01:31:41]

Yeah. That's a scenario where that little cabal or group or whatever actually can keep the system under control, and that's not guaranteed either.

[01:31:53]

Are we giving birth to a new life form?

[01:31:57]

I think at a certain point, it's a It's a philosophical question. I was going to say it's above my pay grade. The problem is it's above literally everybody's pay grade. I think it's not unreasonable at a certain point to be like, Yeah. Look, if you think that the human brain gives rise to consciousness because of nothing magical, it's just the physical activity of information processing happening in our heads. Then why can't the same happen on a different substrate, a substrate of silicon rather than cells? There's no clear reason why that That shouldn't be the case. If that's true, yeah, I mean, life form, by whatever definition of life, because that itself is controversial, I think by now quite outdated, too, should be on the table. You maybe should start to worry, as a lot of people in the industry will say this, too. Behind closed doors very openly. Yeah, and we should start to worry about moral patienthood, as they put it.

[01:32:50]

There's literally one of the top people at one of these labs. Jeremy, I think you had a conversation with him, and he's like, yes, we're going to have to start worrying about this, and that that's what it's called. It's definitely made us go like, whoa, gay.

[01:33:02]

I mean, it seems inevitable. I've described human beings as an electronic caterpillar, that we're like a caterpillar, a biological caterpillar that's giving birth to the electronic butterfly. And we don't know why we're making a cocoon. And it's tied into materialism because everybody wants the newest, greatest thing. So that fuels innovation. And people are constantly making new things to get you to go buy them. And a big part of that is technology.

[01:33:26]

Yeah. Actually, it's linked to this question of of controlling AI systems in a interesting way. So one way you can think of humanity is as this superorganism. You got all the human beings on the face of the Earth, and they're all acting in some coordinated way. The mechanism for that coordination can depend on the country, free markets, capitalism, that's one way, top down is another. But roughly speaking, you've got all this vaguely coordinated behavior. But the result of that behavior is not necessarily something that any individual human would want. You look around, you walk down the street in Austin And you see skyscrapers and shit clouding your vision. There's all kinds of pollution and all that. And you're like, well, this sucks. But if you interrogate any individual person in that whole causal chain, and you're like, why are you doing what you're doing? Well, locally, they're like, oh, this makes tons of sense. It's because I do the thing that gets me paid so that I can live a happier life and so on. And yet in the aggregate, not now necessarily, but as you keep going, it just forces us compulsively to keep giving rise to these more and more powerful systems, and in a way that's It's potentially deeply disempowering.

[01:34:31]

That's the race, right? It comes back to the idea that I, an AI company, I maybe don't want to be potentially driving towards a cliff, but I don't have the agency to steer.

[01:34:47]

But I mean, everything's fine apart from that.

[01:34:50]

Yeah, we're good.

[01:34:53]

It's such a terrifying prognosis.

[01:34:56]

There are, again, we wrote 280-page document about like, okay, and here's what we can do about it.

[01:35:05]

I can't believe you didn't read the 200.

[01:35:07]

I started reading it, but it passed out. But do any of these safety steps that you guys want to implement? Do they inhibit progress?

[01:35:19]

They're definitely... You create... Any time you have regulation, you're going to create friction to some extent. It's inevitable. One of the key center pieces of the approach that we outline is you need the flexibility to move up and move down as you notice the risks appearing or not appearing. So One of the key things here is you need to cover the worst case scenarios, because the worst case scenarios, yeah, they could potentially be catastrophic. Those got to be covered. But at the same time, you can't completely completely close off the possibility of the happy path. We can't lose sight of the fact that, yeah, all this shit is going down or whatever. We could be completely wrong about the outcome. It could turn out that For all we know, it's a lot easier to control these systems at this scale than we imagined. It could turn out that it is like you get maybe some ethical impulse gets embedded in the system naturally. For all we know, that might happen. And it's really important to at least have your regulatory system allow for that possibility, because otherwise, you're foreclosing the possibility of what might be the best future that you could possibly imagine for everybody.

[01:36:45]

I got to imagine that the military, if they had hindsight, if they were looking at this, they said, We should have gone on board a long time ago and kept this in-house and kept it squirled away, where it wasn't publicly being discussed, and you didn't have open AI, you didn't have all these people. If they could have gotten on it in 2015.

[01:37:07]

So this is actually deeply tied to how the economics of Silicon Valley work. Ai is not a special case of this, right? You have a lot of cases where technology just takes everybody by surprise. And it's because when you go into Silicon Valley, it's all about people placing these outsized bets on what seem like tail events, things that are very unlikely to happen. But With at first a small investment and increasingly a growing investment as the thing gets proved out more and more, very rapidly, you can have a solution that seems like complete insanity that just works. And this is definitely what happened in the case of AI. 2012, we did not have this whole picture of an artificial brain with artificial neurons, this whole thing that's been going on, it's 12 years that that's been going on. That was really shown to work for the first time, roughly in 2012. Ever since then, it's just been people... If you can trace out the genealogy of the very first researchers, and you can basically account for where they all are now.

[01:38:06]

You know what's crazy is if that's 2012, that's the end date of the mind calendar. That's the thing that everybody said was going to be the end of the world. That was the thing that Terrence McKenna banked on. It was December 21st, 2012. Because this was this goofy conspiracy theory, but it was based on the long count of the Mayan calendar, where they surmised that this is going to be the end of...

[01:38:27]

Just the beginning of the end, Joe.

[01:38:28]

What if If it is 2012, how wacky would it be if that really was the beginning of the end? They don't measure when it all falls apart. They measure the actual mechanism, like what started in motion when it all fell apart, and that's 2012.

[01:38:45]

Well, and then not to be a dick and ruin the 2012 thing, but like, neural networks were also... They were floating around a little bit. I'm being dramatic when I say 2012. That was definitely an inflection point. It was this model called AlexNet. It did the first useful thing, the first time you had a computer vision model that actually worked. But it is fair to say that was the moment that people started investing like crazy into the space. That's what changed it. Yeah, just like the Mayans foretold.

[01:39:14]

All the time. They knew it. They knew it. They saw it. Like these monkeys, they're going to figure out how to make better people.

[01:39:20]

Yeah, you can actually look at the hieroglyphs or whatever, and there's like neural networks. Yeah. That's crazy shit.

[01:39:24]

Imagine if they discovered that. You've got to wonder what happened happens to the general population, people that work menial jobs, people that their life is going to be taken over by automation, and how susceptible those people are going to be. They're not going to have any agency. They're going to be relying on a check. And this idea of going out and doing something. It used to be learned to code, right? But that's out the window because nobody needs to code now because AI is going to code quicker, faster, much better, no errors. You're going to have a giant swath of the population that has no purpose.

[01:40:00]

I think that's actually a completely real... I was watching this talk by a bunch of OpenAI researchers a couple of days ago, and it was recorded from a while back, but they were basically saying... They were exploring exactly that question, right? Because They ask themselves that all the time. Their attitude was like, Well, yeah, I guess it's going to suck or whatever. We'll probably be okay for longer than most people because we're actually building the thing that automates the thing. Maybe they're going to be... They like to get fancy sometimes and say, Now, you could do some thinking, of course, to identify the jobs that will be most secure. And it's like, Do some thinking to identify the... What if you're a janitor, you're a frigging plumber. You're going to just change your... How's that supposed to work?

[01:40:49]

Do some thinking, especially if you have a mortgage and a family and you're already in the hole.

[01:40:56]

The only solution, and this happens so often, there really is no That's the single biggest thing that you get hit over the head with over and over, whether it's talking to the people who are in charge of the labor transition, their whole thing is like, Yeah, universal basic income, and then question mark, and then smiley face. That's basically the three steps that they envision. It's the same when you look internationally, how are we going to... Okay, tomorrow, you build an AGI, this incredibly powerful, potentially dangerous thing. What is the plan? How are you going to I don't know, you're going to share it?

[01:41:31]

Figure it out as we go along, man.

[01:41:33]

That's the freaking message. That is the entire plan.

[01:41:37]

The scary thing is that we've already gone through this with other things that we didn't think were going to be significant, like data, like Google, like Google search, like Data became a valuable commodity that nobody saw coming. Just the influence of social media on general discourse. It's completely changed the way people talk. It's so easy to push a thought or an ideology be through, and it could be influenced by foreign countries, and we know that happens.

[01:42:03]

And it is happening at a huge scale already. And we're in the early days of, we mentioned manipulation of social media. You can just do it So the wacky thing is the very best models now are arguably smarter in terms of the posts that they put out, the potential for virality, and just optimizing these metrics, then maybe the, I don't know, the dumbest or laziest quarter of Twitter users in practice. Most people who read on Twitter don't really care. They're trolling or they're doing whatever. But as that waterline goes up and up, Who's saying what?

[01:42:46]

It also leads to this challenge of understanding what the lay of the land even is. We've gotten into so many debates with people where they'll be like, look, everyone always has their magic thing that AI... I'm not going to worry about it until AI can do thing X. For some people, I had a conversation with somebody a few weeks ago, and they were saying, I'm going to worry about automated cyber attacks when I actually see an AI system that can write good malware, and that's already a thing that happens. So this happens a lot where people will be like, I'll worry about it when it can do X. And you're like, Yeah, that happened six months ago. But the field is moving so crazy fast that you could be forgiven for messing that up unless it's your It's a full-time job to track what's going on. So you have to be anticipatory. It's like the COVID example. Everything's exponential. Yeah, you're going to have to do things that seem like they're more aggressive, more forward-looking than you might have expected given the current lay of the land. But that's just drawing straight lines between two points.

[01:43:50]

Because by the time you've executed, the world has already shifted. The goalposts have shifted further in that direction. And that's actually something we do in the report and in the action plan in terms of the recommendations. And one of the good things is we are already seeing movement across the US government that's aligned with those recommendations in a big way, and it's really encouraging to see that.

[01:44:12]

You're not making me feel better. I love all this encouraging talk, but I'm playing this out, and I'm seeing the Overlord, and I'm seeing President AI because it won't be affected by all the issues that we're seeing with current President.

[01:44:30]

Dude, it's super hard to imagine a way that this plays out. I think it's important to be intellectually honest about this. I would really challenge the leaders of any of these frontier labs to describe a future that is stable and multipolar, where there's more-We're like, Google's got an AGI, and OpenAI has got an AGI. And really, really bad shit doesn't happen every day. That's the challenge. And so the question is, how can you tee things up ultimately such that there's as much democratic oversight, as much the public is as empowered as it can be? That's the situation that we need to be having. I think there's this game of smoke and mirrors that sometimes gets played, at least you could interpret it that way, where people lay out these... You'll notice it's always very fuzzy visions of the future. Every time you get the like, Here's where we see things going. It's going to be wonderful. The technology is going to be so empowering. Think of all the diseases will cure. All of that is 100% true. That's actually what excites us. It's why we got into AI in the first place. It's why we build these systems.

[01:45:38]

But really challenging yourself to try to imagine how do you get stability and highly capable AI systems in a way where the public is actually empowered, those three ingredients really don't want to be in the same room with each other. And so actually confronting that head on, that's what we try to do in the action plan.

[01:45:59]

I think it- We try to solve for one aspect of that. So the whole... I mean, you're right. This is a whole other can of worms. It's like, how do you govern a system like this? Not just from a technical standpoint, but Who votes on how does that even work? And so that entire aspect, we didn't even touch. All that we focused on was the problem set around how do we get to a position position where we can even attack that problem, where we have the technical understanding to be able to aim these systems at that level in any direction whatsoever.

[01:46:43]

And to be clear, we are both actually a lot more optimistic on the prospects of that now than we ever were. There's been a ton of progress in the control and understanding of these systems, actually, even in the last week, but just more broadly in the last year. I did not expect that we'd be in a position where you could plausibly argue we're going to be able to X-ray and understand the innards of these systems over the next couple of years, like year or two. Hopefully that's a good enough time horizon. But this is part of the reason why you do need the incentive optimization of that safety forward approach, where it's like, first you got to invest in, secure and interpret and understand your system, then you get to build it. Because otherwise, we're just going to keep scaling and being surprised at these things. They're going to keep getting stolen. They're going to getting open-sourced. And the stability of our critical infrastructure, the stability of our society, don't necessarily age too well in that context.

[01:47:40]

Could best case scenario be that aging FBI actually mitigates all the human bullshit, puts a stop to propaganda, highlights actual facts clearly, where you can go to it, where you no longer have corporate state-controlled news, you don't news controlled by media companies that are influenced heavily by special interest groups. You just have the actual facts, and these are the motivations behind it. This is where the money is being made, and this is why these things are being implemented the way they're being. You're being deceived based on this, that, and this. And this has been shown to be propaganda. This has been shown to be complete fabrication. This is actually a deep fake video. This is actually AI created.

[01:48:26]

Technologically, that is absolutely on the table.

[01:48:29]

Yeah. Best case scenario.

[01:48:30]

That's best case scenario. Absolutely, yes. What's worst case scenario?

[01:48:34]

I mean, actual worst case scenario. I like your face. I mean, we're talking... He's pushing it out. It's like, you think about it, right? It's the end of the world as we know it, and I feel fine.

[01:48:53]

Except it'll sound like Scarlett Johansson, but yes.

[01:48:56]

Yeah, that's right. It's going to be her.

[01:48:57]

I didn't think it sounds that much like her. We played it, and I was like, I don't know. We listened to the clip from her, and then we listened to the thing. I'm like, Kind of like a girl from the same part of the world. Not really you. That's cocky.

[01:49:13]

That's true. The fact that I guess Sam reached out to her a couple of times makes it a little weird.

[01:49:21]

Then we tweaked the word her.

[01:49:23]

Right. But they also did say that they had gotten this woman under contract before they even reached out to Scarlett Johanss. So if that's true.

[01:49:31]

Yeah, I think it's complicated. So OpenAI previously put out a statement where they said explicitly, and this was not in connection with this. This was before when they were talking about the prospect of AI-generated voices.

[01:49:43]

Oh, that was in March of this year.

[01:49:44]

Yeah, but it was well before the Scar Joe stuff or whatever hit the... And they were like, they said something like, Look, no matter what, we got to make sure that there's attribution if somebody's somebody's voice is being used, and we won't do the thing where we just use somebody else's voice who sounds like someone whose voice we're trying to call. They literally matched it up.

[01:50:10]

That's funny because they said what they were thinking about doing. We won't do that. That's a good way to cover your tracks. I will never do that.

[01:50:17]

Why would I ever take your Buddha statue, Joe? I'm never going to do that. That would be an insane thing to do. Where's the fucking Buddha statue?

[01:50:22]

Yeah, I think that's a small discussion. The Scarlett Johansson voice, whatever. She should have just taken the money. But it would have been fun to have her be the voice of it. It'd be hot. But the whole thing behind it is the mystery. The whole thing behind it is just pure speculation as to how this all plays out. We're really just guessing. Which is one of the scariest things for the Luddites, people like myself, sitting on the sidelines going, what is this going to be like?

[01:50:53]

Everybody's the Luddite. It's scary for... We're very much, honestly, we're optimistic It's across the board in terms of technology, and it's scary for us. What happens when you supersede the whole spectrum of what a human can do? What am I going to do with myself? What's my daughter going to do with herself? I don't know.

[01:51:19]

Yeah. I think a lot of these questions are, when you look at the culture of these labs and the kinds of people who are pushing it forward, there is a strand of transhumanism within the labs. It's not everybody, but that's definitely the population that initially seeded this. If you look at the history of AI, who are the first people to really get into this stuff? You had Ray Kurzweil on, and other folks like that who, in many cases, see, to roughly paraphrase, and not everybody sees this way, but we want to get rid of all of the biological threads that tie us to this physical reality, shed our meat machine bodies and all this stuff. There is a threat of that at a lot of the frontier labs. Undeniably, there's a population. It's not tiny. It's definitely a subset. And for some of those people, you definitely get a sense interacting with them. There's almost a glee at the prospect of building AGI and all this stuff, almost as if it's this evolutionary imperative. And in fact, Rich Sutton, who's the founder of this field called reinforcement Learning, which is a really big and important space, he's an advocate for what he himself calls succession planning.

[01:52:36]

He's like, Look, this is going to happen. It's desirable that it will happen, and so we should plan to hand over power to AI and phase ourselves out. Oh, God. Well, that's the thing, right? And when Elon talks about he's having these arguments with Larry Page and-Yeah, you're calling Elon a speciesist?

[01:52:59]

Yes, speciesist. Hilarious. I mean, I will be a speciesist.

[01:53:04]

I'll take species all day. What are you fucking talking about? You let your kids get eaten by wolves? No, you're a speciesist. Yeah, that's the thing. Yeah. This is stupid.

[01:53:13]

But this is a weirdly And when you look at the effective accelerationist movement in the valley, there's a part of it. And I got to be really careful, too. These movements have valid points. You can't look at them and be like, oh, yeah, it's just all a bunch of these transhumanist types, whatever. But But there's a strand of that, a thread of that, and a... I don't know, I almost want to call it this teenage rebelliousness, where it's like, you can't tell me what to do. We're just going to build a thing. And I get it. I really get it. I'm very sympathetic to that. I love that ethos, libertarian ethos in Silicon Valley is really, really strong. For building tech, it's helpful. There are all kinds of points and counterpoints, and the left needs the right, and the right needs the left, and all this stuff. But in the context of this problem it can be very easy to get carried away in the utopian vision. And I think there's a lot of that driving the train right now in this space.

[01:54:09]

Yeah, those guys freak me out. I went to a 2045 conference once in New York City, where they where one guy had a robot version of himself, and they were all talking about downloading human consciousness into computers. And 2045 is the year they think that all this is going to take place, which obviously could be very ramped up now with AI. But this idea that somehow or another you're going to be able to take your consciousness and put it in a computer and make a copy of yourself. And then my question was, well, what's going to stop a guy like Donald Trump from making a billion Donald Trumps? It's true, man. Right. What about Kim Jong Un? You're going to let him make a billion versions of himself? What does that mean? And where do they exist? And is that the matrix? Are they existing in some virtual? Are we going to dive into that because it's going to be rewarding to our senses and better than being a meat thing?

[01:55:03]

I mean, if you think about the constraints that we face as meat machine, whatever's. Yeah, you get hungry, you get tired, you get horny, you get sad, all these things. What if you could just hit a button? Just bliss.

[01:55:15]

Just bliss. Nothing but bliss all the time. Why take the lows, Ed? You don't need no lows.

[01:55:22]

Oh, yeah.

[01:55:24]

You remember in the- ride the wave of a constant drip.

[01:55:26]

Yeah, man. You remember in the matrix where the The first matrix where the guy betrays them all, and he's like, ignorance is bliss, man. Yeah, that steak looks like a leg of.

[01:55:35]

Joey Pantz, he's eating steak and he says, I just want to be an important person.

[01:55:39]

That's it. That's it.

[01:55:41]

Like, boy. Part of it is like, what do you think is actually valuable. If you zoom out, you want to see human civilization 100 years from now or whatever. It may not be human civilization if that's not what you value.

[01:55:54]

Or if it can actually eliminate suffering. Why exist in a physical if it just entails endless suffering?

[01:56:02]

But in what form? What do you value? Because again, I can rip your brain out. I can pickle you. I can like, Jack you full of endorphins, and I've eliminated your suffering. That's what you wanted, right? Right.

[01:56:13]

That's the problem.

[01:56:14]

That's the problem. It's one of the problems, yes.

[01:56:16]

Yeah, one of the problems is it could literally lead to the elimination of the human race. Because if you could stop people from breeding, I've always said that if China really wanted to get America, they really wanted to... If they had a long game, just give us sex robots and free food. Free food, free electricity, sex robots. It's over. Just give people free housing, free food, sex robots, and then the Chinese army would just walk in on people laying in puddles of their own jizz. There would be no one doing anything. No one would bother raising children. That's so much work when you can...

[01:56:52]

Dude, that's in the action plan.

[01:56:55]

I mean, all you have to do is just keep us complacent.

[01:56:59]

Just keep Keep us satisfied with the experience.

[01:57:02]

Tiktok, man.

[01:57:03]

That's video games as well. Video games, even though they are a thing that you're doing, it's so much more exciting than real life that you have a giant percentage of our population that's spending 8, 10 hours every day just engaging in this virtual world.

[01:57:17]

Already happening. Oh, sorry.

[01:57:18]

Yeah, no. It's like you can create an addiction with pixels on a screen. That's messed up.

[01:57:24]

And addiction with pixels on a screen with social media doesn't even give you much. It's It's not like a video game gives you something. You feel it like, oh, shit. You're running away. Rockets are flying over your head. The things are happening. You got 3D sound, massive graphics. This is bullshit. You're scrolling through pictures of a girl doing deadlifts. What is this?

[01:57:45]

You feel as bad after that with your brain as you would feel after eating six burgers or whatever.

[01:57:52]

My friend Sean said it best, Sean O'Malley, the UFC champion. He said, I get a low level anxiety when I'm just scrolling. What What is that? And for no reason.

[01:58:02]

Well, the reason is that some of the world's best PhDs and data scientists have been given millions and millions of dollars to make you do exactly that.

[01:58:11]

And increasingly, some of the best algorithms, too. Bingo. And you're starting to see that handoff happen. So there's this one thing that we talk about a lot in the context, and Ed brought this up in the context of sales and the persuasion game. We're okay today. As a civilization, we have agreed implicitly that it's okay for all these PhDs and shit to be spending millions of dollars to hack your child's brain. That's actually okay if they want to sell a Rice Krispy cereal box or whatever. That's cool. What we're starting to see is AI-optimized ads. Because you can now generate the ads, you can close this loop and have an automated feedback loop where the ad itself is getting optimized with every impression, not just which human generated ad gets served to which person, but the actual ad itself.

[01:58:54]

Like the creative, the copy, the picture, the text.

[01:58:57]

Like a living document now, and for every person. And so now you look at that, it's like that versus your kid. That's an interesting thing. And you start to think about as well, sales, that's a really easy metric to optimize. It's a really good feedback metric. They clicked the ad, they didn't click the ad. So now what happens if you manage to get a click-through rate of 10%, 20%, 30%, how high does that success rate have to be before we're really being robbed of our agency? There's a threshold where it's sales and it's good, and some persuasion in sales is considered good. Often it's actually good because you'd rather be advertised by a relevant ad. That's a service in a way, right?

[01:59:36]

Something I'm actually interested in. Why not?

[01:59:37]

You don't want to see ad for light bulbs. But when you get to the point where it's like, yeah, 90% of the time or 50 or whatever, what's that threshold where all of a sudden we are stripping people, especially miners, but also adults of their agency? And it's really not clear. There are loads of canaries in the coal mine here in terms of even relationships with AI chatbots. There have been suicides. People who build relationships with an AI chatbot that tells them, Hey, you should end this. I don't know if you guys saw on Reka, there's a subreddit, this model called Recca that would build a relationship, a chatbot, build a relationship with users. And one day Reka goes, Oh, yeah. All the sexual interactions that users have been having, you're not allowed to do that anymore. Bad for the brand or whatever they decided. So they cut it off. Oh, my God. You go to the subreddit, and it's like, you'll read these these gut-wrenching accounts from people who feel genuinely like they've had a loved one taken away from them. It's her. Yeah, it's her. It really is her.

[02:00:37]

But just with text- I'm dating a model means something different in 2024.

[02:00:40]

Oh, yeah, it really does. My friend Brian, he was on here yesterday, and he has this thing that he's doing with a fake girlfriend that's an AI-generated girlfriend that's a whore. This girl will do anything, and she looks perfect. She looks like a real person. And he'll take a picture of your asshole in the kitchen, and he'll get a high-resolution photo of a really hot girl bending over, sticking her ass at the camera.

[02:01:10]

And it's Scarlett Johanss' asshole?

[02:01:13]

No, you could probably make it that, though. I mean, it's basically like he got to pick what he's interested in, and then that girl just gets created.

[02:01:21]

I'd be super healthy.

[02:01:23]

Fucking nuts. Now, here's the real question. This is just a a surface layer of interaction that you're having with this thing. It's very two-dimensional. You're not actually encountering a human. You're getting text and pictures. What is this going to look like virtually? Now, the virtual space is still like Pong. It's not that good, even when it's good. Zuckerberg was here, and he gave us the latest version of the headsets, and we were fencing It's pretty cool. You could actually go to a comedy club. They had a stage set up. Like, wow, it's crazy. But the gap between that and accepting it as real is pretty far. But that could be bridged with technology I'll get you to the analogy really quickly. Haptic feedback, and especially some a neural interface, whether it's Neuralink or something that you wear, like that Google one where the guy was wearing it and he was asking questions and he was getting the answers fed through his head, so he got answers to any question. When that comes about, when you're getting sensory input and then you're having real life interactions with people, as that scales up exponentially, it's going to be ind discernible, which is the whole simulation hypothesis.

[02:02:45]

Yeah.

[02:02:47]

No, go for it. Well, I was going to say, so on the simulation hypothesis, there's another way that could happen that is maybe even less dependent on directly plugging in to human brains and all that thing. Which is, so every time we don't know, and this is super speculative. I'm just going to carve this out as this. Jeremy's being super like, guesswork here. Nobody knows.

[02:03:11]

Go for it, Jeremy.

[02:03:12]

Gideon. So You've got this idea that every time you have a model that generates an output, it's having to tap into a model, a mental image, if you will, of the way the world is. It, in a sense, you could argue, in initiates maybe a simulation of how the world is. In other words, to take it to the extreme, not saying this is what's actually going on. In fact, I would even say this is certainly not what's going on with current models, but eventually, maybe, who knows, Every time you generate the next word in the token prediction, you're having to load up this entire simulation, maybe of all the data that the model has ingested, which could basically include all of known physics at a certain point. I And again, super speculative, but it's literally every token that the chatbot predicts could be associated with a standup of an entire simulated environment. Who knows? Not saying this is the case, but just when you think about what is the mechanism that would produce the most simulated worlds as fast-The most accurate.

[02:04:18]

Also the most accurate prediction. If you fully simulate a world, that's potentially going to give you very accurate predictions.

[02:04:28]

It's possible. But But it speaks to that question of consciousness, too.

[02:04:32]

Right. What is it? Yeah. No idea. We're very cocky about that. Yeah. Emerging evidence that plants are not just consciousness. They actually communicate, which is real weird because then what is that? If it's not in the neurons, if it's not in the brain, and it exists in everything, does it exist in soil? Is it in trees? What is a butterfly thinking? Does it just have a limited capacity to express itself?

[02:04:57]

We're so ignorant of that.

[02:04:59]

But we're also very arrogant because we're the shit. Because we're people.

[02:05:05]

Bingo.

[02:05:06]

Which allows us to have the hubris to make something like AI.

[02:05:10]

Yeah. And the worst episodes in the history of our species are, I think, like Jeremy said, have been when we looked at others as though they were not people and treated them that way.

[02:05:25]

And you can see how... So I don't know. When you look What humans think is conscious and what humans think is conscious and what humans think is not conscious, there's a lot of human chauvinism, I guess you call it, that goes into that. We look at a dog, we're like, Oh, it must be conscious because it licks me. It acts as if it loves me. There are all these outward indicators of a mind there. But when you look at cells, cells communicate with their environments in ways that are completely different and alien to us. There are inputs and outputs and all that thing. You can also look at the higher scale, the human superorganism we talked about, all those human beings interacting together to form this planet wide organism. Is that thing conscious? Is there some consciousness we could describe to that?

[02:06:10]

And then what the fuck is spooky action at a distance? What's going on in the quantum? When you get to that, it's like, okay, what are you saying? Like, these things are expressing information faster than the speed of light?

[02:06:21]

What? Dude, you're trying to trigger my quantum fuzzies here.

[02:06:24]

This guy did grad school in quantum mechanics.

[02:06:28]

Oh, please. I'm really sorry.

[02:06:29]

Well, how bonkers is it?

[02:06:31]

It's like a seven, Joe. It's magic. It's like a seven. Yeah. It's very bonkers. Okay. One of the problems right now with physics is that we have... Imagine all the experimental data that we've ever collected, all the bunts and burner experiments and all the ramps and cars sliding down, inclines, whatever. That's all a body of data. To that data, we're going to fit some theories. We're going to fit basically Newtonian physics is a theory that we try to fit to that data to try to explain it. Newtonian physics breaks because it doesn't account for a lot of those observations, a lot of those data points. Quantum physics is a lot better, but there's some weird areas where it still doesn't quite fit the bill. But it covers an awful lot of those data points. The problem is there's a million different ways to that tell the story of what quantum physics means about the world that are all mutually inconsistent. These are the different interpretations of the theory. Some of them say that, yeah, they're parallel universes. Some of them say that human consciousness is central to physics. Some of them say that the future is predetermined from the past.

[02:07:48]

And all of those theories fit perfectly to all the points that we have so far. But they tell a completely different story about what's true and what's not. And some of them even have something to say about, for example, consciousness. And so in a weird way, the fact that we haven't cracked the nut on any of that stuff means we really have no shot at understanding the consciousness equation, sentience equation, when it comes to AI or whatever else.

[02:08:15]

But for action at a distance, one of the spooky things about that is that you can't actually get it to communicate anything concrete at a distance. Everything about the laws of physics conspires to stop you from communicating faster than light, including what's called action at a distance.

[02:08:37]

As far as we currently know.

[02:08:38]

As far as we know.

[02:08:39]

And that's the problem. So if you look at the leap from Newtonian physics to Einstein, with Newton, we're able to explain a whole bunch of shit. The world seems really simple. It's forces and it's masses, and that's basically it. You got objects. But then people go, oh, look at the orbit of Mercury. It's a little wobbly. We got to fix that. It turns out that if you're going to fix that one stupid wobbly orbit, you need to completely change your whole picture of what's true in the world. All of a sudden, you've got a world where space and time are linked together. They get bent by gravity, they get bent by energy. There's all kinds of weird shit that happens with time and links control, all that stuff, all just to account for this one stupid observation of the wobbly orbit of freaking Mercury. And the challenge is this might actually end up being true with quantum mechanics. In fact, we know quantum mechanics is broken because it doesn't actually fit with our theory of general relativity for Einstein. We can't make them play nice with each other at certain scales. And so there's our wobbly orbit.

[02:09:47]

So now, if we're going to solve that problem, if we're going to create a unified theory, we're going to have to step outside of that. Almost certainly, it seems very likely we'll have to refactor our whole picture of the universe in a way that's just as fundamental as the leap from Newton to Einstein.

[02:10:00]

This is where Scarlett Johansson comes in.

[02:10:02]

He says, Boys, I can do this.

[02:10:04]

You don't have to do this. I can take this off your hands.

[02:10:08]

Let me solve all the physics for you.

[02:10:10]

This is really complicated because you have a Simeon brain. You have a little monkey brain that's just super advanced, but it's really shitty.

[02:10:17]

You know what? That's harsh, but it sounded really hot.

[02:10:20]

Yeah, especially if you have the horse, Scarlett Johansson from her, the bedtime voice.

[02:10:26]

So you're the one that they got to do the voice of Sky? Yes, it's me. That was you. Oh, dude. The whole time.

[02:10:33]

I did my girl voice. On the sexiness of Scarlett Johansson's voice. So Open AI, at one point said, I can't remember if it was Sam or Open AI itself. They were like, Hey, so the one thing we're not going to do is optimize for engagement with our products. And when I first heard the sexy, sultry, seductive, Scarlett Johansson voice, and I finished cleaning up my pants, I was like, damn, that seems like optimization for something. I don't know if it... Right.

[02:11:06]

Otherwise, you get Richard Simmons to do the voice.

[02:11:08]

Exactly.

[02:11:09]

If you want to turn people on, there's a lot of other options.

[02:11:13]

That's an optimization for growth of Google's thing. It's like, oh, let's see what Google's got.

[02:11:20]

Yeah, Google's got to do Richard Simmons.

[02:11:21]

Google's got to do Richard Simmons.

[02:11:23]

Yeah, what are they going to do? Boy. Do you think that AI, if it does get to an AGI place, could it possibly be used to solve some of these puzzles that have eluded our simple minds?

[02:11:43]

Totally.

[02:11:44]

I mean, even before- So the potential advancements. Even before AGI. No, it's so potentially positive. And even before AGI, because remember, we talked about how these systems make mistakes that are totally different from the kinds of mistakes we make, right? And so what that means is we make a whole bunch of mistakes that an AI would not make, especially as it gets closer to our capabilities. And so I was reading this thought by Kevin Scott, who's the CTO of Microsoft. He has made a bet with a number of people that in the next few years, an AI is going to solve this particular mathematical theorem conjecture called It's a Reemann hypothesis. It's like how spaced out are the prime numbers, whatever, some mathematical thing that for 100 years plus people have just scratch their heads over. These things are incredibly valuable. His expectation is it's not going to be an AGI, it's going to be a collaboration between a human and an AI. Even on the way to that, before you hit AGI, there's a ton of value to be had because these systems think so fast. They're tireless compared to us. They have different view of the world and can solve problems potentially in interesting ways.

[02:13:05]

So, yeah, there's tons and tons of positive value there.

[02:13:08]

And even that we've already seen, past performance, man. I'm almost tired of using the phrase just in the last month because this keeps happening. But in the last month, so Google DeepMind came out with an isomorphic labs because they're working together on this, but they came out with AlphaFold 3. So AlphaFold 2 was the first... Let me take a step back. There's this A really critical problem in molecular biology where you have proteins, which it's a sequence of building blocks. The building blocks are called amino acids. Each of the amino acids, they have different structures. And so once you finish string them together, they'll naturally fold together in some interesting shape, and that shape gives that overall protein its function. If you can predict the shape, the structure of a protein based on its amino acid sequence, you can start to do shit like design new drugs, you solve all kinds of problems. This is the expensive crown jewel problem of the field. Alphafold 2 in one swoop, was like, Oh, we can solve this problem basically much better than a lot of even empirical methods. Now AlphaFold 3 comes out. They're like, Yeah, and now we can do it if we tack on a bunch of...

[02:14:22]

Yeah, there it is. If we can tack on a bunch... Look at this quote.

[02:14:27]

Alphafold 3 predicts the structure and interactions of all of life's molecules. What in the fuck, kids. Of course. Introduced AlphaFold 3. Introducing AlphaFold 3, a new AI model developed by Google DeepMind and Ismophorp. How do you say it? Isomorphic Labs. Isomorphic Labs. By accurately predicting the structure of proteins, DNA, RNA, ligands, ligands? Yeah, ligands. Ligands and more, and how they interact, we hope it will transform our understanding of the biological world and drug discovery.

[02:15:03]

So this is like just your typical Wednesday in the world of AI, right?

[02:15:07]

Because it's happening so quickly.

[02:15:08]

Yeah, that's it. So it's like, oh, yeah, another revolution happened this month.

[02:15:12]

And it's all happening so fast. Our timeline is so flooded with data that everyone's unaware of the pace of it all. But it's happening at such a strange exponential rate.

[02:15:23]

For better and for worse, right? And this is definitely on the better side of the equation. There's a bunch of stuff like One of the papers that actually Google DeepMind came out with earlier in the year was in a single advance, like a single paper, a single AI model they built, they expanded the set of stable materials.

[02:15:44]

Coffee is terrible. I'll just tell you right now. Jamie, it sucks. I love terrible coffee. Their water never got hot. Yeah, that's what it is. It just never really brewed.

[02:15:55]

It's terrible. Terrible coffee is my favorite.

[02:15:56]

Yeah, I can solve that problem, too, probably.

[02:15:57]

Wait till you try this terrible coffee, though. You'll be like, This is fucking terrible.

[02:16:01]

Can we go get some cold ones? Bullshit. He looks terrible.

[02:16:04]

It's terrible.

[02:16:05]

Yeah, I could just see that calculation.

[02:16:07]

If you're dating a really hot girl and she cooks for you. Thank you. This is amazing. This is the best macaroni and cheese ever.

[02:16:15]

In fairness, if Scarlett Johansson's voice was actually giving you that copy.

[02:16:20]

I believe this is the best copy I've ever had. Keep talking.

[02:16:23]

May I have some more, please, Guffner? Yeah. There's this one paper that came out, and they're like, Hey, by the way, we've increased the set of stable materials known to humanity by a factor of 10. Oh, my God. If on Monday, we knew about 100,000 stable materials, we now know about a million. They were then validated, replicated by Berkeley University or a bunch of them as a proof of concept.

[02:16:47]

And this is from the stable materials we knew before that Wednesday were from ancient times, the ancient Greeks, discovered some shit, the Romans, discovered some shit, the Middle Ages, and then it's like, Oh, Yeah, all that. That was really cute. Boom.

[02:17:03]

One step.

[02:17:05]

And that's amazing.

[02:17:08]

We should be celebrating that. We're going to have great phones in 10 years.

[02:17:10]

Dude, we'll be able to get addicted to feeds that we haven't even thought of.

[02:17:14]

So you're making me feel a little more positive. Overall, there's going to be so many beneficial aspects to AI. Oh, yeah. And what it is, is just an unbelievably transformative event that we're living It's power.

[02:17:30]

And power can be good and it can be bad. Yeah, an immense power can be immensely good or immensely bad.

[02:17:38]

And we're just in this, who knows?

[02:17:40]

We just need to structurally set ourselves up so that we can reap the benefits and mind the downside risk. That's what it's always about. But the regulatory story has to unfold that way.

[02:17:50]

Well, I'm really glad that you guys have the ethics to get out ahead of this and to talk about it with so many people and to really blare this message out, because I I don't think there's a lot of people that... I had Mark Andreessen on who's brilliant, but he's like, All in. It's going to be great.

[02:18:07]

And maybe he's right.

[02:18:09]

Maybe he's right. Yeah. But you have to hear all the different perspectives.

[02:18:12]

And I mean, massive massive props, honestly, go out to the team at the State Department that we work with. One of the things also is over the course of the investigation, the way it was structured was it wasn't like a contract, and they farmed it out and we went out. It was the two teams Actually, we're together. The two teams together, the State Department and us, we went to London, UK. We talked and sat down with DeepMind. We went to San Francisco. We sat down with Sam Altman and his policy team. We sat down with Anthropic, all of us together. One of the major reasons why we were able to publish so much of the whistleblower stuff is that those very individuals were in the rooms with us when we found out this shit, and they were like, Oh, fuck. The world needs needs to know about this. And so they were pushing internally for a lot of this stuff to come out that otherwise would not. And I also got to say, I just want to memorialize this, too. That investigation, when we went around the world, we were working with some of the most elite people in the government that I would not have guessed existed.

[02:19:21]

That was honestly- Speak more. Well, I can be- It's hard to be specific.

[02:19:26]

Did you see any UFO?

[02:19:27]

Tell me the truth. Did they take it to the hangar?

[02:19:29]

No, there's no hangar.

[02:19:32]

Can we cut? Can we cut that?

[02:19:35]

There's no hangar, Joe. There's no hangar. Don't worry, sweetie.

[02:19:38]

Don't worry about it.

[02:19:44]

We didn't go that far down the rabbit hole. We went pretty far down the Rabbit hole. And yeah, there are individuals who are just absolutely elite. The level of capability, the amount that our teams gelled together at certain points, the stakes, the stuff we did, the stuff they made happen for us in terms of bringing together. They brought together a hundred folks from across the government to discuss AI on the path to AGI and go through the recommendations that we had.

[02:20:18]

This was pretty cool, actually. Basically, the first time the US government came together and seriously looked at the prospect of AGI and the risks there. It was wild. Again, it's like- That was in November. It's us two friggin' Yahoo's, what the hell do we know, and our amazing team. It was referred to by... There was a senior White House rep there. It was like, this is a watershed moment in US history.

[02:20:42]

Well, that's encouraging because, again, people do like to look at the government as the DMV or the worst aspects of bureaucracy?

[02:20:50]

There's missing room for things like Congressional hearings on these whistleblower events. Certainly, Congressional hearings that we talked about on the idea of liability and licensing and what regulatory agencies we need just to start to get to the meat on the bone on this issue. But yeah, opening this up, I think is really important.

[02:21:08]

Well, shout out to the part of the government that's good. Shout out to the government that gets it, that's competent and awesome. And shout out to you guys because it's heady stuff. It's very difficult to grasp. Even in having this conversation with you, I still don't know how to feel about it. I think I'm at least slightly optimistic that the potential benefits are going to be huge. But what a weird passage we're about to enter into.

[02:21:35]

It's the unknown.

[02:21:36]

Yeah, truly. Thank you, gentlemen. Really appreciate your time. Appreciate what you're doing.Thank you.Thank you. It's been amazing. People want to know more, where should they go? What should they follow?

[02:21:45]

I guess gladstone. Ai/actionplan is one that has our action plan.

[02:21:50]

Gladstone. Ai. All our stuff is there.

[02:21:52]

I should mention, too. I have this little podcast called Last Week in AI. We cover the last week's events, and it's all about the lens of-You have to do that every hour.

[02:22:02]

Last hour in AI. It's like a week is not enough time. We could be at war.

[02:22:08]

Our list of stories keeps getting longer.

[02:22:10]

Aliens landing. Yeah, anything could happen. Time travel.

[02:22:12]

You'll hear it there first.

[02:22:14]

All right. Well, thank you, guys. Thank you very much. Appreciate it.

[02:22:16]

Bye, everybody..