[00:00:00]

Overtake your expectations with the arrival of the brand new, fully electric Audi Q6 e-tron, available exclusively for 242 ordering. The new launch edition Audi Q6 e-tron, with a range of up to 570 kilometers, also features a complimentary upgrade design package, included in the monthly rate of €698. All roads lead to your new Audi Q6 e-tron. Terms and conditions apply. Audi, Vorsprung durch Technik.

[00:00:31]

From the New York Times, I'm Natalie Kitroeff, and this is The Daily. Outmanned and outgunned in what has become a war of attrition against Russia, Ukraine has looked for any way to overcome its vulnerabilities on the battlefield. That search has led to the emergence of killer robots. Today, my colleague Paul Mozur on how Ukraine has become a Silicon Valley for autonomous weapons, and how artificial intelligence is reshaping modern warfare. It's Tuesday, July 9. Paul, when we've talked on the show about the applications of advanced artificial intelligence, one of the scarier ideas has been that militaries around the world could use it to make autonomous killing machines, i.e., killer robots. Your reporting shows that this may already be happening. Tell me about it.

[00:01:45]

When I first got started on my reporting, I thought this was the stuff of sci-fi. You think about AI hunting and killing somebody, and you think of Terminator and Arnold Schwarzenegger hunting people as a robot, or you think of HAL in 2001: A Space Odyssey, an all-knowing computer that can kill people on a spaceship. But the thing is, the early versions of the technology that will get us there are already being developed, and they're being developed in Ukraine. In some ways, Ukraine has become a nexus for the development of this type of autonomous military technology, writ large. They're basically taking artificial intelligence and finding all kinds of new military applications for it.

[00:02:28]

Why has Ukraine become that nexus?

[00:02:31]

Perhaps most importantly, they're outgunned. Weapons don't necessarily come quickly or predictably from the United States or Europe, so they have to reach for anything they can use to fight this war. So they turn to consumer technology and emerging technologies like AI to build new, effective weapons. The second point is that they're outmanned. When you're facing the prospect of defending all these trenches and you just don't have as many people as the Russians have, you need to come up with things that solve that problem. What better than something like an automated machine gun or a drone? Then, perhaps something people don't realize, Ukraine has been a bit of a back office for the global technology industry for a long time. Many of the apps you use every day were probably, in some part, coded by engineers in Ukraine. You have a lot of coders and a lot of skilled experts taking their abilities and saying, Well, now we need to turn from building a dating app to figuring out how to stop the Russians. That means building these new weapons. Then finally, and extremely importantly, this is a war of attrition. Every day there's fighting going on, and that means you have the ability to test these weapons each day and, to use the Silicon Valley term, iterate on them, tweak them and make them better.

[00:03:45]

Having what is effectively a laboratory to experiment and find out ways to make AI ever more deadly really helps.

[00:03:55]

Yeah, it sounds like you have all of these conditions that line up to make Ukraine a perfect incubator to build this type of technology. I'm wondering, Paul, what this actually looks like on the ground. What weapons are we talking about here?

[00:04:09]

I went to Ukraine in May. I met with all kinds of different tech startups and developers, and troops who use this technology. Perhaps the most startling moment wasn't near the front lines or anything. It was actually in a park.

[00:12:15]

I have to admit some skepticism here. I think many of us interact with AI through ChatGPT or Gemini, and we've seen the hilariously bad answers those systems can produce. We can't even depend on AI to solve a crossword puzzle. I'm just wondering, how can Ukrainians rely on this technology for much higher-stakes stuff? I mean, hitting the right targets in war. Are you finding that their AI is making mistakes?

[00:12:49]

We don't really know the answer to that. We do know that the systems work pretty well, and part of the point is that they're supposed to be cheap, so they don't always have to work. If they work 80% of the time and they're cheap, that's okay. I will say that another reason why they want the human in the loop who can turn off the AI is that they're afraid of friendly fire and hitting the wrong target. There's an ethical consideration, but there's also just a very practical one: this tech could go wrong and go after the wrong person or identify the wrong thing, and they need to be able to turn it off. There are still humans in the mix, but even with this imperfect technology, and perhaps this is what's even scarier, it's super simple to just take the human out of the mix.
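
To make the cheap-but-imperfect tradeoff concrete, here is a back-of-the-envelope comparison. The unit costs and success rates below are invented round numbers, not figures from the episode; the point is only the arithmetic of why "works 80% of the time and cheap" can win.

```python
# Illustrative cost-per-successful-strike arithmetic. All numbers are
# hypothetical round figures chosen for the example, not reported data.
weapons = {
    "cheap FPV drone": {"unit_cost": 500, "success_rate": 0.80},
    "precision missile": {"unit_cost": 150_000, "success_rate": 0.95},
}

for name, w in weapons.items():
    # Expected cost of one successful strike = unit cost / success rate.
    cost_per_hit = w["unit_cost"] / w["success_rate"]
    print(f"{name}: ${cost_per_hit:,.0f} per successful strike")

# cheap FPV drone: $625 per successful strike
# precision missile: $157,895 per successful strike
```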

[00:13:29]

How close are we to that? I mean, is Ukraine's military contemplating a scenario where the humans really go away and the machines and their judgments are entirely responsible for killing?

[00:13:44]

That was a question I posed to everyone working on this stuff. The general answer is no: the human will stay in the loop for the foreseeable future. But people had different takes on it. One guy who had a particularly interesting answer was an executive at the firm that made that automated machine gun, Roboneers. His name is Anton Skripnik, and I met him at his offices in Ukraine. So, realistically, at the pace that we're seeing, when do you think we're going to start seeing the first automated killing? When I asked him how or when the first automated killing on the front lines might occur, he said maybe it was already done.

[00:14:25]

Most likely it was already done.

[00:14:27]

He said it probably, honestly, has already happened. He had no way of knowing. He wasn't sure.

[00:14:34]

People very often just do something to survive, to complete a mission, without sharing information. This is, like, not bad.

[00:14:46]

In a fast-paced, high-stress environment on the front, where life and death are oftentimes a matter of an instant decision, it's very possible that a soldier has flipped a switch that allowed something to go fully automated or autonomous in this way. Is there any thought of changing that, or is that something that you guys stand by? I asked him, Okay, but you guys aren't doing this.

[00:15:13]

There is not a single request about having that.

[00:15:18]

He said, No. For us, you have to hit this trigger every time the gun sees a target so that it shoots. How long would it take you to do it if you wanted to? I said, Okay, but if you wanted to make it fully autonomous, how long would that take? Tomorrow. Tomorrow.

[00:15:35]

Because today is already, like... Yeah, I've taken two hours of your time.

[00:15:41]

Basically, no time at all. It's a matter of a few lines of code. Because these things are already effectively doing the auto-targeting, it just has the human pulling the trigger. To make the computer pull the trigger is so easy as to be almost trivial.
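
To make concrete what "a few lines of code" means here, below is a minimal sketch of a supervised fire-control cycle. Every name in it is hypothetical and invented for illustration; it does not describe Roboneers' actual software. What it shows is that the operator's confirmation is a single check, and deleting that check is the entire change Anton is describing.

```python
# A purely hypothetical sketch of a human-in-the-loop fire-control loop.
# All names are invented for illustration; this is not any real
# weapon system's code.

def detect_target(frame):
    """Stand-in for the onboard vision model that already finds targets."""
    return frame if frame.get("is_target") else None

def operator_confirms(target):
    """The human in the loop: a single yes/no gate before firing."""
    return input(f"Fire on {target['label']}? [y/N] ").strip().lower() == "y"

def control_loop(frames):
    for frame in frames:
        target = detect_target(frame)  # the AI already does the targeting
        if target is None:
            continue
        # ...the system already tracks and aims automatically here...
        # Removing (or hard-coding) the next check is the "few lines of
        # code" separating a supervised weapon from an autonomous one.
        if operator_confirms(target):
            print(f"FIRE -> {target['label']}")

control_loop([{"is_target": True, "label": "test target"}])
```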

[00:15:54]

Wow. What Anton is saying, essentially, is that he already has the technology to build a robot that makes the decision to kill on its own. There's a human operator for now, but that's not a necessity.

[00:16:07]

Exactly. His answer really hit home for me because, still, I think even sitting there I had thought that this was the stuff of science fiction. But what I realized at that moment is that the era of the killer robot is already upon us. We're already here. That raises just a huge number of ethical and moral questions about the future of warfare, the future of accessibility to these kinds of weapons, and what it requires to kill a human being in the future.

[00:16:49]

We'll be right back.

[00:16:55]

Every Audi is designed with you in mind. From the fully charged electric driving of the all-new Audi Q6 e-tron to the hybrid petrol and diesel engines of the Audi A3 Sportback and A6 Saloon. Because how you drive is very much a personal choice. And of course, with every Audi, that choice is never a compromise. Book your test drive of the 242 range at your local Audi dealer. Audi. Vorsprung durch Technik.

[00:17:26]

Hi, I'm Robert Vinluan from New York Times Games. I'm here talking to people about Wordle and showing them a new feature. You all play Wordle? Yeah. I have something exciting to show you. It's the Wordle Archive. If I miss it, I can go back. A hundred percent. Oh, that's sick. Now you can play every Wordle that has ever existed. That's like a thousand puzzles? Oh my God, I love it. Actually, that's really great. What day would you pick? May 17th. Okay. That's her birthday. What are some of your habits for playing Wordle? I wake up, I make a cup of coffee, I do the Wordle, and I send it to my friends in a group chat. Amazing. Thanks so much for coming by and talking to us and playing. New York Times Games subscribers can now access the entire Wordle archive. Find out more at nytimes.com/games. You don't understand how much Wordle means to us. We need to take a selfie.

[00:18:17]

Paul, you started to get into the potential moral questions raised by a world where robots need no human input at all to make decisions about killing people. Let's talk about those questions.

[00:18:30]

I guess the first big consideration is who has access to this and where it will spread. The first group that does is powerful countries. I mean, the United States is developing swarms of drones that can accompany its fighter jets. Other major military powers, in Europe and China, for instance, are also developing this technology. But it's also not just those guys. Ukraine, for instance, has channels where it's been sharing tips on drone warfare. We did a story earlier in the year out of Myanmar where we found Burmese drone pilots were training on Ukrainian software that taught them to use kamikaze drones. The question becomes, how long until this automated targeting software is shared? Perhaps what's scarier is that Russia is also developing solutions very similar to the Ukrainians'. Who will they share them with? Will they share them with North Korea, Iran, certain fighters in Sudan? The point is, it's very easy to spread software. I mean, this isn't even a piece of hardware. This is just something that plugs into a piece of hardware. You can send it over an email. For the guys we talked to in the field who are flying the drones, one of the problems they had with their technology when they showed it to the Ukrainian military is that it wasn't encrypted.

[00:19:50]

So the Ukrainians were afraid that if their drone crashed behind enemy lines without blowing up, the Russians could take the little mini-computer, download the code, and use it to build their own system and hit the Ukrainians. So software spreads incredibly fast and incredibly easily, and once these solutions are developed, it's going to be extremely hard to stop them from going almost anywhere. It's not hard to imagine a dark website that allows you to sell all manner of autonomous drone attack systems. There was one US official I was speaking with who has huge concerns about the terrorism implications of this. I mean, take a drone, for example: you could fly something in from 20, 30 miles away, and it becomes extremely difficult to defend against.

[00:20:35]

That raises a lot of questions, obviously. I have to assume that ethicists and human rights officials are asking some of them. For example, is there any way to regulate AI weapons? Can we put limits on their use?

[00:20:51]

Yeah. This is something that's been debated in the UN by panels of experts for years, but we never really get to anything particularly concrete, in part because countries are already in an arms race to develop these things. Every time anybody proposes a rule, it's vetoed, if not by the United States, then by China, by Russia, by other countries in Europe. But there are some basic principles that ethicists rally behind, things like keeping a human in the loop so that the human makes the ultimate decision, even if there's automated targeting going on. That's the line the Ukrainians are standing behind. But again, there's really not much out there to stop any of this from going wherever it wants to go. Honestly, it feels like we're already heading in that direction.

[00:21:38]

Paul, listening to this whole time, I've been wondering how we should think about this idea of software making the decision about who lives and dies. Because on the one hand, I have to say, the idea of robots hunting down humans is truly frightening. But on the other hand, it's not as if humans are known for their restraint in war. I mean, right now we're witnessing two wars, in Ukraine and in Gaza, where human-led military campaigns have killed tens of thousands of people, many of them civilians. I'm wondering, in your reporting, have you come to think of this technology as in any way a better or more precise form of warfare?

[00:22:22]

Yeah. I think what was interesting is that some of the technologists building this that I spoke with did make this case. Their logic goes something like: if we have robots fighting robots, humans aren't dying. If we can put rules inside the software, say, that no children will be killed by this weapon, we can prevent it from doing certain things that maybe a really bad human would do. You could maybe even create spaces like the front line where nobody can set foot for 4 kilometers on either side because the weapons are so deadly. Nobody can move forward either way; you just create a perfect stalemate. But I guess history shows us, in my understanding of history, at least, that that's not the way this will probably go, and that every time in the past we've seen a breakthrough in weaponry, oftentimes it's just meant more devastating weapons get created. I thought back to Alfred Nobel, who famously thought dynamite would end war, and of course, it simply made more powerful, deadly bombs. It feels like that is the future that we are treading into. But I just think that it's very hard to sit where we are, in a place that's not at war, and tell people building these things, which are weapons to defend their families and their friends who are going off to war against an invader, to stop doing it because it could make us unsafe in the future.

[00:23:51]

Even some of the ethicists I spoke with who are very opposed, who have dedicated their careers to fighting against autonomous weapons, would throw up their hands and say, Well, I can't really argue with the Ukrainians. One of the Ukrainians I spoke with who's making autonomous drone systems said, You show me a hypothetical victim, and I'll show you a real dead soldier and a family that now has to live without him. That leaves you in a very hard place, because you have what's essentially a runaway train. You can't morally argue for people to stop building things to defend themselves, yet what they're building basically secures a future that will be far more dangerous than the present that we live in. As long as this war in Ukraine goes on, we are going to see more advanced systems get developed. I just don't know how we avoid a future in which we have ever more powerful, ever more autonomous weapons. That's pretty scary.

[00:25:05]

Paul, thank you so much.

[00:25:08]

Thank you.

[00:25:14]

On Monday, Russia launched one of the deadliest assaults on Kyiv since the first months of the war, striking Ukraine's largest children's hospital as part of a barrage of bombings across the country. At least 38 people were killed in the attacks, and more than 100 were injured. We'll be right back.

[00:25:53]

Overtake your expectations with the arrival of the brand new, fully electric Audi Q6 e-tron, available exclusively for 242 ordering. The new launch edition Audi Q6 e-tron, with a range of up to 570 km, also features a complimentary upgrade design package, included in the monthly rate of €698. All roads lead to your new Audi Q6 e-tron. Terms and conditions apply. Audi, Vorsprung durch Technik.

[00:26:24]

Here's what else you should know today.

[00:26:27]

The bottom line here is that we're not going anywhere. I am not going anywhere.

[00:26:33]

On Monday, in a move to save his candidacy, President Biden told Congressional Democrats in a letter and on MSNBC's Morning Joe that he would not withdraw from the race and accused those asking him to step aside of being routinely wrong about politics.

[00:26:48]

I don't care what those big names think. They were wrong in 2020. They were wrong in 2022 about the red wave. They're wrong in 2024. And come out with me. Watch people react. You make a judgment.

[00:27:05]

Biden faces what could be the most crucial week of his candidacy, as he contends with growing concern among Democratic lawmakers about his age and ability to win re-election. He also spoke directly to some of his biggest fundraisers and donors in a private call, telling them Democrats needed to shift the focus away from him and back to Trump. And as Tropical Storm Beryl battered Houston and its suburbs on Monday, at least two people were killed by fallen trees, and nearly 3 million homes and businesses lost power in Texas. The storm is expected to move across the eastern half of the United States over the next several days. Today's episode was produced by Will Reid, Clare Toeniskoetter, and Stella Tan. It was edited by Lisa Chow, contains original music by Dan Powell, Elisheba Ittoop, and Sophia Lanman, and was engineered by Alyssa Moxley. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly. That's it for The Daily. I'm Natalie Kitroeff. See you tomorrow.