[00:00:00]

It is time now for our new weekly segment, AI Decoded. Welcome to AI Decoded. It is that time of the week when we look in depth at some of the most eye-catching stories in the world of artificial intelligence. Now, last week, we looked at how artificial intelligence could threaten human jobs in the future. But what about those on the battlefield? Well, The Guardian is calling it AI's Oppenheimer moment, due to the increasing appetite for combat tools that blend human and machine intelligence. This has led to an influx of money to companies and government agencies that promise they can make warfare smarter, cheaper, and faster. Here in the UK, leading military contractor BAE Systems is ramping up efforts to become the first in its industry to create an AI-powered learning system meant to make military trainees mission-ready sooner. Now, our BBC AI correspondent, Marc Cieslak, went to meet all those involved. We will be showing you his piece in just a moment. But with me, I'm very pleased to say, is our regular AI contributor and presenter, Priya Lakhani, who's CEO of AI-powered education company Century Tech. Now, Priya, this is a fascinating area, but perhaps one of the most controversial, and people have huge concerns about it.

[00:01:31]

Yeah, that's absolutely right, because this is using AI to potentially have unmanned military drones. What you're going to see in Marc's incredible piece is unmanned military aircraft, potentially. Then there's all these questions about, well, hang on, obviously it's great if there aren't humans being harmed out there on the field, but does that mean that actually war could escalate much quicker? A decision is then going to be made by these AI systems. If both parties have AI systems, what happens then? It's a race as to who can escalate further. There's all sorts of ethical considerations, but you're also going to see learning systems and how BAE Systems is approaching using AI to improve learning in terms of training the military and soldiers. It's a fascinating area. Then we'll do a bit of a deep dive into the ethics a little bit later in the program.

[00:02:20]

Lots to talk about, Priya. Let's take a look, as we were just saying, at this report by Marc Cieslak. Then stay with us, because we've got lots to discuss afterwards.

[00:02:33]

Up, down, flying, or hovering around. For 75 years, the Farnborough Air Show has shown off aircraft, both civilian and military, often inviting pilots to put their airplanes through their paces to the delight of the assembled attendees, including plane buffs and even new prime ministers. In recent years, Farnborough has played host to a lot more of these unmanned air vehicles, or drones, as they're commonly known. Drones with military applications, with fixed wings that behave like an airplane, or rotors capable of hovering like a helicopter, are in abundance. But all have something in common: a human being is involved in the command and control of these aircraft at some stage, a process that's called human in the loop.

[00:03:31]

It's critical from a moral and ethical point of view to ensure that there is a human judgment that is always at the heart of selection of the course of action.

[00:03:44]

Military application of AI is extremely controversial. Images of killer robots and the idea of AI run amok are frequent additions to stories in the press about the risks the technology poses. Nevertheless, militaries around the world are already using artificial intelligence. One area where it's particularly useful is training pilots to fly aircraft like these. Flight simulators are an integral part of a pilot's training. They save time and money, allowing prospective pilots to gain valuable skills from the comfort and safety of terra firma. Formerly with the RAF, Jim Whitworth is a pilot instructor experienced in flying military jets like the Hawk and Tornado. As soon as you see that, I want you to just pull the stick back, set an attitude, as we discussed. This simulator rig is for a Hawk jet, the Royal Air Force's preferred trainer. What feedback have you given to the team developing this in terms of its realism? Really, it's about the feedback from the controls. I would like it to feel as much like a Hawk as possible. Where does the AI come into the mix? We can record everything a trainee does in this environment, in this simulator. We can give some metrics with which to measure the performance and then score each performance.

[00:05:13]

And then, as we start to build up data on each trainee, artificial intelligence can then start to analyze that data for us and show us where our pinch points in the syllabus are. And by that, I mean where each trainee is struggling, where perhaps we might want to refine a piece of training, either courseware material or technique from the instructor, to try and make that training as successful as possible. The greatest advantage of learning to fly like this is that when I need to get back down on the ground, I can hit a few keys, take the headset off, and I'm good to go. Synthetic training isn't exclusive to aircraft. Nearly every element of the battlefield and its surrounding environment can be simulated. The software powering these tools has evolved from the same tech as video games. The addition of AI allows the environments to behave in a much more realistic way, even replicating civilian activity. How does AI help in simulation?

[00:06:15]

It's really difficult to replicate real-life scenarios. It's very difficult to get enough space to do the training in. It's very difficult to get enough assets available, particularly if they're on operations. We can make the scenarios incredibly complicated, and the AI can then create the complexity and frequency that they need to train against.

[00:06:31]

When it comes to aerial combat training, new AI-powered adversaries are proving to be a challenge, even for experienced pilots.

[00:06:41]

It definitely puts you through your paces. It puts you in positions that you've not traditionally seen before. It fights a different doctrine that we've not necessarily trained against. I think it's going to become the future.

[00:06:53]

Okay, Piers, headset on, and put it through its paces. Piers Dudley used to fly the RAF's most advanced fighter, the Typhoon. He's about to fly a virtual version of the same jet in aerial combat against a system created by developers from Cranfield University. It's called the AI-aided tactics engine. If your opponent is also a human being, there's something at stake for both of you: your lives. But if your opponent in the real world isn't a human being, does that change things for you as a human pilot?

[00:07:30]

The AI is learning and is adapting to your reactions. So therefore, it becomes quite difficult to train against. If you're fighting against other real-world air crew, you potentially know the training that they've been through. You know almost what to expect, whereas against this, you just don't know what to expect with it.

[00:07:51]

The AI engine has come out on top. Now it's my turn to take on an AI Top Gun. Where did he go? I lost him. Got to get some altitude. Outmaneuvered at every turn, the AI made quick work of this novice pilot. He's just too elusive. Pilots aren't just learning from the AI. In turn, it's learning from them, too. It's refining skills which one day may be used to pilot drones in real-world situations. A scenario that for many presents a significant moral and ethical risk.

[00:08:35]

That risk associated with technology is a critical area. It's not new. Every technology that's been deployed in defense has a risk associated with it, and there's a very well-established moral, ethical, and legal framework around how we evaluate the risk of any new capability alongside the operational capability and the imperative to use it.

[00:08:59]

But what happens if an adversary doesn't play by the rules, if they don't follow the rules of engagement or the same ethical frameworks?

[00:09:08]

We don't assume that our adversaries will play by the same rules that we do. But because we understand the technology, we understand how you would go about deploying autonomy outside of that framework. When we understand the technology and the approaches they would use, we can understand the techniques we would use to counter that, to defeat that threat.

[00:09:29]

This is a glimpse of the future. It's called Tempest, a collaboration between the UK, Italy, and Japan. This proposed sixth-generation stealth combat jet will have advanced radar and weapon systems, as well as flying with its own mini squadron of drones, the Tempest acting as a flying command and control center at a distance while the drones perform missions semi-autonomously. Which raises the question: how long will the human being remain in the loop?

[00:10:10]

Well, that was interesting. That was Marc Cieslak reporting.

[00:14:19]

use lethal force and determining what is a valid and lawful target in armed conflict. And we've been working at the UN for more than a decade, trying to get a treaty there. But there's, of course, many different kinds of applications, and there's been a lot of debate around exactly how to define these systems. As we just heard from the video and the previous speaker, there's a lot of different ways to integrate these into the complex operations of the military, which involves a lot of data, a lot of computers, a lot of people making decisions at different levels of command and control. So it's challenging to find ways to really regulate how that happens and ensure that humans remain in control.

[00:15:06]

Mike, I have a question for you, because presumably one of the things the military is trying to achieve here is less civilian harm. We know from the UN that the civilian casualty ratio is about nine to one, so nine civilians to one combatant. But making precision targeting theoretically more possible doesn't necessarily mean the risk to civilians goes down, because when it comes to using artificial intelligence, it's about speed. So if both parties have artificially intelligent weapons or drones and they're using this technology, speed is key in the process. We saw, for example, with Lavender, which is the AI system used by the IDF, sources alleged that they actually increased the number of civilians they were permitted to kill when targeting a potential low-level militant to 15 to 20, and they would drop a bomb on an entire house and flatten it to try and achieve their goals. What do you think about AI making

[00:20:31]

could all just happen automatically. And we've seen this already with online trading and flash crashes that have occurred in stock markets where different algorithms will interact with each other and lead to a stock market crash, and they have to turn off the whole system. We don't want this happening with autonomous systems in warfare. But I think to the question you asked before about precision weapons, we know this is automation. And automation increases speed. It also reduces cost. By reducing the cost of bombing each individual target, that means you can afford to bomb a lot more targets. So if you're only killing a certain percentage of civilians with each strike, but now you can strike many, many more things, you can actually wind up having a much larger impact on the civilian population, even though you've increased precision.

[00:21:20]

It's not automatic that these systems will improve warfare or reduce its impact on civilians.

[00:21:27]

Dr Peter Asaro, I'm going to have to stop you there. I'm sure we could talk about this all evening. It's an absolutely fascinating subject. We really appreciate your time. Dr Peter Asaro, Mikey Kay, thank you. Here in the studio, Priya, thank you so much for joining us. That's it. We are out of time. AI Decoded will be taking a well-deserved break for the month of August. But don't worry, we will be back in full force at the beginning of September. Do please join us then.