[00:00:00]

You are live with BBC News, and you're watching The Context. It is time now for our weekly segment, AI Decoded. Each week on AI Decoded, we look in depth at some of the more eye-catching stories in the world of artificial intelligence. We're going to start here in the UK, where a counter-extremism think tank says we need new laws that reflect the danger of AI being used to recruit terrorists, the BBC reports. In a recent experiment on Character.AI, a website where people can have conversations with chatbots created by other users, several bots were found to mimic extremist groups such as Islamic State. A recent report also warns that by 2025, generative AI could be used to research how to carry out terrorist attacks. To the US, then, where the Chief Justice has outlined his concerns about the threats posed by AI to the courts, pointing to situations this year where lawyers have used AI to submit briefs that cite nonexistent cases. But he also points to the potential AI has to help with legal research and the overall functioning of the judicial system. And Microsoft has announced the biggest change to its keyboard in three decades with the introduction of an artificial intelligence key.

[00:01:27]

The key will allow users to access Microsoft's AI tools to help them with research, writing emails, and creating images. The Financial Times reports on a trial for an AI stethoscope here in the UK. The medical tool can instantly detect if a patient has a heart condition. Experts say it could prevent thousands of deaths a year and save millions of pounds for the NHS, the National Health Service. Well, with me is Stephanie Hare, AI commentator and author of Technology Is Not Neutral: A Short Guide to Technology Ethics. Thanks very much for coming on the programme. Hi. So lots to get through. Let's start with, I suppose, the slightly more serious and certainly potentially worrying aspect. This is the potential link between AI, artificial intelligence, and terrorism. What's the concern here?

[00:02:19]

It's such a weird story in the sense that this company, Character.AI, which is a website where, as you mentioned, people can have AI-generated conversations with chatbots created by other users, in other words, other human beings. The problem is that some of them were impersonating Islamic State and other entities that you would not necessarily want to be in conversation with, and in an experiment, one recruited someone. So the question is, who would be responsible in this case? Because in theory, it's not a human being that would be doing the recruiting. It's a machine. But at the same time, that argument doesn't really hold water, because ultimately it's humans who have written the code and come up with the data sets on which the algorithm is trained. There are investors who are involved, there's the CEO who's involved, and it violates the terms of service of the company. What we need is potentially new legislation, but that's going to get really tricky, because the Labour Party has said it wants training AI to incite violence or radicalise the vulnerable to be an offence, but that could be women's rights, that could be climate change protesters.

[00:03:29]

So who's vulnerable? What does it mean to be radicalized by AI? All of this is so murky.

[00:03:33]

Just more broadly, attempts to legislate around this take a long time, and it's complicated anyway. Yeah, exactly. And part of the worry, too, is that, well, okay, if you're legislating and getting groups together in certain countries, say the UK or the EU, you still have the rest of the world, and there's no global framework.

[00:03:57]

Well, exactly. That's the big thing: right now you're seeing a patchwork of AI legislation either being proposed or enforced. As you say, it doesn't really know borders. But at the same time, we wouldn't want to let the perfect be the enemy of the good. We're going to have to start somewhere. If the UK leads on this, for example, or if the EU did, other countries might follow suit, and you would at least get some agreement among, say, liberal democracies to not allow this thing. Right now, the company says this violates our terms of service and you shouldn't be able to do it, but you clearly can. So it fails on the tech level.

[00:04:32]

Interesting. Okay, let's head to the US. We'll spend a couple more minutes on potential downsides of AI as people are starting to use it. This is lawyers. Talk us through what's going on here.

[00:04:45]

This is quite funny for all the lawyers out there who are worried about being replaced. Maybe it's a bit reassuring, because if you are advising, for instance, a future presidential candidate such as Donald Trump, and you're giving your lawyer bogus legal citations generated with Google Bard, which is an AI tool, it's making things up. It's pretending that cases existed that never did. They call these hallucinations. But what it means for everybody else is that it just makes things up. That's terrible. What you need is an actual human lawyer to verify it. So don't use AI for your lawsuits without actually having a human being check it. If anything, this should reassure the lawyers out there that their jobs are safe for at least several more years, I reckon.

[00:05:27]

Yeah, because when we first heard about this sort of thing, I think it was homework, students making up quotes and linking to papers that didn't exist. But it's slightly more consequential if it's a legal argument and lawyers are actually using it.

[00:05:43]

Yeah, and the thing is, it's not a search engine. Generative AI, ChatGPT and the like, it's not a search engine, so don't use it for that. It makes things up. If we can just get that message clear, we're going to start 2024 on a great note.

[00:05:55]

That is a good public service broadcast there. I appreciate that. There are some references here in this article, though, to potential upsides in the legal sphere. What are they?

[00:06:05]

We're looking at things like streamlining costs, hopefully making things faster, giving information to lawyers and non-lawyers alike. In a way, democratising the whole process. That's really good. What it raises here, though, is really interesting: something called the human-AI fairness gap. Most humans would feel that a human judge is going to have more compassion, and thus more fairness, towards them than a machine, which is really interesting when you think about things like bias and discrimination. There are studies backing that up. Would you want to be judged by a machine? Computer says no, maybe. Or a human being who can judge from the sincerity of your voice, your facial expressions, for instance, whether you're likely to reoffend or really want to turn over a new leaf?

[00:06:49]

I certainly don't have the answer to that. That's a big question. Interesting stuff. Okay, lots for the lawyers to be contemplating. For the rest of us, slightly simpler issues: keyboards. Most of us use them, and they don't change very much, but they're changing now.

[00:07:06]

Well, there's exciting news happening in the world of keyboards. I hadn't realised this myself until I read the BBC article that the keyboard has changed so little in 30 years, not since 1994. So there you go. That might actually mean, though, that the design of the keyboard is perfect as it is. Why improve it? But what it means effectively, as those who've been using Apple products will know, is there's already been an AI button in the world of Apple, but Microsoft is now catching up and has its own button. So when you click on it, it will take you straight into its AI-powered suite of tools. People who use them will know things like Copilot. This is about drafting emails, coming up with new images, all the things that are really handy. And instead of having to do a Control-C function, you just press the button and it's there. So it's just a little hack to make your life a little bit easier.

[00:07:55]

Will this be... I don't want to get over the top here, but is this one of those moments that we look back on and go, Oh, that was when it just became so mainstream and so normal, and we can't even imagine a world without it now, like mobile phones or the Internet before? Good question.

[00:08:10]

I think the proof is going to be in the pudding this year, really. Last year was the year of hype and promise for AI. This year, the proof is in the pudding. So for certain people who are using certain types of functions in their jobs, maybe. Let's find out from them this year. But for other people, has your life changed if you could already press a button or not? I don't know.

[00:08:30]

In terms of practical uses, there are people using it to generate images and so on. But are you expecting 2024 to be the year where, for ordinary people who are not in specific jobs or specific areas, it actually becomes far more integrated into our daily lives, or is that still 2025, 2026?

[00:08:54]

No, I think it's going to start becoming more integrated into people's lives. Anybody who's a keyboard warrior and is having to use these technologies all the time in their day-to-day, yes. But there are also all sorts of jobs that are not really affected by this yet. So it's like you're either on the bus or off the bus. People who are on the bus, yes.

[00:09:12]

Interesting. Right, let's move to our final story, and potentially a pretty good news story this one.

[00:09:19]

I love this story because here we are in the UK, and we know, for those of us who live here, we've got quite a big backlog of patients waiting to be seen, and our long-suffering, hard-working medical staff are working hard to clear that backlog. And finally, we're giving them a tool to help them do it. So the big thing is using a stethoscope to detect heart disease. Some really scary stats: you're supposed to get a diagnosis within 6-8 weeks, but currently it's 8-12 months, which is not ideal. This is leading to 30,000 excess deaths per year. Anything we can do to cut that, because people are dying needlessly, is great. And here we have it. This is a stethoscope with a remarkable accuracy rate. It then has to be confirmed by a blood test, it must be said, but that only takes a few weeks, and it can tell people if they've got heart disease or not. Incredible.

[00:10:07]

And what's so arresting is the image of the stethoscope. It's so old-school, I suppose, isn't it? So traditional. We've been used to it for so long. And yet that's a really interesting juxtaposition, actually: the very latest technology looking like something that's been around forever.

[00:10:27]

I know. It almost looks like a little tiny phone, right? And that might actually be the way a lot of this technology goes. And what's exciting is you've got different kinds of heart disease, right? And a lot of the symptoms people present with might be fatigue or abdominal bloating, which can cover so many things. To have something this simple that could get them into the line for the blood test and get them treated, get them on drugs straight away, would be wonderful.

[00:10:53]

And just before I let you go, more broadly on the issue of health, because it certainly looks, from outside this world, like not only are we hearing regular innovations like this, but it seems an area where there could be really concrete advances, quite soon, that are really going to change people's lives.

[00:11:11]

Yeah. AI's strength is in pattern detection and in using huge amounts of data to look for patterns. So think about it. Take, for instance, the fact that heart attack symptoms are different in women than in men, and it's been a very under-researched area of health. If you get that data, train it up to get the algorithms right, and then put that in the hands of your GPs, you can be improving female health, which is a win. So with all these different things where right now we're operating so much in the dark with our bodies, it could be a game changer.

[00:11:44]

We hope it will be, in so many ways. Stephanie Hare, thank you so much. Great to have you on. Thank you for that. That is it. We are out of time. We'll do it all again same time next week.