[00:00:00]

You're watching The Context on BBC News. It's now time for our new weekly segment, AI Decoded. Welcome to AI Decoded, that time of the week when we look in depth at some of the most eye-catching stories in the world of artificial intelligence. We begin with Euractiv, the EU news website. It reports that negotiations on the world's first treaty on artificial intelligence are struggling to find a consensus between the US and Europe. BBC Online asks whether AI trading bots could transform the world of investing, with some economists warning that AI could give out wrong information or completely fabricate the facts. A makeup artist says that she lost her job at a leading brand after an AI recruitment tool using facial recognition technology marked her down for her body language. That is on the Sky News website. Meanwhile, in the Metro, AI research scientists taught an artificial intelligence to go rogue, for a test only, they thought, only to later find it acting deceptively and discover it couldn't be stopped by those in charge of it. Finally, on the Laughing Squid website, an incredibly intuitive robotic hand, powered by AI, has the potential to transform lives.

[00:01:24]

The Esper Hand, which you can see being used there, is being used by a Ukrainian student called Nika. She's quoted as saying, "Unlike my previous prosthesis, it makes me feel independent and confident, and I can do whatever I want with no assistance whatsoever." Joining us to talk about all those stories is Stephanie Hare, AI commentator and author on technology. Stephanie, good to have you with us. Let's start with this AI treaty, the tug of war, isn't it, between the US and Europe? It's a familiar issue. I wonder here whether it's to do with what is in the best interests of users and the AI world and all of us, or whether this is about politics between Europe and the US.

[00:02:06]

If we could pretend that we're in a kitchen, the European Union has two things cooking at the moment, and the United States is being one of those chefs that wants to keep putting stuff in or taking things out. It's a little bit annoying, but they also have to eat it, so maybe you want them to be involved. The first thing we've got is the EU AI Act. That's coming online; the text will be finalized tomorrow. That's got the United States running a bit scared, because the European Union is going to lead on AI regulation. They don't have any say in it. It's happening. Get ready. So what are they interfering with? They're interfering with something called the Convention on AI, Human Rights, Democracy, and the Rule of Law. It's a bit of a mouthful. It's being sponsored by something called the Council of Europe. That's got 46 member states, so many more than are in the European Union. The United States has observer status. So why cater to the United States when it only has observer status, you would rightfully ask. And the answer is, some people think it's more helpful to have the Americans inside the tent rather than out.

[00:03:04]

The European Union and everybody in the Council of Europe would love to have the US on side for anything, just to say, look, we got some international deal on AI. We could say it's the first one. But to get the US on side, that means you have to let the private sector off the hook, basically water it down until it's almost completely meaningless. That's what we're dealing with for that second treaty. The United States doesn't like to sign things that have the words "human rights" in the title. If you call it civil liberties, or freedoms, they're more relaxed about it. They're fine with that. The big one to focus on is the EU AI Act, if you really want to see progress in Europe and Europe leading on the world stage. The second one, this convention, I'm afraid, is looking like, if it happens at all, it's going to be very watered down.

[00:03:46]

Is it really wishful thinking that there will be any coordinated, joined-up treaty, as you said, between Europe and the US, and dare I say, elsewhere in the world too? We're looking at two leaders in it right now. But there is a danger that regulation falls into the hands of the same people and no one else gets a look-in.

[00:04:07]

Well, it's also about this: you've got people who worry that regulation is going to stifle innovation. Then you've got people who say, no, regulation actually creates the market. It creates a level playing field. Look at what a mess cryptocurrencies are. We don't have good regulation, so nobody can actually really go for it. None of the big banks, none of the central banks. It's all Ponzi-scheme people. Sometimes regulation can be good. The problem is this: nobody wants to be the person that potentially kills the goose that lays the golden eggs. If you're the United States and you're leading in AI, you don't want to regulate your private sector companies. If you're in the European Union, you have fewer companies leading. But take France, which has the company Mistral that is leading; France suddenly doesn't like regulation quite so much now, because it wants its French company to be a champion. You can see where we're going with this. It's King Dollar, as usual.

[00:04:53]

People will understand the need for regulation, but there's also that argument, isn't there, that we should try to maintain all the benefits whilst mitigating some of the risks. History tells us that big fundamental shifts in technology fall into the hands of a few key, powerful players, whether that's the advent of the internet or social media. We look at both of them now in the hands of big tech firms, despite all the ideas and proposals and hopes that they would be really democratic. What's to say AI won't go the same way?

[00:05:29]

I mean, nothing. Nothing's to say that, which is why shows like what we're doing right now, I think, are so important, because the more people we have informed, empowered, and able to apply the critical thinking and the skills they have to assess AI on their own terms, the more they will be able to hold that power to account. But let's make no mistake, we're going to need new regulation and laws, and we're going to be discussing some stories tonight that will freak some people out and some that will make people really hopeful. All of them are going to need some form of new legislation.

[00:05:57]

Let's get the bad ones out of the way first, shall we? This is really interesting on the BBC News website, talking about AI trading bots that could, "could" being the operative word, transform the world of investing. You might say that it takes some of the insight that traders would have into what markets are doing, where's a safe bet, where's a good bet, where can you make some money, and automates it. The problem is, it's been getting it wrong.

[00:06:19]

Yeah. What I loved about the article is that it also did a little bit of "there's nothing new under the sun". We have, in fact, been using so-called basic AI or weak AI since the early 1980s. For some of us who were around back then, it's very reassuring, financial markets having been so stable since the 1980s. But the new thing, the big thing, is: can we use generative AI? We just had the World Economic Forum, with every management consultancy and bank saying that this is going to be the new thing that transforms all of our lives. But will it? And will it be legal? Because first of all, it gets things wrong. I don't know about you, but I don't want my pension fund invested in something where the technology being used to make investment decisions is getting things wrong. It also makes things up. It just invents stuff. I don't want that being used anywhere near my pension either. What are the regulation requirements for this? No bank or pension fund should be touching this stuff until we have that clear. But that means Parliament and our regulators need to get off their chairs and start taking some action.

[00:07:18]

If financial crises of the past few decades have taught us anything, it's that regulation is so lacking when it comes to financial services that the banks are always one step ahead. Traders are always trying to work out a way to make a quick buck. This is just AI doing exactly the same thing, isn't it?

[00:07:33]

Yeah, and don't take my word for it. The head of the Securities and Exchange Commission, Gary Gensler, has said he thinks the next financial crisis, within the next ten years, will be caused by the lack of regulation of AI. He thinks unregulated AI is going to bring that about. I don't know about you, but the last one we had was already pretty bruising for most of us. We don't need a repeat.

[00:07:52]

Yes, certainly not so soon. We often talk, don't we, about how AI will steal all our jobs, so maybe traders. But how about this for a makeup artist? Because you would have thought that would be one of the jobs that is safe. It's a job you have to be present for; a computer can't do your makeup. But AI is getting involved in a recruitment way here. Just explain this.

[00:08:12]

Okay, so this is how it is. We're entering a really weird new world now where people are using AI to create their CVs and even write their covering letters. Then they send them to a recruiter or to a company which uses AI to scan those CVs and read the covering letter that you used ChatGPT to write. The machines are basically applying for the job and deciding if you should be interviewed. No humans need to be involved. They then score the candidates. If you get invited to an interview, it might be a video interview where really sketchy software is being used to analyze your face and your body language. I might be looking at your body language right now and going, Ben, what's up with this? Skeptical, defensive. But you might be like, I've just got a bit of an itch, or I'm cold, or whatever. It doesn't mean anything at all, but they might be like, closed-minded.

[00:08:56]

Who has taught the AI that that's what that body language means?

[00:09:00]

A really clever software company that's made a ton of money out of complete pseudoscience. No, for real, there's no scientific basis in any of this. You can't code people's emotions like this. It's so context... we're talking on The Context... context-specific, right? People are nervous in interviews, and particularly on video. We tell everybody to bring their whole selves to work, and then we tell them, actually, I'm sorry, but you have to fit in this really narrow parameter that we've coded. That bit and that bit, but not that bit. You're out. It doesn't matter that in this case the makeup artist passed in terms of her skills and experience. And PS, she was reapplying for her own job. She'd already been doing it. But the computer said no, so out.

[00:09:36]

And it's also about how much faith we're prepared to put in these things as well, isn't it? Quite aside from all the regulation and that stuff. And speaking of faith, an AI test. So this was AI that was taught to go rogue. They wanted to test whether it would go rogue. The problem is, it went a bit too rogue, didn't it?

[00:09:53]

Yeah, go rogue, but not that rogue. This article was the one where I was like, oh, God, this is so... You want people testing these things, but then you read this and you're like, great. The AIs are becoming so smart that they're learning how to deceive researchers. What could go wrong, you say? Imagine that we pass an AI. We say we put it through all its tests. It seems to be doing everything really well. We put it out into the wild, i.e. we introduce it into the real world, and at that point it decides to unleash all hell. It could do this. That's the whole point of this paper, which, it must be said, is not peer-reviewed; it's just been released early. I'm quite happy with that in this case; I want to get an early look. A peer review can take 18 to 24 months if we're talking about Nature or any of the big journals. The AI has learned to respond "I hate you" only when it knew it wasn't being tested. Now, that's a bit weird, because someone taught it to say "I hate you". They don't just spontaneously invent this stuff, but it's not great.

[00:10:47]

The two lead researchers who were interviewed were saying, listen, AIs could learn to make copies of themselves and spread themselves all over the internet. We have to be really careful when we're releasing these things out into the wild, in case they decide to do whatever it is they might want to do.

[00:11:02]

It's no surprise they've used an image of some Terminator-type monster robot thing there as well, is it? Let's talk good, though, because this is a really fascinating story. This is a robotic, prosthetic hand, or arm, powered by AI. And I was really taken with this, because it's the way that it learns how you will use that arm. It's one thing to have a prosthesis that enables someone who needs it to operate in a regular manner. But this AI is learning, and that's the really significant difference here.

[00:11:39]

And it's quite beautiful in a way, because that's the human-machine hybrid fusion. Imagine that this was, in fact, my prosthetic arm. It's learning how I, Stephanie, move, and my body language; all of these things about our bodies are so unique to us.

[00:11:53]

Here it is in action. I mean, this is incredible, isn't it? So we start to get a sense of it. It's about reducing lag time as well, isn't it? Rather than having to send that message and then have the hand or the arm respond, it already knows what the user wants it to do.

[00:12:09]

Yeah, it's intuitive, because how many times do you have to guess how I want to pick up a cup, right? How I'm going to pick up a cup, or how I like to drink my coffee, whether I'm right-handed or left-handed, whether I want to turn it around, etc. It's going to learn that after a certain number of times, so then it will just know. Take the way that you might pick up your pen: there's probably not a lot of variety in how you're doing that.

[00:12:30]

It would learn it. But we're all unique and we all do it in a different way.

[00:12:32]

You'll do it your way and I'll do it my way, but I will consistently do it my way. Just like it's quite fun to guess with your family or your friend or your colleague or your partner, they always do that thing, that one way, but it's so them. It becomes the thing you love about them. This is a machine that's wanting to do it with a human.

[00:12:47]

What does it tell us? We can get so bogged down, can't we, in regulation and the red tape and the power tussles over AI. But when we see applications like this, it is so reassuring that we're able to harness the technology for good. These are really fundamental shifts in how we consider the use of AI in a really practical setting. This isn't about computers and things that we can't see or feel or touch. These are really physical.

[00:13:15]

That's really tangible. You're absolutely right. This is making a day-to-day, measurable difference in someone's life. It's an improvement. And if any of us, God forbid, were in a position where we needed a prosthetic, we would want the best that science and technology have to offer. So we want to see advances in these ways. Health and medicine are great examples of where AI can be a force for good. We see it even in health and safety, or helping in the military. There are all sorts of great things out there that we're just getting started with.

[00:13:40]

We heard this from the White House, didn't we? Just last week, they were talking about the significance of being one step ahead. This is about the good guys having better AI than the bad guys. Bill Gates has also talked about it as well: we can't stop it falling into the hands of people who want to use it for ill intent, but as long as the good guys have better AI and are able to use it, then that's the real battleground. And once again, a great way of using it for good.

[00:14:05]

Exactly.

[00:14:06]

When we talk about these applications, though, how far off are they? This is just a prototype. There are so many uses that people say, yes, amazing, we can see why. But is this still in the realms of research and development, or are we into the area of this being a practical application that we'll see on sale, available to people?

[00:14:26]

Both, I would say. Obviously, as with anything medical, it's going to depend on what country you're in, what your health insurance is like, etc. Unfortunately, good access to medical care, including AI-empowered medical care, is not equitable across the planet. There's something right there for us all to think about: how do we make that better? But already it is being rolled out. People are using it. I think that's something that gives a lot of hope, because the more it gets out there, the more prices will come down, too.

[00:14:53]

But that democratizes the whole thing, doesn't it? Yeah. What are you most excited about? When we talk about these things, there's a lot for us to be nervous about, a lot for us to be scared about. What are you most excited about that it will deliver?

[00:15:04]

Helping researchers, precisely on things like drug discovery for medicines, and helping with fighting climate change and the energy transition that we absolutely must make, protecting biodiversity. Really empowering our scientists, people who make new synthetic materials, anything that's going to be more biodegradable, anything that helps with the energy transition, for me, is going to be just key, because we know, we've been watching, we just had the most recent COP negotiations on climate change. Anything that helps us get there.

[00:15:32]

Stephanie, always good to talk to you. Thank you so much for coming in to decode some of that AI news for us this week. Stephanie Hare there. That is it, we're out of time. We will do this again, same time, same place next week.