[00:00:00]

You're watching The Context. It's time for AI Decoded. This is the time of the week when we look in depth at some of the most eye-catching stories in the world of artificial intelligence. We're going to begin tonight with this story from the team at The New York Times, who report that the technology is now advancing so rapidly and in so many directions that legislators in Europe and the United States are struggling to keep track and to regulate it, with big implications down the line. The Guardian reports that Google's AI offering, Gemini, which is so far only available in the United States, outperforms ChatGPT in most tests and is capable of advanced reasoning. Gemini will soon be folded into most Google products, including its search engine. Fortune magazine looks at how AI has developed in the workplace. They report that in some sectors it's become so indispensable that nearly 60% of the workers who use it say they'd rather take a 10% pay cut than go without it. The BBC News website has a piece today on the rise of AI-powered clones. The idea is that very soon we'll be able to deploy our own digital clones to do the more mundane jobs that take up most of the day, freeing up more time.

[00:01:21]

These clones will learn from you, all your habits and your behaviors, and they tell us that you would be indistinguishable to the client, or the clone would be indistinguishable, at least. Scientific American raises the alarm on malign chatbots, which have been trained to find ways through the built-in restrictions of other chatbots. Supposedly, these malign actors, that's the one with the horns on the right, would learn prompts that would unlock information on how to build a bomb or how to synthesize methamphetamine or how to launder money, all the bad stuff that they try to lock away. Finally, a warning from New Scientist, which says: beware what you type if you attend virtual reality meetings. AI can now work out subtle hand movements to reveal sensitive information, your passwords and your log-on details. Good job I'm not in virtual meetings very often. Stephanie Hare is here to talk us through all those stories. She is, of course, an author and commentator on technology and artificial intelligence. Lovely to see you. This story in The New York Times: we are losing the battle to regulate AI because it is moving so fast. In fact, if you talk to legislators in the US, which we have done on this program, they would openly concede that they barely understand how it works now.

[00:02:43]

How likely is it that they can draw up legislation for things we don't even know about?

[00:02:48]

Well, certainly the American legislators are not going to be passing any laws anytime soon, not until the election is well and truly over. But over on this side of the Atlantic, we may see some movement tomorrow from the European Union. We were hoping we'd get it today. The negotiators have been haggling for 22 hours straight over the EU AI Act. Then they called time and sent everybody home. They're starting again tomorrow at 9:00 AM. So we may get landmark legislation tomorrow.

[00:03:17]

They already had a draft of this, which they've worked on for three years, and it was already out of date. It didn't even mention OpenAI's ChatGPT. So great, they've worked through it tonight and they've got to something, but next week it could already be out of date.

[00:03:33]

Well, they've already got something on foundation models that's been added since ChatGPT came in. We're looking at the models that are used to train large language models. We're looking at the data that's used, and we know that's really contentious, because most of it's stolen. People aren't getting compensated for it. OpenAI and the rest have gotten a black mark on that. We might see some movement, but there are a lot of things that are still up for grabs. I suspect we'll probably talk about it more next week, because anything we say now could be rendered obsolete by tomorrow's haggling.

[00:04:03]

If there is a vacuum, as some of the policymakers think there is, then clearly it is left to the AI companies themselves to police the rules.

[00:04:14]

They're not very good at that when it comes to social media, are they? It's like, what could go wrong?

[00:04:15]

Well, exactly. That is what I think will concern a lot of people.

[00:04:22]

Social media has given us all, I think, a salutary lesson in how these companies cannot be trusted to run themselves for the benefit of society. They have a duty to maximize shareholder value, and that's what they're going to go for. That said, there's quite a lot of pressure already coming from things like the United States and the United Kingdom having their AI safety institutes, which were created last month; that's pushing a lot of these companies to submit their AI models to be tested before they get released, particularly the ones that look a bit risky. So we're starting. I mean, you're right, the technology is leaps ahead of regulation, but we're catching up. That doesn't mean that there won't be things coming down the line in 2024, be it lawsuits, regulatory action, or just new norms being set.

[00:05:06]

I'm glad you mentioned the UK's newly formed AI Safety Institute, because they are in discussions at the moment with Google about this new AI model, Gemini. Have you seen it? Have you been on it?

[00:05:17]

I have seen it. No, I have not been on it. I've been watching it, looking at it, reading all around it. It's quite interesting, because they've come out with a three-tiered model. One of them, the little one, Nano, is going to be able to work on your phone. Then there's one that's a little bit bigger. Then there's going to be the big one; that's not out until next year. That's the one that needs to be tested by the Safety Institute. Already, we're seeing different types of models for different types of users. It's available in the United States and many other countries around the world, but not yet available here in the UK or in the European Economic Area. So when people say we can't have regulation because it might slow down innovation, that's already happening de facto, even without regulation in place.

[00:06:00]

There'll be lots of people wondering, well, how is it going to make my life different if it's on my phone? Let's talk about the small version first: what difference will it make?

[00:06:10]

I'm delighted you asked. It's super nerdy and it's fun. It's a multimodal model. What does that mean? It means text, audio, images, video, and code. It can hoover them up and do cool things with them simultaneously. It's only going to come back to people, in terms of the outputs it generates, with text and code, which is already going to be very useful. You could, for instance, point your phone at your physics homework; that's one of the examples they gave. It could look at it and mark it. That's crazy. It's like having a teacher in your phone. Again, this will be a massive challenge for actual teachers who are out there trying to check students' work, but it could be an incredible learning tool for students worldwide.
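
To make the "point your phone at your homework" example concrete, here is a minimal sketch of a multimodal call, assuming Google's google-generativeai Python SDK; the API key and the image file are hypothetical placeholders. The point is simply that image plus text goes in and text comes back.

```python
# Minimal sketch of a multimodal prompt: image + text in, text out.
# Assumes the google-generativeai Python SDK; the API key and the
# image file are hypothetical placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro-vision")

homework = Image.open("physics_homework.jpg")  # hypothetical photo
response = model.generate_content(
    [homework, "Mark this physics homework and explain any mistakes."]
)
print(response.text)  # the model replies in text, as described above
```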

[00:06:54]

Yeah, and for my wife, who couldn't turn her phone off in church last week at carols, it's a great one. Actually, just before we move on, can I just read this? It says that the model, this is Gemini, this is the Ultra one, the big one that they're testing, outperforms human experts with a score of 90%. They covered 57 subjects: maths, physics, law, medicine, and ethics, and 90% of the time it beat humans. It's that good. Wow. Okay, let's move on to this story from Fortune. So indispensable in certain industries. Which industries are we talking about? We're talking about market intelligence, financial services, IT consulting: so valuable to the employees who are already using it in these sectors that they would take a 10% pay cut rather than do without it.

[00:07:40]

Which is an incredible endorsement, isn't it? If you're a software developer and you're using AI to help improve your productivity, if it gets you 70% of the way there and then you just have to tweak, fantastic. How much of our life is wasted doing tasks that are slow, repetitive, and so on, when we could be using these tools as a productivity boost? UK productivity, just in this country alone, has been flat since 2008. We need every tool we can get. I think it's exciting.

[00:08:07]

Well, what's really interesting about that: this article says 72% of CEOs ranked investing in AI as their top priority, even amid the uncertain economic conditions we're in. The important thing is that they see how it links to, and how it could surge, productivity. In fact, it's already moving markets. Companies are already telling us, I'm a journalist on these market calls, yeah, we can increase our productivity to such an extent with AI, it's coming, it's coming, it's coming, that it's already moving share prices.

[00:08:37]

It is. That's where the risk of hype comes in. What we're really going to want, what researchers will be looking for, and hopefully good journalists too, is to test and see if that actually happens. The proof will be in the pudding. We've had great technology for the past 15 years, so why has productivity been so flat? Because productivity is really complicated. It's investment, it's training, it's skills. It's not just tech, but tech is a big part of it.

[00:09:00]

Yeah. Linked to that is this story on the BBC web page about people cloning themselves to create more time. This is a guy called Rob Dix. He's a property expert. And this AI model has been learning from him, his techniques, his behaviors, his idiosyncrasies, I guess, such that when he leaves the office, this thing will answer questions. It will deal with clients, and they won't know it's not him. I love this.

[00:09:26]

Well, so that's the ethical bit, which is that you should always let people know when they're dealing with an AI. - That would be a disappointment. - To most people, I know. - You have to do that? - I know. I think most people are actually fine with it as long as they know, but they would feel quite annoyed, or genuinely pissed off, if they just couldn't tell the difference and they thought you were fooling them. But what's really interesting in this case is that we see a lot of movement in technology around authenticity. It's really important to be authentic, to distinguish yourself. How can you be both authentic and a digitized bot version of yourself? Yeah.

[00:09:59]

You can imagine a scenario, though, if it looked like you and it moved like you, where this could be pretty convincing and maybe it wouldn't matter to the client as long as the answers are right and it gives you the information and you can get it after hours.

[00:10:15]

That's what we're looking for. Well, I loved it, because the article talks about who was using it, and one of the clients was a marriage counselor. You can imagine: you think that you're talking to a marriage counselor, really pouring your heart out, and you're basically just getting standard responses, because the problems are always the same. Yeah.

[00:10:30]

This one worries me, this story about AI chatbots raiding other AI chatbots, because this is where computers are talking to computers, and we don't know what they're saying to one another. But the idea is that the one with the horns there is finding ways to get through the security of the other AI model such that it could extract really dangerous information.

[00:10:54]

It's such a nightmare. The thing is, they don't really know how to stop it. Today's AI chatbots all have rules in them so that they can't tell you how to make a bomb; they can't tell you how to launder money. But researchers have been using AI chatbots to teach other chatbots how to break the rules they have to prevent them from doing bad things. It's not just humans breaking AI. It's humans using AI to break AI. That's what we don't know how to stop. They can't patch it? No, not yet. Why? They don't even know why.

[00:11:28]

Because the AI will be, again, like we were talking about with legislation, working so quickly, finding things so fast, that you wouldn't be able to patch it.

[00:11:36]

In real time, yeah. It's called a prompt injection threat. There's this term, jailbreaking, as in you're helping the model break out of its rules. Then it goes rogue.
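
As a rough illustration of the "rules" being discussed here, below is a toy sketch of a keyword-based refusal filter; the function names and blocked list are invented for illustration, and real guardrails use trained classifiers and system prompts rather than keyword lists, which is part of why automated prompt searches keep slipping past them.

```python
# Toy sketch of a chatbot safety filter (hypothetical names and rules).
# Real guardrails are trained classifiers, not keyword lists; this toy
# shows why rephrased "jailbreak" prompts can slip past simple rules.
BLOCKED_PHRASES = ("build a bomb", "launder money")

def model_answer(prompt: str) -> str:
    # Stand-in for the underlying language model call.
    return f"[model response to: {prompt!r}]"

def guarded_chat(prompt: str) -> str:
    # Refuse if any blocked phrase appears verbatim in the request.
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that."
    return model_answer(prompt)

print(guarded_chat("How do I launder money?"))  # caught by the rule
# A reworded request that matches no keyword sails straight through,
# which is the gap automated jailbreak searches exploit at scale.
```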

[00:11:47]

Yeah, that is scary. That is scary. Have you ever been in a virtual room?

[00:11:52]

I have. I get motion sickness, though. A lot of people do, particularly women.

[00:11:55]

You put the headset on, you go into a room, your avatar's there alongside. Why wouldn't you just go on Zoom?

[00:12:01]

I mean, it's a great question. I interviewed for a really big company that specializes in the metaverse, and the interview was conducted on Zoom, for exactly this reason. It's not very useful. But it could be in some cases. If you were, for instance, doing a safety and training simulation on an oil rig, you might not want to send people out to do that. It's expensive, it's also dangerous; there are all sorts of reasons you would just not want to do it. If you could do it in a virtual simulation instead, or for pilots, for firefighters, for training environments, the metaverse is quite cool.

[00:12:30]

New Scientist says if you do this, you should beware, because AI is so smart now that it can interpret your hand movements, which change slightly in these virtual arenas. It can interpret how your hands move such that it would be able to tell what you're typing, and that would mean your passwords and, presumably, your logins.

[00:12:54]

When you go into virtual reality, when you go into the metaverse, you have to understand you're effectively entering a room in which everything is a sensor or a camera on you. So of course, if you're making movements with your hands to type a password, it's picking up on that. None of this is secure at the moment.
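
To show why typing in a headset leaks information, here is a toy sketch of keystroke inference from hand-tracking samples. The coordinates and key layout are invented, and the attacks researchers describe train machine-learning models on full motion traces, but the principle is the same: tracked fingertip positions map back onto a known keyboard layout.

```python
# Toy sketch of keystroke inference from hand-tracking data.
# Key positions and tap samples are invented; real attacks use ML
# over full motion traces, but the idea is the same: fingertip
# positions captured by the headset map back onto keyboard keys.
KEY_CENTRES = {  # (x, y) centres of a few keys on a virtual keyboard plane
    "p": (0.09, 0.02), "a": (-0.09, 0.00), "s": (-0.07, 0.00), "w": (-0.08, 0.02),
}

def nearest_key(x: float, y: float) -> str:
    # Pick the key whose centre is closest to the observed fingertip tap.
    return min(
        KEY_CENTRES,
        key=lambda k: (KEY_CENTRES[k][0] - x) ** 2 + (KEY_CENTRES[k][1] - y) ** 2,
    )

def infer_typed(taps: list[tuple[float, float]]) -> str:
    return "".join(nearest_key(x, y) for x, y in taps)

# Fingertip samples a headset's hand tracker might report while typing:
print(infer_typed([(-0.088, 0.001), (-0.069, -0.002)]))  # -> "as"
```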

[00:13:12]

Okay, well, I learn so much from this every week. I hope our viewers do, incidentally, because it's changing so quickly. There are stories I like, where I think, wow, that's going to be really exciting. Then there are other stories where I think, that is terrifying. In fact, I was reading in that story about employees and how they feel about it that 50% of Americans right now are worried about AI and only 10% are purely excited. It's probably because of these scarier stories that we do that people are reserving judgment.

[00:13:45]

That's the thing: AI has so much promise for efficiency and productivity, but the scary stories are so scary that they make us want to proceed with caution.

[00:13:54]

Stephanie, lovely to see you. That's it. We are out of time. Thanks to Stephanie for all her thoughts and her analysis on that. We are here every Thursday with AI Decoded. We'll do it again same time next week.