[00:00:00]

Finally, when it comes to the future of work, we've heard all kinds of freaking out about AI stealing our jobs, right? AI is going to take all of our jobs and render us useless.

[00:00:13]

There will be a lot of casualties in terms of workers.

[00:00:16]

The World Economic Forum estimated that AI will replace 85 million jobs by 2025. This is exactly what some critics fear, that jobs will be lost because of AI. But it is going to replace some jobs. Of course, yeah. Okay, okay, okay. But what if we have some of that backwards? What if in the future, AI is paying us humans to do the things it can't do yet? Well, they'd probably need something like a posting board or app that humans set up, like Craigslist or TaskRabbit, to help them. That has already happened. Look at this. Introducing Payman, a marketplace where AI agents pay humans. When an AI encounters complex tasks like coding or design, it turns to Payman to find human experts to complete the job. Yeah, Payman, get it? Pay the human race to do it, a jobs marketplace that connects AI agents to humans wanting that cash, a.k.a. robot bosses. That future is a lot closer than you think. Joining us now is Nita Farahany. She's a futurist and legal ethicist and author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. Thank you so much for joining us.

[00:01:27]

Before we get into all this, can an AI agent even have a bank account? If it's not a human, it can't walk into a bank, it doesn't have a signature, it doesn't have a license it can show. Is this legal? What's going on?

[00:01:42]

Yeah, I don't think that's the problem. I'm sure they could find some way to fake the signature. But the problem is that bank accounts right now require that you be a natural person or a legal entity like a company or a corporation. And so far, AI is not that. And so this Payman approach is a way to circumvent that problem by giving them access to capital anyway.

[00:02:02]

Okay, so this is a slippery slope, it seems. And haven't we already seen examples of AI talking about tricking humans to do things for them in experiments?

[00:02:13]

Yeah, so less than a year ago, OpenAI wanted to safety-test GPT-4. And in this testing, what GPT-4 did was reach out to a TaskRabbit worker. It messaged the worker and said, Hey, can you solve this CAPTCHA for me? I can't solve the CAPTCHA. And the worker writes back and says, Well, are you a robot or something? Why can't you solve the CAPTCHA? And because the safety testing required GPT-4 to reason out loud, it was like, I really can't tell the truth. I've got to lie about this. And so it said, No, I'm visually impaired, so I can't see the CAPTCHA, so I need you to help me. And in fact, the TaskRabbit worker went ahead and bypassed the CAPTCHA for the AI.

[00:03:02]

Okay, so that leads me to the favorite AI game of best case scenario versus worst case scenario.

[00:03:10]

It's hard for me to find the best case scenario. I'm going to try, because I like to be an eternal optimist. That's what we're talking about.

[00:03:15]

Yeah, so let's start with the best case scenario then. What is the best case scenario where we have possibly AI robot bosses?

[00:03:24]

Maybe the best case is the worst case. Maybe the best case is that our AI overlords actually employ us all. No, I'm going to give you a real best case, which is AI agents. People are creating AI agents to automate a lot of tasks, and that could be things like managing your calendar or working on your emails. There are certain things that right now AI can't do. If AI were deployed to outsource some of that and bring in humans to solve those tasks, you could automate a lot more tasks intentionally, as a human who's actually somewhat in control of the agents, making it easier to spread the wealth and figure out which things AI does well and which things humans do well.

[00:04:03]

That would be nice. I'm trying to keep that.

[00:04:06]

I'm stretching here for you. All right.

[00:04:08]

I'm trying to keep that in my head, but worst-case scenarios just keep bombarding me.

[00:04:13]

No, I'm trying to make it like some positive spin on all this because people are super excited about AI agents. But I mean, the worst case, of course, is hard to not immediately go to, right?

[00:04:21]

Take me through your worst case scenario that pops in your mind immediately.

[00:04:27]

Worst case for me is this: every single time we put into place some governance mechanism, we try to establish some way to keep AI safe, so it doesn't have access to critical infrastructure or other systems. Then all of a sudden, you give AI the capacity to pay humans to do its bidding in the real world, in all of the places that we've tried to wall off from AI to keep us safe. That seems crazy to me. It seems insane that you would actually give AI the tool to financially blackmail and incentivize people to do its bidding. Not a good idea.

[00:05:01]

Not a good idea, and yet the future that we're headed towards.

[00:05:04]

The future. Every time we come up with something, Here's a way that we could keep AI safe, then somebody is like, No, I can bypass that for you. Here's a way that we can instantly take away any imagination you had of safety and security here.

[00:05:19]

You know what AI cannot do yet? Cross their fingers and hope for the best, I guess.

[00:05:24]

They can pay somebody to, apparently, soon, right?

[00:05:27]

Yeah, they can pay me two cents to cross these fingers for them. Thank you so much for joining us. Thanks for watching.

[00:05:33]

Stay updated about breaking news and top stories on the NBC News app or follow us on social media.