
Re-release - AI Alignment with Dr. Stuart Russell

Probably Science

Andy Wood, Matt Kirshen

Jesse Case, Comedy, News, Matt Kirshen, Standup, Andy Wood, Science & Medicine, Science, Brooks Wheelan

4.8 • 707 Ratings

🗓️ 23 December 2023

⏱️ 61 minutes


Summary

While the gang take a little holiday break, we thought it was worth revisiting Andy's conversation with AI researcher and UC Berkeley Professor of Computer Science Stuart Russell from wayyyyy back in 2019. Now that we're well into the era of generative artificial intelligence, it's interesting to look back at what experts were saying about AI alignment just a few years ago, when it seemed to many of us like an issue we wouldn't have to tackle directly for a long time to come. As we face down a future where LLMs and other generative models only appear to be getting more capable, it's worth pausing to reflect on what needs to be done to usher in a world that's more utopian than dystopian. Happy holidays!

Transcript


0:00.0

Probably Science

0:02.0

I am here with Stuart Russell, the author of Human Compatible: Artificial Intelligence and the Problem of Control.

0:15.0

Thank you for joining me.

0:17.0

Nice to be here.

0:18.0

So this book was great. I've been fascinated with all of the possible pitfalls and great things that AI might bring us.

0:24.6

And there are so many that hadn't even crossed my mind until reading this book.

0:28.6

There are lots more that I didn't get into.

0:31.6

So, to a layperson who thinks this isn't even an issue worth bothering ourselves with right now,

0:39.0

what is your short way of convincing them that this is worth anybody's time?

0:43.7

So, two ways. One is that it's already happening, in the sense that we're already building

0:48.9

and deploying AI systems in the wrong way, and they're having serious negative effects. And the social media catastrophe

0:56.7

is probably the easiest example to understand. On the social media platforms,

1:04.0

whenever you read something or watch something, it's been fed to you by an algorithm

1:10.7

that is trying to maximize click-through

1:14.4

or eyeball time or some other metric. And in doing so, they forgot to include a whole bunch of

1:22.2

other things like not turning the world population into neo-fascists.

1:33.2

And so since the algorithm wasn't told that it wasn't supposed to do that, that's what it did.

1:46.9

By modifying our preferences through feeding us sequences of content that gradually lead us to be much more extreme and therefore much more predictable versions of ourselves.

1:48.4

So the algorithm only cares about whether you're predictable, because the more predictable

1:53.0

you are, the more money it can make off you.

1:55.4

But it so happens as a side effect that it seems to have turned many people into much more extreme versions of

2:02.6

themselves.
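To make the dynamic Russell describes concrete, here is a toy simulation, not from the episode and with every function, name, and number chosen purely for illustration: a recommender greedily maximizes an engagement metric (clicks weighted by dwell time), users are assumed to click content near their current taste, and each consumed item nudges that taste toward itself. The misspecified objective alone is enough to drift a moderate user toward the extreme.

```python
import math
import random

random.seed(0)

# Content items indexed by "extremity": 0.0 = moderate, 1.0 = extreme.
ITEMS = [i / 10 for i in range(11)]

def click_prob(taste: float, item: float) -> float:
    """Assumption: users are most likely to click content near their taste."""
    return max(0.0, 1.0 - abs(taste - item))

def expected_engagement(taste: float, item: float) -> float:
    """The metric the platform optimizes: clicks weighted by dwell time,
    where (by assumption here) more extreme content holds attention longer.
    Nothing in this objective mentions the user's long-term preferences."""
    return click_prob(taste, item) * math.exp(1.25 * item)

def recommend(taste_estimate: float) -> float:
    """Greedy engagement maximization: serve the item with the highest
    predicted engagement for this user right now."""
    return max(ITEMS, key=lambda item: expected_engagement(taste_estimate, item))

def simulate(steps: int = 500, drift: float = 0.03) -> float:
    taste = 0.3  # a fairly moderate starting user
    for _ in range(steps):
        item = recommend(taste)
        if random.random() < click_prob(taste, item):
            # Side effect: consuming content pulls the user's taste toward
            # it, so the engagement-optimal item creeps ever more extreme.
            taste += drift * (item - taste)
    return taste

print(f"user taste after engagement optimization: {simulate():.2f}")
```

Under these assumptions the user typically ends far above the 0.3 starting point, even though the recommender was never "told" to radicalize anyone; radicalized, predictable users are simply a side effect of optimizing the stated metric.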

...

