meta_pixel
Tapesearch Logo
Curious Cases

Rutherford and Fry on Living with AI: A Future for Humans

Curious Cases

BBC

Technology, Science

4.84.1K Ratings

🗓️ 23 December 2021

⏱️ 28 minutes

🧾️ Download transcript

Summary

As huge tech companies race to develop ever more powerful AI systems, the creation of super-intelligent machines seems almost inevitable. But what happens when, one day, we set these advanced AIs loose? How can we be sure they’ll have humanity’s best interests in their cold silicon hearts? Inspired by Stuart Russell’s fourth and final Reith lecture, AI-expert Hannah Fry and AI-curious Adam Rutherford imagine how we might build an artificial mind that knows what’s good for us and always does the right thing. Can we ‘programme’ machine intelligence to always be aligned with the values of its human creators? Will it be suitably governed by a really, really long list of rules - or will it need a set of broad moral principles to guide its behaviour? If so, whose morals should we pick? On hand to help Fry and Rutherford unpick the ethical quandaries of our fast-approaching future are Adrian Weller, Programme Director for AI at The Alan Turing Institute, and Brian Christian, author of The Alignment Problem. Producer - Melanie Brown Assistant Producer - Ilan Goodman

Transcript

Click on a timestamp to play from that location

0:00.0

Hello, I'm Adam Rutherford.

0:04.2

And I'm Hannah Frye and this is the final part of our miniseries investigating AI and

0:09.7

how we will live in a world of intelligent machines.

0:12.8

Yes, and we've been tracking the themes of the re-thlectures given this year by AI

0:16.8

Supremote Stuart Russell, so far we've covered war, work, and what he described as possibly

0:22.1

the most important and maybe the last event in human history, the emergence of machines

0:27.8

that have general intelligence and will be able to accomplish all manner of tasks, basically

0:32.7

better than us.

0:33.7

Well, that's all very jolly, but this last episode is really about how we manage the development

0:38.2

of intelligence and machines, how we build systems that not only share our objectives,

0:43.0

but understand them too.

0:44.4

Yeah, and as ever, we've got two mega-mind experts with us to shuffle me and Hannah through.

0:49.7

We've got Brian Christian from the University of California at Berkeley.

0:52.9

He is the author of the alignment problem and we're going to come to that title in just

0:57.0

a minute.

0:58.0

And we also have Agent Weller, research fellow in machine learning at the University of

1:01.4

Cambridge, and program director for Artificial Intelligence at the Alan Turing Institute.

1:06.1

Welcome to both of you.

1:07.5

Now Brian, the title of your book is the alignment problem, so I think it makes probably the most

1:12.6

sense to come to you, to help us with a definition.

1:16.3

What does it actually mean?

1:18.3

The alignment problem is the question of whether the objective that you've put into a machine

...

Transcript will be available on the free plan in -1148 days. Upgrade to see the full transcript now.

Disclaimer: The podcast and artwork embedded on this page are from BBC, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of BBC and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.