
Best Of: Is A.I. the Problem? Or Are We?

The Ezra Klein Show

New York Times Opinion

Society & Culture, Government, News

4.6 • 11K Ratings

🗓️ 20 December 2022

⏱️ 77 minutes


Summary

This past year, we’ve witnessed considerable progress in the development of artificial intelligence, from the release of image generators like DALL-E 2 to chatbots like ChatGPT and Cicero to a flurry of self-driving cars. So this week, we’re revisiting some of our favorite conversations about the rise of A.I. and what it means for the world.

Brian Christian’s “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book gets its name. “Alignment problem” originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn’t understand how it really worked or what we had actually asked it to do.

So this conversation, originally recorded in June 2021, is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we’re dealing with algorithms we don’t really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I’ve ever heard of, why the problem of automation isn’t so much job loss as dignity loss, and much more.

Mentioned:

“Human-level control through deep reinforcement learning”

“Some Moral and Technical Consequences of Automation” by Norbert Wiener

Recommendations:

“What to Expect When You're Expecting Robots” by Julie Shah and Laura Major

“Finite and Infinite Games” by James P. Carse

“How to Do Nothing” by Jenny Odell

Thoughts? Email us at [email protected]. Guest suggestions? Fill out this form.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

“The Ezra Klein Show” is produced by Annie Galvin, Jeff Geld and Rogé Karma; fact-checking by Michelle Harris; original music by Isaac Jones; mixing by Jeff Geld; audience strategy by Shannon Busta. Special thanks to Kristin Lin.

Transcript

Click on a timestamp to play from that location

0:00.0

Hey, it's Ezra.

0:02.2

We are off now until the end of the year.

0:04.6

But there's something I wanted to do here in our end of the year re-airs, editorially, not

0:08.7

just throw some old episodes up on the feed.

0:11.3

I think when people look back on 2022, one of the main stories is going to prove to have

0:15.8

been this breakout year in artificial intelligence.

0:19.0

That particularly felt true at the end of the year with the rise, or the release, of

0:22.5

ChatGPT, which is a chat system from OpenAI based on a big predictive language model where

0:29.1

people could ask a question, and it would return writing that felt remarkably capable,

0:33.9

remarkably almost intelligent.

0:36.7

And that stuff has all kinds of problems that we're going to be getting into in an episode

0:40.0

right after the new year.

0:42.4

But I wanted to go back to a couple episodes we've done on AI that I think are going

0:46.2

to make a little bit more sense to people right now.

0:48.2

And I want to start with this episode with Brian Christian, who wrote a book called

0:51.6

The Alignment Problem, which in my view is the single best book you can read on the issue

0:57.1

of machine intelligence and the way in which it is difficult for our form of intelligence

1:02.7

to interact with it.

1:04.1

The way it is much harder than you would think to get a machine to understand what it

1:07.7

is you want it to do and make sure that that is the thing that it ends up doing.

1:12.8

As these systems get much more powerful and become integrated into many more areas of

1:16.7

our lives, the possible damage that misalignment can do is pretty profound.

...


Disclaimer: The podcast and artwork embedded on this page are from New York Times Opinion, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of New York Times Opinion and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
