
A Skeptical Take on the A.I. Revolution

The Ezra Klein Show

New York Times Opinion

Society & Culture, Government, News

4.611K Ratings

🗓️ 6 January 2023

⏱️ 72 minutes


Summary

The year 2022 was jam-packed with advances in artificial intelligence, from the release of image generators like DALL-E 2 and text generators like Cicero to a flurry of developments in the self-driving car industry. And then, on November 30, OpenAI released ChatGPT, arguably the smartest, funniest, most humanlike chatbot to date.

In the weeks since, ChatGPT has become an internet sensation. If you’ve spent any time on social media recently, you’ve probably seen screenshots of it describing Karl Marx’s theory of surplus value in the style of a Taylor Swift song or explaining how to remove a sandwich from a VCR in the style of the King James Bible. There are hundreds of examples like that.

But amid all the hype, I wanted to give voice to skepticism: What is ChatGPT actually doing? Is this system really as “intelligent” as it can sometimes appear? And what are the implications of unleashing this kind of technology at scale?

Gary Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become one of the leading voices of A.I. skepticism. He’s not “anti-A.I.”; in fact, he’s founded multiple A.I. companies himself. But Marcus is deeply worried about the direction current A.I. research is headed, and even calls the release of ChatGPT A.I.’s “Jurassic Park moment.” “Because such systems contain literally no mechanisms for checking the truth of what they say,” Marcus writes, “they can easily be automated to generate misinformation at unprecedented scale.”

However, Marcus also believes that there’s a better way forward. In the 2019 book “Rebooting A.I.: Building Artificial Intelligence We Can Trust,” Marcus and his co-author Ernest Davis outline a path to A.I. development built on a very different understanding of what intelligence is and the kinds of systems required to develop that intelligence. And so I asked Marcus on the show to unpack his critique of current A.I. systems and what it would look like to develop better ones.

This episode contains strong language.

Mentioned:

“On Bullshit” by Harry Frankfurt

“AI’s Jurassic Park moment” by Gary Marcus

“Deep Learning Is Hitting a Wall” by Gary Marcus

Book Recommendations:

The Language Instinct by Steven Pinker

How the World Really Works by Vaclav Smil

The Martian by Andy Weir

Thoughts? Email us at [email protected]. Guest suggestions? Fill out this form.

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast, and you can find Ezra on Twitter @ezraklein. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

“The Ezra Klein Show” is produced by Emefa Agawu, Annie Galvin, Jeff Geld, Rogé Karma and Kristin Lin. Fact-checking by Mary Marge Locker and Kate Sinclair. Original music by Isaac Jones. Mixing by Jeff Geld and Sonia Herrero. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion audio is Annie-Rose Strasser.

Transcript

Click on a timestamp to play from that location

0:00.0

I'm Ezra Klein. This is "The Ezra Klein Show."

0:24.6

So on November 30th, OpenAI released ChatGPT to the public.

0:29.3

ChatGPT is, well, it's an A.I. system you can chat with.

0:33.9

It is trained on heaps of online text, and it has learned, if learned is the right word, how

0:40.7

to predict the likely next word of a sentence.

0:44.7

And it turns out that if you predict the likely next word of a sentence enough times with

0:48.8

enough accuracy, what you get is pretty eerily humanlike writing.

0:54.1

And it's kind of a wonder.

0:56.1

If you spent much time on social media toward the end of the year, you've probably seen

0:59.6

screenshots of ChatGPT writing about losing socks in the laundry, but in the style of the

1:04.9

Declaration of Independence or explaining Thomas Schelling's theory of nuclear deterrence in the style

1:10.1

of a sonnet.

1:11.9

But after reading lots and lots and lots of these A.I.-generated answers, and honestly

1:16.5

creating more than a few myself, I was left feeling surprisingly hollow or maybe a little

1:21.8

bit worse than that.

1:23.1

What ChatGPT can do, it really is amazing.

1:26.5

But is it good?

1:28.1

Should we want what's coming here?

1:30.7

I want to be clear that I'm not here to say the answer is no.

1:34.9

I'm not here to say that the A.I. revolution is going to be bad.

1:38.5

And if you listened to the episodes with Brian Christian and Sam Altman, you know I am interested

1:43.9

in what these systems can do for us.

...

