Lex Fridman Podcast

Ian Goodfellow: Generative Adversarial Networks (GANs)

Lex Fridman

Philosophy, Society & Culture, Science, Technology

4.713K Ratings

🗓️ 18 April 2019

⏱️ 69 minutes

Summary

Ian Goodfellow is the author of the popular textbook on deep learning (simply titled "Deep Learning"). He coined the term Generative Adversarial Networks (GANs) and, with his 2014 paper, is responsible for launching the incredible growth of research on GANs. A video version is available on YouTube. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations.
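The episode itself does not walk through any code, but since the conversation centers on GANs, here is a minimal sketch of the two-network adversarial training loop introduced in Goodfellow's 2014 paper. It assumes PyTorch; the layer sizes, optimizer settings, and the Generator/Discriminator architectures are illustrative placeholders, not details taken from the paper or the episode.

```python
# Minimal GAN training sketch (assumes PyTorch is installed).
# Architectures and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumption)

# Generator maps noise z to a fake sample; discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial update: train D to tell real from fake, then train G to fool D."""
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
    fake = G(torch.randn(n, latent_dim)).detach()  # detach so G isn't updated here
    d_loss = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Generator step: push D(G(z)) toward 1 (the "non-saturating" generator loss).
    g_loss = bce(D(G(torch.randn(n, latent_dim))), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

Repeating train_step over batches of real data alternates the two players, which is the adversarial game the conversation refers to when it credits the 2014 paper with launching the field.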

Transcript

Click on a timestamp to play from that location

0:00.0

The following is a conversation with Ian Goodfellow.

0:03.8

He's the author of the popular textbook on deep learning, simply titled Deep Learning.

0:09.0

He coined the term generative adversarial networks, otherwise known as GANs.

0:14.8

And with his 2014 paper is responsible for launching the incredible growth of research

0:20.9

and innovation in this subfield of deep learning.

0:24.8

He got his BS and MS at Stanford and his PhD at the University of Montreal with Yoshua Bengio and Aaron Courville.

0:33.3

He held several research positions, including at OpenAI, Google Brain, and now at Apple

0:39.0

as the director of machine learning.

0:41.6

This recording happened while Ian was still at Google Brain.

0:45.4

But we don't talk about anything specific to Google or any other organization.

0:50.8

This conversation is part of the Artificial Intelligence Podcast.

0:54.6

If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at

0:59.7

Lex Fridman, spelled F-R-I-D.

1:03.1

And now here's my conversation with Ian Goodfellow.

1:25.5

You open your popular deep learning book with a Russian doll type diagram that shows deep learning

1:31.8

as a subset of representation learning, which in turn is a subset of machine learning and

1:37.4

finally a subset of AI.

1:39.8

So this kind of implies that there may be limits to deep learning in the context of AI.

1:45.0

So what do you think are the current limits of deep learning?

1:49.1

And are those limits something that we can overcome with time?

1:52.9

Yeah, I think one of the biggest limitations of deep learning is that right now it requires

1:57.3

really a lot of data, especially labeled data.

...

Disclaimer: The podcast and artwork embedded on this page are from Lex Fridman and are the property of their owner; they are not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Lex Fridman and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.