4.7 • 6.2K Ratings
🗓️ 15 April 2025
⏱️ 38 minutes
Daniel Kokotajlo, former OpenAI researcher and Executive Director of the AI Futures Project, and Eli Lifland, a researcher with the AI Futures Project, join Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Contributing Editor at Lawfare, to discuss what AI may look like in 2027. The trio explore a report co-authored by Daniel that dives into the hypothetical evolution of AI over the coming years. This novel report has already elicited a lot of attention with some reviewers celebrating its creativity and others questioning its methodology. Daniel and Eli tackle that feedback and help explain the report’s startling conclusion—that superhuman AI will develop within the next decade.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Support this show http://supporter.acast.com/lawfare.
Hosted on Acast. See acast.com/privacy for more information.
0:00.0 | The following podcast contains advertising. |
0:04.4 | To access an ad-free version of the Lawfare podcast, become a material supporter of Lawfare at |
0:11.5 | patreon.com slash lawfare. That's patreon.com slash lawfare. |
0:18.2 | Also, check out Lawfare's other podcast offerings: Rational Security, Chatter, Lawfare No Bull, and |
0:27.5 | The Aftermath. What we hypothesize is that if you have this super, what we call a superhuman coder, |
0:36.3 | which is like, you know, an AI system that is as good as the best human coder, except much faster and cheaper as well, that this would kind of, like, in various ways improve the research productivity by a significant amount. |
0:49.8 | It's the Lawfare podcast. I'm Kevin Frazier, the AI Innovation and Law Fellow at Texas Law |
0:55.6 | and a contributing editor at Lawfare. Joined by Daniel Kokotajlo, former OpenAI researcher |
1:01.8 | and Executive Director of the AI Futures Project, and Eli Lifland, an AI Futures Project |
1:08.0 | researcher. |
1:09.0 | If we do get something like superintelligence, it's probably going to look crazy. |
1:13.9 | There's a lot to think about, and a lot's going to happen really fast. |
1:17.4 | And not enough people are talking about this and not enough people are thinking about it. |
1:21.7 | And a very small set of people are like thinking about it, |
1:25.5 | specifically using the medium of actual concrete stories. |
1:29.5 | Today we're talking about a report that Daniel and Eli co-authored, AI 2027. It's a hypothetical |
1:35.7 | narrative exploring how AI may evolve in the coming years. Its bold predictions warrant a close read and, of course, a |
1:42.8 | thorough podcast. What if you could peer |
1:45.9 | just two years into the future and catch a glimpse of a world shaped by superhuman AI? What if |
1:53.1 | that future was bleaker than many hope? What if we changed policies, altered AI development, |
1:58.7 | made AI a key issue for the general public? What if that future |
2:02.6 | was more utopian? What would you do to have those answers? Well, Daniel, Eli, and a few other |
... |
Disclaimer: The podcast and artwork embedded on this page are from The Lawfare Institute and are the property of its owner. Generated transcripts are the property of The Lawfare Institute and are distributed freely under the Fair Use doctrine; they are not guaranteed to be accurate.