🗓️ 18 October 2023
⏱️ 91 minutes
0:00.0 | In part two with AI researcher Joscha Bach, we're exploring the future of AI, |
0:04.6 | from the potential battle between AI and humans to the possible solutions to the problem of alignment. |
0:10.7 | And you will not want to miss the conclusion of this episode, where Joscha explains why he |
0:15.1 | thinks optimizing for intelligence is actually the wrong approach altogether. |
0:23.6 | AI bias, which seems to be ratcheting up real fast, |
0:28.4 | that freaks me out and I don't think people... Here's my thing, man. I think people need to |
0:33.8 | distrust themselves. I think people think they're way too smart and that they know what's best |
0:39.6 | and whether we're talking about Catholic priests with their bordellos and, as long as the peasants are, |
0:44.8 | you know, pacified with religion, then we can keep them rule-abiding and society works. And the |
0:49.8 | thing is, I'm not even arguing that; it may be true. But I really, really worry about anybody that |
0:57.2 | thinks that they can control top-down what is true and what we let the public understand. |
1:03.6 | So AI bias becomes really, really problematic. I want to set the stage as we get into AI bias, |
1:08.6 | though, with one idea. The Thucydides trap, for those that aren't familiar with it, is basically this: |
1:18.3 | if you look back through history, Thucydides was an ancient Greek writer who wrote about this idea, and he |
1:23.1 | said, anytime you have a prevailing power and a rising power that comes to challenge them, |
1:28.5 | they are going to go to war. And if you look back, I think it's over the last 500 years, |
1:32.6 | it's happened like 16 times, and 12 times it has ended up in hot war. Those numbers are |
1:37.6 | directionally correct. I don't think they're literally correct. And if we are, and I heard you, |
1:44.9 | you're not an accelerationist, but if we are building AI and, in the hopes of avoiding |
1:53.0 | a dumb golem AI that maximizes paper clips, we make a hyper-intelligent AI that outpaces us |
2:02.1 | on a lot of things. In fact, I saw one of your tweets that said, we've slaved away for the last |
2:07.8 | 10,000 years or 100,000 years, I forget the number you used, so that we could do the things that |
... |
Disclaimer: The podcast and artwork embedded on this page are from Impact Theory, and are the property of its owner and not affiliated with or endorsed by Tapesearch.
Generated transcripts are the property of Impact Theory and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.
Copyright © Tapesearch 2025.