Choiceology with Katy Milkman

Data for the Win: With Guests Michael Kist & Cade Massey

Charles Schwab

Behavioral Economics, Society & Culture, Social Sciences, Decision Making, Charles Schwab, Business, Science, Investing, Dan Heath, Katy Milkman

4.8 • 1.4K Ratings

🗓️ 4 February 2019

⏱️ 31 minutes

Summary

Netflix recommendations, Amazon suggestions, Google searches, airline ticket prices, your social media feed. All of these things are driven by algorithms—computer models that crunch massive amounts of data to generate useful results. These types of online algorithms are commonplace, and so, generally speaking, we're used to them. But what about the algorithms behind self-driving cars or airplane autopilots? What about algorithms used to predict crimes or to diagnose medical conditions? These are domains in which it often feels uncomfortable to let a computer model make what could be life-or-death decisions. In this episode of Choiceology with Katy Milkman, we're exploring the places where algorithms and computer models bump up against resistance from their human users.

Seeing as it's Super Bowl season, it seemed like a good time to revisit last year's contest as a case study in decision making. The 2018 Super Bowl champion Philadelphia Eagles played incredibly well against the formidable New England Patriots. The game could have gone either way, but the Eagles had a secret weapon that gave them an advantage. We speak with Michael Kist from Bleeding Green Nation about the Eagles' integration of computer models into decision making both on and off the field. You'll hear the story of how those models were temporarily abandoned and the team struggled before re-embracing them.

Next, we explore the way self-driving cars make split-second decisions on the road, with results that can make their human passengers squirm. We test whether giving people a small amount of control over how a self-driving car behaves gives them a bit more confidence in the technology.

Then Katy speaks with her Wharton School colleague Cade Massey, a partner in Massey-Peabody Analytics, who explains some of the fascinating ways that algorithms have improved decision making and looks at some of the scenarios where algorithms face an uphill battle for acceptance. Finally, Katy recaps the ways that people designing—or simply using—algorithms can work to overcome our human tendency toward machine mistrust.

Transcript

Click on a timestamp to play from that location

0:00.0

So I'm in self-driving

0:08.0

I'm in self-driving mode,

0:11.0

so it doesn't like there.

0:13.0

You okay?

0:17.0

Yeah, oh yeah.

0:20.0

There's something unsettling to a lot of people about self-driving cars.

0:24.4

The idea of putting your life in the hands of a machine that has to make split second decisions at

0:29.3

55 miles an hour can be disconcerting. But one of the major promises made in the

0:35.2

move towards autonomous cars is that they will be much safer than cars driven by

0:40.0

humans. So why the disconnect? Why do we sometimes bristle at this kind of technology?

0:47.0

Today we're going to explore a human tendency to mistrust algorithms that are designed to make decisions for us,

0:53.6

even when they make those decisions better or more accurately than we do.

0:57.6

I'm Katy Milkman, and this is Choiceology, an original podcast from Charles Schwab.

1:08.0

It's about decisions, big ones and small ones, along with the subtle biases that affect those decisions.

1:15.2

We guide you through a world of hidden psychological forces,

1:18.5

forces that can influence college admissions, sports championships,

1:22.3

and the way you travel. We isolate these forces in order to

1:25.6

understand them and to help you avoid costly mistakes. So full disclosure on this first story, I'm based in Philadelphia, so I may have certain

1:47.0

allegiances.

1:50.0

If you're not a Philadelphia Eagles fan, you might find parts of this story hard to accept, but trust me, I am not falling victim to confirmation bias.

2:02.0

It's the algorithms at the heart of the story

2:04.0

that make it so interesting to me.

...


Disclaimer: The podcast and artwork embedded on this page are from Charles Schwab, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Charles Schwab and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.