Where Humans Meet Machines: Intuition, Expertise and Learning


Professor Daniel Kahneman was awarded a Nobel Prize for his work on the psychology of judgment and decision-making, as well as behavioral economics. In this age of human/machine collaboration and shared learning, IDE Director, Erik Brynjolfsson, asked Kahneman about the perils, as well as the potential, of machine-based decision-making. The conversation took place at a recent conference, The Future of Work: Capital Markets, Digital Assets, and the Disruption of Labor, in New York City. Some key highlights follow.

Erik Brynjolfsson: We heard today about algorithmic bias and about human biases. You are one of the world’s experts on human biases, and you’re writing a new book on the topic. Which are the bigger risks: human biases or algorithmic ones?

Daniel Kahneman: It’s pretty obvious that it would be human biases, because you can trace and analyze algorithms.

In the example of sexist hiring, if you use a system that is predictively accurate, you are going to penalize women because, in fact, they are penalized by the organization.

The problem is really not the selection, it’s the organization. So something has to be done to make the organization less sexist.

And then, as part of doing that, you would want to train your algorithm. But you certainly wouldn’t want just to train the algorithm and keep the organization as it is.

Brynjolfsson: Your new book, Noise, is about the different kinds of mistakes that people can make that are different than biases. Help us understand that a little bit.

Kahneman: At an insurance company, we measured what is technically called noise, and we did that in the following way: We constructed a series of six completely realistic cases that were given to 50 of their underwriters. We wanted to determine how much variability there was in their funding decisions. We expected differences between 10% and 15%, but in fact, they disagreed about 56% of the time. That’s a lot of noise.
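One common way to quantify the kind of noise Kahneman describes is the median relative difference between every pair of judgments of the same case. The sketch below illustrates that idea; the function name and the sample quotes are hypothetical, not data from the actual study.

```python
from itertools import combinations
from statistics import median

def noise_index(quotes):
    """Median relative difference between all pairs of judgments
    of the same case -- one simple way to quantify 'noise'."""
    diffs = [abs(a - b) / ((a + b) / 2)  # difference relative to the pair's mean
             for a, b in combinations(quotes, 2)]
    return median(diffs)

# Hypothetical premium quotes (in dollars) from five underwriters
# looking at the same case -- illustrative numbers only.
quotes = [9500, 16000, 12300, 20000, 11000]
print(f"noise index: {noise_index(quotes):.0%}")  # about 32% for these numbers
```

Under this measure, perfectly consistent judges would score 0%; the study's reported figure means that two randomly chosen underwriters, looking at the same case, typically gave answers more than half again apart.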

Daniel Kahneman explains a point to Erik Brynjolfsson. Photo: Samuel Stuart Hollenshead

In many occupations, a single person makes decisions on behalf of the organization, like a triage nurse in the emergency room. And if you have a lot of noise, it sets a ceiling on how accurate you can be. So, noise is a mistake. You can measure noise more easily than bias.

An algorithm could really do better than humans, because it filters out noise. If you present the same problem to an algorithm twice, you’ll get the same output. That’s just not true of people.

You can combine humans and machines, provided the machine has the last word! Humans have a lot of valuable inputs; they have impressions, they have judgments.

But humans are not very good at integrating information in a reliable and robust way. And that’s what algorithms are designed to do.

Humans can override the algorithm when something obviously relevant has happened. So, if an algorithm offers a loan to someone and then the banker realizes that that person has been arrested for fraud, that loan will be voided — but that’s the exception. In general, if you allow people to override algorithms, you lose validity because they override it too often. Also, they override on the basis of their impressions, which are biased, inaccurate, and noisy. Decisions may depend on someone’s mood at the moment.

Brynjolfsson: You’re not a big fan of human decision making, I see.

Kahneman: For a job like underwriters, a simple algorithm can do just as well.

Brynjolfsson: What about the concern that some algorithms may have hidden biases built into them that we don’t even realize, and they may be making biased decisions for thousands or millions of people?

Kahneman: If you get disastrous outcomes, the problem is upstream. It’s with the organization or the training data, which comes from humans.

Brynjolfsson: How will AI change social science?

Kahneman: I have big worries about algorithms, but biases are not the main one.

I’m more concerned about what AI will do to people: whether it will create superfluous people, whether it will destroy good jobs, and so on.

My guess is that AI is very, very good at decoding human interactions and human expressions. If you imagine a robot that sees you at home, and sees your interaction with your spouse, and sees things over time, that robot will be learning. But what robots learn is learned by all, like self-driving cars. It’s not the experience of the single, individual self-driving car. So, the accumulation of emotional intelligence will be very rapid once we start to have that kind of robot.

It’s really interesting to think about whether people are happier now than they were before. This is not at all obvious, because people adapt and habituate to most of what they have. So, the question to consider about well-being and about providing various goods to people is whether they’re going to get used to having those goods, and whether they continue to enjoy those goods. It’s not apparent how valuable these things are, and it will be interesting to see how this changes in the future.

Watch the full video here.