

When fairness prevails


Harvard research shows how uncertainty affects behavior

Philosophers and scientists have long puzzled over the origins of fairness. Work by a group of Harvard researchers offers some clues, with the discovery that uncertainty is critical to the concept’s development.

Using computer simulations of evolution, researchers at Harvard’s Program for Evolutionary Dynamics (PED) — including Director Martin Nowak, scientist David Rand, and junior fellow Corina Tarnita — found that uncertainty is key to fairness. Hisashi Ohtsuki from the Graduate University for Advanced Studies in Kanagawa, Japan, also contributed to the study. Their work was described in a Jan. 21 paper in the Proceedings of the National Academy of Sciences.

“A number of papers have studied the evolution of fairness over the years,” said Rand, who will begin an assistant professorship at Yale this summer. “Our novel contribution was to take the effects of randomness into account. What we found was that as we turned up the uncertainty in our simulations, it fundamentally changed the nature of the evolutionary dynamic. The result was that in a world that has a lot of uncertainty, it actually became optimal to be fair, and natural selection favored fairness.”

To model fairness, Rand and colleagues used the Ultimatum Game, which involves two players bargaining over a pot of money. The first player proposes how the money should be split. If the second player accepts the offer, the money is split as proposed; if the offer is rejected, the game is over and neither player gets anything.
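
To make the rules concrete, here is a minimal sketch of a single round in Python. Representing a strategy as a pair (p, q), the fraction p a player offers when proposing and the minimum fraction q the player will accept when responding, is the standard encoding for this game; the function name and parameters here are illustrative, not taken from the paper.

```python
def play_ultimatum(proposer, responder, pot=1.0):
    """One round of the Ultimatum Game.

    A strategy is a pair (p, q): offer fraction p when proposing,
    and accept any offer of at least fraction q when responding.
    Returns (proposer_payoff, responder_payoff).
    """
    p_offer = proposer[0]       # what the first player offers
    q_threshold = responder[1]  # the least the second player will take
    if p_offer >= q_threshold:  # offer accepted: split as proposed
        return pot * (1 - p_offer), pot * p_offer
    return 0.0, 0.0             # offer rejected: neither player gets anything
```

In this notation, a purely self-interested responder has q near zero, and a self-interested proposer, anticipating that, sets p near zero as well; a fair player sets p = q = 0.5.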

“The reason this game is interesting is that if you assume everyone is rational and self-interested, the second player should accept any offer, because even if they’re getting only one dollar it’s still better than nothing,” Rand said. “The first player should anticipate that, and should make the minimum possible offer.”

The game almost never works that way, however.

Instead, Rand said, many people will reject offers they believe are unfair. Earlier studies have shown that as many as half of players will reject offers of 30 percent or less — meaning they are effectively paying to retaliate against the other player for making such a low offer, or to stop the other player from getting ahead.

“The proximate psychological explanation for why people behave this way in the Ultimatum Game is that they have a preference for fairness, and they’re willing to pay to create equality,” Rand said. “The question we were trying to answer was: Why? Why did we come to have those preferences?”

Rand and his colleagues built a set of computer players, each with a specific strategy describing how much it would offer as proposer and the minimum it would accept as responder. Each round, all the computer players played the game with one another, and then updated their strategies in a process similar to genetic evolution.

“You can think of it as though the players that earned higher payoffs attracted more imitators. Players sometimes choose to change their behavior, and when they do, they copy the strategies of players who were more successful,” Rand explained. “It could also represent actual genetic evolution, where players with [a] big payoff leave more offspring. Either way, higher payoff strategies tend to become more common in the population over time.”
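
Here is one way such an update step might look in code. This is a sketch under our own assumptions: the payoff-proportional "imitation lottery" below is among the simplest variants of the dynamic Rand describes, and it reuses the play_ultimatum function sketched above. The mutation parameter stands in for players experimenting with new strategies.

```python
import random

def total_payoffs(pop, pot=1.0):
    """Each round, every player plays every other player in both roles."""
    pay = [0.0] * len(pop)
    for i, a in enumerate(pop):
        for j, b in enumerate(pop):
            if i != j:
                pi, pj = play_ultimatum(a, b, pot)  # a proposes, b responds
                pay[i] += pi
                pay[j] += pj
    return pay

def imitation_step(pop, mutation=0.01):
    """Higher-payoff strategies attract more imitators; occasionally a
    player experiments with a brand-new random strategy instead."""
    pay = total_payoffs(pop)
    weights = [w + 1e-9 for w in pay]  # avoid an all-zero weight vector
    new_pop = random.choices(pop, weights=weights, k=len(pop))
    return [(random.random(), random.random()) if random.random() < mutation
            else s for s in new_pop]
```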

By observing which strategies became dominant over many generations, the researchers tracked how the system evolved, and they saw that fairness offered players an evolutionary advantage, but only when uncertainty was factored into the system.
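
A hypothetical driver loop shows the kind of observation involved; the population size, generation count, and seed below are arbitrary choices of ours, and the loop reuses the functions sketched above.

```python
import random

random.seed(1)  # reproducible run
population = [(random.random(), random.random()) for _ in range(50)]
for generation in range(2000):
    population = imitation_step(population)  # defined in the sketch above

mean_offer = sum(p for p, _ in population) / len(population)
print(f"average offer after evolution: {mean_offer:.2f}")
```

Whether the average offer drifts toward the selfish corner or the fair 50-50 split depends, in the paper's account, on how much randomness the update process contains.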

To test whether these results would play out in the real world, Rand and colleagues used the online labor market Amazon Mechanical Turk to recruit hundreds of volunteers from around the globe. After playing the Ultimatum Game, participants were asked how easy it was for people in their community to determine who is, and who isn’t, successful.

“We found exactly what the model predicted, which, I think, wouldn’t have been at all obvious had we not done the modeling first,” Rand said. “What we found is a correlation — the more uncertainty there is about who is successful and who isn’t, the more fair people are in the Ultimatum Game.”

Understanding why that is, however, is trickier.

“Think about a world where nobody is offering anything — everyone is completely rational and self-interested,” Rand said. “If you introduce a fair person into a world like that, they will do poorly, because they will make generous offers, and people will accept them. Other people, however, will make low offers to that person, and they will be rejected. As a result the fair person will never have the chance to succeed.”

The same is true of a rational person in a generally fair world. Their low offers will be rejected, resulting in a poor payoff.
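
A toy calculation with the earlier play_ultimatum sketch makes both situations concrete; the specific strategy values are our own illustration.

```python
fair, selfish = (0.5, 0.5), (0.01, 0.01)

# A fair invader proposing to a selfish resident: accepted, even split.
print(play_ultimatum(fair, selfish))     # (0.5, 0.5)

# A selfish resident proposing to the fair invader: rejected, both get 0.
print(play_ultimatum(selfish, fair))     # (0.0, 0.0)

# Two selfish residents: the tiny offer is accepted.
print(play_ultimatum(selfish, selfish))  # (0.99, 0.01)
```

Per pairing, the lone fair player earns 0.5 while each selfish resident earns about 1.0 against fellow residents, so under strictly payoff-driven copying the fair strategy never gains a foothold.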

So what happens if, as previous studies have done, you assume the dynamics are deterministic, so that successful strategies always spread and unsuccessful ones always die out?

“If you’re in a selfish world, the population can never leave that state, because the fair person is always at a disadvantage,” he said. “If you rely on these kinds of deterministic dynamics, that first fair person is always going to die out, and fairness as a strategy will never spread.

“Whereas in a world where there’s uncertainty, when someone experiments with a fair strategy in a world of selfish people, they will still get a bad payoff, but sometimes just by chance that fair strategy might become more common in the population,” he continued. “And once it becomes common enough, the momentum switches and it’s better to be fair than selfish. That’s how it becomes the favored behavior.”
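
One standard way to put a dial on that uncertainty is the pairwise-comparison, or Fermi, update rule, in which a selection-intensity parameter controls how reliably players copy success. This is our sketch of that general technique, not necessarily the paper's exact rule; it again reuses total_payoffs from the earlier sketch.

```python
import math
import random

def fermi_step(pop, beta=0.1):
    """A random player compares payoffs with a random role model and
    imitates with probability 1 / (1 + exp(-beta * payoff_difference)).
    Small beta: noisy, uncertain copying, so worse strategies sometimes
    spread by chance. Large beta: nearly deterministic copying of the
    highest payoff."""
    pay = total_payoffs(pop)  # defined in the sketch above
    i, j = random.sample(range(len(pop)), 2)
    if random.random() < 1.0 / (1.0 + math.exp(-beta * (pay[j] - pay[i]))):
        pop[i] = pop[j]  # player i adopts player j's strategy
    return pop
```

At low beta the dynamic matches Rand's description: a fair experimenter still earns a bad payoff, but chance can carry the strategy to the tipping point at which fairness becomes self-sustaining.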

This work was supported by the John Templeton Foundation.