Joshua Greene says lawmakers should move forward with regulations against Facebook.

Stephanie Mitchell/Harvard file photo

Facebook’s moral quandary

Harvard psychologist Joshua Greene explains social media giant’s trolley problem

Testimony by former Facebook employee Frances Haugen, who holds a degree from Harvard Business School, and a series in the Wall Street Journal have left many, including Joshua Greene, Harvard professor of psychology, calling for stricter regulation of the social media company. Greene, who studies moral judgment and decision-making and is the author of “Moral Tribes: Emotion, Reason, and the Gap Between Us and Them,” says Facebook executives’ moral emotions are not well-tuned to the consequences of their decisions, a common human frailty that can lead to serious social harms. Among other things, the company has been accused of stoking division through the use of algorithms that promote polarizing content and ignoring the toxic effect its Instagram app has on teenage girls. In an interview with the Gazette, Greene discussed how his work on moral dilemmas can be applied to Facebook. The interview has been edited for length and clarity.


Do companies like Facebook have a moral responsibility to their users and to the public at large?

Absolutely. These companies have enormous power over people’s lives. In my view, anyone who has that kind of power has an obligation to exercise it responsibly. And part of that may mean relinquishing some of that power. Unfortunately, when these companies make disastrous choices, their actions don’t feel disastrous to them — at least not until they get the blowback from the people who’ve been harmed. As decision-makers, their emotions are not tuned to the gravity and scope of the moral problem.

Is there an example from your field of research that can clarify what is happening at Facebook?

The most emotionally salient moral transgressions are basic acts of physical violence, things like punching someone in the face. When Facebook harms people, the people making those decisions don’t feel like they’re punching someone in the face. This is because the harm is caused passively rather than actively, because the harm is caused as a side-effect rather than intentionally, and because the harm is caused very indirectly — mediated by both technology and the actions of other people. Because of these factors, Facebook executives don’t feel like they’re undermining the mental health of millions of teenage girls or putting a gun to the head of American democracy. Instead, they feel that they’re running a business while managing some “very challenging problems.”

We can break this down using the kinds of moral dilemmas that I and other researchers have used to study the mechanisms of moral judgment.

Imagine that your prized possession is on the trolley tracks — perhaps an antique guitar that’s been in your family for generations. You can save your guitar from the oncoming trolley by pushing someone off a footbridge and onto the tracks. Would you do it? Not unless you’re a psychopath. This is murder, and it feels like murder. It feels like murder because it’s active, direct, and fully intentional. You wouldn’t do this to save your guitar, and Mark Zuckerberg, I believe, wouldn’t do this to preserve Facebook’s profits.

Now let’s adjust the case a bit. Suppose that instead of pushing the person off the footbridge, you could hit a switch that would drop them through a trapdoor onto the tracks. It’s wrong to hit that switch. But it’s certainly easier to hit the switch than to push the person with your bare hands. It’s easier because it’s less direct, because the action does not require your “personal force.” (And it would be even easier if you could get someone else to do the job for you.)

Now let’s make it even easier. The trolley is headed toward your guitar, but this time you don’t have to use anyone as a trolley-stopper. You can hit a switch that will turn the trolley away from the guitar and onto another track. Unfortunately, there’s a person on that track. It’s still wrong to hit the switch, but it feels a bit more defensible. You can say to yourself, “I’m not trying to kill this person. I’m just trying to save my guitar.” You’re killing the person as a side-effect. It’s “collateral damage.”

Now imagine that the switch is already thrown. To save your guitar, you don’t have to do anything. Sure, you could throw the switch back the other way, away from the person and onto your guitar. But is that your job? Is it your responsibility? Now that the harm is caused passively, it’s a lot easier to say, “Well, I feel very bad about this, but it’s not my fault. I didn’t set this runaway trolley in motion. I’m just minding my own business. I have a responsibility to protect my family’s guitar.”

And if you’re feeling somewhat guilty, you can make yourself feel better. Suppose that it’s a very heavy switch. If you pull with all your might, you can turn the trolley away from the person and onto your guitar. So, what do you do? You pull pretty hard, but not hard enough. “I’m trying here! Please understand that this is a very heavy switch!”

That’s basically what Facebook has been saying and doing. American democracy is in peril. The mental health of millions of teenagers is in peril. Facebook doesn’t want these bad things to happen, but it doesn’t feel compelled to do the heavy pulling that’s necessary to prevent them from happening. Facebook isn’t actively and intentionally and directly causing these problems. Instead, it’s allowing them to happen as indirect side-effects of running its business as profitably as possible. And, of course, Facebook is trying to help. But not hard enough. There are things Facebook is not willing to sacrifice.

The problem is that company leaders’ moral emotions — and our moral emotions — are not well-tuned to the situation. Actions that don’t feel particularly violent can do terrible damage on a massive scale. And it doesn’t seem like a terrible thing until the damage has been done. We need systems that acknowledge this and compensate for these human limitations.

How do we make this happen?

As Frances Haugen, the Facebook whistleblower, has said, we need regulation. This does two things. First, it puts the decisions in the hands of professionals who can think about the broader consequences, rather than relying on gut reactions. Second, it leaves the decisions to people who don’t have a conflict of interest — people whose job is to protect the public.

We don’t let pharmaceutical companies decide for themselves which drugs are safe because they have a conflict of interest. We don’t let transportation companies decide for themselves which vehicles are safe because they have a conflict of interest. They’re human, and even if they have good intentions, they can be unconsciously biased. When your interests are at stake you find every reason not to sacrifice your own interests. You raise the threshold for evidence. You call for more research, pointing to all the ambiguities. And there are always ambiguities.

The fundamental problem is that Facebook wants to control large swaths of the world’s information infrastructure but doesn’t want to be responsible for the world. They want to hear those coins clink every time someone hops on the information highway, but they don’t want to be responsible for highway safety. Nothing is going to change until the decisions about what’s safe are made by people whose incentives are aligned with the public good, instead of aligned with corporate profit.

As Haugen said, none of this means that Zuckerberg and other executives at Facebook are evil. They’re not trying to make the world worse. But their focus is on that precious guitar. They are very attached to their billions.

What about the argument that Facebook executives can’t control how users engage with or act on their algorithms?

You can’t control what everyone does, and you don’t need to. We need a system that prevents the worst outcomes. We need to align causal responsibility with moral responsibility. Newspapers and other publishers take responsibility for what they publish. They are held accountable. It’s hard, and it takes resources, but it can be done. Someone needs to be responsible and accountable for what appears online. There are too many people online for everyone to be closely scrutinized, but the worst offenders — the ones who have the biggest audiences and can do the most damage — can be restrained.

Many might argue regulation would severely hamper economic growth.

Economic growth and regulation are more than compatible. In the long run, they go hand in hand. Think of the FDA or the FAA. Having government agencies that are responsible for keeping medicines and airplanes safe doesn’t hobble the pharmaceutical and aviation industries. It makes those industries sustainable. If American democracy is destroyed by militant extremism, it won’t be good for Facebook either. The unregulated pursuit of profit is, in the long run, bad for profit. Sustainable capitalism means operating within constraints that safeguard the public good.