Do Outcomes or Perceptions of Morality Influence Our Sense of Generosity?

Why do we help our friends, family, and co-workers at a cost to our own time or money? One might answer, “Because they helped me out in the past,” or, “Because they will help me out in the future.” Such explanations invoke the notion of reciprocity: if I help you out today, you are more likely to help me out tomorrow. However, we also often help strangers whom we will never meet again, or animals that can provide no obvious direct or indirect benefit to us in the future. Why do we consistently behave in this way? This question has puzzled scientists for a long time.

Classic scientific explanations of this behavior assume that people have social preferences: that is, preferences for particular social outcomes, such as “equity” (fairness, equal outcomes) or “efficiency” (maximizing the sum of the outcomes), and that they behave in order to make these outcomes a reality. For example, giving money to a homeless person whom one will never see again may be explained by people’s social preference for fair outcomes: it is unfair that I have more than they do, so I will share some of my resources.

To study these preferences in a controlled setting, researchers have often turned to “economic games” conducted in the laboratory; that is, games in which money is the common unit of value, and in which people must make decisions that affect the amount of money that they and others receive. In the classic “Dictator Game,” for example, one person (the dictator) is given a sum of money and must decide how much of it they would like to share with a second person, often an anonymous stranger, who received no money from the researcher. A common result is that many dictators split the money evenly, sharing half with the anonymous stranger. Such results ostensibly reveal a social preference for fairness; or, put another way, an “aversion to inequity.” The argument is that people experience discomfort when outcomes are unfair (inequitable), and they behave so as to reduce that discomfort, in this case by sharing the money.

My colleagues and I wondered whether this was the whole story. Specifically, we wondered whether it was the outcome per se that motivated people’s decision to help (share) in these situations, or whether it was also (or instead) their perception of what the morally “right” behavior was, irrespective of its outcome.

To investigate this idea, my colleagues introduced the “Trade-Off Game” (TOG), which we subsequently refined in a later study. The TOG works by pitting social preferences against one another, creating a situation in which the morally “right” behavior is ambiguous. People are asked to choose how to allocate money between themselves and two others (anonymous strangers). One choice gives equal amounts of money to each person in the group; thus, the chooser and the two anonymous strangers each receive the same amount. The alternative choice gives equal amounts of money to the chooser and one of the anonymous strangers, while the second anonymous stranger receives a larger amount. So, the latter choice maximizes the overall gains of the group (efficiency), whereas the former choice maximizes fairness (equity). In a crucial design twist, the choices themselves are labelled such that one or the other is framed as the morally right one. For example, the equitable choice is labelled “fair,” or the efficient choice is labelled “generous.”

My colleagues and I found that this simple switching of the labels produced a large change in people’s choices, and, in fact, a swing in the majority choice. When the equitable choice (i.e., the same £ to everyone) was framed as morally appropriate (labelled “fair”), approximately 60% of people made that choice. In contrast, when the alternative, efficient choice (i.e., the same £ to two people, more £ to the third) was framed as morally appropriate (labelled “generous”), less than 30% chose to allocate the money equitably. In addition, and in contrast to research suggesting that “bad is stronger than good,” we also found that using negative rather than positive labels, such as “unfair” and “ungenerous” instead of “fair” and “generous,” caused a similarly sized switch in people’s choices. In other words, bad was not stronger than good. Taken together, these results provided evidence that many people were motivated by what they perceived to be the morally right action when deciding how to behave in the TOG, and not by the specific outcome of their action per se.

However, this was not the end of the story. To test whether the motivation to choose the morally right action could explain choices in classic games like the Dictator Game (mentioned earlier), a final design twist was necessary. After making their choice in the TOG, all people involved in the study also played a Dictator Game in the role of the dictator. They were given a sum of money and were asked how much (if any) of it they would like to share with a second person — an anonymous stranger — who had received no money. We found that people who made the choice framed as morally appropriate in the TOG — whether that was the equitable choice (“fair”) or the efficient choice (“generous”) — shared consistently more in the subsequent Dictator Game than people who chose otherwise in the TOG. In other words, the people who were sensitive to which choice was morally right in the TOG (as revealed by the moral framing of those choices) were shown to be more prosocial in the Dictator Game.

Taken together, the results of this work suggest that, for many people, helping others in laboratory games like the Dictator Game may be born of a motivation to do (what they perceive to be) the morally “right” action, irrespective of the specific outcomes of that action per se. Away from artificial laboratory games, in the real world this could mean that giving money to the homeless person whom one will never see again is explained by people’s belief that it is the morally right thing to do, even though their action does little to alleviate the inequitable state of affairs. Of course, in the real world doing what is perceived as the morally “right” thing may often coincide with particular, desirable social outcomes. To the extent that this is the case, our results do not imply that people are altogether insensitive to outcomes, or to the consequences of their actions. Furthermore, our results cannot speak to whether people’s preference for the morally right action is internally motivated (they feel good by doing it) or externally motivated (they look good by doing it).

In ongoing work, we are investigating whether people are more likely to choose the morally framed action when other people are watching them and potentially evaluating the choice that they make. This could provide a window into understanding how people’s preferences for the morally right action are shaped by social and reputational pressures in everyday life.

These findings are described in the article entitled “Doing good vs. avoiding bad in prosocial choice: A refined test and extension of the morality preference hypothesis,” recently published in the Journal of Experimental Social Psychology.