In-group Bias (also known as in-group favoritism) is the tendency for people to give preferential treatment to others who belong to the same group that they do. This bias shows up even when people are put into groups randomly, making group membership effectively meaningless.

Where this bias occurs

Let’s say you’re a football fan, and you root for the New England Patriots. At work, you have a couple of coworkers who are also into football: John, who is also a Patriots fan, and Julie, who supports the Philadelphia Eagles. You’re much closer with John than you are with Julie, even though you and Julie actually have more in common (outside of sports preferences) than you do with John.

In-group bias can harm our relationships with people who don’t happen to belong to the same group that we do. Our tendency to favor in-group members can lead us to treat others unfairly and cause us to perceive the same behaviors among different people very differently depending on their group. We might even feel justified in committing immoral or dishonest actions, so long as they benefit our group.1

In-group bias is a big component of prejudice and discrimination, leading people to extend extra privileges to members of their own in-group while denying that same courtesy to others. This creates unequal outcomes for different groups. In the criminal courts, for example, in-group bias can affect the decision-making of judges.2

We all like to think that we are fair, reasonable people. Most of us feel confident that we (unlike others) are free from bias and prejudice, and that the way we see and treat other people must be warranted. However, over the years, research on in-group bias has shown that group membership affects our perception on a very basic level—even if people have been sorted into groups based on totally meaningless criteria.

One classic study illustrating the power of this bias comes from the psychologists Michael Billig and Henri Tajfel. In a 1973 experiment, participants started out by looking at pairs of paintings and marking down which one they preferred. At this point, some of the participants were told that they’d been assigned to a specific group based on their choices of painting, while others were told they were assigned to a group by a random coin toss. (As a control, other participants weren’t told anything about being in a group, and were merely assigned a code number.)

After this, each participant went into a cubicle, where they were told they could award real money to other participants by marking it down in a booklet. The other participants were listed by code number, so their identities were concealed; however, the code number indicated which of the two groups they had been assigned to.

This study was designed so that the researchers could tease apart the possible causes of in-group bias. Would people be more generous to their group members even when they were told that the groups had been decided randomly? Or would this effect only appear when participants were told that the groups were based on painting preference so that people felt that they had something in common with their group mates?

The results showed that participants gave more money to members of their own group regardless of why that group had been formed in the first place: they favored their in-group even when it had been assigned by a coin toss.3 Experiments that follow this same basic outline, known as the minimal group paradigm (MGP), have been repeated time and time again, demonstrating that the favoritism people show for their own group doesn't need to be founded in anything particularly meaningful.
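
To make the structure of one of these allocation rounds concrete, here is a minimal, purely illustrative Python sketch. Everything in it is an assumption made for illustration (the group labels, the budget, and the bias weight); it is not data or code from Billig and Tajfel's studies.

```python
# Illustrative sketch of a minimal-group-paradigm allocation round.
# The bias weight, budget, and group labels are invented assumptions,
# not figures from the original experiments.

def allocate(allocator_group: str, recipients: list[tuple[str, str]],
             budget: int = 10, in_group_weight: float = 0.7) -> dict[str, int]:
    """Split a budget across recipients, weighting in-group members more heavily.

    An in_group_weight above 0.5 models in-group favoritism; 0.5 would be fair.
    """
    weights = [in_group_weight if group == allocator_group else 1 - in_group_weight
               for _, group in recipients]
    total = sum(weights)
    return {code: round(budget * w / total)
            for (code, _), w in zip(recipients, weights)}

# Recipients appear only as anonymous code numbers tagged with a group label,
# mirroring the booklets used in the original design.
recipients = [("#17", "Group A"), ("#24", "Group B")]
print(allocate("Group A", recipients))  # {'#17': 7, '#24': 3}
```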

But in-group bias goes beyond kindness to our in-group; it can also spill over into harm towards our out-group. Another famous study illustrating in-group bias is the Robbers Cave study, conducted by Muzafer Sherif. In this experiment, 22 eleven-year-old boys were brought to a mock summer camp and divided into two teams, the Eagles and the Rattlers. The teams were separated, and only interacted when they were competing in various activities. The two teams showed increasing hostility towards each other, which eventually escalated into violence (leading some to call the experiment a “real-life Lord of the Flies”).9,16 Although there were a number of problems plaguing the experiment, including a harsh environment that may have made the boys more anxious and aggressive than they would have been otherwise,10 Sherif’s study is often seen as a demonstration of how group identity can become the foundation for conflict.

Another troubling finding is that in-group bias, and the prejudice that goes along with it, shows up in humans from a very early age. Children as young as three show favoritism for their in-group, and research in slightly older children (ages five to eight) found that, just like adults, kids showed this bias regardless of whether their group had been assigned randomly, or based on something more meaningful.5

Group memberships form part of our identities

There are a few theories of why in-group bias happens, but one of the most prominent is known as social identity theory. This approach is founded on a basic fact about people: we love to categorize things, including ourselves. Our conceptions of our own identities are based partially on the social categories we belong to. These categories could involve pretty much any attribute—for example, gender, nationality, and political affiliation are all categories we place ourselves into. Not all of these categories are equally important, but they all contribute to the idea we have about who we are and what role we play in society.6 Categorization processes also compel us to sort people into one group or another.

Another basic truth about people: we have a need to feel positive about ourselves, and we are frequently overly optimistic about how exceptional we are compared to other people. These processes of self-enhancement guide our categorizations of ourselves and others and lead us to rely on stereotypes that demean the out-group and favor our in-group. In short, because our identities are so heavily reliant on the groups we belong to, a simple way to enhance our image of ourselves is by giving a shiny veneer of goodness to our in-group—and doing the opposite for our out-group.4

Research that supports social identity theory has found that low self-esteem is linked to negative attitudes about people belonging to out-groups. In one Polish study, participants completed several questionnaires, including one on self-esteem, one on collective narcissism, one on in-group satisfaction, and one on hostility towards out-groups. (Collective narcissism and in-group satisfaction both involve holding positive opinions of a group that one belongs to, but in collective narcissism, membership in that group is pivotal for a person’s self-concept; meanwhile, in-group satisfaction doesn’t necessarily mean that belonging to a group is so central to someone’s identity.)

The results showed that self-esteem was positively correlated with in-group satisfaction, and negatively correlated with collective narcissism. Put another way, for people with low self-esteem, group membership was more likely to be a central fixture of their identity. Low self-esteem was also linked with out-group derogation.7 Taken together, these results suggest that people with low self-esteem feel a more urgent need to elevate their own group above others because a larger slice of their identity depends on their belief that their group is better.
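
As a rough illustration of what those correlational claims amount to, the sketch below generates hypothetical questionnaire scores whose sign pattern mirrors the reported findings. The data are invented; only the directions of the correlations correspond to the study.

```python
import numpy as np

# Hypothetical Likert-style questionnaire scores, invented for illustration;
# these are not data from the Polish study cited above.
rng = np.random.default_rng(seed=42)
n = 200
self_esteem = rng.uniform(1, 7, size=n)
noise = rng.normal(0, 1, size=n)

# Build the reported sign pattern directly into the toy data.
in_group_satisfaction = 2.0 + 0.5 * self_esteem + noise  # positive association
collective_narcissism = 6.0 - 0.5 * self_esteem + noise  # negative association

print(np.corrcoef(self_esteem, in_group_satisfaction)[0, 1])  # positive
print(np.corrcoef(self_esteem, collective_narcissism)[0, 1])  # negative
```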

We expect reciprocity from others

Social identity theory, put forward by Tajfel and his colleague John Turner, is the commonly accepted explanation for in-group bias. However, some researchers have argued that Billig and Tajfel's minimal-group research didn't account for an important social norm: the norm of reciprocity, which requires us to repay kindnesses that others have done for us.

In one study, Yamagishi et al. (1998) replicated one of Billig and Tajfel’s original MGP studies, with one modification: some of the participants were paid a fixed amount by the experimenter, rather than receiving money that had been awarded to them by other participants. This made it clear to these participants that the decisions they made about how to allocate money would have no bearing on the rewards they themselves received at the end of the experiment. As the researchers had predicted, this group did not show any evidence of in-group bias: they divided up their money equally between in-group and out-group members.8

These results contradict the conclusion, drawn by other researchers, that in-group bias arises from merely belonging to a group. Rather than springing up automatically wherever a group is formed, it might be the case that group favoritism only happens when people have the expectation that their good deeds will be repaid by their group members. Put differently, having an in-group to belong to seems to give rise to “group heuristics”—the expectation of reciprocity from in-group members, but not necessarily out-group members.

Like all cognitive biases, in-group bias happens without us realizing it. Although we may believe that we are being fair and reasonable in our judgments of other people, in-group bias demonstrates that when we’re interacting with members of an out-group, we may not be as charitable to them as we are to people more “like us.” When it comes to the judgments we make about other ethnic groups, in-group bias fuels ethnocentrism: the tendency to use our own culture as a frame of reference through which to evaluate other people. This usually means seeing other cultures as lesser, rather than simply different.

In-group bias has serious, real-world consequences, particularly for people belonging to minority groups (be it a group based on ethnicity, gender, religion, or whatever else). In the legal system, for example, an in-group bias towards one’s own ethnic group can influence a judge’s decision of whether or not to detain a suspect.2

In-group bias can also lead us to be more lenient than we necessarily should be towards members of an in-group who have done something wrong. In one study, researchers found that people who scored high on measures of modern racism were quick to excuse bad behavior committed by a European American and to praise them for their virtues. When it came to similar behavior perpetrated by an African American person, however, they were not so kind.11 As this study shows, in-group bias can prevent us from holding in-group members accountable for their own behavior.

This bias also has implications for our own decision-making, including decisions about moral behavior. Research has found that people are more willing to lie or cheat in order to benefit their in-group, sometimes even when they themselves don’t stand to gain anything from this dishonesty.1 Our favoritism for our own group is apparently so strong that many of us will bend our morals for the sake of the tribe. This can obviously lead to some bad decisions, especially for people who are lacking in self-esteem and are particularly desperate to gain the approval of their in-group.

In-group bias is very difficult to completely overcome, because most of the time when it asserts itself, it’s not obvious; it works below the surface of our consciousness. That said, the research points to some tactics that might help to reduce in-group bias.

Capitalize on people’s self-interest

While it sounds counterintuitive, some researchers have tried to exploit people’s self-interest in order to reduce their in-group bias. One study compared two games, known as the dictator game (DG) and the ultimatum game (UG). In both games, players decide how to split a sum of money between themselves and a recipient. In the DG, once the deciding player has made a decision, the recipient has no choice but to go along with it. However, in the UG, the recipient can choose to either accept or reject the first player’s offer. If they reject it, neither player receives anything.
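
To make the difference between the two games concrete, here is a minimal sketch of their payoff rules; the pot size and offers are hypothetical.

```python
# Minimal sketch of the payoff rules described above; amounts are hypothetical.

def dictator_game(pot: int, offer: int) -> tuple[int, int]:
    """DG: the recipient must accept whatever the deciding player offers."""
    return pot - offer, offer  # (decider's share, recipient's share)

def ultimatum_game(pot: int, offer: int, accepted: bool) -> tuple[int, int]:
    """UG: the recipient can reject the split, leaving both players with nothing."""
    return (pot - offer, offer) if accepted else (0, 0)

# A stingy offer to an out-group member is final in the DG...
print(dictator_game(10, 2))                   # (8, 2)
# ...but risks the decider's own payoff in the UG if it gets rejected.
print(ultimatum_game(10, 2, accepted=False))  # (0, 0)
```

The recipient's veto power in the UG is what gives the deciding player a self-interested reason to make fair offers, and that is the incentive at work in the results below.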

In this study, participants played either the DG or the UG, and were told that their partner (who didn't actually exist) either shared their view on abortion or held the opposite view. When participants were playing the DG, they showed significant in-group bias, offering more money to in-group members than out-group members. However, in the UG, this bias disappeared entirely.12 These results suggest that giving people a concrete incentive to treat others equally can be an effective way to reduce in-group bias.

Try a little teamwork

Remember the Robbers Cave study, where boys were separated into teams and pitted against each other? After making arch nemeses out of the Eagles and the Rattlers, Sherif and his colleagues were able to reduce the hostility between the two teams by forcing them to cooperate with each other. In order to achieve this, the researchers artificially cut off the camp’s drinking water supply and told the boys that they would all have to work together in order to fix it. (It was the 1950s, so you were allowed to put children in a forest and deprive them of water, for science.) Through this exercise, and a few others where the teams were given a shared goal, the groups eventually got back on friendly terms.

More recent evidence has supported the idea that cooperation between groups can reduce in-group bias. Through interaction with an out-group, people's categorizations of others can expand to include out-group members in a new, superordinate group identity. And even though Sherif had theorized that it was key for the two groups to share a common fate, with each group's outcome dependent on the other's, research shows that this isn't actually necessary: interacting with one another is enough.13 Wherever possible, then, encouraging cooperation between groups is a useful strategy.

In-group bias has probably been shaping human history for as long as we’ve been around, but it wasn’t until 1906 that it started to become an object of academic curiosity. The concept was introduced by the American sociologist William Sumner, who is known for his work on ethnocentrism and folkways (social norms that are specific to a given society or culture). Sumner believed that ethnocentrism (and the in-group bias underlying it) was universal among humans.4

In the second half of the twentieth century, social psychology started to gain steam as a field, as the world struggled to make sense of World War II and the Holocaust. The topic of intergroup relations, and why people could be so irrationally biased against people who weren't like them, was a major area of interest (and still is). In the 1960s, Muzafer Sherif, of Robbers Cave fame, worked with his wife Carolyn to develop realistic conflict theory, an approach that posits that group conflict arises from competition over resources. Later, in the 1970s, Michael Billig and Henri Tajfel developed the minimal group paradigm, and Tajfel formulated social identity theory (along with another psychologist, John Turner).

In the lead-up to the 2008 U.S. presidential election, there were two frontrunners for the Democratic nomination: Barack Obama and Hillary Clinton. As it turns out, within the party, people's allegiances to a given candidate were sometimes enough to inspire in-group bias.

Researchers recruited Democrats to play the dictator game, where they decided how much of a pool of money they would share with an anonymous partner. The participants indicated whether they preferred Obama or Clinton, and were told that their partner either agreed or disagreed. The researchers repeated this experiment three times: first, in June 2008, right after Clinton's concession speech; next, in early August, before the start of the Democratic National Convention (DNC); and finally, in late August, after the DNC had ended.

The results showed that, in the first two experiments, men showed significant in-group bias, giving more money to partners who shared their choice of candidate. (This bias was not found in women.) However, in the third experiment, after the DNC, this difference disappeared. Interestingly, a closer look at the results showed that, although both groups of men showed in-group bias, the effect was much stronger in men who preferred Clinton than in those who preferred Obama.

The authors of the paper wrote that the 2008 primary season had been a particularly bitter one, and there had been worries in the Democratic Party that spurned Clinton supporters would break from the party and vote Republican. (Sound familiar?) So, the goal at the DNC was to foster a broader group identity among Democrats. The fact that the authors found reduced in-group bias after the DNC makes sense, given that national polls also found a large increase in Obama support among Clinton supporters after the convention.14

It’s no secret that sports fans take their allegiances seriously, so it’s probably no surprise that people show in-group bias for fellow supporters of their own team. In one study, researchers had participants fill out a number of surveys right as they were leaving a basketball game. These surveys gauged how invested the participants were in their team, and had them rate the behavior of fans of both teams during the game. The results showed not only that the spectators were biased to favor their in-group, but also that, for people who identified more strongly with their team, this effect was strongest when the game they had just watched was a home game.15

What it is

In-group bias is the tendency for us to give preferential treatment to members of our own group, while neglecting or actively harming out-groups.

Why it happens

The main theory of in-group bias is social identity theory, which posits that membership in various groups comprises a large part of our identities. We have a need to feel positively about ourselves, and by favorably comparing our groups to others, we are able to enhance our self-concepts. Other theories include realistic conflict theory, which says that groups get into conflict when they’re in competition for resources, and group heuristics, which says that we are nicer to in-group members only because we expect reciprocity from them.

Example 1 – Democrats and in-group bias in the 2008 election

In the lead-up to the 2008 U.S. presidential election, male Democratic voters showed significant in-group bias, favoring people who shared their choice of candidate and penalizing others. This bias vanished after the DNC, where the goal had been to foster a sense of shared identity.

Example 2 – Sports fans and in-group bias

Spectators at a basketball game who were heavily invested in their team showed in-group bias when rating the behavior of fans of both teams. This effect was strongest after a home game.

How to avoid it

In-group bias is notoriously difficult to avoid completely, but research shows it can be reduced through interaction with other groups, and by giving people an incentive to act in an unbiased manner.