People + Strategy Journal

Winter 2020

In First Person: Jennifer Eberhardt

Jennifer Eberhardt drew from her 20-plus years of research and teaching as a Stanford University professor for her book Biased. People + Strategy editors David Reimer and Deb Bubb spoke to Eberhardt about identifying, addressing, and managing bias. 

By David Reimer and Deb Bubb

Managing Bias for Better Outcomes

P+S: This issue is focused on inclusion, considered through the lens of the responsible stewardship of power. Your book Biased came out earlier this year; it draws on your 20-plus years as a researcher and professor and examines events as recent as Charlottesville. What have you seen in terms of responsible use of power when it comes to addressing bias?

Jennifer: We know that bias is more likely to be triggered in some situations than in others. Just understanding that gives us control or power over the extent to which bias is going to show up and lead to negative outcomes for people. This is especially relevant for those in power—for institutions, for organizations—because they have outsized power to dictate the social environment for a lot of us.

For example, I talk in the book about Nextdoor. They set up their platform to create safer and happier communities, to be a place where people can gather and share information and so forth. That was the goal. Yet that same platform can be used for profiling, which they realized was happening. It was leading to instability rather than contributing to the purpose of the platform.

Leadership decided to tackle that problem. They were able to make a difference by simply slowing people down, by creating a checklist that people have to work through—and the checklist not only slows people down, but it reorients them and gets them to think in a different way. Nextdoor was able to curb profiling by 75 percent using that technique.

What’s key for me is not just the drop in profiling incidents, but the fact that their leaders could act, informed by the research and science, and use their power to make a difference. It made a difference for anybody who uses that platform, and they’re in 95 percent of the neighborhoods in the country right now. That’s huge power because you can go from creating a platform where divisiveness and polarization and hate can get magnified, to creating a platform where those tendencies get muffled. We need leaders to understand how to push one agenda over the other. 

P+S: You’ve noted elsewhere that bias can’t be eliminated but can be managed. What does effective bias management look like?

Jennifer: You have to distinguish between the creation of the bias and the management of it. Organizations alone can’t do much about all the ills of society, but they can control the environment once people are inside the organization. They can do a lot when it comes to making sure that biases don’t influence how people are treated in terms of hiring or promotions. You need to build systems in order to keep bias at bay.

You can’t just think, “We don’t agree with biases and so we’re going to put out some kind of value statement for our organization, and that’ll be it.” You have to do a lot more work. This gets back to responsibility and power. Our leaders have a huge responsibility and we want to equip them to be able to understand and to act on that understanding in good ways.

Listen to a clip of People + Strategy editor David Reimer talk with Jennifer Eberhardt about bias within organizations.

P+S: Your work with police departments is an interesting example of coming to terms with bias and performance at a systems and individual level day-to-day. What was key to managing bias in that setting?

Jennifer: One of the things I’m most proud of is how we were able to move the needle in Oakland and here in the Bay Area. Together with some colleagues at Stanford and people within the police department, we initially met to talk about how to reduce the number of traffic stops that officers made of people who weren’t committing any serious crimes, because all that time pulling people over typically didn’t amount to much. Those stops were a real point of friction with the community. They led to tension and distrust. 

P+S: What techniques did you apply?

Jennifer: We developed a metric for intelligence-led stops because officers would always say, “Our stops are all intel-led, so if you see racial disparity in the people we stop, that’s because those are the people who are committing crime. So our hands are tied. There’s nothing we could be doing differently.” But when we developed a metric to track it, we could see that not all the stops were intel-led. 

Any time an officer in Oakland stops someone on the street or in a car, they have to complete a form. On that form, they have to note the reason for the stop, the outcome of the stop, the location of the stop, and so forth.

We simply added a question: Is this stop intelligence-led (yes or no)? First of all, we had to get them to agree on a definition because they were all using “intel-led” in different ways. The department’s definition became: Do you have credible evidence to tie this particular person to a specific crime, and do you have that evidence ahead of time? Now we had a definition and a metric to track, and we found that only 20 percent or so of their stops were intelligence-led.

Then we asked, can we incentivize intel-led stops? 

The chief and the leadership said to the organization, “We want you to focus more attention on these intel-led stops and less on these other stops.” Pulling people over for having a license plate light out or things like that—those were the stops that caused an issue. Sometimes they were initiated because they gave the officers time to investigate the person. Maybe they had an intuition about someone. It’s called a pretextual stop, and that’s a legal stop, it’s considered constitutional, but we would see huge racial disparities in who they pulled over when they used only their intuition.

P+S: How did that impact outcomes? 

Jennifer: In 2017, before that question was introduced to the form, officers made about 32,000 stops across the city; with the addition of that question, it was under 20,000 stops. If we look at stops of African Americans alone, those dropped by 43 percent in that one year. It was a huge difference. And the big issue is that they had been stopping everyone because they felt like it was a good crime-fighting tool. Their narrative was that this is how they kept crime down. But we found that even with the huge reduction in the number of stops, the crime rate continued to fall. That led them to rethink their strategy: they didn’t need to stop as many people as they thought. It also led them to ask, why were we stopping that many people? It led them to consider the potential role of bias in their own actions.

P+S: You’re describing a fascinating blend of data and science and letting people discover and step into the new way of behaving without just lecturing. 

Jennifer: In terms of how to have an impact, it’s not through giving a lecture, but more helping people see through their own actions, helping them come to ask themselves whether bias is playing a role. We were able to achieve this shift because we changed the practice, we incentivized from the top, and leadership changed the cultural norms around what’s good, what we want you to focus your attention on, and what we don’t. We started with a metric. That’s huge, because you can’t see what you can’t measure. And if you’re not seeing it, you can’t attack it and you can’t address it.

So that took power, that took leadership. But with the police, even the individual officer on the street has power. They don’t have as much power within the organization, but they have power in terms of the community member, the person they’re stopping. They can determine life or death in that situation. And they started out saying, “We get what we get. These are the people who commit crime so we don’t have any control.” Until the data helped change their narrative. Simply helping people to understand both their power and their responsibility is a good thing.

P+S: Any surprises you’ve found in studying how bias plays out in organizational contexts?

Jennifer: It’s hard sometimes for people to know how to think about people who are not in their “group.” If you’re at an organization where women or people of color are in positions of power and you’re not used to seeing them there, it’s hard to know how to evaluate their work because they’re less familiar to you. Somehow you almost don’t know what to do with them.

As one example, we found this in an investment study where we presented different venture capital (VC) teams to people who allocate assets in VC firms. We gave the allocators a one-page description of a team that was black-led or a team that was white-led. We saw bias with the teams that were highly qualified. But the bias came because the allocators couldn’t tell the difference between a black-led VC team that was lower in qualifications and a black-led VC team that was higher in qualifications. They could distinguish that easily for the white-led teams, and they could use that information to predict future performance, but they couldn’t do that for the black-led teams. I think part of it is that the VC participants didn’t know how to assess the black leaders. It’s like when race comes into it, you don’t know how to use the same metrics. It’s almost like race blinds you from making basic distinctions that you could make in any other situation.

P+S: Can you theorize on the “why” behind that?

Jennifer: We want to follow up on it more. Maybe some of it is just the lack of familiarity, but I think part of it is the discomfort around race. Maybe you can’t trust your own perceptions, and so you don’t know how to reason in your usual way. That’s also tied to feedback, which is a huge thing for people of color and women too: just getting adequate feedback—not just in the corporate world, but in the academic space—because having people evaluate your work is hard to do. The evaluators worry about how they’re going to be seen or how you’re going to interpret their critique: will you think their feedback is grounded in gender bias or race bias?

So then a manager ends up not providing the same quality of feedback because he’s thinking about managing his own image as a good person and he doesn’t want to be accused of those things. Then you never get the feedback that you need to really improve, and over time real differences in performance emerge, partly because of that. So leaders can play a biased role without even realizing that they are changing the trajectory, the long-term outcomes, for that person because they’re trying to manage their own stuff.

P+S: Increasingly, our leaders are evaluated based on speed—their ability to come to quick conclusions and drive quick outcomes. In the Nextdoor example you shared earlier, interrupting speed and slowing down the thought process was a huge part of the success. How might leaders balance the value of speed against the risks of bias?

Jennifer: I think it starts with helping them to understand that there is a balance, there is a tradeoff. People value speed so much: the faster you can make decisions, the better you are, the more leadership qualities you’re exhibiting. It’s almost like it shows that you are in control. But sometimes we need to slow down a bit to get it right. That’s not a sign of weakness but a sign of better decision making.

There’s a famous study from the 1970s teaching us about the power of the situation. The researchers had seminarians, practicing priests, who were told to go across the campus to deliver a sermon on the Good Samaritan. Some of the priests were told that they were ahead of schedule, some that they were behind schedule, and some that they were right on time. All of the priests, when they were crossing campus after they’d written their speech on the Good Samaritan, passed by a person who needed help.

And so the question was: Who would stop to help? They found that if you were ahead of schedule, you’d stop to help, but if you were behind schedule, you wouldn’t, even though you’re a priest and you’ve just written a sermon on the Good Samaritan. That shows the power of the situation. But it also shows how we are sometimes not as connected with our values when we have to act quickly. If you can slow down, you’re more likely to behave in ways that are consistent with your own core values.

P+S: Are there particularly productive techniques to bring people in power along with you in understanding bias, as opposed to putting them on the defensive?

Jennifer: When you threaten people, they shut down. When you shame people, they shut down. But one way of bringing people along is to help them understand that we’re all pulled in different directions by the situation we’re in. Even priests dedicating their lives to service were willing to abandon that core value if the situation called for it. Helping people to understand that is huge.

It’s also about showing respect for people, reaching for a connection with people. That’s how I have been able to survive in the policing context because you can’t go in—especially never having been a sworn officer—saying that you have the answers and that you know their job or you know what’s wrong. You have to try to understand their situation and what the world looks like from their perspective and try to reach them where they are and relate to them. The point is to use that relationship to help them think about things differently, especially when other people are being harmed. 

I won’t say that you should only use this strategy, because different times call for different approaches. I do the carrot thing better than the stick. Some people do the stick thing better than the carrot, and sometimes the stick is actually what you need. But when you’re trying to reform, when you’re trying to rebuild, when you’re trying to expand, when you’re coming to this voluntarily to do better, the better strategy I’ve found is, “We’re all in this together, trying to solve this together.”

P+S: One of the things that makes your book so powerful is the way you weave your story through the data. How has your research affected your own work, your family, your life?

Jennifer: They influence each other. In writing the book, I came to understand my own life in a way that I didn’t before—even seeing the research through that lens. For instance, I talk about getting arrested the day before my college graduation. I was pretty upset about that at the time, but it was so many years ago now that I hadn’t thought about it in a long time. I had initially written that chapter so it was all about the statistics, looking at the disparities in the criminal justice system, but both my book editor and agent were saying, “This is too dry. It’s one statistic after another.” They felt like I needed a story. I thought about my arrest.

The funny thing about integrating that story is that I made connections that I hadn’t before. I hadn’t connected that experience to the work that I’m doing now with others here at Stanford, studying footage from body cameras, where we’re analyzing officers’ language when they stop black versus white drivers, for example. A lot of what we found in that work is what I experienced as a young person in my 20s, being stopped and roughed up and arrested by police. A lot of it had to do with language. In my 20s, I thought, “Oh, we had this bad luck, we ran across this officer who was a bad apple.” But I later came to understand that this was a strategy that officers used around the country to fight crime. So I came to understand in the writing process that I was part of this larger phenomenon.

One last point about that. I talked to a number of people for the book, including Tiffany Crutcher, whose twin brother was shot and killed by the police a few years ago in Tulsa. Initially I was just trying to gather information. But when you interview people like that and hear their stories, you also experience their pain. I came to realize how powerful the science was in speaking to that pain. It’s not just that science explains things; science can soothe. I gave her information about these studies, and she was incredibly grateful. They helped steady her somewhat and make sense of what had happened in a way that led to relief. After I’d written the book, I sent it to her, and she read that chapter to her parents, and they cried together. Maybe there was a way in which they could understand what might have happened to their loved one, and that gave them some comfort. The science was giving them some path forward. I didn’t appreciate that before writing the book. The science can help people to make sense of their own stories.

Can Our Narratives Compromise Evidence-Based Management?

A critical issue is our narratives about the disparities we see in data. For example, I study racial and gender disparities. Our society can lead us to interpret those disparities in certain ways and to form narratives about what they mean. We build associations in our minds: “people like this do this,” or “I can expect a person like this to behave in this way.” But our narratives also serve to rationalize why those disparities exist. For example, in the US, we place a lot of stock in individual choice and independence, and so if you’re in a bad position, that’s on you. It has to do with the choices you’ve made.

Americans are less focused on structural issues or understanding other forces—social forces—that can move a person in one direction or the other. That’s our blind spot. Not all societies are like that, but we certainly are.

I did some research with a former graduate student, Rebecca Hetey. We were interested in how people think about disparities in the criminal justice system and why people are so punitive, supporting sometimes draconian policies. We were interested in the extent to which race played a role, and so we did a study where we showed people images of incarcerated individuals. We varied how extreme the racial disparities appeared—a prison population that was more black or less black.

We looked at how that influenced how participants thought about criminal justice policies and which ones they supported. We found that the more black they thought the prison population was, the more supportive participants were of punitive criminal justice policies. That was an eye-opener for social activists, because for a lot of them, the theory was that change would occur if you just let people know how bad things are. They would share statistics showing that even though African Americans make up less than 13 percent of the US population, they make up nearly 40 percent of the prison population, or that one in three African American men will spend time incarcerated.

They assumed this disparity would shock and motivate people to do something. But we found the exact opposite, and that has to do with the narratives people attach to the disparities. Participants didn’t react with, “Oh, something’s wrong with the system if you get these numbers. We need to reform these criminal justice policies and systems.” Rather, their narrative was that African Americans are criminal. Unconsciously, the data were interpreted as evidence of criminality, and so people became even more committed to the system that’s already there.
—Jennifer Eberhardt