Discrimination is surprisingly difficult to root out. Implicit association tests reveal that prejudice lingers in the subconscious. Informing people about their biases won't necessarily stop them from committing the same mistakes over and over again.

As Taylor Swift put it, “haters gonna hate, hate, hate, hate, hate.”

Prejudice operates like muscle memory. It takes constant vigilance to catch our (often inadvertent) moments of sexism or racism. It takes hard work. It takes practice.

“Put bluntly, changing behavior means work that the vast majority of us are not motivated to do,” writes Harvard professor Iris Bohnet in her new book, “What Works: Gender Equality by Design.”

Every year, Bohnet says, companies spend $8 billion on diversity training despite scant evidence that these brief workshops do any good. They might even backfire — worsening discrimination by reinforcing stereotypes or by making people complacent about their prejudices.

What if we gave up trying to change people’s minds? Bohnet, a behavioral economist at Harvard’s Kennedy School of Government, makes a provocative proposal: Instead of striving to make everyone less sexist, we should change the system so it’s harder for sexism to thrive.

To prevent discrimination in hiring, why not hide the names of applicants? If female employees are reluctant to ask for flexible work schedules, why not make flex time the default for everyone? And if women are hesitant to guess on standardized tests, why not eliminate the penalty for guessing?

Bohnet’s book is a collection of these ideas and the research that underpins them. Recently, we had a chat about why it’s important to fix our institutions, and why that might be easier than trying to fix our bias-prone minds.

Tell me more about why you wrote this book.

Iris Bohnet: I am a behavioral economist, and that's really where the thinking started for the book. Behavioral scientists for a very long time have been trying to understand biases. And not just biases related to demographic characteristics like gender — cognitive biases in general.

What we’ve found is that biases are very stubborn. They are very hard to overcome simply by trying to change mindsets.

For instance, people tend to interpret information in a self-serving way. It’s called the “self-serving bias.” A number of researchers have been trying to tackle this bias, which often leads people to be too optimistic about their own bargaining positions. Often it causes negotiations to end in an impasse. When I taught negotiation [at Harvard], I was struck by how difficult it was to get people to understand this.

Informing people that this bias exists does very little. If it does anything — and this tells us something about the human mind — it backfires. People become more aware of the bias in others. They say “Oh, now I see why my counterpart was negotiating so assertively.” But they don’t recognize the problem in themselves.

Simple awareness itself doesn’t do the trick. Researchers have tried a number of different things. What eventually worked was for people to force themselves to write down counterarguments to their own beliefs. You have to have a little bit of your brain always playing the devil’s advocate. Write down five reasons you might be wrong. That helps people de-bias themselves the best. It’s super hard to do and only works if people are extremely conscious of the kinds of tricks their minds play.

The argument you make in the book is that we should change the world so it’s harder for our biases to harm other people.

Bohnet: Yes, so one intervention that has huge potential for organizations is blind evaluations. We have very good evidence that they work.

In the 1970s, the major symphony orchestras in the United States began making musicians audition behind a curtain. They found that this made women 50 percent more likely to advance to later rounds. It contributed to an increase of female musicians in major orchestras in the U.S., from 5 percent in the 1970s to almost 40 percent today.

This example is important to me for two reasons. First, it drives home the idea of unconscious bias. These weren’t bad selection committees. They were convinced that they cared only about the music, not whether somebody looked the part. Still, they fell prey to their biases.

Second, the curtain represents a design innovation. It doesn’t try to change people’s mindsets. It just makes it easier for our biased minds to get things right.

It would be very easy for most organizations to blind themselves to people’s demographic characteristics, at least in the early rounds of hiring. They could really focus on people’s abilities, their performance and their potential to do the job well.

To me, this is low-hanging fruit. Blind evaluations are a powerful innovation that wouldn’t be expensive to implement but could have huge benefits.

As a caveat though, I would point out that sometimes companies have very gender-stereotypical criteria for their jobs. Or they evaluate candidates in ways that are facially neutral but tend to prefer men. Maybe they’re asking, “How proactive is this person?” Or “How likely are they to speak out?” These are traits that are very gendered.

Bohnet: Yes, that’s an excellent point you’re raising. Blind evaluations don’t solve everything.

The problem starts even with the language we use in job descriptions. There are experiments showing that, for the same job, men are more likely to apply when the advertisement contains more stereotypically male adjectives.

I’m actually quite optimistic that big data will help us understand this better. There are companies, such as Google, that use data to identify which interview questions are predictive of future performance. You can see which questions favor men, and which questions favor women. Ideally, we’d use questions that don’t have a gender bias at all.

But even better than that would be to rely much less on interpersonal interviews and to rely more on work-sample tests. This is empirically proven. The best predictor of future performance is not an interview but a test that is very closely related to the kind of work this person will be doing on the job.

For me as a professor, when I hire a research assistant, it’s actually not such a hard task. I can give the person a problem to solve. I can have them do a data analysis, run some regressions and write up a report. I can actually see what kind of a job the person does.

That’s the kind of work sample test that I suggest we should use much more often.

Still, companies aren’t going to get rid of interviews, right? So how do we make them better?

Bohnet: First, companies should completely abolish unstructured interviews — these are conversations where we just ask random questions and interviewers are free to talk about anything.

A candidate might share my nationality, be a fan of the same sports team, or have gone to the same high school, and then I will be naturally inclined to like that person and think that person would make a great hire, even though I know that being a synchronized swimmer, for instance, has nothing to do with being a great economist. But our minds are not able to tease apart the useful information from the irrelevant information.

What the research suggests, basically, is that unstructured interviews are noise.

What’s interesting is that we’re seeing the opposite happening in some places. In Silicon Valley, there’s this toxic trend of emphasizing “cultural fit” — looking for people you want to hang out with in the office and play ping-pong with. And they’re very transparent about this. They’re very clear that cultural fit is something they care about. But this seems like an easy way to be inadvertently discriminatory.

Bohnet: Yes, I think there are two ways to think about cultural fit. Fundamentally, I completely share your concerns. Cultural fit is where bias creeps in.

Lauren Rivera [a sociologist at Northwestern University] did a number of very interesting studies where she asked interviewers what they were looking for in a candidate.

And people would regularly say, "I use myself as a measuring rod, and I kind of look for people like myself because that’s all I have to go by." That’s almost a literal quote from her work, and it very directly speaks to your question. How are we going to measure cultural fit without being biased? That’s something I’m very concerned about.

Now, here’s just a little bit of hope. What companies really care about is whether somebody is going to succeed and be productive in the company. So the right way to think about this is to use data. Don’t just tell me that cultural fit is important. If you need to consider it, at least measure to see if cultural fit is in any way predictive of a person’s future success at your company.

We’ve been talking a lot about discrimination at the hiring level, but how can we also deal with bias in promotions and raises?

Bohnet: There are two practices that companies should be especially concerned about.

Commonly, when employees are being reviewed, they have to first evaluate themselves. They share their self-evaluation with their manager, and only afterward does the manager write his or her evaluation of the employee.

What we found — and this is no surprise to behavioral economists — is that managers are swayed by these self-evaluations. It’s called anchoring. In a negotiation, it’s hard to forget the opening offer.

A lot of evidence suggests that women are less self-confident than men, and if that is true, women may give themselves lower ratings than men do, which will bias their managers against women. I haven’t come across any studies showing that sharing self-evaluations does any good.

The second thing we have to be concerned about is that many companies evaluate their employees in two ways — according to their past performance and their potential. Performance is more easily measurable. But evaluating potential lends itself to a lot of bias. Leadership is generally associated with men, so we don’t as readily picture women climbing the career ladder, and as a result we’re less likely to see potential in women than in men.

The point about self-confidence is really interesting. In the book, you mention Muriel Niederle at Stanford, who has done a lot of work showing that women tend to be less competitive and are often less likely to take risks.

Bohnet: Yes, we’re good friends! Here’s a true story. A doctoral student of mine, Katie Baldiga Coffman, came to me and wanted to study gender differences in self-confidence and risk-taking — she was influenced by some of Muriel’s work.

She came up with the idea of studying multiple-choice test-taking. She brought in people and had them do SAT-type questions, multiple choice questions where there were five possible answers. You could fill in an answer or skip the question.

She used the same kind of incentives that the [old] SAT used, which is that you would get a point for every right answer and a quarter point would be deducted for every wrong answer. Rationally, if you can exclude at least one of the answers, you should guess.
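The claim that guessing becomes rational once you can rule out one answer follows from a quick expected-value calculation. This small sketch (my own illustration, not from the study) works it out for the old scoring rule of +1 for a right answer and −1/4 for a wrong one:

```python
# Expected score from guessing on the old SAT:
# +1 point for a correct answer, -1/4 point for a wrong one.

def guess_expected_value(options_remaining):
    """Expected score from guessing uniformly among the
    answer choices you haven't ruled out."""
    p_correct = 1 / options_remaining
    return p_correct * 1 + (1 - p_correct) * (-0.25)

# With all 5 options in play, guessing is exactly break-even:
print(guess_expected_value(5))  # 0.0
# Rule out just one option and guessing pays off on average:
print(guess_expected_value(4))  # 0.0625
```

Skipping always scores zero, so the moment a single answer can be excluded, guessing strictly beats skipping in expectation — which is why systematically skipping leaves points on the table.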

Then she had them take another test, in which she forced everyone to answer all the questions. So she could measure how much people knew.

And what she found was that, between equally able people, men are much more willing to guess on a question, and women are much more likely to skip it.

What’s interesting is that the new SAT [which debuted in March] is now gender-debiased: they took out the penalty for wrong answers. In effect, we’ve legalized guessing, and now everyone feels entitled to guess.

I interviewed Muriel [Niederle] recently about her research, and I asked her: So how do we get women to be more competitive? How do we encourage them? And she said something that really struck me. She said, well why do we need to do that? Why is competitiveness something that we have to value so much?

Bohnet: Muriel and I are completely aligned on this. Why don’t we de-bias the system? Why don’t we change the way we do things?

The very same logic applies to negotiations. Women are less likely to negotiate. They’re less likely to ask for things. But does everything need to be negotiable?

For example, at the Kennedy School, faculty members used to have to ask for parental leave. If they didn’t ask, they wouldn’t get it. We’ve changed that policy now, partly because of my colleague Hannah Riley Bowles’s research, which shows that women don’t ask.

Why did women have to ask for parental leave? Now we’ve made it the default. Automatically, you get parental leave unless you opt out of it.

We have to think harder about how we do things. What are our norms? How do we run our organizations? Are they inadvertently biased against certain parts of the population?