A few times a month, in an unmarked white office building on Long Island, a group of Nassau County government employees discuss which children they should separate from their parents. The meeting involves a caseworker, supervisors, and attorneys reviewing notes from the caseworker’s investigation into child maltreatment allegations against the parents. If the group makes the difficult decision that a child is not safe at home, the attorneys will drive to the county courthouse down the road to argue for a removal before a family court judge.
Most of the professionals involved in this decision will be white. If the judge approves the removal, the foster parents who take in the child will likely be white, too. But for years, even though only around 13 percent of the county’s overall population is black, black kids have made up half or more of the Nassau children deemed in need of foster care placement. It’s part of a familiar story: decades after the height of the civil rights movement, extreme racial inequalities persist.
“It was shockingly bad,” said Maria Lauria, the director of children’s services for Nassau County, referring to the disparity. “Personally, I would use the word ‘ashamed.’ At the time I wondered, ‘Why is it so bad?’ We had to try something.”
So, in 2010, her agency tried something. What happened next is a less familiar story: what they tried worked.
When a government caseworker substantiates an allegation of child abuse or neglect, they typically present their notes to a group of superiors before seeking a judge’s order. Those notes detail visits to the child’s home and interviews with family members and with doctors, psychiatrists, or other investigators. The notes also include demographic details about the family—including race or ethnicity. But beginning in 2010, Nassau County took that part out. In its new “blind” removal meetings, information about race, as well as names and addresses, which could provide clues, is redacted by the caseworker. In other words, the people making these decisions are color-blind.
The results of the experiment were dramatic. In 2011, black children made up 55 percent of removals. In 2016, the number was down to 27 percent—still disproportionately high, but an unprecedented drop in such a large county-run foster system.
The breakthrough made waves. Governor Andrew Cuomo’s administration praised it in an annual report, and other counties in the state started calling Nassau for advice. A team of researchers from Florida State University conducted a study on the experiment that turned into a TED Talk that has now been viewed more than a million times. Child welfare administrators from around the country are visiting soon to watch a demonstration.
It is unusual for efforts to address racial disparities to make an actual impact. Decades of research have painted a rich picture of the political, social, and psychological origins of systemic racism, but our understanding of what actually works to overcome it has not come close to catching up. “By many standards, the psychological literature on prejudice ranks among the most impressive in all of social science,” wrote Elizabeth Levy Paluck and Donald P. Green, a pair of prominent scholars, in 2009. But, after reviewing nearly 1,000 studies on various interventions designed to reduce prejudice, they concluded that the effects of the vast majority of interventions were largely unknown—an observation Paluck has confirmed in more recent research. Yet here, in Nassau County, was a good result from the field.
Given the success, it might seem surprising that more institutions have not attempted to use “blinding” techniques to achieve more racial equality. Withholding information about race (or gender, age, and so on) from decisionmakers is one of the oldest proven ways to circumvent discrimination. But the concept has largely fallen out of favor among racial justice advocates, in no small part because it has been co-opted by conservatives as a way of opposing any policy that takes race into account in an effort to combat racial inequality.
In its place has risen a new approach: implicit bias training. Interventions that purport to address implicit bias—an academic term for the prejudice lurking in our subconscious, shading our reactions to people of different races, genders, ages, nationalities, and so on—have become wildly popular, inspiring an entire industry that caters to businesses, schools, and government agencies. Most famously, last year, after a video went viral of Starbucks employees in Philadelphia calling the police on two black patrons who were minding their own business, the company announced that it would shut down more than 8,000 stores across the country for a one-day implicit bias training session involving more than 175,000 employees.
But while the existence of unconscious prejudice is well established, and indeed intuitive, the dirty secret is that there’s no evidence that implicit bias trainings do anything to mitigate it. It’s a technique with a lot of buzz, but little experimental support.
“There are two ways to think about implicit bias,” said Paluck. “One is to say we should grab it by the horns and control our less conscious habits and tendencies that we learn from society.” That’s implicit bias training. “The other is to take away information that would activate it.” That’s blinding. “There is better evidence about the takeaway stuff that triggers implicit bias. But people have been more interested in implicit bias—it’s very politically palatable to talk about bias and disparities as something out of our control and our personal will and something everybody shares.”
The success of the Nassau County experiment suggests that people working to reduce racial disparities in a variety of domains should be taking another look at color blinding. If implicit bias is fundamentally an unconscious reaction to a certain stimulus, then it makes sense to remove that stimulus whenever it would make a difference. Do renters need to know the race of people inquiring about Airbnb rentals? Should charging decisions by prosecutors, or school suspension or expulsion decisions by a principal, receive an independent, blind review before getting filed? Until the immature science around bias “training” grows up, or scholars convince policymakers to pursue large-scale integration, or Congress gets around to taking a serious look at reparations, organizations should try blinding more decisions. But first the concept of color blindness must be taken back from the conservative movement that has done so much to discredit it.
“Our Constitution is color-blind, and neither knows nor tolerates classes among citizens,” wrote Supreme Court Justice John Harlan in his famous dissent in Plessy v. Ferguson, the 1896 case that upheld racial segregation. By “color-blind,” Harlan meant that the Equal Protection Clause of the Fourteenth Amendment—which provides that no state may “deny to any person within its jurisdiction the equal protection of the laws”—should be read to forbid the government from singling out black people for discriminatory treatment.
It’s an obvious position, but it would take another half century to become law. In Brown v. Board of Education, in 1954, the Supreme Court adopted Harlan’s approach, paving the way for large-scale desegregation. But as the liberal Warren Court gave way to a more conservative majority, the concept of color blindness shifted from being used to dismantle racial discrimination to being used to thwart efforts to address it. Perhaps the definitive example of the shift was the Court’s ruling in a 1978 case striking down the affirmative action policy at the University of California, Davis, medical school, which set aside a fixed number of seats each year for underrepresented minority applicants. According to the majority opinion by Justice Lewis F. Powell Jr., the problem with the policy wasn’t that it kept the plaintiff, Allan Bakke, a white man, from getting in. Rather, the “principal evil” of UC Davis’s system was that it denied Bakke “individualized consideration” for one of the reserved spots.
This set the tone for how conservatives would talk about racial discrimination moving forward. It was wrong not because of the substantive consequences it had for historically disadvantaged groups, but rather because of something inherent in the act of classifying someone according to their race, period—even a white person. Over the years, conservative Supreme Court majorities have applied this reasoning repeatedly to roll back attempts to counter the historical impact of slavery and segregation. This version of color blindness may have reached its purest expression in a 2007 opinion by Chief Justice John Roberts striking down school desegregation efforts. “The way to stop discrimination on the basis of race,” he intoned, “is to stop discriminating on the basis of race.”
This helps explain why color-blinding strategies like Nassau County’s haven’t gained wider purchase among progressives: they’ve been forced to fight a harmful version of the idea in court and in the popular imagination for four decades now. And they’ve seen how racial disparities can be “produced and maintained by colorblind policies and practices,” as Traci Schlesinger, a sociologist who studies racial disparities in criminal justice at DePaul University, has written. The mass incarceration of black men since the 1970s, for example, was accomplished using superficially race-neutral criminal laws and procedures.
The conservative co-opting of color blindness also helps explain the growing emphasis on implicit bias. Its fundamental insight is that color blindness is a mirage: we attribute characteristics to people based on their race and other group identities even when we think we’re being impartial.
The term “implicit bias” was coined in the late 1990s when a team of social psychologists created the Implicit Association Test (IAT). It was designed to measure unconscious bias by having test takers offer a quick positive or negative assessment of a series of images on a computer screen, sometimes of black and white people, sometimes men and women, and so on, depending on the type of bias being measured. The test quickly gained wide attention thanks to breathless write-ups in the media, including in Malcolm Gladwell’s best seller Blink. “The IAT is more than just an abstract measure of attitudes,” he wrote. “It’s a powerful predictor of how we act in certain kinds of spontaneous situations.” Nearly twenty million people have taken an implicit bias test on a website designed by Harvard’s Project Implicit.
Meanwhile, “implicit bias” supplanted “diversity training” as the rhetoric of choice for anyone who wanted an uncontroversial way to broach the topic of institutional disparities. The Obama White House released multiple advisories and task force reports in its second term on the mitigation of implicit bias in hiring, technology, policing, and school discipline. Organizations like Fair & Impartial Policing exist solely to train away implicit bias among cops. The organization’s website explains that it is the “#1 provider of implicit-bias-awareness training for law enforcement in North America,” and that its approach is “based on the science of bias, which tells us that biased policing is not, as some contend, due to widespread racism in policing.” The Department of Justice has offered the program to more than 2,600 local police departments since 2010, and announced in mid-2016 that its 28,000 employees would attend the program’s training sessions.
I recently was invited to take an online implicit bias training course designed for employees in New York City’s child welfare system. The course, which is mandated by a 2017 law signed by Mayor Bill de Blasio, started with a ten-question pre-quiz on the science and social dynamics behind prejudice. (“Lifelong processing of myths, misinformation, stereotypes and oppressive views that society communicates about particular targeted social groups can lead to internalization of superiority. True or false?”)
Then I clicked through a text-heavy, narrated slide show with videos of unfortunately common workplace incidents—an oafish white man telling a black female colleague that bias trainings are stupid and pointless, a fussy white employee suggesting that a black colleague identifies too closely with the troubled black youth in their care. The next slide asked whether I agreed that these were articulations of biased thinking, or microaggressions, by the white coworker. Throughout the presentation, there were pit stops to explore the recent scientific consensus that subtle racist remarks, conscious or not, can cause psychological and professional harm to minorities in the workplace.
I didn’t disagree with anything I heard or read in the training—but I also didn’t get the sense that it would change the mind of anyone who wasn’t already sympathetic to the cause of racial justice. And it made me worry that people who take the training could walk away thinking that unconscious bias was the extent of the problem. It provided no context about the history of explicitly racist, twentieth-century education, housing, and employment policies that created the oppressive conditions in black neighborhoods where the child welfare system has always been most involved in the lives of families.
Not all trainings are created equal, of course. They vary by context, content, and thoroughness. Starbucks’s training, which was designed by top experts, included a history of racial discrimination in public accommodations. Google’s explanatory portion included examples like the venture capital gender gap (only 11 percent of founders who get venture backing are women). Depending on the number of people in the training and the time available, there are often discussions in which people can share how they feel bias has impacted them in the workplace, or moments when they’ve caught themselves making biased assumptions. The hope is that recognizing and discussing these moments will help people correct their own future behavior.
But there’s essentially no evidence that it does. “I can name all the rigorous experiments [on implicit bias training] on one hand,” Calvin Lai, a researcher with Project Implicit, recently told the Daily Beast. “That’s not saying that they don’t work and that other diversity-type training is better. It’s just that we don’t know and that there isn’t enough research.” Lai and seven coauthors—mostly proponents of implicit bias science—wrote that while “implicit bias can be changed,” they found “little evidence that changes in implicit bias translated into changes in explicit bias and behavior, and we observed limitations in the evidence base for implicit malleability and change.” Even Anthony Greenwald, one of the creators of the IAT, recently told VICE News, “No one should be presenting themselves as being able to offer education or training that will undo or eliminate implicit biases.”
Indeed, there’s some evidence that the trainings might even make things worse. One peer-reviewed study found that making people aware of the prevalence of stereotyping could, paradoxically, make them more likely to think and act based on stereotypes. Another found that people who received messages about why they shouldn’t be prejudiced were then more likely to act in a prejudiced way than a control group.
“My wife put it best: Can we just call it racism?” said Kamau Bell, the comedian and host of CNN’s series United Shades of America. Bell, who is black, took an interest in the implicit bias rhetoric after a waitress tried to shoo him away from a café in Berkeley, where he had stopped to say hello to his white wife and her friends—an episode he memorably recounted for This American Life. The café owner soon proclaimed that he would be instituting implicit bias training, but Bell wasn’t impressed. “There’s an effort to, in an academic way or corporate way, relabel things,” he told me. “If people want to get into the idea of antiracism training, they have to create a space where it’s okay for people to have their feelings hurt—especially white people. People want to start on a hug and end on a hug.”
Bell saw echoes of his experience after the Philadelphia Starbucks incident. “When Starbucks said they were going to do something, I became immediately suspicious,” he said. “The trainings are only pointing out problems, not solutions.”
Other experts have similar concerns. “Even the folks demanding better research on implicit bias are missing the point: You usually have to convince people racial disparity is worth addressing at all before you start talking about something like implicit bias,” said Michael Finley, the chief of strategy and implementation at the W. Haywood Burns Institute, which works to address racial disparities in juvenile justice, education, and child welfare nationwide. Finley tries not to emphasize implicit bias too much, since so many of his clients either are explicitly prejudiced or hold insensitive or cynical—but very much conscious—attitudes about why people of color experience hardship. “The hardest part of the job,” he said, “is just getting white people to use the word ‘racism.’ ”
While implicit bias training is unproven, blinding procedures have a track record going back decades. In the 1950s, the Boston Symphony Orchestra adopted a revolutionary approach to improving its gender balance: administrators erected screens onstage so that judges could not see the musicians auditioning. Many orchestras eventually followed suit, and the nation’s top ensembles went from 6 percent female in 1970 to 21 percent in 1993. A classic 2000 study by the economists Claudia Goldin and Cecilia Rouse analyzed decades of audition records and found that the blind auditions likely deserved the credit for the gender shift.
More recent experiments have expanded on these insights, showing how broad the potential applications for blinding could be in hiring. In a 2003 study, researchers sent out nearly identical resumes, half with stereotypically white names and half with stereotypically black names. The white-sounding names were 50 percent more likely to get a callback. In 2014, researchers asked law firm partners to evaluate identical writing samples by “black” and “white” lawyers. Not only did the fictitious white lawyer receive better qualitative reviews, but the partners found more errors in the black lawyer’s sample—including twice as many spelling and grammar mistakes. Mere awareness of race affects seemingly objective aspects of evaluating candidates.
But, despite their clear promise, blinding procedures in hiring have yet to take off widely. The tech start-up Slack has drawn positive news coverage for increasing its ranks of black and Hispanic coders in part by relying on a blind coding evaluation, but it remains the exception rather than the rule.
Employment is far from the only domain that could benefit from the strategic use of blinding. Gig-economy platforms are rife with opportunities for discrimination, unconscious or not, that could be eliminated simply by hiding certain information from users. A 2015 study on Airbnb, for example, confirmed what most black renters already knew: it’s much harder to book a rental with a black-sounding name. The company responded by promising to fix the problem, but in 2018 the researchers reported that little had changed. “Truly fixing discrimination at Airbnb will require more far-reaching efforts, likely including preventing hosts from seeing guests’ faces before a booking is confirmed,” they concluded.
School discipline is another domain where blinding holds potential. A seminal 2005 report by researchers from Yale University’s Child Study Center found that black preschoolers were expelled from pre-K programs roughly twice as often as Latino and white kids. The same disparity exists at higher grade levels, and, as a 2016 federal policy guidance pointed out, these kinds of trends have “remained virtually unchanged over the past decade.” Even as schools have significantly decreased these types of discipline procedures overall, the racial disparities remain.
Education researchers told me, however, that while many K–12 schools are talking about implicit bias, few are talking about any of the more empirically validated strategies for reducing racist outcomes. Why not require a blind review by an administrator before expelling someone? The most common answer I heard to questions like this—across conversations with dozens of scholars who study housing segregation, school discipline, criminal justice, and corporate hiring and diversity practices—was that change is difficult and complicated in these overburdened systems. But when children’s futures hang in the balance, that’s a sorry excuse.
Blinding is not the only alternative to implicit bias training backed by a strong base of research. In one project, Finley’s group worked with the Baltimore district attorney’s office to reduce the racial disparity in court hearing attendance. Instead of using a robotic-sounding staffer from the prosecutor’s office to call defendants, they created a more sympathetic, less threatening script for a social worker to use. More people started showing up for their court hearings. Elizabeth Levy Paluck won a MacArthur Fellowship in 2017 for her research showing the power of getting social media influencers to post anti-bullying messages. She also coauthored a recent piece reviewing the theory that contact between different groups reduces prejudice, known as the “contact hypothesis.” She concluded that sustained positive contact, under the right conditions, stands a decent chance of reducing racial prejudice.
Clearly, blinding isn’t close to a total solution—in any context. Orchestras still struggle with gender gaps. Even in Nassau County, color blinding only reduced the racial disparity from horrific to bad. And when it comes to racial inequality, no short-term intervention, whether it’s social media or contact or color blinding or anything else, can on its own address the enormous disadvantages imposed on minorities in America, especially African Americans, thanks to centuries of institutionalized racism.
But there’s no reason to let the opponents of racial justice maintain their hegemony over color blindness. Yes, in the wrong hands, the concept can be used to justify results that set back the cause of racial equality, as when California banned state universities from considering race in college admissions. But blinding doesn’t have to be in conflict at all with proactive efforts to increase diversity. The key might be to take race out of the equation at the evaluation step. Everything we know about implicit bias suggests that even the most well-meaning evaluators are unconsciously docking minority applicants. An organization looking to increase diversity may find itself with more qualified minority applicants to choose from if it purposely ignores their race until the final stages.
But that requires accepting that there is information we can’t be trusted with. Lauria said that her agency’s color-blinding experiment faced internal resistance—perhaps, she explained, because it suggests that we’re lost causes, helplessly prey to our most primitive prejudices against people we don’t identify with. It’s uncomfortable for well-intentioned people to learn that they’ve been part of the problem, Lauria said. “People didn’t want to think something like this would work.”