Wird Voreingenommenheit durch Analytics gemindert oder verstärkt?

Technology   |   Alan Jacobson   |   Jun 30, 2020   |   Time to read: 7 mins

This year has brought the challenges of racism and bias into full light with the murder of another unarmed Black person, George Floyd, at the hands of police. We have now seen demonstrations in communities far and wide, with demands for action. But what can we do to help change the course of over 400 years of systemic racism? Is there data that can help us? Are there solutions that have been shown to work?

The problem of bias: It’s you and me

Every day, we make hundreds of decisions. Ensuring that our prejudices do not skew those decisions in inequitable ways is clearly difficult for humans. In 1995, renowned psychologist Anthony Greenwald and a team of researchers developed the Implicit Association Test (IAT), which has proven highly effective in unmasking implicit racism (that is, unconscious bias in judgments about people from different racial and ethnic groups) as well as other forms of unconscious bias.

Over 14 million online participants have taken the assessment, building self-awareness of their own racial biases and contributing to a large database that reveals rather shocking results: 70 to 75 percent of all Americans show implicit bias. Notes test author Greenwald, “Even people with the best intentions are influenced by these hidden attitudes, behaving in ways that can create disparities in hiring practices, student evaluations, law enforcement, criminal proceedings — pretty much anywhere people are making decisions that affect others.”

Objective approaches to eliminating racial and gender discrimination

Numerous studies highlight methods that have not worked, from exposing people to counter-stereotypical examples to encouraging people to think about their biases before making a decision. Simply becoming conscious of one’s biases has also not proven effective on its own.

One strategy that has worked is what Greenwald calls “discretion elimination.” It starts from the observation that people routinely make subjective judgments about others, from a teacher evaluating a student’s performance to a judge pronouncing a sentence to an employer making a hiring decision. Eliminating “discretion” means creating and rigorously applying objective criteria, an approach that has shown significant positive results.

Undoing these behaviors requires moving from a fixed mindset — the belief that we’re already doing the best we possibly can to build diversity — to one of openness and growth, where we can deeply understand, challenge, and confront our personal biases. (Harvard Business Review)

In the 1970s and 1980s, top orchestras moved to blind auditions, which conceal the identity of the performer so judges focus on musical talent rather than gender, race, education, or other criteria that have nothing to do with talent and technique. Interestingly, blind auditions were originally adopted in response to students’ complaints that auditions were biased in favor of graduates of top music programs such as Juilliard. The most notable impact of blind auditions, however, was on a different bias: gender bias. Over a period of approximately 25 years, the percentage of women in these orchestras shifted significantly: in 1970, female musicians made up only 5% of players in the top five U.S. orchestras, and by 1997 women represented 25% — still not adequate, but an improvement.

In other areas, we can collect data to determine whether biases exist that need to be reduced or eliminated. In HR, studies such as this National Bureau of Economic Research field experiment have shown that applicants with traditionally white-sounding names needed to send about 10 resumes to receive one callback, while those with typically African-American-sounding names needed to send around 15 resumes to get one callback. A number of companies now leverage data science and analytics platforms like Alteryx to remove identifying elements from applicants’ resumes in an effort to reduce opportunities for bias to enter the hiring process.
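
To make the idea concrete, here is a minimal sketch of field-level resume anonymization in Python. It is not Alteryx’s actual workflow; the field names and the simple dictionary representation are assumptions for illustration only.

```python
# Hypothetical fields that can signal race or gender; real systems would
# also need to scrub free text, which is considerably harder.
REDACTED_FIELDS = {"name", "gender", "photo_url", "date_of_birth"}

def anonymize(resume: dict) -> dict:
    """Return a copy of the resume with identity-signaling fields removed."""
    return {k: v for k, v in resume.items() if k not in REDACTED_FIELDS}

resume = {
    "name": "Jamal Washington",
    "gender": "M",
    "skills": ["SQL", "Python", "forecasting"],
    "years_experience": 7,
}
print(anonymize(resume))
# {'skills': ['SQL', 'Python', 'forecasting'], 'years_experience': 7}
```

In practice, free-text resumes require more care than this structured example suggests, since names, pronouns, and affiliations can appear anywhere in the body text.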

When bias reduction methods have the opposite effect

As much as I love and live by data, we can’t discount the human element in identifying and eliminating bias. In one study in Australia, a team of behavioral economists worked with more than 2,000 managers in the Australian Public Service. The study used randomized resumes, comparing a control group that saw applicants’ gender with a test group that saw resumes with gender disguised.

The researchers expected that disguising applicant gender would reduce gender bias (much like the blind auditions for orchestras), but were shocked to find that it had the opposite result. It appeared that the managers were already practicing a form of affirmative action, and when gender was disguised on the resumes, they could no longer apply that preference.

The role of artificial intelligence and machine learning in bias reduction

When leveraging artificial intelligence and advanced analytic methods, we must be careful that biased inputs don’t bias the outcomes. In many cases, models are built on historical data, and if those data include biases, the biases can propagate into future decision-making. In one now-famous “AI fail,” a tech company looking to automate initial resume screening built a model based on the characteristics of its historically “high-achieving” employees. The inputs were flawed, however: the tech industry is heavily male-dominated, so the company’s high-achieving employees were disproportionately male, and the model learned to favor male candidates accordingly.

Artificial intelligence doesn’t make moral judgments. It is not inherently biased, but the historical data and the creators of the model can be.
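
A tiny synthetic example can show the mechanism. The sketch below uses entirely made-up data (not the company’s model): it trains a classifier on historical labels that were themselves influenced by gender, and the fitted model ends up carrying a large weight on the gender feature.

```python
# A minimal synthetic illustration of how biased historical labels
# leak into a model's decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: a genuine skill score and a gender flag (1 = male, 0 = female).
skill = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Historical "high achiever" labels: skill matters, but past (biased)
# evaluations also favored men, so gender leaks into the label.
label = (skill + 0.8 * is_male + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, label)

# The model learns a large positive weight on the gender flag: it has
# encoded the historical bias, not just the skill signal.
print(dict(zip(["skill", "is_male"], model.coef_[0].round(2))))
```

The point is that the model is doing exactly what it was asked to do; the bias lives in the labels it was trained to reproduce.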

We have so much further to go

The implicit bias data from the Harvard project reveals that as many as three quarters of Americans show implicit bias. As human beings, as analysts and data scientists and business leaders, as citizens of a country and a worldwide community, we have much work to do. Analytics can certainly be leveraged to help reduce and eliminate biases from processes, but only if the humans who are creating the solution work toward that end and understand how bias is getting into the process in the first place.

The biases we see, and the many we do not, truly impact everything we do. Although the examples shared here relate to hiring practices, collecting and examining data related to systemic racism or sexism can be useful in many domains of a workplace. Once hired, how do traditionally marginalized employees fare in terms of performance reviews, promotions, job satisfaction, and turnover? Where might bias be shaping these outcomes? It is important that we measure where we are on this journey with objective metrics and work to implement actions that change the outcomes.
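
As one illustration of what an objective metric might look like, the sketch below compares promotion rates between two groups with a two-proportion z-test. The column names and counts are hypothetical, and a real analysis would control for tenure, role, and other factors.

```python
# A hedged sketch of one objective check: do promotion rates differ
# between two groups more than chance would explain?
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical HR data: 200 employees per group, 1 = promoted this cycle.
df = pd.DataFrame({
    "group":    ["A"] * 200 + ["B"] * 200,
    "promoted": [1] * 60 + [0] * 140 + [1] * 40 + [0] * 160,
})

counts = df.groupby("group")["promoted"].agg(["sum", "count"])
print(counts.assign(rate=counts["sum"] / counts["count"]))

stat, p = proportions_ztest(counts["sum"], counts["count"])
print(f"z = {stat:.2f}, p = {p:.4f}")  # a small p suggests a real disparity
```

A disparity flagged this way is a starting point for investigation, not proof of bias on its own, but it turns a vague concern into something measurable and trackable over time.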

If you have ideas about how we can help make a difference with analytics, please reach out; I would love to hear your thoughts. And for those interested in learning more about racism, sexism, and other forms of explicit and implicit bias, here is a reading list you may find useful:

It is through understanding our biases that we can work to change the outcomes.

Reprinted with permission from Tabor Communications.
