Reducing Bias in the Hiring Process
Sep 14, 2020
TRANSCRIPT of Myra Norton, President/COO of Arena Analytics, explaining how the machine learning technique of adversarial networks reduces bias in the hiring process for Arena’s clients.
I’m Myra Norton. I’m the president here at Arena Analytics. I’ve been with Arena since before there was an Arena.
We started this company with a pretty big mission in mind – we want to rewire the labor market around outcomes. What does that mean? Well, it means we want to change the way people get hired. We want that process to happen free of bias.
And what drives that mission are two foundational beliefs. The first is that talent is equally distributed, but opportunity, often, is not. That fact, in and of itself, points to bias that exists in the way people get hired today.
The second foundational belief is that, if we use data to match individuals to jobs where they will thrive and achieve outcomes, we can not only reduce that bias, but also build stronger organizations. We can build workforces that deliver better financial outcomes, better operational outcomes, and more effective delivery on mission.
I want to talk to you about the impact we’re having on diversity. The way we think about measuring ourselves is this: we want to know that the candidates Arena predicts will succeed mirror the communities and applicant pools in which they live and work.
Out of the gate, from the first models we built, we found that our mirroring exceeds the thresholds set by governing bodies like the EEOC and the OFCCP. But we wanted to do more. And we thought we could do more.
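The exact mirroring metric isn’t spelled out here, but the comparison Myra describes is between the demographic make-up of the candidates a model recommends and the make-up of the applicant pool, and the regulatory thresholds she mentions are in the spirit of the EEOC’s four-fifths rule for selection rates. A minimal sketch of that kind of check, with entirely hypothetical groups and data:

```python
from collections import Counter

# Hypothetical applicant records: (candidate_id, demographic_group,
# whether the model recommended the candidate). Real data would come
# from an applicant tracking system.
applicants = [
    ("a1", "group_x", True), ("a2", "group_x", False), ("a3", "group_x", True),
    ("a4", "group_y", True), ("a5", "group_y", True), ("a6", "group_y", False),
    ("a7", "group_y", False), ("a8", "group_x", True),
]

pool = Counter(group for _, group, _ in applicants)
recommended = Counter(group for _, group, rec in applicants if rec)

# "Mirroring": does each group's share of recommended candidates track
# its share of the applicant pool?
for group in pool:
    pool_share = pool[group] / len(applicants)
    rec_share = recommended[group] / sum(recommended.values())
    print(f"{group}: {pool_share:.0%} of pool, {rec_share:.0%} of recommendations")

# EEOC four-fifths rule: each group's selection rate should be at least
# 80% of the rate of the group selected at the highest rate.
rates = {group: recommended[group] / pool[group] for group in pool}
top_rate = max(rates.values())
for group, rate in rates.items():
    ratio = rate / top_rate
    verdict = "ok" if ratio >= 0.8 else "possible adverse impact"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({verdict})")
```

Arena’s actual measure may differ; the sketch only illustrates the kind of comparison behind a statement like “our mirroring exceeds regulatory thresholds.”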
So, in full disclosure: I’m a mathematician by training, and I do geek out a little on this stuff, so I’m going to try not to go too deep. But we have a team of great data scientists and engineers who went to work on this problem and developed a really novel approach that has enabled us to go even further on diversity.
That approach — I’m just going to give you a simple analogy — basically involves taking two mathematical models and letting them duke it out. The first model is a predictor: it’s designed to predict the outcomes an individual is likely to achieve in a particular job, in a particular department, on a particular shift, inside your organization. The goal of that model is to be as accurate as possible.
The second model is called a discriminator. That model’s goal is… well, it’s kind of nefarious. That model is trying to pull out as much demographic data as possible. It wants to be able to say, “I know that this individual is male or female” or “Here’s their age” or “This is their race” or “This is their socio-economic background.” That’s the goal of this model.
We pit those two against each other, and we end up with a combined model that essentially optimizes the predictor, subject to the constraint that the discriminator cannot discern any demographic information about an individual. When we do that, our mirroring moves from over 80% to upwards of 95%. We’re approaching 100% mirroring of the communities and applicant pools in which individuals live and work.
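Arena hasn’t published its models, but a minimal sketch of the kind of adversarial setup described above, with hypothetical architecture, dimensions, and data, might look like this in PyTorch. An encoder feeds both a predictor head, trained to forecast a job outcome, and a discriminator, trained to recover a demographic attribute; a gradient-reversal layer flips the discriminator’s gradients on their way back into the encoder, so the shared representation is pushed to carry no demographic signal. In practice, the hard constraint becomes a penalty term in a combined loss.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Negated, scaled gradient for x; no gradient for lambd.
        return grad_output.neg() * ctx.lambd, None

# Toy dimensions -- all hypothetical.
n_features, hidden, n_demo_groups = 32, 16, 4

encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
predictor_head = nn.Linear(hidden, 1)        # predicts the job outcome
discriminator = nn.Sequential(               # tries to recover demographics
    nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_demo_groups)
)

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(predictor_head.parameters())
    + list(discriminator.parameters()),
    lr=1e-3,
)
outcome_loss = nn.BCEWithLogitsLoss()
demo_loss = nn.CrossEntropyLoss()

# Fake batch: application features, outcome labels, demographic labels.
x = torch.randn(64, n_features)
y_outcome = torch.randint(0, 2, (64, 1)).float()
y_demo = torch.randint(0, n_demo_groups, (64,))

for step in range(100):
    opt.zero_grad()
    z = encoder(x)
    # Predictor: be as accurate as possible on the outcome.
    loss_pred = outcome_loss(predictor_head(z), y_outcome)
    # Discriminator: try to identify demographics from z. Gradient reversal
    # means the encoder is simultaneously trained to make that impossible.
    z_rev = GradReverse.apply(z, 1.0)
    loss_demo = demo_loss(discriminator(z_rev), y_demo)
    (loss_pred + loss_demo).backward()
    opt.step()
```

A gradient-reversal layer is one common way to implement this kind of adversarial constraint; alternating updates between the predictor and the discriminator is another. Either way, the combined objective trades predictive accuracy against the discriminator’s ability to recover demographic information.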
But that is not even the most exciting part. For me, the most exciting part is this: we knew that we might take a hit on the performance of that predictor. But do you know what actually happened? The models are stronger. We can actually predict more effectively by removing that bias. And to me, that is incredibly fulfilling. And it’s why I and the entire team of geeks here at Arena get excited to work with organizations like you that are committed to building diverse workforces and to working most effectively on your mission.