Achim Preuss, Nick Martin and Jack Porter discuss the ways human bias can unintentionally influence AI algorithms in hiring:
"We're definitely very black box. We don't even know ourselves why we have certain biases. With AIs, you typically at least know that. You know why it's got its bias. You probably programmed it in there, and it's a consistent bias, where I think a lot of times humans are very inconsistently biased and we have no idea why we come up with that bias.
Unfortunately or fortunately, depending on how you look at it, those algorithms are developed by humans, and whether or not we intend to, we have those subconscious biases built into us, even if it's just because I am who I am and I've been successful, so I have this sort of image in my mind of what success means.
Those criteria are part of the data that trains the algorithm over time. Well, that algorithm's been trained to focus on white males from Stanford. That doesn't really serve the purpose of trying to increase the diversity, as an example, within an organization. So humans have to constantly look at this. The more the system has learned about what bias means and how to prevent bias, the better it is.
As soon as we get closer to perfection of the measurement, there is no bias. The bias then comes from the decision making, and that is then a human problem, not a machine problem."
Do you want to learn more about Artificial Intelligence in Assessment? Visit our website.