AI in assessment and recruitment: managing the bias

October 25, 2018 Maximilian Jansen


The limitations of AI in decision making

We recently wrote about IBM’s new solution to help identify and ultimately reduce bias in AI systems, and about how this impacts the use of AI in assessment.

Bias and AI are both important topics for all HR and talent practitioners, and this has prompted us to look at the issue from a different perspective.

Recently, it was reported that Amazon.com Inc had stopped the development of an AI hiring tool after it was discovered to be systematically disadvantaging women when evaluating candidates’ resumes. The model had trained itself to score a candidate lower when the resume showed indicators that the candidate was female. According to Reuters, the bias also extended to favouring certain verbs more likely to be used by male applicants, and the system was also recommending unqualified candidates. These problems are caused by issues in the training data itself.
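To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (not Amazon’s system; the features, coefficients, and data are invented purely for illustration) of how bias encoded in historical hiring decisions propagates into a model trained on them:

```python
# A minimal, hypothetical sketch (not Amazon's system): how bias in
# historical hiring decisions propagates into a resume-scoring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features extracted from a resume: a genuine skill score, and a
# proxy for gender (e.g. the word "women's", as in "women's chess club").
skill = rng.normal(size=n)
female_proxy = rng.integers(0, 2, size=n)

# The historical hiring decisions we train on were biased: candidates
# with the proxy feature were hired less often at the same skill level.
logits = 1.5 * skill - 1.0 * female_proxy
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([skill, female_proxy])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical bias: the proxy
# feature receives a clearly negative coefficient.
print(dict(zip(["skill", "female_proxy"], model.coef_[0].round(2))))
```

The model is doing exactly what it was asked to do: reproducing the patterns in the historical decisions, bias included.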

These examples emphasise (yet again) the importance of training data during the development of AI systems. But there is another important reason for caution around AI: the highly specialised nature of current AI systems. Kathryn Hume, VP of Product and Strategy at integrate.ai, refers to current AI algorithms as “idiot savants” and “[…] super intelligent on one very, very narrow task”.

It is this aspect of current AI systems that reaffirms our own approach regarding the use of AI in decision making.

We believe the utilisation of AI to be a matter of augmentation rather than automation. AI can collect and measure additional information efficiently, and it supplements information about candidates gained from other sources. Whilst AI does automate processes and routine tasks in part, for us the value is that it can also enable new types of assessments – the type we are currently developing – which give value beyond automation.

Seeing AI through this lens helps us to recognise that humans and machines should each focus only on those areas in which their respective expertise and specific skills lie.

Paul Daugherty, co-author of Human + Machine: Reimagining Work in the Age of AI, describes this in an interview with Harvard Business Review as “collaborative intelligence, which is what happens when you put the strengths of the human together with the strength of the machine”.

The takeaway here is to make sure that AI is not put into real-world recruitment without clearly specified applications, inputs, outputs, and procedures – and, maybe most important of all, human guidance.

There needs to be continuous training and evaluation of the AI to mitigate biases – and to ensure a mutually beneficial cooperation between AI and human recruiters. Continuous monitoring is essential because algorithms continuously learn and adapt. But it also means that we need to raise human awareness of all potential biases and be able to intervene early should problematic patterns appear.
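As one concrete example of what such monitoring could look like, here is a minimal sketch, assuming recruiters log each candidate’s group membership and the system’s recommendation. It applies the four-fifths (80%) rule, a common screen for adverse impact; the groups, data, and threshold are illustrative, not a prescribed implementation:

```python
# A minimal monitoring sketch: the four-fifths (80%) rule as an early
# warning for adverse impact. Groups, data, and threshold are illustrative.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, recommended: bool) pairs."""
    totals, selected = Counter(), Counter()
    for group, recommended in records:
        totals[group] += 1
        selected[group] += int(recommended)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_alerts(records, threshold=0.8):
    rates = selection_rates(records)
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the
    # highest group's rate, so a human can review the system early.
    return {g: round(r / best, 3) for g, r in rates.items()
            if r / best < threshold}

# Hypothetical log: group A recommended 40/100 times, group B 25/100.
records = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 25 + [("B", False)] * 75)
print(adverse_impact_alerts(records))  # {'B': 0.625}
```

An alert like this decides nothing by itself; it simply flags a problematic pattern early so a human can review the system’s recommendations before they affect real candidates.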

You can learn more about our work with AI in assessment on our website.

About the Author

Maximilian Jansen is a product and analytics assistant within the research team at Aon's Assessment Solutions. He is currently studying for a Master's degree in Psychology at the Otto-Friedrich University of Bamberg and has an interest in the development of online assessments. Aon's Assessment Solutions undertakes 30 million assessments each year in 90 countries and 40 languages.
