Why a Glass Box Rather Than a Black Box Approach is Important when Using AI in Talent Assessment

May 17, 2019 Achim Preuss

First published in the ATC Newsletter

Many employers are intrigued by the idea of using artificial intelligence to improve their hiring processes, and AI is increasingly part of candidate and employee psychometric assessments. From chatbot-style conversations with candidates in situational judgment tests (SJTs) to algorithm-based reviews of candidates’ responses to test questions, AI now regularly informs HR and talent decisions.

But despite this broadening use, people continue to have doubts about companies using AI to assess candidates and employees, especially when it’s unclear exactly how these processes work and what steps have been taken to ensure fairness. Aon’s research suggests that this so-called “black box” approach to AI can lead candidates to react unfavorably both to AI itself and to the organizations that rely on it. This reaction can taint impressions of a potential employer even among candidates who are offered a job.

Employers that want to harness the power of AI in talent assessment must find a way to help candidates feel more comfortable with AI in the hiring process, and a key step is a “glass box” approach that emphasizes transparency.

Here are steps you can take to create a glass-box approach to AI that addresses candidates’ concerns and increases their trust in the talent assessment process.

Educate Candidates About AI’s Role in the Process

Helping candidates understand how you’re using AI, and why, can go a long way toward easing their anxiety about AI in talent assessment. Organizations that describe the benefits of this approach, such as improving role and organizational fit, can gain the advantages of AI in recruitment without alienating potential talent.

Candidates who are more familiar with AI are generally more accepting of its use. Aon’s research has found that participants who were highly familiar with AI were equally as trusting of an AI decision-maker as a human decision-maker.

The need for education is particularly critical in the face of increasing calls for transparency regarding how organizations use applicants’ data. In the European Union, organizations are already required by law to obtain consent from job applicants before using automated decision-making in hiring.

The more information applicants have about how your organization is using AI, the more likely they are to embrace it. In addition, the process of selecting candidates — whether for entry into the organization or promotion within it — must always be legally defensible. It must not discriminate against or favor candidates based on gender, race or any of the other characteristics protected by equal opportunity legislation.

Organizations that use AI in hiring will likely find they need to increase the amount of interpersonal contact they have with applicants during the selection process. Even if an AI system is used to automate the decision-making process, applicants may find comfort in having open lines of communication with a contact person while they are applying.

Build Trust Through Proactive Transparency

The AI that’s built into your talent assessment must be transparent and open to challenge, and you must be transparent with applicants about your use of AI in the hiring process.

The complex algorithms sometimes used in AI can make selection decisions difficult to justify when their reasoning isn’t explained. And if your selection decisions can’t be easily explained, applicants could challenge them in a court of law.

A glass-box AI in which all stakeholders understand what’s being measured and how those measurements are used reduces legal and regulatory risks and reduces applicants’ anxieties about the talent assessment process. It also makes it easier to quickly course-correct when things go awry.

Greater transparency during the consent process can ease concerns about AI. Aon’s research suggests that providing such explanations can lead people to react favorably during selection processes by making them feel more informed and respected by the organization.

The vast amounts of data generated online about human behavior can help us understand and develop people’s full potential in ways that were impossible in the past. But it’s critical to approach that data-mining process with a set of constructs around the characteristics, capabilities and traits that directly relate to success on the job. Rather than taking a black-box approach, companies need to develop a transparent competency model that bases hiring and promotion decisions on clear predictive models.

Do you want to know more about AI in talent assessment? Read our handbook.

About the Author

Achim Preuss

Dr. Achim Preuss is a renowned pioneer in the assessment industry and a visionary practitioner. As Head of Global Solutions at Aon's Assessment Solutions, he is responsible for the company’s global product development and its best practice innovations. Achim co-founded cut-e Group, a global talent management and assessment specialist, in 2002 and was its Chief Technology Officer when the company was acquired by Aon plc in 2017. cut-e and Aon, as Aon's Assessment Solutions, undertake 30 million assessments each year in 90 countries and 40 languages.
