Can AI Help to Identify Competencies Shown in Video Interviews?

January 16, 2019 Miriam Quante

Asynchronous Video Interviewing (AVI) may not be a commonly used term, but you will likely know its application, as organizations are already embracing the concept. In short, AVI is where a candidate records their responses to pre-set questions and then submits the video via an online platform to the interviewer. This process saves time and resources for both interviewee and interviewer because the recorded ‘interview’ can be shared and re-watched across the entire hiring team. Given the power of structured interviewing and the efficiency gains AVI offers, this solution has quickly become a valuable addition to the early stages of hiring processes.

But with advances in the application of Artificial Intelligence (AI) in assessment situations, is there a role for AI in the objective scoring of such video interviewing?

Miriam Quante, a Master’s degree student at the University of Luebeck, Germany, has investigated whether an off-the-shelf AI service from a third-party provider can return accurate personality profiles from responses given in video interviews. This service was originally trained by applying machine learning algorithms to arbitrary texts to predict personality outcomes, rather than following traditional psychometric standards.

The Study

Five trainee positions were available at an Australian company, and all applicants were asked to complete three ability tests and a personality questionnaire mapped onto the Big Five personality factors. The 275 highest-scoring applicants, based on their combined assessment scores, were then asked to answer six interview questions about specific behaviours in past situations, using our AVI platform, vidAssess. This process yielded 132 paired samples of data.

For this study, the submitted responses were then transcribed into text. From these texts, the AI service claims to be able to suggest how the responses map onto the Big Five personality factors. This allowed Miriam to examine the correlations between the AI ratings and the results from the personality questionnaire.
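For readers curious what this comparison looks like in practice, below is a minimal Python sketch of how paired AI-derived Big Five scores and questionnaire scores could be correlated trait by trait. The file name and column names are illustrative assumptions, not the actual data or analysis code used in the study.

```python
# Minimal sketch: correlating AI-derived Big Five scores with questionnaire scores.
# "paired_scores.csv" and the column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

# One row per candidate; each trait has an AI-derived score and a questionnaire score.
df = pd.read_csv("paired_scores.csv")

for trait in TRAITS:
    ai_scores = df[f"ai_{trait}"]
    questionnaire_scores = df[f"questionnaire_{trait}"]
    r, p = pearsonr(ai_scores, questionnaire_scores)
    print(f"{trait:>18}: r = {r:+.2f} (p = {p:.3f}, n = {len(df)})")
```

A correlation close to zero for a given trait would indicate that the AI-derived score does not track the self-reported questionnaire score, which is the kind of pattern the study examined.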

The results of the study helped us gain valuable insights into the possibilities and limitations of self-trained AI models and their viability in psychometric assessment. Two key findings stood out: (1) the degree of overlap between AI-produced scores and human ratings; and (2) the lack of relationship between AI-produced scores and participants’ self-reported personality results. Simply put, while the AI service produced scores that were positively related to human ratings on pre-defined competencies, it did not produce scores that replicated participants’ self-reported Big Five personality profiles.

The results from this study ultimately support our approach of developing proprietary AI models based on psychometric research and personality theory. They also support our decision to apply rigorous psychometric and scientific standards when training and developing natural language classification algorithms to analyse unstructured data (e.g., interviews, open-ended responses).

 

About the Author

Miriam Quante

Miriam Quante is an HR Business Partner within Group HR at Aon’s Assessment Solutions. Miriam holds both Bachelor’s and Master’s degrees in Psychology and wrote her theses with the support of Aon’s Assessment Solutions. Having worked within the product development team, Miriam now focuses on the learning and development of internal staff. Aon’s Assessment Solutions delivers 30 million assessments each year in 90 countries and 40 languages.
