AI: another step forward in making AI more transparent

October 18, 2018 Maximilian Jansen

AI in assessment

Eliminating bias in AI systems

It was interesting to read about IBM’s latest developments around AI for its Cloud service. IBM announced that it has extended its trust and transparency capabilities to help address the principles of explainability and fairness across a variety of machine learning frameworks. In other words, the aim is to make the systems’ processing more transparent so that fair decisions are made. It is, IBM says, an important step in developing its trusted services.

IBM has included checkers to help detect bias in training data and models, tools to pinpoint the source of the bias, and suggestions as to how to mitigate it. IBM believes these developments will help to engineer greater trust in artificial intelligence systems and in the solutions and actions taken on their basis, and that ultimately they will support greater fairness and compliance with GDPR.
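To make the idea of such a bias checker concrete, here is a minimal sketch (not IBM's implementation; the data below is hypothetical) of one widely used check: the disparate impact ratio, often assessed against the "four-fifths rule" familiar from employment selection practice.

```python
def disparate_impact(outcomes, group):
    """Ratio of the selection rate for the unprivileged group to the
    selection rate for the privileged group (1.0 = parity)."""
    selected_unpriv = sum(o for o, g in zip(outcomes, group) if g == "unprivileged")
    total_unpriv = sum(1 for g in group if g == "unprivileged")
    selected_priv = sum(o for o, g in zip(outcomes, group) if g == "privileged")
    total_priv = sum(1 for g in group if g == "privileged")
    return (selected_unpriv / total_unpriv) / (selected_priv / total_priv)

# Hypothetical screening outcomes: 1 = advanced to interview, 0 = rejected
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
group = ["privileged"] * 5 + ["unprivileged"] * 5

ratio = disparate_impact(outcomes, group)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: below the four-fifths threshold - investigate for bias.")
```

A tool like IBM's goes much further, of course, by tracing the source of such a disparity and suggesting mitigations, but the check itself is this simple in principle.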

We welcome such developments – and it’s good to hear how the rules that govern AI are being explored for fairness and bias. The IBM software generates its own test cases and then displays the results with plain-text explanations of how the AI reached its conclusions, along with example findings. These can then be recorded and kept as documentation.

It is important for us all as recruiters and talent decision makers to be aware of our own biases (even unconscious bias) and to minimise these to eliminate discrimination. This applies to training data and AI algorithms just as much as it does to humans.

We must also understand the decision-making patterns within the AI systems that we utilise. It’s important that we are able to retrace the AI rating processes that have taken place, and then to review these for bias. Training data is the key to developing capable AI systems, which means we need to identify the sources of bias and correct them so as to minimise their influence. This is not a quick fix but an ongoing task throughout the entire development process when designing and refining a measure such as a psychometric test.
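One established way of correcting such influence at the data level (again, an illustration on hypothetical data rather than a description of any particular vendor's method) is reweighing: assigning each training example a weight so that group membership and outcome label become statistically independent in the weighted data.

```python
from collections import Counter

def reweigh(groups, labels):
    """Per-example weights w(g, y) = P(g) * P(y) / P(g, y), so that
    group and label are independent in the weighted training data."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical training data: group A is over-represented among positives
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
```

Examples from the over-represented group–label combinations are down-weighted and the under-represented ones up-weighted, so a model trained on the weighted data no longer inherits that particular imbalance. It addresses only the biases one has measured, which is exactly why this remains an ongoing task rather than a one-off correction.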

And this is why we invest time and resources in scrutinising the training data during the development process. We always have the end goal in sight: to support human raters in their decision-making. To do this, we aim to replicate a reliable, fast and fair rater.

The challenge – and the risk – is that developing AI can create a black-box problem, in which it is unclear how a decision was generated. AI cannot yet be used alone in high-stakes, ‘go or no-go’ hiring decisions. Nonetheless, IBM’s developments are certainly helpful in ensuring the explainability and traceability of AI processes and results.

These new tools will also allow humans and AI to cooperate not only in creating the basis for decision making, but also in improving mutual bias awareness and mitigation.

About the Author

Maximilian Jansen is a product and analytics assistant within the research team at Aon's Assessment Solutions. He is currently studying for a Master's degree in Psychology at the Otto-Friedrich University of Bamberg and has an interest in the development of online assessments. Aon's Assessment Solutions undertakes 30 million assessments each year in 90 countries and 40 languages.
