How can we build trust in the use of AI? Guidelines recently released by an expert group offer some suggestions. Read how we have considered these in relation to our own AI assessment platform.
In 2019, the independent High-Level Expert Group on AI (AI HLEG) publicly released a set of guidelines aiming to promote trustworthy Artificial Intelligence (AI). According to this expert group, trustworthy AI has three components: it should be lawful, ethical, and robust.
We believe this is an important and timely framework, so we have looked carefully at the key points highlighted in the guidelines and considered how they relate to our own video assessment platform, vidAssess.
Retain the possibility of human intervention
We know that AI systems offer great potential for automating otherwise manual processes, but such systems must still uphold respect for human autonomy. We can achieve this by guaranteeing the possibility of human oversight throughout a given process and enabling it through human-centric design. This oversight requires that humans have a certain level of access to the AI systems; consequently, that access must be appropriately secured against unauthorized persons. Measures must also be in place to mitigate the possible effects of a malicious intrusion.
Technical robustness and security measures play an important role and must acknowledge the sensitivity of the processes and the information they guard. They must ensure the security of sensitive information, and compliance with GDPR requirements at every stage of the process.
While security considerations require AI systems to be protected and secured, this should not come at the cost of transparency and explicability. We must be able to explain how decisions are made. Processes and systems need to be adequately documented, and information about their capabilities and limitations needs to be available and kept up to date. Transparency also means ensuring that systems are free from bias and discrimination, which requires regular research, ongoing monitoring, and human intervention where necessary.
Accessibility is needed alongside secure access, as humans must be able to adjust AI systems and correct for biases. However, such high-level access should be granted only to a small group of people with a proven need. Transparency also includes keeping records of who has access to which parts of a given AI system.
As systems evolve over their life cycle, and aspects of AI-supported assessments change through regular updates, research on possible negative impacts and sources of bias needs to be updated along with the systems. An important part of this is ensuring that, even as assessments become increasingly complex, no particular group, especially more vulnerable ones, is excluded based on factors outside their control.
The table below outlines aspects of realizing trustworthy AI in vidAssess.
For more information about our vidAssess-AI platform, take a look at our website.
If you want to learn more about how AI is being used in talent assessment, sign up for our Assessment Essentials series of webinars.
About the Author: Maximilian Jansen