The AI Agenda, our flagship AI conference, brought together James Davies, Tarun Tawakley and Ali Vaziri from Lewis Silkin alongside Sacchin Beepath from Holistic AI to explore how artificial intelligence is transforming recruitment. Together, they discussed the opportunities AI presents and the strategies and tools needed for responsible implementation.
In a skills-driven economy, embracing AI allows employers to improve how they identify, assess and hire the right people. But putting the right safeguards and governance structures in place is crucial to doing this safely and lawfully.
1. Choose AI tools carefully
- Assess whether to use “off the shelf” AI tools or those trained on your own data. There are pros and cons to both in terms of deployment speed and embedded bias.
- Conduct due diligence on both the tool and the training data before putting them to use.
2. Prioritise data quality
- Understand what data has been used to train the tool. Poor quality, untrustworthy or biased data can lead to discriminatory outcomes or bad decisions.
- Ensure ongoing monitoring for fairness.
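As a concrete illustration of what "ongoing monitoring for fairness" can mean in practice, the sketch below compares selection rates across candidate groups and applies the US EEOC's "four-fifths" guideline, under which a group's selection rate below 80% of the highest group's rate is a common indicator of adverse impact. The data, group names and thresholds here are hypothetical, and a real monitoring programme would be designed with legal and statistical advice for the relevant jurisdiction.

```python
from collections import Counter

def selection_rates(outcomes):
    """Selection rate (selected / considered) per group.

    `outcomes` is a list of (group, selected) tuples — hypothetical
    screening results, used only to illustrate the check.
    """
    considered = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    return {g: selected[g] / considered[g] for g in considered}

def four_fifths_check(rates):
    """Flag whether each group's rate is at least 80% of the
    highest group's rate (the 'four-fifths' guideline)."""
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical outcomes from an AI screening tool
outcomes = [("A", True), ("A", True), ("A", False), ("A", True),
            ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(outcomes)   # {"A": 0.75, "B": 0.25}
flags = four_fifths_check(rates)    # B's rate is 0.25/0.75 ≈ 0.33, below 0.8
```

Running a check like this on a rolling basis, rather than once at deployment, is what turns a one-off validation into ongoing monitoring.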
3. Ensure transparency and explainability
- Provide clear explanations to candidates about how AI is used in the recruitment process and how it may affect their application.
- Be able to explain, in understandable terms, the logic and principles behind AI-driven decisions. This will be essential if decisions are challenged by candidates and to comply with rules on automated decision-making.
4. Understand the regulatory landscape
- Be aware that different jurisdictions have varying compliance requirements. Some tools may be designed with only the US discrimination regime in mind.
- Some regulations have extra-territorial reach, so think about where the recruitment process impacts candidates, not just where your business is based.
5. Implement robust governance and accountability
- Establish clear roles and responsibilities for AI oversight that complement existing business functions.
- Develop comprehensive policies and supporting processes covering AI development, deployment, monitoring, and review.
- Maintain records of testing, validation, and any changes made to AI systems.
6. Ensure meaningful human oversight
- Guard against automation bias, where humans may defer too readily to AI recommendations.
- Foster AI literacy within your HR and recruitment teams to ensure informed and ethical use of technology.
- Assign responsibility for final decisions to trained individuals who can critically assess AI outputs.
7. Regularly audit
- Conduct regular, independent audits of AI tools, both before and after deployment, to identify and mitigate bias and to ensure tools remain fair and effective as circumstances change.
- Ensure that any “success” criteria used by AI do not simply replicate historic bias.
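One simple test of whether a "success" criterion replicates historic bias is to look at how the success label in the historic training data is distributed across groups: if past "success" was heavily skewed toward one group, a model trained on those labels will tend to reproduce that skew. The sketch below is a minimal, assumption-laden illustration — the groups, labels and data are invented.

```python
def label_rate_by_group(records):
    """Rate of the 'success' label per group in historic data.

    `records` is a list of (group, success) tuples — a hypothetical
    training set, used only to illustrate the audit step.
    """
    totals, positives = {}, {}
    for group, success in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(success)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historic hiring outcomes used as training labels
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 3 + [("B", False)] * 7)
rates = label_rate_by_group(history)   # {"A": 0.8, "B": 0.3}
# A large gap like this suggests the label encodes historic bias
# and should not be used as a success criterion without scrutiny.
```

A disparity in these base rates does not by itself prove discrimination, but it is exactly the kind of signal an independent audit should surface before the criterion is used to train or tune a recruitment tool.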
8. Address ethical and privacy considerations
- Be transparent with candidates about the use of AI and obtain appropriate consent, especially when using tools that analyse personal or sensitive data.
- Provide alternative processes or accommodations for candidates who may be disadvantaged by AI tools, such as chatbots or video assessments.
9. Consider the entire recruitment pipeline
- Audit the entire recruitment pipeline, including where and how job adverts are placed and targeted, to ensure that all stages are free from bias and don't result in indirect discrimination.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.