From resume-screening bots to tools that assess facial expressions in interviews, artificial intelligence (AI) is rapidly changing how employers make decisions about candidates and employees. AI and automated decision systems (ADS) are reshaping the entire employment lifecycle, from the moment a job posting goes live to the day of separation.
Employers increasingly rely on AI to streamline recruiting and hiring, and ADS are also used to evaluate employee performance and inform separation decisions. Although these tools promise efficiency and consistency, their growing influence over employment decisions raises important legal questions and concerns. In response, several jurisdictions have begun issuing guidance and enacting legislation aimed at addressing algorithmic discrimination in the use of AI in employment.
The Promise
AI is transforming recruitment by promising to make hiring faster, more efficient, and potentially less biased. By automating tasks like resume screening, AI can quickly sort through a large volume of applications to identify candidates who meet the minimum qualifications. AI can also apply consistent criteria across all candidates, thereby reducing the influence of personal preference and unconscious bias in hiring decisions.
When implemented properly, AI can also help expand access to a more diverse talent pool. For example, algorithms can be designed to focus on qualifications and skills rather than demographic characteristics, enabling employers to uncover candidates they might otherwise have overlooked.
The Pitfalls
Despite its promise, AI in hiring also raises concerns about fairness and discrimination. Rather than eliminating bias, AI tools can inadvertently amplify it. This phenomenon, known as "algorithmic bias," occurs when AI tools favor or disfavor candidates based on characteristics such as race, gender, or socioeconomic background, producing discriminatory outcomes. The bias does not come from the algorithm itself but from the data it is trained on. If the historical hiring data used to train the AI contains patterns of favoritism, such as preferences for certain schools or for a specific gender in a particular field, those patterns can be replicated and even worsened by the AI. Without human oversight, AI may perpetuate or exacerbate these disparities, often producing a disparate impact on protected groups.
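For technically inclined readers, the short sketch below uses entirely synthetic data to make this mechanism concrete: a model trained on historical hiring outcomes can reproduce a past disparity even when the protected characteristic itself is excluded, because a correlated "proxy" feature (here a hypothetical stand-in such as a zip-code-like variable) leaks the same information. The data, feature names, and model choice are illustrative assumptions only, not a depiction of any particular vendor's tool.

    # Minimal sketch (synthetic data) of how historical bias can carry into a model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # "group" is a protected characteristic (0 or 1); "skill" is the job-related signal.
    group = rng.integers(0, 2, n)
    skill = rng.normal(0, 1, n)

    # Assume past hiring favored group 0 even at equal skill levels.
    past_hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

    # The protected attribute is excluded from the features, but a hypothetical
    # proxy (e.g., something zip-code-like that correlates with group) leaks it.
    proxy = group + rng.normal(0, 0.3, n)
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, past_hired)

    # The model's recommendations mirror the historical disparity.
    preds = model.predict(X)
    for g in (0, 1):
        print(f"Predicted selection rate, group {g}: {preds[group == g].mean():.2%}")

Running the sketch shows a noticeably lower predicted selection rate for the disfavored group, even though the protected characteristic was never given to the model directly.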
The Pushback
As AI tools become more common in employment decisions, some state and local authorities are beginning to respond to rising concerns about algorithmic discrimination. Several jurisdictions, including Illinois, Colorado, California, and New York City, have introduced or passed laws requiring employers to disclose their use of automated hiring systems and, in some cases, to conduct regular bias audits to ensure compliance with anti-discrimination laws.
Others, such as Delaware, have taken a more proactive approach by establishing a dedicated regulatory body, an "Artificial Intelligence Commission," to advise on the safe and ethical use of AI. Similarly, the New Jersey Division on Civil Rights recently issued guidance on the responsible use of AI in employment decisions. These growing efforts reflect a push for greater transparency and accountability in how AI is used in the workplace.
The Fix
To harness the benefits of AI while minimizing harm, employers must take proactive steps to safeguard against algorithmic bias. One key measure is conducting regular bias audits to assess whether an algorithm produces disparate outcomes for certain groups. Transparency is also essential: employers should clearly disclose when and how AI is used in employment decisions. Another crucial measure is maintaining human oversight; AI should assist, not replace, human judgment. To that end, decision-makers must be trained to understand how these tools work and when to question their results. Finally, using diverse and comprehensive training data can help reduce the risk of reinforcing historical patterns of bias.
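As a rough illustration of what a bias audit measures, the sketch below compares selection rates across two hypothetical groups and applies the "four-fifths" rule of thumb drawn from the EEOC's Uniform Guidelines, flagging any group selected at less than 80% of the most-selected group's rate. The records, group labels, and threshold handling are simplified assumptions for illustration only; a formal audit, such as one performed to satisfy New York City's bias-audit requirements, involves considerably more rigor and should be designed with counsel.

    # Simplified bias-audit illustration: selection rates and impact ratios on
    # hypothetical records. Not a substitute for a formal, counsel-guided audit.
    from collections import Counter

    # Hypothetical records: (group label, whether the tool selected the candidate)
    records = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    selected = Counter(group for group, was_selected in records if was_selected)
    totals = Counter(group for group, _ in records)
    rates = {group: selected[group] / totals[group] for group in totals}

    # Four-fifths rule of thumb: flag groups selected at less than 80% of the
    # rate of the most-selected group.
    highest_rate = max(rates.values())
    for group, rate in rates.items():
        impact_ratio = rate / highest_rate
        flag = "REVIEW" if impact_ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")

In this simplified example, the second group's impact ratio falls well below 0.8 and would be flagged for further review, which is the kind of signal a regular audit is designed to surface before it becomes a legal exposure.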
The Bottom Line
AI has the potential to revolutionize employment decisions and processes, but without careful design, implementation, and oversight, it can introduce new risks and create additional challenges. Employers must balance innovation with responsibility, ensuring that AI-driven employment decisions remain transparent, equitable, and aligned with evolving legal standards. The members of the KMK Labor and Employment Practice Group are here to help employers avoid the pitfalls of using AI when making employment decisions.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.