6 March 2026

Legal Risks In Relying On AI For Hiring

Butzel Long

Contributor

Founded in 1854, Butzel Long has played a prominent role in the development and growth of several major industries. Business leaders have turned to us for innovative, highly-effective legal counsel for over 170 years. We have a long and successful history of developing new capabilities and deepening our experience for our clients’ benefit. We strive to be on the cutting edge of technology, manufacturing, e-commerce, biotechnology, intellectual property, and cross-border operations and transactions.

Employers have come to rely on artificial intelligence (AI) tools to screen applicants, rank candidates, and accelerate hiring. The appeal is understandable. Two pending class actions, however, make clear that the legal risks of AI-assisted hiring are real, growing, and not someone else's problem to manage.

Warning No. 1: Internal AI Hiring Tools Create Direct Legal Exposure

If your organization uses any AI-powered system to screen or filter applicants, that system may be producing discriminatory outcomes without your knowledge. AI tools trained on historical data can replicate and amplify hiring biases at scale.

In Mobley v. Workday (N.D. Cal.), the plaintiff alleged that Workday's algorithm-based applicant screening tools discriminated against him and other similarly situated job applicants. Workday provides applicant screening services on a subscription basis to businesses spanning various industries. Specifically, it provides customers with a platform on the customer's website to collect, process, and screen job applications. The plaintiff, an African American applicant over 40 with disabilities, applied to more than 100 positions routed through Workday's platform over seven years and was rejected every time, often within minutes.

Mobley v. Workday received preliminary class certification in May 2025 as a nationwide collective action under the Age Discrimination in Employment Act ("ADEA"). The court has already ruled that Workday could face direct liability as an "agent" of the employers using its platform, finding that the software actively participates in employment decisions rather than merely executing neutral criteria.

Any tool that disproportionately screens out applicants based on race, sex, age, or disability can generate disparate impact claims under Title VII, the Americans with Disabilities Act ("ADA"), the ADEA, and Michigan's Elliott-Larsen Civil Rights Act and Persons with Disabilities Civil Rights Act ("PWDCRA"). The current federal administration has scaled back disparate impact enforcement, but courts remain bound by precedent and state agencies are not similarly constrained.

Key Questions Every Employer Should Be Able to Answer:

  • Can you explain, in plain terms, how your AI tool evaluates applicants and what data it uses?
  • Has anyone conducted a bias audit against your actual applicant pool?
  • Are final hiring decisions made by a human who can override algorithmic outputs and document the reasons for doing so?

Warning No. 2: Using a Third-Party Vendor Will Not Shield You

Many employers assume that by using an established vendor's platform rather than building their own system, they are insulated from liability. That assumption is wrong. Both the vendor and the employer can face exposure.

A class action filed in January 2026 against Eightfold AI illustrates a separate but serious dimension of this risk. Plaintiffs allege that Eightfold collects applicant data from resumes, LinkedIn profiles, social media, and internet activity, then uses that data to generate a proprietary "Match Score" that employers use to automatically filter candidates, often before any human reviews the application. The lawsuit argues that these AI-generated assessments function as consumer reports regulated by the Fair Credit Reporting Act ("FCRA"), which requires disclosure, candidate access, and dispute rights. If courts agree, employers that relied on these scores without ensuring FCRA compliance will face exposure alongside the vendor.

Before Signing or Renewing Any AI Vendor Agreement, Employers Should:

  • Review contracts to understand what data the vendor collects, how outputs are generated, and who bears responsibility for compliance failures.
  • Confirm whether the vendor provides indemnification for discrimination claims and understand the scope of that indemnity.
  • Demand bias audit reports and validate the tool against your own applicant pool.
  • Determine whether the system implicates FCRA obligations and whether your process includes required disclosures.

A Related Caution: AI-Drafted Employment Documents

Employers also increasingly use AI to draft offer letters, restrictive covenant agreements, and employment policies. AI tools often produce boilerplate language that fails to account for Michigan-specific requirements, recent court decisions, or your organization's particular circumstances. An AI-generated non-compete or arbitration clause may be facially reasonable but legally unenforceable.

There is also a privilege concern. When an employer submits a legal question or a draft document to an AI platform, that communication is almost certainly not protected by attorney-client privilege and may be discoverable. If privileged materials are disclosed to an AI tool, the employer risks waiving privilege on that subject matter. The law here is still developing, and caution is warranted.

Bottom Line

AI hiring tools offer real efficiency gains and we do not suggest avoiding them. What we do suggest is applying the same legal diligence you would to any other consequential employment practice: audit your tools, scrutinize vendor agreements, maintain meaningful human oversight, and document your decisions. The organizations best positioned to defend their hiring practices will be the ones that treated AI adoption as a governance obligation, not just a procurement decision.

How Our Labor and Employment Team at Butzel Can Help:

If you are exploring AI hiring tools, preparing to renew a vendor contract, or simply want to understand your legal exposure, we would be pleased to assist. Drawing on deep employment-law expertise and a practical understanding of how modern AI systems function, we help organizations assess potential risks under federal and Michigan civil rights laws as well as the FCRA. We also review and negotiate vendor agreements to clarify data practices and indemnification terms and assist in integrating meaningful human oversight into hiring workflows. Beyond hiring tools, we guide employers in drafting employment documents and protect against jurisdiction-specific pitfalls and potential privilege concerns. Through these services, we help clients strengthen efficiency, reduce legal exposure, and implement AI practices aligned with both legal requirements and organizational goals.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
