ARTICLE
4 February 2026

Navigating The AI Employment Landscape In 2026: Considerations And Best Practices For Employers

K&L Gates LLP


Introduction

Artificial intelligence (AI) regulation and litigation are set to take center stage in 2026, as new laws, guidance, and enforcement priorities are introduced at the federal and state levels. This year employers will face a rapidly evolving patchwork of state-level AI laws that impose distinct requirements for transparency, risk assessment, and anti-discrimination in the use of AI systems, particularly in employment and other high-risk contexts. At the same time, federal initiatives, such as the Trump Administration's (the Administration's) December 2025 Executive Order on AI, signal a push for a national framework and preemption of state laws, setting the stage for significant legal and compliance challenges. Meanwhile, high-profile litigation over AI training data, copyright, and algorithmic bias continues to progress, with courts addressing novel questions about fair use, data provenance, and liability for AI-generated outputs.

This alert provides an overview of the key US AI laws taking effect in 2026, recent federal and state regulatory developments, and significant pending litigation that will shape the United States' AI legal landscape in the year ahead.

US AI LAWS AND REGULATIONS

Colorado—SB 24-205

Starting 30 June 2026, Colorado's SB 24-205 (the Colorado AI Act) introduces new compliance obligations for entities doing business in Colorado, regardless of where they are located, that rely on "high-risk" AI tools to make employment decisions (and other "consequential decisions" not addressed here) affecting Colorado residents.1 The law is part of Colorado's Consumer Protection Act.

Key Requirements Under the Colorado AI Act

  • Risk Assessments: Covered employers must evaluate high-risk AI systems to identify and mitigate potential harm.
  • Transparency Notices: Candidates and employees must be informed when AI influences employment decisions like hiring, firing, or promotion.
  • Reasonable Care Standard: Covered employers must take proactive steps to prevent algorithmic discrimination. Otherwise, they risk being subjected to enforcement actions.

What Counts as "High-Risk" AI?

Any AI system that makes or influences significant employment decisions, such as hiring, promotion, or termination, is covered by the Colorado AI Act. This includes tools commonly used by employers, such as automated hiring tools, resume-screening algorithms, and predictive hiring analytics.

What Does "Reasonable Care" Mean?

Under the Colorado AI Act, covered employers must exercise "reasonable care" to ensure that high-risk AI systems do not result in unlawful discrimination. The Colorado AI Act requires that employers take affirmative actions to safeguard against unlawful discrimination, including:

  • Conducting regular bias testing to audit AI tools for disparate impact on protected classes (an illustrative calculation appears below).
  • Confirming that third-party AI providers or vendors meet legal and ethical standards.
  • Maintaining records of risk assessments, mitigation steps, and vendor compliance.
  • Ensuring that final employment decisions are not fully automated and include meaningful human review.

Failing to satisfy this standard may expose employers to enforcement actions, civil liability, and reputational harm. The law positions AI risk management as a core compliance responsibility, making proactive measures essential for meeting legal standards.
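For illustration, one widely used screen in bias testing is the "four-fifths rule" drawn from the federal Uniform Guidelines on Employee Selection Procedures: each group's selection rate is compared to the rate of the most-selected group, and ratios below 80% are commonly treated as evidence of potential adverse impact. The sketch below shows a minimal version of that calculation; the sample data, function names, and the choice of the four-fifths rule as the sole metric are illustrative assumptions, and a real audit program should be designed with counsel and will often involve additional statistical tests.

```python
# Minimal illustration of an adverse (disparate) impact screen using the
# "four-fifths rule": a group whose selection rate falls below 80% of the
# highest group's rate is flagged as showing potential adverse impact.
# Illustrative only; not a substitute for a full bias audit or legal review.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) pairs -> {group: selection rate}."""
    applied, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applied[g] for g in applied}

def impact_ratios(outcomes, threshold=0.8):
    """Compare each group's rate to the highest rate and flag ratios below the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flagged": (r / best) < threshold}
            for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening data: (age_band, advanced_to_interview)
    data = ([("under_40", True)] * 60 + [("under_40", False)] * 40
            + [("40_plus", True)] * 35 + [("40_plus", False)] * 65)
    for group, result in impact_ratios(data).items():
        print(group, result)
    # under_40 rate = 0.60; 40_plus rate = 0.35 -> ratio ~0.58, flagged
```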

How Can Covered Employers Prepare for Compliance With the Colorado AI Act?

The Colorado AI Act is the latest in a growing trend toward AI accountability at the state level, and other state and local laws are likely to follow. Early action is key to mitigating risk and ensuring compliance.

To get ahead of the 30 June 2026 deadline, employers should review and update their policies, focusing on these critical areas:

  • Compliance Roadmaps: Develop a clear, step-by-step plan to meet all requirements before the effective date.
  • Risk Assessment Frameworks: Implement practical tools to identify, measure, and mitigate bias in AI-driven employment decisions.
  • Policy Development: Draft and refine transparency notices and internal protocols to align with the law's standards.
  • Vendor Management: Confirm that third-party AI providers comply with legal and ethical obligations.

California

SB 53: Transparency in Frontier Artificial Intelligence Act

California's2 Transparency in Frontier Artificial Intelligence Act (SB 53) took effect on 1 January 2026, marking the first US statute focused on transparency and safety governance for "frontier" AI models. The law defines a "frontier model" as a "foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations." Generally speaking, a frontier model is a large, highly advanced AI model trained on massive datasets using an exceptionally large amount of computing power. The law applies to frontier developers whose models are available in California and imposes heightened obligations on frontier developers with more than US$500 million in annual revenue.
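To put the statutory compute threshold in rough perspective, the sketch below applies the commonly cited "6 × parameters × training tokens" heuristic for estimating dense-transformer training compute. The heuristic, the example model size, and the function names are assumptions for illustration only; the statute itself defines the threshold as a raw operation count and does not prescribe any estimation method.

```python
# Back-of-the-envelope estimate of training compute, for illustration only.
# Uses the common "6 * N * D" approximation (N = parameters, D = training tokens);
# SB 53's threshold is a raw count of 10^26 operations, however performed.
SB53_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the 6*N*D approximation."""
    return 6.0 * parameters * training_tokens

def exceeds_sb53_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the rough estimate exceeds the SB 53 compute threshold."""
    return estimated_training_flops(parameters, training_tokens) > SB53_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical example: a 1-trillion-parameter model trained on 20 trillion tokens.
    flops = estimated_training_flops(1e12, 2e13)
    print(f"Estimated training compute: {flops:.1e} FLOPs")  # ~1.2e+26
    print("Exceeds SB 53 threshold:", exceeds_sb53_threshold(1e12, 2e13))  # True
```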

Covered companies must publicly publish and annually update a "Frontier AI Framework" describing how they identify, assess, and mitigate catastrophic risks, including cybersecurity protections for unreleased model weights and internal governance processes. In addition, all frontier developers must issue transparency reports when deploying new or substantially modified frontier models, detailing model capabilities, intended uses, and applicable restrictions, with large developers required to summarize catastrophic risk assessments and any third-party evaluations.

SB 53 also establishes mandatory reporting of "critical safety incidents" to the California Office of Emergency Services, robust whistleblower protections for employees raising AI safety concerns, and civil penalties of up to US$1 million per violation enforceable by the California Attorney General. Although the law applies directly to a relatively small number of developers, its influence is expected to extend well beyond California. Much like prior California privacy and environmental laws, SB 53 may function as a de facto national benchmark in the absence of comprehensive federal AI legislation.

With the statute now in force, and as California regulators begin issuing guidance and recommendations to update key definitions, companies developing, deploying, or procuring high-capacity AI systems should expect increased scrutiny of AI governance practices, incident response protocols, vendor assurances, and internal reporting structures, even if they fall outside the law's formal scope.

AB 853: Amendments to the California AI Transparency Act

On 13 October 2025, Assembly Bill 853 (AB 853) was signed into law, delaying the California AI Transparency Act's (the Act's) effective date to 2 August 2026 and imposing new requirements.

The Act establishes new standards for generative AI (GenAI) hosting platforms and systems. Among its initial requirements, the law mandates that creators of GenAI systems provide, at no cost to users, an AI detection tool.

System Provenance Data: Disclosure and Preservation Requirements

AB 853 imposes new requirements regarding system provenance data for online platforms. System provenance data is metadata embedded into content that contains information about the type of device, system, or service that was used to generate the content, or information otherwise related to content authenticity. These requirements apply to a range of covered entities, including:

  • Public-facing social media platforms;
  • Mass messaging services;
  • File sharing platforms; and
  • Standalone search engines that have served more than two million unique monthly users in the past 12 months.

Under AB 853, covered platforms are subject to the following three requirements regarding system provenance data associated with content distributed on their services:

  • Detection: Covered platforms must detect whether any provenance data is embedded in the content distributed on their platform.
  • User Interface: Covered platforms are required to provide a user interface that discloses the availability of provenance data, specifically when that data reliably indicates that the content was generated or substantially altered by a GenAI system. These disclosures must be clear and accessible to users, enabling them to understand when content has been significantly modified through GenAI technologies.
  • User Inspection: Covered platforms must allow users to inspect all available system provenance data in an easily accessible manner. This can be achieved in the following ways:
    • Directly through the user interface;
    • By providing downloadable content that contains the provenance data; or
    • By offering a link to the content's provenance information displayed on an internet website or in another application provided by the platform or a third party.

The law also prohibits these platforms from knowingly removing any system provenance data from content.
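For illustration only, the sketch below shows one simplified way an engineer might inspect uploaded image content for basic embedded metadata of the kind the statute describes (device make and model, generating software, and capture time). The file name, the chosen EXIF tags, and the use of EXIF as a stand-in for provenance data are assumptions for the example; production provenance standards such as C2PA content credentials are richer and cryptographically verifiable, and nothing here should be read as satisfying AB 853's detection, disclosure, or inspection requirements.

```python
# Illustrative inspection of basic embedded metadata that can serve as a rough
# stand-in for "system provenance data" (device, software, timestamp).
# NOT a compliance tool; assumes Pillow is installed and the file has EXIF data.
from PIL import Image

PROVENANCE_EXIF_TAGS = {
    271: "device_make",          # EXIF "Make"
    272: "device_model",         # EXIF "Model"
    305: "generating_software",  # EXIF "Software"
    306: "timestamp",            # EXIF "DateTime"
}

def extract_basic_provenance(path: str) -> dict:
    """Return whichever provenance-style EXIF fields are present in the file."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {name: exif.get(tag)
                for tag, name in PROVENANCE_EXIF_TAGS.items()
                if exif.get(tag) is not None}

if __name__ == "__main__":
    info = extract_basic_provenance("uploaded_content.jpg")  # hypothetical file
    if info:
        print("Provenance-style metadata found:", info)
    else:
        print("No basic provenance metadata detected in this file.")
```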

GenAI Hosting Platforms and GenAI Disclosures

AB 853 requires that GenAI systems include latent disclosures in any AI-generated image, video, or audio content. These disclosures must convey specific information and must be permanent or extremely difficult to remove.

Effective 1 January 2027, AB 853 introduces further obligations for GenAI hosting platforms. In particular, these platforms may not knowingly make available any GenAI system that fails to embed the required disclosures within the content it creates.

Manufacturers of Content-Creating Devices Must Provide Authenticating Information by Default

Starting 1 January 2028, any device intended for sale in California that captures digital content will be required to include a latent disclosure by default. The disclosure must contain the following information:

  • The name of the device manufacturer;
  • The name and version number of the capture device that created or altered the content; and
  • The time and date when the content was created or changed.

Although users may be given the option to enable or disable these disclosures, the device's default settings must ensure compliance with the rule.

Enforcement and Penalties

Enforcement of these new rules will fall to the California Attorney General, city attorneys, or county counsel. Any violation of the law will result in a civil penalty of US$5,000 per offense. Additionally, attorneys' fees and applicable costs may be imposed on violators.

Final Regulations Regarding Automated Decision Systems

On 1 October 2025, the final Employment Regulations Regarding Automated Decision Systems (ADS Regulations) went into effect. For more information on the regulations, please see the firm's 31 October 2025 alert and its 17 November 2025 alert.

An "automated decision system" (ADS) is any system, AI, machine learning, algorithms, statistics, etc., used to make or aid decisions related to job applicants or employees. This includes candidate-scoring tools, personality assessments, recruitment ad targeting, video-interview analytics, and predictive hiring models.

The ADS Regulations apply existing anti-discrimination laws to tools that employers use to directly or indirectly make employment decisions. Indeed, every California employer covered by the Fair Employment and Housing Act must practice algorithmic accountability when using ADS and AI in employment decisions. Key compliance requirements include:

  • Anti-Discrimination Measures: Employers must ensure the ADS does not cause disparate impact against protected groups. Liability applies even without intent; impact alone matters.
  • Bias Testing and Audits: Routine, independent audits and anti-bias testing are mandatory. One-off reviews at deployment are insufficient.
  • Transparency and Notice: Employers must inform applicants and employees both before and after ADS use. Notices should explain usage, options to opt out, and how to request human review.
  • Affirmative Defense: Employers can defend against legal claims by showing they have taken good-faith steps (e.g., audits, corrective measures, continuous oversight) with solid documentation.
  • Vendor Accountability: Outsourcing doesn't shift responsibility. Employers remain fully liable for bias or discrimination introduced by third-party systems.
  • Record Retention: Maintain all ADS-related documentation (e.g., data inputs/outputs, decision rules, audit results, correspondence) for a minimum of four years (an illustrative record structure appears after this list).
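As a purely illustrative aid for the record-retention point above, the sketch below outlines one way to structure an ADS decision record so that inputs, outputs, decision rules, audit references, and the human-review step are preserved together for the retention period. The field names, the class structure, and the four-year retention constant are assumptions drawn from the summary above, not language prescribed by the ADS Regulations.

```python
# Illustrative structure for retaining ADS-related documentation (inputs/outputs,
# decision rules, audit results, human review) for at least four years.
# Field names and the retention constant are assumptions, not regulatory text.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=4 * 365)  # "minimum of four years"

@dataclass
class ADSDecisionRecord:
    candidate_id: str
    tool_name: str                 # e.g., a resume-screening or scoring tool
    decision_date: datetime
    inputs_snapshot: dict          # data supplied to the ADS
    output: str                    # score, rank, or recommendation produced
    decision_rule: str             # criteria or model version applied
    human_reviewer: str            # who performed meaningful human review
    audit_references: list = field(default_factory=list)  # linked bias-audit IDs

    def retain_until(self) -> datetime:
        """Earliest date this record could be purged under a four-year policy."""
        return self.decision_date + RETENTION_PERIOD
```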

To ensure compliance with the regulations, covered employers should:

  • Establish policies and procedures for use of an ADS in employment decisions;
  • Train human resources personnel, managers, and anyone else using an ADS;
  • Educate users and leadership on the risks associated with using an ADS and the employer's obligations under the law;
  • Ensure there is human oversight and a human element to the decision-making process, even if an ADS is used;
  • Continuously monitor, test, and regularly audit the ADS for bias and effectiveness;
  • Carefully vet vendors; and
  • Consult with legal counsel.

Illinois—HB 3773

As discussed in more detail in the firm's 1 May 2025 blog post, effective 1 January 2026, Illinois House Bill 3773 (HB 3773) amends the Illinois Human Rights Act to expressly prohibit Illinois employers from using AI that "has the effect of subjecting employees to discrimination on the basis of protected classes." Specifically, Illinois employers cannot use AI that has a discriminatory effect on employees, "[w]ith respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment." HB 3773 also requires employers to notify employees and applicants when using AI during recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or when the use could affect the terms, privileges, or conditions of employment.

At the end of 2025, the Illinois Department of Human Rights published draft rules implementing HB 3773, but these rules have not yet been finalized. However, Illinois employers should work with counsel to prepare for compliance.

Texas—HB 149

As discussed in more detail in the firm's 25 June 2025 alert, effective 1 January 2026, Texas' HB 149, the Responsible Artificial Intelligence Governance Act (TRAIGA), imposes limited obligations on covered private employers3 and instead focuses on Texas government agencies' use of AI systems and on the use of AI for certain limited purposes, such as manipulating human behavior to incite violence or self-harm, engaging in criminal activity, and social scoring. It also creates a state AI advisory council and a regulatory sandbox program that allows accepted entities to test AI systems without a license, registration, or other regulatory authorization.

New Jersey—N.J.A.C. 13:16

New Jersey adopted regulations, effective 15 December 2025, governing disparate impact discrimination in the workplace, which includes the use of automated employment decision technology. "Automated employment decision tools" are defined as any software, system, or process that aims to automate, aid, or replace human decision-making relevant to employment. These regulations clarify that automated employment decision tools must be evaluated for potential disparate impact on protected classes. They explain that automated tools used for recruiting, screening, interviewing, hiring, and other employment decisions can replicate and amplify existing workforce imbalances, penalize applicants based on religion, disability, or medical needs, and generate biased outputs when the underlying technology has not been properly tested on diverse populations. Examples include résumé‑scoring models that mirror the demographics of a nondiverse workforce, scheduling filters that screen out applicants who cannot work on particular days for religious reasons, and facial‑analysis tools that inaccurately assess individuals with darker skin tones, disabilities, religious head coverings, or facial hair because the systems were not validated on comparable groups.

Federal Action

Executive Order 14365—Ensuring a National Policy Framework for Artificial Intelligence

On 11 December 2025, President Trump signed Executive Order 14365, "Ensuring a National Policy Framework for Artificial Intelligence" (EO 14365) aimed at preempting state AI laws in favor of unified, national regulation. EO 14365 provides that state-by-state AI regulation creates compliance burdens, requires entities to embed ideological bias within AI models, and impermissibly regulates beyond state borders, "impinging on interstate commerce." To "correct" this issue, EO 14365 seeks to establish "a minimally burdensome national standard" by charging various federal agencies with establishing a framework through which states can be penalized for enacting AI laws contrary to the Administration's AI policy, and existing state AI regulations can be legally challenged. EO 14365 also calls for the preparation of a legislative recommendation establishing a uniform federal policy framework for AI that preempts state AI laws.

Pursuant to EO 14365, on 9 January 2026, the US Attorney General established the AI Litigation Task Force, composed of the Attorney General, the Associate Attorney General, and representatives from the Office of the Deputy Attorney General, the Office of the Associate Attorney General, the Office of the Solicitor General, and the Civil Division, with the stated purpose of challenging state AI laws deemed inconsistent with the Administration's AI policy.

EO 14365 also provides that, within 90 days:

  • The Secretary of Commerce must issue a policy notice describing the circumstances under which states may be ineligible for certain broadband deployment funding under the Broadband Equity, Access, and Deployment Program if they impose certain AI-related requirements and must publish a list of state AI laws considered "onerous"; and
  • The Federal Trade Commission (FTC), in consultation with the Special Advisor for AI and Crypto, must issue a policy statement addressing how the FTC Act's prohibition on unfair or deceptive acts or practices applies to AI models and explain how certain state laws are preempted by the FTC Act.

In addition, within 90 days of the Secretary of Commerce's above-described actions, the FTC, in consultation with the Special Advisor for AI and Crypto, must initiate a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models that preempts conflicting state laws.

Given EO 14365's breadth and focus on preemption, legal challenges are anticipated. For example, Florida is moving ahead with its own AI regulations despite the issuance of EO 14365. Governor Ron DeSantis has introduced recommendations for Florida lawmakers, including an "Artificial Intelligence Bill of Rights," introduced as Senate Bill 482 by Sen. Tom Leek for consideration in the legislative session beginning 13 January 2026. SB 482 would prohibit AI companion chatbot platforms from establishing or maintaining accounts with minors without a parent's consent and would require them to allow parents to monitor, restrict, and disable their child's interactions. DeSantis stated that EO 14365 cannot preempt state authority under the Tenth Amendment and maintains that Florida's proposals are consistent with child safety goals the federal government encourages. "Even reading [EO 14365] very broadly, I think the stuff we're doing is going to be very consistent," DeSantis said. "But irrespective, clearly, we have a right to do this."

At this time, employers should:

  • Continue to comply with applicable state AI regulations; and
  • Monitor further developments regarding EO 14365 and related litigation and legislative actions.

For more information on EO 14365, please see the firm's 15 December 2025 alert.

Department of Justice Compliance Guidance

The Department of Justice's (DOJ) updated Evaluation of Corporate Compliance Programs now directs prosecutors to assess how companies identify, manage, and mitigate risks associated with AI and other emerging technologies. Key elements under scrutiny in prosecutorial investigations include whether organizations conduct explicit AI risk assessments, implement robust controls and mitigation strategies, integrate AI risk management into broader enterprise governance frameworks, and provide training on responsible AI use. Additionally, corporate compliance teams must have adequate access to data and analytical tools to monitor and respond to AI-related risks, and companies are expected to adapt their compliance programs in response to technological and regulatory developments.

This expanded focus on AI risk management by DOJ can significantly influence prosecutorial decisions, affecting whether a company is charged, the severity of sanctions or monitoring obligations, and the possibility of reduced penalties or declination. The guidance underscores that effective AI governance is now a critical component of a robust corporate compliance program, requiring proactive measures and continuous adaptation to emerging risks.

AI Litigation

Mobley v. Workday, Inc., 740 F. Supp. 3d 796 (N.D. Cal. 2024)

Cases against AI vendors for bias in employment decisions and privacy violations are active, and employers should expect rulings on algorithmic discrimination and disclosure obligations in 2026. One of the most closely watched cases in this area is Mobley v. Workday, Inc., which is currently pending in the US District Court for the Northern District of California and illustrates the litigation risks of using AI in hiring.

In Mobley, a job applicant alleged that Workday's AI-driven recruitment screening tools disproportionately rejected older, Black, and disabled applicants, including himself, in violation of anti-discrimination laws. In late 2024, Judge Rita Lin allowed the lawsuit to proceed, finding the plaintiff stated a plausible disparate impact claim and that Workday could potentially be held liable as an "agent" of its client employers. This ruling suggests that an AI vendor might be directly liable for discrimination if its algorithm, acting as a delegated hiring function, unlawfully screens out protected groups.

On 6 February 2025, the plaintiff moved to expand the lawsuit into a nationwide class action on behalf of millions of job seekers over age 40 who applied through Workday's systems since 2020 and were never hired. The amended complaint added several additional named plaintiffs (all over 40) who claim that, after collectively submitting thousands of applications via Workday-powered hiring portals, they were rejected, sometimes within minutes and at odd hours, in a manner suggestive of automated processing. They argue that the class of older applicants was uniformly impacted by the same algorithmic practices. On 16 May 2025, Judge Lin preliminarily certified a nationwide class of over-40 applicants under the Age Discrimination in Employment Act (ADEA), a ruling that highlights the expansive exposure these tools could create if applied unlawfully.

Throughout 2025, the case moved forward, with arguments relating to class certification. On 6 January 2026, the court held a hearing on Mobley's motion for leave to file a Second Amended Complaint, and Judge Lin granted the motion. Mobley sought to add additional class representatives and to assert Title VII (sex and race), ADEA, and Americans with Disabilities Act claims on behalf of those proposed class representatives, who received their Notices of Right to Sue from the Equal Employment Opportunity Commission after the First Amended Complaint was filed, as well as race, gender, and age claims under the California Fair Employment and Housing Act, Gov. Code § 12940 et seq. The court also noted that Defendants may take Mobley's deposition (subject to certain limitations).

Mobley marks one of the first major legal tests of algorithmic bias in employment and remains the nation's most high-profile challenge to AI-driven employment decisions.

Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of concerns related to emerging issues in labor, employment, and workplace safety law and are well positioned to provide guidance and assistance to clients on the ever-changing AI legal landscape.

Footnotes

1. The law provides for a narrow exemption for employers with fewer than 50 employees that do not use their own data to train or further improve their AI systems.

2. None of the following closely watched California AI bills passed into law in 2025: (1) Senate Bill 7 (No Robo Bosses Act); (2) Assembly Bill 1018 (Automated Decisions Safety Act); (3) Assembly Bill 1221 (Workplace Surveillance Tools); and (4) Assembly Bill 1331 (Workplace Surveillance: Off Duty/Private Areas). K&L Gates discussed a number of these bills in its 29 May 2025 alert. However, the failure of these four bills to pass does not signify a shift in California's legislative priorities. California remains focused on algorithmic accountability and workplace surveillance, and that pressure is not going away. Companies that take proactive steps now (e.g., tighten governance, reduce bias, and implement transparency protocols with employees) will be prepared to pivot quickly if these or similar bills resurface in 2026.

3. TRAIGA regulates those who (1) deploy or develop AI systems in Texas; (2) produce a product or service used by Texas residents; or (3) promote, advertise, or conduct business in the state.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

