ARTICLE
7 January 2026

Artificial Intelligence Governance For Insurers

Wilson Elser Moskowitz Edelman & Dicker LLP


More than 800 attorneys strong, Wilson Elser serves clients of all sizes across multiple industries. It maintains 38 domestic offices, another in London and enjoys more extensive international reach as a founding member of Legalign Global.  The firm is currently ranked 56th in the National Law Journal’s NLJ 500.

Introduction

With the advent of artificial intelligence (AI), companies are rushing to invest in AI systems to achieve unprecedented growth and operational efficiencies, and to develop superior goods and services. The insurance industry is poised to reap massive benefits from leveraging AI technology to sift and analyze its vast troves of claims, underwriting, and customer data at lightning speed. Early adoption of AI can create a competitive advantage by enabling accurate identification and pricing of risk across all lines of insurance. However, as the industry races to embrace AI, insurers must develop a robust AI governance and risk management plan. 

NAIC Model Bulletin: Use of AI Systems by Insurers

In December 2023, the National Association of Insurance Commissioners (NAIC) issued a model bulletin setting forth guidance on insurers' development and use of artificial intelligence. As of 2025, twenty-four states have adopted the NAIC's AI bulletin. As the NAIC notes, “AI is transforming the insurance industry.” Moreover, “AI techniques are deployed across all stages of the insurance life cycle, including product development, marketing, sales and distribution, underwriting and pricing, policy servicing, claim management, and fraud detection.”

However, the NAIC cautions that “AI has the potential to increase the risk of inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes for consumers.” As such, “it is important that insurers adopt and implement controls specifically related to their use of AI that are designed to mitigate the risk of Adverse Consumer Outcomes.” In particular, the NAIC bulletin requires insurers to develop and implement a written program for the responsible use of AI systems (AIS Program).

The AIS Program should be designed to mitigate the risk that the insurer's use of AI systems will result in Adverse Consumer Outcomes, defined as “a decision by an insurer that is subject to insurance regulatory standards enforced by the Department [of Insurance] that adversely impacts the consumer in a manner that violates those standards.” The AIS Program should address the use of AI systems across the entire insurance lifecycle, from product development to claims handling. This includes AI systems developed by the insurer or by a third party.

The AIS Program should include a governance framework. This includes the creation of a multi-disciplinary AI governance committee with members from different business units, such as actuarial, underwriting, claims, compliance, and legal. All individuals involved in the development, implementation, testing, and monitoring of the AIS Program should receive appropriate training. In addition, there should be senior management and Board oversight and accountability for the AIS Program. 

Special consideration should be given to the underlying data used to develop and train AI systems, including data quality free from inherent biases. Moreover, insurers must take steps to protect the confidentiality of any non-public consumer information that may be used as inputs or outputs for AI systems. This includes reasonable data security and privacy practices regarding the collection, use, storage, and sharing of customer information.

To the extent that an insurer uses AI systems developed by a third party, the insurer must conduct appropriate due diligence to ensure that the systems and underlying datasets will not result in Adverse Consumer Outcomes. In addition, the contracts with such third parties should include provisions that address the insurer's audit rights and the third party's duty to cooperate with the insurer in the event of any regulatory investigations related to the use of the third party's AI systems.

The NAIC Model Bulletin further notes that insurers may be required to provide regulators with information and documentation relating to the AIS Program, such as an inventory and description of AI systems; risk management and internal controls; corporate governance; policies and procedures relating to the adoption, implementation, maintenance, monitoring, and testing of the AIS Program; training materials; due diligence on third-party vendors; and data practices to ensure data integrity and suitability.

New York State Department of Financial Services AI Bulletin

On July 11, 2024, the New York State Department of Financial Services (NY DFS) issued the final version of its bulletin addressing the use of Artificial Intelligence by insurers. As noted by NY DFS, the use of AI can simplify and expedite insurance underwriting and pricing processes. However, benefits from the use of AI also pose potential risks that may result in unfair adverse effects on consumers or discriminatory decision-making. The purpose of the bulletin is to put insurers on notice of the NY DFS's expectations regarding the implementation and use of AI. This includes, but is not limited to, the adoption of an AI governance and risk management framework. While the bulletin specifically applies to insurers authorized or licensed to write business in New York, the overall guidance is beneficial to all insurers.

NY DFS cautions that insurers should not utilize AI for underwriting and pricing unless and until they conduct a comprehensive risk assessment demonstrating that such use will not result in unfair or unlawful discrimination. As part of this assessment, the insurer should document its analysis of potential unfair or unlawful discrimination arising from the use of AI; conduct periodic testing for unfair or unlawful discrimination; conduct a quantitative assessment using statistical metrics to analyze AI data and model outputs; and perform a qualitative evaluation for unfair and unlawful discrimination, taking into account how the AI operates.

Insurers should document their AI risk management plan, policies, and procedures, including: 

  • Board and senior management oversight of the insurer's AI-related activities.
  • Inventory and description of AI systems (including purpose, use, risks, safeguards).
  • Policies and procedures with respect to the development and use of AI systems.
  • Employee training on the permissible use of AI systems.
  • Processes for identifying risks associated with the use of AI systems.
  • Internal controls designed to mitigate identified risks.
  • Monitoring and testing of AI systems at least annually to assess performance. 
  • Audits to evaluate the overall effectiveness of the insurer's AI risk management plan. 

NY DFS emphasizes that insurers are responsible for ensuring that any AI systems used for underwriting and pricing developed by third-party vendors comply with all applicable laws and regulations. As such, insurers should conduct appropriate due diligence on vendors. In particular, insurers should adopt written standards, policies, and procedures for the acquisition, use, or reliance on AI systems developed or deployed by third-party vendors. 

In addition, insurers may want to include specific provisions in their vendor contracts, such as (i) audit rights; (ii) vendor's duty to cooperate in any regulatory investigations stemming from the insurer's use of the vendor's AI systems; (iii) reporting, remediation and elimination of incorrect (or biased) information contained in AI datasets; (iv) vendor's duty to report any known or suspected compromise of AI tools, models or datasets; and (v) vendor's inability to share, use or disclose the insurer's non-public data for any purposes outside the scope of services rendered to the insurer.

NY DFS further notes that transparency is an important consideration in an insurer's decision to use AI systems to underwrite and price insurance. Accordingly, insurers should disclose to consumers (1) the fact that the insurer uses AI in its underwriting and pricing decisions; (2) whether the insurer used data about a person that was obtained from a third-party vendor; (3) that the person has the right to request the specific information used about them in the underwriting or pricing decision; and (4) in the event of an adverse decision, the insurer should provide details about the type and source of information leading to an adverse underwriting or pricing decision.

NY DFS may audit an insurer's use of AI systems within the scope of regular or targeted examinations permitted under New York insurance law. Moreover, the failure to adequately disclose the use of AI systems and the external data sources upon which they rely may constitute an unfair trade practice in violation of insurance laws. Insurers must be prepared to respond to consumer complaints or inquiries regarding the use of AI and maintain records of all such complaints.

Colorado ECDIS Insurance Regulation 

On October 15, 2025, Colorado amended its insurance regulations governing the use of External Consumer Data and Information Sources (ECDIS). The regulation defines ECDIS as “data or an information source that is used by the insurer to supplement or supplant traditional underwriting factors or other insurance practices or to establish lifestyle indicators that are used in insurance practices.” This may include credit scores, social media habits, locations, purchasing habits, homeownership, educational attainment, licensures, civil judgments, court records, consumer-generated Internet of Things (IoT) data, biometric data, and insurance risk scores derived by the insurer or a third party from such external information sources.

Insurers that are licensed to do business in the State of Colorado; that offer life insurance, private passenger automobile insurance, or health benefit plans; and that use ECDIS (including algorithms and predictive models that rely on ECDIS) are required to develop and implement a governance and risk management framework to determine whether the use of ECDIS potentially results in unfair discrimination.

The ECDIS governance and risk management framework must include documentation demonstrating:

  • The use of ECDIS is reasonably designed to prevent unfair discrimination.
  • Senior management and Board oversight and accountability.
  • An established Governance committee, with members from key functional areas across the business.
  • Documented policies and procedures for the design, development, testing, deployment, use, and monitoring of ECDIS.
  • Ongoing training and supervision for personnel on the acceptable use of ECDIS.
  • Inventory of all ECDIS (including description, purpose, and outputs generated).
  • Methods and criteria for selection of third-party vendors that supply ECDIS.
  • A comprehensive annual review of the ECDIS governance and risk management framework.

Insurers subject to the Colorado insurance regulations are required to provide the Colorado Department of Insurance with a narrative report summarizing compliance with the ECDIS requirements, including the title and qualifications of each individual responsible for compliance. These reports must be signed by an officer attesting to compliance with the regulation. The compliance reports must be submitted annually (beginning on December 1, 2024, for life insurers and on July 1, 2026, for private passenger automobile insurers and health benefit plans). Noncompliance may result in civil penalties and/or the suspension or revocation of an insurance license.

Maryland AI Insurance Regulation

On October 1, 2025, Maryland enacted a new law regulating the use of AI in connection with utilization review by health insurers, health maintenance organizations, and any other health plans subject to regulation by the State of Maryland (collectively, “insurers”). Utilization review is a process used by health insurers to evaluate the necessity and appropriateness of medical treatment, in part to control costs. The law requires that AI used for utilization review not (i) base its determination solely on a group (rather than individual) dataset, (ii) replace the role of a health care provider in the decision-making process, (iii) result in unfair discrimination, (iv) directly or indirectly cause harm to an enrollee, or (v) deny, delay, or modify health care services. Insurers are required to make AI tools available for inspection and audit by the Maryland Insurance Commissioner. In addition, insurers must have written policies and procedures that describe how AI tools will be used and what oversight will be provided. Insurers must review the performance, use, and outcomes of AI tools on a quarterly basis.

Conclusion

With the rapid adoption of AI systems and tools by the insurance industry, and the inherent and still-unknown risks associated with their use, more states are likely to enact insurance laws and regulations governing the use of AI. Meanwhile, the existing guidance and laws make clear a common recurring theme: the need for a well-documented AI governance and risk management framework that sets the tone from the organization's top echelons.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

