National Association of Insurance Commissioners Embraces Model Bulletin on Artificial Intelligence


  • NAIC adopted a model bulletin on AI in insurance on December 4, 2023.
  • The bulletin sets out state regulators’ expectations for insurers’ use of AI in compliance with state laws.
  • It focuses on preventing unfair practices and discrimination in AI systems.
  • Insurers are required to develop and maintain responsible AI programs.
  • Verification and testing for errors and biases in AI systems are encouraged.
  • The program should align with the extent of AI use and potential harm to consumers.
  • Risk management controls include standards for AI system validation and data suitability.
  • Several states are also implementing regulations to govern AI in insurance.

Main AI News:

In a unanimous vote on December 4, 2023, the National Association of Insurance Commissioners (NAIC) adopted a model bulletin addressing the use of artificial intelligence (AI) in the insurance sector. The bulletin sets out state insurance regulators’ expectations for how insurers should deploy AI technologies in compliance with state laws, including those governing unfair trade practices and unfair claims settlement practices. Its effect depends on adoption by individual states, and it applies only to insurers holding a certificate of authority in the adopting state.

The model bulletin takes a broad approach to protecting consumers from harm caused by AI use that violates state law. It directs insurers to establish a framework to mitigate such risks, with a core requirement being the development and maintenance of a written program for the responsible use of AI systems. The bulletin also encourages verification and testing methods to detect errors, biases, and potential unfair discrimination in predictive models and other AI systems.

The standards laid out in the model bulletin for AI program development encompass the following key points:

  • Governance that provides oversight and accountability through senior management answerable to the insurer’s board or an appropriate board committee.
  • A program tailored to the extent of the insurer’s AI use and the potential for harm to consumers.

The model bulletin also requires that the risk management controls established under an insurer’s AI program include, among other things, standards for validating, testing, and retesting AI systems as needed to assess their performance, as well as standards for evaluating the suitability of the data used to train, validate, and audit those systems. Insurers must likewise have processes for evaluating data and AI systems acquired from third parties.

The model bulletin arrives as a growing number of states adopt AI standards for the insurance sector. Notable examples include:

  • A regulation adopted by the Colorado Division of Insurance in September 2023 requiring life insurers that use external consumer data and information sources (ECDIS) to implement a governance and risk management framework designed to prevent unfair discrimination.
  • Draft regulations on automated decision-making technology (ADMT) issued by the California Privacy Protection Agency (CPPA) on November 27, 2023, which would require businesses using ADMT to give California residents opt-out notices and access to information about how ADMT is used.
  • Bulletins issued by the California Insurance Commissioner on June 30, 2022, expressing concerns about the potential for AI technologies and “Big Data” to lead to unfair discrimination by insurers and instructing insurers to review their practices.
  • Plans announced by the New York Department of Financial Services (NYDFS) to issue a Circular Letter outlining best practices for insurers using AI and clarifying concerns raised in its January 18, 2019 Circular Letter on the use of external data, algorithms, and predictive models in life insurance underwriting, with the aim of ensuring transparency and preventing unfair discrimination.

The NAIC’s adoption of the model bulletin marks a significant step toward responsible AI use in the insurance industry, with a primary focus on protecting consumers. The guidance is expected to serve as a template for state regulators and insurers alike as they navigate AI-driven insurance practices.


As more states follow with their own AI regulations, insurers will need to adapt their compliance programs to these evolving standards, ensuring transparency and fairness in AI-driven insurance processes.