09 Jul 2024

Navigating AI in Insurance: Compliance and Best Practices (per the NAIC)

The insurance industry is rapidly evolving with the adoption of Artificial Intelligence (AI) systems. The National Association of Insurance Commissioners (NAIC) has recently issued a model bulletin outlining the regulatory expectations and best practices for insurers using AI. This guidance aims to ensure that the deployment of AI in insurance is both innovative and compliant with existing laws, thereby protecting consumers and maintaining market stability.

Access the full NAIC AI Guidelines here.

The Transformative Power of AI in Insurance

AI is reshaping the insurance landscape by enhancing product development, marketing, underwriting, policy servicing, claims management, and fraud detection. Its ability to streamline processes and improve accuracy offers substantial benefits. However, these advantages come with potential risks, including data vulnerabilities, biases, and lack of transparency. Insurers must adopt measures to mitigate these risks and ensure that AI systems comply with all relevant regulations.

Regulatory Expectations and Legislative Authority

The NAIC bulletin emphasizes compliance with several key legislative frameworks:

  1. Unfair Trade Practices Model Act (UTPA): This act prohibits unfair or deceptive practices in the insurance industry. Insurers must ensure that AI-driven decisions do not result in unfair competition or discrimination.
  2. Unfair Claims Settlement Practices Model Act (UCSPA): This act sets standards for fair claims handling. AI systems must adhere to these standards to avoid unfair claim settlements.
  3. Corporate Governance Annual Disclosure Model Act (CGAD): Insurers must disclose their governance practices, including how they manage and oversee AI systems.
  4. Property and Casualty Model Rating Law: This law ensures that insurance rates are not excessive, inadequate, or discriminatory. AI models used for rate setting must comply with these principles.
  5. Market Conduct Surveillance Model Law: This law provides a framework for regulatory oversight of market practices, including the use of AI in insurance operations.

Implementing an AI Systems (AIS) Program

Insurers are expected to develop and maintain a comprehensive AIS Program that addresses the risks associated with AI usage. The program should include robust governance, risk management controls, and internal audit functions. Key components of the AIS Program include:

  1. Governance Framework: Establish policies and procedures to oversee AI systems throughout their lifecycle—from development to retirement. This includes documenting compliance with AIS Program standards and ensuring transparency, fairness, and accountability.
  2. Risk Management and Internal Controls: Implement processes to manage the risks of using AI, including data governance, model validation, and protection of non-public information. Regular testing and validation of AI systems are crucial to maintaining their reliability and fairness (a simple validation sketch follows this list).
  3. Third-Party AI Systems and Data: Conduct due diligence when acquiring AI systems or data from third parties. Contracts with third-party vendors should include audit rights and compliance obligations to ensure that their AI systems meet regulatory standards.
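
To make item 2 more concrete, the sketch below shows one way an insurer might gate a model behind documented validation thresholds and keep a record of the outcome for internal audit. It is a minimal Python illustration under assumed metric names and thresholds ("auc", "calibration_error"), not a method prescribed by the NAIC bulletin.

  # Minimal sketch of a pre-deployment validation gate for an AI model.
  # Metric names and thresholds are hypothetical examples, not NAIC requirements.
  from dataclasses import dataclass
  from datetime import date

  @dataclass
  class ValidationResult:
      model_id: str
      run_date: date
      metrics: dict       # e.g. {"auc": 0.81, "calibration_error": 0.03}
      passed: bool
      notes: str

  # Illustrative acceptance thresholds an insurer might document in its AIS Program.
  THRESHOLDS = {"auc": 0.75, "calibration_error": 0.05}

  def validate_model(model_id: str, metrics: dict) -> ValidationResult:
      """Compare measured metrics against documented thresholds and record the outcome."""
      passed = (
          metrics.get("auc", 0.0) >= THRESHOLDS["auc"]
          and metrics.get("calibration_error", 1.0) <= THRESHOLDS["calibration_error"]
      )
      notes = "approved for deployment" if passed else "held for remediation and review"
      return ValidationResult(model_id, date.today(), metrics, passed, notes)

  # Example: a hypothetical claims-triage model at a scheduled revalidation.
  print(validate_model("claims-triage-v3", {"auc": 0.81, "calibration_error": 0.03}))

Keeping each validation result as a dated, structured record is one simple way to produce the documentation that regulators may later request.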

Mitigating Risks and Ensuring Compliance

The potential for AI systems to produce inaccurate, arbitrary, or unfairly discriminatory outcomes necessitates strict controls. Insurers must ensure that their AI systems:

  • Do Not Discriminate Unfairly: AI-driven decisions must comply with anti-discrimination laws.
  • Maintain Transparency: Consumers should be informed when AI systems are used, and the decisions should be explainable.
  • Protect Consumer Data: Robust data security measures must be in place to protect sensitive information.
  • Are Regularly Audited: Continuous monitoring and auditing are essential to detect and address biases or errors in AI systems (one illustrative screening check is sketched below).
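
As referenced in the last bullet, the sketch below illustrates one simple check an audit cycle might run: comparing approval rates across two groups using the familiar four-fifths ratio. This is only a screening heuristic under assumed group labels and decision data, not a test mandated by the NAIC bulletin; actual audits would apply the fairness measures and protected-class definitions relevant in each jurisdiction.

  # Minimal sketch of a recurring fairness screen on AI-driven decisions.
  # The four-fifths (0.8) ratio is a common screening heuristic, not a test
  # mandated by the NAIC bulletin; the groups and decisions are illustrative.

  def approval_rate(decisions):
      """Share of favorable outcomes (True) among the decisions."""
      return sum(decisions) / len(decisions) if decisions else 0.0

  def adverse_impact_ratio(group_a, group_b):
      """Ratio of the lower approval rate to the higher one across two groups."""
      low, high = sorted([approval_rate(group_a), approval_rate(group_b)])
      return low / high if high else 1.0

  # Hypothetical audit sample: True means the AI system recommended approval.
  group_a = [True, True, False, True, True, False, True, True]     # 75% approved
  group_b = [True, False, False, True, False, False, True, False]  # 37.5% approved

  ratio = adverse_impact_ratio(group_a, group_b)
  if ratio < 0.8:
      print(f"Ratio {ratio:.2f} falls below the 0.8 screen: escalate for human review")
  else:
      print(f"Ratio {ratio:.2f} passes the screening threshold")

A flagged ratio does not by itself establish unfair discrimination, but it signals that the decisions should be reviewed, explained, and documented by humans.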

Regulatory Oversight and Documentation

The NAIC bulletin outlines the documentation and information insurers must provide during regulatory investigations. This includes:

  • Written AIS Program: Documentation of the AIS Program’s policies, procedures, and compliance measures.
  • AI System Inventories: Detailed descriptions of the AI systems and predictive models in use (an example inventory record is sketched after this list).
  • Risk Management Documentation: Records of risk management practices, data governance, and validation processes.
  • Third-Party Agreements: Contracts and due diligence records related to third-party AI systems and data.
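
For illustration, an entry in an AI system inventory might be kept as a structured record like the one sketched below. The field names are assumptions about the kinds of details an examiner could request (use case, model type, data sources, vendor, validation date, status); the NAIC bulletin does not prescribe a particular format.

  # Minimal sketch of one entry in an AI system inventory. Field names are
  # illustrative assumptions, not a schema prescribed by the NAIC bulletin.
  from dataclasses import dataclass, asdict
  import json

  @dataclass
  class AISystemRecord:
      system_id: str
      name: str
      business_use: str        # e.g. underwriting, claims, fraud detection
      model_type: str          # e.g. gradient-boosted trees, rules engine
      data_sources: list
      third_party_vendor: str  # empty if developed in-house
      owner: str               # accountable business unit or role
      last_validated: str      # ISO date of the most recent validation
      status: str              # development, production, or retired

  record = AISystemRecord(
      system_id="AIS-0042",
      name="Claims Triage Model",
      business_use="claims management",
      model_type="gradient-boosted trees",
      data_sources=["internal claims history", "policy attributes"],
      third_party_vendor="",
      owner="Claims Analytics",
      last_validated="2024-05-15",
      status="production",
  )

  print(json.dumps(asdict(record), indent=2))

Serializing each record (here to JSON) makes it straightforward to produce a complete, current inventory when a regulator asks for it.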

Conclusion

AI offers immense potential to transform the insurance industry, but it must be used responsibly and in compliance with regulatory standards. The NAIC’s model bulletin provides a comprehensive framework to guide insurers in the ethical and effective use of AI. By adhering to these guidelines, insurers can harness the power of AI to innovate and improve their services while protecting consumers and maintaining market integrity.