Compliance | August 01, 2025

AI will be ‘picking winners and losers’ among insurance products, say experts

Artificial intelligence is expected to transform many functions of the insurance business, but the area where it might prove the most disruptive – and fraught – could be product recommendations.

By Warren S. Hersch

As published in Life Annuity Specialist

Carriers are eager to deploy artificial intelligence to improve their customer experience and streamline mundane operations, and insurers are embracing the technology to achieve faster and more accurate underwriting. But the robots are also coming for another aspect of the insurance business that could have profound and unintended consequences: recommending individual products for consumers.

At the S&P Global Ratings Annual Insurance Conference in New York last month, Penn Mutual CEO David O'Malley predicted that AI tools will soon enable insurance and financial services professionals to automatically identify the most suitable products for their clients. In this shift, he suggested, AI will effectively become the decision-maker — "picking winners and losers" among competing carrier products by analyzing complex product terms and individual client needs.

Automating this process would not only raise questions about how carriers might find ways to "game the algorithm" to put their own products ahead of their competitors', but it would undoubtedly draw scrutiny from regulators charged with ensuring that such recommendations are genuinely in consumers' best interest.

Kathy Donovan, a specialized consulting manager in the insurance compliance business of professional services firm Wolters Kluwer, said in an interview that she expects regulators to take keen interest in this application of AI.

Donovan pointed to the Connecticut Insurance Department's leadership in crafting AI standards, likely based on a National Association of Insurance Commissioners model bulletin on insurers' AI use, which could become a reference point for evaluating insurers' practices during market conduct examinations.

Transparency and traceability needed

Jackie Morales, a senior principal at consultancy Datos Insights, emphasized transparency and traceability as central metrics regulators will use when scrutinizing AI-driven product recommendations and marketing. She foresees enhanced disclosure requirements designed to protect consumers and ensure that insurers can interpret and implement the rules effectively.

These requirements would supplement existing NAIC guidelines, which mandate that insurers maintain written procedures to demonstrate compliance. Morales noted that these procedures also call for oversight and auditing of AI-generated materials to ensure quality and accuracy.

Some insurers are already feeling the strain. Sean Cox, president of First Consulting & Administration, said the volume of AI-generated marketing materials is stretching the capacity of compliance departments.

The AI models that generate content "often don't understand the applicable laws and regulations, which can vary by product type," Cox said. They also frequently fail to consider an insurer's own internal compliance standards, he added.

And it doesn't help that regulations often vary widely from state to state. Rules can also differ only subtly across jurisdictions, which can be even more nettlesome to account for.

The 'black box' problem

A persistent concern among compliance professionals is AI's so-called "black box" problem — the lack of transparency in how AI tools reach their conclusions, particularly regarding product suitability. The issue also alarms state insurance regulators, nearly all of whom have adopted the NAIC's best interest standard for annuity sales, a standard that can be hard to assess when recommendations rest on opaque decisions by AI systems.

To mitigate this risk, Donovan said insurers need technology controls robust enough to "question the intelligence of the AI," ensuring that its outputs align with regulatory and internal standards. This includes creating parallel review systems to validate AI-generated decisions.

Jeff Chapman, field CTO at AI platform developer Vectara, echoed this need for safeguards, noting that AI systems remain susceptible to "hallucinations" — fabricated or inaccurate information that the programs present as fact.

Many carriers have built promising AI prototypes but never deployed them, Chapman said, citing the risks of opaque logic as a common stumbling block. This challenge, he added, has created demand for platforms like Vectara's, which allow insurers to build AI assistants grounded in their own data and evaluate the accuracy of AI-generated content.

Despite the complications AI introduces, it is also proving invaluable in addressing them. First Consulting, for example, is working with a platform called Comply that is designed to streamline advertising compliance reviews and workflow management.

Morales noted that AI can also reduce compliance risk by rapidly gathering, analyzing, and summarizing regulatory changes, and by scanning vast volumes of data far faster than human reviewers.

"In the insurance industry, speed to compliance is as important as speed to market," she said.

Vall Herard, CEO of compliance software vendor Saifr, claims that his firm's AI Agent for NAIC model law and state regulations helps reduce compliance review cycle times by up to 70%.

The platform, he added, helps ensure regulatory consistency and allows marketing teams to resolve issues before compliance review, freeing up staff to focus on more complex matters.

'Human in the loop'

Still, experts caution that human oversight remains essential. Stacy Koron, VP at First Consulting, warned that while AI can accelerate product development, it can also overlook key compliance issues.

"Like any other risk to our business, we need to monitor output closely, work with developers to improve accuracy, and pivot quickly if we determine AI systems are not working as intended," she said in an email.

Koron pointed to the NAIC model bulletin as a valuable tool for applying existing laws to new AI technologies — particularly around non-discrimination, governance, and market conduct. While these laws may need to evolve, she said insurers can minimize disruptions by staying focused on core compliance principles and consumer protections.

First Consulting's Cox emphasized that insurers should view AI as a tool to augment, not replace, compliance teams. Having a "human in the loop," especially during these early days of AI adoption, is essential for verifying results, he said.

Saifr's Herard agreed. "The AI tool isn't a replacement for human review — it is a supplement," he said.

To balance innovation with risk management, insurers must implement structured AI governance frameworks, said Vectara's Chapman. This includes investing in mandatory employee training on the responsible use of AI, deploying monitoring tools integrated into configuration management databases to track where and how AI systems are used, and involving risk officers and senior executives to ensure transparency and accountability in AI use.
