AI drives insurance productivity, faces scaling challenges
Scaling efforts are hindered by regulatory, data, and governance challenges.
The adoption of artificial intelligence (AI) within the insurance industry has accelerated in recent months, with a particular focus on enhancing productivity, according to Laurent Doucet, Partner for Insurance Asia at Roland Berger.
Doucet highlighted that while AI has been a part of the industry for years, recent advancements, particularly in generative AI, are enabling insurers to explore new use cases and improve various aspects of their operations.
"Technology is evolving very, very quickly," Doucet said, "If we look at the adoption and the number of the benefits and adoption for insurers, most of the cases are in the productivity, so general personal productivity, customer services, claim, claim management, of course."
Doucet also pointed out that generative AI is opening new avenues for insurers, particularly in the area of product development and marketing. He mentioned the development of virtual consumer panels, which allow insurers to create digital twins for product testing and marketing assessments.
However, scaling AI effectively presents significant challenges for insurers. According to Doucet, a well-defined AI strategy is crucial for success. "The first thing to scale AI is to get a proper strategy," he advised.
He underscored the importance of a strategy that balances business objectives, technology execution, and governance: "When you define such a strategy, you need three pillars: one on business strategy and value proposition, one on technology execution, and one on governance and people."
Doucet also emphasised that data security is a priority in AI implementation, particularly in a highly regulated industry like insurance. "Customer data security is paramount," he stressed. Insurers must also contend with legacy systems, making it critical to have a master plan for integration and system upgrades.
Governance and change management are equally important in scaling AI. Doucet warned of the risks associated with poor governance, including model risk, data quality issues, and ethical concerns related to AI bias.
"There are a number of risks on the regulatory and compliance, of course. There are some aspects of model risk, which is the right selection and customization of the large language model that you want to use and approval of these models," he explained.