Knowledge

AI and Insurance: between operational efficiency and cyber risks

by Mario Calcagnini (partner, Excellence Consulting) and Maurizio Primanni (CEO, Excellence Consulting)

In 2025, the European insurance sector is experiencing a phase of strong technological acceleration. The adoption of artificial intelligence-based solutions, in particular, is growing rapidly. According to Fitch Ratings, over 50% of European non-life insurers had already adopted AI solutions as of February 2025, while the Milliman Barometer of GenAI Adoption – European Insurance, updated to March 2025, reports that 60% of companies use generative AI applications in claims management and customer care processes.

The direction is clear at a global level as well. A survey conducted by Gallagher in 2025 shows that 68% of C-level executives consider artificial intelligence a strategic lever. The figure is, however, down from the 82% recorded the previous year, signaling a phase of greater awareness: it is no longer just a matter of adopting AI, but of doing so according to criteria of safety, ethics, and control.

It is a clear step forward from the 2020–2022 period, when artificial intelligence was viewed with suspicion and its use cases were not yet immediately intuitive. Today, driven also by competitive urgency and the so-called “ChatGPT effect”, the adoption of AI is no longer a marginal bet or a side experiment: it has become a systemic, structural, and essential trend.

However, while the engine accelerates, the regulatory bodywork and the chassis of operational resilience struggle to keep up. The DORA and AI Act regulations were created precisely to bridge this gap: their objective is not to hinder innovation, but to anchor it to criteria of solidity and systematic governance.

Where AI stands today in insurance
Artificial intelligence has now become pervasive in the insurance sector. Its applications are numerous and cover strategic areas. In intelligent underwriting, for example, AI is used to calculate risk through advanced predictive models. In automated claims handling, meanwhile, technologies such as computer vision and Natural Language Processing are used to process claim photos and quantify damage efficiently.

AI is also present in customer engagement, with conversational agents available 24/7 to serve customers. Anti-fraud and dynamic pricing solutions are also increasingly widespread, driven by adaptive algorithms that update dynamically as user behavior evolves.
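
As a purely illustrative example of the last category, a dynamic pricing rule might nudge a premium toward a model-derived risk estimate while a guardrail bounds each step. This is a minimal sketch, assuming a hypothetical risk_score in [0, 1] produced by an upstream predictive model, not any insurer's actual tariff logic:

```python
def adjust_premium(base_premium: float, risk_score: float,
                   max_step: float = 0.10) -> float:
    """Illustrative dynamic pricing rule: move the premium toward the
    model's risk estimate, but never by more than max_step (10%) per
    update, so an erratic model cannot shift tariffs abruptly.

    risk_score is assumed to be a calibrated value in [0, 1] coming
    from an upstream predictive model (hypothetical)."""
    target = base_premium * (0.8 + 0.4 * risk_score)  # map score to a price band
    step = max(-max_step, min(max_step, (target - base_premium) / base_premium))
    return round(base_premium * (1 + step), 2)

# A high-risk profile drifts upward, but only within the guardrail:
print(adjust_premium(500.0, 0.9))  # 550.0 (capped at +10%, not the raw +16%)
```

The guardrail is exactly the kind of structural safeguard the rest of this article argues for: the model proposes, but a bounded rule constrains what it can do in production.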

The advantages of these applications are evident: lower costs, higher service quality, and more satisfied customers. Behind the enthusiasm for these early applications, however, lie risks that are still not effectively managed: a systemic view of the governance model for the risks associated with AI is often missing.

Risks can undermine the value created by AI
In many cases, artificial intelligence is treated as if it were an ordinary technological component, but doing so underestimates the risk factors it introduces. Concrete examples of incidents caused by the use of AI are increasingly frequent.

In one case, an internal audit discovered that an AI-based pricing model had autonomously modified tariff parameters, following “opaque” patterns, without accessible explainability logs. In another, a company tested an anti-fraud model that ended up systematically penalizing certain postal codes, with the evident risk of discriminatory outcomes.
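
How does such a bias surface? A crude disparate-impact screen over the decision log can reveal it. The sketch below is illustrative only, assuming a hypothetical log of (postal_code, flagged_as_fraud) pairs; function names and the 2x threshold are this article's inventions, not a standard:

```python
from collections import defaultdict

def fraud_flag_rates(decisions):
    """decisions: (postal_code, flagged_as_fraud) pairs from a decision
    log (hypothetical schema). Returns the flag rate per postal code."""
    totals, flags = defaultdict(int), defaultdict(int)
    for code, flagged in decisions:
        totals[code] += 1
        flags[code] += int(flagged)
    return {code: flags[code] / totals[code] for code in totals}

def disparate_codes(decisions, ratio_threshold=2.0):
    """Return postal codes whose flag rate exceeds the overall rate by
    more than ratio_threshold: a crude disparate-impact screen that an
    internal audit team could run periodically over the log."""
    overall = sum(flagged for _, flagged in decisions) / len(decisions)
    return [code for code, rate in fraud_flag_rates(decisions).items()
            if overall > 0 and rate / overall > ratio_threshold]
```

A screen like this does not prove discrimination, but it tells the audit team where to look before the model's behavior hardens into practice.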

There is more: some claims were rejected by the AI model because of a presumed inconsistency that was in fact the result of an error in the algorithm’s own “reasoning” path. In all these cases, the problem lies not in the technology itself, but in the lack of an adequate governance model built on operational transparency and structured safeguards.

DORA and AI Act: two regulations, one coherent message
The Digital Operational Resilience Act (DORA) and the European AI Act should not be read as competing regulations, but as complementary tools. DORA imposes strict requirements on ICT resilience and on the control of critical suppliers. The AI Act, for its part, introduces a risk-based classification of artificial intelligence systems, imposing specific requirements on those deemed “high risk” – a category that includes many insurance applications.

There are some key points to oversee. First of all, cybersecurity by design, which must extend to the AI models integrated into processes. It is then necessary to implement continuous assessment of technology providers, including those that supply models or services via API.

It is essential to ensure the traceability of automated decisions, with the ability to analyze, explain, intervene on and, if necessary, deactivate erroneous behaviors. Finally, it is fundamental to introduce continuous robustness testing, including in real operational contexts.
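
What might such traceability look like in practice? The sketch below is purely illustrative, with hypothetical names throughout (MODEL_ENABLED, claims-triage-v3): every automated decision is wrapped in an append-only audit record, and all calls pass through a central switch that can deactivate a misbehaving model:

```python
import json
import time
import uuid

# Central kill switch: a hypothetical registry of deployed models.
MODEL_ENABLED = {"claims-triage-v3": True}

def traced_decision(model_id, model_fn, features):
    """Run a model only if it is enabled, and persist an append-only audit
    record (inputs, output, model id, timestamp) so that every automated
    decision can later be analyzed, explained and, if wrong, contested."""
    if not MODEL_ENABLED.get(model_id, False):
        raise RuntimeError(f"{model_id} is deactivated: route to manual review")
    output = model_fn(features)
    record = {
        "decision_id": str(uuid.uuid4()),
        "model_id": model_id,
        "timestamp": time.time(),
        "features": features,  # the inputs that drove the decision
        "output": output,      # what the model decided
    }
    with open("decision_audit.jsonl", "a") as log:  # append-only trail
        log.write(json.dumps(record) + "\n")
    return record

# If audits or robustness tests reveal erroneous behavior, one line
# deactivates the model and every new call falls back to human review:
MODEL_ENABLED["claims-triage-v3"] = False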

Precisely because of its non-deterministic nature, AI must be governed as a strategic asset and regulated as a potential source of operational risk.

The challenge: governing an intelligence that is (still) not mature
To move from the adoption to the orchestration of AI, insurance companies must act along four fundamental directions.

First, a cross-functional governance model is needed, in which IT, compliance, risk management, data science, and business share languages, objectives, and responsibilities. Second, it is crucial to adopt a logic of explainability by design: the ability to track, understand, and correct the decisions made by AI models can no longer be considered optional (a minimal sketch follows the four directions below).

The third element concerns supply chain management: every external algorithm or dataset is a potential risk vector and must therefore be verified and certified. Finally, an evolving cyber readiness is needed: security is no longer just a technical IT function, but a strategic capability that must adapt as companies’ use of AI evolves.
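
To make the second direction concrete: one model-agnostic explainability probe is permutation importance, which measures how much a model's performance degrades when a single feature is shuffled. The sketch below is illustrative only; model_fn and score_fn are hypothetical callables assumed to be supplied by the surrounding pipeline, not any specific library's API:

```python
import random

def permutation_importance(model_fn, rows, feature, score_fn, n_repeats=5):
    """Shuffle one feature across the dataset and measure how much the
    model's score drops. A large drop means decisions lean heavily on
    that feature. rows are dicts that also carry the ground-truth label
    consumed by score_fn; model_fn and score_fn are supplied elsewhere
    (all names here are illustrative)."""
    baseline = score_fn(model_fn, rows)
    drops = []
    for _ in range(n_repeats):
        values = [row[feature] for row in rows]
        random.shuffle(values)  # break the feature/outcome relationship
        perturbed = [dict(row, **{feature: v}) for v, row in zip(values, rows)]
        drops.append(baseline - score_fn(model_fn, perturbed))
    return sum(drops) / n_repeats  # average score loss = importance
```

A large importance for a proxy variable such as a postal code would be an early warning of exactly the discriminatory drift described earlier in this article.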

The main Italian companies have long since started structured programs to face these challenges. The real difference, however, will be made not by those who simply “use AI”, but by those able to govern it with strategic vision and industrial discipline.

In the insurance world, where trust and reliability are vital assets, true innovation will be the kind capable not only of exploiting AI, but also of protecting the system from its flawed applications.

Whistleblowing

Whistleblowing is recognized as a fundamental tool for bringing wrongdoing to light; for it to work effectively, however, it is crucial to ensure adequate and balanced protection for whistleblowers. To this end, in order to guarantee that reporting persons are better protected from retaliation and negative consequences, and to encourage the use of the tool, Italy approved Legislative Decree No. 24 of 10 March 2023, transposing Directive (EU) 2019/1937 on the protection of persons who report breaches.

The decree aims to strengthen the legal protection of persons who report breaches of national or European provisions that harm the interests and/or integrity of the public or private entity to which they belong, and of which they became aware in the course of their work.
