Insurers Brace for AI Risk, Seek to Limit Exposure in Corporate Policies
The rapid proliferation of artificial intelligence is triggering a significant recalibration within the insurance industry, as major providers move to shield themselves from potentially massive claims arising from the technology’s inherent unpredictability. AIG, Great American, and WR Berkley are among those petitioning U.S. regulators for permission to offer policies that explicitly exclude liabilities arising from businesses’ use of AI tools, including increasingly common chatbots and automated agents.
A Growing ‘Black Box’ of Uncertainty
The insurance industry’s caution stems from the unique challenges AI presents. Unlike traditional technological risks, AI systems’ “hallucinations” (instances where models generate false or misleading information) and opaque decision-making processes create a level of uncertainty insurers are hesitant to underwrite. “It’s too much of a black box,” explains Dennis Bertram, head of cyber insurance for Europe at Mosaic, a specialist insurer operating within Lloyd’s of London. Even Mosaic, which offers some coverage for AI-enhanced software, is steering clear of underwriting risks associated with large language models like ChatGPT.
This reluctance isn’t merely theoretical. Recent high-profile incidents are already translating into substantial legal challenges. Solar company Wolf River Electric is pursuing a $110 million defamation lawsuit against Google, alleging that the tech giant’s AI Overviews feature falsely accused the company of legal wrongdoing. Similarly, Air Canada was compelled to honor a discount offered by its customer service chatbot, even though the chatbot had invented the policy. And British engineering firm Arup lost HK$200 million (approximately $25 million) after an employee at its Hong Kong office fell victim to a sophisticated fraud scheme in which scammers used a digitally cloned senior manager during a video conference.
Systemic Risk: The Billion-Dollar Question
The core concern for insurers isn’t isolated incidents, but the potential for systemic, aggregated risk. Kevin Kalinich, Aon’s head of cyber, highlights the industry’s capacity to absorb a $400 million or $500 million loss from a single company deploying flawed AI. However, he warns that a scenario involving “1,000 or 10,000 losses” – a widespread failure impacting numerous clients simultaneously – is a different order of magnitude. This fear is amplified by the fact that AI-driven errors often fall outside the scope of traditional cyber insurance, which typically covers security or privacy breaches.
The potential financial scale of unchecked AI risk is substantial. A recent World Economic Forum report projects that global cybercrime, which AI could exacerbate, will cost the world $10.5 trillion annually by 2025. That figure underscores the urgency with which insurers are attempting to understand and mitigate AI-related exposure.
Regulatory Scrutiny and Policy Adjustments
Insurers are responding to this evolving landscape through a variety of mechanisms. AIG, while currently holding off on implementing the exclusions it has sought approval for, stated in a filing with Illinois regulators that generative AI is a “wide-ranging technology” and that future claims will “likely increase over time.” Other companies, like QBE, are introducing “endorsements” (amendments to existing policies) to clarify coverage related to AI. Brokers caution, however, that these endorsements often narrow coverage: QBE’s endorsement capping payouts for fines under the EU’s AI Act at 2.5% of the total policy limit, for example, has raised concerns among industry observers.
The EU’s AI Act, considered the world’s most comprehensive attempt to regulate the technology, is forcing insurers to reassess their risk models. Zurich Insurance’s Ericson Chan points out that with previous technological errors, identifying responsibility was relatively straightforward. “With AI risk, you potentially involve many different parties – developers, model builders, and end users,” he explains. “The potential market impact could be exponential.”
The Looming Legal Battles
As AI-driven losses increase, insurance brokers and legal experts anticipate a surge in litigation. Aaron Le Marquer, head of insurance disputes at law firm Stewarts, predicts that “it will probably take a big systemic event for insurers to say, hang on, we never meant to cover this type of event.” This suggests a period of legal uncertainty lies ahead, as courts grapple with the complexities of assigning liability in cases involving AI failures. Chubb, for instance, has agreed to cover some AI risks but has explicitly excluded “widespread” incidents, signaling a cautious approach and a potential for future disputes.
The insurance industry’s response to AI is a critical indicator of the technology’s broader economic impact. By attempting to quantify and price the risks associated with AI, insurers are not only protecting their own bottom lines but also shaping how quickly and widely the technology is deployed. The coming years will likely see a continued evolution of insurance products and policies as the industry adapts to the ever-changing landscape of artificial intelligence.