Major insurers including AIG, Great American and W.R. Berkley are requesting regulatory approval to exclude liability claims arising from the use of artificial intelligence from standard corporate policies, citing the technology's unpredictable "black box" nature and the risk of simultaneous large-scale losses. Insurers fear that a single malfunctioning AI model could trigger thousands of claims at once, an exposure they consider uninsurable under traditional underwriting frameworks. According to recent reporting, the shift carries a critical warning for businesses racing to adopt AI: the infrastructure for transferring that risk may no longer exist. (TechCrunch, November 23; Financial Times, November 23; Barron's, November 24.)
Sources: TechCrunch, Financial Times, Barron's
Key Takeaways
– Insurers are actively moving to insert explicit exclusions for AI-related liabilities in corporate insurance policies, indicating they view AI risk as too uncertain and unquantifiable for standard underwriting.
– The retreat from coverage underlines a new friction point for the rapid business push to integrate AI: lack of insurability. Companies may have to absorb the risk themselves or postpone deployment.
– The decision signals a broader structural challenge in the AI ecosystem: when risk transfer via insurance fails, businesses and regulators must rethink who bears responsibility for AI errors, malfunctions or malfeasance.
In-Depth
In recent weeks the insurance sector has quietly begun signalling a major shift in how it treats the risks associated with artificial intelligence. For decades, insurers have sat at the heart of risk management: companies deploy new technologies or processes knowing that the insurance industry will absorb some of the fallout if things go wrong. Today that assumption is being challenged, as major carriers including AIG, Great American and W.R. Berkley have filed with U.S. regulators for permission to remove liability coverage for AI-related claims from corporate policies. The reason? AI is becoming too unpredictable, too opaque and potentially too systemic for traditional underwriting models to manage.
According to reporting by TechCrunch and the Financial Times, insurers describe many AI systems as "black boxes": models whose internal logic is often inscrutable even to their developers. In one cited example, a Google AI tool falsely claimed a company was under legal investigation, triggering a $110 million lawsuit. In another, an Air Canada chatbot invented a discount that the airline was later ordered to honour. Combine such errors with the fact that a single AI platform may be deployed across thousands of companies, and the prospect of many simultaneous losses (what insurers call correlated risk) becomes real.
Traditionally, insurers could underwrite individual clients, spread risk across many independent exposures and price against a defined loss. With AI, the scenario increasingly looks like thousands of clients being hit by a single model's failure or a systemic flaw in a widely used platform, which breaks the diversification assumptions underlying commercial insurance. One underwriter quoted in the FT said they could manage a $400 million loss for one company, but not an event that hits 10,000 companies at once. As one article put it: "We're about to find out what happens when the software that everyone's racing to adopt becomes too risky for anyone to insure."
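To see why correlation breaks the underwriting math, consider a deliberately simplified sketch. The short Python simulation below is purely illustrative, with hypothetical numbers (10,000 insured clients, a 0.1% annual failure rate, a $1 million loss per affected client); none of it comes from the insurers' filings. It compares a book of independent exposures with one in which every client depends on the same AI model.

    # Illustrative toy model: why correlated AI failures break diversification.
    # All numbers are hypothetical, not drawn from the reporting.
    import numpy as np

    rng = np.random.default_rng(0)

    N_CLIENTS = 10_000   # insured companies using AI tools
    P_FAIL = 0.001       # annual probability of an AI-related loss
    LOSS = 1_000_000     # dollars lost per affected client
    YEARS = 10_000       # simulated underwriting years

    # Independent exposures: the number of affected clients each year is binomial.
    independent = rng.binomial(N_CLIENTS, P_FAIL, size=YEARS) * LOSS

    # Shared model: in a bad year, one flaw hits every client simultaneously.
    correlated = (rng.random(YEARS) < P_FAIL) * N_CLIENTS * LOSS

    print(f"mean annual loss, independent: ${independent.mean():,.0f}")
    print(f"mean annual loss, correlated:  ${correlated.mean():,.0f}")
    print(f"worst year, independent:       ${independent.max():,.0f}")
    print(f"worst year, correlated:        ${correlated.max():,.0f}")

Under these toy assumptions both books have roughly the same expected loss of about $10 million a year, but the correlated book produces the occasional $10 billion year: exactly the kind of single event hitting 10,000 companies that the FT's underwriter says cannot be absorbed.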
What does this mean for businesses and for broader AI adoption? On the one hand, it introduces a new cost and risk calculus. Deploying AI is no longer just a question of how well you can integrate the tools; it is also a question of whether you are willing, or able, to accept the consequences if a tool makes a mistake. Without insurance, companies must self-insure by setting aside capital, or develop internal mitigation strategies. That could slow adoption, raise costs, or push companies toward more conservative uses of AI.
On the other hand, the industry's pullback opens the door to new specialty insurance products and alternative risk-transfer arrangements tailored to AI. A few niche insurers have already launched policies covering AI-specific failures (one Lloyd's of London product, for example, covers chatbot errors), but these are selective, limited and not yet scaled for the mass market. That mainstream carriers are seeking exclusions rather than offering new coverage suggests the status quo is under strain.
From a regulatory and governance perspective, the development raises questions about accountability and responsibility. If insurers won’t cover AI errors, who will? Will developers bear more liability? Will companies deploying AI face stricter demands for auditability, transparency or third-party review? Will regulators step in to ensure that certain exposures remain insurable, or will we see a bifurcation where only large firms can afford the risk?
For risk-conscious executives, especially in sectors such as finance, healthcare, manufacturing or defence where AI deployment is accelerating, this is a red flag. Insurers pulling back means not only higher self-insured retentions but also a need to scrutinize the AI systems themselves: What fallback plans exist if things go wrong? How are you testing the model? What governance do you have over third-party AI providers? How would you respond to a mass malfunction or coordinated misuse? The insurance retreat emphasizes that the technological promise of AI must now be matched by disciplined risk management.
In the end, the insurance industry's actions signal that one of the foundational support systems for enterprise risk is wobbling at the edges of AI deployment. For companies charting a path forward, this means AI should not be seen merely as an innovation driver; it must be managed as a risk driver. And with insurers stepping back, firms will have to fill that gap themselves. The era when you could "just adopt AI and rely on your insurer to handle losses" appears to be ending. The question now is whether the industry, regulators or new market entrants will step in to build a new model of risk transfer suited to the age of artificial intelligence.