AI Accountability Gets a Boost with New Insurance Offering
Lloyd’s of London, through the Toronto-based startup Armilla, has launched a pioneering insurance policy designed specifically for the AI era.
This coverage aims to protect companies from financial losses stemming from AI-related errors or malfunctions.
While Lloyd’s and Armilla are tapping into the booming AI market to grow their business—much like they have done with previous emerging risks—the move highlights an important reality: AI, despite its transformative potential, remains a significant business risk.
For companies hoping AI will lower operational costs, this new insurance signals a cautionary note: integrating AI could, in fact, add expenses, such as insurance premiums.
Armilla’s policy is structured to cover legal costs and potential damages if a company faces lawsuits related to harm caused by its AI products.
CEO Karthik Ramakrishnan suggests that beyond risk mitigation, this insurance could encourage wider AI adoption by easing fears around technology failures.
For example, in 2024, Air Canada faced costly repercussions after its AI chatbot mistakenly offered unauthorised discounts; a court ruling forced the airline to honour those offers.
Had Air Canada been insured under Armilla’s policy, some of those losses might have been mitigated.
However, the coverage is selective: Armilla insures an AI system only after a thorough evaluation confirms an acceptable risk profile, refusing to cover "lemon" models prone to failure.
This contrasts with some existing insurers who provide limited AI-related protection as part of broader tech error policies.
Ultimately, this new product reflects the evolving landscape where AI’s power is balanced by the very real risks it introduces—and the growing need for businesses to manage those risks proactively.
Risks of Trusting AI’s Made-Up Data in Decision-Making
The impact of companies relying on AI-generated falsehoods—known as hallucinations—can be profound, resulting in misguided decisions, financial setbacks, and reputational damage, according to industry news site PYMNTS.
The outlet also raises critical questions about accountability when AI systems produce such errors.
This concern aligns with insights from MJ Jiang, Chief Strategy Officer at Credibly, who recently told Inc that hallucinations in AI cannot be fully eliminated, only mitigated.
Jiang warns that companies face significant legal risks from these AI-induced mistakes and should proactively consider who bears responsibility if an AI error leads to harm.
She emphasises the importance of establishing robust mitigation strategies to minimise these risks.
In fact, she argues:
“…because GenAI cannot explain to you how it came up with the output, human governance will be essential in businesses where the use cases are of higher risk to the business.”
Business leaders and experts alike caution that adopting AI is far from risk-free and advocate for thorough preparation to ensure compliance and manage potential legal challenges.
Incorporating these considerations into your AI strategy and budget is essential for navigating the complex risks of AI implementation.