AI risks: What do insurers want to know about your use of AI?

10 November 2025


Last month, we visited some of our professional indemnity insurers in London. These visits are always interesting. As well as an opportunity to maintain important relationships, they offer us a window into what insurers are thinking about and which risks concern them. Their focus is often on new risks, partly because they wish to learn about them, but also because they want to make sure their insureds are aware of them and are managing them appropriately. Discussions such as these are also useful for insureds, helping them to become aware of, and manage, new and developing risks.

When we last visited, our insurers were focused on cyber risk, which was understandable given the increasing frequency and severity of cyber events. This year, all the insurers wanted to talk about artificial intelligence, or AI.

Our firm has been investing in AI-assisted legal tools since 2017, when we co-founded and invested in a legal technology company, McCarthyFinch, which created AuthorDocs, an AI-based legal drafting tool. This initial collaboration led to the development of further AI tools before McCarthyFinch was sold to a global legal management software provider. In 2023, we created an in-house generative AI chatbot named AIME. We are increasingly using external AI tools as well.

For insurers, this technology shift presents opportunities which we have talked about in previous editions of Cover to Cover, such as automating customer on-boarding, queries, claims, complaint responses and insurance broking. For professional services firms, it offers similar opportunities to automate and derive efficiencies from manual processes. However, it also presents risks and challenges. Insurers must ask: when a customer says “we use AI”, what exactly is happening, what could go wrong, how could it result in liability, and how should that feed into risk acceptance and pricing? These concerns are not unique to insurers of professional services firms.

What insurers are looking for

We found that insurers were interested in two things in particular:

  • Whether professional services firms are using AI without properly checking and verifying the outputs. Are they using it only as a useful tool, or crossing the line into allowing it to generate work product on which they rely?

  • Whether professional services firms are using confidential client data in AI tools they do not fully control. Is there a risk that client information could fall into the hands of the wrong people, or be used to educate or build an AI model in a way that risks the release of information to competitors or others?

Use of AI without proper checks

The recent New Zealand case of Wikeley v Kea Investments Ltd [2024] NZCA 609 illustrates the risk of using AI without sufficient human review, with the following commentary in a footnote:

We note Mr Wikeley’s original memorandum in response dated 10 September 2024 which Mr Wikeley withdrew after the apparent use of generative artificial intelligence in its drafting was drawn to our attention by respondent counsel. The use of generative artificial intelligence was not initially disclosed by Mr Wikeley, but was evident from the references to apparently non-existent cases. No further comment is necessary except to note the relevant guidance recently issued by the judiciary: Guidelines for use of generative artificial intelligence in Courts and Tribunals: Non-lawyers (Artificial Intelligence Advisory Group, 7 December 2023).

Mr Wikeley was a self-represented litigant (i.e. he had no lawyer), a class of persons who increasingly use AI to draft legal documents and who are unlikely to be insured for the consequences. Of more concern from an insurance perspective are qualified lawyers who do the same. A number of examples have emerged from the USA, and also from other common law jurisdictions such as England and Wales. In Ayinde v The London Borough of Haringey and Al-Haroun v Qatar National Bank [2025] EWHC 1383 (Admin), two otherwise unrelated cases were referred to a Divisional Court of the King’s Bench Division because the lawyers involved in each of them were suspected of having used AI to prepare legal documents without carrying out proper checks. The cases were referred under the Court’s power to regulate its own procedures and enforce the duties that lawyers owe to the Court.

The judgment begins by discussing lawyers’ duties to the Court in relation to the use of AI, and sets out the various breaches of ethical obligations that the improper use of AI can involve and the sanctions that may be imposed. It then discusses the circumstances of each case. In Ayinde, a lawyer cited in argument five case authorities that did not exist, suggesting that they were the product of the ‘hallucinations’ that AI tools are known to create. The Court also noted the use of American-style language, which it took as a further indication that an AI tool had prepared the submissions. A wasted costs order was made and the lawyer – who unwisely denied using AI and made further misleading statements in an attempt to extricate herself – narrowly avoided facing contempt proceedings, primarily because of her youth and inexperience. In Al-Haroun, the first instance judge found that correspondence and witness statements provided by a lay client to his lawyers made numerous references to cases that did not exist or, where they did exist, cited them for propositions they did not support. The lawyers admitted they had used the material provided by the client without checking it. The Court referred them to the Solicitors Regulation Authority for further investigation. Amusingly, the judgment concludes with a list of similar cases from a number of jurisdictions, including Wikeley.

This sort of thing is not limited to the legal industry. In Australia, it was recently reported that Deloitte Australia will partially refund the AUD440,000 that the Australian Government’s Department of Employment and Workplace Relations had paid for a report littered with apparent AI-generated errors, including a fabricated quote from a federal court judgment and references to non-existent academic research papers. The report was published on the Department’s website in July 2025, but a revised version was published in October 2025 after a Sydney University researcher alerted the media that the report was “full of fabricated references.” In fairness, Deloitte reported that its use of AI had been disclosed and that its conclusions did not change when the errors were corrected.

Where lawyers or other professionals use AI tools to generate court documents or similar work product without adequate checking, the outcome will often be limited to embarrassment and regulatory action against the individual or firm. Where professionals use AI improperly for other substantive work, however, the consequences may be much more serious for both the professional and the client. Where a lawyer improperly uses AI to prepare important advice upon which a client relies, for instance, this could result in a substantial liability claim. Similarly, an accountant or auditor could become liable for loss suffered by a client due to financial mismanagement, or an engineer or architect could face liability for works that failed or required remedial work because of design defects. It is not surprising that insurers are interested in knowing whether insureds appreciate these risks and what they are doing to manage them.

Use of AI with confidential or legally privileged information

Confidentiality is also at risk from the improper use of AI. Generative AI systems require access to extensive volumes of data to ‘train’ their models. Some of this data is obtained from the information that users enter into the systems. This presents obvious risks where the information is confidential. While the information itself may not be made available verbatim, or in its entirety, it may inform the responses the AI model gives to other inquiries in a way that reveals the confidential information.

Even widely used and reputable AI tools are susceptible to this. Because the risk is inherent in the nature of the tools, providers of general (i.e. non-specialist) AI tools are not usually willing, in their terms and conditions of use, to agree to keep users’ information confidential. Providers of specialist tools may be more willing to provide this sort of assurance.

This presents difficulties for users who wish to input confidential information to generate work product. Some professional services firms have responded by building their own AI tools, such as our firm’s AIME chatbot. Ownership and control of the AI tool provides assurance that confidential information will not be used to train a model that will then provide information to others. This is a limited solution, however, because for most users an in-house tool will not match the sophistication that a well-resourced specialist provider can offer.

A related risk is that material generated with the use of AI will inadvertently contain or rely upon content that is subject to intellectual property rights or to obligations of confidentiality owed to third parties. Users may need to verify the data sources and their entitlement to use them, which can be difficult where the AI tool does not provide reliable source data. If users rely upon AI-generated content for their own work product, advice, designs, marketing material or other purposes, they may be exposing themselves to claims for breach of intellectual property rights or breach of confidentiality.

Specific factors insurers will want to see

Insurers will expect to see that insureds have set expectations and guidelines for the proper use of AI tools by their workforce. This will include:

  • Clear guidance as to which AI tools are appropriate for use – some are more reliable or appropriate for certain tasks than others, particularly with respect to confidentiality (discussed above)

  • Appropriate training to educate the workforce with respect to the risks of AI-generated work product, including the risks of “hallucinations” and incorrect conclusions

  • Most importantly, rules or processes that require a human check on work product prepared with the assistance of AI

Insurers wish to see that insureds have considered these risks carefully and have put in place the necessary strategies to deal with them. At present, this work is in its infancy. In its 2024 AI Index, Datacom reported that only 13% of businesses using AI had established audit assurance and governance frameworks and fewer than half had implemented staff policies for AI usage. Only 33% provided awareness training for employees. This should raise concerns for insurers.

The New Zealand Government has issued guidance to the public sector in the form of the Public Service AI Framework. Its purpose is described as being to support Public Service agencies to explore and adopt Generative AI systems in ways that are safe, transparent and responsible, and which effectively balance risks with potential benefits. Fundamental aspects include governance, security, procurement, skills, misinformation and accountability. A separate section deals with customer service considerations, including transparency, bias, accessibility and privacy.

We anticipate that insurers’ expectations will develop and increase as AI tools become more sophisticated and their use becomes more widespread. Insurers will increasingly expect to see:

  • Developed, formal procedures to govern the use of AI, including checks to ensure that those procedures have been followed

  • Evidence of specific training and education programmes that achieve required standards

  • Consideration being given to which tasks AI is appropriately used for and which it is not

  • Use of reputable and recognised AI tools for confidential information

We expect insurers will also increasingly ask questions about insureds’ use of AI. Topics may include the following:

  • What is AI used for? Is it used for routine, low-risk automation or administrative tasks, or higher-risk functions such as decision support for critical matters or systems, such as in professional advice, healthcare, or industrial control systems? The greater the consequences of failure, the greater the likely scrutiny.

  • What is the provenance of the AI system, and what is its training and data environment? Was the AI system developed, and the model trained, in-house, or is it delivered by an external vendor? What training data sets were used, and are there known limitations or data-quality concerns?

  • What governance & human oversight will be used? Who is responsible for AI outcomes and is there always a human reviewer for important decisions or work product? What validation and monitoring will be employed? Are there ongoing monitoring and validation programmes to check and validate the model? Has the insured experienced prior incidents involving AI and what remedial actions have been taken?

  • What security and data protection rules will apply? How is confidential data stored and protected? Are there vulnerabilities in the system and does it rely on vendors who may be offshore? 

  • Is there regulatory awareness? Has the insured considered regulatory risks and is it monitoring emerging regulatory developments?

When businesses cannot provide clear, documented answers to these questions, insurers may respond with higher premiums, reduced capacity, sub-limits or exclusions. Some insurers may develop policy wording that explicitly references AI, for example by introducing exclusions or sub-limits for loss resulting from AI outputs that are used without adequate human oversight.

Concluding remarks

For insurers, AI presents new liability, regulatory and business risks. We expect to see insurers adopting increasingly sophisticated approaches to interrogating customers as to their use of AI and developing expectations around governance, oversight and risk-management. Insureds who can demonstrate a professional and disciplined approach to AI risk will be well placed to manage those inquiries.