AI exposure warning for market


Growing awareness of the scale and severity of financial risks posed by artificial intelligence (AI) means that, in the next few years, London market underwriters will need to review their policies in a search for hidden AI exposure.

The warning comes from global law firm Clyde & Co speaking at a seminar on the impact of AI on underwriting.

In much the same way as the market spent several years hunting for hidden – so-called non-affirmative – cyber exposures within non-cyber insurance policies, it will likely need to conduct the same exercise for AI.

According to Neil Beresford, partner at Clyde & Co, the reason this search will be necessary is a growing realisation that AI can affect a broad range of risks in ways that underwriters had not foreseen.

Beresford cited the case of a major international airline which experienced a week of severe disruption when its systems were accidentally interrupted; the resulting claims run into tens of millions of pounds. Other examples include hundreds of millions of dollars of trading losses caused by minor coding errors.

Beresford said: “It’s becoming apparent that AI risk can affect a very broad range of insurance products including general liability, professional indemnity and contractors’ cover. This AI risk is not just being found in cyber products, it’s gone mainstream. And what’s giving rise to concern is that these policies containing hidden AI exposure pose a greater risk than ever before. We’re now seeing claims in which interrupting an AI system’s functioning has caused massive financial losses.

“When these AI-related claims occur, they are of a different order of technical complexity. The insurance industry will need a suite of experts to understand how a failure has occurred and what caused it. Was it the hardware, was it the software, was it human intervention or was it the logical – but morally neutral – response of the AI system to its environment?

“Underwriters will need to think about their exclusions very carefully when considering risks with an AI element.”

At the Clyde & Co seminar on the underwriting challenges created by AI, Beresford and associate Natasha Lioubimova examined several case studies of AI failures and their impacts. These included:

·         Deaths and injuries to workers in car assembly plants caused by robots. One such example was Wanda Holbrook, a maintenance technician, who was killed in 2015 by a robot at a Michigan auto-parts maker.

·         Microsoft’s AI chatbot Tay, which, within 16 hours of going live, began to post abusive and racist messages on Twitter.

·         Amazon’s 2017 service outage, which was caused by an engineer’s typographical mistake.

Beresford and Lioubimova also discussed a hypothetical scenario in which a carebot programmed to learn from its environment struck one of its elderly charge’s grandchildren because it had interpreted the children’s playfighting as a means of showing affection. One of the major challenges created by AI, according to Clyde & Co’s experts, is determining whether an AI product is defective: in this scenario, the device did exactly what it was designed to do, learning from the behaviour it encountered.

Beresford said: “This is a fascinating hypothetical scenario. The robot was programmed to learn from its environment and that’s precisely what it did. So was it defective? This scenario also raises the issue of consumer standards. To what standard would we expect a care robot to operate? At present, those standards don’t exist as they do in other areas of consumer law.”

Beresford and Lioubimova concluded that robots such as carebots may ultimately need compulsory insurance.

The lawyers also noted two common features of claims: first, the absence of any international coding standards in the field of AI by which to assess the existence of a defect; and second, an all-too-frequent lack of back-up systems to cope with the failure of AI systems.