
Insurance companies are betting on AI and mass data analytics in a battle against fraud that costs billions


  • Fraud costs insurers in the United States about $308.6 billion every year.
  • Nearly 60% of insurers already use AI to combat fraud.
  • Third-party developers are piloting generative-AI tools to aid fraud investigators.
  • This article is part of “Build IT,” a series about digital tech and innovation trends that are disrupting industries.

Insurance companies are facing a slew of challenges. Already plagued by inflation and haunted by the climate crisis, they’re also in an arms race against fraud.

The day-to-day of this computational war might not be as dramatic as Alan Turing standing in front of a 7-foot-wide computer to decipher the Enigma code. But the insurance-fraud battle follows the same premise: As fraudsters use new tech, so, too, must the detectors. 

Many insurers agree that AI, more than any other technology, will be the game changer in this space over the next five years.

The driving force behind this race is money, and lots of it. Of the $2.5 trillion Americans pay into the insurance industry each year, the Coalition Against Insurance Fraud estimates that $308.6 billion goes out the door on fraudulent claims. That means about 12% of what US customers pay is funneled to dishonest claimants.

Losses from insurance fraud are nearly double what they were 30 years ago. On the line are funds that could otherwise go toward potentially life-changing payments. And insurers are feeling the heat from digital fraudsters far more than other online industries.

It’s made insurers eager to see what else AI can do for their counter-fraud teams.

Fraudsters are using AI

Nearly 60% of insurance companies already use AI, such as machine learning, to help detect regular old fraud. Now they face a new challenge: fraudsters have AI at their fingertips, too.

Scott Clayton, the head of claims fraud at Zurich Insurance Group, said shallowfakes — manipulated images made manually with the help of photo-editing software — keep him awake at night. But a flood of AI-based forgeries, or “deepfakes,” is another threat on the horizon.

“I kind of half joke that when deepfake affects us significantly, it’s probably about the time for me to get out,” Clayton said. “Because at that point, I’m not sure that we’ll be able to keep pace with it.”


And this isn’t a problem of the future. Arnaud Grapinet, the chief data scientist of Shift Technology, said that in recent months he’s seen an uptick in deepfaked claims turning up in his data.

“The proportion doing it is still low, but the thing is, people doing it, they do it at scale,” Grapinet told Insider.

An AXA Research Fund study on its market in Spain found that most fraudulent claims are for real incidents, but the claimant tacks on exaggerated damages. These opportunistic fraudsters usually fake it only once and for less than 600 euros, or about $635. 

On the other hand, around 40% of fraud is premeditated, and these cases can cost insurance companies upwards of 3,000 euros, or around $3,170, according to the study.

This costlier category is where deepfakes are starting to come in. Unlike the one-offs committed by opportunistic fraudsters, those who use deepfakes have the power to create hundreds of forged images.


So counter-fraud teams are turning to software development kits like Truepic’s and OpenOrigins’ Secure Source, which record camera data that verifies an image’s authenticity. While these technologies alone won’t be able to detect opportunistic fraud, they’re certainly becoming part of the modern fraud investigator’s tool kit.
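
The SDKs’ internals aren’t public, but the underlying idea, cryptographically binding an image’s pixels to its capture metadata so that any later edit is detectable, can be sketched in a few lines. The sketch below is a deliberate simplification: real provenance systems use hardware-backed certificates and standards like C2PA rather than a shared secret, and every name in it is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in: real provenance SDKs use hardware-backed device
# certificates, not a shared secret like this.
DEVICE_KEY = b"device-secret-provisioned-at-manufacture"

def sign_capture(image_bytes: bytes, metadata: dict) -> str:
    """At capture time, bind the pixels to the camera metadata."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, metadata: dict, signature: str) -> bool:
    """At claims time, any edit to the pixels or metadata breaks the signature."""
    expected = sign_capture(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

photo = b"...raw image bytes..."
meta = {"device": "phone-123", "timestamp": "2023-08-01T10:15:00Z"}
sig = sign_capture(photo, meta)
assert verify_capture(photo, meta, sig)             # untouched image passes
assert not verify_capture(photo + b"x", meta, sig)  # edited image fails
```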

Current AI tech in insurance delivers fraud alerts, and generative-AI additions will act as personal assistants

When handlers review a claim, they might also receive an alert flagging suspicious activity. At that point, the claim is passed to an investigator to determine whether there really is fraud.

“The reality is that we’re still relatively immature in terms of using true AI in fraud detection,” Clayton said.

But the insurance-fraud-detection market is expected to grow from $5 billion in 2023 to $17 billion in 2028.

Most programming in current fraud-detection systems is rules-based. If an insurer tells the program a particular kind of evidence is suspicious, such as an abnormal frequency of uploads, the engine knows to flag those cases to investigators. 
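
As a rough illustration, a rules-based screen can be as simple as a list of named predicates run over each claim. The rules and thresholds below are invented for the example, not any insurer’s actual criteria.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    amount: float
    uploads_last_24h: int
    days_since_policy_start: int
    flags: list = field(default_factory=list)

# Hand-written rules: each pairs a name with a test over the claim.
RULES = [
    ("abnormal_upload_frequency", lambda c: c.uploads_last_24h > 20),
    ("claim_soon_after_purchase", lambda c: c.days_since_policy_start < 14),
    ("high_value_claim",          lambda c: c.amount > 10_000),
]

def screen(claim: Claim) -> Claim:
    """Attach the name of every rule the claim trips; handlers see the flags."""
    claim.flags = [name for name, test in RULES if test(claim)]
    return claim

c = screen(Claim("C-001", amount=12_500, uploads_last_24h=35, days_since_policy_start=7))
print(c.flags)  # ['abnormal_upload_frequency', 'claim_soon_after_purchase', 'high_value_claim']
```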

Rules-based systems are a relatively low lift for developers at insurance companies to use and maintain, but it’s difficult to add new rules and to know which rules to hard-code in the first place.

In the past 10 years, various third-party developers like Friss, IBM, and Shift Technology have started tailoring machine-learning systems to insurance companies. Rather than just hard coding rules for the engine to follow, data scientists can show it thousands of examples of fraudulent materials, and it discovers fraudulent patterns on its own. 
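
The shift, in code, is from writing predicates to fitting a model on labeled historical claims. Here is a minimal sketch, with synthetic data and invented features standing in for the real thing.

```python
# Toy illustration of the machine-learning approach: instead of hand-written
# rules, a model learns suspicious patterns from labeled historical claims.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Invented feature columns: claim amount, uploads in 24h, days since policy start
X = rng.normal(size=(1000, 3))
# Synthetic labels: 1 = fraudulent, 0 = legitimate
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1.5).astype(int)

model = GradientBoostingClassifier().fit(X, y)
suspicion = model.predict_proba(X[:5])[:, 1]  # probability each claim is fraudulent
print(suspicion.round(2))
```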

Shift Technology’s fraud-detection software identifies and highlights a network of fraudulent claims.

For example, Shift Technology has shown its model millions of materials from its clients and data partners, such as claims, medical records, correspondence between attorneys, first notices of loss, and pictures of damage. Representatives from the company said its current model finds three times more fraud than manual or rules-based tools.

And developers are working to apply AI to insurance through more than just their current machine-learning systems. 

Grapinet and his team are piloting a generative-AI system to help investigators with tedious tasks like scrutinizing 100-page documents. The less time they have to spend reading records, the more they can spend arbitrating complex cases.
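
Shift hasn’t published the pilot’s internals, but the general pattern (chunk a long claim file, then ask a large language model for the points worth a closer look) can be sketched. OpenAI’s Python client serves as a stand-in below; the model name, prompt, and chunk size are all assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_claim_file(text: str, chunk_chars: int = 8000) -> list[str]:
    """Split a long document into chunks and collect fraud-relevant notes."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    notes = []
    for chunk in chunks:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice
            messages=[
                {"role": "system", "content": "You assist an insurance fraud investigator."},
                {"role": "user", "content": "List inconsistencies, dates, amounts, and "
                                            "parties worth a closer look:\n\n" + chunk},
            ],
        )
        notes.append(resp.choices[0].message.content)
    return notes
```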

AI insurance-tech applications are challenged by data availability and regulation 

One of Shift Technology’s top priorities is adding transparency to its AI.

“When you have AI interacting with humans, what’s very important is explainability,” Grapinet said. “You cannot just have a black box.”  
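
For a toy sense of what that can mean in practice, here is a sketch in which a linear model’s per-claim score decomposes into named feature contributions that could ship alongside an alert. The feature names and data are invented.

```python
# A toy contrast with the "black box": a linear model whose per-claim score
# breaks down into named contributions an investigator can read.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["claim_amount", "uploads_24h", "days_since_policy_start"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # synthetic fraud labels

model = LogisticRegression().fit(X, y)

claim = X[0]
contributions = model.coef_[0] * claim  # each feature's push toward "fraud"
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")
```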

While transparency ranks among insurers’ top concerns for using AI, it’s surpassed by worries about data quality, lack of data, and model bias.

“For any given insurer, it’s very difficult for them to build their own internal fraud model because you need a lot of data for AI to be trained and to learn and to improve over time,” said Rob Galbraith, the author of “The End of Insurance As We Know It.” 


As insurers weigh their appetite for third-party software against developing a proprietary system, those third-party startups and enterprise companies are leveraging their ability to host massive, cross-market datasets. 

“Seeing those cases that are connected to not just a single insurer, you’re not going to see that stuff trusting the 50-year grizzled insurance investigator who is really, really good at their job, but just doesn’t have the breadth to see all of that that’s going on,” said Rob Morton, the head of corporate communications at Shift Technology. 
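
The breadth argument is straightforward to sketch: claims that share an identifier, such as a phone number or bank account, form rings that surface only when data from several insurers sits in one graph. Here is a toy example using the networkx library, with invented claims and identifiers.

```python
import networkx as nx

# Invented claims: (claim ID, insurer, shared identifier)
claims = [
    ("claim_A1", "insurer_A", "phone_555"),
    ("claim_B7", "insurer_B", "phone_555"),
    ("claim_C3", "insurer_C", "phone_555"),
    ("claim_A2", "insurer_A", "phone_999"),
]

G = nx.Graph()
for claim_id, insurer, phone in claims:
    G.add_edge(claim_id, phone, insurer=insurer)

# Connected components spanning several claims are candidate fraud rings;
# no single insurer's slice of the data would reveal the first one.
for ring in nx.connected_components(G):
    claims_in_ring = [n for n in ring if n.startswith("claim")]
    if len(claims_in_ring) > 1:
        print("possible ring:", sorted(claims_in_ring))
```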

But as more scrutiny shifts to AI, there are also regulators to contend with. Employees with the expertise and bandwidth to manage data compliance and documentation are in high demand. 

Then there’s the question of how to regulate third-party providers, and the insurers working with those providers, especially since just a handful of companies might become tools for a large part of the industry.  

“It’s still a very evolving area; the best practices aren’t fully set in stone,” Galbraith said. 

And third-party and proprietary models alike can be hard-pressed to detect forms of fraud they didn’t learn from their training materials.

“We’re only as good as the stuff we know about,” Clayton said. “The more that we invest and the more that we spend in terms of detection tools, the more that we find.”
