Artificial Intelligence Using Large Language Models Can Accurately Evaluate Risk in Liability Claims

It Is All About the Prompts and Post-Processing Techniques

The race to use Artificial Intelligence (AI) and Large Language Models (LLMs) to improve results is on, and the Insurance Industry is engaged. The industry has moved rapidly beyond debating whether it should use AI and LLMs and is now identifying the best use cases for these new techniques and implementing solutions. One key use case is the evaluation of risk in liability claims. Understanding the drivers of exposure in liability claims leads to more timely and accurate reserving, improved mitigation opportunities, and better underwriting decisions. Historically, risk evaluation in liability claims has been a highly manual process requiring experienced professionals to carefully review mountains of data and make the connections. Fortunately, AI and LLM techniques can improve the efficiency and accuracy of this critical work. However, an accurate evaluation requires using the right prompts (questions) to extract the risk signals that matter.

The Injury Checklist: An Insufficient Approach

Not that long ago, many insurers relied primarily on a list of injury types that often resulted in severe claims. For example, claim professionals were asked to promptly escalate claims involving death, disfigurement, loss of limbs, brain trauma, or paralysis. These injury checklists became so ubiquitous that they were even included in policy claim loss reporting requirements. Severity analysis was focused primarily on the type of alleged injury. Even as severity analysis and predictive analytics moved beyond the simple checklist-and-escalate approach, the focus remained on the alleged injury as the basis for evaluating exposure. Injury data remains a key factor in evaluating liability claims, but finding the relevant injury information with AI and LLM techniques requires the right prompts.

For example, a prompt that only looks for a reference to a particular type of injury may return results in which the injury was considered and ruled out. That can be meaningful information, but it can also lead to misreading a signal. Well-designed prompts coupled with post-processing techniques answer more sophisticated questions, distinguishing claims involving a mild (temporary) traumatic brain injury from those involving a severe (permanent) brain injury, and both from claims with no evidence of a traumatic brain injury at all. Prompts can also be used to identify claims with signals that a more severe injury may be developing.
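As a rough illustration of this idea, the sketch below shows a prompt that forces the model to choose among injury states (so a ruled-out injury is never conflated with a present one) and a post-processing step that validates the response. The prompt wording, the status labels, and the `post_process` function are all hypothetical assumptions for illustration, not a production design.

```python
import json

# Hypothetical prompt: rather than asking "does the record mention a
# brain injury?", it forces a choice among states that separate a
# ruled-out injury from a present one, and a mild (temporary) TBI
# from a severe (permanent) one.
TBI_PROMPT = """You are reviewing a liability claim file.
Classify the traumatic brain injury (TBI) evidence as exactly one of:
- "none": no mention of a TBI
- "ruled_out": a TBI was considered and ruled out
- "mild_temporary": evidence of a mild or temporary TBI
- "severe_permanent": evidence of a severe or permanent TBI
Also flag whether the file contains signals that a more severe injury
may be developing.
Respond with JSON only:
{"tbi_status": "...", "worsening_signal": true or false}

Claim file:
"""

VALID_STATUSES = {"none", "ruled_out", "mild_temporary", "severe_permanent"}

def post_process(raw_response: str) -> dict:
    """Validate the model's JSON so a malformed or off-schema answer is
    rejected rather than silently misclassifying a claim."""
    parsed = json.loads(raw_response)
    if parsed.get("tbi_status") not in VALID_STATUSES:
        raise ValueError(f"unexpected tbi_status: {parsed.get('tbi_status')!r}")
    if not isinstance(parsed.get("worsening_signal"), bool):
        raise ValueError("worsening_signal must be true or false")
    return parsed

# A response in which the injury was ruled out stays distinct from one
# with no mention of the injury at all.
result = post_process('{"tbi_status": "ruled_out", "worsening_signal": false}')
print(result["tbi_status"])  # → ruled_out
```

The validation step matters as much as the prompt: an answer outside the agreed schema is surfaced as an error for human review instead of flowing into the risk model as a false signal.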

Accurately evaluating alleged injuries remains fundamental to evaluating risk in liability claims, but any model that focuses only on the alleged injuries is missing critical components. Experienced claim professionals can readily point to examples of unexpected results driven not by the severity of the injury but by causation or liability factors.

Causation or Liability Exposure

Jurors do not like defendants who are perceived to be bad actors. In theory, unless punitive damages are awarded, jurors should compensate claimants for the injury suffered and not “punish” defendants for actions they consider to be more than mere negligence. But jurors come from the real world, where human behavior can override legal theory. Nuclear verdicts are often driven more by egregious liability facts than by the severity of the injury. Jurors frequently award higher damages in cases involving distracted or impaired driving, cases in which defendants disregarded clear safety concerns, or cases in which defendants earned extraordinary profits despite an obvious risk of injury. It is therefore critical that exposure analysis look for and evaluate the risk signals that inflame jurors’ anger or sympathy.

Moreover, skilled attorneys representing claimants use techniques to increase the likelihood that jurors will award substantial damages. Some of these techniques may only become visible at trial, but the foundation for those arguments may be present in pleadings and discovery well before jury selection. Accurate AI and LLM evaluation requires prompts that look for the foundations of reptile theory, excessive discovery (often funded by third parties), and life care plans that go beyond what is reasonably necessary. These and other liability-related signals drive exposure, particularly in jurisdictions that tolerate or expressly permit these strategies.
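One way to operationalize this kind of signal hunting is to maintain a catalogue of liability-exposure questions and pair each one with the claim file and the relevant jurisdiction. The sketch below is a minimal illustration under stated assumptions: the signal names, prompt wording, and `build_signal_prompts` helper are hypothetical, and real prompts would be tuned to local law and reviewed by claim professionals.

```python
# Hypothetical catalogue of liability-exposure signals. Each question
# demands a yes/no answer plus the supporting passage, so downstream
# post-processing can verify the answer against the claim file.
LIABILITY_SIGNAL_PROMPTS = {
    "reptile_theory": (
        "Do the pleadings or deposition questions frame the defendant's "
        "conduct as a violation of a broad community 'safety rule' rather "
        "than negligence toward this claimant? Answer yes or no and quote "
        "the supporting passage."
    ),
    "excessive_discovery": (
        "Is there evidence of unusually broad discovery, or of third-party "
        "litigation funding supporting it? Answer yes or no and quote the "
        "supporting passage."
    ),
    "inflated_life_care_plan": (
        "Does the life care plan include treatment or services beyond what "
        "the documented injuries make reasonably necessary? Answer yes or "
        "no and quote the supporting passage."
    ),
}

def build_signal_prompts(claim_text: str, jurisdiction: str) -> list:
    """Pair each signal question with the claim file and jurisdiction,
    since the weight of a signal depends on whether the local courts
    tolerate the underlying strategy."""
    return [
        {
            "signal": name,
            "prompt": (
                f"Jurisdiction: {jurisdiction}\n\n{question}\n\n"
                f"Claim file:\n{claim_text}"
            ),
        }
        for name, question in LIABILITY_SIGNAL_PROMPTS.items()
    ]

battery = build_signal_prompts("(claim file text)", "FL")
print([item["signal"] for item in battery])
```

Keeping the catalogue as data rather than hard-coded prompts makes it easy to add or retire signals per jurisdiction as case law and plaintiff tactics evolve.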

Claim Handling Exposure

In many jurisdictions, claimants’ attorneys can increase exposure by challenging claim handling decisions. If not properly identified, evaluated, and responded to, time-limit demands, policy-limit demands, and settlement demands with specific conditions for acceptance can create potential exposure beyond policy limits. Underwriters may have intentionally attempted to limit exposure by offering lower policy limits; that technique can be effective only if the claim is properly handled. Accurate AI and LLM models should include prompts that look for, and evaluate in the context of the law of the relevant jurisdiction, signals that claimants’ attorneys are laying the foundation to assert improper claim handling.
