By DuWayne Kilbo on Tuesday, 23 April 2019
Category: Underwriting

A Harbinger of Things to Come in Other States?

 

Algorithmic and Predictive (aka Accelerated) Underwriting Hits a Speed Bump in New York

In January of this year, the New York Department of Financial Services (DFS) issued an Insurance Circular Letter on the Use of External Consumer Data and Information Sources in Underwriting for Life Insurance.  According to the DFS, the purpose of the Letter was "to advise insurers authorized to write life insurance in New York of their statutory obligations regarding the use of external data and information sources in underwriting for life insurance."

While on the surface this seems like a rather mundane and benign Letter, it has rattled life carriers that use algorithmic and predictive data modeling and prompted countless internal discussions. These data-driven models are notably employed in processes such as accelerated underwriting.

Central to the DFS Letter is:

"An insurer should not use an external data source, algorithm or predictive model for underwriting or rating purposes unless the insurer can establish that the data source does not use and is not based in any way on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, or sexual orientation in any manner, or any other protected class."

And further:

"Where an insurer is using external data sources or predictive models, the reasons for a declination, limitation, rate differential or other adverse underwriting decision provided to the insured or potential insured should include details about all information upon which the insurer based such decision…."

So what does this mean and what is the Letter actually saying?

Using technology to speed an application through the underwriting process and to improve a consumer's buying experience is widely viewed as both expedient and beneficial.  After all, who doesn't want this?  Clearly this is a win-win for all involved.

However, the Letter emphasizes that carriers need to be absolutely certain they are not violating the rights of protected classes of people. And if adverse action is taken (perhaps even defined as simply removing an applicant from an accelerated underwriting process), carriers must be fully transparent about the reasons for that action.

Many of the underwriting data models in use today are very complex, employing a profusion of data points and analytics. As a result, it may be extremely difficult to know all the interactions among the data and whether unfair discrimination enters at some point in the analytical process. Explaining an adverse outcome in specific detail may be just as difficult.
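
To make the challenge concrete, here is a minimal, purely illustrative sketch, written in Python with hypothetical field names and made-up data (not any actual carrier model or vendor dataset), of one way an analytics team might probe whether an external data field is acting as a proxy for a protected attribute:

```python
# Hypothetical audit data: a few external underwriting inputs alongside a
# protected attribute collected solely for compliance testing. All field
# names and values are illustrative.
import pandas as pd

audit = pd.DataFrame({
    "credit_attribute":    [620, 710, 580, 690, 740, 600],
    "home_owner":          [0, 1, 0, 1, 1, 0],
    "education_years":     [12, 16, 12, 14, 18, 12],
    "protected_attribute": [1, 0, 1, 0, 0, 1],  # audit-only flag
})

# Correlation of each model input with the protected attribute; values near
# +1 or -1 suggest the field may act as a proxy and deserves closer review.
proxy_check = audit.drop(columns="protected_attribute").corrwith(
    audit["protected_attribute"]
)
print(proxy_check.sort_values(key=abs, ascending=False))
```

A strong correlation would not prove unfair discrimination on its own, but it would flag the field for exactly the kind of closer review the DFS expects carriers to perform for themselves.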

In speaking with some of my carrier friends, they acknowledged that the assurance they receive from some third-party data services contracted by the carriers amounts to an implicit "trust us, the model doesn't discriminate." However, the DFS has laid the burden of proving non-discrimination at the feet of the carriers:

"An insurer may not simply rely on a vendor's claim of non-discrimination or the proprietary nature of a third-party process as justification for a failure to independently determine compliance with anti-discrimination laws. The burden remains with the insurer at all times."

So for carriers there is no wiggling out of this high standard: either comply or withdraw their predictive and algorithmic underwriting models from New York.

The DFS, I'm told, has asked all carriers doing business in the state to answer a slate of questions about their specific predictive and algorithmic underwriting models. Answering these questions may ultimately require some tweaking of models or, more likely, explaining to the DFS why a specific program doesn't discriminate.

The Stakes Are High

The stakes are high for what the DFS is demanding. On one hand, the industry needs to find ways to remove obstacles and write more prospective insureds, and data-driven models that allow applicants to forgo life insurance exams and lab requirements are a clear path toward that goal. On the other hand, carriers need to be careful with the data they are using and with the intended or unintended consequences it may have.

While the actual letter of the law concerning discrimination may not be violated by various predictive models, the data being employed may provide an inadvertent way to violate its spirit. Some models in use today take into account home ownership, credit attributes, educational attainment, and business and professional licensing, among other things. Could these and other data elements, by their very nature, discriminate against a protected class, either by themselves or in conjunction with other data points? These are difficult questions to answer without seeing the underwriting outcomes generated by various models. If protected classes show a lower level of underwriting acceptance, then the answer is "maybe" and the process deserves much more scrutiny and review.
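
One hedged starting point, again purely illustrative and not drawn from the Circular Letter or any specific carrier's program, is to compare acceptance rates across groups in the decisions a model actually produces:

```python
# Hypothetical decision log: one row per applicant, recording the group being
# audited and whether the applicant remained in the accelerated path.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "accepted": [1, 1, 0, 1, 1, 0, 0, 1],
})

# Acceptance rate per group.
rates = decisions.groupby("group")["accepted"].mean()

# Disparate-impact style ratio: lowest group rate versus highest group rate.
# The 0.8 threshold is a common rule of thumb, not a regulatory standard.
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Acceptance gap warrants closer review of the model and its inputs.")
```

A low ratio would not settle the question by itself, but it is the kind of outcome evidence that justifies the "much more scrutiny and review" described above.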

In addition, there is concern among carriers that the DFS's actions will prompt other states to follow with similar or even more extensive questioning about data-driven processes. For all the positives gained from technology-driven models, carriers will be required to spend a considerable amount of time and energy to defend, modify, or remove what they are doing today.

A Step Backward?

This could be a step backward. But if some models are found to be discriminatory, the added scrutiny is worthwhile; those models need to be weeded out as soon as possible. The industry doesn't want the negative publicity generated by processes that may be problematic or discriminatory. We've all worked hard to develop a positive consumer image, and we need to maintain and build upon where we are today.

Windsor will update you as further developments occur with the DFS and other state departments of insurance. Stay tuned as this topic continues to unfold.
