Big Data, Big Discussions

Neil Sprackling, president of Swiss Re Life & Health America Inc., shares his thoughts on fairness in the use of data and algorithms for risk selection.

Interview by Stephen Abrokwah

Neil Sprackling
The subject of unfair discrimination in insurance risk selection has taken center stage in the past couple of years, with various state regulators (e.g., New York and Colorado) adding their voices and providing guidance on the use of big data (external data sources) and artificial intelligence (AI) in underwriting to prevent unintended and unfair discrimination against protected classes.

Over the past two decades, the volume and velocity of available data have grown at a pace never seen before. This has led to the adoption of data-driven approaches and predictive algorithms across the insurance value chain, including underwriting. This approach, often called accelerated underwriting, aims to create a more efficient buying process and to reduce the friction inherent in the traditional underwriting process.

The key question is how the industry can leverage data-driven and algorithmic approaches to risk selection in a manner that creates a pleasant customer experience, delivers competitive pricing and treats everyone fairly.

In this interview, Neil Sprackling, president of Swiss Re Life & Health America Inc., shares his thoughts on the topic of fairness with the use of data and algorithms in life insurance underwriting. He also discusses the topics of unfair discrimination and the continuously evolving regulatory discussions, and he presents ideas around what needs to be done going forward.

Over the past year, discussions surrounding fairness in underwriting and proxy discrimination concerning new data and algorithms have increased. What exactly is proxy discrimination?

Currently, there is no widely accepted definition of proxy discrimination. As such, the industry is moving away from using that term.

One proposed legal definition is the “intentional substitution of a neutral factor based on a protected class for the purpose of discriminating against a consumer to prevent that consumer from obtaining insurance or obtaining a preferred rate.”

In the deliberations concerning this topic, regulators and industry representatives realized that insurers already are prevented from engaging in this behavior. As a result, the term “unfair discrimination” is now being used.

The National Association of Insurance Commissioners (NAIC) has suggested developing a white paper to create a foundational understanding of key terms, like proxy discrimination and unfair discrimination, to advance a solution-driven dialogue.

Why is the insurance industry now facing increased scrutiny on certain underwriting methods?

Insurers increasingly are turning to nontraditional data sets, sources and scores. Collecting traditional data, which was at one time costly and time-consuming, can now be done quickly and cheaply.

As insurers continue to innovate their underwriting techniques, increased scrutiny should be expected. It is not unreasonable for consumer advocates to push for increased transparency and explainability when insurers employ these advanced methods.

What is the latest regulatory activity on this topic in the various states and at the NAIC?

Activity in the states has been minimal. In 2021, Colorado became the first (and so far, only) state to enact legislation requiring insurers to test their algorithms for bias. Legislation nearly identical to the Colorado law was introduced in Oklahoma and Rhode Island in 2022, and it is likely other states will consider similar legislation. Connecticut is finalizing guidance that would require insurers to attest that their use of data is nondiscriminatory. Other states have targeted specific factors, but most have adopted a wait-and-see approach.

The NAIC created a new high-level committee to focus on innovation and AI, but it has become clear that a national standard is not likely at this time.

What are some of the data sources that are deemed problematic, and what is the solution?

The two sources under the most scrutiny are credit scores and, to a lesser degree, motor vehicle records (MVRs).

Consumer advocates and certain regulators argue that the use of credit scores disproportionately affects minorities and results in higher insurance rates for people of color. MVRs are inextricably linked to policing in this country, and, as a result, are alleged to be inherently biased.

The solution, according to some, is to ban the use of these factors. The state of Washington is finalizing a regulation to do just that.

Is the use of data that is compliant with the Fair Credit Reporting Act (FCRA) and Health Insurance Portability and Accountability Act (HIPAA) deemed acceptable in algorithmic underwriting?

In general, data that is FCRA and HIPAA compliant passes the litmus test of accuracy for use in algorithmic underwriting, because both laws impose strict requirements to protect consumer information.

FCRA- and HIPAA-compliant information can be verified and corrected. While the data passes the accuracy test, it still cannot be used for unfair discrimination.

Does the restriction only pertain to external or third-party data (nonmedical data)?

The restriction has been focused mainly on new data sources that are being embedded in the life insurance underwriting process.

Much of today’s underwriting and risk selection process is based on correlation and not causation. How are companies and/or the industry dealing with questions regarding the use of data and algorithms that bear no direct causal relationship to insurance losses?

During the debate surrounding the Colorado law, the insurance industry advocated for the removal of a provision that would have banned the consideration of correlated factors.

Insurers use many underwriting factors that do not exhibit a causal relationship to losses. Actuarial Standard of Practice No. 12 permits rating based on correlation, without requiring a demonstrated causal relationship.

Insurers should focus on educating regulators and policymakers on why correlated risk factors should be allowed. That said, spurious correlation is not permitted and should be actively guarded against; a simple stability check is sketched below.
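As one illustration of guarding against spurious correlation, consider a minimal sketch in Python. It uses hypothetical factor and loss data and a bootstrap stability check; the variable names, effect sizes and acceptance rule are illustrative assumptions, not drawn from any actual underwriting program or regulatory standard.

```python
# Minimal sketch (hypothetical data): before adopting a correlated rating
# factor, check that its association with losses is stable across
# bootstrap resamples rather than a one-off artifact of a single sample.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical factor and loss outcome with a modest true association.
factor = rng.normal(size=n)
losses = 0.2 * factor + rng.normal(size=n)

def bootstrap_correlations(x, y, n_boot=1_000):
    """Pearson correlation of (x, y) over bootstrap resamples."""
    corrs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(x), size=len(x))
        corrs[i] = np.corrcoef(x[idx], y[idx])[0, 1]
    return corrs

corrs = bootstrap_correlations(factor, losses)
lo, hi = np.percentile(corrs, [2.5, 97.5])

# A factor whose 95% interval straddles zero is a spurious-correlation
# candidate and, under this sketch's assumptions, would be rejected.
print(f"correlation 95% CI: [{lo:.3f}, {hi:.3f}]")
print("retain factor" if lo > 0 or hi < 0 else "reject as potentially spurious")
```

A factor that survives this kind of resampling check is merely stable, not causal; the broader judgment about whether it is an appropriate rating variable remains an actuarial and regulatory question.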

Where do we go from here as an industry to create an actuarially sound risk selection process that does not result in unintended consequences?

The industry will need to make hard decisions and take actions it traditionally has been reluctant to take.

First, I believe the industry should advocate testing not only the outputs or results of advanced algorithms, but also the inputs. Biased information going into an algorithm will produce biased results. No amount of impact testing can change that.
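By way of illustration, here is a minimal sketch of what testing an input could look like, assuming an insurer had access to a consented audit sample containing a protected attribute (which, in practice, life insurers typically do not collect). The data, the proxy-strength threshold and the variable names are all hypothetical.

```python
# Minimal sketch (hypothetical audit data): test an *input* for bias by
# measuring how well it predicts a protected attribute. A feature that
# strongly proxies for the protected class deserves scrutiny before it
# ever reaches the underwriting algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical: protected_class would come from a consented audit sample;
# candidate_input is the data element under review (here, a strong proxy).
protected_class = rng.integers(0, 2, size=n)
candidate_input = protected_class * 0.8 + rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(
    candidate_input.reshape(-1, 1), protected_class, random_state=0
)
clf = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

# An AUC near 0.5 means the input carries little protected-class signal;
# the 0.6 cutoff here is an illustrative assumption, not a regulatory one.
print(f"proxy AUC: {auc:.2f}")
print("flag for review" if auc > 0.6 else "no strong proxy signal")
```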

Lastly, I believe the industry should insist on distinctions among data sets, data sources and data scores. Bias may not be present equally in a particular data set, source or resulting score, which can create situations where only one element needs to be addressed to mitigate unfair discrimination.

Neil Sprackling is president at Swiss Re Life & Health America Inc.
Stephen Abrokwah, Ph.D., FSA, CERA, MAAA, is vice president, senior client manager, at Swiss Re Life & Health America Inc. He is also a contributing editor for The Actuary.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.

Copyright © 2022 by the Society of Actuaries, Schaumburg, Illinois.