When Is Your Own Data Not Enough?
How using external data can strengthen results

June/July 2018

My career started with a data blind spot. I originally did not want to be an actuary, and I did not know that analyzing data could be a career choice. Fortunately, and in spite of myself, after finishing my undergraduate degree in mathematics, I got my first job as an actuarial student at a startup insurance company. I was its only actuarial student, supporting its one actuary.
As often happens in startups, this gave me the opportunity to build my skills from scratch and then perform virtually every actuarial function in the company within a short period of time. In these early years, while I learned a great deal about the insurance industry and what it meant to be an actuary, what I came to realize was how much actuarial work required me to quickly gather, scrub and analyze various data, and then communicate results that would inform important business decisions.
I also came to realize that I was working mainly with my company’s own data, simply because there was essentially no external or industry-level data for a company like mine blazing a new market. This theme seemed to follow me, or vice versa, as I persisted in my actuarial career in the individual retirement savings and income market, which has grown to become a multi-trillion-dollar market.1 With such growth comes a trove of experience data, and perhaps some wisdom. Combined with the incredible power of modern analytical tools, this hopefully leads to fewer data blind spots.
So that is the quick rundown of how I got here, writing this article, with far more experience data than I ever imagined, personally and professionally. My aim is to share some important things that I have learned along the way about how to analyze and use data in actuarial work, and how more data tends to dramatically improve results. I will then illustrate some of this learning using industry experience data from the variable annuity market. Throughout, my focus will be on guiding principles rather than numerical precision or technical wizardry.
Common Scenarios When Data May Not Be Enough
Actuaries have centuries of collective training in data analysis, much of it under the heading of “credibility theory,” the details of which are beyond our scope here. If you are reading this, you probably know them anyway, so hopefully we can agree that the basic purpose is to balance the use of company- or product-specific data with broader data from other companies or the industry at large. Even so, there are situations where this can be difficult:
- Innovation. As I did early in my career, we sometimes find ourselves in situations where there is ostensibly no data yet: new product types, for example, or expansion into new jurisdictions. What to do? Be as conservative as possible, then cross your fingers? Rely on expert judgment? These and other methods may be useful, but it is often beneficial to look beyond the narrowly defined problem to other similar markets or products where there is extensive data, then analyze that data rigorously to help inform the expert judgment and other methods that will inevitably be needed. Think in shades, rather than black and white. Failing to acknowledge relevant data does not make it go away.
- New world. Systemic shocks or large secular changes—such as precipitous stock market drops, negative interest rates, regulatory changes, genetic testing or the internet of things—can make it very tempting for actuaries to zealously exercise their expert judgment, dismiss prior data as irrelevant and start anew. This can be a big mistake. While the exact numbers or formulations may change, deeper underlying relationships in the data typically persist and offer wisdom. There are reasons why we still study the Dutch tulip mania of the 1600s, the stock market crash of 1929 and mortality data that predates 21st-century health care.
- Limitations with your own data. This is the gravity well and focus of traditional credibility theory—the data for your company or product may not be large, seasoned or varied enough to reliably tell the whole story, even when you think you know the main plot elements. For example, if your fixed indexed annuity block with lifetime income guarantees has not yet reached the end of the surrender charge period, then you would probably be unwise to ignore corresponding experience from the larger and more seasoned variable annuity market with similar features. Relevant data is out there. Invest in the quality of your actuarial work and in your company’s risk management: get the data and use it intelligently, which means going well beyond the rules of thumb of traditional credibility theory and its simplified assumptions.
In summary, gathering and analyzing data are extremely important no matter the circumstance. At times, more or less professional judgment may be required, and external data can be helpful to stakeholders in corroborating your judgment. There may be a range of reasonable answers, but judgment without data is not one of them.
Ask the Difficult Questions
In the course of analyzing experience data for individual companies and across industries for many years, I have compiled this list of questions that actuaries would be well-served to ask in any data analytics work:
- Data breadth. Have you gathered all data that could reasonably be expected to be relevant? Is it precisely relevant for the matter at hand, or is some judgment required? How granular is the data? How far back should it go? Are there outliers that should be noted or discounted?
- Data quality. Is the data scrubbed and fit for purpose? Have you reconciled it to control totals?
- Range. Are you plodding forward one-dimensionally, “unlocking” from one version of assumptions to the next, or do you have a sense of the range of outcomes and actual-to-expected ratios relative to your assumptions? Can you separate random fluctuations from changes in underlying trends?
- Confirmation bias. “No material change” is often the path of least resistance, especially when analyzing aggregate data across many years. Look closely at the time series and its composition, and analyze the data with a variety of people and techniques, in order to avoid missing important changes.
- Whither the future. To what extent might future events trigger a departure from historical data trends? How likely are they, and to what extent can you quantify them when you develop assumption models for the future?
- Capacity. Do you have the human and technological capacity to do the necessary analysis? Are your constraints related to people, talent, data or computational power?
- Time. Even with all of the above, do you have the time and prioritization to deliver meaningful and actionable analysis quickly enough to be useful?
Of course, I cannot answer these questions for you. But I have found them to be critical to the high-quality data analysis, calibration and assumption setting that great actuarial work requires.
Actuaries Are Poised to Answer the Difficult Questions
As we are frequently reminded, the amount of data, its availability and our power to analyze it all continue to grow. And actuaries are not alone in the business of analyzing data, whether related to our traditional insurance domains or otherwise. But we do have many advantages that others simply do not have, and these advantages help us to answer difficult data questions where others falter.
- Science + Art + Code. At its best, complex data analysis tends to require much more than just data, statistics and computer code. Subject-matter expertise is vital, as it guides us in asking the right sorts of questions, rejecting the wrong sorts of answers and applying the artistic je ne sais quoi. Combine that with another code, our Code of Professional Conduct,2 and we have a very powerful value proposition.
- Professional standards of practice and other guidance. We have been doing data analytics for a very long time, and through this, professional standards have emerged. To name a few, we have the Credibility ASOP, Data Quality ASOP, Setting Assumptions ASOP exposure draft, PBR implementation guidance and a whole section of our professional society devoted to Predictive Analytics and Futurism.3 Actuaries are not lone rangers or a loose confederacy. We are well-trained professionals united by shared and publicly documented high standards.
- Putting the answers to work. Actuarial science is an applied science. Great data, great analytical techniques and great answers mean very little if they are not implemented in a practical manner. Our profession has a long and well-documented track record of success in doing this with (pun intended) high credibility.
Altogether, while data analytics as a field unto itself has only emerged fairly recently, and we as actuaries are certainly increasing our focus on it, it has always been one of our essential elements. Within our traditional insurance domains and well beyond, we are uniquely positioned to continue to lead and excel in providing essential and practical data analytics services to our companies and clients.
Illustration: Variable Annuity Policyholder Behavior
Variable annuity policyholder behavior provides an excellent illustration of these principles: it is critically important to the financial risk of the products; an array of factors influence it, and those factors shift through time and with market circumstances; and actuaries have brought increasingly sophisticated analytical processes to bear on the data. A robust exposition is beyond our scope here, so I will focus on a few key aspects.
Model Background
As the name implies, a generalized linear model (GLM) is a more flexible generalization of the traditional regression models that have been used for centuries to fit linear models to data. GLMs effectively allow for response variables that have non-normal error distributions.
A logistic regression model is used for binary response variables (e.g., surrender the policy or not, live or die). By way of a linear “log of odds” function, it allows for easy calculation of the estimated probabilities for the values of the response variable.1
1 Frees, Edward W., Richard A. Derrig, and Glenn Meyers, eds. 2016. Predictive Modeling Applications in Actuarial Science. New York: Cambridge University Press.
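To make the “log of odds” idea concrete, here is the generic form of a logistic regression model, written with illustrative predictors and coefficients (the notation here is generic rather than specific to any particular company’s model):

```latex
% Generic logistic regression: the log of the odds is a linear function of
% illustrative predictors x_1, ..., x_k with coefficients beta_0, ..., beta_k.
\log\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k
% Inverting gives the estimated probability of the event (e.g., surrendering):
p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}
```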
Arguably the most important variable annuity innovation of the last 20 years is the guaranteed lifetime withdrawal benefit (GLWB), which has been one of the key drivers of hundreds of billions of dollars in sales.4 This feature provides the policyholder with a lifetime income benefit in the event that the account value of the variable annuity is reduced to zero, subject to certain conditions. The ultimate cost for companies to provide this benefit depends on many factors, including the amount of the benefit, the performance of the investment funds within the variable annuity, and policyholder behavior including lapse and income utilization before the account value is reduced to zero. With respect to policyholder behavior, each company should ask itself the basic question—is my own data enough?
Generalized linear models (GLMs) such as logistic regression models have become important tools for actuaries trying to answer this question. (See “Model Background” sidebar for more details.)
Consider an actuary at a company with a representative block of variable annuities carrying GLWBs, who uses R to fit a logistic regression model to the company’s own policyholder income utilization data. The resultant model indicates that the following factors are highly predictive of income utilization behavior:
- Attained age
- Tax status
- Policy size
- Prior income utilization
- Interaction terms that capture nonlinearities in the above relationships
For each of these factors, the model output includes a corresponding coefficient estimate and standard error. Unfortunately, because of the intrinsic limitations of the size and composition of this company’s block, the standard errors for some of these coefficients are relatively large (about 10 percent), so the coefficient estimates are imprecise and the model cannot be said to fit the historical data well. This is naturally disconcerting to the actuary. A sketch of what such a fit might look like follows.
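As a rough sketch of what such a model fit might look like in R (the data frame util_data and all of its column names are hypothetical placeholders, not the actual study data), the base glm() function can fit the logistic regression and report coefficient estimates with their standard errors:

```r
# Hypothetical sketch: fit a logistic regression of income utilization on the
# factors listed above, using R's built-in glm() with a binomial (logit) family.
# 'util_data' and all column names are illustrative placeholders.
fit <- glm(
  utilized ~ attained_age + tax_status + policy_size + prior_utilization +
    attained_age:prior_utilization,          # one example interaction term
  family = binomial(link = "logit"),
  data   = util_data
)

summary(fit)         # coefficient estimates and their standard errors
coef(summary(fit))   # the same output as a matrix, for further analysis
```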
n-fold cross-validation is a sampling technique in which the data is randomly partitioned into n equal “folds.” Each fold is held out in turn: the other n–1 folds are used to calibrate a candidate model, which is then tested against the held-out fold.5
The actuary also uses fivefold cross-validation to test the predictive power of the model against data held out from the model calibration. The resultant actual-to-expected errors for the five “folds” average 1.5 percent. This seems vaguely encouraging to the actuary, but she does not feel that it is enough, for her or for her company’s stakeholders. She would be much more comfortable putting forth a model with a better fit to the historical data and higher predictive power. But how?
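Continuing the hypothetical sketch above (same placeholder data frame and columns), a fivefold cross-validation of this kind could be coded along the following lines, with each fold held out once and the actual-to-expected error measured on the held-out data:

```r
# Hypothetical sketch of n-fold cross-validation for the logistic model above.
# Each fold is held out once; the model is refit on the remaining folds and the
# actual-to-expected (A/E) error is measured on the held-out fold.
set.seed(2018)
n_folds <- 5
fold_id <- sample(rep(1:n_folds, length.out = nrow(util_data)))

ae_error <- sapply(1:n_folds, function(k) {
  train <- util_data[fold_id != k, ]
  test  <- util_data[fold_id == k, ]

  cv_fit <- glm(
    utilized ~ attained_age + tax_status + policy_size + prior_utilization +
      attained_age:prior_utilization,
    family = binomial(link = "logit"),
    data   = train
  )

  expected <- sum(predict(cv_fit, newdata = test, type = "response"))
  actual   <- sum(test$utilized)               # assumes a 0/1 response column
  abs(actual / expected - 1)                   # deviation of A/E from 1
})

mean(ae_error)   # average A/E error across the folds
```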
The answer: use the exact same methodology, but apply it to a block of data 40 times larger, corresponding to similar products across the industry. Obviously, this requires access to the industry-level data, but it also requires subject-matter expertise and professional judgment in selecting the similar products and the appropriate time period. With this larger industry data set, the resultant standard errors are about 20 times smaller (about 0.5 percent), indicating far more precise coefficient estimates and a much better fit to the historical data. And the predictive power metric, the average actual-to-expected error across the folds, has improved by a factor of five, to 0.3 percent.
With the dramatically improved model fit and predictive power metrics, along with the sensibility of the model factors themselves based on her subject-matter expertise, the actuary is now quantitatively and qualitatively comfortable. She will put forth this model, or perhaps a customized blend of the company- and industry-based models, for her company’s use in product pricing, hedging and risk management, and reserves and capital. She will also plan to review and update it periodically as more company- and industry-level data emerges. This is enough.
Looking Forward
So, for me and for all of us, there is now a lot more data than when I started. And this gives us a much more solid foundation, for annuities and any other products, to use our unique combination of analytical skills, Code of Professional Conduct and standards, and practical mindset to deliver excellent work so that our companies and clients continue to grow and thrive. I believe that this is necessary and sufficient—exactly enough—as our legacy for the next generation of actuaries.
References:
- 1. Insured Retirement Institute. 2017. “IRI Issues First-Quarter 2017 Annuity Sales Report.” June 6.
- 2. Society of Actuaries. 2001. Code of Professional Conduct.
- 3. Actuarial Standards Board. 2013. “Actuarial Standards of Practice.”
- 4. Kalberer, Tigran, and Kannoo Ravindran, eds. 2016. Non-traditional Life Insurance Products With Guarantees. London: Riskbooks.
- 5. James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. An Introduction to Statistical Learning: With Applications in R. New York: Springer.