Actuarial Insights on Artificial Intelligence
Conversations with industry leaders on AI and the future
July 2024
While attending the International Actuarial Association (IAA) AI Summit in Singapore, I had the opportunity to talk with Greg Heidrich, chief executive officer of the Society of Actuaries (SOA). Heidrich noted that over the next decade, he expects nearly one-third of current work tasks to be replaced or significantly modified, implying a profound change in the nature of work.
Heidrich has been keeping AI development firmly on the radar of SOA leadership. He noted a three-pronged approach:
- In credentialing actuaries, how do we help our current and future members prepare?
- How can we safely embed AI in our own operations?
- Globally, how do actuaries play a role in shaping the future?
During our discussion, Heidrich noted research from Princeton University and the NYU Stern School of Business that ranked actuaries among the top three occupations on the study’s measure of artificial intelligence occupational exposure (AIOE). He also noted that SOA Managing Director of Technology and Innovation Alice Locatelli is helping the organization plan for and manage these innovations. The SOA is meticulously creating on-demand educational content about AI in a collaborative effort that involves staff actuary Jon Forster, ASA, MAAA, a core group of actuarial volunteers and an e-Learning team.
A key objective of the IAA AI Summit was to facilitate the sharing of knowledge and experience. With this in mind, I sought out actuarial leaders for a Q&A session to gain their insights. I had the pleasure of speaking with the following individuals:
- Robert Eaton, FSA, MAAA, principal and consulting actuary at Milliman
- Dorothy Andrews, ASA, MAAA, Ph.D., senior behavioral data scientist and actuary at the National Association of Insurance Commissioners (NAIC)
- Jacky Ng, ASA, AIAA, CERA, data science chapter lead for the APAC region at Swiss Re
- Toby Hall, FSA, FCA, MAAA, president and CEO of Roosevelt Innovation
What initially attracted you to AI?
Eaton: I’ve always enjoyed interacting with machines, programming and finding ways for computers to do work for us. Generative AI—in particular, language models—provides a new interface for people and machines. I was inspired to see the new wave of chatbots (ChatGPT) and to envision a future of interacting with computers and (I think) robots.
Andrews: I have been doing statistical and actuarial modeling my whole career. So, I understand AI and machine learning models at their fundamental levels. However, when I started studying media psychology and became a media psychologist (Ph.D.), I began to understand how much human autonomy and decision-making were being shifted to AI technologies (a new form of media) with blind trust and minimal, if any, human oversight. The reliance on mathematically based technologies—because math is viewed as trustworthy and accurate—is resulting in discriminatory and harmful impacts on humans. AI has become the “New Jim Code,” a phrase coined by Ruha Benjamin in her book Race After Technology (Wiley, 2019) for the digitization of discrimination. Research has shown that as humans, we are more trusting of AI technologies than of human judgment, even though human judgment is required to develop AI technologies. It is quite a paradox!
Why is getting involved in AI important to you?
Ng: I am passionate about technology and believe one should embrace and make the most of it. If we, as an actuarial profession, can embrace and embed it in our daily work and become as proficient in data science as we are with loss triangles or life tables, this would be quite liberating, making our work more enjoyable and expanding it beyond the traditional domains. Hopefully, by participating in the IAA AI initiative, I can lend a hand in shaping the future of the actuarial profession and attracting new talent.
I have been fascinated with the latest bleeding-edge technology since I was young. Data science didn’t exist as a university course back then—the closest was computer science—and I thought that was the best complement to actuarial science at the time. Throughout my career in a reserving role producing quarterly results, I have often found myself challenging the status quo in long-standing manual processes—automating them with scripts and programming—whether extracting data, cleaning it or, eventually, validating and analyzing results. I also optimized code so the reserving models would run faster in producing the increasingly granular results required by management and regulators alike. Even so, the improvements were evolutionary at best. To be revolutionary, I took the plunge to learn data science six years ago and have never looked back. Today, I often serve as the bridge between data scientists, actuaries, underwriters and business stakeholders.
Hall: I can boil down my involvement in AI to four main reasons.
- Enhanced data analysis: Datasets are getting larger and more complex. AI will help actuaries perform this portion of their jobs far more efficiently. This includes spotting trends that would have otherwise been missed. This should free up time for interpretation and application of results—where actuaries shine!
- Cost reduction: Some portions of an actuary’s job still revolve around somewhat manual tasks where human involvement really doesn’t add a lot of value. AI likely will be a way to get those tasks done more cheaply and with the same (or better) quality.
- Competitive edge: Speakers at the summit said it best: AI will not replace actuaries. However, AI-enabled actuaries will replace non-AI-enabled actuaries.
- Risk management: New technologies always bring new risks. Actuaries (as experts in risk analysis and management) would be wise to follow technological developments closely.
I see an excellent chance to give back to the profession that has been so good to me, to stay on top of emerging technology trends, and to build a network of smart, like-minded (and fun) professionals. So far, I have to say it has exceeded my expectations.
Andrews: We must constantly remind ourselves that AI tools are not sentient beings and don’t understand the human stakes of AI results. We must be ever vigilant to resist blindly relying on AI results. Harmful discrimination should not be tolerated no matter the agent, human or AI. It is too easy to justify discrimination when AI commits it because of its complex underlying mathematical apparatus. Research also has shown that AI discrimination disproportionately and adversely affects communities of color. This issue continues to motivate me to work in this space.
How do you use AI in your current work?
Andrews: I primarily use my background in AI and machine learning to assess the technical accuracy and unfair discrimination potential of models insurers submit for regulatory approval. I also use my background to regularly present advanced statistical topics to regulators, breaking down complicated concepts into easy-to-understand, digestible bites. It is important to close the knowledge gap between regulators and industry with its legions of data scientists. While it is not necessary for regulators to become data scientists to regulate AI, it is important for them to have an understanding of how it works and how it can harm consumers. I am committed to continuing to educate regulators on AI issues.
Eaton: I use AI in a few different ways today:
- We use AI through machine learning in our actuarial models, such as Milliman’s LTC Advanced Risk Analytics.
- At my company, we have an internal language model that allows us to input proprietary data in a secure environment; we’re exploring how useful this can be for summarizing work or finding patterns across projects.
- We are building a large language model (LLM) chatbot to field questions about insurance regulations and other compliance issues.
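Neither of these internal tools is described in technical detail here, but a common pattern behind the kind of compliance chatbot Eaton mentions is retrieval-augmented generation: retrieve the passages most relevant to a question, then pass them to a language model as context. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the placeholder regulations, the call_llm() stub and all names are assumptions for illustration, not Milliman’s actual implementation.

```python
# Hypothetical sketch of a retrieval-augmented compliance chatbot.
# The placeholder documents and the call_llm() stub are illustrative
# assumptions, not an actual production system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in corpus: in practice, chunks of regulations and compliance guidance.
documents = [
    "Regulation A: annual statements must be filed with the regulator by March 1.",
    "Regulation B: reserves must be reviewed by a qualified actuary each year.",
    "Regulation C: rate filings require supporting actuarial justification.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k document chunks most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix).ravel()
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send the prompt to a secure,
    # internal language model endpoint and return its answer.
    return f"[LLM response to a prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the compliance question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("When are annual statements due?"))
```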
Ng: Being the chapter lead of a data science team, ironically, I don’t use AI much in the typical sense. However, I use it in developing and applying AI algorithms to solve business problems, such as predictive underwriting models. A lot of time is spent reviewing and understanding the results of the AI models and their governance—be it explainability, fairness or transparency. Since AI is a buzzword these days, I often need to explain to stakeholders what AI really is and isn’t—and any limitations we may have. While others are reaping the benefits of AI, I am busy in the background making sure these AI models are behaving as they should.
What do you envision your work will look like with AI in five to 10 years?
Hall: As the technology and models improve, I think we will see AI increasingly putting “hands on the steering wheel” when performing analysis. It is exciting to think about how much time we will get back from even simple things like coding a predictive model. This is going to move the actuarial skill set toward explaining models and results to regulators and interpreting the results of our models while moving away from routine data cleaning and coding.
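As one concrete illustration of the routine coding Hall refers to, the hypothetical sketch below fits a simple lapse-prediction model in Python with scikit-learn. The features, synthetic data and model choice are assumptions made only to show the shape of the task, not any particular firm’s model.

```python
# Hypothetical sketch of routine predictive-model coding: a logistic
# regression on synthetic policyholder data predicting lapse.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)
n = 5_000

# Made-up features: issue age, annual premium and policy duration (years).
X = np.column_stack([
    rng.integers(25, 75, n),        # issue age
    rng.lognormal(7.5, 0.5, n),     # annual premium
    rng.integers(1, 20, n),         # policy duration
])
# Synthetic lapse indicator loosely tied to duration and premium.
logits = -2.0 + 0.10 * X[:, 2] + 0.00005 * X[:, 1]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```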
Eaton: I think certain tasks we spend a lot of time on today—writing proposals, creating presentations, triaging emails—will be done more efficiently through language-based AI models with access to proprietary business data. I expect robotics to play a much greater role in providing care to the elderly and others with long-term care needs, and thus to inform our actuarial and insurance estimates. Finally, I think AI will be easier to use and will provide swift access to advanced modeling techniques, allowing actuaries to create more and deeper models and estimates of the world around us.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.
Copyright © 2024 by the Society of Actuaries, Chicago, Illinois.