All Things Artificial Intelligence

One actuary’s look at AI principles, standards and best practices

By Mitchell Stephenson

When OpenAI released ChatGPT in 2022, it achieved the fastest growth of any application to date. It reached 100 million users in two months, as reported by Reuters. Shortly thereafter, Google and Microsoft released tools with similar capabilities. The rapid expansion and broad scope of generative artificial intelligence (AI) tools, which create images, text, videos and other media in response to prompts, per Coursera, fueled what experts have dubbed “the fourth Industrial Revolution.”

It also led industry experts to issue dire warnings. Geoffrey Hinton—known as the godfather of AI—stated, “It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.” In a Center for AI Safety statement, executives from leading AI companies, including OpenAI, warned, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

The idea of AI as a threat to humanity is not new. Before John McCarthy coined the term “artificial intelligence” in 1955, Isaac Asimov published I, Robot, a story collection exploring the perils of intelligent machines. In each decade since, there have been similar examples in popular media. In 2001: A Space Odyssey (1968), a computer named HAL kills crew members on its spaceship. In Westworld (1973), androids become self-aware and rebel against their creators. In The Terminator (1984), a defense system attacks the humans it is meant to protect. In The Matrix (1999), machines enslave humanity in a simulated reality. In 2004, 20th Century Fox released a film adaptation of I, Robot. In Avengers: Age of Ultron (2015), an AI system attempts to eradicate humanity. In The Creator (2023), humans battle AI in a post-apocalyptic world. There are countless other examples throughout the past eight decades.

Why Today’s AI Tools Are More Alarming Than in the Past

Given the depiction of AI as a threat to humanity in popular media since its inception, what makes this recent round of AI tools so alarming that the executives who created them warned about the threat of extinction?

  • Widespread availability. The rapid growth of ChatGPT demonstrates how quickly technology can reach a broad base. It reached 1 million users in five days, per Exploding Topics. As of March 2024, the United States—its largest user base—accounted for only 11% of users. Per GlobeNewswire, the value of AI in insurance is projected to reach approximately $80 billion by 2032, up from $4.5 billion in 2022.
  • Number of data points. The amount of data used to train AI tools is increasing drastically. According to Forbes, 90% of the world’s data was generated in the last two years. Per Medium, the first GPT model, released in 2018, had 117 million parameters. The November 2022 version, GPT-3.5, had 175 billion parameters. OpenAI has not disclosed how many parameters GPT-4 has, but estimates place it at 1.7 trillion. If early estimates of 100 trillion parameters had been accurate, it would have matched the number of neural connections in the human brain. In insurance, big data can lead to new insights about customers, trends that affect customer experience and company results, and analysis that can drive business strategy.
  • New capabilities. Generative AI can answer questions, author essays and write software. Per NBC News, it passed an MBA exam. Ghacks.net reports that it coded an entire game. The Intelligencer details that it passed the bar, scored a five on an Advanced Placement (AP) exam and built websites. As of March 2024, ChatGPT had not passed an actuarial exam, but odds are that it eventually will. The new AI capabilities that translate to insurance include summarizing policies and documents for customers and employees, translating information across languages and customizing responses to customer inquiries.

Generative AI tools also make mistakes and invent facts, which is known as hallucinating. In one case reported in Fortune, a law firm submitted a brief in which ChatGPT fabricated historical cases. According to Techopedia, Google’s promotional video about its AI chatbot Bard made an inaccurate claim, and Microsoft Bing AI’s demonstration incorrectly summarized facts.

AI tools heighten risk in several categories. As articulated in a Federal Housing Finance Agency advisory bulletin, these include model, data, legal, regulatory and operational risk. Heightened operational risks include IT infrastructure, information security, business continuity and third-party risk.

The rewards of using AI include increased efficiency and productivity. AI tools can free up employee time for other activities or reduce expenses, and they can create additional insights, personalize results and improve customer interactions and continuity.

How to Govern the Heightened Risk From Using Generative AI

A governance framework for the use of AI should start with ethical principles, which in turn drive standard requirements and best practices. Actuaries should reference applicable Actuarial Standards of Practice (ASOPs) and Code of Conduct Precepts. What follows is a compilation of common ethical principles surrounding the use of AI, accompanied by standard requirements, best practices and professionalism references for actuaries. Each principle includes an industry example demonstrating the significance of the associated requirements.

Avoid Bias

It is important to avoid bias (a principle also framed as fairness and equity) caused by limited or unrepresentative data and by differentiation based on protected classes. We’ve seen this play out in the insurance industry. According to LexisNexis, multiple insurers face class action lawsuits over AI use. The Organisation for Economic Co-operation and Development (OECD) reports that, in one suit, natural language processing introduced racial bias into voice analytics.

Standard requirements to avoid bias may include subject-matter expert review during model development to ensure training data is reasonable, sufficient and appropriate. They also may include a compliance review to ensure the tool does not violate protected class rules, and companies may require ongoing monitoring to confirm that results remain unbiased. In the case of the aforementioned class action lawsuits, the insurers could have analyzed historical data more rigorously for implicit bias that the algorithms might otherwise carry forward into customer interactions.
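
To make the monitoring requirement concrete, here is a minimal Python sketch of an ongoing bias check that compares outcome rates across groups and flags any group falling below a chosen tolerance. The group labels, data layout and 80% tolerance are illustrative assumptions, not a regulatory standard.

    # Illustrative ongoing bias monitor: compares outcome rates across
    # groups in scored results and flags disparities beyond a chosen
    # tolerance. Group labels, data layout and the 80% tolerance are
    # assumptions for illustration, not a regulatory standard.

    from collections import defaultdict

    def outcome_rates(records):
        """records: iterable of (group, approved) pairs -> rate per group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
        for group, approved in records:
            counts[group][0] += int(approved)
            counts[group][1] += 1
        return {g: a / t for g, (a, t) in counts.items()}

    def flag_disparities(records, tolerance=0.8):
        """Flag groups whose approval rate falls below `tolerance` times
        the highest group's rate (an adverse-impact style check)."""
        rates = outcome_rates(records)
        best = max(rates.values())
        return [g for g, r in rates.items() if best > 0 and r / best < tolerance]

    # Example monthly monitoring run over scored decisions
    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    print(flag_disparities(decisions))  # ['B']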

Best practices for ensuring fairness and equity could include a template for disclosure about model development data, a definition of data elements used and peer review of training data. For actuaries, ASOP 23 (Data Quality) addresses whether data is appropriate, reasonable and sufficient. ASOP 12 (Risk Classification) addresses establishing and testing risk classes and the relationship of risk characteristics to expected outcomes.

Make Results and Use of AI Transparent

Also known as explainability or interpretability, this refers to the need to understand and explain AI-generated results. Per Vox, in 2021 the insurance app Lemonade tweeted that it gathered “100x more data than traditional insurance carriers” from users, including “nonverbal cues.” This sparked concern about the data collection process and led to policy cancellations.

Standard requirements for transparency may include maintaining an inventory of permissible AI use cases, including model classification, risk identification and risk rating. Requirements may include documentation, testing and performance tracking for each AI tool. Requirements also may include disclosure language for any direct customer interaction with the AI tool. In the case of Lemonade, the company could have been more transparent at the time of data collection. In contrast, the company Root collects similar data to assess driver behavior for pricing car insurance, but Vox says, “potential customers know they’re opting into this from the start.”
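
As one illustration of what an inventory entry might capture, the following sketch defines a record with a classification, identified risks, a risk rating and any required disclosure language. The field names and rating levels are hypothetical rather than drawn from any specific framework.

    # Illustrative inventory record for a permissible AI use case. The
    # field names and rating levels are hypothetical placeholders for a
    # company's actual inventory schema.

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCase:
        name: str
        owner: str
        classification: str            # e.g., "model" or "nonmodel"
        risks: list = field(default_factory=list)   # identified risk types
        risk_rating: str = "low"       # e.g., "low", "medium", "high"
        customer_facing: bool = False
        disclosure_language: str = ""  # required if customer_facing is True

    inventory = [
        AIUseCase(name="policy_summarizer", owner="Ops AI Lead",
                  classification="nonmodel", risks=["data", "operational"],
                  risk_rating="medium", customer_facing=True,
                  disclosure_language="Responses are generated by an AI tool."),
    ]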

A decision tree to identify relevant requirements could be a best practice for actuaries. It should clarify whether the AI tool will follow the model risk framework, testing and documentation requirements, and the required nonmodel risk and control reviews. Companies should designate points of contact who can help identify these requirements.
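
A minimal sketch of such a decision tree, expressed as a Python function, might look like the following. The questions asked and the review names returned are hypothetical placeholders for a company’s actual framework.

    # Illustrative decision tree for routing an AI use case to the right
    # review path. The questions and review names are hypothetical
    # placeholders for a company's actual framework.

    def identify_requirements(use_case):
        """use_case: dict of boolean flags describing the AI tool."""
        requirements = []
        if use_case.get("influences_business_decisions"):
            # Tools that qualify as models follow the model risk framework
            requirements += ["model_documentation", "independent_validation",
                             "performance_testing"]
        else:
            requirements.append("nonmodel_risk_and_control_review")
        if use_case.get("customer_facing"):
            requirements.append("customer_disclosure_language")
        if use_case.get("uses_personal_data"):
            requirements.append("privacy_compliance_review")
        return requirements

    # Example: a customer-facing chatbot that summarizes policy documents
    chatbot = {"influences_business_decisions": False,
               "customer_facing": True,
               "uses_personal_data": True}
    print(identify_requirements(chatbot))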

References for actuaries include ASOP 56 (Modeling) for guidance to ensure model risk mitigation is reasonable and appropriate. Actuaries also should reference ASOP 41 (Actuarial Communications) for guidance about actuarial report disclosures, including limitations and cautions about uncertainty and risk.

Protect Privacy

It is imperative to protect customer privacy consistent with relevant laws. These include the European Union General Data Protection Regulation, the California Consumer Privacy Act and the Health Insurance Portability and Accountability Act. As reported by the BBC, Italy banned ChatGPT, citing privacy concerns about the collection and storage of personal data and the availability of unsuitable answers to minors.

When using AI, standard requirements for privacy protection may include disclosing to customers, upon request, what companies do with their personal data. Other standard requirements may include ensuring that data used for AI tools is cross-referenced against the protected categories in privacy policies and that AI tool output does not include personal information. In the case of ChatGPT, earlier diligence to ensure and articulate compliance with privacy laws may have prevented the ban in Italy.

Best practices to protect privacy include privacy requirements training, having designated points of contact for privacy questions and tools that flag personal information. Actuaries may reference Actuarial Code of Conduct Precept 9 for guidance: “An Actuary shall not disclose to another party any Confidential Information unless authorized by the Principal to do so or required to do so by law.” Actuaries also may reference ASOP 23 for guidance on performing data review.
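
As a rough illustration of the last item, a flagging tool might scan text for common identifiers before it is sent to, or returned by, an AI tool. The patterns below are assumptions covering only a few U.S.-style identifiers; a production tool would need a far broader, jurisdiction-aware ruleset.

    # Illustrative personal-information flagger using regular expressions.
    # The patterns cover a few common U.S. identifiers only; a production
    # tool would need a far broader, jurisdiction-aware ruleset.

    import re

    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def flag_personal_info(text):
        """Return the PII types detected in `text`, e.g. before it is
        sent to (or returned by) an AI tool."""
        return [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(text)]

    sample = "Contact jane.doe@example.com or 860-555-0123 about claim 42."
    print(flag_personal_info(sample))  # ['email', 'phone']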

Ensure Accountability

Accountability, also known as human agency, means that a designated individual must ensure the use of AI tools meets requirements. Per LexisNexis, one insurance class action lawsuit claims that faulty AI screened claims, and it points to the lack of human review of claim denials.

To guard against such lawsuits, standard requirements that ensure accountability may include assigning an accountable party for each AI tool to ensure there are appropriate controls for each heightened risk, as well as training staff who interact with AI tools. This may include periodic attestation that the accountable individual understands the requirements and a human review for each use. In this particular class action lawsuit, a required human review, at least for a period, may have revealed discrepancies between human-determined and AI-determined outcomes.
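
A minimal sketch of such a human review gate might route every AI denial, and any low-confidence result, to a person before the decision becomes final. The decision labels and confidence threshold are illustrative assumptions.

    # Illustrative human-review gate: AI approvals can flow through, but
    # denials (or low-confidence results) are queued for a person. The
    # decision labels and confidence threshold are assumptions.

    from dataclasses import dataclass

    @dataclass
    class ClaimResult:
        claim_id: str
        ai_decision: str   # "approve" or "deny"
        confidence: float  # model's self-reported score in [0, 1]

    def route(result, confidence_floor=0.9):
        """Return the next step for an AI-scored claim."""
        if result.ai_decision == "deny" or result.confidence < confidence_floor:
            return "human_review"   # a person confirms before any denial
        return "auto_approve"

    print(route(ClaimResult("C-1001", "deny", 0.97)))     # human_review
    print(route(ClaimResult("C-1002", "approve", 0.95)))  # auto_approve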

Best practices for ensuring accountability include training for accountable parties, a requirements checklist and subject-matter experts to provide advice and reassurance to accountable individuals. Actuaries can refer to Actuarial Code of Conduct Precept 1: “An Actuary shall perform Actuarial Services with skill and care.” Additionally, ASOPs 56 and 23 provide guidance on understanding the model and using data, respectively.

Make Sure Tools Are Reliable and Safe

Reliability and safety, also described as robustness and accuracy, address the need to trust results through operational and analytical stability and to ensure that AI tools meet their intended purpose. Zillow took more than $500 million in losses because an AI tool overvalued purchased properties, according to Inside Big Data. The losses caused its stock to plummet and resulted in a 25% workforce reduction.

Standard requirements to make sure AI tools are reliable and safe may include ensuring there is a method to monitor performance and address out-of-tolerance results. This may include ensuring previous versions are available if production versions become unreliable. It also may include scenario testing to ensure results are dependable, and monitoring, reporting and communication protocols associated with AI tools in use. For Zillow, early and routine monitoring may have detected model drift and enabled the company to cease use of the tool or revert to a prior model sooner.
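
One common way to operationalize such monitoring is a population stability index (PSI) comparison between the model’s output distribution at validation and in production. The following sketch, including the 0.25 alert threshold, reflects a widely used rule of thumb rather than an insurance-specific standard.

    # Illustrative drift monitor using a population stability index (PSI)
    # between a baseline and current distribution of model outputs. The
    # bucket counts and 0.25 alert threshold are common rules of thumb,
    # not a standard mandated for insurers.

    import math

    def psi(expected, actual):
        """expected/actual: bucket proportions that each sum to 1."""
        eps = 1e-6  # avoid log(0) on empty buckets
        return sum((a - e) * math.log((a + eps) / (e + eps))
                   for e, a in zip(expected, actual))

    def check_drift(baseline, current, threshold=0.25):
        score = psi(baseline, current)
        if score > threshold:
            return f"ALERT: PSI {score:.3f} exceeds {threshold}; consider reverting to the prior model"
        return f"OK: PSI {score:.3f}"

    baseline = [0.25, 0.50, 0.25]  # output distribution at validation
    current = [0.10, 0.40, 0.50]   # distribution observed in production
    print(check_drift(baseline, current))  # ALERT: PSI 0.333 ...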

Templates to store output analysis, periodic attestations that results are consistent with intended purposes, and documented plans to revert to prior versions if needed are all best practices to ensure accuracy. ASOP 56 provides actuarial guidance on ensuring the model is reasonable in aggregate, reliance on models developed by others and output validation. ASOPs 56 and 54 (Pricing of Life Insurance and Annuity Products) provide guidance on sensitivities, while ASOP 12 covers the concepts of “reliable and safe.”

Additional Considerations

Per the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, additional considerations in establishing an artificial intelligence systems program include:

  • Committee structure and senior management ownership of AI strategy
  • Internal audit review
  • Whether requirements will be codified in existing or new policies and standards
  • Additional considerations for reviewing and obtaining third-party tools
  • Evidence retention to demonstrate compliance

Bringing It All Together

These tools can do incredible things. AI is already in use for transportation, programming, manufacturing, agriculture and health care. Envisioned uses include addressing climate change by improving models through machine learning, combating world hunger and reducing global inequality and poverty. In insurance, AI can benefit all parts of the value chain, including marketing, distribution, underwriting, policy acquisition and claims management. Although AI comes with risks, when managed ethically, it can benefit humanity tremendously.

Mitchell Stephenson, FSA, MAAA, has about 25 years of experience focusing on modeling, model risk and governance, and controls. He is based in Simsbury, Connecticut.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.

Copyright © 2024 by the Society of Actuaries, Chicago, Illinois.