Explainable AI, Generative Models, and the Next Era of Actuarial Science
A conversation with Kittipon Sarnvanichpitak, FSA, on incorporating AI in decision-making and more
December 2025
For its December technology focus, The Actuary Asia connected with Kittipon Sarnvanichpitak, FSA, Principal Data Scientist at AIA in Bangkok, for a broad discussion on artificial intelligence (AI), including emerging techniques relevant to actuarial work, as well as advice for young actuaries as AI continues to transform the profession. Here, Kittipon shares his perspectives from a career spent in the life insurance industry.

Could you share some examples of emerging AI techniques that are particularly relevant to actuarial work?
Kittipon Sarnvanichpitak: There are several AI techniques that have gained traction in recent years. While their adoption varies across insurance sectors and is mostly tangential to actuarial functions (being used more frequently in operations, sales, marketing and customer services), some techniques are increasingly relevant and usable in actuarial work as supporting tools. Here are descriptions of a few key techniques:
Explainable machine learning (ML). While explainability is not really an emerging AI technique (the majority of improvement efforts are now focused on generative AI), it remains crucial if we want to ensure model quality and transparency. Explainable ML is a set of processes and methods that allow users to understand and trust results from ML algorithms. ML models involve trade-offs between predictive accuracy and interpretability. Think about a linear regression model comprising only a handful of variables: such a model is straightforward to explain and interpret, can even be written as an explicit mathematical formula, and is useful for describing simple linear relationships.
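To make that concrete, such a model can be written out in full (a generic form; the $\beta$ coefficients are estimated from the data, and $x_1, \dots, x_k$ are the explanatory variables):

$$\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k$$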
However, these models carry prerequisite assumptions, e.g., the explanatory variables should have roughly linear relationships with the target variable you are trying to predict, and they are hardly usable without further complex adjustment if your goal is prediction accuracy. On the other hand, more performant models (e.g., random forests, gradient-boosted trees or deep learning models) tend to achieve very high accuracy on real-world prediction tasks, but the relationships they capture are non-linear and unclear, making them generally difficult to explain, like a black box (a system that generates outputs without exposing the steps or logic used).
That is why these complex ML models should be designed with explainability methods in mind. For those who are interested, some well-known techniques are variable importance, Shapley additive explanations (SHAP) and partial dependence plots. These explainability techniques aim to make model behavior more interpretable; most try to highlight which variables are influencing the model’s predictions and, sometimes, by how much.
They typically give an overview of which variables contribute most to predictions overall (global explanation) and, for each predicted sample (local explanation), show in what proportion and in which direction each variable contributed to the final prediction. To further illustrate, suppose a tree-based ML model (e.g., a random forest or gradient boosting) is used to predict which customers are likely to lapse; such techniques can help actuaries understand how the decisions were made and which key variables, such as payment frequency, policy duration or channel, are driving the predictions. Even though predictions from such models are generally not used directly as future lapse assumptions, the by-product insights might influence how relevant assumptions are set going forward. Hence, this transparency is crucial for understanding, validating and communicating with stakeholders.
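As a minimal sketch of how this looks in practice, the Python shap library can produce both global and local explanations for a gradient-boosting lapse model. The synthetic data, feature names and lapse behavior below are illustrative assumptions, not a real portfolio:

```python
# Minimal sketch: SHAP explanations for a lapse-prediction model.
# The synthetic data and the lapse rule below are assumptions for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "payment_frequency": rng.choice([1, 2, 4, 12], size=n),  # payments per year
    "policy_duration": rng.integers(0, 20, size=n),          # years in force
    "channel": rng.choice([0, 1, 2], size=n),                # encoded sales channel
})
# Hypothetical behavior: early-duration, low-frequency policies lapse more often.
p = 1 / (1 + np.exp(0.25 * X["policy_duration"] + 0.1 * X["payment_frequency"] - 2))
y = rng.binomial(1, p)

model = GradientBoostingClassifier().fit(X, y)

# Global explanation: rank variables by average influence across all policies.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")

# Local explanation: each variable's signed contribution for one policy.
print(dict(zip(X.columns, shap_values[0])))
```

The bar plot ranks variables by mean absolute SHAP value (the global view), while the per-policy values show how much, and in which direction, each variable pushed that individual prediction (the local view).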
Generative AI. Tools like large language models (LLMs) are now capable of drafting assumption memos, summarizing meeting notes or generating preliminary analysis narratives. These outputs still require human review, but they can significantly reduce time spent on documentation and communication tasks. LLMs can also work as natural language processing engines that help actuaries extract insights from unstructured data sources like policy documents or regulatory texts. An LLM can summarize key clauses across hundreds of product filings or even compare different versions of regulatory changes. Insurance companies hold large archives of historical policy information, and gathering and researching those documents is often time-consuming. Generative AI can help actuaries identify relevant information and inconsistencies.
Another use case comes from pricing actuaries, who must review the wording and numerical correctness of marketing materials for insurance products. Before LLMs existed, actuaries had no choice but to proofread every document themselves to ensure correctness. With these tools, much of that manual work can be delegated to AI, and, in my opinion, with high confidence considering the capabilities available today.
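As a rough sketch of how such a check might look in code (the model name, prompt wording and document snippets are assumptions, and any output would still need actuarial review), the OpenAI Python client could be asked to compare a product filing against a brochure:

```python
# Minimal sketch: asking an LLM to flag inconsistencies between a policy
# filing and marketing copy. Model name, prompt and texts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

filing_text = "Guaranteed annual benefit: 2.0% of sum assured from policy year 3."
brochure_text = "Enjoy a guaranteed 2.5% annual benefit starting in year 3!"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever your firm licenses
    messages=[
        {"role": "system",
         "content": "You check insurance marketing copy against filed product "
                    "terms. List any numerical or wording inconsistencies."},
        {"role": "user",
         "content": f"Filing:\n{filing_text}\n\nBrochure:\n{brochure_text}"},
    ],
)
print(response.choices[0].message.content)  # a human reviewer still signs off
```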
Most of these techniques are still used as augmenting tools, not replacements for actuarial judgment. Their value lies in helping actuaries explore data more efficiently, generate hypotheses and support decision-making—not make final decisions themselves.
Are any traditional actuarial functions being significantly affected as AI evolves today?
Kittipon: AI has not had a direct effect on actuarial work as such, but I would say business functions’ adoption of AI has had some mild effect on traditional actuarial functions, such as pricing or financial reporting. As core functions of the insurance business, our responsibilities and values are still the same. The principles of how we do things are unchanged; evolving AI has affected process improvement and efficiency more. The degree of impact varies depending on the adoption rate and the maturity of the organization’s data infrastructure, though. Some areas of impact include:
- Data preparation: Actuarial work requires lots of data as starting material; for example, in financial reporting, individual policy and insured data are required for financial projections, and they need to be arranged to fit proprietary financial projection software. Though not exclusively due to AI, I have seen data pipeline management shift toward modern programming tools, cloud infrastructure and database systems (more aligned with options popular in other professions and industries) for improved calculation efficiency and speed; a minimal sketch of such a step appears after this list. This opens the door for better integration with innovative solutions in the future.
- Involvement in innovative projects: From a responsibility perspective, as a manager of risks, it is inevitable that actuaries (especially pricing actuaries) are requested to be reviewers on some AI initiatives that involve a trade-off between risk and reward, e.g., campaigns that use AI to assess customer risks for a product offering, or AI-automated underwriting decisions. This directly affects the actuarial role in the sense that such risks need to be quantified; but to be able to quantify them, the actuaries need to understand how the models work and what the limitations are. As the use of AI spreads to multiple areas in insurance companies, there seems to be an increasing trend of actuaries getting involved in such AI-related initiatives.
- Underwriting and actuarial assumptions: While underwriting decisions are not made by actuaries, AI tools used in underwriting (e.g., predictive models based on health data or historical claims and disclosures) can influence underwriting decisions, and hence the overall risk profile of the portfolio. Actuaries may be involved in validating these models or assessing their impact on actuarial assumptions, pricing and reserving.
- Communication and documentation: Generative AI tools can assist in drafting reports, memos and presentations. This can significantly streamline workflow, especially in environments where documentation is critical.
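To illustrate the data preparation point above (the column names, grouping keys and output format are hypothetical; real projection software has its own input specifications), individual policy records might be condensed into model points like this:

```python
# Minimal sketch: shaping raw policy records into model points for projection
# software. All column names and the grouping keys are hypothetical.
import pandas as pd

policies = pd.DataFrame({
    "policy_id": [101, 102, 103, 104],
    "product": ["WL", "WL", "TERM", "TERM"],
    "issue_age": [35, 35, 40, 42],
    "sum_assured": [100_000, 150_000, 200_000, 200_000],
})

# Group individual policies into the model points a projection engine expects.
model_points = (
    policies
    .groupby(["product", "issue_age"], as_index=False)
    .agg(policy_count=("policy_id", "size"),
         total_sum_assured=("sum_assured", "sum"))
)
model_points.to_csv("model_points.csv", index=False)  # input to a projection run
```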
Overall, I perceive that evolving AI has been affecting actuarial functions in a good way, reshaping how the underlying tasks for those functions are performed. This is often achieved by enhancing speed, consistency and exploratory depth, freeing actuaries to focus more on core actuarial work such as actuarial analysis, stakeholder engagement, and governance of processes and results. More significant effects on the core work, such as pricing and reserving, will likely need much more time to develop, as they depend not solely on a company’s decision but on acceptance from regulators as well.
Overall, what do you think of AI’s potential to disrupt actuarial jobs?
Kittipon: An actuarial job involves a series of complex decisions with nuanced reasoning and judgment, and I don’t think AI is capable of being that “complex” yet. There might be some part of an actuarial job that can be done by AI, but I don’t see actuaries (as a profession) being replaced any time soon. I think the key consideration is whether we need a “human-in-the-loop” for the job—not only in terms of capabilities for getting the job done with good quality and correctness, but also whether we are able to trust the decision, recommendation, or explanation if such deliverables come from AI, and whether the AI can be held responsible for the consequences.
Yes, AI technologies have been evolving at tremendous speed, and some of our work is technically automatable by AI (where the infrastructure allows it). However, an actuarial job is not only about calculating numbers. It involves many parts where a “human-in-the-loop” is still very much needed, such as making nuanced judgments, empathizing with stakeholders or applying ethical reasoning. I believe many technological breakthroughs must still be accomplished before we are able to trust AI to the same degree as humans. And, speaking hypothetically (based on the current state of AI technology), even if AI could do all of our jobs well, in the end there would still need to be someone who understands actuarial principles and systems well enough to set them up, maintain them and keep them in check, making sure they remain fit for purpose over the long run.
What might actuaries do to enhance their competence in AI?
Kittipon: These emerging AI technologies can help us in many ways, and we can learn to embrace them to enhance our work quality and streamline our workflows. As you may have seen, the Society of Actuaries has expanded its curriculum to include AI and ML content as a requirement for the Fellowship designation. Understanding these relatively new concepts, at least well enough to make informed judgments about them, has become an increasingly important part of an actuarial job. Three items of note:
- Understanding how these models and systems operate: AI is a powerful tool that can be used in a wide range of tasks, but different tasks require different kinds of AI and ML architecture, and along with the benefits they bring, new risks are also introduced. If we understand both capabilities and limitations, a proper risk management mechanism and governance process can be put in place to ensure effective and responsible use of such AI solutions.
- Engaging in interdisciplinary collaboration on AI and ML projects: Beyond understanding the concepts, working alongside businesspeople, data scientists and other relevant parties on real projects will help actuaries understand the practical use of AI in real-world settings. Hands-on experience with AI implementation and execution, embedded in business context, goes far beyond theoretical understanding; it pushes actuaries beyond the traditional boundaries of the role and keeps them relevant in this rapidly changing world.
- Getting in touch with evolving technologies: On a personal level, getting used to new AI tools, especially generative AI, is always a good start. A tool that benefits you in personal life may prove beneficial professionally as well, and it may even spark new ideas on how to do things differently and more efficiently. The world is moving in this direction, integrating new AI tools into old processes. To stay relevant, we should learn to be part of that shift and improve our processes along the way.
What ethical challenges do you see with incorporating AI into actuarial decision-making?
Kittipon: As far as I know, there are no actuarial areas where the decision is fully delegated to AI; instead, AI might be used as an augmentation, providing supporting information to back such decisions. It is reassuring that there tends to always be a “human-in-the-loop” for actuarial tasks (at least for now). Still, by being relatively black box compared with traditional actuarial methods, AI and ML introduce several ethical tensions:
- Risk-classification trade-off: Machine learning models are good at giving personalized and relatively accurate predictions (assuming good-quality data can be provided for learning). But the question is, do we really need such personalization in actuarial work? In my reading, the majority of actuarial work, e.g., pricing or reserving, involves setting sufficiently conservative assumptions to cope with both known and unknown risks, then letting the law of large numbers do its work. So there is a tension between optimizing for individual risks and conservative risk pooling. AI and ML can still be very useful for preliminary analysis, but the final decision-making is still dominated largely by traditional actuarial principles and judgment.
- Regulatory concern: These ML models often need nontraditional, or even eccentric, variables and predictive features to work well, which may be difficult to reconcile with regulation. In Thailand, life insurance pricing still adheres to three basic factors: age, gender and generic health condition classification (for substandard cases). These factors are statistically time-proven and simple enough to justify and explain, which is not the case for the innovative factors that might give ML models their best results. In my view, the industry as a whole will need a long time before such factors are well-proven and widely accepted.
- Bias amplification: Historical data can pose a problem if it is not well understood or verified. It often carries inherent biases that are not clear at first glance, and AI models trained or tuned on such data can amplify and perpetuate those biases. This is not to say that the data actuaries themselves use to derive assumptions for pricing and projections is free of bias, but such derivations tend to come with more thorough documentation and justification. That is not always the case with AI and ML systems, especially ones that give nondeterministic predictions or evolve on incoming new data.
- Opacity and accountability: Even the strongest models can still be wrong, and many times we will not be able to completely explain a wrong decision or prediction. In the end, much actuarial work still requires accountability. Are we comfortable signing off on results from an AI that no one fully understands, especially in sensitive matters such as financial reporting or setting prices for life insurance products that will stay in force for a lifetime?
I do think AI and ML can help with multiple areas in insurance companies, but most of the actuarial final work products still rely on traditional statistical techniques and actuarial judgment. This, however, does not prevent us from integrating AI and ML into the actuarial workflow to help with other work that is less sensitive, e.g., extracting data, gathering and summarizing information, or communicating with colleagues and stakeholders.
Do you have any advice for young actuaries on the evolution of AI?
Kittipon: Given the challenges mentioned above, it is more important than ever for the next generation of actuaries to engage thoughtfully with AI. Some short advice:
- Stay curious: Learn not just how AI works, but also what it means for society and the profession. Learn how evolving AI may improve the profession as a whole and how it might allow actuaries to dedicate time to more important work.
- Do not abandon actuarial fundamentals: AI is only useful if paired with a good fundamental understanding of the problem and solution design. You still need a good understanding of the fundamentals to challenge and refine AI solutions.
Young actuaries today are future actuarial leaders. The profession needs your voice and guidance.
FOR MORE
Read The Actuary Asia article, “Focus on Growth.”
Read The Actuary Asia article, “Inspiring New Ways of Thinking.”
Access “AI Impact on Insurance Industries in Greater China and Asia” reports at SOA.org.
What is your vision of how AI might transform the actuarial profession in the next five to 10 years?
Kittipon: Considering how fast things are changing in AI, with newer and better models coming out every couple of months (or sometimes even weeks), it is extremely challenging to guess what things will look like in the future. However, I believe AI will continue to evolve as a supporting infrastructure for actuarial work, rather than a replacement for it. Regulatory compliance and robustness play a big role in our work, and that is one of the key reasons why I doubt the core actuarial profession will change much in five to 10 years, especially in the value our profession brings to stakeholders. However, these are my wild guesses about what the profession will look like with AI involvement:
- AI will be a catalyst in expediting actuarial work: With greater integration, AI tools will become more embedded in actuarial systems, helping with data preparation, assumption monitoring, and even automatic production of financial reporting numbers. Actuaries will increasingly act as reviewers and interpreters of these outputs, ensuring they remain fit for purpose, as well as logically and ethically sound. If data flows are better integrated throughout an organization, and are all synchronized and refreshed in a timely manner, even actuarial assumptions can be automatically studied and updated, given predefined principles and actuarial formulas. Actuaries can then play a larger role in validating assumptions, documenting limitations and ensuring regulatory compliance.
- Expansion of professional responsibilities: Apart from the growing involvement in innovative projects that I mentioned previously, actuaries will need to develop new skills in areas like model interpretability, model governance or even system design. This does not mean that actuaries will become data scientists, but rather that they need enough understanding of data science to critique, and potentially suggest refinements to, AI solutions. Some AI-related initiatives in actuarial areas can only be carried out by actuaries who understand actuarial principles well; hence, nontraditional actuarial roles may emerge that use these skills to build systems that work.
- Preservation of core principles: Despite technological advances, the actuarial profession will continue to rely on principles like conservatism, credibility and long-term solvency. AI may help actuaries explore more data or simulate more scenarios, but the final judgment will still rest on human reasoning and ethical responsibility.
Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.
Copyright © 2025 by the Society of Actuaries, Chicago, Illinois.
