Upping Your Game

How to use behavioral economics to optimize decision-making and become a better actuary

By Kurt J. Wrobel

Behavioral economics, an often-used term, describes a relatively new approach to understanding how individuals make decisions. Its concepts have been successfully applied across a wide range of fields, and the economics profession has recognized its influence with three Nobel prizes.

One concrete application of behavioral economics has been at technology startups, where companies are developing programs to reduce operational costs and create a better overall consumer experience. Lemonade, for example, is a property and casualty insurance startup that uses the principles of behavioral economics to encourage the truthfulness of submitted claims. Instead of relying on more traditional insurance techniques, the company requires people to sign an honesty pledge and submit a video describing their property loss. Using algorithms informed by behavioral economics, the company evaluates the claim and can make a payment quickly, in as little as three seconds!

Behavioral economics can be seen as a reaction to traditional economic theory and its assumptions. In traditional economics, people are assumed to rationally maximize their happiness, or “utility function,” given their income. In making purchasing and other decisions, they are assumed to logically review all available information and systematically choose whatever optimizes their happiness. Because people are assumed to always behave rationally, this framework implicitly expects them to incorporate complex information and data to reach the best possible decision. A tangible example of this rational decision-maker, always optimizing on all of the available information, is Spock from Star Trek.

In contrast to this rational framework, behavioral economics assumes people make decisions based on evolved survival strategies. In the ancient environment where those instincts developed, our primary goal was not to make well-reasoned, statistically sound decisions based on data; it was to form simple explanations that prompted immediate action to ensure survival. Daniel Kahneman described this biologically evolved behavior in his book Thinking, Fast and Slow: “We are born prepared to perceive the world around us, recognize objects and fear spiders.”1

This insight is important for actuaries. Most people do not share our predisposition for answering questions in a statistically sound manner. That is not a value judgment, but simply a recognition of reality. After years of studying actuarial science, we have developed skills to answer important business questions with statistical tools, and we are far more likely to use them than people in other professions.

Potential Blind Spots for Nontechnical Individuals

Nontechnical leaders and managers can have a number of blind spots when it comes to making statistical decisions. Here are some I have experienced in my actuarial career.

  • Mistaking correlation for causation. A large data set can reveal many interesting relationships that appear causal but are in reality only correlations. Acting on a correlation as if it were a cause can lead to a poor decision. While this mistake may have been tolerable in an ancient environment where errors were not especially costly, it can be very expensive where an unsound statistical decision could lead to a significant financial loss.
  • Insensitivity to sample size. Many people have remarkably little concern about drawing conclusions from small sample sizes. While this can occasionally be appropriate when no other data is available, it is our job to ensure conclusions are not drawn from a small number of observations, where results are subject to wide variability (see the simulation sketched below).
  • The Black Swan problem. Nassim Taleb has discussed the problem of people putting too much importance on recent historical events without accounting for the likelihood of a low probability event—a so-called black swan that could have a profoundly positive or negative impact.1
  • Narrative bias. People love stories that highlight their preferred version of events. A story that links a successful outcome with their skills or the actions of their team is attractive and likely to be believed. The problem, of course, is that this version of events may be inaccurate and the decision-makers could overstate their skills in producing an outcome.
  • Outcome bias. While few would argue that results do not matter, an outcome can be favorable even when the decision-making process was deeply flawed. If only the outcome is judged, one could inadvertently conclude that a flawed process was sound.
  • Availability bias. People judge an event as more likely when examples of it are easily recalled or when information about it is readily available.
  • Focusing too much on the trivial (Parkinson’s law of triviality). With the advent of big data, the opportunity to analyze more information has expanded exponentially. With so much more information available, people are far more likely to dwell on trivial details that are unlikely to improve decision-making.

Reference
1. Taleb, Nassim Nicholas. 2007. The Black Swan: The Impact of the Highly Improbable. New York: Random House.
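
Because insensitivity to sample size is such a common blind spot, it can help to show it rather than describe it. The following minimal Python sketch uses invented claim parameters (not data from any real block of business) to show how average claim-cost estimates drawn from small samples swing far more widely than estimates drawn from large samples.

```python
# A minimal sketch of sampling variability; all parameters are invented
# for illustration and do not reflect any real block of business.
import random
import statistics

random.seed(42)

def average_claim_cost(n_members: int) -> float:
    """Average simulated annual claim cost over a sample of n_members.

    Each member has a 5% chance of a large claim (around $50,000) and
    otherwise incurs routine costs (around $1,000).
    """
    total = 0.0
    for _ in range(n_members):
        if random.random() < 0.05:
            total += random.gauss(50_000, 10_000)  # rare, large claim
        else:
            total += random.gauss(1_000, 300)      # routine costs
    return total / n_members

# Re-estimate the average cost 200 times at each sample size and compare
# how much the estimates themselves vary.
for sample_size in (50, 5_000):
    estimates = [average_claim_cost(sample_size) for _ in range(200)]
    print(
        f"n={sample_size:>5}: mean estimate ${statistics.mean(estimates):,.0f}, "
        f"std dev of estimates ${statistics.stdev(estimates):,.0f}"
    )
```

Run as-is, the 50-member estimates scatter by thousands of dollars per member while the 5,000-member estimates cluster tightly, which is precisely the variability this blind spot ignores. Only the standard library is used, so the sketch runs anywhere Python does.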

Considering the lessons from behavioral economics, recommendations developed with statistical tools must account for this gulf between rational statistical decision-making and the survival-driven responses of those without a background in risk assessment. In addition, while we may use a well-reasoned analytic framework in most circumstances, we need to understand the situations where actuaries and other technical experts have blind spots of their own, blind spots that behavioral economics can predict and that we need to recognize when making decisions. This article discusses how the principles of behavioral economics can improve our own practice as actuaries and help ensure better decisions for our organizations: communicating with nontechnical leaders, managing statistical biases among nontechnicians, understanding biases among actuaries and other technical experts, and using the power of teams to manage individual biases.

Creating the Best Possible Story: Communicating With Nontechnical Leaders

The behavioral economics literature suggests that information must be presented in a clear, easy-to-understand manner to effectively influence a decision-maker. While this may seem obvious, the research underscores just how much this matters in making a compelling presentation. One article referenced in Thinking, Fast and Slow, “Mind at Ease Puts a Smile on the Face,” succinctly relays how a straightforward presentation comes to be viewed favorably and in a sympathetic light. As Kahneman notes, “When you feel strained, you are more likely to be vigilant and suspicious.”2

Developing a good story, one that is concise and easy to understand, is the most important part of a clear analytical discussion. Think of it as a 60-second elevator pitch that must be explained simply and understood quickly. The story should start with a question or objective supported by high-level metrics that highlight the problem being confronted. Personal stories, visible actions and outcomes should be used along with data to create a connected story. The discussion should also include an operational plan and metrics to keep the project on track. While the structure is fairly simple (question, supporting data, proposed solution, metrics to track progress), the real trick is connecting these sections with a compelling and simple narrative.

A powerful and compelling story is needed to counter competing stories that could lead to a less-than-optimal decision. In a large organization, people are keen to create stories, with clear narratives linking an action to an outcome, that are advantageous to them. As the behavioral economics literature suggests, these stories often ascribe far more importance to an individual’s actions and recent events than to simple randomness. As frustrating as this may be for someone with an analytic orientation, we have little choice: stories compete with one another, and, in many cases, a compelling story with little analysis can win over a decision-maker more effectively than a well-developed analysis.

Beyond creating an effective story, several other techniques can help ensure cognitive ease and acceptance of your broader narrative. Repetition, particularly in informal discussions that precede a more formal presentation, can reinforce a technical discussion and protect against your audience growing suspicious of a new concept. These informal discussions have the added benefit of exposing weak spots in a story, allowing you to address those problem areas before the formal presentation.

When having these informal discussions, always engage the final decision-maker early in the process. Early engagement will likely leave the decision-maker with a much more sympathetic perspective toward the final work product. The behavioral economics literature calls this the “IKEA effect,” after the high value we place on objects we have assembled ourselves.


In the actual PowerPoint presentation, clarity should be paramount. Use pictures or graphs (taking advantage of the “picture superiority effect”), limit the number of words on each page, and keep the presentation short. These techniques contribute to clarity and to the ultimate acceptance of your position.

Finally, the presenter should open the presentation with a simple story and conclude with one as well. A simple story that connects disparate ideas and facts is most likely to produce the best measure of success: an audience that can succinctly articulate your position in a few minutes.

Statistical Blind Spots Among Nontechnical Managers

While the starting point for any presentation is a compelling story, we also need to manage the statistical blind spots of nontechnical managers. This vigilance helps ensure that competing storylines built on faulty analysis do not sway the decision. By discussing these technical problems early, before conclusions are formed, we can also place nonrigorous analysis out of bounds in the decision-making process.

The list of potential blind spots is long (see sidebar). As one communicates and works with nontechnical leaders, it is important to consider these biases and others that could form the foundation of their thinking.

Understanding Blind Spots Among Technical Experts

As technical experts, we tend to see the blind spots of nontechnicians as frustrating limitations that must be overcome to improve decision-making. But although we may believe we are immune to problems of statistical thinking, technical professionals have blind spots of their own, and we need to recognize the most common pitfalls.

Certain biases are more likely to be found among technical experts. Like the biases of managers outside our profession, they are important to keep in mind when doing analytic work and making big decisions.

  • Confirmation bias. In an effort to prove previously held beliefs correct, one may comb through data for information that supports a position and put less weight on information that does not. While this bias can be entirely unconscious, the end result is a less-than-optimal decision that merely confirms a previously held belief (the simulation sketched after this list shows how easily data can be made to “confirm” a belief).
  • Pro-innovation bias. Most people like new concepts and inventions. In the excitement over an innovation, many may overlook its shortcomings and overstate its usefulness. Technically oriented people, in particular, may embrace a complex statistical formula even when it adds little to the decision-making process.
  • Information bias. Actuaries and other technical experts love additional data. While this data has the potential to better inform decision-makers, in many cases it will simply add complexity and not positively affect the decision-making process.
  • Empathy gap. Data is power, and we may understate what it means to others. Statistical observations can affect an individual’s livelihood and self-worth, and this needs to be considered as part of any presentation.
  • Automation bias. Many technical experts love automation. Automation can make a process more streamlined and reduce the number of resources required to accomplish a task. The goal to achieve ever more automation, however, could introduce the potential for significant errors that technical people may fail to fully consider. For example, a more automated system may have fewer checks and balances and may be more likely to produce a major error than a manual system.
  • The law of the instrument. Analysts tend to reach for the familiar techniques of their own disciplines. Economists will employ linear regressions. Accountants will require a final true-up of assumptions. Marketing professionals will frame a question in terms of generational cohorts such as millennials or Generation X. Most of us share this inherent bias, and we need to account for it as we consider the best technique to answer a business question.
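
To make confirmation bias (and the related information bias) concrete, consider the hypothetical Python simulation below. It generates nothing but independent random noise, yet combing through enough candidate “factors” always surfaces a few that correlate with the outcome by chance, which is exactly the raw material a motivated analyst needs to “confirm” a prior belief. All names and parameters are invented for illustration.

```python
# A minimal sketch of data dredging: every series below is independent
# random noise, yet some "factors" will correlate with the outcome anyway.
# Requires Python 3.10+ for statistics.correlation.
import random
import statistics

random.seed(7)

N_OBS = 30          # a modest number of observations
N_PREDICTORS = 200  # many candidate explanations to comb through

outcome = [random.gauss(0, 1) for _ in range(N_OBS)]
predictors = {
    f"factor_{i}": [random.gauss(0, 1) for _ in range(N_OBS)]
    for i in range(N_PREDICTORS)
}

# Rank the noise "factors" by the strength of their correlation with the
# (equally random) outcome.
ranked = sorted(
    predictors.items(),
    key=lambda item: abs(statistics.correlation(item[1], outcome)),
    reverse=True,
)

for name, values in ranked[:3]:
    r = statistics.correlation(values, outcome)
    print(f"{name}: correlation with outcome = {r:+.2f}")
```

With 200 candidate factors and only 30 observations, the top correlations typically land near ±0.5 despite having no causal content at all; an analyst hunting for support will find it. The cure is discipline: state the hypothesis before combing the data, and discount relationships found only after the fact.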

Managing Teams: Creating an Environment to Avoid Bias

This article has focused on techniques to avoid bias and ensure an optimal decision-making process. Whether technical experts or nontechnical leaders, most people have individual biases that need to be managed. Some people are susceptible to easy-to-understand stories, confirmation bias, overly complex models, incentive structures that reward a biased conclusion or many other distortions.

Beyond making every attempt to understand your own biases and those of others, it is important to create an environment in which the team can limit the biases of any one individual. The best approach is to build a team that feels empowered to question the conclusions of any member of the group. This competitive tension among team members with firm positions can improve the collective decisions of the entire team.

While team members can reduce the bias of any one member, they can also create a social environment that leads to less-than-optimal decision-making if not properly managed. For example, if a particularly strong-willed individual aggressively takes a position, others may agree because of that person’s conviction rather than a thoughtful consideration of the argument. This is known as the “bandwagon effect.” Similarly, a team may not want to aggressively question someone out of simple courtesy (known as “courtesy bias”). In both cases, the social dynamic should be managed to ensure everyone has a meaningful voice in the process.

Solutions could include requiring people to provide an opinion in writing before the discussion begins, or rigorously enforcing an inclusive discussion and decision-making process. Creating competing teams required to argue opposing positions, even positions different from what they actually believe, could also sharpen the discussion and the final decision.

As actuaries, we need to take the world as it is rather than as we want it to be. While we may think the world should be completely rational and free of statistical blind spots, it is not, and we need to direct our work toward that reality rather than the ideal. We need to create compelling, easy-to-understand stories that can compete with less statistically sound ones that would otherwise drive suboptimal decisions. We need to anticipate and manage the statistical blind spots of less technical managers. At the same time, we need to manage our own blind spots and rely on well-run teams to check the biases of any one individual. This process is much less concrete than a traditional actuarial analysis, but it is necessary to make better decisions and become a more effective actuary.

References
1. Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
2. Ibid.

Kurt J. Wrobel, FSA, MAAA, is chief financial officer and chief actuary at Geisinger Health Plan in Danville, Pennsylvania.

Copyright © 2018 by the Society of Actuaries, Chicago, Illinois.