Improving Strategic Risk Management

Keeping the science in actuarial science

By Bryon Robidoux


Strategic risk management is the art of protecting an organization from the dangerous downside of risk while exploiting risk for its potential opportunities. As actuaries, we must constantly ask:

  • Which risks can I exploit due to my competitive advantage?
  • Which risks are insignificant, so I can leave them naked?
  • Which risks are dangerous and should be hedged?

Given the dynamic nature of risk, these choices can be framed within a real options approach, such that there is value in the option to expand, contract or delay.1 As time progresses, our competitive advantages may change; risks may emerge that we can no longer leave naked; and business may mature to the point that it no longer poses significant risk and hedging can be stopped. Real options give organizations the ability to learn from what is going on around them and modify their behavior accordingly.2

The optimal timing and method for exercising real options, though, are largely matters of judgment. To improve strategic risk management capabilities, actuaries need to create nimble organizations that are built for learning and that exercise their options well by making better judgments. To do this, we must answer the following questions:

  • Where are the potential errors in our judgments?
  • What are the properties of good judgments?
  • How can we learn to make better judgments?
  • How do we design an organization for learning?
  • How do we create incentives within the organization so that learning takes place?

Errors in Judgment

Before talking about direct sources of error, let’s first define an error metric. In Noise: A Flaw in Human Judgment, Daniel Kahneman and his co-authors, Olivier Sibony and Cass R. Sunstein, use mean squared error (MSE) as the overall error metric, which this article also will adopt.

The error in any single measurement is bias plus noise. Bias is error that is systematically off target in a particular direction: for example, you aim for the bull’s-eye on a dartboard, but all the darts hit the 20 at the top of the board instead. Noise is variability in the outcome; in a similar example, the darts would be spread erratically all over the board.

Statistical bias and noise are orthogonal to one another: in terms of the overall metric, MSE = bias² + noise², so reducing either one reduces the error in the system. There is not, however, a one-to-one relationship between statistical bias and noise and their psychological counterparts. Psychological bias and noise are not orthogonal, because a psychological bias can cause both statistical bias and statistical noise.
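
To make the decomposition concrete, below is a minimal simulation sketch in Python; the target, offset and spread are made-up numbers. It generates a large batch of judgments with a known systematic offset (bias) and a known spread (noise) and confirms that the MSE comes out to roughly bias² + noise²:

    import numpy as np

    rng = np.random.default_rng(0)

    true_value = 100.0   # the quantity being judged
    bias = 5.0           # systematic offset shared by all judgments
    noise_sd = 8.0       # spread of the judgments around their own mean

    judgments = true_value + bias + rng.normal(0.0, noise_sd, size=1_000_000)
    errors = judgments - true_value

    mse = np.mean(errors ** 2)
    print(f"MSE:            {mse:.1f}")                    # ~89.0
    print(f"bias² + noise²: {bias**2 + noise_sd**2:.1f}")  # 89.0

Because the two terms add, shrinking noise reduces total error just as surely as shrinking bias does.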

Statistical Bias

In general, there is a lot of focus on bias but little focus on noise. Bias feels more tractable because its directional consistency makes it easier to explain with causality.

Our brains are machines specifically designed to look at the facts of a situation and find a coherent story to explain what happened. We will search for answers until the facts match up with our experiences, which Kahneman calls the “illusion of validity.”

Psychological Biases

There are many types of psychological bias, but a few that are especially applicable to actuaries are:

  • Overconfidence
  • Fundamental attribution error
  • Hindsight bias

A close relative of the illusion of validity is overconfidence bias: the belief that we can make accurate predictive judgments even when limited data is available. When making predictive judgments, professionals commonly underestimate both epistemic risk (uncertainty that more knowledge could reduce) and aleatory risk (randomness that no knowledge can remove). Kahneman calls the sum of these unknowns “objective ignorance.”

This underestimation of risk naturally leads to the fundamental attribution error: a tendency to assign blame or credit to agents for actions and outcomes that are better explained by luck or objective circumstances.3 And due to hindsight bias, most situations appear certain and easily explainable after the fact.

Statistical Noise

Given the random variation of noise, it does not fit well in the causality box—it is difficult to come up with a coherent story due to the erratic pattern. As Kahneman so eloquently put it, “Causally, noise is nowhere; statistically, it is everywhere.”

Noise requires statistical thinking, which is challenging because it demands System 2 thinking: slow, methodical and energy intensive. Stories and causality rely on System 1 thinking, which is fast and far less energy intensive. This inability to explain noise can make noisy results difficult to justify in the business world, because stories help drive decisions.

Psychological Noise

To address noise, we need to break it down into its fundamental components. As defined by Kahneman, noise can be broken into level noise and pattern noise. Level noise is the variability in the average judgments made by different people in the same situation and given the same facts. Pattern noise is the variability within an individual’s own judgments across cases.

Pattern noise can be dissected further into stable noise and occasion noise. Stable noise is a person’s consistent, idiosyncratic response to particular cases. Occasion noise is random variation in judgment driven by unrelated external factors, such as mood, temperature or the order of wording.4

The best way to find noise is to perform a noise audit: analyze past judgments for consistency and quantify the amount of noise and bias in the decision-making process. As an example, Kahneman successfully ran a noise audit on an insurance company’s underwriting practices to find the noise in claim assessments. A similar audit could be conducted in the actuarial domain on asset and liability management or hedging processes; both should have a sufficient volume of judgments to yield decent statistics.
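
As a rough sketch of what such an audit might compute (with entirely hypothetical data), the matrix below holds the premium each underwriter would quote for each case. After removing the case effects, the remaining variability splits into level noise (differences in the underwriters’ average levels) and pattern noise (everything left over); with only one judgment per cell, stable and occasion noise cannot be separated:

    import numpy as np

    # Hypothetical audit data: rows are underwriters, columns are cases,
    # and each cell is the premium that underwriter quoted for that case.
    judgments = np.array([
        [ 9800.0, 12100.0,  7600.0, 15200.0],
        [11200.0, 13900.0,  8900.0, 17800.0],
        [10100.0, 12800.0,  8100.0, 16000.0],
    ])

    grand_mean = judgments.mean()
    judge_means = judgments.mean(axis=1)  # each underwriter's average level
    case_means = judgments.mean(axis=0)   # each case's average judgment

    # System noise: average variability across underwriters within a case.
    system_noise = (judgments - case_means).var()

    # Level noise: variability of the underwriters' average judgments.
    level_noise = judge_means.var()

    # Pattern noise: residual variability after removing judge and case effects.
    residuals = judgments - judge_means[:, None] - case_means + grand_mean
    pattern_noise = residuals.var()

    # In a balanced audit, the variances add: system = level + pattern.
    print(f"system noise:  {system_noise:,.0f}")
    print(f"level noise:   {level_noise:,.0f}")
    print(f"pattern noise: {pattern_noise:,.0f}")

The components are reported as variances so that they add; taking square roots expresses each as a standard deviation in premium dollars, the more intuitive scale for an audit report.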

Properties for Making Good Judgments

The Intelligence Advanced Research Projects Activity (IARPA), part of the U.S. intelligence community, sponsored a multiyear tournament to forecast the outcomes of world events. The Good Judgment Project, a research team that entered the tournament in 2011, repeatedly won it; by analyzing what its most accurate forecasters did differently, the project identified a group it calls “superforecasters.”

In Superforecasting: The Art and Science of Prediction, Philip Tetlock and Dan Gardner distill how these forecasters improve their judgments into “Ten Commandments for Aspiring Superforecasters.” My five favorites are:

  1. Strike the right balance between inside and outside views. No situation is truly unique, so if you are estimating the probability of an event, your starting prior should be the base rate of occurrence in a comparable population.
  2. Strike the right balance between under- and overreacting to evidence. This is the ability to update your beliefs by reading the tea leaves without succumbing to wishful thinking (see the sketch after this list).
  3. Strive to distinguish as many degrees of doubt as the problem permits, but no more. Nuance matters, and there is more to uncertainty than a simple scale of certain, maybe or impossible.
  4. Don’t treat commandments as commandments; guidelines serve best in a world that is uncertain and never repeats itself exactly.
  5. Bring out the best in others and let others bring out the best in you by understanding the arguments of the other side, helping others clarify their arguments and learning how to disagree without being disagreeable.5
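
Commandments 1 and 2 are essentially Bayesian: start from a base-rate prior (the outside view), then let each piece of evidence move the probability in proportion to its strength. Here is a minimal sketch; the base rate and likelihood ratios are invented for illustration:

    def bayes_update(prior_prob, likelihood_ratio):
        """Update a probability using Bayes' rule in odds form."""
        prior_odds = prior_prob / (1.0 - prior_prob)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # Outside view: a hypothetical 5% base rate for the event in question.
    p = 0.05

    # Each signal carries a likelihood ratio:
    # P(signal | event) / P(signal | no event). Ratios above 1 support the
    # event; ratios below 1 cut against it.
    for lr in [2.0, 1.5, 0.8]:
        p = bayes_update(p, lr)
        print(f"updated probability: {p:.3f}")

A likelihood ratio of 1 leaves the probability unchanged, which is the discipline against overreacting; refusing to update on a ratio far from 1 is underreacting.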

At some level, these five commandments boil down to being able to rethink problems, unlearn old perceived truths and stay flexible enough to consider other points of view.

Adam Grant explains that forecasting is less about what we know and more about how we think.6 When researchers studied the factors that led to the best forecasts, the ability to change one’s beliefs was more predictive than intelligence, grit or ambition. Grant further states that to have better judgment, we must think like scientists, who are always experimenting to find the truth and for whom being wrong is part of the job. Therefore, we need to make sure we don’t forget the science in actuarial science. We need to experiment, find ways to update our beliefs as emerging risks evolve and learn new techniques for managing risk.

Learning How to Improve Judgments

Psychology studies have shown that learning how to make better judgments (as superforecasters do) requires training, selection and teaming. The training curriculum teaches probabilistic reasoning, common psychological biases, the value of averaging independent predictions, and how to find reference classes of similar events for comparison. Selection means identifying the best forecasters by holding internal competitions to see who performs best over time. Teaming means having multiple forecasters debate one another’s predictions so they hear opposing views.
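
The value of averaging independent predictions is easy to demonstrate. In the hypothetical simulation below, each of 10 forecasters is unbiased but noisy; averaging them cuts the expected squared error by roughly a factor of 10, because the variance of a mean of n independent judgments is the individual variance divided by n:

    import numpy as np

    rng = np.random.default_rng(1)

    truth = 50.0
    n_forecasters, n_trials = 10, 100_000

    # Independent, unbiased forecasts with a made-up noise level of 10.
    forecasts = truth + rng.normal(0.0, 10.0, size=(n_trials, n_forecasters))

    individual_mse = ((forecasts[:, 0] - truth) ** 2).mean()
    crowd_mse = ((forecasts.mean(axis=1) - truth) ** 2).mean()

    print(f"single forecaster MSE: {individual_mse:.1f}")  # ~100
    print(f"crowd-average MSE:     {crowd_mse:.1f}")       # ~10

The catch is the word “independent”: as the next subsection stresses, judgments shared too early stop being independent.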

Training

Given that every insurance organization has committees responsible for overseeing and making decisions, it is worth highlighting the importance of keeping individual judgments independent, both to reduce bias in the committee’s decision and to prevent harmful noise. The order in which the group receives information can disproportionately sway a decision toward whatever information comes first. Furthermore, when a group reaches consensus, it often ends up with a more extreme point of view than the average of its members’ original views.

These phenomena are called group cascade and group polarization, respectively. One useful method for dealing with both is the mini-Delphi, which requires participants to first make separate, silent estimates or judgments; then explain and justify them; and then make new estimates based upon the explanations and justifications they have heard.
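
A mini-Delphi round might look something like the sketch below. The revision rule is a stand-in for how a participant actually updates after hearing others’ justifications, and all the numbers are invented:

    import statistics

    def mini_delphi(silent_estimates, revise):
        """One mini-Delphi round: silent estimates, discussion, then revision.

        `revise` maps (own_estimate, all_estimates) -> new estimate, standing
        in for how a participant updates after hearing the group's reasoning.
        """
        revised = [revise(e, silent_estimates) for e in silent_estimates]
        return statistics.median(revised)

    # Hypothetical first-round judgments of, say, a reserve margin in basis points.
    silent_round = [120, 180, 95, 150, 210]

    # Made-up revision rule: move one-third of the way toward the group median.
    def toward_median(own, all_estimates):
        return own + (statistics.median(all_estimates) - own) / 3

    print(mini_delphi(silent_round, toward_median))  # 150

Collecting the first-round estimates silently is what protects independence; taking the median of the revised estimates keeps a single loud voice from dragging the group.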

Teaming

Teaming is all about debating and coming to a consensus on judgments. Intelligence quotient (IQ) is important for superforecasting, but teaming is where emotional intelligence (EQ) can improve the outcome. EQ is the ability to identify and regulate one’s own emotions and to empathetically recognize other people’s emotions in order to communicate effectively and build healthy relationships. IQ and EQ are mostly independent forms of intelligence.7

Grant reminds us that without EQ, our beliefs and ideologies would be tied to our identities. This is especially true with predictive judgments that have high levels of uncertainty. Without thinking about it, we slip into either preacher mode to show we are right, prosecutor mode to show someone else is wrong or politician mode to rally our supporters. The irony is that when we are in these modes, the receiving parties are less likely to listen and conflicts are more likely to arise.

Furthermore, Grant states that high-functioning groups keep relationship conflict to a minimum but are willing to entertain task conflicts and competing ideas from the outset. It is OK to wrestle differences to the ground, but make sure the conflict is focused on the problem and not directed at any person or group.

Conflicts can be filled with emotion because pride, passion, fear and insecurity are all trapped deep within our psyches. You may think it is possible to divorce emotions from your decisions, but our brains are wired so that signals pass first through the amygdala, which processes emotions, before reaching the frontal lobes, which handle higher-level thinking. People whose amygdala is damaged are unable to make decisions because they lack the ability to care about the outcome.8 The amygdala also controls the fight-or-flight response, so as tension and frustration rise, it is important to be emotionally intelligent and not take differences of opinion personally.

Designing an Organization for Learning

To improve our judgments, we must have organizations built for learning and for rethinking past strategies. When it comes to an organization’s ability to learn, it must consider its corporate structure, which you can think of like software architecture. (To learn more, watch Brendan Burns’ presentation, Why You Should Care About Microservices, from Microsoft’s 2020 .NET Conf: Focus on Microservices. What matters is not how the microservices technology works but how it allows an organization to adapt quickly to change.)

A monolithic corporate structure has several silos with many different non-cohesive responsibilities. The data and projects in one silo may affect and depend on other silos or, worse, duplicate them. A service-oriented corporate structure modeled on microservices, on the other hand, has loose coupling and tight cohesion. Each team provides a cohesive set of services through a well-defined interface, and each service has a contract that states the type of data it expects to receive, the expected behaviors and outputs, and a deprecation policy for when and how the service might change. A service-oriented corporate structure therefore allows massive collaboration with minimal coordination across the organization: the interfaces encapsulate changes to each service, and the deprecation policies guarantee the stability of the interface and its behavior.
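
In software terms, such a contract might look like the following sketch, where the service, types and fields are all hypothetical. Consumers depend only on the interface, so the owning team can overhaul its internals at will, and the version field signals changes under the deprecation policy:

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass(frozen=True)
    class QuoteRequest:
        product_code: str
        issue_age: int
        face_amount: float

    @dataclass(frozen=True)
    class Quote:
        annual_premium: float
        schema_version: str  # bumped, per the deprecation policy, before any breaking change

    class PricingService(Protocol):
        """The contract other teams code against. How the pricing team
        computes the quote internally is free to change at any time."""
        def quote(self, request: QuoteRequest) -> Quote: ...

The interface is small and cohesive, and nothing about the pricing team’s internal models leaks through it; that is the loose coupling that lets teams change direction without organization-wide coordination.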

Why is corporate structure important for learning? According to Grant’s book Think Again, there is a misconception that strong leaders stick to their decisions and do not waver. I believe this misconception stems from outdated monolithic organizational structures, in which moving in a new direction causes decision whiplash because of all the coordination required. Service-oriented organizations, like Amazon and Netflix, take a scientific approach and run small, controlled experiments to quickly learn what works. They can rapidly change direction based upon what they learn without suffering the painful consequences of massive coordination across business units.

Incentivizing an Organization to Learn

How do we build an organizational culture that promotes learning? According to Grant, we need psychological safety and process accountability. Psychological safety is about fostering a climate of respect, trust and openness in which people can raise concerns and suggestions without fear of reprisal. Process accountability is about considering how carefully different options were examined during the decision-making process. Only when we combine the two do we create a learning zone where people feel free to experiment and question one another’s experiments in the spirit of improving outcomes.

In performance cultures, on the other hand, Grant describes how we tend to cling to best practices and reward based only on outcomes. Best practices imply there is nothing left to improve and nothing left to learn, so they discourage learning. Rewarding outcomes alone incentivizes people to stick with the status quo for fear of making mistakes, and a lack of psychological safety makes people afraid to speak up, so improvement does not happen.

Conclusion

To optimize the real options of strategic risk management, actuaries need to optimize their ability to learn and to make better predictive judgments. The power of superforecasters lies in their intelligence and, more important, in their ability to learn and rethink how they approach problems. They are emotionally intelligent and able to divorce their views and beliefs from their identities, so they can stay open-minded and comfortable with being wrong.

The behavior of superforecasters is counter to the misconception of a strong leader as someone who is decisive and sticks with their decisions. This belief is entangled with outdated corporate structures due to the massive coordination required in a monolithic corporate entity.

Once the corporate structure is modernized and learning is incentivized through process accountability, organizations become agile: able to learn quickly from experimentation and to change course as their ever-changing environment demands. Only once we specifically design organizations to be learning machines can we make the better judgments that improve our strategic risk management.

Bryon Robidoux is an adjunct instructor at Maryville University. He is also a contributing editor for The Actuary.

Statements of fact and opinions expressed herein are those of the individual authors and are not necessarily those of the Society of Actuaries or the respective authors’ employers.

Copyright © 2022 by the Society of Actuaries, Schaumburg, Illinois.