Case studies for model error events are difficult to come by. The errors we have found in models remain confidential, and discussing errors in general does not have the same impact as an actual case study. I have searched the public domain for case studies that relate to financial institutions and have found the following examples:

Fannie Mae

While in the midst of changing its accounting system, the company's finance team relied on spreadsheets to make calculations required by a new accounting standard. One problem: the spreadsheets contained errors that skewed results by over $1.1 billion. The company discovered a $1.136 billion error in total shareholder equity due to "honest mistakes made in a spreadsheet used in the implementation of a new accounting standard," and it had to restate its 2003 third-quarter financials.

Fidelity

The company’s well-known Magellan fund was forced to cancel a $4.32/share year-end dividend distribution. The problem? A missing negative sign. A tax accountant omitted a minus sign when transcribing the net capital loss (of $1.3 billion) from the fund’s financial records to a spreadsheet. This turned the loss into a gain, causing the dividend estimate to be off by $2.6 billion.
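For what it is worth, the arithmetic behind that headline is simple enough to sketch. The dollar figures below come from the account above, but the variable names and the reconciliation check are my own illustration, not Fidelity's actual process:

```python
# Hypothetical illustration of the Magellan transcription error; the dollar
# figures come from the account above, the check is a sketch of my own.

true_figure = -1.3e9                   # actual net capital LOSS of $1.3 billion
transcribed_figure = abs(true_figure)  # minus sign dropped in transcription: +$1.3 billion

# A sign flip doubles the damage: the distributable amount is overstated by
# the transcribed figure minus the true figure.
error = transcribed_figure - true_figure
print(f"Dividend estimate overstated by ${error / 1e9:.1f} billion")  # $2.6 billion

# A simple reconciliation control: re-derive the figure from the source record
# and flag any transcription that does not match.
source_record = -1.3e9
if transcribed_figure != source_record:
    print("WARNING: transcribed figure does not match the source record")
```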

And these are just errors from spreadsheet models. Errors in acquired (third-party) models can arise both from an error in a formula (sometimes one customized for the financial institution) and from errors in the user-driven assumptions. That brings me to the topic of why we care about model risk (besides the obvious reason that the regulators have told us to).

A few weeks ago I was listening to a webcast by Jos Gheerardyn, the CEO of Yields.io, entitled Model Risk Management during Periods of Stress. Jos has created a company that gives firms the ability to perform model validation using an A.I.-based platform, a unique and innovative approach. Jos and I are both focused on model risk management, but from different, yet complementary, vantage points. As such, I am always interested in what Yields.io is doing, because what they are doing is where the rest of us will be heading.

This presentation, in addition to discussing in a very understandable way how technology makes model validation more efficient and effective, started off with a reminder of why we are interested in model risk management and model validation. I am going to recap some of the points he covered because I think it is a good idea to revisit why we do what we do in model risk management.

What is a Model?

As defined by the FRB, “The term model refers to a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques and assumptions to process input data into quantitative estimates.” This definition tells us two things: first, it is a very broad and all-encompassing definition; second, it is up to the industry (and us) to determine which models are material and therefore in need of model validation.

As an example, many years ago I developed a model for teller staffing based on historical teller transactions. I had a nice data set that included volumes and how long it took to process a transaction. Were there errors in the model? Most likely. But even if there were significant errors that, say, called for no tellers or 100 tellers during a shift, oversight by the eyes of the “Expert” would have meant the impact of an error was minimal and correctable. This is, in my eyes at least, an immaterial model from the standpoint of oversight and validation. But keep in mind that materiality is in the eyes of the beholder. A rough sketch of what such a model boils down to appears below.
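As a rough sketch only (the function, volumes, and thresholds below are hypothetical, not the original model), a staffing model of this sort reduces to a small calculation, and the “Expert” oversight amounts to bounding the output before it drives a schedule:

```python
import math

# Hypothetical sketch of a teller staffing calculation: convert forecast
# transaction volume and average handle time into tellers needed per shift.
def tellers_needed(transactions_per_hour: float,
                   minutes_per_transaction: float,
                   utilization_target: float = 0.8) -> int:
    """Return the number of tellers required to work the forecast volume."""
    workload_hours = transactions_per_hour * minutes_per_transaction / 60.0
    return math.ceil(workload_hours / utilization_target)

staff = tellers_needed(transactions_per_hour=90, minutes_per_transaction=3)

# The "Expert" oversight step: an obviously absurd answer (0 tellers, or 100)
# is caught before it drives a schedule, which is what keeps the model immaterial.
assert 1 <= staff <= 15, f"Implausible staffing level: {staff}"
print(f"Tellers needed for the shift: {staff}")
```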

In contrast, the models that we are most involved in validating these days are Loan Loss Reserves (ALLL and ACL), Asset Liability Management (ALM) and Anti-Money Laundering (AML), along with risk rating credits and stress testing. Almost all community financial institutions have defined these models as critical to the organization, sometimes with the helpful(?) guidance of the regulators. The critical model list does not yet typically include things like data mining or voice response unit models. I anticipate at some point they will become critical, especially as the link between interpreting verbal instructions and actions becomes more interwoven.

What Can Go Wrong With a Model?

Again, using the definition from the FRB, “The use of models invariably presents model risk, which is the potential for adverse consequences from decisions based on incorrect or misused model outputs and reports. Model risk can lead to financial loss, poor business and strategic decision-making, or damage to a banking organization’s reputation. Model risk occurs primarily for two reasons:

  • A model may have fundamental errors and produce inaccurate outputs when viewed against its design objective and intended business uses
  • A model may be used incorrectly or inappropriately or there may be a misunderstanding about its limitations and assumptions.”

Does this happen? You bet! And keep in mind that the risk does not always arise from innocence. Attempts to support a particular position (portfolio growth or capital management, for example) can lead to modeling the reward while downplaying the risk. The most significant exposure we see with models is when the owners of a model are unable to explain the assumptions they use. One set of model documentation we read through several years ago contained a reference to another financial institution, the one from which the assumptions had been “borrowed.” Instead of developing their own documentation, the model owners had simply taken a copy of the other institution’s documentation and put their own name on it. Except in one place, that is. The assumptions did not fit the institution’s requirements and, as a consequence, they were in trouble with the regulators.

Are Models Ever Invalid?

The short answer is no, since most models produce some sort of result that relates to their purpose. If you were modeling to predict the rising of the sun and you used as the independent variable the length of time it took to bake a cake, you could say that the model inputs are invalid for the purpose for which they are being used. But I will bet there is a “Quant” out there who would be willing to challenge my presumption. We rate models as weak, moderate, or strong. In fact, we are reluctant to apply the label of Satisfactory, as is used in many audit ratings, because it implies the model is satisfactory for the purpose for which it is being used. And as anyone who is currently modeling the impact of COVID-19 on their organization can tell you, the lack of historical pandemic data introduces a high degree of uncertainty, which certainly affects satisfaction.

Weaknesses That Arise During a Stress Environment (Sometimes Called a Crisis)

Stress usually means some change has taken place that makes things different than they were. How is that for an ambiguous statement! As Jos stated in his presentation, there are three types of model weaknesses: Inconsistency, Blind Spots, and Surprise.

Inconsistency means that the model inputs are inconsistent with the model. Determining credit loss rates in the midst of the pandemic and government intervention is an example: using recent losses to predict future losses will understate projected losses by a significant amount, as the sketch below illustrates.
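To make that concrete, here is a purely illustrative sketch with invented loss rates and portfolio size: a projection anchored to a recent benign window will mechanically understate what a stressed period produces.

```python
# Hypothetical illustration: projecting losses from a recent benign window
# versus a window that includes stressed quarters (all figures are invented).
recent_loss_rates = [0.10, 0.12, 0.09, 0.11]                       # % of portfolio, benign quarters
cycle_loss_rates = recent_loss_rates + [0.45, 0.60, 0.50, 0.40]    # includes stressed quarters

portfolio = 500_000_000  # hypothetical $500MM loan portfolio

benign_rate = sum(recent_loss_rates) / len(recent_loss_rates)
cycle_rate = sum(cycle_loss_rates) / len(cycle_loss_rates)

print(f"Projected losses using recent data only:      ${portfolio * benign_rate / 100:,.0f}")
print(f"Projected losses including stressed quarters: ${portfolio * cycle_rate / 100:,.0f}")
# The first projection understates losses because the inputs no longer fit the environment.
```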

Blind Spots refers to the fact that we do not know how the pandemic and long-term low interest rates will affect behaviors such as loan prepayments or savings rates. Is the right measure being used, and how do we know?

Surprise reminds me of the joke about the economist who, when presented with a set of results from the economy, said, “That’s great. But how does it work in theory?” Models are supposed to represent a simplified state of reality, and often the real world becomes complicated.

I wrote this blog post to remind myself of what models are, how we need to view them, and how to make certain they are working for us. We often look at the results from a model and think, “This must be the right answer because the computer calculated it.” There is an ongoing need to ensure the model was built with documented rationale and that the results, good or bad, are understood and actionable.

Happy Modeling,

John
