A few years ago, I attended a session on using AI in the financial services industry (I will ignore the definitional vagueness of AI – marketing versus concept). The speaker focused on compliance and, in particular, Anti-Money Laundering (AML). I asked the presenter how they explain and defend the model (it was their own model being presented), and the speaker launched into a discussion of using the documentation collected during implementation: an acceptable but non-AI answer, as it doesn’t account for the learning and adaptation that goes on while the model is in use. Documentation is an important component (and one I don’t see enough of), but it does not suffice to explain the modeling process and its results. I left the conference still excited about AI but unconvinced it was ready for acceptance in the industry. What I had witnessed at the conference is something we are all familiar with, “brochure-ware”: the appearance of smartness in a PowerPoint presentation, without the full end-to-end development behind it.
According to some experts, doing less is the key to addressing the finite attention axiom (discussed in this blog a few weeks ago). In risk management, time and resources are always a challenge. The number of steps between identification and mitigation alone can seem ever-growing, requiring more than a full team just to keep up with the constantly changing risk climate. Because of this, any step that reduces manual effort has the added benefit of freeing up resources in other areas of the risk management process.
One of the most time-consuming aspects of risk is the process of risk modeling, from aggregating data to actively quantifying the risk factors. I won’t go into the specifics of that modeling here (a topic for another day, perhaps!), but if you’re visiting this blog, you’re probably familiar with at least part of the effort required. A few years ago, as conversations about using artificial intelligence (AI) in other financial fields took form, risk professionals hoped the new technology would provide an avenue to automate some of the layers of risk modeling, improving the results while reducing the hours required to perform this task. From then to now, however, that aspect of modern technology hasn’t fully matured in financial and risk models. That’s not to say there haven’t been developments in risk management AI and machine-learning (ML) over the last few years, though.
For many people, AI still feels like something that appears in movies like I, Robot, Ex Machina, or even Blade Runner. In our present, this isn’t quite what AI looks like. Think of it more like a computer program that takes in a bunch of variables about its current environment, analyzes them, and determines the best course of action. For risk, this first meant feeding these programs a ton of data, from transactional history to client behavior data to financial data and back. From there, the hope was that these models would perform analysis on this massive amount of data, applying machine-learning methods to estimate the probability of default (PD) for a particular loan.
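The PD-estimation workflow described above can be sketched in a few lines. This is a minimal, hedged illustration: the loan features, their distributions, and the default-generating rule are all invented for the example, and the model choice (scikit-learn’s gradient boosting) stands in for whatever ML method a shop might actually use.

```python
# Minimal sketch: estimating probability of default (PD) with an
# off-the-shelf ML classifier. All features and data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
# Invented loan features: income, credit utilization, past delinquencies
X = np.column_stack([
    rng.normal(60, 15, n),   # income (thousands)
    rng.uniform(0, 1, n),    # credit utilization
    rng.poisson(0.3, n),     # past delinquencies
])
# Synthetic default flag, loosely driven by utilization and delinquencies
logit = -3 + 2.5 * X[:, 1] + 0.8 * X[:, 2] - 0.01 * X[:, 0]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
pd_estimates = model.predict_proba(X)[:, 1]  # column 1 = P(default)
print(pd_estimates[:5])
```

In practice the input would be millions of rows of real transactional and behavioral history rather than three synthetic columns, but the shape of the task is the same: features in, a per-loan PD out.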
The thought was that AI would be a be-all, end-all solution: if the early tests were both accurate and precise, risk modeling could easily be handed over to these AI/ML algorithms. What ended up being the case, unfortunately, was that these new AI-based models generally did not deliver a noticeably improved outcome. On common predictive-modeling metrics, such as the area under the curve (AUC), machine-learning models often couldn’t keep up with refined logistic regressions, and AI didn’t forecast losses as well as expected. (As with all conclusions, this one comes with a caveat: where there are strong non-linearities in the data, machine-learning models do tend to outperform even those refined logistic regressions.) In short, AI/ML is still maturing (and has been for at least the last 15 years) in the risk modeling arena.
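The caveat above can be made concrete. The sketch below is a toy demonstration, not a real study: on synthetic data where default depends on an interaction between two factors (a non-linearity a linear model cannot represent), a tree-based model beats a plain logistic regression on AUC; on cleanly linear data the gap closes or reverses.

```python
# Toy illustration of the AUC comparison: a non-linear interaction
# favors a tree-based ML model over a plain logistic regression.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(4000, 2))
# Default depends on the *interaction* of the two factors (XOR-like),
# which a linear-in-features logistic regression cannot capture.
y = ((X[:, 0] * X[:, 1]) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

logreg = LogisticRegression().fit(X_tr, y_tr)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

auc_lr = roc_auc_score(y_te, logreg.predict_proba(X_te)[:, 1])
auc_gbm = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
print(f"logistic AUC: {auc_lr:.2f}, gradient boosting AUC: {auc_gbm:.2f}")
```

The flip side, which this toy example doesn’t show, is the industry experience described above: when the underlying relationships are close to linear, a carefully refined logistic regression holds its own.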
Pitfalls in Modern-day AI Modeling
Failing to outperform its competitors in the modeling space is far from the only concern that AI and machine-learning bring to the table. One of the biggest “issues” comes from machine-learning’s biggest strength. Because ML works by identifying new patterns and iterating on its algorithms a little bit at a time, the equations a given machine-learning model uses become more and more complex the longer it runs. In one way, this is a positive: AI can see patterns a human might miss and can create an algorithm a human wouldn’t have considered based on that pattern. In another way, however, it is a negative. Every iteration after the first adds a layer of complexity that becomes increasingly less explainable to a pair of human eyes, even if that person is an expert in the field. This was the issue that wasn’t addressed at the conference I attended, and it is the cautionary tale to keep in mind regarding AI/ML software and the software sales cycle.
But why is that a problem? Understanding what factors are driving an organization to make a business decision is crucial, especially in a field like risk management where the ultimate goal is to adjust operations to reduce potential loss. If the “why” of a decision made by an ML algorithm isn’t fully explainable, the opacity compounds on itself and unintentionally creates another systemic source of risk, including compliance and regulatory concerns. If one of these predictive models produces a negative loan decision, for example, the Equal Credit Opportunity Act (ECOA) stipulates that the reasons for the credit decision must be provided. With an AI model making that decision based on an algorithm, you would need to be sure that such an explanation is still achievable, and comprehensible to those who are not credit modeling or AI/ML professionals.
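To show why the linear models set the bar here, the sketch below uses one common explainability pattern: for a logistic regression, each applicant’s score decomposes into per-feature contributions, which can be ranked to produce ECOA-style adverse-action reason codes. The feature names, data, and ranking rule are all invented for illustration; this is one possible approach, not a compliance recipe, and producing anything comparable from an opaque ML model is exactly the hard part.

```python
# Hypothetical sketch: deriving adverse-action "reason codes" from a
# logistic regression by ranking per-feature score contributions.
# Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["credit_utilization", "recent_delinquencies", "income"]

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(0, 1, 500),
                     rng.poisson(0.5, 500),
                     rng.normal(60, 15, 500)])
logit = -2 + 3 * X[:, 0] + 1.0 * X[:, 1] - 0.02 * X[:, 2]
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def reason_codes(applicant, top_k=2):
    """Return the features pushing this applicant's PD up the most,
    relative to the population average."""
    contrib = model.coef_[0] * (applicant - X.mean(axis=0))
    worst = np.argsort(contrib)[::-1][:top_k]  # largest positive = most adverse
    return [feature_names[i] for i in worst]

print(reason_codes(X[0]))
```

Because the model is linear, each contribution is a simple coefficient-times-deviation product that a non-specialist can follow. With a deep or heavily iterated ML model, no such direct decomposition exists, which is the explainability gap described above.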
None of the above means that AI/ML is not, or will never be, a critical part of the future of the risk modeling process; it simply is not yet at the point where it supersedes the models currently in place. AI is consistently becoming a veritable gold mine of information in the banking sector, in the investing sector, and generally across the whole private sector. This is with good reason, as computers can recognize patterns, compute new equations, and simulate financial scenarios at an extremely rapid pace. Even so, in risk management and modeling, this new technology needs more time and development before it can be fully trusted. When artificial intelligence and machine-learning models are accompanied by “explainability” and accuracy above and beyond what we’ve seen so far, you might see a full transition in risk management to AI/ML methods.
Alan Cooper asks in his book, The Inmates Are Running the Asylum (1999), “What does ‘done’ look like?” For AI/ML in risk modeling, it is a question still to be answered.