If only AI were as simple as a robot underneath…

Artificial Intelligence (AI)/Machine Learning (ML) and model risk management is a topic I’ve addressed in prior blogs. The image of HAL from 2001: A Space Odyssey and its nefarious actions is burned into many people’s minds when they think about AI. We are far, far away from that concept of AI, and, since AI/ML is still in its infancy, there are many things to mistrust about it. For many, if not most, people, these methods are black boxes: some data goes in, and some conclusions based on that data come out. A key to moving forward with the use of such tools is to establish the explainability of the model, ensuring that at least a few members of the organization using the technology understand and can explain the methodology, both for internal governance and for external regulation.

Explainability is only the top level of the machine learning enigma. Beneath it, driving concerns around the increasing automation of modeling and mathematics, is another big E word: Ethics. When designing models around machine learning algorithms, the actual process of taking data and applying formulas to it is only a fraction of the rollout. Just as important are deciding what data can (and should) be accessed, how to guarantee fairness and avoid discrimination, and how to establish a robust but safe AI/ML system. Addressing this “human” need is all part of the maturing of AI/ML’s exciting potential.

ALTAI and the Value of Oversight

Addressing and adopting solutions to these concerns is already underway and is worth learning more about, if for no other reason than to be able to ask critical questions of software vendors touting their solutions. If you’re part of an institution that uses AI and operates in the EU, you’re likely familiar with the Assessment List for Trustworthy Artificial Intelligence (ALTAI). ALTAI sets forth a list of seven requirements, each of which must be met to guarantee that a model uses “Trustworthy AI.” These seven requirements are:

  1. Human agency and oversight,
  2. Technical robustness and safety,
  3. Privacy and data governance,
  4. Transparency,
  5. Diversity, non-discrimination, and fairness,
  6. Environmental and societal well-being, and
  7. Accountability.

The European Commission (EC) requires that AI solutions deployed in the European Union address these attributes. We should consider using this sort of guidance when implementing AI/ML models for risk management purposes in the US. Let’s go over each one briefly to get a sense of why these guiding principles are worth following.

In many ways, the first two items (1 and 2) on the list are linked. For a “safe” AI to exist, there needs to be some form of human oversight of the way a machine-learning algorithm shifts over time. If, for example, a model that previously took a minute or two to run suddenly takes ten minutes, or the probability of default changes by several points for a majority of cases, a person needs to look at what changed, correct that behavior, and document both the review and the actions taken. Without this oversight, a machine-learning algorithm can quietly become very different (or, in the worst cases, very wrong), affecting not only explainability but also accuracy and security.
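As a rough illustration, a pipeline might run a simple automated “tripwire” like the sketch below before anyone signs off on a scoring run. The pipeline, field names, and thresholds here are hypothetical, not a prescription:

```python
# Minimal sketch of an oversight tripwire, assuming a hypothetical pipeline
# that records each scoring run's wall-clock time and mean probability of
# default (PD). Thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class RunStats:
    runtime_seconds: float
    mean_pd: float  # average probability of default across the portfolio

def needs_human_review(baseline: RunStats, current: RunStats,
                       max_runtime_ratio: float = 3.0,
                       max_pd_shift: float = 0.03) -> list[str]:
    """Return reasons the current run should be escalated to a reviewer."""
    reasons = []
    if current.runtime_seconds > max_runtime_ratio * baseline.runtime_seconds:
        reasons.append(f"Runtime grew from {baseline.runtime_seconds:.0f}s "
                       f"to {current.runtime_seconds:.0f}s")
    if abs(current.mean_pd - baseline.mean_pd) > max_pd_shift:
        reasons.append(f"Mean PD moved from {baseline.mean_pd:.3f} "
                       f"to {current.mean_pd:.3f}")
    return reasons

if __name__ == "__main__":
    baseline = RunStats(runtime_seconds=90, mean_pd=0.042)
    current = RunStats(runtime_seconds=610, mean_pd=0.081)
    for reason in needs_human_review(baseline, current):
        print("Escalate:", reason)  # document the review and actions taken
```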

With the security lens in full view, the conversation naturally progresses to modern cybersecurity risks in AI. One specific risk is a cyber-attack that can dramatically alter a model’s results: data poisoning. Data poisoning can be as simple as intentionally feeding a large amount of bad data into an algorithm so that it classifies results or interprets data incorrectly. On a small scale, this could mean weighting different factors incorrectly; on larger scales, it could mean a massively incorrect algorithm comes out the other side of the machine learning process, throwing all the work put into getting a working model down the drain. The “Oops Factor” doesn’t do you much good when this happens. This is where numbers 4 and 7, transparency and accountability, come into play. If you’re prepared to handle these discrepancies, ensuring your model’s results are trustworthy and accurate is a much simpler task.
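One modest line of defense is screening incoming training data against historical ranges before retraining. The sketch below is illustrative only, with made-up field names; a simple z-score screen like this will not stop a sophisticated attacker, but it can catch bulk injections of obviously out-of-range records:

```python
# Illustrative pre-training screen against crude data poisoning, assuming
# numeric features arrive as plain dicts. Rows far outside historical ranges
# are flagged for review before they reach the learning step.
import statistics

def flag_suspect_rows(history: list[dict], incoming: list[dict],
                      features: list[str], z_cutoff: float = 4.0) -> list[int]:
    """Return indices of incoming rows with any feature far outside history."""
    stats = {}
    for f in features:
        values = [row[f] for row in history]
        stats[f] = (statistics.mean(values), statistics.stdev(values))
    suspects = []
    for i, row in enumerate(incoming):
        for f in features:
            mean, stdev = stats[f]
            if stdev > 0 and abs(row[f] - mean) / stdev > z_cutoff:
                suspects.append(i)
                break
    return suspects

if __name__ == "__main__":
    history = [{"income": 50_000 + 1_000 * k, "dti": 0.30 + 0.01 * k}
               for k in range(50)]
    incoming = [{"income": 72_000, "dti": 0.35},
                {"income": 9_000_000, "dti": 5.0}]
    print(flag_suspect_rows(history, incoming, ["income", "dti"]))  # -> [1]
```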

Privacy and Non-Discrimination in AI/ML

The questions of personal data privacy and algorithmic bias are particularly hot-button issues in the United States, and with good reason. As everything about people’s lives, finances, and habits becomes visible online, the vast wealth of accessible information about individuals and institutions brings a temptation to use as much of that information as possible in algorithms. This could result in a form of informational redlining. With that information also comes a new source of risk: disclosure risk, also known as re-identification risk. That is, can an individual or institution be identified from the collection of information in the database about that individual? There are many ways to reduce that risk, such as fully anonymizing the dataset and increasing data security, but it’s definitely a source of risk that should be assessed and accounted for when building ML algorithms.
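One common way to quantify re-identification risk is a k-anonymity check: every combination of quasi-identifiers (ZIP code, age band, and so on) should be shared by at least k records. The field names below are hypothetical, and this is only a minimal sketch of the idea:

```python
# Minimal k-anonymity check as one proxy for disclosure risk. Records whose
# quasi-identifier combination appears fewer than k times may be
# re-identifiable and should be generalized, suppressed, or reviewed.
from collections import Counter

def at_risk_records(rows: list[dict], quasi_identifiers: list[str],
                    k: int = 5) -> list[dict]:
    """Return rows whose quasi-identifier combination appears fewer than k times."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return [row for row in rows
            if counts[tuple(row[q] for q in quasi_identifiers)] < k]

if __name__ == "__main__":
    rows = [
        {"zip": "19406", "age_band": "30-39", "balance": 12_400},
        {"zip": "19406", "age_band": "30-39", "balance": 8_900},
        {"zip": "19406", "age_band": "80-89", "balance": 54_000},  # unique combo
    ]
    for row in at_risk_records(rows, ["zip", "age_band"], k=2):
        print("Re-identification risk:", row)
```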

But of course, disclosure risk is far from the only risk in the space of individual rights and machine-learning algorithms. Bias is a growing issue with these algorithms, and one that needs to be accounted for, not just to ensure the algorithm treats groups equally but also to be confident that its results are fair. You might remember the machine-learning Microsoft chatbot that was deployed on Twitter to build a conversational model and ended up picking up particular prejudices from its interactions on the site. Machine-learning algorithms in model risk can likewise acquire biases tied to individual-level attributes, and those biases have all sorts of sources, from the programmer’s ideologies to technical limitations to unexpected uses of the algorithm. One widely adopted way to handle this is “equalized odds,” a criterion described by a group of computer scientists at Carnegie Mellon (more information on that here), which attempts to reduce bias risk by more or less measuring it and accounting for it in the model itself.
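In practice, equalized odds is often checked by comparing true-positive and false-positive rates across groups; a classifier satisfies the criterion when those rates match. The sketch below, with made-up labels and groups, shows one simple way to measure the gaps:

```python
# Small sketch of measuring equalized odds: compute true-positive and
# false-positive rates per group and report the largest gaps. Gaps near zero
# indicate the criterion is approximately satisfied. Data is illustrative.
def rates(y_true, y_pred, group, value):
    tp = fp = pos = neg = 0
    for t, p, g in zip(y_true, y_pred, group):
        if g != value:
            continue
        if t == 1:
            pos += 1
            tp += p == 1
        else:
            neg += 1
            fp += p == 1
    return tp / pos if pos else 0.0, fp / neg if neg else 0.0

def equalized_odds_gaps(y_true, y_pred, group):
    """Return the max TPR gap and FPR gap across the groups present."""
    values = set(group)
    tprs, fprs = zip(*(rates(y_true, y_pred, group, v) for v in values))
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

if __name__ == "__main__":
    y_true = [1, 1, 0, 0, 1, 1, 0, 0]
    y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
    group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
    tpr_gap, fpr_gap = equalized_odds_gaps(y_true, y_pred, group)
    print(f"TPR gap: {tpr_gap:.2f}, FPR gap: {fpr_gap:.2f}")
```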

Number 6, environmental and societal well-being, can be seen by some as less important than the others. The thought that, as long as I benefit from the use of AI, nothing else matters is short-sighted and potentially destructive. One only has to remember the use of DDT and its effect on the environment (and us) to realize how ignoring this point can erase even the short-term benefit of an AI model’s results. Isaac Asimov’s first law of robotics states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” While a financial model may seem far removed from this ideal, the low-doc/no-doc loans of the early 2000s, which implied that behavioral information outweighed incomplete or inaccurate borrower information, caused great harm as one of the factors leading to the Great Recession.

Trustworthy AI

The goal of all of these requirements is to make certain that we comprehend, analyze, and resolve the ethical conundrums that AI and ML naturally develop over time. The technology itself, and its adaptation to risk management and risk modeling, is still in its infancy, so these guidelines, while helpful now, may be only the initial stage of a constantly evolving set of regulations and assurances in the space. Validating the trustworthiness of AI is on par with explainability in its importance. If we can explain the methodology used by the model but can’t verify its fairness, precision, or accuracy, then the model’s validity is just as questionable as it would be in the reverse case. Without that validity, the massive strides we can take toward automating risk management and modeling through AI/ML will, unfortunately, be steps in the wrong direction.
