With the promise of machine learning and AI also come challenges to successful auditing and evaluation. One challenge is validating these new models: as modeling grows more complex, models become more difficult to review, communicate, and control for model risk. Many companies’ model risk management teams struggle to keep pace with the rate of change and the evolving complexity of models. Real-world examples of predictive analytics gone wrong are emerging, and heightened regulatory expectations are beginning to intersect with the rise of these models.
While traditional methods such as unit testing and static validation remain useful, additional methods, such as using metadata to track data flows over time or variable significance tests that evaluate model structure, will also become essential for model validation. Model validation must take a new approach to assessing technical robustness, algorithm integrity, explainability, and freedom from bias in the models adopted.
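To make the idea of a variable significance test concrete, the following is a minimal sketch using synthetic data and ordinary least squares. All variable names and the data-generating setup are illustrative assumptions, not part of any specific validation framework; the point is simply that a coefficient's t-statistic helps distinguish a genuinely predictive variable from noise in the model structure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative synthetic data: one predictive variable, one pure-noise variable.
n = 500
X = np.c_[np.ones(n), rng.normal(size=n), rng.normal(size=n)]
y = 2.0 * X[:, 1] + rng.normal(size=n)  # only column 1 carries signal

# OLS fit, then classical t-statistics for each coefficient.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])          # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)              # covariance of coefficients
t_stats = beta / np.sqrt(np.diag(cov))

for name, t in zip(["intercept", "signal_var", "noise_var"], t_stats):
    print(f"{name}: t = {t:.2f}")
```

Run on this synthetic data, the signal variable shows a large t-statistic while the noise variable's stays near zero, which is the kind of structural evidence a validator would look for before accepting a variable in the model.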
This article provides an overview of activities an actuary should consider when validating a predictive model. It describes several components of the validation process and examines methodologies that can be implemented, including:
- Model selection methodologies such as hold-out analysis, variable significance testing, and performance assessment
- Importance of data in model selection
- Data validation methodology
- Model interpretation and model tracking
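As a preview of the first item, hold-out analysis and performance assessment can be sketched in a few lines. This is a minimal illustration on synthetic data, assuming a simple linear model and RMSE as the performance metric; the split sizes and variable names are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 200 records, 3 rating variables, linear signal plus noise.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

# Hold-out analysis: fit on a training split, assess on data the model never saw.
n_train = 150
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = y[:n_train], y[n_train:]

# Fit ordinary least squares (with intercept) on the training split only.
coef, *_ = np.linalg.lstsq(np.c_[np.ones(n_train), X_train], y_train, rcond=None)

# Performance assessment: RMSE on the hold-out set.
pred = np.c_[np.ones(len(X_test)), X_test] @ coef
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
print(f"hold-out RMSE: {rmse:.3f}")
```

A large gap between training-set and hold-out performance would signal overfitting, which is exactly the kind of finding this validation step is designed to surface.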