Given the explosion of data generated by the increasingly complex infrastructures of today’s financial institutions, industry leaders are struggling to avoid drowning in information flowing in from legacy systems, cloud environments, mobile apps, and the myriad technologies that underpin their operations.
To better understand how financial services organizations can harness emerging technologies—like artificial intelligence (AI) and machine learning (ML)—to improve the quality of data that drives today’s decision-making processes, we turned to Tom Haslam, managing director, KPMG Digital Lighthouse Services, and Brian Radakovich, managing director, KPMG Financial Services Data Service.
What is this concept that KPMG is calling “Ambient Data Management”, and how does it enhance data quality to drive better insight and business decision making for enterprises?
Haslam: As enterprises wrestle with an exploding amount of data from a growing number of sources in a variety of different formats, it has become increasingly difficult to help ensure that decisions are made based on high-quality information. KPMG Ambient Data Management uses advanced AI and ML algorithms to monitor key elements of the data—such as quality, lineage, metadata, and master reference data—as it moves through an organization’s data pipeline. This gives organizations the ability to identify and work with the highest-quality data available, gain greater insight from the investments they have made in analytics, and accelerate the resolution of the problems that need to be solved.
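As a rough illustration of what this kind of pipeline-side monitoring might look like, the sketch below profiles a dataset at a single pipeline stage and records basic quality and lineage signals. The stage names, fields, and sample data are hypothetical assumptions for the example, not part of the KPMG solution itself.

```python
# Minimal sketch: "ambient" checks recording quality and lineage signals
# for one stage of a data pipeline. Names and data are hypothetical.
import pandas as pd
from datetime import datetime, timezone

def profile_stage(df: pd.DataFrame, stage: str, source: str) -> dict:
    """Capture basic quality and lineage signals for one pipeline stage."""
    return {
        "stage": stage,
        "source": source,  # lineage: where this data came from
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "row_count": len(df),
        "null_rate": float(df.isna().mean().mean()),
        "duplicate_rate": float(df.duplicated().mean()),
    }

# Example: profile a (hypothetical) extract as it enters the pipeline.
trades = pd.DataFrame({"account_id": [1, 2, 2], "amount": [100.0, None, 250.0]})
print(profile_stage(trades, stage="ingest", source="core_banking_extract"))
```

In practice these profiles would be logged for every stage and compared against historical baselines, rather than printed.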
Radakovich: This is important for a number of reasons—especially in the financial sector. Global institutions face an increasing amount of pressure from regulators, clients, and internal auditors to improve the quality of their data. Within financial services, traditional methodologies of data quality management are still executed in a manually intensive manner that requires lots of human intervention. Given the sheer amount of data in today’s environment, these traditional methods cannot provide data quality management at scale, nor can they do so quickly.
Because of the limitations of rigid, rules-based approaches, KPMG has developed advanced analytics techniques that harness AI/ML to improve the quality and speed of decision-making processes in financial services institutions.
Can you explain how this works in financial services institutions?
Haslam: Imagine a dataset that has hundreds of columns and tens, hundreds, or even thousands of rows. It would be nearly impossible for a human to build rules that assess the data quality of such a massive and diverse dataset. A machine, on the other hand, can look at all of the data and immediately begin to distinguish correct data from anomalous data, provided the analytical engine is properly set up.
When machines look at high volumes of data in the context of historical trends, as that data moves through an organization, they can surface anomalies for analysts and decision makers to assess.
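A minimal sketch of the kind of machine-driven anomaly detection described here is shown below, using an off-the-shelf isolation forest. The synthetic columns, injected outliers, and contamination rate are assumptions made purely for illustration.

```python
# Minimal sketch: flagging anomalous records with an isolation forest.
# The synthetic data and contamination rate are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "notional": rng.normal(1_000_000, 50_000, 5_000),  # stand-in for trade notionals
    "settlement_days": rng.integers(1, 4, 5_000),
})
# Inject a handful of records that break the historical pattern.
df.loc[:9, "notional"] = 95_000_000

model = IsolationForest(contamination=0.01, random_state=42)
df["anomaly_flag"] = model.fit_predict(df)  # -1 = anomalous, 1 = normal

# Route suspected anomalies to analysts for review rather than auto-correcting them.
suspects = df[df["anomaly_flag"] == -1]
print(f"{len(suspects)} of {len(df)} records flagged for review")
```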
Radakovich: We have applied this approach to a number of use cases with financial services clients. Our original goal was to apply ML and AI to financial reporting and regulatory data to help ensure complete and accurate reporting in financial statements and data submitted to regulators.
After success with that approach, we began to apply the same approach across the enterprise data lifecycle in order to create an authoritative source—or point of data capture—for the financial institution. This innovation has had a very positive effect on data quality across the enterprise.
As we expanded the application of AI and ML to specific critical parts of the data lifecycle—including systems of record and other authoritative sources used by analysts and decision makers—financial institutions have been able to use the data for a growing number of applications.
How aware are financial institutions of the role that AI/ML can play in enhancing data quality and contributing to mission-critical objectives?
Radakovich: Most institutions don’t have the experience or ability to leverage KPMG Ambient Data Management in a live production environment. This is where the KPMG Digital Lighthouse and Financial Services teams can step in.
We have helped enterprises build the necessary organizational structures—such as data centers of excellence that pair data scientists with traditional data quality analysts—to develop models and algorithms that can help manage and resolve data quality issues.
Haslam: One of the key imperatives facing CEOs and businesses today is the effective execution of digital transformation initiatives. This affects nearly every aspect of an institution, from its understanding of customers to product design, regulatory reporting, and internal operations.
While digital transformation is complex, it should remain grounded in data. Without accurate data that is immediately available and well-governed, digital transformation is unattainable. By focusing on data quality, we help decision makers see, understand, and act on the best information for enabling digital transformation.
The application of the KPMG Ambient Data Management solution leads to greater trust in enterprise data. Greater trust in data quality—in turn—facilitates the development of new financial products, better analysis, and more effective adoption of new technologies. When institutions get enterprise data quality right, it improves the digital culture that must be in place to achieve digital transformation objectives.
Are there governance implications to implementing this AI/ML-based approach to data gathering for insights across the enterprise?
Haslam: Integrating KPMG Ambient Data Management does require a reorganization of an enterprise’s technology and infrastructure. New approaches are needed to develop the pipelines that allow data to move from system to system effectively. If the technology and infrastructure are reorganized with care, the enterprise benefits from data of much higher quality, even as that data is created and maintained at a lower cost. As a result, more data can be processed more quickly to resolve real anomalies while eliminating false positives and false negatives. The combination of the Digital Lighthouse technical experience and the business knowledge of the KPMG Financial Services Data team makes it possible for organizations to realize the value of these techniques.
Radakovich: Once the right structures are in place, institutions can begin to establish better data quality management systems. The initial process begins by looking at troublesome—typically large—datasets where an institution cannot apply traditional rules-based methods to data quality management.
Through the deployment of algorithms, institutions can start to uncover anomalies in the dataset and begin grouping them. Our teams can then prioritize these groups of anomalies to determine which of them are true data quality issues. We then feed the data back into our models to begin the AI/ML retraining process. As the model goes through retraining, the data becomes more and more refined, allowing the teams to zero in on the true data quality problems.
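A simplified sketch of this triage-and-retrain loop appears below. The clustering step, the set of analyst-confirmed groups, and the synthetic data are all assumptions standing in for the review that analysts would actually perform.

```python
# Illustrative sketch of the triage-and-retrain loop described above.
# Data, cluster count, and the "confirmed" groups are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
records = rng.normal(size=(1000, 5))  # stand-in for features of flagged records

# Group the flagged records so analysts can review anomalies by cluster.
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(records)

# Assume analysts confirm that two of the groups are genuine data quality issues.
confirmed_issue_groups = {1, 3}
labels = np.isin(groups, list(confirmed_issue_groups)).astype(int)

# Feed the labelled data back in to retrain a model that flags future issues directly.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(records, labels)
print("retrained model ready; share of confirmed issues in training data:", labels.mean())
```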
What are the elements that define a successful implementation and drive positive business effects for these types of initiatives?
Haslam: We work with organizations to understand the metrics that they use to measure the business objectives they need to achieve when implementing new initiatives. These include:
- Overall data quality;
- The number of anomalies found per data set;
- The rate at which data flows through the pipeline;
- Business trust in the data;
- Number of errors found in regulatory filings; and/or
- The number of customer complaints due to erroneous information.
The extent to which improvements across these metrics correlate with positive business outcomes determines the return on investment that is secured, whether that takes the form of increased revenue, cost reduction, or risk mitigation.
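As a hypothetical illustration of how a couple of these metrics might be tracked across pipeline runs, the short sketch below computes an anomaly rate and throughput from logged run records; the field names and figures are invented for the example.

```python
# Hypothetical run log: records emitted by a data quality pipeline per dataset run.
runs = [
    {"dataset": "gl_postings", "rows": 1_200_000, "anomalies": 340, "minutes": 18},
    {"dataset": "gl_postings", "rows": 1_250_000, "anomalies": 120, "minutes": 15},
    {"dataset": "loan_book", "rows": 800_000, "anomalies": 95, "minutes": 11},
]

for run in runs:
    anomaly_rate = run["anomalies"] / run["rows"]   # anomalies found per dataset
    throughput = run["rows"] / run["minutes"]       # rate data flows through the pipeline
    print(f'{run["dataset"]}: {anomaly_rate:.4%} anomalous, '
          f'{throughput:,.0f} rows/minute')
```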
Are AI and ML at a level of maturity that elicits confidence in a highly sensitive sector like financial services?
Radakovich: AI and ML technologies have advanced significantly over the last few years.
The key factor for financial institutions revolves around developing strong governance structures that help ensure no bias exists in the algorithms or the model. The correct structures provide assurance that the machine is making the correct decisions.
Another important factor is the training of the system at the start of the initiative. As data is fed back through the algorithms, careful attention has to be paid to fine-tuning the model to a high level of precision.
What are the steps that financial institutions and organizations of all sizes can take to harness AI and ML to improve data quality management?
Haslam: According to a recent KPMG study, the organizations that are leveraging ML and AI the most are planning to make greater investments in these technologies than organizations that are lagging. These advanced organizations are the ones poised to move from traditional analytics to the use of ML in data quality management applications.
Radakovich: For enterprises that are just beginning the journey, the first step is to identify a troublesome dataset that presents an issue to the business and apply the algorithms, models, and retraining process. Gaining visibility into the true anomalies allows the organization to resolve those issues and realize improved business outcomes. Armed with an understanding of how this approach works, the organization can then apply it to data in other parts of the business and evolve how data quality management is done on an enterprise-wide level.