Controlling AI

AI driving transparency, explainability, and trust

Algorithms can be destructive when they produce inaccurate or biased results, a concern amplified by the black-box nature of AI for any leader who wants to be confident in its use. That is why there is hesitancy about handing decisions over to machines without confidence in how those decisions are made and whether they are fair and accurate. This is the AI trust gap.

Gaining confidence with your AI

The benefits of AI will only fully emerge when algorithms become explainable (and, hence, understandable) in simple language, to anyone. The trust gap exists because there is little transparency in AI; instead, there is an inherent fear of the unknown surrounding this technology. Gaining trust also involves understanding the lineage of AI models and protecting them (and the data that forms them) from adversarial attacks and unauthorized use. Critical business decisions made by AI affect the brand, and consumer trust in the brand, and they can have an enormous impact on the well-being or safety of consumers and citizens. No one wants to say "because the machine said so." No one wants to get AI wrong.

This report is for leaders involved in the world of Artificial Intelligence and Machine Learning algorithms. The business and compliance imperative to understand and be confident in AI technologies has reached critical mass. This paper explains the urgency and describes methods and tools that can help leaders govern their AI programs.

Controlling AI
The imperative for transparency and explainability

Key developments

  • Gaining trust around AI is a top goal of leaders.
  • New policy initiatives and regulations around data and AI signal the end of self-regulation and the rise of a new oversight model.
  • Most leaders aren’t clear on what an AI governance approach should be.
  • Companies are struggling to decide who is accountable for AI programs and results.
  • A framework that includes technology-enabled methods can help address the inherent risks and ethical issues in AI.

The true art of the possible for Artificial Intelligence will become unlocked as soon as there is more trust and transparency. This can be achieved by incorporating foundational AI program imperatives like integrity, explainability, fairness and resilience.
—Martin Sokalski, Principal, Emerging Technologies, KPMG (US)

Key to governing AI: a framework that enables transparency

Most leaders don’t know how to close the trust gap because they don’t know how to govern AI or see the big picture of how it operates.

An effective framework will help organizations gain confidence in their AI technology. Such an approach should dig deep into AI at both the enterprise and individual-model levels to help ensure that key trust imperatives are integrated and controlled throughout. It should continuously assess and maintain control over sophisticated, evolving algorithms by putting in place methods, controls, and tooling that secure the trust anchors along the lifecycle, from strategy through evolution. It should also provide clear guidance for the organization’s stakeholders across the various management and oversight functions.

KPMG’s AI in Control

KPMG developed the AI in Control framework to help organizations drive greater confidence and transparency through tested AI governance constructs, as well as methods and tooling along the AI lifecycle, from strategy through evolution. By design, this framework addresses the inherent risks outlined in the sections above and it includes some of the key recommendations and leading practices for establishing AI governance, performing AI assessments, and building continuous AI monitoring and visualizations.

Components of AI in Control include:

AI governance

  • Develop AI design criteria and establish controls in an environment that fosters innovation and flexibility.
  • Assess current AI-related governance framework and perform gap analysis to identify opportunities and areas that need to be updated.
  • Integrate a risk management framework to identify and prioritize business-critical algorithms, and incorporate an agile risk mitigation strategy that addresses cybersecurity, integrity, fairness, and resiliency considerations during design and operation.
  • Design and implement an end-to-end AI governance and operating model across the entire lifecycle: strategy, building, training, evaluating, deploying, operating, and monitoring AI.
  • Design a governance framework that enables teams to deliver AI solutions and innovation quickly, yet responsibly, through guidelines, templates, tooling, and accelerators.
  • Design and set up criteria to maintain continuous control over algorithms without stifling innovation and flexibility.

AI assessment

  • Conduct a diagnostic review of the enterprise AI program and its governance to evaluate the current state, the applicability of existing governance elements to AI, and the current operating model and readiness for AI at scale. This includes a capability and maturity assessment, along with a roadmap and recommendations for reaching the target state.
  • Conduct assessments of individual AI and ML algorithms: test controls and evaluate the design, implementation, and operation of each algorithm against four trust anchors—integrity, explainability, fairness, and resilience.
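
Assessing an individual model against the fairness anchor can be made concrete by comparing favorable-outcome rates across groups. A minimal sketch, assuming a simple list of (group, approved) decisions and the common "four-fifths" screening threshold; the function name, groups, and threshold are illustrative, not part of the KPMG framework:

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest favorable-outcome rate
    across groups. decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Illustrative loan decisions: group A approved 8/10, group B 5/10
sample = [("A", True)] * 8 + [("A", False)] * 2 + \
         [("B", True)] * 5 + [("B", False)] * 5
ratio = disparate_impact_ratio(sample)
# A widely used screening rule flags ratios below 0.8
flagged = ratio < 0.8
```

A single ratio like this would be one input to the assessment, alongside control testing and a review of the model's design and operation.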

Continuous monitoring and dashboards

  • Create full visibility into metrics related to the trust imperatives, including key performance indicators (KPIs) and key risk indicators (KRIs), with Board-, Executive-, and Program-level reporting focused on the most relevant AI measures.
  • Enable continuous monitoring of key controls and metrics—what is working (or not) across your AI/ML models.
  • Provide a view of upward and downward trends over time, based on controls and testing.
  • Have the ability to respond to and correct issues as they arise, for example when bias is introduced into the learning model or prohibited features are being used in decision making.
  • Conduct an assessment of your AI model(s) or a health check of your broader enterprise AI program.
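
A continuous-monitoring check like the ones above can be sketched as a periodic routine that compares each model's reported metrics against control thresholds and flags any prohibited features in use. The control names, thresholds, and prohibited-feature list below are illustrative assumptions, not prescribed values:

```python
# Illustrative policy: features models may never use, and metric controls
PROHIBITED_FEATURES = {"gender", "ethnicity"}
CONTROLS = {"accuracy": (0.90, "min"), "bias_score": (0.05, "max")}

def check_model(name, metrics, features):
    """Return a list of control violations for one deployed model."""
    issues = []
    for metric, (limit, kind) in CONTROLS.items():
        value = metrics.get(metric)
        if value is None:
            issues.append(f"{name}: metric '{metric}' not reported")
        elif kind == "min" and value < limit:
            issues.append(f"{name}: {metric} {value} below floor {limit}")
        elif kind == "max" and value > limit:
            issues.append(f"{name}: {metric} {value} above ceiling {limit}")
    for feature in sorted(set(features) & PROHIBITED_FEATURES):
        issues.append(f"{name}: prohibited feature '{feature}' in use")
    return issues

# Hypothetical model: bias_score breaches its ceiling, and a
# prohibited feature appears in its inputs
issues = check_model(
    "credit_model_v2",
    {"accuracy": 0.93, "bias_score": 0.08},
    ["income", "gender", "tenure"],
)
```

In practice such checks would run on a schedule, with the resulting issue list feeding the dashboards and KPI/KRI reporting described above.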

Uncovering the full potential of your AI

Today’s organizations rely heavily on algorithm-based applications to make critical business decisions. While this unlocks opportunities, it also raises questions about trustworthiness. As we enter an age of governance by algorithms, organizations must think about the governance of algorithms to build trust in outcomes and achieve the full potential of artificial intelligence.

Learn more about KPMG AI in Control.
