Minding the trust gap: Four anchors that instill trust in AI

Artificial intelligence has the potential to be world-changing, but leaders have trust concerns.

Artificial intelligence (AI) has astounded the world—from the discovery of subatomic particles to the first photograph of a black hole. And we still don’t understand its full potential.

Without a complete understanding of algorithms, business executives have been hesitant to integrate AI into everyday decisions. KPMG’s 2019 CEO Outlook found that 66 percent of leaders surveyed overlooked insights provided by computer-driven data analysis simply because those insights were contrary to their own experience or intuition. Leaders involved in the adoption and implementation of AI and machine learning algorithms recognize that “because the machine said so” does not foster C-suite trust in AI’s capabilities.

However, building trust in AI is a top goal for leaders, and we have identified four anchors that strengthen trust and transparency in AI and provide a rationale for the critical business decisions that algorithms make.

“The true art of the possible for Artificial Intelligence will become unlocked as soon as there is more trust and transparency. This can be achieved by incorporating foundational AI program imperatives like integrity, explainability, fairness, and resilience.”

The four anchors are:

  • Algorithm integrity. By inspecting the foundation of the algorithm through the provenance and lineage of its training data, the controls over the training model, and its model evaluation metrics, leaders can help ensure the algorithm’s integrity and confirm that no changes have compromised its original goal or intent (a brief lineage-logging sketch follows this list). 
  • Explainability. Understanding how and why a model produced an output (an insight or a decision) is essential to trusting the system, especially when taking action based on probabilistic results. Success depends on assembling ground truth that is clean, sufficient, and appropriate, as well as on continuously assessing results (see the second sketch after this list).
  • Fairness. Fair AI and algorithms need to be as free from bias as possible and to maintain that fairness as they evolve. Careful consideration of proxy data helps keep inadvertent bias from creeping into results at runtime, and effective monitoring mechanisms and tooling can help identify and mitigate unintended bias as the algorithms continue to learn and evolve (see the third sketch after this list). 
  • Resilience. Robust and resilient models begin with the ability to effectively protect, secure, control, and monitor algorithms and the ecosystems in which they operate. This requires secure adoption that holistically addresses risk through a purpose-built architecture, including anomaly detection using AI techniques such as generative adversarial networks. Continuously monitoring models and controlling access to them helps protect results (see the final sketch after this list).
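
To make the integrity anchor concrete, here is a minimal sketch of lineage logging: fingerprint the training data and record it alongside evaluation metrics so later audits can confirm that nothing has drifted from the original intent. The file names, metric choices, and JSON log format are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: record training-data provenance and evaluation metrics so a later
# audit can verify the model was built from the expected data and how it scored.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_file(path: str) -> str:
    """Return a SHA-256 digest of a training-data file for lineage records."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_lineage(data_path: str, metrics: dict, log_path: str = "model_lineage.json") -> None:
    """Append one lineage entry: when the model was trained, on what data, and how it performed."""
    entry = {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data": data_path,
        "training_data_sha256": fingerprint_file(data_path),
        "evaluation_metrics": metrics,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example (hypothetical file and metrics):
# record_lineage("loans_train.csv", {"auc": 0.87, "accuracy": 0.81})
```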
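
For explainability, one widely used technique is permutation importance: measure how much a model’s accuracy drops when a single feature is shuffled. The sketch below assumes a generic model object with a predict method and a labeled evaluation set; it illustrates one approach, not the only way to explain model output.

```python
# Minimal sketch of permutation importance: features whose shuffling hurts accuracy
# the most are the ones the model relies on, which helps explain its outputs.
import numpy as np

def permutation_importance(model, X: np.ndarray, y: np.ndarray, n_repeats: int = 10, seed: int = 0) -> np.ndarray:
    """Return the mean drop in accuracy for each feature when its values are shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model.predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_shuffled = X.copy()
            rng.shuffle(X_shuffled[:, j])  # break the link between feature j and the target
            drops.append(baseline - np.mean(model.predict(X_shuffled) == y))
        importances[j] = np.mean(drops)
    return importances
```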
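
For fairness, a simple monitoring check is demographic parity: compare the rate of favorable predictions across groups defined by a protected attribute. The column names and the four-fifths threshold below are illustrative assumptions.

```python
# Minimal sketch of a demographic-parity check on a scored dataset.
import pandas as pd

def positive_rates(df: pd.DataFrame, group_col: str, prediction_col: str) -> pd.Series:
    """Rate of favorable (positive) predictions per group."""
    return df.groupby(group_col)[prediction_col].mean()

def parity_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate; values below ~0.8 often warrant review."""
    return rates.min() / rates.max()

# Example (hypothetical data):
# scored = pd.DataFrame({"group": ["A", "A", "B", "B"], "approved": [1, 0, 1, 1]})
# rates = positive_rates(scored, "group", "approved")
# print(rates, parity_ratio(rates))
```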
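
For resilience, runtime monitoring can flag inputs that look unlike anything the model was trained on. The bullet above names generative adversarial networks; as a simpler stand-in, this sketch uses scikit-learn’s IsolationForest, with the contamination rate and alerting behavior as assumptions.

```python
# Minimal sketch of runtime input monitoring: fit an anomaly detector on training
# inputs, then flag live inputs that fall outside that distribution.
import numpy as np
from sklearn.ensemble import IsolationForest

class InputMonitor:
    """Fit on training inputs, then flag live inputs that look anomalous."""

    def __init__(self, contamination: float = 0.01):
        self.detector = IsolationForest(contamination=contamination, random_state=0)

    def fit(self, X_train: np.ndarray) -> "InputMonitor":
        self.detector.fit(X_train)
        return self

    def flag_anomalies(self, X_live: np.ndarray) -> np.ndarray:
        # IsolationForest returns -1 for anomalies and 1 for inliers.
        return self.detector.predict(X_live) == -1

# Example: monitor = InputMonitor().fit(X_train); alerts = monitor.flag_anomalies(X_live)
```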

Ultimately, trust in AI and machine learning depends on accountability for AI results, but few organizations have solid practices in place. Most leaders do not know how to close the trust gap because they do not know how to govern AI or see the big picture of how it operates. Many organizations also lack the tools and expertise to gain control of algorithms, understand them fully, and introduce transparency into their results, but this can be achieved with the right integration strategy and methodology.

To learn more about how to govern AI programs and gain confidence in AI technologies, please read Controlling AI: The imperative for transparency and explainability.