The third episode in KPMG's AI in control podcast series
This episode focuses on responsible AI – the policies and actions needed for AI to drive desired outcomes, while building trust.
AI has recently shifted from the experimental phase to full application. This rapid wave of implementation, coupled with exponential changes in capability, makes it challenging to understand how the technology is being applied and to stay ahead of its governance as the technology develops.
Decision-making has long been associated with instinct and experience. But what happens when AI doesn’t align with what your gut tells you? Only through transparency can the two be reconciled. Responsible AI results from understanding the deliberate framework built around algorithms: the data integrity, training, testing, validation, controls and regulations.
During this podcast, Dr. Sander Klous, Data & Analytics leader for KPMG in the Netherlands and Professor of Big Data Ecosystems at the University of Amsterdam, and Todd Lohr, KPMG’s Digital Lighthouse Network leader, KPMG in the U.S., sat down with Samantha Gloede, a Managing Director in the firm’s U.S. Advisory practice, to discuss:
- common pitfalls to avoid when implementing responsible algorithms
- forthcoming government legislation around AI, and why industries surprisingly welcome that regulation
- how an “AI compass” can identify the risk of critical algorithms (the impact and likelihood of failure) in addition to supporting decision-making
- how to create “contracts of trust” that increase transparency and build public confidence.