Ethical AI: Five guiding pillars

Pragmatic insights that will inform policies and actions for AI to be fully productive, beneficial, and trusted in the enterprise

Many children alive today will grow up to be AI natives. The presence of intelligent machines in their lives—at work, at play, as consumers and as citizens—will be as natural to them as digital experiences now are for us. They’ll tap into the intelligence of AI as fluidly as we look for answers on the internet. And they won’t be able to grasp that we once spent a great deal of our time on repetitive tasks that seem made for machines.

We are living in an era where we can put insights from AI to work on an extraordinary range of societal and scientific challenges. On a business level, AI propels new product development, enables epic customer experiences, and changes the nature of work itself. But to be truly successful, it must be deployed responsibly.

Corporate responsibility is not a new mission, but it has become a more complicated one as machine learning assumes a larger role in how work is done. As a matter of urgency and obligation, enterprises must consider how to address the immense societal impact that will come as work and decision-making change profoundly.

Ethical AI: Five Guiding Pillars offers insights that can help you reimagine your business model and transform your workplace around AI, enabling your organization to build and deploy AI models with integrity and transparency. At stake are business outcomes—and ultimately, the trust and confidence of your customers, your employees, and society at large.

Ethical AI is about taking action. KPMG has distilled the actions necessary to point an organization toward a “true north” of corporate and civil ethics around AI. This report aims to help business leaders create policies and actions needed for AI to drive desired outcomes and benefits while building and maintaining trust.

1. Prepare employees now:
Partner with academia or other leading knowledge organizations to create or pilot programs that directly address the need for new skills and help employees adjust to the role of machines in their jobs.

2. Develop strong oversight and governance:
Establish clear policies about the deployment of AI, including the use of data, standards of privacy, and governance of leading practices.

3. Align cybersecurity and ethical AI:
Build strong security into the creation of algorithms and the governance of data.

4. Mitigate bias:
Ensure the goal and purpose of critical algorithms are clearly defined and documented. Attributes used to train algorithms must be relevant, appropriate for the goal, and permitted for use.

5. Increase transparency:
Create “contracts of trust.” Let the public know how you use their personal data and what decisions based on that data will mean for them.

We’ve found that many business leaders struggle to translate high-level principles into concrete, meaningful governance and controls.

Almost no organization is truly prepared at this moment to grapple with—much less solve—the profound questions posed by AI technologies. But they can take specific steps now.