Letting intelligent automation take flight

Risk and governance will help you safely land your automation goals.

Avoiding turbulence on your intelligent automation journey

Intelligent automation is changing the world of business, right before our eyes. This new wave of advanced technologies has the power to exponentially increase enterprises’ speed, scale, quality and precision, drive never-before-seen levels of operational efficiency, and both complement and augment human skills.

The rampant digitization of labor means traditional ways of doing business are becoming obsolete. Smart machines now perform activities, and even make decisions, previously left exclusively to humans—and they do it faster, more accurately and at far greater scale.

That means the days when employees clock in to work just to repeat mundane, manual tasks over and over will soon be a distant memory. According to recent KPMG research, 89 percent of technology leaders are maintaining or ramping up investment in innovation, including in digital labor.1  Other KPMG research found that artificial intelligence (AI), cognitive computing and robotics are among the top technologies that will drive business transformation.2

What is intelligent automation?

Intelligent automation is the continuum of technologies companies use to automate both transactional and knowledge-based business processes. Today, smart bots create reports, assist auditors, analyze tax information, conduct legal research, advise on medical treatments, provide investment guidance, and detect security breaches. Examples abound from every sector: the smart assistant on your mobile phone that tells you today’s weather, the customer service chat bot that helps you submit an insurance claim, and the feature on your car that lets it park itself.

Intelligent automation is not just one technology—it’s a range of tools with different advanced capabilities. At KPMG, we categorize these tools along a spectrum, ranging from robotic process automation (RPA), which automates very rudimentary processes such as transaction processing, to cognitive automation, which mimics human activities such as hypothesizing, reasoning, and deriving insights from masses of unstructured data.

Given its clear benefits and countless use cases, it’s no wonder intelligent automation has become a mission-critical initiative. The race is on and there is simply no turning back. But as a leader in one of the many organizations considering a takeoff—or already in flight—you know embarking on such a broad and important digital transformation project is no time to throw caution to the wind. 

That’s where intelligent automation risk and governance comes in. A well-designed risk and governance function helps ensure your intelligent automation program avoids a turbulent flight or a crash landing—that any and all risks associated with the digital transformation are effectively identified, evaluated and mitigated (or in some cases, accepted).

In this paper, we’ll examine:

  1. What happens when intelligent automation isn’t appropriately controlled and managed
  2. Where risk and governance fall in the intelligent automation ecosystem
  3. A framework for integrating risk and governance into the intelligent automation program

When bots go wild

Can an enterprise lose control over an entire army of bots? 

Can the intelligent automation tools your business relies upon run amok, performing unintended and potentially damaging actions?

Yes. Without a proper approach to managing risk, these two hypothetical scenarios can easily move from science fiction into dark reality. As with any disruptive technology, the rapid adoption of and reliance on intelligent automation is transforming the enterprise risk landscape, exposing businesses in new ways, creating more vulnerabilities, and increasing the level of complexity. Whether in a single function or across the enterprise, implementing intelligent automation creates not only technology risk, but also regulatory risk, financial risk, operational risk, and reputational risk. 

Yet, a significant number of organizations are not thinking about intelligent automation’s potential risks and governance considerations. KPMG’s 2017 Information Technology Risk Management Survey found that one-quarter or more of organizations have adopted cognitive computing (25%), robotic process automation (32%), or artificial intelligence (34%), yet failed to include these emerging technologies in IT risk assessments.1

Below we highlight a few of the unique risks intelligent automation could expose your business—and your customers—to:

Business disruption

Skill gaps. Inconsistent developer training. Lack of change management processes. Insufficient cybersecurity. Lack of or ineffective controls. A slew of factors could create an unstable bot environment and increased bot failure rate. And when your bots stop working, so does your business.


  1. How will you design your automation program with appropriate risk considerations in mind to reduce the likelihood and impact of critical bot failure?
  2. How will you ensure you’re able to recover from a business disruption impacting your automation platform, especially when it affects bots used for mission-critical processes?
  3. How will you manage changes to the automation environment while maintaining integrity, functionality, compliance and proper controls?


Regulatory non-compliance

A lack of well-defined guidelines around your automation program can prevent you from meeting governance, risk, controls and compliance requirements. This could damage your relationships with partners, auditors and regulators, and big fines could ensue. Non-compliance can also produce a program that lacks stability and is subject to operational failures. Remember that organizations deploying intelligent automation may need to provide assurance that they are abiding by relevant regulatory and compliance guidelines, so identifying the internal and external compliance requirements affected should be one of the first steps when implementing an intelligent automation program.


  1. How will you maintain data security and privacy during storage, processing and transmission?
  2. How will you secure bots from unauthorized access to prevent data leakage, intellectual property theft, or introduction of malicious code into system processing?
  3. How will you ensure the processing of each transaction and activity has an acceptable level of confidence and integrity, as required by compliance?

Business and IT control failures

The lack of proper access and authentication controls for bots creates issues with accountability and segregation of duties, and opens the door to unauthorized transactions. Poor controls integration and monitoring may also allow control failures to go unnoticed. Depending on which controls fail, you could face major problems with security, integrity, compliance, or even business continuity. Proper access provisioning, secured authentication, segregation of duties and secure application integration need to be built into the program; without these security and privacy considerations, data loss and operational and reputational harm can follow. Data security and privacy requirements, including appropriate logging and auditing capabilities, should also be built into the design of each bot.


  1. How will bots be provisioned and what access will they have?
  2. How will you create appropriate levels of transaction traceability to audit bot activities?
  3. How will you ensure automated controls are performed completely and accurately, especially those subject to control testing?
  4. How will you ensure bots don’t have too much access, allowing them to override existing controls?
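To make the questions above concrete, the following is a minimal, illustrative sketch of how least-privilege provisioning and transaction traceability might be built into a bot. All names here (BotSession, ALLOWED_ACTIONS, the action strings) are hypothetical and do not refer to any specific RPA product.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical catalog of actions the automation platform permits at all.
ALLOWED_ACTIONS = {"read_invoice", "post_journal_entry"}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("bot_audit")


class BotSession:
    def __init__(self, bot_id, granted_actions):
        # Provision the bot with an explicit, minimal set of permissions.
        self.bot_id = bot_id
        self.granted_actions = set(granted_actions) & ALLOWED_ACTIONS

    def perform(self, action, payload):
        # Deny by default: a bot cannot override controls it was never granted.
        if action not in self.granted_actions:
            audit_log.warning(json.dumps({
                "bot": self.bot_id, "action": action, "result": "DENIED",
                "at": datetime.now(timezone.utc).isoformat(),
            }))
            raise PermissionError(f"{self.bot_id} may not perform {action}")
        # Every permitted transaction leaves an auditable, timestamped trace.
        audit_log.info(json.dumps({
            "bot": self.bot_id, "action": action, "payload": payload,
            "result": "OK", "at": datetime.now(timezone.utc).isoformat(),
        }))
        return True
```

The design choice worth noting is deny-by-default with structured audit records: both the denial and the permitted transaction are logged, which supports the traceability and control-testing questions above.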

Unintended actions

In cognitive environments—which rely extensively on training data and machine learning—algorithms that are not regularly audited, monitored, tested and managed can produce skewed or biased results that worsen over time, especially if the automation cannot detect malicious or incorrect input. Ultimately, the system will make inaccurate decisions, whether that means giving a customer bad investment advice, developing a marketing or credit bias against a segment of clients, or drawing the wrong conclusion about the viability of a new product launch.


  1. How will you staff teams with the right talent to pilot, build and train cognitive solutions?
  2. How will you prevent data manipulation within the cognitive system?
  3. How will you regularly manage cognitive algorithms to ensure the accuracy and integrity of AI-driven decisions?
  4. How will you monitor the algorithm’s conclusions and identify degradation of the algorithm, which may require retraining?
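One simple way to operationalize the last question is a rolling-accuracy check that flags a model for retraining when its recent performance drifts below a baseline. This is an illustrative sketch only; the window size, tolerance, and class name are assumptions, not recommendations from this paper.

```python
from collections import deque


class DriftMonitor:
    """Flags a cognitive model for retraining when rolling accuracy degrades."""

    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        # Keep only the most recent outcomes so old behavior ages out.
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, actual):
        # Store True when the model's prediction matched the observed outcome.
        self.outcomes.append(prediction == actual)

    def needs_retraining(self):
        # Too little evidence yet: assume the model is still healthy.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance
```

A check like this only detects degradation against labeled outcomes; it would sit alongside, not replace, the auditing and input-validation practices described above.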