Is my AI secure?

Understanding the cyber risks of artificial intelligence to your business.

You are likely working with, or seeing the use of, Machine Learning (ML) and Artificial Intelligence (AI) to improve your business processes, software, and products, but do you know whether those systems are secure?

In many cases, AI and ML can help solve a myriad of problems that have persisted for years. If you are a data scientist, this feels like a golden era: new products hit the market daily, and the options for meeting your project requirements are unbounded, with new workspaces, toolkits, data sources, and cloud services making AI implementations faster and easier. However, as AI projects become more prevalent, data scientists and cyber professionals are starting to wonder, "Is my AI secure?" The question sounds simple, but for most organizations data science remains shrouded in mystery, making it hard to answer.

Artificial Intelligence systems perform tasks that typically require human intelligence, such as visual perception, speech recognition, logical reasoning, and learning. Machine Learning uses algorithms to produce computer systems that can learn and adapt without following explicit instructions, and it is the basis for many of the learning systems in production in organizations today. For this blog, we will use the two terms interchangeably, even though many AI solutions rely on supervised rather than unsupervised learning.
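
To make the distinction concrete, here is a minimal sketch in Python contrasting the two modes of learning; it uses scikit-learn and the iris dataset purely as a convenient illustration, not anything specific to the pipelines discussed in this blog.

```python
# Minimal sketch of supervised vs. unsupervised learning (illustrative only).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised learning: the model is fit against known labels (y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised learning: the model finds structure without any labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster: ", km.labels_[:1])
```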

Through my continued work and conversations with many data scientists, it is clear that the security implications of AI are not well understood. Many data scientists I have worked with are brilliant but do not fully understand the cyber consequences of assembling AI pipelines that drive decisions on important problems without significant human oversight. The use of multiple platforms, toolkits, and data sources, and the involvement of many team members, only increases these risks.

It is also clear that cyber professionals generally see the realm of data science as a "black box" and do not fully understand the challenges of securing these learning systems; in many cases, AI is a complete mystery to them. In several instances, I have seen data science and cyber teams each believe these risks are being "covered" by the other team, even though no collaboration is occurring that would make that possible. This blog looks to uncover some of the common AI challenges and risks to consider; future blogs will dig deeper into specific threats and provide additional guidance.

As we think about how to bridge the data science/cyber divide, it is important to review the threats to AI pipelines you might consider as you develop your AI projects. MITRE has put together ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), which documents critical adversarial AI threats in a reference for data scientists and cyber professionals. Adversarial AI threats intentionally manipulate an AI pipeline to produce inaccurate results. ATLAS is modeled on the MITRE ATT&CK framework, which provides a more holistic cyber framework. In addition, drafts have been published of the forthcoming NIST AI Risk Management Framework and the EU AI Act, both of which attempt to deal with adversarial and other AI-related risks. Several critical threats recur across these and other guidelines.

Data poisoning is one of these significant threats. It occurs when an adversary (attacker) interferes with a model's operation by changing training data to modify pipeline outcomes, and it can have outsized impacts on a project: researchers have shown that even small changes to the training data can significantly alter outcomes.1
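
As a concrete illustration of the mechanics, the sketch below (synthetic data, scikit-learn; the numbers and attack are purely illustrative) shows how an attacker who can write a small fraction of adversarial rows into the training set can noticeably shift what a simple regression model learns.

```python
# Minimal sketch of a data-poisoning effect on a simple regression model.
# Synthetic data; the "attack" is illustrative, not a real exploit.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Clean training data: y is roughly 2*x with a little noise.
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X.ravel() + rng.normal(0, 0.5, size=200)
clean_model = LinearRegression().fit(X, y)

# Poison the training set: add 10 adversarial rows (~5% of the data)
# whose labels pull the fitted slope in the wrong direction.
X_poison = rng.uniform(0, 10, size=(10, 1))
y_poison = -20.0 * X_poison.ravel()

X_mixed = np.vstack([X, X_poison])
y_mixed = np.concatenate([y, y_poison])
poisoned_model = LinearRegression().fit(X_mixed, y_mixed)

print("clean slope:   ", clean_model.coef_[0])     # ~2.0
print("poisoned slope:", poisoned_model.coef_[0])  # noticeably shifted
```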

Model theft, or model extraction, occurs when an adversary observes model behavior through repeated queries in an attempt to steal the model itself. As new business models and products become dependent on these expensive models, model theft has the potential to do significant damage to organizational competitiveness.
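
The sketch below illustrates the idea with scikit-learn on synthetic data: the hypothetical victim_api function stands in for a public prediction endpoint, and an attacker who never sees the training data fits a surrogate model purely from the endpoint's answers. Everything here is an illustrative assumption rather than a depiction of any real service.

```python
# Minimal sketch of model extraction via query access (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# The "victim": a proprietary model, exposed only through its predictions.
X_private, y_private = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X_private, y_private)

def victim_api(queries):
    # Stand-in for a public prediction endpoint (hypothetical).
    return victim.predict(queries)

# The attacker never sees the training data: they synthesize queries,
# harvest the endpoint's answers, and fit their own copy of the model.
X_queries = rng.normal(size=(5000, 10))
surrogate = DecisionTreeClassifier(random_state=0).fit(X_queries, victim_api(X_queries))

# How often the stolen copy agrees with the victim on fresh inputs.
X_test = rng.normal(size=(1000, 10))
print("fidelity:", accuracy_score(victim_api(X_test), surrogate.predict(X_test)))
```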

Model evasion is a threat posed by adversaries looking to avoid AI-based detection systems, such as spam filtering and malware detection. Evasion can resemble poisoning, but these attacks generally attempt to force the model to classify an input incorrectly during the inference (prediction, or post-training) phase. Knowledge about a particular system or pipeline can make these attacks more effective.
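
As a toy example of inference-time evasion, the sketch below trains a tiny naive Bayes spam filter on a handful of made-up messages, then pads a spam message with tokens the model associates with legitimate mail to nudge it across the decision boundary. The dataset and the padding trick are illustrative assumptions, not a recipe for a real filter.

```python
# Minimal sketch of model evasion against a toy spam filter (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

spam = ["win free money now", "claim your free prize", "cheap loans win big"]
ham = ["meeting moved to monday", "quarterly report attached", "lunch with the team"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(spam + ham, [1, 1, 1, 0, 0, 0])  # 1 = spam, 0 = legitimate

attack_msg = "win free money now"
print(spam_filter.predict([attack_msg]))   # [1] -> flagged as spam

# Evasion at inference time: pad the message with words the model
# associates with legitimate mail, pushing it across the decision boundary.
evasive_msg = attack_msg + " meeting report quarterly team monday lunch attached"
print(spam_filter.predict([evasive_msg]))  # [0] -> slips past the filter
```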

Lastly, model inversion and data extraction attacks look to recover the features used to train the model. These attacks can let an attacker launch a membership inference attack, which could result in the compromise of private data. Inference attacks aim to reverse the information flow of a machine learning model, extracting information the model was never intended to share explicitly.
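
The sketch below illustrates a naive membership inference attack on synthetic data: a deliberately overfit model tends to be more confident on records it was trained on, and an attacker who can only read prediction confidences can exploit that gap to guess which records were part of the training set. The model, data, and numbers are illustrative assumptions.

```python
# Minimal sketch of membership inference via prediction confidence (illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data with some label noise so the model cannot generalize perfectly.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_members, X_nonmembers, y_members, _ = train_test_split(
    X, y, test_size=0.5, random_state=0)

# The "victim": a deliberately overfit model (deep, unpruned trees memorize data).
target = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=0)
target.fit(X_members, y_members)

def max_confidence(model, records):
    # Highest predicted class probability per record, as exposed by the model.
    return model.predict_proba(records).max(axis=1)

# The attacker compares confidences: training members typically score higher
# on average than records the model has never seen.
print("avg confidence, members:    ", max_confidence(target, X_members).mean())
print("avg confidence, non-members:", max_confidence(target, X_nonmembers).mean())
```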

These techniques are becoming better understood by adversaries as AI technologies go mainstream. Yet few data science or cyber teams have built mechanisms to improve awareness, pipeline visibility, threat detection, mitigation, and response. This is likely because other security threats are easier to carry out and, therefore, remain top of mind. However, as investments in AI continue to grow and become central to an organization's innovation approach, these attacks will likely become more prevalent. The time to prepare for them, and for the forthcoming regulations, is now. Managing these risks will take improved collaboration between AI and cyber teams, broader awareness of the threats, improved tooling for visibility and protection, and investments that allow organizations to stay ahead of the AI innovation wave.

KPMG has launched a cyber center of excellence (COE) for securing AI to help support the development of an AI security framework, as well as other services to help organizations prepare for and manage adversarial AI cyber threats. 

Footnotes

1 Cornell University, "Data Poisoning Attacks on Regression Learning and Corresponding Defenses" (September 2020)

Meet our team

Matthew P. Miller
Principal, Advisory, Cyber Security Services, KPMG US
Katie Boswell
Managing Director, Cyber Security Services, KPMG US
