
Navigating the adversarial AI landscape

Threat Explainer

The emergence of Generative Artificial Intelligence (GenAI) technologies marks a significant leap forward. However, this progress also presents opportunities for more sophisticated attacks. According to RedSense, “there have been recent posts in prominent Telegram channels used by threat actors recruiting new affiliates with specific knowledge in GenAI”. Adversarial AI is an emerging threat and one that is imperative to understand. Let us examine some of these tactics, techniques, and procedures and how they are being used today.

The Rise of Unrestricted GenAI in Cybercrime

What is it?

Unrestricted GenAI refers to AI models capable of generating creative output, such as text, code, or images, without human-imposed limitations. While this holds tremendous potential for innovation, it also presents a dual-use challenge. Cybercriminals are leveraging these technologies to enhance their methods of attack, making it easier to target new victims, develop malicious software, and exploit vulnerabilities. Tools like WormGPT, FraudGPT, and PoisonGPT exemplify how GenAI is being utilized for nefarious purposes, from crafting business email compromise (BEC) attacks to spreading misinformation. These resources can make novice cyber criminals more likely to succeed and enable already advanced criminals to scale their attacks.

In action

The following are examples of tools that have been traded in underground markets:
  • WormGPT1,2 – Introduced around July 2023. Researchers at SlashNext interacted with the chatbot to create a realistic BEC email template. Further, according to RedSense, “WormGPT was purchased for exclusive use by Royal (now BlackSuit) in August 2023, and it is believed that the data brokers behind this may intend to sell similar tools to other threat actors.”
  • FraudGPT – Introduced in March 2023, FraudGPT is a paid service advertised for tasks such as writing malicious code, creating vishing scripts, building phishing sites, developing undetectable malware, and finding attack paths. In promotional content for buyers, the seller demonstrated how a user could use the chatbot to create a malicious “short but professional” text message to support a BEC.4
  • PoisonGPT5 – Introduced in late 2023, PoisonGPT shows how cyber criminals are turning GenAI against itself. With techniques like this, criminals can poison the supply chain with a malicious model. As reported, researchers at Mithril Security modified an open-source LLM to spread misinformation.6


The Perils of Data Mining with AI

What is it?

Data mining is the process of extracting valuable information and patterns from vast datasets. It has long been a cornerstone of eDiscovery, yet in the hands of cybercriminals it becomes a tool for sifting through stolen data to find sensitive information to exploit. After data is stolen from a victim's network, criminals may mine it for sensitive elements such as credentials, intellectual property, financial information, and personal data. The results can fuel further malicious activity, including extortion and deeper network intrusion.

In action
  • In January 2024, the KPMG cyber team identified the likely use of GenAI by ransomware actors to expedite the sorting of large volumes of stolen data. GenAI is initially leveraged to generate targeted search terms, facilitating the extraction of specific data types. Subsequently, post-exfiltration, GenAI can be re-employed to conduct natural language searches within the stolen data. This allows criminals to quantify unique customer records and identify those containing sensitive information, such as Personally Identifiable Information (PII), Protected Health Information (PHI), or payment card (PCI) data.
  • In March 2024, DarkGPT7, an open-source intelligence assistant based on GPT-4, was introduced. It was designed to perform queries against leaked credential dumps, enabling criminals to efficiently identify initial access into victim environments.
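The pattern-based sifting described above is the same technique defenders use for data-loss discovery, and a rough sketch makes it concrete. The categories and regular expressions below are hypothetical simplifications, not the workings of any specific tool; real discovery tooling adds checksums, context analysis, and machine learning.

```python
import re

# Illustrative (hypothetical) patterns for common sensitive-data types.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text: str) -> dict:
    """Count matches per sensitive-data category in a blob of text."""
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan(sample))  # {'email': 1, 'ssn': 1, 'credit_card': 0}
```

Run across millions of exfiltrated files, even a crude scanner like this lets an attacker quantify which records contain PII and prioritize them for extortion; the GenAI twist is simply generating the search terms and running natural-language queries instead of hand-written patterns.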

Deepfakes: The Blurred Reality

What is it?

The term 'deepfake' refers to hyper-realistic audio or video content manipulated using AI, making it appear as though individuals are saying or doing things they never did. From impersonating CEOs to spreading misinformation on social media, deepfakes represent a significant threat to personal and corporate reputations, as well as financial security.

In action
  • In March 2019, one of the earliest known audio-based deepfake attacks was recorded when a CEO of a UK-based firm was deceived into paying $243,000 USD.8
  • In January 2024, deepfake sexually explicit images of a celebrity spread through social media platforms, racking up millions of views.9 Additionally, separate instances of celebrity deepfakes have been observed scamming people or promoting products that the celebrity does not actually endorse.10
  • In mid-2023, the KPMG cyber team observed a ransomware group employing voice phishing (vishing) to impersonate IT staff and gain unauthorized initial access to networks. In these attacks, AI can be used to manipulate the threat actor's voice.
  • In early February 2024, an employee was deceived into paying fraudsters roughly $25,000,000 USD while on a video call with a person the employee believed to be the company's CFO.11
  • In a report released by Freedom House,12 a human rights advocacy group, researchers highlighted the use of GenAI across many countries “to sow doubt, smear opponents, or influence public debate.”

The Emergence of Super Phishing

What is it?

In the past, suspicious messages were often easy to spot because of awkward phrasing, misspellings, and translation inconsistencies. GenAI gives cyber criminals the ability to personalize attacks, perfect grammar, and even mirror the writing styles of specific people, making it far easier to deceive targets into surrendering sensitive information or funds. Further, by generating unique content for each message, attackers can avoid triggering standard spam filters and other detective controls.

In action

  • In February 2023, according to RedSense, “Royal/BlackSuit specifically have shown a great deal of interest in chatbot technology. They developed the ‘HelTecHub’ impersonation scheme, in which a fake healthcare conglomerate company was invented by the group, along with a landing page. This landing page featured a malicious AI-powered chatbot ‘helper’, which was then used to convince victims to open malicious attachments.”
  • In 2024, the KPMG cyber team observed attackers leveraging stolen data to scale their initial access campaigns. Beyond extortion targeting the initial victim, stolen data, including contact information and business details, was likely fed into GenAI to generate a high volume of personalized phishing campaigns. These campaigns were meticulously crafted to target specific organizations and employee job roles, significantly increasing the likelihood of successful compromise.
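The filter-evasion point above can be illustrated with a toy example. Many legacy spam controls rest on static keyword or phrase rules; the filter, terms, and messages below are invented for illustration, not drawn from any real product.

```python
# Toy keyword-based filter: illustrates (not proves) why unique,
# well-written generated text slips past static rules.
SUSPICIOUS = {"urgent", "wire transfer", "verify your account", "password"}

def flagged(message: str) -> bool:
    """Flag a message if it contains any known-suspicious phrase."""
    text = message.lower()
    return any(term in text for term in SUSPICIOUS)

template = "URGENT: verify your account password now!"
rewritten = ("Hi Dana, finance asked me to settle the Q3 invoice today; "
             "could you confirm the payment details when you have a minute?")

print(flagged(template))   # True
print(flagged(rewritten))  # False
```

A mass-mailed template trips the rules, while a fluent, personalized rewrite of the same lure sails through, which is exactly the capability GenAI hands to attackers at scale. Modern filters add reputation, behavioral, and model-based signals for this reason.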

Media Generation: The Double-Edged Sword

What is it?

GenAI’s ability to create new forms of media, such as music and art, has been celebrated for its potential to democratize creativity. However, this same capability can be misused to produce fraudulent works, leading to financial gain for cybercriminals at the expense of legitimate creators and consumers.

In action
  • In April 2023, it was reported that an AI-generated song featuring AI vocals of a famous artist was removed from various streaming platforms after a complaint by Universal Music Group.14
  • In May 2023, a scammer sold AI-generated music for roughly $9,000 USD.15
  • On November 21, 2023, Spotify announced policy changes to fight streaming fraud.16
  • In 2024, a series of lawsuits17 put the spotlight on AI-generated art and potential copyright infringement.

Looking forward

Cybercriminals are expected to continue exploiting these advancements to automate and scale their operations, so it remains crucial to understand the potential ramifications of the adversarial AI landscape and to prioritize effective countermeasures against these emerging threats. Cyber attorney Justine Phillips of Baker McKenzie advises: "The wise learn from their adversaries. Threat actors are watching us and learning our technical security defenses, how we use GAI, our laws, behaviors and technologies and collaboratively working together—we have to do the same to be as ready and resilient as our adversaries. Businesses seeking to develop 'reasonable' or 'defensible' security programs must keep their finger on the pulse of new vulnerabilities and tactics, and quickly adapt to respond to new threats. It will take the best people, processes and technology to be ready and resilient to these new GAI driven cyber-attacks."

To combat AI-based cyber threats effectively, organizations must invest in governance and trusted frameworks that can help prevent such attacks. Establishing robust cybersecurity practices, educating employees about the risks associated with AI, and collaborating with industry partners to share threat intelligence are key steps in building a resilient defense against these evolving challenges. To read more about our trusted AI framework, please visit our AI security framework page.

Footnotes

  1. FlowGPT, “WormGPT”, (February 27, 2024).
  2. SlashNext, “WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks”, (February 27, 2024).
  3. Trustwave, “WormGPT and FraudGPT – The Rise of Malicious LLMs”, (February 27, 2024).
  4. Netenrich, “FraudGPT: The Villain Avatar of ChatGPT”, (February 27, 2024).
  5. FlowGPT, “PoisonGPT”, (February 27, 2024).
  6. Mithril Security, “PoisonGPT: How We Hid a Lobotomized LLM on Hugging Face to Spread Fake News”, (February 27, 2024).
  7. GitHub, “DarkGPT”, (February 28, 2024).
  8. Wall Street Journal, “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case”, (February 29, 2024). 
  9. Bloomberg Law, “Understanding Deepfakes and the Taylor Swift Images”, (January 26, 2024).
  10. Forbes, “Look What You Made Me Do: Why Deepfake Taylor Swift Matters”, (February 1, 2024).
  11. CNN, “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’”, (February 28, 2024).
  12. Freedom House, “The Repressive Power of Artificial Intelligence”, (March 1, 2024).
  13. Forbes, “Generative AI Is Revolutionizing Music: The Vision For Democratizing Creation”, (March 1, 2024).
  14. Engadget, “AI-generated Drake and The Weeknd song pulled from streaming platforms”, (February 29, 2024).
  15. Engadget, “Scammers used AI-generated Frank Ocean songs to steal thousands of dollars”, (February 29, 2024).
  16. Rolling Stone, “It’s Official: Songs Need 1,000 Streams to Earn Royalties on Spotify”, (March 1, 2024).
  17. The Verge, “How AI copyright lawsuits could make the whole industry go extinct”, (March 1, 2024).

Meet our team

David Nides
Principal, Cyber Security Services, KPMG US

Dennis Labossiere
Manager Advisory, Cyber Security Services, KPMG US
