Landmark Actions Coming: The AI Act and Growing US Regulations

“Whole-of-government” actions increasing as agencies intensify their focus on safe, secure, and trustworthy AI/GenAI

KPMG Insights

  • Landmark Actions: With the EU AI Act provisional agreement and the US Administration’s Executive Order, landmark AI/GenAI regulatory actions will quickly necessitate robust and effective risk management and controls. Short-term impacts may include:
    • Limitations/Prohibitions on Usage: Certain uses of AI deemed ‘unacceptable’ could affect product or business strategies regarding planned usage or deployment of AI.
    • Impact Assessments: Required before certain ‘high-risk’ AI systems are placed on the market, which could lengthen time to market for late-stage products.
  • Quick Pace: Regulatory and agency actions are moving quickly, focusing on the development, use, disclosures/marketing, and testing of AI/GenAI across several sectors (technology, defense, financial services and insurance, government); in the US, they involve multiple federal agencies (e.g., CISA, DOD, FCC, FTC, NAIC, NIST, OCC, OMB, and SEC).
  • What’s Coming: Expect further finalization of rulemaking in the EU and other jurisdictions, and in the US across federal agencies, as additional deadlines from the U.S. Administration’s Executive Order approach.

December 2023

In the EU, the Council presidency and European Parliament reach provisional agreement on the AI Act (see releases here and here), including:

  1. Risk-based clarification on high-impact and high-risk AI models and systems that may be deemed to cause systemic risk.
  2. Revised governance and regulatory enforcement powers, including an AI Office within the Commission.
  3. Technical documentation standards, including records of programming and training, adversarial testing, and measures taken to assess and mitigate risks for high-impact AI systems.  
  4. Bans on ‘social scoring’ and AI used to ‘manipulate or exploit users’.

In the United States, recent AI/GenAI regulatory actions follow the Administration’s October 2022 Blueprint for an AI Bill of Rights and October 2023 Executive Order (EO) 14110 on safe, secure, and trustworthy AI (see KPMG’s Regulatory Alerts, here and here, respectively), as well as the National Institute of Standards and Technology’s (NIST) release of its AI Risk Management Framework in January 2023. In recent months, federal agencies have taken notable AI/GenAI regulatory actions. The table below, covering August to December 2023, outlines the growing focus and breadth of regulatory coverage intended to drive safe, secure, and trustworthy AI/GenAI.

Agency | AI Topic | Type of Action | Description

CISA | Roadmap for AI | Guidance

Issues the 2023–2024 Roadmap for Artificial Intelligence, outlining its five AI-related lines of effort, including:

  • Responsible use of AI to support CISA mission (cyber defense).
  • Assure AI systems are secure and resilient by design through software development and implementation guidance.
  • Protect critical infrastructure from malicious use of AI.
  • Interagency, international, and public collaboration on AI efforts.
  • Expand AI expertise in CISA’s workforce.

CISA | Joint Guidelines for AI Development | Guidance

With the United Kingdom’s National Cyber Security Centre (NCSC), releases joint Guidelines for Secure AI System Development to help developers of any systems that use AI make informed cybersecurity decisions at every stage of the development process, including design, development, deployment, and operation and maintenance.

DOD | AI-related Efforts | Testing, Development, Integration

Prior to the issuance of the Administration’s EO, the following initiatives related to AI had already been established:

  • In August 2023, DOD established a GenAI task force, led by the Chief Digital and Artificial Intelligence Office (CDAO), to assess, synchronize, and employ GenAI capabilities across the DOD.
  • In September 2023, DOD established two AI “Battle Labs” at U.S. European Command and U.S. Indo-Pacific Command, in collaboration with CDAO’s Algorithmic Warfare Directorate and the Defense Innovation Unit, to expedite learning from DOD operational theater data (logistics, cyber, telemetry).
  • In September 2023, DOD’s National Security Agency (NSA) announced the creation of an AI Security Center to oversee the development and integration of AI capabilities within U.S. national security systems, including best practices, evaluation methodologies, and risk frameworks.

FCC | AI Calling Initiative | Information Collection

Issues a Notice of Inquiry that focuses on the use of AI technologies in “unwanted and illegal telephone calls and text messages under the Telephone Consumer Protection Act (TCPA)” and seeks to gather information to define AI under the TCPA, consider potential liability for AI developers who design systems that violate the TCPA, and understand AI’s potential risks and benefits in the telecommunications sector.

FTC | Compulsory Process for AI Products/Services | Administrative Process

Approves a resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that use or claim to be produced using AI or claim to detect its use. The authorization is intended to streamline FTC staff’s ability to issue civil investigative demands (CIDs), which are a form of compulsory process similar to a subpoena, in investigations.

FTC | AI Voice Cloning | Testing, Information Collection

Announces the Voice Cloning Challenge, which is intended to address “the present and emerging harms of AI or AI-enabled voice cloning technologies” and focuses on the potential risks and benefits of AI voice cloning technologies under the FTC Act, the Telemarketing Sales Rule, and the proposed Impersonation Rule. The rules for the challenge require that submissions address at least one of three “intervention points”: 1) prevention or authentication, 2) real-time detection or monitoring, or 3) post-use evaluation.

NAIC | Model Bulletin on Use of AI by Insurers | Guidance

NAIC membership adopts the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, which is intended to guide and foster uniformity among state insurance regulators regarding expectations for insurance carriers deploying AI. The bulletin emphasizes the importance of responsible governance, risk management policies, and procedures to ensure fair and accurate outcomes for consumers.

NIST (DOC) | U.S. AI Safety Institute & Consortium | Announcement

Establishes the U.S. Artificial Intelligence Safety Institute (USAISI) and a related consortium (comprised of organizations with technical, product, data, and/or model expertise) dedicated to equipping and empowering the “collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI.”

OCC | Semiannual Risk Perspective Report | Report

Publishes the Fall 2023 edition of its Semiannual Risk Perspective report, highlighting its view of key risks and issues facing the federal banking system. The report identifies AI and GenAI technologies as an “emerging risk” based on banks’ increasing use of these technologies for various risk management and operational purposes, posing challenges in areas such as compliance, credit, reputational, and operational risk (e.g., potential bias, privacy concerns, and errors/fraud). Banks are expected to “manage AI use in a safe, sound, and fair manner, commensurate with the materiality and complexity of the particular risk of the activity.”

OMB | EO Implementation Guidance | Draft Guidance

In response to the Administration’s EO, releases draft implementation guidance that would require federal agencies to:

  • Establish AI governance structures in federal agencies, including designating a Chief AI Officer (CAIO).
  • Advance responsible AI innovation through enterprise AI strategies.
  • Manage risks from government uses of AI.

SEC | “AI-Washing” | Remarks

In remarks to a conference audience, the SEC Chair warns businesses against “AI-washing,” or making false artificial intelligence-related claims, comparing it to “greenwashing,” or overstating environmental or climate-related records. “Greenwashing” has been a priority of agency examinations and enforcement actions, as has scrutiny of fund names that suggest a focus on environmental, social, or governance (ESG) factors.

SEC | Reliance on GenAI | Remarks

In remarks at an AI Summit, the SEC Chair warns that too many financial services firms relying on too few GenAI models for processes such as trading and underwriting could lead to the emergence of a “monoculture” and, potentially, a flash crash in the markets.

SEC | “Covered Technologies” and Conflicts of Interest | Proposal

Issues proposed rules (see KPMG’s Regulatory Alert, here) under the Securities Exchange Act and the Investment Advisers Act that would seek “to eliminate, or neutralize the effect of, conflicts of interest associated with broker-dealers’ or investment advisers’ interactions with investors through the use of technologies that optimize for, predict, guide, forecast, or direct, investment-related behaviors or outcomes.”

Meet our team

Amy S. Matsuo
Principal, U.S. Regulatory Insights & Compliance Transformation Lead, KPMG LLP

Bryan McGowan
US Trusted AI Leader, KPMG US
