- High-profile data breaches continue to highlight data privacy and security weaknesses and consumer harm, prompting increased pressure to develop relevant public policies.
- Expectations for federal public policies and frameworks are complicated by new laws and protections introduced by individual states.
- Federal regulators are moving away from a “wait and see” stance and initiating steps to establish new policies aimed at the collection, use, protection, and retention of data as well as the related application of innovative technologies.
Public policy attention is being directed toward various elements directly related to the collection, maintenance, and use of data. These elements include privacy and security; cloud computing (including data storage, processing, networking, and analytics); and machine learning (ML) and artificial intelligence (AI). Recent public policy developments in each of these areas follow.
Data privacy and security
Regulators are taking actions to outline the parameters and expectations for privacy practices as well as to enhance data governance and strengthen consumer protection.
- In testimony before a House Subcommittee, the Commissioners of the Federal Trade Commission (FTC) requested that Congress draft federal consumer data privacy legislation with clear and specific rules, and provide the FTC with targeted rule-writing and enforcement authority for that law. At present, the FTC addresses privacy violations through its Section 5 authority on a case-by-case basis.
- The National Institute of Standards and Technology (NIST) released a discussion draft of its voluntary privacy framework followed later by supplemental information based on feedback received from the discussion draft.
- The Department of Justice (DOJ) and the FTC announced an agreement with a large technology company settling allegations of data privacy violations. In particular, the company is alleged to have violated a 2012 FTC administrative order by misleading users about the extent to which certain third parties could access users’ personal information, and further, to have violated the FTC Act by deceiving users about its use of this data and additional sensitive information. As part of the settlement, the company paid a $5 billion civil penalty and agreed, for a period of twenty years, to:
- Implement new compliance measures intended to improve user privacy and protections by creating multiple channels of compliance and mechanisms for executive accountability, including i) appointing an independent assessor to monitor conduct, ii) conducting privacy reviews for all new or modified products, iii) establishing a new Independent Privacy Committee on the Board of Directors, iv) requiring annual compliance certifications by the CEO, and v) imposing reporting and record-keeping requirements.
- Permit monitoring by the DOJ and FTC.
- New York State passed a new law, known as S5575B or the Stop Hacks and Improve Electronic Data Security (SHIELD) Act, which expands the scope of private/protected information as well as the scope of what constitutes a breach, and also requires businesses to implement “reasonable” data security safeguards, as outlined in the law. Key features of these provisions include:
- New categories of “private information,” including biometric information (such as fingerprint, voice print, and retina or iris images).
- An expanded definition of “breach” to include “access” in addition to “acquisition.”
- Expansive application, to include any person or business that owns or licenses computerized data that includes private information of New York State residents.
- A requirement for covered persons or businesses to develop, implement, and maintain “reasonable safeguards” to protect the security, confidentiality, and integrity of the private information, including its disposal.
- The Antitrust Division of the DOJ announced it is reviewing how “market-leading online platforms,” which include technology companies that dominate internet search, social media, and some retail services, have achieved market power and whether they are engaging in practices that may have “reduced competition, stifled innovation, and otherwise harmed consumers.”
Cloud computing
Many organizations are dependent on third-party vendors for the rapid deployment or scalability of technology applications, which gives rise to issues and risks related to governance and accountability. The use of cloud computing services is accelerating, intensifying organizations’ dependence on the availability, integrity, and security of those services and compounding cybersecurity challenges. In some cases, organizations are using multiple cloud vendors, which adds complexity to their security and internal control structures. It is critical that organizations understand their role and the role of the cloud provider with regard to maintaining security.
- News reports indicate the Federal Reserve has initiated an ongoing oversight program for cloud service providers that serve as third-party vendors to banking organizations.
Separately, in its most recent Supervision and Regulation Report, the Federal Reserve identified plans to conduct a variety of horizontal reviews, including reviews of operational and cyber resiliency and AI. KPMG is aware that the federal banking regulators are also looking at banking organizations’ processes around cloud computing, machine learning, and AI.
- Recently, a large digital bank reported that it experienced a data breach affecting more than 100 million credit card customers and applicants. The data was stored on a third-party vendor’s cloud, though the company stated that the incident resulted from a “configuration vulnerability” that was not specific to the cloud. Notably, the alleged hacker was a former employee of the vendor, highlighting the risk of insider threats. The company has taken a number of consumer-related actions in response to the breach, including notifying affected customers, offering credit monitoring and identity protection, and providing an information hotline.
Machine learning and artificial intelligence
U.S. regulators indicate they do not want to impede the development of “responsible” AI applications, especially to the extent the applications may expand access and convenience for consumers or bring greater efficiency, risk detection, and accuracy to operations. They have focused primarily on supervision though, in some cases, they are working together and with industry to help “increase the velocity of transformation” to AI and related technologies, while continuing to focus on safety and soundness and consumer protection. Examples include:
- The FDIC stated it is “critical that regulators offer guidance on how banks can use machine learning and AI technology.” Should interagency guidance not be possible, the FDIC indicated it would be willing to proceed independently.
- The FDA released a discussion paper outlining a proposed regulatory framework for AI- and machine learning-driven software modifications in medical devices. The FDA states the framework is a first step toward an approach that “allows FDA’s regulatory oversight to embrace the iterative improvement power of AI/ML.” The FDA acknowledges that medical product regulation is not always well suited to emerging technologies and that additional statutory authority may be needed to implement the approach fully.
Although regulators have largely looked to existing laws and regulations to define the parameters, or set the “guardrails,” by which they evaluate the application of new technologies, the accelerating pace of change across developing technologies, business practices, and consumer expectations is prompting public policy makers to consider changes to the current supervisory framework.
Multiple legislators and regulators are beginning to take action at the state, federal, and global levels. Given the multiple and differing laws and regulations that organizations may be subject to, compliance risk will likely increase. Further, as new protections are introduced by individual states, the policy expectations for a federal framework in the U.S. increase in complexity and will likely extend the timeframe for debate. Ultimately, lawmakers will be faced with determining whether national standards will preempt, match, or expand on state protections.
In anticipation of future changes, institutions need to assess their innovation, AI, cloud, and related data strategies, agilely incorporating controls to address technology, security, and compliance risks. They should also strengthen third-party procurement and vendor risk management, from initial due diligence through ongoing monitoring.
KPMG’s 2019 Oracle and KPMG Cloud Threat Report can be accessed here.