
Assessing the eDiscovery impact of changes to the FTC’s model second request

Stricter requirements and new processes for the approval and use of e-discovery tools and methodologies

Phil Algieri

Managing Director, Forensic Technology, KPMG US

+1 212-872-6506

Charissa Bass

Director Advisory, Forensic, KPMG US

+1 704-371-5212

In a recent blog post, Holly Vedova, Director of the Federal Trade Commission’s (FTC) Bureau of Competition, outlined several “new process reforms” for Second Requests, designed to result in both a “more streamlined and more rigorous” approach.1 The FTC appears to be especially focused on revamping its eDiscovery procedures for Second Requests; the reforms specifically address topics such as the identification of custodians, companies’ maintenance of IT systems and potentially responsive data, the approval and use of eDiscovery technologies, and privilege log formats.

Perhaps the most significant change is the development of stricter requirements and new processes for the approval and use of eDiscovery tools and methodologies: “…the FTC’s second requests will now require each company under investigation to provide information about how it intends to use e-discovery tools before it applies those tools to identify responsive materials. Complete and accurate information is critical in any investigation and there are substantial benefits to ensuring up front that e-discovery processes will identify required information. In addition, this change will more closely align the FTC’s Model Second Request with that of the Department of Justice.”2

Of particular note is the statement that these updates are intended to bring the FTC’s guidelines more in line with those of the Department of Justice (DOJ). What does this actually mean for eDiscovery practice on FTC Second Requests? With some details of the new FTC guidelines now formalized in an updated Model Second Request, we can look to protocols in three specific areas—(i) general disclosure requirements, (ii) search terms, and (iii) Technology Assisted Review (TAR)—to anticipate how eDiscovery practice may be impacted on Second Requests.

General disclosure requirements

Specification 29 of the prior FTC Model Second Request governed parties’ disclosure obligations relating to eDiscovery, requiring them to “[i]dentify any electronic production tools or software packages utilized by the Company in responding to this Request for: keyword searching, Technology Assisted Review, email threading, de-duplication, and global de-duplication or near-de-duplication…”3 While there were additional requirements relating to the use of TAR, this Specification simply required the identification of the technology used, not a description of how the technology was employed. Further, the response was not due prior to implementation, allowing parties the flexibility to get started quickly with less rigid oversight.

New language in Instruction I(5) of the updated FTC Model Second Request, which is identical to Instruction 4 of the DOJ Model Second Request, includes additional guidelines, requiring parties “before using software or technology (including search terms, predictive coding, deduplication, or similar technologies)… [to] submit a written description of the method(s) used to conduct any part of its search.”4 Notably, as is the case before the DOJ, parties must provide this information prior to using the eDiscovery technology.

This is not a radical departure from past FTC practice and should be a relatively easy obligation for outside counsel and their eDiscovery providers to fulfill without having to reinvent the wheel. The disclosure of similar information is often required in other matters, and most eDiscovery practitioners will have standardized workflows for the applicable technologies with written descriptions of such procedures at hand. Further, as described more fully below, parties utilizing TAR were already required under Specification 29 of the previous FTC Model Second Request to provide similar descriptive information, so doing so in non-TAR contexts should be straightforward.

The requirement to provide this information at the outset of the process, however, may prove to be more burdensome if it indicates the FTC’s intention to heavily scrutinize the eDiscovery technologies and methodologies used by parties prior to granting approval. If so, then parties may wish to build in additional time for these discussions and consider proactively starting this process as soon as possible.

Search terms

The DOJ and FTC Model Second Requests both specifically address the use of search terms as a method of reducing data sets and identifying potentially responsive information. The FTC’s previous requirements were more general than those of the DOJ, requiring parties using search terms to simply “provide a list of the search terms used for each custodian.”5

Instructions in the updated FTC Model Second Request (again identical to those in the DOJ’s version) call for parties to provide more in-depth information regarding their use of search terms, including “(a) a list of proposed terms; (b) a tally of all the terms that appear in the collection and the frequency of each term; (c) a list of stop words and operators for the platform being used; and (d) a glossary of industry and company terminology.”6

Most of these additional items are fairly straightforward and should not prove overly burdensome for eDiscovery practitioners or outside counsel. However, responding to Item (b) (the tally of all terms in the collected data and frequency of each term) can take more time to complete and may require custom solutions depending on the eDiscovery processing engine and search index used. If employing search terms on future FTC Second Requests, outside counsel and eDiscovery providers should take steps in advance to determine how best to provide this information and whether doing so may require extended lead time or bespoke solutions.
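
For Item (b), a rudimentary tally can be generated directly from extracted text, as in the minimal Python sketch below. This is purely an illustrative assumption rather than any agency-prescribed method: in practice the tally is more commonly exported from the processing engine’s own search index, and the folder path, tokenization rule, and stop-word list used here are hypothetical placeholders.

    # A minimal sketch of one way to produce the Item (b) "tally of all the terms
    # that appear in the collection and the frequency of each term." The directory
    # path, tokenization rule, and stop-word list are illustrative assumptions.
    import collections
    import pathlib
    import re

    STOP_WORDS = {"the", "and", "of", "to", "a", "in"}  # placeholder stop-word list

    def term_frequencies(text_dir):
        """Count term occurrences across extracted-text files in a directory."""
        counts = collections.Counter()
        for path in pathlib.Path(text_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            tokens = re.findall(r"[a-z0-9']+", text)
            counts.update(token for token in tokens if token not in STOP_WORDS)
        return counts

    if __name__ == "__main__":
        tally = term_frequencies("./extracted_text")  # hypothetical export folder
        for term, frequency in tally.most_common(25):
            print(f"{term}\t{frequency}")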

As discussed in more detail below, the most significant difference to date in the use of search terms before the FTC and DOJ has been the ability to apply search terms prior to the application of TAR, which the DOJ does not permit. It is, at this point, unclear whether the FTC will adopt a similar approach, but doing so can reduce the cost- and time-saving benefits of TAR. It is also possible that the FTC will adopt a middle position, continuing to allow search terms to be used in conjunction with TAR but requiring that the data population excluded through the use of search terms be subject to a sampling and validation process similar to that used for documents excluded by TAR.

Technology Assisted Review

The updated FTC Model Second Request addresses guidelines for TAR workflows in two separate areas. First, Specification 30, maintained verbatim from previous versions, requires parties to:

“Describe the collection methodology, including (a) how the software was utilized to identify responsive documents; (b) the process the company utilized to identify and validate the seed set documents subject to manual review; (c) the total number of documents reviewed manually; (d) the total number of documents determined nonresponsive without manual review; (e) the process the company used to determine and validate the accuracy of the automatic determinations of responsiveness and nonresponsiveness; (f) how the company handled exceptions (“uncategorized documents”); and (g) if the company’s documents include foreign language documents, whether reviewed manually or by some technology-assisted method; and

Provide all statistical analyses utilized or generated by the company or its agents related to the precision, recall, accuracy, validation, or quality of its document production in response to this request.”7

This information is often maintained by parties as a standard practice and can typically be accessed with little disruption or delay to day-to-day eDiscovery processes, especially as there is no explicit requirement to provide this information before implementation of TAR. Parties have typically been able to meet these requirements by preparing a simple memorandum outlining the requested data, often finalized near the conclusion of the eDiscovery process.

Second, new language in Instruction I(5) (again mirroring language in the DOJ Model Second Request) adds the following regarding the use of TAR:

“For any process that relies on a form of Technology Assisted Review to identify or eliminate documents, the Company must submit (a) confirmation that subject-matter experts will be reviewing the seed set and training rounds; (b) recall, precision, and confidence-level statistics (or an equivalent); and (c) a validation process that allows Commission representatives to review statistically-significant samples of documents categorized as non-responsive documents by the algorithm.”8

These new instructions are functionally the same as those in Specification 30, with the exception that parties will now need to (i) provide confirmation that “experts” will complete the training rounds and (ii) incorporate a validation process that includes agency review of samples of predicted non-responsive documents.
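
For illustration, the short Python sketch below shows how the recall, precision, and confidence-level figures referenced in Instruction I(5) might be computed from a manually reviewed validation sample. The counts are hypothetical, the elusion figure is simply one common way to express review of the predicted non-responsive set, and the normal-approximation confidence interval is one conventional choice rather than a method mandated by either agency.

    # Illustrative sketch: recall, precision, and an elusion estimate with
    # confidence intervals, computed from hypothetical validation-sample counts.
    import math

    def proportion_ci(successes, n, z=1.96):
        """95% normal-approximation confidence interval for a sample proportion."""
        p = successes / n
        margin = z * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - margin), min(1.0, p + margin)

    # Hypothetical counts from a single random validation sample drawn from the
    # full review population, where each sampled document was reviewed by a
    # human and compared against the model's prediction:
    true_positives = 380    # predicted responsive, human-confirmed responsive
    false_positives = 70    # predicted responsive, human-reviewed non-responsive
    false_negatives = 45    # predicted non-responsive, human-reviewed responsive
    true_negatives = 1505   # predicted non-responsive, human-confirmed non-responsive

    recall = true_positives / (true_positives + false_negatives)
    precision = true_positives / (true_positives + false_positives)
    elusion = false_negatives / (false_negatives + true_negatives)

    recall_low, recall_high = proportion_ci(true_positives, true_positives + false_negatives)
    elusion_low, elusion_high = proportion_ci(false_negatives, false_negatives + true_negatives)

    print(f"Recall:    {recall:.1%} (95% CI {recall_low:.1%}-{recall_high:.1%})")
    print(f"Precision: {precision:.1%}")
    print(f"Elusion:   {elusion:.2%} (95% CI {elusion_low:.2%}-{elusion_high:.2%})")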

Notably, the DOJ also maintains a Predictive Coding Model Agreement (separate and apart from its Model Second Request) that includes a wide range of additional standards and guidelines to which parties must agree when using TAR.9 While the FTC has updated the written TAR guidelines in its Model Second Request to match those in the DOJ’s, it is unclear whether the FTC intends to also adopt any of the more substantial requirements in the DOJ’s Predictive Coding Model Agreement.

The FTC’s decision in this area will likely have the greatest impact of the various changes discussed here, given the increasing utilization of TAR on large-scale matters and the fact that the requirements in the DOJ’s Predictive Coding Model Agreement are considered burdensome by some.10 According to the DOJ’s model agreement, parties must adhere to the following, at least when employing traditional TAR workflows:

  • No pre-culling of data sets: parties cannot use search terms or other analytical tools (such as email threading) to reduce data sets prior to the application of TAR
  • No supplemental human review for responsiveness: after documents are identified as responsive by the TAR tool, they cannot be excluded from production as nonresponsive, whether through manual review or search methods, without written approval from the DOJ.11

These additional requirements have, in our experience, created hesitancy among some parties to use TAR before the DOJ, with the following concerns most commonly cited:

  • Parties have objected to the DOJ requirement that any documents classified by the technology as “Responsive” cannot be manually re-reviewed for responsiveness. While human review is certainly not perfect, parties may not want to relinquish all final decisions to an algorithm. One can imagine coming across documents incorrectly predicted by the system as “Responsive” that contain potentially sensitive information that parties do not want to produce (and would not be obligated to produce if not using TAR). In addition, without the flexibility to update responsiveness decisions, parties may need to invest additional time in correctly identifying privileged information in otherwise non-responsive documents (as opposed to simply excluding them from production).

    Accurately identifying this privileged content, completing necessary redactions, and compiling privilege log entries are all tedious and time-consuming steps that can further increase cost or delay compliance.
  • The restriction on employing search terms before utilizing TAR may negate some of the potential efficiency gains of these tools. Many eDiscovery practitioners use both well-crafted search terms and TAR to reduce data sets in large-scale matters, and believe this combination to be more accurate and efficient than either approach on its own.12 Even when using TAR to make predictive decisions regarding responsiveness, most parties will still opt to review a portion of these documents for privilege, confidentiality, or key issue identification prior to production. Without the ability to reduce the underlying data set in any way prior to the application of TAR, the resultant output of predicted responsive documents that require manual review for these other criteria (i.e., privilege, confidentiality, key issues) may be so large as to reduce the benefits of TAR, especially given the other concerns noted above.
  • Negotiating and reaching agreement on TAR protocols in advance of starting the eDiscovery process can potentially delay compliance with the Second Request. While the DOJ process is fairly standardized, parties are often eager to begin as quickly as possible with the understanding that slight modifications to their eDiscovery protocols may be agreed to after starting. Having to settle on these protocols and standards in advance, even if this process is typically streamlined, can lead to lost time and add to parties’ reluctance to leverage TAR. The FTC’s statement that it intends to require parties to provide additional information before they apply certain eDiscovery tools may indicate the potential for similar delays in situations where there is disagreement or questions regarding a party’s proposed approach.

Notably, both agencies’ Model Second Requests and the DOJ’s Predictive Coding Model Agreement appear to contemplate what is known as a TAR 1.0 model, in which human review of an initial “seed set” of documents is used to train the system, after which it then makes predictive determinations for the overall data set. More recently, many practitioners have shifted to technologies that incorporate a Continuous Active Learning model (“TAR 2.0”), in which the TAR algorithm continually updates its predictive decisions based on ongoing human review. The DOJ has not publicly addressed the use of these updated models. Anecdotally, the FTC has been receptive to both TAR 1.0 and 2.0 workflows, although written guidance from both agencies appears to contemplate only 1.0 models. It will be interesting to see whether either agency updates its protocols to accommodate these approaches.
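
The operational difference between the two models can be expressed schematically, as in the Python sketch below. The train, score, and human_review callables are hypothetical placeholders for whatever classifier and review team a given platform and matter would involve; the sketch describes the general workflow shapes only, not either agency’s prescribed protocol.

    # Schematic contrast of TAR 1.0 (one-time training on a seed set) and
    # TAR 2.0 / continuous active learning (retraining on ongoing human review).
    import random
    from typing import Callable, List, Tuple

    Doc = str
    Label = bool  # True = responsive

    def tar_1_0(seed: List[Tuple[Doc, Label]], corpus: List[Doc],
                train: Callable, score: Callable) -> List[Doc]:
        """Train once on a human-reviewed seed set, then make a single pass of
        predictive responsiveness determinations over the full corpus."""
        model = train(seed)
        return [doc for doc in corpus if score(model, doc) >= 0.5]

    def tar_2_0(corpus: List[Doc], train: Callable, score: Callable,
                human_review: Callable, batch_size: int = 50,
                rounds: int = 10) -> List[Tuple[Doc, Label]]:
        """Each round, route the highest-ranked unreviewed documents to human
        reviewers and retrain, so predictions are continually updated."""
        reviewed: List[Tuple[Doc, Label]] = []
        unreviewed = list(corpus)
        for _ in range(rounds):
            if not unreviewed:
                break
            if reviewed:
                model = train(reviewed)
                unreviewed.sort(key=lambda doc: score(model, doc), reverse=True)
            else:
                random.shuffle(unreviewed)  # no model yet: start from a random batch
            batch, unreviewed = unreviewed[:batch_size], unreviewed[batch_size:]
            reviewed.extend((doc, human_review(doc)) for doc in batch)
        return reviewed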

Impact on future Second Requests

The FTC’s changes to its specifications regarding the approval and use of eDiscovery tools and processes should have relatively minimal impact with respect to general disclosure requirements (i.e., the approval of eDiscovery tools or proposed processes) or the application of search terms (in non-TAR contexts). As discussed above, these updates do not represent significant changes to past FTC guidelines, and much of the newly required information should be easily accessible or routinely maintained by eDiscovery providers. Similarly, the recent changes made to the TAR protocols in the updated FTC Model Second Request, while imposing some additional requirements on parties, should not prove overly burdensome as written.

However, if the FTC also imposes requirements similar to those in the current DOJ Predictive Coding Model Agreement, this may significantly impact the use of TAR on Second Requests. As it stands now, it appears somewhat more common for parties before the DOJ to forego the use of TAR, finding that the potential delays and additional burdens associated with the agency’s written guidelines present new risks and reduce the perceived benefits of employing otherwise useful technologies. On the other hand, parties have generally found the FTC’s guidelines (as written and in practice) to be flexible and user-friendly. If the FTC expects parties to follow some of the DOJ’s more rigid requirements for TAR, it will be interesting to see how this impacts eDiscovery strategy on future Second Requests. In any event, parties before the FTC should expect their processes to come under heightened scrutiny, regardless of the technology or process used.


Footnotes

  1. Vedova, Holly. “Making the Second Request Process Both More Streamlined and More Rigorous During This Unprecedented Merger Wave,” September 28, 2021, https://www.ftc.gov/news-events/blogs/competition-matters/2021/09/making-second-request-process-both-more-streamlined.
  2. Id.
  3. Fed. Trade Comm’n, Model Second Request (2019) at 15, https://www.ftc.gov/system/files/attachments/merger-review/april2019_model_second_request_final.pdf.
  4. Fed. Trade Comm’n, Model Second Request (2021) at 12, https://www.ftc.gov/system/files/attachments/hsr-resources/model_second_request_-_final_-_october_2021.pdf. In practice, individual FTC staff may request additional details regarding the processes and workflows used for specific technologies.
  5. FTC Model Second Request (2019) at 16.
  6. FTC Model Second Request (2021) at 22.
  7. Id. at 11.
  8. FTC Model Second Request (2021) at 22. See U.S. Dep’t of Justice, Antitrust Div., Model Second Request (2016) at 15, https://www.justice.gov/atr/file/706636/download.
  9. See U.S. Dep’t of Justice, Antitrust Div., Predictive Coding Model Agreement (2016), https://www.justice.gov/file/1096096/download.
  10. We recognize that practice might differ when before the DOJ, especially as these written guidelines have not been updated in several years and both technologies and leading practices have evolved during this time period. It is unclear whether or precisely how these requirements would apply to newer TAR methods that may be addressed in future updates.
  11. See DOJ Predictive Coding Model Agreement at 2-3.
  12. The validity of the approach of applying search terms followed by TAR is often debated within the eDiscovery community and is beyond the scope of this article.