
Achieving scanning excellence in vulnerability management

Aspects of vulnerability discovery and how to adjust your vulnerability management accordingly

Ryan Budnik

Manager Advisory, Cyber Security Services, KPMG US

+1 512-320-5200

Caleb Queern

Director, Cyber Security, KPMG US

+1 571-228-8011

Cyber security vulnerabilities in an organization’s infrastructure increase the likelihood or impact of the costly security incidents we all want to avoid. Despite years of vulnerability management vendor innovation, discovering these vulnerabilities remains ripe for optimization. While scanning for vulnerabilities is conceptually straightforward, achieving the level of visibility expected by leadership requires understanding the nuances that affect how scanning targets are grouped, options are set, and data is updated. In this article, we highlight three aspects of vulnerability discovery and how you can adjust your vulnerability management activities accordingly.

Breadth

While using network subnets for scan targets is the leading practice for scanning efficiency, reporting on what was scanned is seldom done, and it is difficult without true negative records: entries showing that an IP address was scanned but had no connected host. Without true negatives, there is no way to differentiate between areas that were missed during scanning and IPs that simply have no hosts. To accommodate this, use one of two methods:

  1. Adopt the assumption that if a vulnerability was found at a given IP address, then the corresponding subnet must have been scanned. While simple, this requires that the scan targets used in scan jobs be identical to those used when calculating the breadth metric (a sketch of this approach follows the list).
  2. Toggle on the “report dead hosts” setting in your scanner, which is typically disabled by default. This produces a record for every IP address that was scanned but did not have a host. While this setting will generate many dead-host records and could overwhelm the scanning platform in large networks, that is likely a small price to pay for the ability to troubleshoot breadth and attest to scanning coverage.
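
To make method 1 concrete, the sketch below infers subnet-level breadth from detection data. It is a minimal illustration, assuming scan results can be exported as a list of IP addresses with at least one detection; the subnets, addresses, and variable names are hypothetical.

```python
# Minimal sketch of method 1: infer which target subnets were scanned
# from the presence of detections. Subnets and IPs here are illustrative.
import ipaddress

# Hypothetical inventory: the subnets defined as scan targets.
scan_target_subnets = [
    ipaddress.ip_network("10.0.1.0/24"),
    ipaddress.ip_network("10.0.2.0/24"),
    ipaddress.ip_network("192.168.10.0/24"),
]

# Hypothetical scanner export: IPs where at least one finding was recorded.
detected_ips = [
    ipaddress.ip_address("10.0.1.15"),
    ipaddress.ip_address("10.0.1.200"),
    ipaddress.ip_address("192.168.10.4"),
]

# If any detection falls inside a subnet, infer the subnet was scanned.
scanned = {net for net in scan_target_subnets
           if any(ip in net for ip in detected_ips)}
unaccounted = set(scan_target_subnets) - scanned

print(f"Breadth: {len(scanned)}/{len(scan_target_subnets)} "
      "target subnets show evidence of scanning")
for net in sorted(unaccounted, key=str):
    # Without true negatives, these subnets are either missed or empty;
    # method 1 alone cannot tell which.
    print(f"  no evidence of scanning: {net}")
```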

Depth

When we do collect vulnerability data from a host, are we getting a quick, “drive-by” glimpse of its vulnerabilities or a deep inspection of the host’s software and configuration for a more comprehensive view? These options are referred to as unauthenticated and authenticated scans, respectively. Authenticated scan data is of course preferred and is relatively straightforward to monitor by looking at the tracking method of detected hosts; that said, we find that the differences between scan types and their impact are not well understood or quantified by most teams. Across multiple scanner test catalogues and client engagements, we have seen authenticated scans routinely find six times more vulnerabilities per host than unauthenticated scans, and four times more critical vulnerabilities per host. In addition, agents are often believed to be a silver bullet for scanning. While agents have several benefits (reduced network load, deduplication of hosts with agentless tracking, and fewer false positives in DHCP environments), they do not have detection parity with authenticated network scans for vulnerability classes such as SSL/TLS issues. Instead of 1:1 parity, we see agents find about 95 percent of what authenticated network scans do. To supplement agents, teams should track agent deployment, move agent-covered hosts into their own asset group, and use a dedicated option profile to scan for the remaining issues in a subsequent network scan.
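
One way to quantify that gap in your own environment is to compare findings per host by scan type. The sketch below is a minimal illustration, assuming findings can be exported with a per-host scan-type label; the record layout, field names, and values are hypothetical rather than any vendor’s schema.

```python
# Minimal sketch: tally findings per host by scan type to compare
# authenticated vs. unauthenticated depth. Records are illustrative.
from collections import defaultdict

# Hypothetical export: one record per (host, scan type, severity).
findings = [
    {"host": "10.0.1.15", "scan_type": "authenticated",   "severity": "critical"},
    {"host": "10.0.1.15", "scan_type": "authenticated",   "severity": "high"},
    {"host": "10.0.1.15", "scan_type": "unauthenticated", "severity": "high"},
    {"host": "10.0.2.7",  "scan_type": "unauthenticated", "severity": "medium"},
]

hosts = defaultdict(set)      # scan_type -> hosts seen by that scan type
counts = defaultdict(int)     # scan_type -> total findings
criticals = defaultdict(int)  # scan_type -> critical findings

for f in findings:
    hosts[f["scan_type"]].add(f["host"])
    counts[f["scan_type"]] += 1
    if f["severity"] == "critical":
        criticals[f["scan_type"]] += 1

# Report per-host averages so the two scan types can be compared directly.
for scan_type in sorted(hosts):
    n = len(hosts[scan_type])
    print(f"{scan_type}: {counts[scan_type] / n:.1f} findings/host, "
          f"{criticals[scan_type] / n:.1f} criticals/host across {n} hosts")
```

Run over a real export, the same tally shows whether your environment matches the six-to-one ratio described above.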

Data Freshness

Not to be confused with scanning cadence, data freshness reflects the ability to validate remediation, reduce the staleness of vulnerability data, and minimize the false positives that stale data produces. While it is common to see large portions of networks scanned regularly, there is usually an equally large population that has not been rescanned in months or years, giving attackers more opportunities to cause expensive security events. Combat this by validating remediation and refreshing all data regularly. Efficient processes for this require segmenting your scanners, targets, and options according to network subnets, firewall placement, scanner placement, and agent deployment. Once this is done neatly, you can begin optimizing the heartbeat of your vulnerability remediation and broader security program.
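
A simple staleness report makes this actionable. The sketch below is a minimal illustration, assuming each host record carries a last-successful-scan timestamp; the hostnames and the 30-day freshness target are hypothetical choices, not a standard.

```python
# Minimal sketch: flag hosts whose vulnerability data has gone stale.
# Hostnames, timestamps, and the 30-day threshold are illustrative.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)  # example freshness target
now = datetime.now(timezone.utc)

# Hypothetical inventory export: host -> last successful scan time.
last_scanned = {
    "app-server-01": now - timedelta(days=3),
    "db-server-02":  now - timedelta(days=45),
    "legacy-host-9": now - timedelta(days=400),
}

stale = {h: now - ts for h, ts in last_scanned.items() if now - ts > MAX_AGE}

print(f"{len(stale)}/{len(last_scanned)} hosts exceed "
      f"the {MAX_AGE.days}-day freshness target")
for host, age in sorted(stale.items(), key=lambda kv: kv[1], reverse=True):
    # Findings on these hosts may be outdated; remediation there
    # cannot be validated until they are rescanned.
    print(f"  {host}: last scanned {age.days} days ago")
```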

We hope this quick blog post helps organizations appreciate the effort required to deliver a modern vulnerability management program. Like most technology deployments, a “set it and forget it” approach is unlikely to deliver the desired outcome, but by focusing on measures like the ones above, high-performing vulnerability management is attainable. Large organizations should prioritize and execute improvements to the breadth, depth, and data freshness of their scanning efforts, and measure their vulnerability management program at regular intervals to track progress. Quickly finding and remediating cyber security vulnerabilities frees the business to invest more in innovation and growth.