False positives: Mitigating concerns from cybersecurity-minded users
Author’s note: Views are my own.
Enterprise organizations may require that a vendor's product adhere to strict security requirements, or they may subject the vendor to extensive due diligence at onboarding. Technical analysis of the product, code, or software via scanning or testing is often a step in this process. While a security assessment is a crucial component of any vendor management program, assessments of a product can sometimes indicate risk where none exists. A typical example is when code scans or penetration tests conducted by an outside party flag “false positives.”
The occurrence of inaccurately flagged security alerts
When external parties scan and test an organization's product, false positive Common Vulnerabilities and Exposures (CVEs) are a frequent result. NIST describes a false positive as “an alert that incorrectly indicates that malicious activity is occurring.” When an external party assesses an enterprise offering, the words “CVE” or “vulnerability” can cause unnecessary panic. However, false positive CVEs do not indicate that a product is insecure.
False positives are common errors that occur when a code scanner or pen testing tool flags non-exploitable vulnerabilities, and they happen for a myriad of reasons. Scanning tools demonstrate value by alerting on as many findings as possible; from a marketing perspective, a tool that raises hundreds or even thousands of alerts looks impressive, and a long list of alerts can make users feel they are extracting maximum value from the scanner. Another reason false CVEs appear is that many scanning tools prioritize being comprehensive: to avoid missing anything, they alert on CVEs they believe are present even when they cannot confirm them with certainty. Put simply, vulnerability management tools are built to err on the side of flagging too much rather than too little.
Unfortunately, this creates roadblocks for many security teams, as incorrectly flagged issues can cause organizational chaos. The sheer volume of false positives can bury legitimate vulnerabilities, and inaccurate alerts consume significant time and human capital because sifting through them to identify true CVEs is arduous.
Security professionals know that false positives are common when running a code scanner or conducting a pen test. Many tools in the landscape will raise incorrectly flagged alerts or report CVEs that present no risk to the end user or the business.
One of the first lessons security practitioners learn is that not all vulnerabilities are exploitable; identifying which must be fixed (and which present no threat) is the key to reducing organizational risk.
Handling externally identified false positives
When an external party brings the security team a list of suspected vulnerabilities it has found in the product, what is the team to do? Effective strategies exist for handling and remediating user concerns! It is possible to gain trust and foster transparency among the user base, even when false positives arise.
Step 1: The foundation of true security is ensuring the business's product and services have adequate assessment and alerting processes in place. In practice, this means enabling code scanning, conducting regularly scheduled pen tests, and implementing a repeatable method for vulnerability remediation.
These practices ensure that security can identify and quickly fix any true positives within the offering. As part of these standard operating procedures, implement tooling or playbooks that determine whether a CVE is a genuine issue a malicious actor could take advantage of or an illegitimate finding (that is, an alert on a CVE that is unexploitable or presents no risk).
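To make that concrete, here is a minimal, tool-agnostic sketch of what one decision in such a playbook could look like. The field names, justification wording, and placeholder CVE ID are illustrative assumptions, not the output of any particular scanner.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Hypothetical, simplified view of a scanner finding after engineering review."""
    cve_id: str
    component: str
    component_in_use: bool           # is the flagged library actually shipped and loaded?
    vulnerable_code_reachable: bool  # can any product code path reach the vulnerable code?

def triage(finding: Finding) -> tuple[str, str]:
    """Classify a finding and record the justification behind the call."""
    if not finding.component_in_use:
        return "false positive", "component not present in the shipped product"
    if not finding.vulnerable_code_reachable:
        return "false positive", "vulnerable code is never on an executed code path"
    return "true positive", "exploitable; route to the remediation process"

if __name__ == "__main__":
    # Placeholder CVE ID and component name, used only for illustration.
    example = Finding("CVE-0000-0001", "libexample",
                      component_in_use=True, vulnerable_code_reachable=False)
    print(example.cve_id, *triage(example))
```

The value here is less in the code than in forcing every disposition to carry an explicit, repeatable justification.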
Step 2: An essential part of effective security is tuning the organization's tools. Security and development teams must partner to correctly identify which CVEs are true positives and which are false alerts. Make every effort to reduce the noise the organization's scanners produce by having security and development collaborate on which CVEs are mislabeled. For example, developers can clarify whether a flagged library component is actually used, which allows the business to understand clearly whether a risk is present.
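Many scanners support suppression natively (for example, ignore files keyed on CVE IDs), but the underlying idea is tool-agnostic. A minimal sketch, assuming the raw report has already been exported as a simple JSON list of findings with a cve_id field (an invented format used only for illustration):

```python
import json

# Reviewed false positives and the rationale agreed between security and
# development. In practice this mapping would live in version control so the
# suppression history stays auditable. The CVE IDs below are placeholders.
ACCEPTED_FALSE_POSITIVES = {
    "CVE-0000-0001": "bundled library, but the vulnerable module is never imported",
    "CVE-0000-0002": "affected function is unreachable from our entry points",
}

def filter_findings(raw_report_path: str) -> list[dict]:
    """Drop findings already dispositioned as false positives; keep the rest for triage."""
    with open(raw_report_path) as fh:
        findings = json.load(fh)  # assumed format: a list of {"cve_id": ..., ...} records
    return [f for f in findings if f["cve_id"] not in ACCEPTED_FALSE_POSITIVES]
```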
Step 3: Once the business has implemented its tooling portfolio and tuned the results as far as possible, promote a culture of security transparency by sharing summaries of these reports with the user base. Keep the full reports private due to their sensitive nature; however, externally facing executive summaries that give a high-level overview of the threat level will provide confidence to outside parties.
Internal due diligence
Step 4: When a user scans the product or services and identifies findings, some will likely be false positives. It is nonetheless essential to assess every scan report with care. A reputable security team knows the first step is to do its internal due diligence and evaluate the scan output from the external party. Reviewing external scan and pen test reports is vital for transparency among the user base and will only improve the security program; allowing users to assess the security of the business's product is an effective way to build trust.
Step 5: Assess the report findings using internal tools; confirm whether the CVEs are present using proprietary code scanners or an automated software supply chain security platform. Following the report review, communicate to the user base which issues present no risk. A multitude of tools in the security landscape can confirm whether a CVE is actually exploitable at runtime.
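One lightweight way to structure that review is to cross-reference every CVE ID in the external report against your internal results and assign each one an explicit disposition. The sketch below assumes the external report and the internal reachability analysis have already been parsed into simple sets of CVE IDs; real reports would need tool-specific parsing.

```python
def disposition_external_findings(external_cves: set[str],
                                  internal_exploitable: set[str],
                                  internal_not_reachable: set[str]) -> dict[str, str]:
    """Map each externally reported CVE to a disposition that can be communicated back."""
    dispositions = {}
    for cve in sorted(external_cves):
        if cve in internal_exploitable:
            dispositions[cve] = "confirmed - remediation in progress"
        elif cve in internal_not_reachable:
            dispositions[cve] = "false positive - vulnerable code not reachable in our product"
        else:
            dispositions[cve] = "under review - requires manual analysis"
    return dispositions
```

The "under review" bucket is deliberate: anything internal tooling cannot confirm either way should be investigated rather than waved away.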
Keep in mind that the goal is never to “convince” users that something is secure when that is not the case; we would fail in our duty as security professionals if that were the aim! Instead, the primary goal is to use evidence and data from your security tooling to show that the presence of incorrect alerts does not mean the product is unsafe.
Internal analysis that confirms whether a vulnerability is exploitable gives you legitimate data to share externally as evidence that a CVE is a false positive. A primary benefit of modern scanning tools is that most can identify whether the code affected by a specific CVE sits on a code path your product actually uses, and whether a malicious actor could exploit it given the code present in your services. You can then share sanitized spreadsheets and screenshots from your internal scanning tools as documented evidence that a supposed vulnerability is, in fact, a false positive.
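If dispositions are tracked in a structure like the one sketched above, producing that sanitized evidence can be as simple as exporting only the fields you are willing to share and nothing else. A minimal sketch, with invented column names:

```python
import csv

def export_sanitized_evidence(dispositions: dict[str, str], out_path: str) -> None:
    """Write a shareable CSV of CVE dispositions, deliberately omitting internal
    details such as file paths, code snippets, or infrastructure names."""
    with open(out_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["cve_id", "disposition"])
        for cve_id, disposition in sorted(dispositions.items()):
            writer.writerow([cve_id, disposition])
```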
As new security risks emerge daily and the tech landscape shifts, false positives are here to stay, and the work of reviewing inaccurate alerts and confirming non-exploitability will continue. Fortunately, data and evidence make it possible to manage these conversations with external parties in a way that encourages transparency and builds trust. If you conduct regular technical assessments of your product, do your due diligence in reviewing code scan findings, and use a portfolio of scanning tools to confirm when a CVE has no impact, you will have a credible way to demonstrate that your offering is secure.