Product Security Incident Response Team (PSIRT) best practices


In my previous post, I disclosed that SonicWall had quietly rolled out fixes over the course of several days before the advisory for CVE-2020-5135 was published.

SonicWall’s fix for CVE-2020-5135 did not properly resolve the issue; instead, it introduced a new vulnerability in the same code. SonicWall was aware of the new vulnerability but deferred the small fix until the next release, more than 6 months later. These disclosures, more than most, touched on a couple of interesting topics regarding what should be expected from a Product Security Incident Response Team (PSIRT) and what considerations go into establishing a vulnerability response policy. After years of working with vendors and having greatly varied experiences, I think it is long overdue for me to write a post sharing some of my thoughts on best practices for operating a PSIRT. To begin, the topics I’ll be discussing are briefly enumerated below:

  1. Advisory Timing: How should the release of patches and associated fixes be coordinated? Is it better to release patches first, advisories first, or at the same time?
  2. Fix Verification: What is the process for verifying that a vulnerability has been fixed and that the fix has not introduced new issues?
  3. Release Scheduling: How long is it acceptable for a vendor to know about a ‘0-day’ vulnerability before releasing a fix?

Advisory Timing

In most circumstances, I think it is bad practice for a vendor to do anything other than synchronize patch and advisory publication. There may be exceptions to this, such as when a vulnerability is under active attack before a patch is available, but there are risks worth considering on either side of a synchronized release. Making vulnerability descriptions available before patches clearly gives attackers a head start in seeking out and exploiting the described vulnerabilities before anyone can defend against them. Releasing a patch before the advisory may seem like an intuitive way to resolve this concern, but it carries its own risk. When adversaries learn that a vendor releases security updates ahead of security advisories, they can diff the patches to find what was fixed and potentially start exploiting it before victims are aware that the update contains security content. The only remaining option is to synchronize the release of patches with advisory publication so that potential victims and potential attackers become aware of the situation at the same time.

Fix Verification

When receiving a vulnerability report, security teams should perform a thorough analysis of the potential vulnerabilities to recommend a remediation strategy and assess the risk of related issues. Once a fix is proposed, it is critical that it not only be black-box tested but also reviewed in source code by a qualified security analyst. Basic regressions and repeated fixes reflect poorly on an organization’s commitment to product quality and security. Requiring formal code reviews with multiple security sign-offs can usually prevent these embarrassing missteps and avoid putting customers at risk. Fixing a vulnerability in a particular functional area also tends to draw researchers and adversaries to take a closer look at that code. This makes it all the more important that a comprehensive security review is conducted to squash related issues before patches are released.

Release Scheduling

Perhaps the most commonly discussed topic around vulnerability handling is how quickly a vendor needs to respond with a fix. It is pretty common (as was the case with CVE-2020-5135) for more than one researcher to find the same vulnerability independently. Sitting on vulnerability information for too long increases the chance of adversarial exploitation. Conversely, if a vendor releases a patch as quickly as possible for every reported vulnerability, it can quickly become overwhelming for customers while offering little to no additional risk reduction. Patch fatigue is real, and nobody is at their most productive with a never-ending stream of hotfixes to apply. This is why many large software vendors, Microsoft and Oracle for example, release security advisories in regularly scheduled batches where a single patch typically resolves several vulnerabilities. Customers can expect security updates on Microsoft’s monthly ‘Patch Tuesday’ or with Oracle’s quarterly Critical Patch Update. The main exception is when a vulnerability is being actively exploited or active exploitation is imminent. Software vendors must have reasonable policies in place to decide when a fix cannot wait for a regularly scheduled release due to extenuating circumstances. I believe that if a vendor becomes aware of a publicly known vulnerability or ongoing attacks, they have a moral obligation to advise their customers of the situation and provide a fix as soon as possible.

It is important to note that having fewer scheduled releases through the year can be problematic when receiving reports from researchers with strict full-disclosure policies. Google is well known in this regard for its rather strict 90-day deadline, after which the issue report becomes public, including proof-of-concept code and other notes. As an example, consider an organization with a quarterly release schedule receiving a vulnerability report 3 weeks before a release; the timeline is sketched below. There may not be enough time to fully research and remediate the report in time for the current release, and the following release would be scheduled for after Google’s deadline. Google has been the subject of much praise as well as criticism for holding to this deadline even when it meant publicly disclosing unpatched vulnerabilities in popular products. (They now allow a grace period if the vendor has a patch scheduled for release up to 14 days past the deadline.) In her 2018 Black Hat USA keynote, Parisa Tabriz cited that a surprising 98% of Google’s issue reports were resolved within the 90-day deadline. She also stated that one vendor had improved their patch response time by as much as 40%. While the figures are impressive, they may not tell the complete story. Without further data, it is unclear whether patch quality was maintained with the improved response times. This could also be a post hoc fallacy, as no strong evidence was given that the improvements were caused by Google’s policy.
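
To make the timeline concrete, here is a minimal sketch of that quarterly scenario. The dates are hypothetical and chosen purely for illustration; the 90-day deadline and 14-day grace period are the publicly stated policy described above:

```python
from datetime import date, timedelta

# Hypothetical dates for illustration only.
report_received = date(2021, 1, 11)
current_release = report_received + timedelta(weeks=3)   # too soon to land a vetted fix
next_release = current_release + timedelta(weeks=13)     # the following quarterly release

deadline = report_received + timedelta(days=90)          # 90-day disclosure deadline
grace = deadline + timedelta(days=14)                    # 14-day grace period

print(f"Disclosure deadline:  {deadline}")       # 2021-04-11
print(f"Deadline plus grace:  {grace}")          # 2021-04-25
print(f"Next release:         {next_release}")   # 2021-05-03
print(f"Misses grace period:  {next_release > grace}")  # True
```

Even with the grace period applied, the next quarterly release lands about a week too late, which is exactly the squeeze described above.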

A common system for managing this problem is to use a CVSS threshold score. At its core, the idea is that vulnerabilities scoring below the threshold can be deferred to the next appropriate release while those scoring at or above it require more immediate attention. A more sophisticated method may add further conditions based on a report’s origin, exploit availability, and other contextual elements. For example, a lower-scoring vulnerability which is publicly known with available exploit code may rightfully justify a quicker response than a more severe flaw discovered internally through code audits. The goal is to balance the competing interests of avoiding disruption while still providing strong security assurances.
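
As a minimal sketch of how such a policy might be encoded (the 7.0 threshold, field names, and decision order are all assumptions for illustration, not any vendor’s actual rules):

```python
from dataclasses import dataclass

CVSS_THRESHOLD = 7.0  # assumed cut-off; a real policy would tune this


@dataclass
class Report:
    cvss_score: float
    publicly_known: bool = False
    exploit_available: bool = False
    actively_exploited: bool = False


def triage(report: Report) -> str:
    """Decide how urgently a reported vulnerability should be fixed."""
    # Active (or imminent) exploitation always jumps the queue.
    if report.actively_exploited:
        return "out-of-band fix"
    # Public knowledge plus exploit code can outweigh a modest CVSS score.
    if report.publicly_known and report.exploit_available:
        return "out-of-band fix"
    if report.cvss_score >= CVSS_THRESHOLD:
        return "expedite into the nearest scheduled release"
    return "defer to a regularly scheduled release"


# A low-scoring but public, exploitable bug outranks a severe internal finding.
print(triage(Report(cvss_score=5.3, publicly_known=True, exploit_available=True)))
print(triage(Report(cvss_score=8.1)))
```

The ordering of the checks is the point: contextual signals are evaluated before the raw score, matching the example above.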

Conclusions

As a researcher looking at a one- or two-line vulnerability, it is easy to criticize SonicWall for a sluggish 6-month turnaround. With my software engineer hat on, however, while slow, this isn’t outrageous by any stretch of the imagination. The vulnerability itself is a moderate risk with a CVSSv3.1 score of 5.3. Unlike Heartbleed, which would leak plaintext user data from protected sessions, this vulnerability only seems to leak memory addresses in the various scenarios I have tested. SonicWall normally provides semi-annual updates to customers, and there is no strong reason for this vulnerability to warrant an unscheduled release. That being said, my interactions with the SonicWall PSIRT have been underwhelming in some critical regards. For a PSIRT, ‘researcher relations’ is, or at least should be, part of the job. It is important to set expectations with researchers about communication and response timelines. There is also no excuse for email to a PSIRT going unanswered for weeks and requiring a follow-up that warns of uncoordinated disclosure. It also does not sit well with me that patched firmware was available for so long before any customers were aware of the security fixes. Any attack groups taking note of this may develop resources to quickly diff product updates and search for fixed vulnerabilities. The fact that the fix was botched, introducing a new vulnerability that went undetected before release, is rather alarming. This was the most critical vulnerability among a batch of 11 reports from Nikita Abramov, and yet the patch apparently was not scrutinized, or was not reviewed by someone with appropriate knowledge of the programming language.

All in all, my experience with the SonicWall PSIRT raised some alarms, but they were far from the worst vendor I’ve dealt with over the years. The experience could have been improved in two primary ways. First, the team could have done a much better job of setting expectations from the start. Their email from October said a fix was being developed and that they would let me know when a patch was ready for release. I then didn’t hear anything from them for 5 months. Presumably they already knew in October that this would wait for the April update cycle. Had they set that expectation, it would have saved several emails and likely some frustration on both sides. Second is email responsiveness. Some of the organizations I’ve dealt with have been really good on this front, providing some form of human response quickly. If meaningful information is not available at the moment, it is fine to send a quick message just to set expectations. As a researcher, it can be very frustrating to send a vulnerability report to a PSIRT and still be wondering a week later whether anybody has actually seen it.


