Log4j Showed Us That Public Disclosure Still Helps Attackers.
By Alex Haynes, CISO, CDL
At 2:25 pm on the 9th of December, an infamous (now deleted) tweet linking to a 0-day proof-of-concept exploit on GitHub (also now deleted) for the vulnerability that came to be known as ‘Log4Shell’ set the internet on fire. It kicked off a holiday season of companies scrambling to mitigate, patch, and then patch some more, as further proofs of concept appeared for different iterations of a vulnerability that was present in pretty much everything that used Log4j.
Otherwise known as public disclosure, the act of telling the world something is vulnerable with an accompanying proof of concept is not new, and happens quite frequently for all sorts of software, from the most esoteric to the mundane. Over time, however, research and experience have consistently shown that the only beneficiaries of releasing 0-day proofs of concept are threat actors, since such releases put companies in the awkward position of having to mitigate without necessarily having anything to mitigate with (i.e., a vendor patch).
How does disclosure usually work?
There are all kinds of disclosure mechanisms in use today, whether officially sanctioned vulnerability disclosure programs (think of Google and Microsoft) or those run via crowdsourced platforms, often referred to as ‘bug bounties.’ Disclosures in these scenarios follow a defined process with set timelines: the vendor releases a patch, users of the software are given ample time to apply it (90 days is the accepted standard here), and the proof of concept is only released publicly with vendor approval (this is also known as ‘coordinated disclosure’). Bug bounty platforms additionally apply NDAs to their security researchers, so proofs of concept often remain sealed even after the vulnerability has long been fixed.
Having gone through many disclosures myself, both in the CVE format and directly through vendors’ vulnerability disclosure processes, it usually works like this when it goes smoothly:
- The researcher informs the vendor of the vulnerability with an accompanying proof of concept
- The vendor confirms the vulnerability and works on a fix with an approximate timeline
- Once a fix is in place, the vendor asks the researcher to confirm that it works
- After the researcher confirms the fix, the vendor releases the patch
- At an agreed time after the patch release, details of the vulnerability can be published if the vendor agrees (anything up to 90 days is normal)
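The embargo logic in the last step can be sketched as a simple date check. This is a minimal illustration of the 90-day coordinated-disclosure convention described above; the function name and parameters are hypothetical, not part of any real disclosure platform’s API:

```python
from datetime import date, timedelta

# Widely accepted coordinated-disclosure embargo window.
EMBARGO_DAYS = 90

def may_publish_details(reported: date, today: date,
                        vendor_approved: bool = False) -> bool:
    """Return True if vulnerability details may be published.

    Under coordinated disclosure, details stay private until either
    the vendor approves publication or the embargo window elapses.
    """
    deadline = reported + timedelta(days=EMBARGO_DAYS)
    if vendor_approved:
        return True           # vendor signed off on publication
    return today >= deadline  # embargo expired without coordination

# Example using the Log4Shell report date from the timeline below.
reported = date(2021, 11, 24)
print(may_publish_details(reported, today=date(2021, 12, 9)))  # still embargoed
print(may_publish_details(reported, today=date(2022, 3, 1)))   # window elapsed
```

The point of the sketch is that publication before the deadline requires explicit vendor sign-off; the Log4Shell proof of concept appeared on the day of the patch release, well inside that window.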
Returning to the Log4j vulnerability, a disclosure process was actually already underway, as evidenced by the pull request that appeared on GitHub on the 30th of November. The actual timeline of the disclosure, as described in an e-mail to SearchSecurity, was slightly different:
11/24/2021: informed
11/25/2021: accepted report, CVE reserved, researching fix
11/26/2021: communicated with the reporter
11/29/2021: communicated with the reporter
12/4/2021: changes committed
12/5/2021: changes committed
12/7/2021: first release candidate
12/8/2021: communicated with reporter, additional fixes, second release candidate
12/9/2021: released
While the comments in the thread indicate frustration with the speed of the fix, this is par for the course when it comes to fixing vulnerabilities (as everyone points out, the patch was built by volunteers, after all).
The reasons for releasing 0-day proofs of concept, and the evidence against them
On the surface, there may appear to be legitimate reasons for releasing a 0-day proof of concept. The most common is that the vulnerability disclosure process with the vendor has broken down. This can happen for many reasons, including the vendor being unresponsive (i.e., playing dead), not regarding the vulnerability as serious enough to warrant a fix, taking too long to fix it, or a combination of the above. The justification is then to release the proof of concept for the ‘common good’, which evidence has shown is rarely the good of the software’s users. There are also peripheral, less convincing reasons for releasing a proof of concept, chiefly publicity, especially if you are linked to a security vendor. Nothing gets press coverage faster than a proof of concept for a common piece of software that everyone uses but that has no patch yet, and this is, unfortunately, a mainstay of a lot of security research today.
The evidence against releasing proofs of concept is now robust and overwhelming. A study by Kenna Security on this very topic showed that the only parties who benefit from proof-of-concept exploits are the attackers who leverage them. Even several years ago, a Black Hat presentation entitled ‘Zero Days, Thousands of Nights’ walked through the lifecycle of zero-days, how they were released and exploited, and showed that when proof-of-concept exploits were not disclosed publicly, the underlying vulnerabilities went undiscovered by anybody, threat actors included, for an average of seven years. Sadly, this lesson was relearned a bit too late during the Log4j scramble. While all the initial disclosures were promptly walked back and deleted, even the more recent 2.17.1 disclosure ran into the same trouble, receiving so much flak that the researcher issued a public apology for the poor timing of the disclosure.
It’s good to see that attitudes towards public disclosure of proof-of-concept exploits have shifted, and the criticism of researchers who jump the gun is deserved. Collectively, though, the work now needs to focus on building more robust disclosure processes for everyone, so that we don’t fall into the trap of repeating this scenario the next time a vulnerability like this rolls around.
About the Author
Alex Haynes is a former pentester with a background in offensive security and is credited for discovering vulnerabilities in products by Microsoft, Adobe, Pinterest, Amazon Web Services, and IBM. He is a former top 10 ranked researcher on Bugcrowd and a member of the Synack Red Team. He is currently CISO at CDL. Alex is a frequent contributor to Infosec publications such as the United States Cyber Security Magazine and Cyber Defense Magazine. He is also a regular speaker at security conferences on the topic of offensive security.
Alex Haynes can be reached online at our company website https://www.cdl.co.uk/