Security researchers and vendors must work together closely to raise the security level of everyday technology. Based on a decade of hacking research at SRLabs, this post looks at best practices in vulnerability disclosure – both for the researcher and the vendor.
Security researchers find vulnerabilities all over the place. The Common Vulnerabilities and Exposures (CVE) database lists more than 150,000 security issues. With the Internet of Things, cybersecurity is no longer only about protecting data, networks, and systems, but also about stopping the exploitation of vulnerabilities that affect cars, power plants, medical devices, and other technologies. Here at SRLabs, we have found vulnerabilities in a number of technologies, including SIM cards, the Ethereum Blockchain, and Alexa & Google Home.
But sharing these research results with the affected organizations can be tricky - the disclosure of vulnerabilities is a complex process with many stakeholders, and it lacks a clear legal framework. Nevertheless, the challenge is worth it: it helps make the Internet a safer place and technical infrastructure more secure and reliable.
Security researchers face a choice when it comes to dealing with a vulnerability they found. There are three types of disclosures:
In a private disclosure the vulnerability is only reported to the affected organization. The organization decides if and when the public is informed. As a result, some vulnerabilities will never be known to the public. This is often the case with bugs in private networks, and the majority of bug bounty programs work this way. Here, an agreement is struck between a security researcher and a company in which the bug details are exchanged for recognition and sometimes compensation.
At the other end of the spectrum we find full disclosure. This means the finder has not been in contact with the vendor prior to publishing a vulnerability. The consequence is that no patches are available yet; often, the vulnerability is exploitable and users may be affected. Regardless, full disclosure can sometimes be the last resort to push organizations to fix a vulnerability when they are unresponsive or delaying the fix unreasonably.
Responsible disclosure, also known as coordinated vulnerability disclosure (CVD), is the process by which the finder of a new vulnerability engages with the vendor to verify the problem, get the issue fixed or patched, and agree on a date at which the vulnerability will be disclosed to the public.
Responsible disclosure is usually the best way to balance the interests of the finder, the vendor and the users, but it can be challenging. In trying to make the world a more secure place, you need to be willing to keep trying – sometimes for months.
The stakeholders involved in a successful coordinated vulnerability disclosure are:
The researcher who finds a vulnerability, who is often also the reporter that notifies the vendor; and the vendor, who maintains (and often created) the product in question and is in charge of fixing the problem - for example by distributing patches.
Sometimes there is also a coordinator, for example a bug bounty platform, who manages the disclosure process.
Clear communication and management of expectations are key for a successful CVD. Usually, a CVD has six phases which require the different actors to work together.
For more details on CVDs, have a look at The CERT guide to Coordinated Vulnerability Disclosure. How long the process takes depends on the kind of security issue at hand. To give a rough idea: If a hosted web application is affected, we would estimate around two weeks for the vendor to get it fixed, for enterprise software one or two months, and for firmware up to three months.
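The rough timelines above can be sketched as a small helper that proposes a public disclosure date from the report date and the kind of product affected. This is an illustrative sketch only: the category names, the function, and the choice of upper-bound durations are assumptions, not part of any formal disclosure policy.

```python
from datetime import date, timedelta

# Rough fix-window estimates from the post. Categories and durations
# are illustrative assumptions; where the post gives a range, the
# upper bound is used.
FIX_WINDOWS = {
    "hosted_web_app": timedelta(weeks=2),       # around two weeks
    "enterprise_software": timedelta(days=60),  # one or two months
    "firmware": timedelta(days=90),             # up to three months
}

def proposed_disclosure_date(product_type: str, reported_on: date) -> date:
    """Suggest a public-disclosure date for a vulnerability report."""
    return reported_on + FIX_WINDOWS[product_type]

# Example: a web-app bug reported on 1 March would get roughly two weeks.
print(proposed_disclosure_date("hosted_web_app", date(2021, 3, 1)))  # 2021-03-15
```

In practice the date is negotiated with the vendor rather than computed, but agreeing on an explicit deadline up front is what keeps the process "coordinated".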
What you should do if you found a vulnerability…
…while researching:
…when getting in touch with the affected company:
…throughout the CVD:
…when the company does not get back to you and does not fix the vulnerability:
The team at SRLabs is dedicated to hacking research because we want to make the Internet more secure. Instead of finding individual programming bugs, SRLabs explores structural security issues to gain a deeper understanding of how classes of bugs work. Through analyzing technologies and protocols, we learn a lot about how systems are (mis)designed. This is why researching SIM cards was so interesting: In order to fix the security risk, they had to be redesigned using state-of-the-art cryptography, and in-network SMS filtering had to be implemented in mobile networks.
Investing in good relationships with the vendor has proven helpful in completing a successful CVD. Nevertheless, we once had to resort to full disclosure, when a company threatened to sue us instead of fixing the vulnerability.
And to ensure that other researchers can dive even deeper into technology, it is important to deliver detailed descriptions of the research process and the discovered issues, for example through responsible disclosure. We also find that including the perspective of the affected companies in our disclosures encourages constructive discourse towards better technology. And finally, follow-ups on our research can help drive technology evolution, for example in the Android ecosystem.
The SRLabs team is currently researching 5G network technology and working on a census of Internet vulnerabilities. If either of these sounds interesting to you, get in touch: We are looking for motivated researchers to join us on this journey.