Security researchers and vendors must work together closely to raise the security level of everyday technology. Based on a decade of hacking research at SRLabs, this post looks at best practices in vulnerability disclosure – both for the researcher and the vendor.
Security researchers find vulnerabilities all over the place. The Common Vulnerabilities and Exposures (CVE) database lists more than 150,000 security issues. With the Internet of Things, cybersecurity is no longer only about protecting data, networks, and systems, but also about stopping the exploitation of vulnerabilities that affect cars, power plants, medical devices, and other technologies. Here at SRLabs, we have found vulnerabilities in a number of technologies, including in SIM cards, the Ethereum Blockchain, and Alexa & Google Home.
But sharing these research results with the affected organizations can be tricky: the disclosure of vulnerabilities is a complex process with different stakeholders, and it lacks a clear legal framework. Nevertheless, it is a challenge worth taking to make the Internet a safer place and technical infrastructure secure and reliable.
Security researchers face a choice when it comes to dealing with a vulnerability they found. There are three types of disclosures:
In a private disclosure the vulnerability is only reported to the affected organization. The organization decides if and when the public is informed. As a result, some vulnerabilities will never be known to the public. This is often the case with bugs in private networks, and the majority of bug bounty programs work this way. Here, an agreement is struck between a security researcher and a company in which the bug details are exchanged for recognition and sometimes compensation.
At the other end of the spectrum we find full disclosure. Here, the finder has not been in contact with the vendor prior to publishing the vulnerability, so no patch is available yet. The vulnerability is often exploitable, and users can be affected negatively. Regardless, full disclosure can sometimes be the last resort to push organizations to fix a vulnerability if they are unresponsive or delay the fix unreasonably.
Responsible disclosure, also known as coordinated vulnerability disclosure (CVD), is the process by which the finder of a new vulnerability engages with the vendor to verify the problem, get the issue fixed or patched, and agree on a date at which the vulnerability will be disclosed to the public.
Responsible disclosure is usually the best way to balance the interests of the finder, the vendor and the users, but it can be challenging. In trying to make the world a more secure place, you need to be willing to keep trying – sometimes for months.
The stakeholders involved in a successful coordinated vulnerability disclosure are:
• The researcher, who discovers the vulnerability and is often also the reporter who notifies the vendor
• The vendor, who maintains (and often created) the product in question and is in charge of fixing the problem – for example by distributing patches
• Sometimes also a coordinator, for example a bug bounty platform, who manages the disclosure process
Clear communication and management of expectations are key for a successful CVD. Usually, a CVD has six phases which require the different actors to work together.
1. It begins with the discovery of the vulnerability by the researcher
2. Next comes a vulnerability report from the researcher to the vendor
3. Normally, the vendor would carry out a validation and triage to confirm the security issue. Additional information provided by the researcher could be helpful at this stage
4. Then, the vendor proposes a remediation plan to fix the vulnerability. This requires not only the development of a fix but also its testing. This is also a process which could benefit from support by the researcher
5. After this, the deployment of the patch to the affected systems takes place
6. Last but not least, public awareness for the vulnerability should be generated in order to motivate the users to install the patches
For more details on CVDs, have a look at The CERT guide to Coordinated Vulnerability Disclosure. How long the process takes depends on the kind of security issue at hand. To give a rough idea: if a hosted web application is affected, we would estimate around two weeks for the vendor to get it fixed; for enterprise software, one or two months; and for firmware, up to three months.
What you should do if you found a vulnerability…
• make sure that your testing is legal (as far as possible – some jurisdictions lack a clear legal framework for ethical hacking)
• do not change any data or the system itself and do not create backdoors
• protect private information
• make sure that the vulnerability has not already been reported by checking resources like the CVE database
• determine the organization responsible for issuing a CVE ID and request one; this ensures that once the CVD is done, the information on the vulnerability is public and easily accessible – it also helps with possible cross-vendor coordination
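One way to check whether an issue is already known is a keyword search against the public NVD CVE API. The sketch below only builds the query URL (the `keywordSearch` parameter is from NVD's API v2.0); "Acme Router" is a made-up product name, and you would fetch the URL with any HTTP client:

```python
# Sketch: build a keyword-search URL for the public NVD CVE API (v2.0)
# to check whether a vulnerability has already been reported.
# No network call is made here; fetch the URL with any HTTP client.
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_search_url(keywords: str) -> str:
    """Return an NVD keyword-search URL for the given product/issue terms."""
    return f"{NVD_API}?{urlencode({'keywordSearch': keywords})}"

print(cve_search_url("Acme Router buffer overflow"))
```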
…when getting in touch with the affected company:
• try your best to inform them; this can be difficult, since many companies do not provide clear contact information for their security team. Social media resources like LinkedIn, Twitter, and others can help you identify and contact them
• provide a clear and detailed report that allows the company to identify, understand, and reproduce the vulnerability (e.g. which product was tested, which tools were used, what was the test configuration)
• propose a timeline to the vendor in order to set clear expectations
• answer their questions and help them to understand and triage the problem
• do not ask for money; this could be interpreted as blackmail and get you into serious legal trouble
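The "clear and detailed report" from the checklist above can be boiled down to a handful of fields. The sketch below is only an illustration of what such a report should carry – the field names and structure are our own, not a formal standard:

```python
# Sketch: the fields a useful vulnerability report should carry,
# rendered as plain text. Field names are illustrative, not a standard.
from dataclasses import dataclass, fields

@dataclass
class VulnReport:
    product: str       # which product and version was tested
    test_setup: str    # tools used and test configuration
    steps: str         # how to identify and reproduce the issue
    impact: str        # what an attacker could achieve
    timeline: str      # proposed disclosure timeline, to set expectations

    def render(self) -> str:
        return "\n".join(
            f"{f.name}: {getattr(self, f.name)}" for f in fields(self)
        )
```

A report structured like this lets the vendor validate and triage the issue without a round of clarifying emails.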
…throughout the CVD:
• know your limits: it is their job to fix the bug, not yours
• if communication with the vendor goes awry, it might be useful to involve an independent coordinator
• avoid information leaks such as CCing the wrong person on an email or talking too much at a conference
• share information about the vulnerability, your research methods, and the CVE-ID with your community after the deployment of the patch
…when the company does not get back to you and does not fix the vulnerability:
• be patient to a certain extent and stick to the proposed timeline, but do not let the vendor stall you beyond it: the longer the vulnerability remains unfixed, the more likely it is that someone else discovers and actively exploits it
• involve a coordinator who acts as a relay or information broker between the stakeholders – The CERT guide to Coordinated Vulnerability Disclosure lists several types of coordinators along with reasons to engage one
• submit the vulnerability report to the national CERT
• publish a Proof of Concept (PoC) to drive change within an organization; if you follow this approach, check whether the organization provides information on embargo requests, and notify the appropriate parties about the disclosure
• consider full disclosure as the last resort; in this case, do not publish exploit code
What you should do as a vendor…
• provide clear contact information for reporting security issues, including a way to communicate securely, e.g. a PGP public key
• publish a coordinated vulnerability disclosure statement which explains the process and offers a clear timeline
• develop the resources to respond to reports and fix vulnerabilities promptly
• maintain open and respectful communication with the researcher
• do not sue the researcher; they are doing you a huge favor in the long term – researchers and reporters want to help
• instead: offer them rewards and credit, maybe even set up a bug bounty program
• after the vulnerability has been addressed, publish the details
• remember: disclosing vulnerabilities does not make you look bad; on the contrary, it shows that you care about the security of your users
• if you want to stay ahead of the game, you should work with a cybersecurity company that scans your infrastructure and software for security risks and helps you to fix and possibly disclose them
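One lightweight way to publish the contact details, encryption key, and disclosure policy mentioned above is a security.txt file (RFC 9116), served at /.well-known/security.txt on your domain. The addresses and URLs below are placeholders:

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/vulnerability-disclosure-policy
```

Per RFC 9116, Contact and Expires are required fields; Encryption and Policy are optional but directly address the points above.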
The team at SRLabs is dedicated to hacking research because we want to make the Internet more secure. Instead of finding individual programming bugs, SRLabs explores structural security issues to gain a deeper understanding of how classes of bugs work. Through analyzing technologies and protocols, we learn a lot about how systems are (mis)designed. This is why researching SIM cards was so interesting: in order to fix the security risk, they had to be redesigned using state-of-the-art cryptography, and in-network SMS filtering had to be implemented in mobile networks.
Investing in a good relationship with the vendor has proven helpful in completing a successful CVD. Nevertheless, we once had to resort to full disclosure, when a company threatened to sue us instead of fixing the vulnerability.
And to ensure that other researchers can dive even deeper into technology, it is important to deliver detailed descriptions of the research process and the discovered issues, for example through responsible disclosure. We also find that including the perspective of the affected companies in our disclosures encourages constructive discourse towards better technology. Finally, follow-ups on our research can help drive technology evolution, for example in the Android ecosystem.
The SRLabs team is currently researching 5G network technology and working on a census of Internet vulnerabilities. If either of these sounds interesting to you, get in touch: we are looking for motivated researchers to join us on this journey.