Since 2019, SRLabs has audited a range of prominent Substrate-based blockchain projects. This post shares insights on how to effectively test this complex technology.
Substrate is a framework that enables developers to create their own blockchains. Substrate-based blockchain projects are an integral part of the SRLabs assurance portfolio. Each test is a new challenge due to the large codebases, individual business models leading to unique threat landscapes, and the complexity of distributed systems. Yet, a structured audit process can – in our experience – make even the most challenging blockchain audits doable by a team of experienced pentesters or code auditors.
In this blog post, we describe the audit methodology that has helped SRLabs audit over a dozen blockchains and identify many critical vulnerabilities.
Substrate is developed by the Parity team, which includes some of the same people who previously helped create Ethereum. Parity themselves built the Polkadot and Kusama blockchains on Substrate but also enable hundreds of other teams to build their own Substrate-based chains.
What makes Substrate noteworthy from a security perspective is Rust, a memory-safe programming language that should prevent many security issues such as buffer overflows. However, security bugs can still exist, for example in the application logic and through the use of unsafe arithmetic functions.
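To illustrate the arithmetic issue, here is a minimal sketch (not code from any audited project): in release builds, Rust's plain `+` wraps silently on overflow, so balance-style math should use the standard library's checked operations instead.

```rust
// Toy example of unsafe vs. checked arithmetic on a balance value.

fn unsafe_add(balance: u64, deposit: u64) -> u64 {
    // wrapping_add mirrors what plain `+` does in release builds:
    // it wraps silently on overflow, which could corrupt a balance.
    balance.wrapping_add(deposit)
}

fn safe_add(balance: u64, deposit: u64) -> Option<u64> {
    // checked_add returns None on overflow, so the caller can reject
    // the operation instead of committing a wrapped value.
    balance.checked_add(deposit)
}

fn main() {
    assert_eq!(unsafe_add(u64::MAX, 1), 0); // wrapped around to zero
    assert_eq!(safe_add(u64::MAX, 1), None); // overflow detected
    assert_eq!(safe_add(100, 50), Some(150));
    println!("ok");
}
```

Memory safety does not help here: both versions are perfectly "safe" Rust, yet only the checked variant surfaces the overflow to the application logic.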
To find and help fix these remaining issues, the crypto hacking team at SRLabs regularly reviews Substrate chains, following a structured approach that runs from threat modeling through the design and implementation of checks. Our six-member team – most of us coming from a software security background – has established this methodology over the past two years.
In creating the review methodology, we had to overcome two main challenges: no review methodology was previously available for these assessments, and only a few automated tools exist to support the process.
Blockchain security reviews in 4 steps
The methodology that works well for us starts with threat and design questions before going into a deep-dive review of the actual implementation. These additional "theoretical" steps at the beginning of the review have proven crucial in guiding the testers during the more "practical" implementation check. Without this guidance, the testers would likely get lost in the size and complexity of the chains.
The four steps of the blockchain security review are:
1. Threat modeling. We first create a threat model by estimating the incentive behind and the hacking effort of various attacks on the blockchain system.
For example, a hacker could have a high incentive to stall the block production of a chain, causing reputational damage and subsequent financial loss for token holders through a drop in the cryptocurrency's market value. Attacks related to this threat can potentially be executed with low effort, making it a high-risk area for the project team and its stakeholders.
With the model, the auditors can then focus on the parts of the code that are subject to the highest risk. A threat model also allows the auditor to deal with the lack of established vulnerability classes – for each attack listed in the threat model, the auditor can verify that the respective security mitigations against this attack are in place. This ensures that the auditor does not “forget” to check any attacks.
2. Design coverage check. The goal of the second step is to determine which of the threats in the threat model the blockchain design attempts to prevent. Attempting to prevent a threat through a security control does not mean the designers automatically succeed, however; this is why the hybrid testing in the next step is still required even for threats considered in the design.
On the other hand, threats that are not even considered in the design are flagged as open risks for the blockchain and should prompt design changes.
The design coverage check results in a list of controls that – if implemented correctly – prevent the threats from the threat model.
3. Hybrid testing. Next, we choose implementation checks for each of these design controls to verify whether in fact they can deliver the intended protection. The checks test the runtime implementation of the chain with the goal of discovering as many hacking vectors as possible and pursuing the most promising ones. We employ a hybrid test approach that combines dynamic fuzz testing and manual code review.
For the dynamic testing, we implemented a number of custom harnesses based on honggfuzz. We combine this with writing custom tests that verify that a given blockchain is protected against certain attacks laid out in the threat model. While fuzzing is great to identify certain bug classes, for example the use of unsafe arithmetic functions, it is very difficult to detect logic bugs with fuzzing. To identify other bug classes, we manually review the code, guided by our threat model. This hybrid approach works well because it combines the best of both worlds – the scalability and ease-of-use of fuzz-testing and the thoroughness of manual code review.
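The custom tests mentioned above can be pictured with a hypothetical toy example (this is not Substrate or SRLabs harness code; `Ledger` and its methods are invented for illustration). The check verifies an invariant derived from the threat model: a transfer must never change total issuance, overdraw an account, or overflow a balance.

```rust
// Toy invariant check of the kind written against a threat model.

use std::collections::HashMap;

struct Ledger {
    balances: HashMap<u32, u64>, // account id -> balance
}

impl Ledger {
    fn total_issuance(&self) -> u64 {
        self.balances.values().sum()
    }

    // Transfer using checked arithmetic; rejects overdrafts and overflows.
    fn transfer(&mut self, from: u32, to: u32, amount: u64) -> Result<(), &'static str> {
        let from_bal = *self.balances.get(&from).ok_or("unknown sender")?;
        let new_from = from_bal.checked_sub(amount).ok_or("insufficient funds")?;
        let to_bal = *self.balances.get(&to).unwrap_or(&0);
        let new_to = to_bal.checked_add(amount).ok_or("overflow")?;
        self.balances.insert(from, new_from);
        self.balances.insert(to, new_to);
        Ok(())
    }
}

fn main() {
    let mut ledger = Ledger { balances: HashMap::from([(1, 100), (2, 50)]) };
    let before = ledger.total_issuance();

    // Valid transfer: succeeds and preserves total issuance.
    assert!(ledger.transfer(1, 2, 30).is_ok());
    assert_eq!(ledger.total_issuance(), before);

    // Overdraft attempt: must be rejected.
    assert!(ledger.transfer(1, 2, 1_000).is_err());
    println!("ok");
}
```

A fuzz harness built on honggfuzz would exercise the same invariants, but with machine-generated call sequences instead of hand-picked ones, which is how the two halves of the hybrid approach complement each other.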
4. Verify bugfixes. Once the bugs have been addressed, we double-check that the reported issues have actually been resolved – which is not always trivial to determine, once again owing to the complex nature of blockchain systems.
The four-step approach has proven to be both efficient and effective – several of the blockchain teams we worked with have integrated the threat model into their development process to guide designers and developers.
Blockchain technology – love it or hate it – is here to stay and people tie an increasing amount of value to ever-more complex blockchain applications. We, the security community, share in the responsibility to help blockchain technology mature and protect data and assets.
At SRLabs, we love a good security challenge and blockchain systems have proven an infinite supply of great puzzles, both for experienced testers and talented security newbies. If you feel the same, get in touch.