Let’s talk about “audits”: assessments performed by external vendors on a specific section of code, product, functionality, technology, etc.
Many use the terms interchangeably; this document will use the term assessment because (in my opinion) it implies the appropriate way of thinking about the process. The title uses the word “audit” because that is typically the word thrown around, but “audit” usually carries the connotation that the results certify conformance to specific standards, oftentimes legal ones. Having your smart contracts and supporting infrastructure reviewed by security professionals is not that, and that mentality should not be reinforced. Here is an article describing some more pedantic differences between the two terms.
Currently, a large knowledge gap exists between developers and reviewers about the assessment process. This document is an attempt to narrow that gap by clarifying what vendors typically expect, and the methods organizations can use to best meet those expectations during their development process. More specifically, this document details how an organization:
By taking these steps, an organization building things can:
Note that this is not a comprehensive guide to all things assessment-related. There are many details to the relationship between an organization and an assessment vendor. This document should prepare you to start developing practices within your organization to bolster these relationships. It will most likely change over time as we at Status become more educated and seasoned in this process, so check back occasionally. The updated form of this document will live in our Security Documentation.
In order to properly understand the ins and outs of getting an external assessment, it is first necessary to have a unified understanding of what an “audit” or assessment is. An assessment is a chance to get external eyes on a specific project, a proposed architecture, or even a specific part of a project, so that you can gain additional confidence that it works the way you intend it to, and not in any other way. It is temporarily hiring expertise (additional or domain-specific) that your organization does not have. Additionally, it is a chance to get a fresh perspective on something being developed, which helps remove developer bias, among other things.
Typically, a company engages a 3rd-party vendor for an assessment in order to supplement the organization’s lack of knowledge (or capacity) in a given domain. For instance, a company may engage in an assessment if:
It cannot be overstated that an external assessment DOES NOT guarantee code quality or security, nor does it confer a stamp of approval. Many believe that an assessment is a stamp of confidence that they can market to a broad community, proving a specific level of safety and “production quality.” This mentality should be avoided at all costs. A security assessment should be thought of as simply gaining additional attention and expertise on a specific piece of functionality or section of code, and as an opportunity to gain insight and wisdom in a field of knowledge so that future development can be of higher quality. More generally, the assurance level should increase as a result of an assessment. However, with respect to vulnerabilities, absence of evidence is not evidence of absence.
There are many factors that dictate the quality of any given assessment. Here are some things you should keep in mind that affect overall assessment quality.
The expertise of the vendor performing an assessment with the respective code or technology. Vendors typically have specific specialties that they excel in, and advertise as such. Their available staff, previous experience, and professionalism will all affect the outcome of an assessment. It is important to realize this when it is time to start searching for a vendor for an assessment. A leading vendor in user experience and design should not be expected to give leading reviews on distributed infrastructure architecture.
The resources the vendor is able to allocate to the assessment. This is typically a function of how busy they are, their available resources on the technology, and whether or not the available assessment budget is in line with the level of effort required to do an adequate job. This refers not only to the vendor’s available man-hours, but also to their in-house tooling and their expertise with industry-standard tooling and best practices.
The documentation and quality of the respective code and technology under assessment. You can think of a vendor as new hires on a project: they need to go from zero to expert in a very short amount of time, and their ability to do so is heavily influenced by the resources given to them for that job. If those resources are inadequate, the vendor will spend a substantial portion of the paid-for assessment time getting up to speed and creating those resources themselves. This means they will either extend the required assessment time to do an adequate job (thus increasing the price), or drastically decrease the time available for their specialized work (thus decreasing the overall quality).
The number of known issues with the associated code, technology, or functionality being assessed. Assessment vendors draw from previously discovered issues to raise the tide of knowledge across the board. If a given technology is relatively new, then vendors are less equipped to find the potentially deep issues that have yet to be discovered in the ecosystem. In fact, broad issues with a given technology are often discovered during assessments. If such issues exist and have yet to be discovered, the likelihood of finding them depends heavily on how well you are prepared.
Gearing up for an assessment can be quite daunting, as there are many steps you should take to prepare. Additionally, it is sometimes hard to even assess whether the material in question warrants an external assessment. Many of the items required for assessment preparedness are actually just good development practices. Here we discuss various procedures and methods that are useful to have completed when considering an assessment. Not only will they help you identify whether you need an assessment, what exactly should be assessed, and whom (and how) to approach; being prepared will also help all parties in every aspect of the assessment (such as the items stated in the introduction).
In fact, proper preparation will put you in a decent place to move forward even if you do not have the funds for an external assessment. Preparing for an audit is the same as auditing the project yourself and fixing whatever you are capable of. Additionally, formally showing that you are well prepared and have exhausted the methods available with your given resources can drastically improve your ability to raise any additional funding needed for an assessment, as the remaining work is both explicitly highlighted and easier to reason about from an outsider’s perspective.
As an additional benefit not specific to security assessments, spending the requisite attention on peripheral information means an outside contributor can get up to speed on everything they need to know to contribute successfully. This lowers the barrier to entry for company growth or open-source contributions (when appropriate) and minimizes the time for contributors to become productive (which saves you money and grows your community!).
So what should you do before looking for an external assessment? The following sections will describe various efforts that help get your project ready for review.
Have a specification
Any product should have documentation that technically and explicitly details the requirements that need to be satisfied. The format of this can change depending on the nature of the material, but a technical specification allows an outsider to fully understand the underpinnings of the material’s functionality. It is crucial for understanding whether or not the technical implementation satisfies the requirements of the project, and what unintended consequences may exist based on that implementation.
For instance, a choice of a specific technology within a product may yield the desired functionality, but its technical implementation may result in unintended consequences that a specialist will immediately notice and report based on subtleties the developers may not be aware of. Having a specification helps detail what is used and how it is used so that external vendors are able to spot exactly where an issue lies, and also helps enable them to give specific recommendations on how to mitigate it.
The less detailed your product’s specification is, the more ambiguous the recommendations will be for any given issue.
Describe how to interact with it
So every product needs documentation (we all know most hate this part). Remember, a vendor (or new contributor!) needs to go from zero to hero in as little time as possible, and documentation is the road used to get there. You might be saying “but we already made a specification!” Good, but a specification covers the minute technical details of the project. Documentation is higher-level and explains how things fit together, how the product is used, etc. A specification details what the product should do (and how) to meet requirements, while documentation describes what it actually does.
If you are building a product, a vendor will need to build it and potentially check that process. Make sure that process is fully up to date and documented so that they do not have to stop and ask questions every time they hit a build snag.
Have exhaustive user stories
Do people use what you are building? If so, to what end? A product should outline in detail its actors and all of the intended ways they interact. By allowing a vendor to understand who the intended users are, their possible interactions, and the outcomes of their actions (together called a story), you expedite the identification of unintended use cases, interactions, and outcomes that should be raised as issues and potentially fixed. Or maybe you’re lucky and something unintended becomes a new feature, and not a bug (but probably not)!
Perform threat modeling
Okay! You documented how to use it, you specified exactly how it’s built, and you detailed the intended users that interact with it. Now it’s time to view things through a different lens: that of an “attacker.” A short description of threat modeling is thinking about what an adversary to your system would try to do, where they would try to go to do “bad” things, and what they would have to break in order to “succeed” at it.
More formally, threat modeling is the process of actively identifying where within a project risk (value) lies, how it is accessed, who has access, and what security processes are in place to control all of it. This is an important part of preparing for an external assessment for many reasons.
For starters, the vendor will more than likely do this anyway! By starting the process yourselves and documenting the results, you remove their need to create the associated documents (examples below). They will more than likely have questions and want to alter them based on findings, but that is a process of improvement, not creation.
A basic, but very valuable way of communicating a threat model is to list the various actors (e.g., admin, token holder, other Ethereum users) and their intended capabilities within the system (example).
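As a sketch of this actor/capability listing (every name, asset, and capability below is hypothetical, invented purely for illustration), a simple machine-readable inventory might look like:

```python
# Illustrative threat-model inventory: which assets carry value, which
# actors exist, what each actor is meant to do, and what they must
# never be able to do. All entries are made up for this example.
THREAT_MODEL = {
    "assets": ["token balances", "admin keys", "user metadata"],
    "actors": {
        "admin": {
            "can": ["pause contract", "upgrade logic"],
            "must_not": ["move user funds"],
        },
        "token holder": {
            "can": ["transfer own tokens", "vote"],
            "must_not": ["mint tokens"],
        },
        "anonymous": {
            "can": ["read public state"],
            "must_not": ["call privileged functions"],
        },
    },
}


def untrusted_capabilities(model):
    """Collect every capability that must be impossible per actor —
    i.e. the claims an assessor will try hardest to break."""
    return {actor: info["must_not"] for actor, info in model["actors"].items()}
```

Handing a vendor even this small table tells them where the value lives and which negative claims ("a token holder cannot mint") the assessment should attack first.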
Threat modeling is a systematic way of identifying where value (and thus risk) lives within a given system. The documentation resulting from this process usually includes a diagram of risk across the system’s components and an overview of how it is all connected. These diagrams serve as a shared understanding of the architecture for all parties who contribute to it, as well as a bird’s-eye view of “how it all fits together.”
If you have a map of risk for a system, it is easier to evaluate whether the system even warrants an external assessment, and if so, it pinpoints exactly where the focus should be. Organizations often approach vendors for an assessment and, when asked about the scope of the project, simply say “I don’t know, you tell me!” Not only does this drastically increase the potential price of an assessment, it may also lead to a vendor assessing things that aren’t even your main focus, because they don’t know where the risk is (or at least how you see it).
Perform User Stories
So you’ve got a working system. You think it does what you set out to do. Good; now make sure of it. If you have done the previous planning process correctly, you should have a set of user stories that detail the ideal paths of the system’s use. Do them, explicitly. Make sure that the experience is what you expect it to be for each individual use case. Your specification provides a good starting checklist of things to walk through.
Color Outside the Lines
That’s the first part: making sure the system does what it is intended to do, with an acceptable experience. The next step is to try to break it in any way you can possibly see it breaking. Attempt to access things that should not be accessible. Put random characters where there should be numbers; go nuts. The system should fail gracefully for every attempt outside the defined scope, because users WILL attempt to do things outside of what you expect them to do.
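As an illustration of this kind of negative testing, here is a minimal Python sketch; `withdraw` is a hypothetical handler invented for the example. The point is that every piece of junk input is rejected with a clear, documented error rather than a crash or silent corruption:

```python
def withdraw(balance: int, amount_field: str) -> int:
    """Parse a user-supplied amount and deduct it, rejecting junk input
    with a documented error rather than crashing."""
    try:
        amount = int(amount_field)
    except ValueError:
        raise ValueError("amount must be a whole number")
    if amount <= 0 or amount > balance:
        raise ValueError("amount out of range")
    return balance - amount


# Inputs a well-behaved user would never send — but a real one will.
for bad in ["banana", "-5", "", "1e9", "9999999", "0x10"]:
    try:
        withdraw(100, bad)
        print(f"ACCEPTED bad input {bad!r}: investigate!")
    except ValueError:
        pass  # graceful rejection is the expected outcome
```

Anything the loop prints is a finding: input outside the defined scope that the system accepted instead of rejecting gracefully.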
Create a Testing Corpus
I am not going to belabor the various kinds of testing frameworks that are available, but I will point out some things that should be done. Not only should you test what the system should and should not do, but these tests should be codified and run whenever changes are made. That way, if a change breaks something that was previously working, you can catch it quickly and fix it before it becomes a problem.
The process of testing your system (and documenting your testing) helps you in a variety of ways. First, it allows you to have a higher level of confidence that the code works as intended, and is resilient to new changes breaking that confidence. The more robust a testing framework is, the higher that confidence. Additionally, a corpus of tests also provides clarity, coverage, and context of what has already been done to those who would assess the codebase. The more you test, the more information you give about the system’s intent, and its implementation to those who would assess it.
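A minimal sketch of such a codified corpus, covering both what the system should do and what it should refuse to do. `TokenLedger` is a toy stand-in for your real system under test; plain asserts are used here, but any test runner follows the same pattern:

```python
class TokenLedger:
    """Toy in-memory ledger, used only to illustrate the testing pattern."""

    def __init__(self):
        self.balances = {}

    def mint(self, account, amount):
        if amount <= 0:
            raise ValueError("mint amount must be positive")
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender, receiver, amount):
        if amount <= 0 or self.balances.get(sender, 0) < amount:
            raise ValueError("invalid transfer")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount


def test_intended_behaviour():
    # What the system SHOULD do: the user story's happy path.
    ledger = TokenLedger()
    ledger.mint("alice", 100)
    ledger.transfer("alice", "bob", 40)
    assert ledger.balances == {"alice": 60, "bob": 40}


def test_forbidden_behaviour():
    # What the system should NOT allow: overspending a balance.
    ledger = TokenLedger()
    ledger.mint("alice", 10)
    try:
        ledger.transfer("alice", "bob", 1000)
        assert False, "overspend should have been rejected"
    except ValueError:
        pass


# Run on every change (locally or in CI) so regressions surface early.
test_intended_behaviour()
test_forbidden_behaviour()
```

The forbidden-behaviour tests are just as valuable to an assessor as the happy-path ones: they document, in executable form, what you believe the system must never do.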
Use Automated Testing
It is also advisable to take advantage of automated testing software to grow your testing corpus in a manner not feasible with manual testing. Manual testing catches the things you can intuit about how the system works; automated testing tries to catch the rest. For instance, implementing a software fuzzer lets you observe how the system behaves when random input is fed into it, at a testing rate only a computer can sustain. Implicitly, the places where you set up a fuzzer also help 3rd parties understand a system’s potential entry points.
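As a sketch of the idea, here is a toy fuzz harness; `parse_message` is a hypothetical entry point standing in for your real input handler (real projects would use a dedicated fuzzer such as a coverage-guided one, but the principle is the same). Its documented failure mode is `ValueError`; any other exception counts as a crash to triage:

```python
import random


def parse_message(data: bytes) -> dict:
    """Hypothetical message parser: [version][length][payload...]."""
    if len(data) < 2:
        raise ValueError("message too short")
    version, length = data[0], data[1]
    if version != 1:
        raise ValueError("unsupported version")
    if len(data) - 2 < length:
        raise ValueError("truncated payload")
    return {"version": version, "payload": data[2:2 + length]}


def fuzz(iterations: int = 10_000, seed: int = 0) -> int:
    """Feed random bytes into the entry point; count unexpected crashes."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_message(data)
        except ValueError:
            pass  # graceful, documented failure: fine
        except Exception:
            crashes += 1  # anything else is a bug to triage
    return crashes
```

Each place you attach a harness like `fuzz` doubles as documentation: it marks an entry point where untrusted input enters the system.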
You have designed the work, you have done the work, you have documented the work, you have tested the work. Now it is time to get the work looked at by others. Let’s talk about how you go about doing that, and maybe the reasoning behind why you might want to.
Should you even get an audit?
When and whether you should get an audit or assessment is subjective, and many variables come into play in figuring that out. Much of the previously described process not only helps you create secure systems that are more approachable to others, but also maximizes a team’s ability to judge whether external parties should be employed to help with that assessment.
Here are some of the questions that you should consider when evaluating whether an assessment is needed:
If you have come up with detailed answers to those questions, then you have more material to justify whether or not an additional assessment is warranted. Let’s discuss a bit about how to define what that looks like and some methods for approaching those that are capable of performing the assessment.
Define the scope
In order for you to properly engage with someone to audit your codebase, you must first define what their task would be. Auditors are not omniscient. They are people with specialized skill sets for hire. It is your job to come to the table with expectations.
That means before engaging them, you should explicitly define what it is that you want them to look at, and the types of questions you want answered as a result. If you have followed along with the previous recommendations, you have already done the lion’s share of the required work for this. The task now is to narrow down the codebase to the sections that require additional scrutiny.
Typically, this refers to areas within the system that carry the majority of risk (which you’ve conveniently defined already), or the more complex parts of the system that require high-skilled labor to create or specialized technology to implement.
In short, defining the scope is a statement to the potential auditors of “we feel this part of the codebase deserves more attention, and we do not have the internal resources to do it sufficiently ourselves.” Note that not having the appropriate internal resources is not a slight, but is simply an inevitability when building complex software. It is excruciatingly rare that a novel system that can carry risk is built from the ground up without using technologies that are developed elsewhere.
Have a budget prepared
Now that you know what you want looked at and have concluded that you cannot do it yourselves, you need a budget to pay someone to do it. This is not an easy task, as prices fluctuate drastically between industries, firms, and times. I will not speculate here as to why.
Fortunately, since you’ve followed all of this advice, you have already done much of the work to precisely define what a would-be auditor would do, and created the materials required to do it efficiently. This is an additional benefit of developing software this way: it drastically lowers the cost of assessments and increases their effectiveness, both for you as an organization and for the community at large.
[FIND RESOURCES HERE TO HELP PEOPLE ESTIMATE COST OF AUDITS]
Write a Request for Proposals (RfP) document
The next step in the process of getting an external assessment is to bring it all together into a single document. This document is typically referred to as a Request for Proposals document. Here is a typical outline of such a document:
Here is an example of such a document written by Status for the Nimbus beacon-chain codebase. This document outline and process was emulated from the folks at Sigma Prime. We feel it drastically increases the clarity and fairness of choosing auditors for work. In our experience, the feedback and response from doing this has drastically outweighed the times we directly reached out to vendors for a project, and we’ve created relationships with firms that previously didn’t exist (more on why this matters later). Not only that, but the RfP process keeps you as a company from being blinded by your own bias about what you think needs to be done: a submitted proposal may justify additional work or an altered scope that you were not aware of.
Additionally, it forces vendors to justify the cost of the work to be done in an open market. If they happen to be relatively expensive, their proposal should justify why they charge what they do and what you get for it. If their costs are relatively low, they should be able to show that they can still do a satisfactory job.
In short, by allowing vendors to bid on your project, you force the security community to sell themselves on the competencies they have cultivated. The more information and detail you can provide them on the work to be done, the better they can do that, and the more likely you are to have quality results from the assessment at an appropriate budget.
Set up channels of communication
You are going to need to communicate with the vendors throughout the selection process and during the assessment. It is worthwhile to set up secure communications beforehand to facilitate that as well as prepare your team for what to expect.
You are going to want an official channel of secure communication; this is typically done through email combined with PGP encryption. Set the expectation that the official business of the assessment goes through this channel so that all parties have a record. Be sure to broadcast your PGP public key in the RfP and specify the appropriate email address for communications.
Throughout the assessment, it has proven beneficial to also set up an ephemeral secure channel for more short-term collaboration. This gives the assessment team access to your team in case of any blockers, misunderstandings, or updates that arise. Be prepared to give your in-house domain experts (software developers, in most cases) proper time during the assessment to answer questions, fix issues, and provide whatever the security vendor needs to do their job expediently. You are paying for their expertise; maximize it.
In some cases, it may be worthwhile to set up additional channels of communication. Think ahead about what your engagement description might warrant. In our case, we have found it useful to set up a process for assessment teams to disclose findings through GitHub issues in the repository in question. We outline that process in a living document here and update it based on feedback from each external assessment we engage in.
There are a lot of firms available to perform security assessments on the products and technology an organization rolls out. Some are outstanding, some are good, some aren’t, and some should not even be offering services. All of them vary in their associated costs. How does one navigate this and choose an appropriate vendor for a reasonable price? Below are some thoughts that can help guide your decision in selecting an appropriate vendor for an external assessment.
If you have followed the process of releasing an RfP, and have collected a number of proposals from would-be vendors, then you get to read and choose from the selection.
The number one factor in choosing is whether a given vendor is capable of doing the job you have laid out (which further emphasizes the importance of defining scope). A vendor’s proposal should show that they have substantial expertise in the specific technologies within the scope of the assessment. Remember, you are attempting to hire expertise and wisdom you do not have within your organization. The proposal should show a track record of previous assessments that are available for review, specifically within a similar domain of required expertise. If those previous reviews are private, other sources such as relevant educational material or prior work from their team can suffice. If available, a vendor should convey that they have developed or contributed to similar projects, relevant best practices, or tooling associated with the technology under review.
The next factors are price and timeline. The vendor should have a rationale behind their fee structure and why the proposed assessment costs what they quote. If a vendor cannot defend the price behind their proposal, they should be passed over. The proposal should also include the expected timeline and the resources they expect to apply to the tasks. Think about this part carefully, as their proposed allocation of resources indicates how they view the distribution of difficulty in the assessment. If it is not in line with how you view the work to be done, additional clarification and justification should be requested.
It should be noted here that if you are seeking an assessment of updated code that has previously been audited, additional weight should be given to the original vendor, as they have prior experience with your specific codebase, among other reasons.
After weighing the individual proposals, it comes down to a subjective decision. Your decision should optimize for price and timeline, with the caveat that the chosen vendor must be capable of performing the task at hand. Note that this means it is not always necessary to choose the absolute most skilled vendor for the job, just a vendor above a threshold of capability. All of the processes explained in this document should aid you in making that judgement for your organization and the systems to be reviewed.
You did it. Congratulations on successfully engaging the security community and getting an external assessment. Now what?
As per your RfP, you will now have a set of deliverables to go through. This usually takes the form of a report detailing the team’s findings and proposed mitigations. It is now your job to address them. You have spent a lot of time, resources, and effort preparing for this and going through it; your response to the fruits of that labor should be commensurate. Once the report is received, you should fully understand the reported issues and address them appropriately. If an issue is ambiguous, ask the vendor for clarification (the previously established channels aid in this). Addressing an issue could mean a quick fix of a discovered vulnerability, revamping an entire part of the codebase to mitigate a particular attack vector rooted in the fundamental architecture, or even dismissing the issue as irrelevant. All issues should be addressed in one way or another.
After this happens, you should broadcast the results and your mitigations. The extent to which you do this depends on your organization’s practices. At Status, we are completely open and release all information to the public. A more private company may wish to obfuscate some specific details for security reasons.
Additionally, a company should articulate the concerns raised by the audit and how they impacted the organization and product. This gives you a chance to detail the steps you are taking to fix underlying issues and the proactive measures you are implementing within the organization to raise the bar of security, so that similar issues in the future are either avoided altogether or caught early.
Lastly, you actually have to fucking do those things instead of just talking about them.