The vulnerability management lifecycle is an ongoing process for organizing how security bugs are found, fixed, and prevented in an organization. The lifecycle consists of five steps: assessing, prioritizing, acting, reassessing, and improving. Sometimes additional steps are added, or these five are named differently, but they capture the broad consensus on the lifecycle's stages.
In this article, we’ll explore the vulnerability management lifecycle in depth, giving a realistic, step-by-step example of applying the process to an organization. We’ll also provide some tips for using the lifecycle optimally.
Vulnerability management lifecycle key concepts
Let’s briefly review each step in the vulnerability management lifecycle.
| Step | Description |
| --- | --- |
| Assess | Thoroughly search systems and networks for vulnerabilities. |
| Prioritize | Evaluate which vulnerabilities require the most urgent attention according to your organization's threat model. |
| Act | Fix vulnerabilities in priority order, and decide which issues will remain unresolved due to other necessities. |
| Reassess | Check whether you've missed any important vulnerabilities or even introduced new ones. |
| Improve | Address the institutional or process issues that lead to systemic weaknesses. |
How vulnerability lifecycle management works
Managing vulnerabilities is often messy: Security engineers might create issues on GitHub repositories that are never triaged by developers, who themselves may not understand the specific impact of a bug. Even if bugs are patched effectively and promptly, there are often no measures to address the organizational issues that led to the bugs popping up in production.
The lifecycle works by applying a formal, continuous process that addresses these concerns. There are several good reasons to take this approach. Some of the more important benefits include the following:
- Continuous improvement
- A focus on vulnerabilities that fit your threat model
- Coordinated effort among security engineers
- Fewer bugs slipping through the cracks
So far, we’ve only described the lifecycle in vague, highly generalized terms because the vulnerability management lifecycle looks slightly different in every organization where it’s applied. The theory provides an important foundation for understanding, but a purely theoretical approach is inadequate to demonstrate the power of this process. With that in mind, let’s move on to something more practical.
Why the vulnerability management lifecycle matters
Imagine that you’re on a security team that does not follow the vulnerability management lifecycle. This security team is tasked with reviewing this Python code that pings a user-supplied host:
```python
from subprocess import check_output

host = input('host: ')
output = check_output('ping -c 2 ' + host, shell=True)
print(output)
```
A security engineer learns that an attacker can use a semicolon in the input to execute arbitrary commands on the system:
```
$ whoami
bob
$ python3 ping.py
host: example.com; echo "THIS SCRIPT RUNS AS THE USER: $(whoami)"
[...]
THIS SCRIPT RUNS AS THE USER: root
```
To prevent this, the developers remove semicolons from the input:
```python
host = input('host: ').replace(';', '')
```
However, there is a simple workaround. An attacker can use the chaining operator (“&&”) to once again execute arbitrary commands:
```
$ python3 ping.py
host: example.com && echo "THIS SCRIPT RUNS AS THE USER: $(whoami)"
[...]
THIS SCRIPT RUNS AS THE USER: root
```
Even when vulnerabilities are fixed correctly, they are often reintroduced later when the code is refactored, and patches can take months to arrive from developers.
Applying the vulnerability management lifecycle
Now suppose that this team’s security leadership decides to implement the vulnerability management lifecycle. Let’s go through each step of the lifecycle and observe how it could improve the efficacy of the security engineering team.
Assess

Rather than depending on security engineers to spontaneously coordinate and audit systems, the team should organize deliberate penetration tests. Ideally, you should also commission external penetration tests from hired auditors. However, the frequency and variety of these audits depend on other factors, like budget and the size of your security team.
For more information on organizing an assessment, check out the recommendations section later in this article.
Prioritize

Bugs should be assigned informative priority markers based on their severity and relevance to your organization's threat model.
Using the example above, we could add tags to issues that inform developers of a security bug’s urgency. This way, bugs can be assigned to developers in order of urgency rather than only relying on other factors (like how well the developers understand the bug or how easy it is to fix).
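As a minimal sketch, a team could encode this ordering in its triage tooling. The severity labels and issues below are hypothetical, chosen just to illustrate handing bugs to developers in order of urgency:

```python
# Hypothetical severity ranking; the labels are illustrative, not a standard.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

# Example issues as they might come out of an issue tracker's API.
issues = [
    {"title": "XSS in search page", "severity": "medium"},
    {"title": "Command injection in ping tool", "severity": "critical"},
    {"title": "Verbose error messages", "severity": "low"},
]

# Triage: sort bugs so the most urgent are assigned first.
triaged = sorted(issues, key=lambda issue: SEVERITY_ORDER[issue["severity"]])
for issue in triaged:
    print(f"[{issue['severity'].upper()}] {issue['title']}")
```

The same idea extends to CVSS scores or custom fields pulled from your tracker.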
Act

Security engineers should work with developers to understand how to fix a vulnerability. When a developer writes a patch for a security bug, a security engineer who understands the bug should be one of the reviewers for the pull request.
Reassess

After the patch is created, the system should be assessed again using the same criteria to ensure that the behavior really is fixed. Additionally, the auditor should check that new vulnerable behavior has not been introduced.
For example, when we reassessed the fix in the vulnerable Python code that simply removed semicolons, we found it was still vulnerable. A proper fix avoids invoking a shell at all by passing the arguments as a list:
```python
output = check_output(['ping', '-c', '2', host])
```
Of course, knowing to do that requires deeper programming knowledge than many security engineers possess, which is precisely why it's so critical that security engineers and software developers collaborate closely throughout this process. We used code for this example, but the same principle of collaboration applies when security teams work with systems administrators, technicians, and so on.
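Putting the pieces together, a hardened version of the ping script might combine the list-form `check_output` call with input validation. This is a sketch under our own assumptions, not a definitive implementation; the `safe_ping` name and the hostname pattern are illustrative:

```python
import re
import subprocess

# Simple allow-list for hostnames; illustrative, not RFC-complete.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def safe_ping(host: str) -> bytes:
    """Ping a host without ever invoking a shell."""
    if not HOSTNAME_RE.fullmatch(host):
        raise ValueError(f"invalid hostname: {host!r}")
    # Passing a list (and omitting shell=True) makes the host a single
    # argument to ping; metacharacters like ; or && are never interpreted.
    return subprocess.check_output(["ping", "-c", "2", host])
```

Validation here is defense in depth: even if a metacharacter slipped past the pattern, the list form of `check_output` never hands it to a shell.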
Improve

Our imaginary organization has a problem that we still haven't solved: even after developers fix vulnerabilities, the same vulnerability is often reintroduced later!
How can we improve the team’s process to stop this from happening?
The answer is regression testing. We create tests that will only pass if the vulnerable behavior is absent. More specifically, the team must have a rule that any security patch pull request must come with a regression test for it to be merged.
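As a sketch of what such a regression test could look like, assume the patched code exposes a `safe_ping` function (a hypothetical name) that rejects shell metacharacters. The test passes only while the vulnerable behavior stays gone:

```python
import subprocess

def safe_ping(host: str) -> bytes:
    # Stand-in for the patched function under test; a real test suite
    # would import it from the project instead of defining it here.
    if any(ch in host for ch in ";&|$`"):
        raise ValueError("shell metacharacters are not allowed")
    return subprocess.check_output(["ping", "-c", "2", host])

# Known injection payloads from past reports; extend as new ones appear.
INJECTION_PAYLOADS = [
    "example.com; id",
    "example.com && id",
    "example.com | id",
]

def test_injection_payloads_are_rejected():
    for payload in INJECTION_PAYLOADS:
        try:
            safe_ping(payload)
        except ValueError:
            continue  # rejected, as expected
        raise AssertionError(f"payload was not rejected: {payload!r}")

test_injection_payloads_are_rejected()
```

Wired into CI, a test like this makes the pull request that reintroduces the bug fail before it can merge.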
Now that you know how the lifecycle works in a practical setting, let’s consider some of the most essential tips and best practices for applying it effectively.
Building a threat model
A threat model identifies the kinds of risks and real-life scenarios that your organization is worried about confronting. A well-thought-out threat model is essential if you want to get the most out of the vulnerability management lifecycle; the second step, prioritize, especially relies on getting it right.
Broadly speaking, the threat modeling process can be divided into three basic steps:
- Decompose the application
- Determine and rank threats
- Determine countermeasures and mitigation
You can learn more about these steps and how to build an adequate threat model that matches your organization’s particular needs by following OWASP’s guide to the threat modeling process.
Managing an assessment
When preparing for an assessment, there are two questions you want to ask:
- Black box or white box assessment?
- Internal or external assessment?
A black box audit means that the auditor does not know the inner workings of the systems being tested, while white box means that they do. Ideally, you should do both, but that’s not always feasible.
An internal assessment means using the security personnel you already have to test your systems for security issues. Doing this is worthwhile because you can leverage your current team’s expertise to look in the right places. On the other hand, an external assessment comes with the benefit of security experts who are highly specialized in offensive operations. Again, ideally, you would do both.
OWASP offers an introductory guide to setting up a penetration test.
As a rule of thumb, if you have to choose between a black box and a white box assessment, choose white box: it is less realistic, but you'll find more bugs. It's also worthwhile to start with a small-scale internal assessment before paying for an external one. Your team will catch a lot of low-hanging fruit, allowing the external auditors to focus on complex, hard-to-catch bugs hiding in your network.
Automating the lifecycle
Like all the steps in the vulnerability management lifecycle, the final step (improve) is open to interpretation. Regression testing, anti-phishing training, and hiring additional security staff are all ways to improve your security posture going forward, depending on your situation.
A common problem when attempting to improve your security posture is that many solutions introduce new work. Extensive testing coverage, while completely worthwhile and important in the long run, increases the workload for developers in the short term. Planning regular penetration testing requires time from your security team, or money if you hire external auditors.
The solution to this dilemma is to lean extensively on automated solutions that can do the heavy lifting for you. Ideally, this means that the improvement step of the vulnerability management lifecycle actually decreases your workload.
Without a formal process for managing vulnerabilities, it's easy to let security bugs slip through the cracks. It's important to prioritize the security issues that will affect your organization the most, according to an informed threat model.
The vulnerability management lifecycle is a powerful organizational practice for establishing how you handle security issues. The lifecycle consists of several steps (usually five, although the exact number varies among organizations) that formalize the process of managing vulnerabilities.
To get the most out of the vulnerability management lifecycle, you’ll want to follow some basic recommendations outlined in this article. Whether it’s automation, penetration testing, or building a threat model, we hope you can use these tools to make your organization safer.
Paladin Cloud's Cyber Asset Attack Surface Management (CAASM) product monitors your cloud environments to identify security risks and vulnerabilities in the configuration of cloud services. It also connects to third-party vulnerability management platforms, such as Qualys, to surface vulnerabilities and extend your security posture using pre-built, best-practice policies.