While the definitions of vulnerability assessments and pen tests may be imprecise, their objectives overlap. A vulnerability assessment focuses on enumerating known flaws and misconfigurations, such as those catalogued by the Common Vulnerabilities and Exposures (CVE) standard. Pen tests add context around the exploitability and impact of those vulns, perhaps identifying new ones along the way.
Taken together, the vulns provide a measure of the risk associated with an app. Individually, they highlight opportunities where better code and controls can reduce that risk.
Test Security, Reduce Risk
The goal of reducing risk is a more useful mindset than aiming for a concept like 100% secure. Rather than blindly adding security controls, you manage an app’s ecosystem to minimize its risk relative to its value. Practical security involves working within the constraints of time, resources, and budget.
A common infosec quip is that a “100% secure” system is one that’s turned off and disconnected from the internet. In addition to ignoring the reality of how actual systems work, this describes a vague threat: the nebulous internet. A better risk model is built by questioning what data’s on the system, who needs access to the data, who needs access to the system, and so on.
Part of a pen test is creating such risk models for an app. A pen test targets an app to discover vulns, exploit them, and determine how well the app resists attacks. The pen tester adds dimensions like business impact and likelihood to each vuln in order to assign risk to it.
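One common way to assign that risk is to rate each finding’s likelihood and impact and combine the two. The sketch below is illustrative only; the 1–5 scales and the example findings are hypothetical, not a formal scoring methodology.

    # Illustrative only: a simple likelihood-times-impact scoring of findings.
    # The 1-5 scales and the example findings are hypothetical, not a standard.
    def risk_score(likelihood: int, impact: int) -> int:
        """Combine 1-5 likelihood and impact ratings into a 1-25 risk score."""
        return likelihood * impact

    findings = [
        {"name": "SQL injection in login form", "likelihood": 4, "impact": 5},
        {"name": "Verbose error messages", "likelihood": 3, "impact": 2},
    ]

    # Rank findings so the riskiest ones are addressed first.
    for f in sorted(findings, key=lambda f: risk_score(f["likelihood"], f["impact"]), reverse=True):
        print(f'{f["name"]}: risk {risk_score(f["likelihood"], f["impact"])}')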
Sometimes this is described as acting “as an attacker”. That’s useful shorthand, but it’s also vague and often incorrect. This attacker is really a pen tester executing a list of well-defined attacks. It’s an important exercise, but it’s not the only way attackers (aka threats) might compromise an application.
Coverage vs. Compromise
Pen tests are most effective when they’re aligned with a development process. They serve as a verification of the app’s design and implementation. This approach focuses on creating attack scenarios for the app, its users, and its data, then applying those attacks against all of the app’s functionality and workflows. This amount of coverage gives a more accurate view of the app’s risk.
When an attack succeeds, it indicates that the app’s code, or the way it was designed and developed, needs improvement. A failed attack still provides useful feedback. Together they help evaluate the effectiveness of security controls within the code’s design and implementation. It’s an affirmation of what’s working well and an indication of what isn’t.
This attack-centric kind of pen test produces a risk profile that development teams can strive to reduce over time. They do this by fixing code, re-architecting components, or layering additional security controls.
However, there are other ways to target an app “as an attacker.” Or, more specifically, there are threats other than direct attacks against an app. Someone who wishes to compromise a particular user account or access particular data likely doesn’t care about cross-site scripting vulns or the absence of an X-Frame-Options header. These are dedicated adversaries who target the entire ecosystem around an app.
This kind of threat-centric pen test is a red team exercise. It’s often designed as a test of the security controls, processes, and monitoring within the entire organization, rather than just the development process that builds an app.
For example, one of the best backdoors into an organization is via stolen credentials. An adversary may focus on using a compromised developer account to access production data. This becomes a test of more than the app’s development lifecycle. It may start with questions like how dev accounts are restricted from production systems, or how they’re restricted from publishing unreviewed code to production systems. But it broadens to topics like host hardening, logging, monitoring, and identity and access management.
Use a pen test to evaluate the design and implementation of an app; align it with the development process. Use a red team to evaluate the deployment of controls throughout an organization; align it with a security framework like the NIST Cybersecurity Framework.
Continuous Cycle: Release, Review, Repeat
External motivators might influence the schedule for application pen testing, such as annual compliance requirements or ad-hoc requests for security attestation from a third party. Even so, maintain focus on incorporating application pen testing into the development process. Doing so sets an expectation that security is important and helps establish behaviors that support security. It’s a quality assurance phase for production apps.
The frequency of testing should also be compatible with the development process. Modern development techniques are moving toward continuous integration and continuous deployment (CI/CD) methods. Thus, ever-changing code may impact security, alter original design assumptions, or introduce trivial typos that lead to dangerous vulns.
The CI/CD approach strongly encourages automation and continuous improvement to processes. This applies to pen testing as well. While manual testing will always be important for conducting in-depth security reviews, it should still lead to improvements in automated testing, scanning, and bug management.
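As a minimal sketch of what that automation might look like, the script below acts as a CI gate that fails a build when a scanner’s report contains too many serious findings. The report layout, file name, and thresholds are assumptions for illustration; adapt them to the tools your pipeline already runs.

    # Illustrative CI gate: fail the build when a security scanner reports
    # more findings than allowed. The findings.json layout and thresholds
    # are assumptions; map them to your scanner's actual report format.
    import json
    import sys

    MAX_ALLOWED = {"high": 0, "medium": 5}  # example thresholds, not a standard

    def main(report_path="findings.json"):
        with open(report_path) as fh:
            findings = json.load(fh)  # expected: list of {"id": ..., "severity": ...}

        counts = {}
        for finding in findings:
            severity = finding.get("severity", "low")
            counts[severity] = counts.get(severity, 0) + 1

        exceeded = [sev for sev, limit in MAX_ALLOWED.items() if counts.get(sev, 0) > limit]
        if exceeded:
            print(f"Security gate failed ({', '.join(exceeded)} findings over limit): {counts}")
            return 1
        print(f"Security gate passed: {counts}")
        return 0

    if __name__ == "__main__":
        sys.exit(main(*sys.argv[1:]))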
Aligning pen tests with app releases also builds a record of the app’s risk over time. Trends are informative and a metric you can manage against. They help demonstrate the success of process changes (like adding a simple static code analysis phase) or architecture changes (like deploying Content Security Policy to mitigate cross-site scripting). They’re a feedback loop for measuring how well and how early security is being introduced into an app.
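To make the CSP example concrete, the sketch below adds a Content-Security-Policy header to every response of a small Flask app. Flask is used only as a familiar example, and the policy shown is a strict starting point rather than a recommendation for any particular app.

    # Illustrative only: attach a Content-Security-Policy header to every
    # response. The policy below is a strict example starting point; tune
    # the directives to the app's actual script, style, and asset sources.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_csp(response):
        response.headers["Content-Security-Policy"] = (
            "default-src 'self'; "
            "script-src 'self'; "
            "object-src 'none'; "
            "frame-ancestors 'none'"
        )
        return response

    @app.route("/")
    def index():
        return "Hello"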
Treat application pen tests as a regular development exercise that happens to be performed by a security team, rather than as an occasional security event. You’ll create a more mature development process that raises the app’s baseline security and reduces its risk.