Thu 28th April, 2022
Five software delivery security pitfalls
The software delivery process has been transformed over the last decade, with the adoption of well-understood paths to production that incorporate functions such as testing, release management and operational support.
These changes have enabled organisations to speed up time to market and improve service reliability. So why does security still feel so hard for so many product teams?
In our Secure Delivery playbook, we explain the principles and practices you can apply to security within continuous delivery. In this post, we’ll look at the security pitfalls we most often encounter with customers, and how you can avoid them. The top five software delivery security challenges we’ve identified are:
- Security gate bottlenecks
- Disruptive chunks of security work
- Lack of tangible value
- Unsuitable security policies and checklists
- Inefficient duplication of effort
1. Security gate bottlenecks
Product teams need to roll out new digital services to customers as quickly as possible, to gain rapid feedback and iterate on their minimum viable product. However, too often this process is held up for weeks or months because of the need for lengthy risk assessments and penetration tests. These checks are carried out by an overloaded security team that doesn’t have the capacity to meet multiple iterative launch deadlines.
You can reduce this bottleneck by embedding security assurance ownership into product teams. If product teams understand what assurance is required from a security assessment, then they can proactively provide it through code scanning, threat modelling and just-in-time security controls. This means security teams can change their role from fully assessing each release to simply checking the provided evidence.
The success statement you’re looking for is:
“We no longer wait for security teams to sign off that our release is secure. Instead, we provide security scan results and documented security controls in an agreed format, which they quickly verify and use for assurance.”
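As a rough illustration of that “agreed format” idea, here is a minimal Python sketch that collates scanner output into a small evidence file, with pass/fail thresholds agreed with the security team. The report format, file names and thresholds are hypothetical; substitute whatever your scanners and security team actually use.

```python
"""Collect scan output into a release evidence bundle (illustrative sketch only).

Assumptions: scanner-report.json and the evidence schema below are hypothetical
stand-ins for whatever your scanners and security team have agreed on.
"""
import json
import os
from datetime import datetime, timezone

# Thresholds agreed with the security team (hypothetical values)
MAX_ALLOWED = {"CRITICAL": 0, "HIGH": 0}

def build_evidence(report_path: str, service: str, commit: str) -> dict:
    with open(report_path) as f:
        findings = json.load(f)  # expected: a list of {"id", "severity", ...}

    # Count findings by severity
    counts = {}
    for finding in findings:
        sev = finding.get("severity", "UNKNOWN").upper()
        counts[sev] = counts.get(sev, 0) + 1

    passed = all(counts.get(sev, 0) <= limit for sev, limit in MAX_ALLOWED.items())
    return {
        "service": service,
        "commit": commit,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "findings_by_severity": counts,
        "thresholds": MAX_ALLOWED,
        "passed": passed,
    }

if __name__ == "__main__":
    evidence = build_evidence(
        "scanner-report.json",
        service=os.environ.get("SERVICE_NAME", "unknown"),
        commit=os.environ.get("GIT_COMMIT", "unknown"),
    )
    with open("security-evidence.json", "w") as f:
        json.dump(evidence, f, indent=2)
    print("evidence written:", "PASS" if evidence["passed"] else "FAIL")
```

Run as the last step of a pipeline, this gives the security team a consistent artefact to verify instead of a bespoke assessment per release.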
2. Disruptive chunks of security work
It can be difficult to plan for the outcome of large, infrequent security reviews or audits. Teams are often required to take on large chunks of remedial security or compliance work, which can lead to combative negotiations, disruption and widespread risk acceptance.
An effective way to reduce this disruption is by adopting security standards and controls earlier, in consultation with security teams. This means defining a clear set of standardised security requirements that are based on data sensitivity and system architecture. Additionally, shifting security requirements left towards the beginning of the delivery lifecycle means that product teams can design in controls from the start.
This looks like:
“We knew from product inception that our payment service would require service-to-service authentication and the encryption of card details. We planned this from the beginning and used agreed mechanisms that led to a very smooth delivery.”
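One way to make “standardised security requirements based on data sensitivity and system architecture” concrete is to express them as data that teams can query at inception. The sketch below is illustrative only; the classifications and control names are placeholders for whatever your organisation has actually agreed.

```python
# Illustrative sketch of standardised security requirements expressed as data,
# so product teams can look up controls at inception. The classifications and
# control names below are made up for the example.

BASELINE_CONTROLS = ["centralised logging", "dependency scanning in CI"]

CONTROLS_BY_DATA_CLASS = {
    "public": [],
    "personal_data": ["encryption at rest", "access audit trail"],
    "cardholder_data": ["encryption at rest", "field-level encryption of card details",
                        "access audit trail"],
}

CONTROLS_BY_ARCHITECTURE = {
    "public_facing": ["edge filtering", "rate limiting"],
    "service_to_service": ["mutual TLS or signed service tokens"],
}

def required_controls(data_class: str, architecture_traits: list) -> list:
    """Return the agreed controls for a service, based on what it stores and how it is built."""
    controls = list(BASELINE_CONTROLS)
    controls += CONTROLS_BY_DATA_CLASS.get(data_class, [])
    for trait in architecture_traits:
        controls += CONTROLS_BY_ARCHITECTURE.get(trait, [])
    return sorted(set(controls))

# A payment service, as in the example above:
print(required_controls("cardholder_data", ["public_facing", "service_to_service"]))
```

Keeping the requirements in a versioned, machine-readable form also makes them easy for product and security teams to review and extend together.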
3. Lack of tangible value
Too often, security activities such as code scanning, risk assessment and remedial work are carried out with little discernible value, apart from being able to say, “We did what we needed to do to pass security sign-off”. This makes it difficult to justify security activities against product needs, and to measure their value.
This can be improved by measuring and owning security performance effectively. If you understand how security activities can be measured and used to provide assurance evidence, you can show that vulnerabilities are fixed within agreed timelines, or that important security controls are widely adopted. It also means you can help teams take ownership of security improvements and prioritise the work.
This looks like:
“Using our security dashboards, as a delivery owner I can demonstrate that 100% of my critical systems have granular database permissions and are regularly scanned for vulnerabilities against the OWASP Top 10, and that 95% of any vulnerabilities found are fixed within 14 days.”
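Behind a dashboard statement like that sits a simple metric. As a rough sketch, here is how “percentage of vulnerabilities fixed within 14 days” might be calculated; the record format is hypothetical, and in practice the data would come from your scanner or ticketing system.

```python
# Minimal sketch of an SLA-style security metric: the share of fixed
# vulnerabilities closed within an agreed window (14 days here).
# The records below are illustrative placeholders.
from datetime import date

FIX_WINDOW_DAYS = 14

vulnerabilities = [
    {"id": "VULN-1", "opened": date(2022, 3, 1), "fixed": date(2022, 3, 9)},
    {"id": "VULN-2", "opened": date(2022, 3, 5), "fixed": date(2022, 3, 30)},
    {"id": "VULN-3", "opened": date(2022, 4, 1), "fixed": date(2022, 4, 6)},
]

def fixed_within_window(vulns, window_days=FIX_WINDOW_DAYS):
    """Percentage of fixed vulnerabilities that were closed within the agreed window."""
    fixed = [v for v in vulns if v.get("fixed")]
    if not fixed:
        return 100.0
    on_time = sum(1 for v in fixed if (v["fixed"] - v["opened"]).days <= window_days)
    return 100.0 * on_time / len(fixed)

print(f"{fixed_within_window(vulnerabilities):.0f}% of vulnerabilities fixed within {FIX_WINDOW_DAYS} days")
```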
4. Unsuitable security policies and checklists
It’s important to define a set of technical security requirements and standards if you want to scale out security assurance effectively. However, in many cases the result is hundreds of questions in a spreadsheet, based on old and high-level industry standards. Such checklists are created in isolation by security teams, and rarely updated. This leads to security activities that are unsuitable for fast-changing tech stacks and modern design paradigms, such as microservices, serverless and composable front-ends.
You can improve on this by creating useful, relevant and up-to-date security standards. Ensure they are co-developed and co-owned by product and security teams, and regularly review and extend them to keep pace with new technologies and innovative system architectures.
This looks like:
“Our security standards are seen as a valuable design toolkit for engineers and architects: they provide an early indication of security requirements and act as a blueprint when implementing security controls.”
5. Inefficient duplication of effort
Security scans and penetration tests across multiple digital services often find the same vulnerabilities over and over again, with multiple product teams carrying out the same remedial work or accepting the same business risks. The result is duplicated, expensive work and increased security risk from cross-cutting control deficiencies that are too big for any one team to fix.
You can remove the duplication with:
- Focused penetration testing. You should review penetration test findings globally, and move towards in-depth, regular testing and scanning on cross-cutting areas such as web front-ends or shared infrastructure, while reducing testing on individual microservices.
- In-depth remediation efforts. Tackle shared vulnerabilities by carefully standardising shared components such as base VM or container images, shared framework libraries or reverse proxies, and patching them regularly so teams get fixes “for free” (see the sketch at the end of this section).
This looks like:
“We no longer pen test each new customer-facing website component before release. Instead, we test the whole site in depth every three months, and since we standardised our web server configuration we no longer receive hundreds of duplicated findings.”
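To make the shared-component standardisation above a little more concrete, here is a minimal sketch of a check that every service builds on a centrally maintained base image, so a single patch to that image fixes every team at once. The image names and repository layout are assumptions for illustration.

```python
# Sketch of one way to keep shared components standardised: flag any Dockerfile
# that does not build on an approved, centrally patched base image.
# The registry and image names below are hypothetical.
from pathlib import Path

APPROVED_BASES = {
    "registry.example.com/platform/base-python",
    "registry.example.com/platform/base-nginx",
}

def non_standard_bases(repo_root):
    """Return (dockerfile, base image) pairs that do not use an approved base."""
    offenders = []
    for dockerfile in Path(repo_root).rglob("Dockerfile"):
        for line in dockerfile.read_text().splitlines():
            if line.strip().upper().startswith("FROM "):
                image = line.split()[1].split(":")[0]  # drop the tag
                if image not in APPROVED_BASES:
                    offenders.append((str(dockerfile), image))
    return offenders

if __name__ == "__main__":
    for path, image in non_standard_bases("."):
        print(f"{path}: uses non-standard base image {image}")
```

Run in CI across team repositories, a check like this keeps the “fixes for free” promise honest without relying on manual reviews.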
In upcoming blog posts, I’ll go into more detail on each of these pitfalls with some useful concrete techniques to avoid them. So, follow Equal Experts on Twitter and LinkedIn for updates!