There are many significant areas to consider when implementing DevSecOps. Here are a few of the most effective mindsets and principles I’ve found.
- DevOps is inevitable: Embrace it
The first stage of DevOps grief is acceptance. Regardless of how you feel about agile and DevOps, they’re here to stay. There is too much business value in being able to iterate quickly and ship new features to users faster.
For security teams to be part of important engineering conversations that influence architecture decisions and big-picture strategic direction, security needs to speak the same language, understand the development process, and figure out how to add value without slowing development down. If you try to slow the adoption of CI/CD tools or containers, development teams will find ways to work around you and will not include you in future conversations.
In most organizations, security cannot (and should not) block an engineering decision or a release unless there’s a known critical vulnerability that could be devastating to the business. “But I am not finished reviewing it” or “The security scanners have not completed yet” are generally not adequate excuses.
- Build guardrails, don’t be gatekeepers
In most companies, the security team can no longer be gatekeepers unless there is truly a critical risk to the business. Those emergency brake levers must be pulled infrequently, or else trust in the security team will be diminished, making the team less effective in the future when interacting with development teams and the broader organization.
Instead, the security team should focus on building guardrails: software libraries, tools, and processes that provide safe-by-default primitives developers can use to do their jobs efficiently and securely.
In an ideal world, developers can concentrate on building new products and features that deliver value to the organization while security happens around them, seamlessly integrated into the libraries they leverage and the systems they use. Developers would not have to think about security; it would just happen.
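As a concrete illustration, a guardrail can be as small as a wrapper that makes the safe option the default. The sketch below is hypothetical (the name `run_command` and its defaults are illustrative, not from any particular library); it wraps Python's `subprocess` module so that callers get no shell and an enforced timeout unless they consciously opt out:

```python
import subprocess
import sys

def run_command(argv, timeout=30):
    """Hypothetical guardrail: run an external command safely by default.

    Refuses shell strings (a common injection vector), never spawns a
    shell, and enforces a timeout so hung processes fail loudly.
    """
    if isinstance(argv, str):
        raise TypeError("pass a list of arguments, not a shell string")
    return subprocess.run(
        argv,
        shell=False,          # no shell, so no shell injection
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,           # a non-zero exit raises instead of being ignored
    )

# Developers call the wrapper instead of subprocess directly:
result = run_command([sys.executable, "-c", "print('ok')"])
```

A library like this keeps developers productive while giving the security team a single place where the dangerous defaults are decided.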
- Automate everything
This may sound obvious, but it’s important to review all of the standard processes and tasks of your security team and determine:
- What are the tasks you regularly perform?
- What is the value you are getting out of these tasks?
- Which of these can you partly or wholly automate?
- Are there any of these you don’t have to do, whose benefit can be realized in some other, more scalable way?
In any company, the number of developers grows much more rapidly than the security team does. Thus, it’s important to organize your efforts around activities and tooling that scale the security you can provide better than linearly with security-engineer time.
Security automation can take many forms, but some examples include:
- Running a lightweight code scan on every new pull request (PR) and commenting on any problems found directly on the PR, much as feedback is delivered in peer code review.
- Automatically running DAST scans against every new application version deployed to QA and against every new container image.
- When problems are identified, automatically creating Jira tickets that describe the problem and the recommended fix, then assigning each ticket to the development team that owns the affected code.
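The last step above can be sketched against Jira's REST API (`POST /rest/api/2/issue`). The instance URL, project key, and finding fields below are illustrative assumptions, not values from the original text:

```python
import json
from urllib import request

JIRA_URL = "https://jira.example.com"  # assumed Jira instance URL

def build_issue_payload(project_key, finding):
    """Map a scanner finding onto Jira's create-issue request body."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[security] {finding['title']}",
            "description": (
                f"{finding['description']}\n\n"
                f"Recommended fix: {finding['remediation']}"
            ),
        }
    }

def file_ticket(payload, token):
    """Send the payload to Jira (network call; not exercised here)."""
    req = request.Request(
        f"{JIRA_URL}/rest/api/2/issue",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    return request.urlopen(req)

payload = build_issue_payload("SEC", {
    "title": "SQL injection in /search",
    "description": "User input reaches a raw SQL query.",
    "remediation": "Use parameterized queries.",
})
```

Routing the ticket to the owning team would typically use a code-ownership mapping (e.g., a CODEOWNERS file) to choose the assignee.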
- Prefer high-signal, low-noise tools and alerts
Whether you are building or tuning a security scanning tool or developing monitoring and alerting infrastructure, you generally face two options that are fundamentally at odds with each other:
- Attempt to find as many issues as possible, at the risk of potentially returning many items that aren’t actual issues (false positives).
- Decide to be a bit more selective with the potential issues you report. You may miss some real issues (false negatives), but when you do claim that there’s an issue, you’ll generally be right.
In an ideal world, you could build a tool with both a low false-positive rate and a low false-negative rate. However, there are fundamental, provable computer-science reasons why this is impossible in general (for both static and dynamic analysis). In practice, you need to choose between these two options based on your situation and priorities.
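The trade-off is easy to see with a toy example. Suppose four of a codebase's findings are real bugs; a noisy scanner configuration flags everything it sees, while a strict one flags only what it is sure of (all data here is made up for illustration):

```python
def precision_recall(flagged, true_bugs):
    """Precision: of what we flagged, how much was real?
    Recall: of the real bugs, how many did we flag?"""
    tp = len(flagged & true_bugs)  # true positives
    precision = tp / len(flagged) if flagged else 1.0
    recall = tp / len(true_bugs) if true_bugs else 1.0
    return precision, recall

true_bugs = {"a", "b", "c", "d"}
# Noisy config flags everything, real or not; strict config flags only sure hits.
noisy = precision_recall({"a", "b", "c", "d", "e", "f", "g", "h"}, true_bugs)
strict = precision_recall({"a", "b"}, true_bugs)
# noisy  -> (0.5, 1.0): finds every bug, but half its reports are noise
# strict -> (1.0, 0.5): every report is real, but half the bugs are missed
```

Which point on this curve is right depends on who triages the output: a dedicated security team can absorb some noise, while results surfaced directly to developers usually need to be near-certain.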
Developing custom tools
Some large organizations (e.g., Google, Facebook, and Instagram) have spent years building custom static analysis tools that are specifically tuned to their codebases. These tools catch bugs unique to the frameworks those companies have developed and the business-logic issues they have encountered in the past.
But once these tools and their related security checks are deployed, the work is not complete. Multiple rounds of feedback-based iteration must follow, in which the analysis engine is made more precise or given access to additional information, and the checks are then tuned to reduce noise.
If your organization does not have a dedicated team of static analysis experts, or one or more security engineers who can spend months building custom tools, it’s likely best to focus on identifying the low-hanging fruit: bug classes that you can reliably find in a high-signal way with minimal analysis complexity.
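As one example of such low-hanging fruit, spotting `shell=True` in subprocess calls needs only Python's standard `ast` module and rarely false-positives. The checker below is a minimal sketch under that assumption, not a production tool:

```python
import ast

def find_shell_true(source):
    """Report line numbers of calls that pass a literal shell=True.

    Deliberately simple and high-signal: no data-flow analysis, just a
    walk over the syntax tree looking for the keyword argument itself.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "shell"
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    findings.append(node.lineno)
    return findings

risky = "import subprocess\nsubprocess.run(user_cmd, shell=True)\n"
safe = "import subprocess\nsubprocess.run(['ls', '-l'])\n"
```

A check like this can run on every PR and comment with the offending line numbers, in the spirit of the automation examples above.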