Establishing Effective Static Analysis Capabilities

Planning to establish or reboot a static analysis capability this year? Use this simple framework to plan a new implementation or reflect on an existing program to improve maturity.

Over the years, we’ve learned that there are four primary dimensions of any static analysis capability:

  • Solution Architecture
  • Policy
  • Application On-Boarding
  • Vulnerability Management

It doesn’t matter whether you’re considering building an in-house solution [1] or leveraging an outsourced SaaS static analysis provider [2]; these four dimensions apply and are crucial to your capability’s overall success. In fact, the first dimension - solution architecture - is about evaluating whether an in-house or outsourced solution is the right answer for your program. It also doesn’t matter whether you’re a Fortune 10 company with 10K+ developers or a small start-up with a handful of developers; these dimensions still apply irrespective of scale.

If you’re working to get your secure code review program off the ground, think critically about each dimension to define a vision for your program in line with your overall objectives. If you have an existing program, walk through each dimension to reflect on its current state, evaluate existing pain points, and identify opportunities to improve maturity.

Solution Architecture

To a certain extent, solution architecture enables the other three dimensions. It’s about selecting a static analysis tool, choosing an overall deployment model, and planning the operational aspects of running the capability end to end. It’s also about weighing the pros/cons of in-house tool deployments vs. outsourced SaaS providers. From a management perspective, you’ll want to consider staffing needs - whether you’ll need to hire new staff, bring on contractors, or engage a managed service provider to deliver the capability. All of these decisions have a direct impact on program budget.

Policy

Policy is about documenting which types of applications get scanned (or reviewed), how frequently, and using which techniques (automated vs. risk-based manual review). You’ll want to implement a risk-based approach that addresses your organization’s application portfolio and considers both in-house developed code and code developed by vendors or third parties. From a governance perspective, you’ll want to establish an appropriate “gate” in your SDLC to dictate point-in-time security assessments. Static analysis is a rules-based technology, so determining which types of security rules are enabled (and why) is a policy concern. In a broader sense, findings reported by any code review capability should be mapped to a vulnerability coverage policy no matter how those findings are discovered (tool vs. human). Policy also comes into play when determining how to “score” findings (e.g., High, Medium, Low) and when mandating firm remediation timelines based on a finding’s severity.
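
As an illustration, the severity-to-timeline mapping can be captured as policy-as-code. Here’s a minimal sketch, assuming hypothetical severity levels and day counts (the names and numbers below are placeholders, not recommendations):

```python
from datetime import date, timedelta

# Hypothetical policy: remediation deadline by finding severity.
# Severity names and day counts are illustrative assumptions.
REMEDIATION_SLA_DAYS = {
    "High": 30,
    "Medium": 90,
    "Low": 180,
}

def remediation_deadline(severity: str, reported_on: date) -> date:
    """Return the date by which a finding of the given severity must be fixed."""
    if severity not in REMEDIATION_SLA_DAYS:
        raise ValueError(f"Unknown severity: {severity!r}")
    return reported_on + timedelta(days=REMEDIATION_SLA_DAYS[severity])

# Example: a High finding reported today must be remediated within 30 days.
print(remediation_deadline("High", date.today()))
```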

Application On-Boarding

Given the solution architecture, application on-boarding is about defining the end-to-end workflows for bringing applications into the capability and answering “who’s responsible for what?” You’ll want to evaluate the time required to on-board each application to ensure scalability goals will be met.

When thinking about the application on-boarding dimension, consider the who, what, when, where, and how aspects of the following activities (a sketch of capturing this information follows the list):

  • End to end application scanning using automation
  • Risk-based code review process to blend tool-based and human review
  • Results analysis and triage
  • Rules management
  • Vulnerability reporting and remediation
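
One lightweight way to pin down “who’s responsible for what” is to capture an on-boarding record per application. The sketch below assumes a hypothetical schema - the field names, risk tiers, and roles are illustrative, not prescribed:

```python
from dataclasses import dataclass

@dataclass
class OnboardingRecord:
    """Hypothetical per-application on-boarding record.

    Field names and allowed values are illustrative assumptions,
    not a prescribed schema.
    """
    app_name: str
    repo_url: str
    risk_tier: str                 # e.g., "critical", "high", "standard"
    scan_frequency: str            # e.g., "per-commit", "weekly", "per-release"
    requires_manual_review: bool   # blend of tool-based and human review
    security_champion: str         # who triages scan results
    development_manager: str       # who owns remediation

records = [
    OnboardingRecord(
        app_name="payments-api",
        repo_url="https://git.example.com/payments-api",
        risk_tier="critical",
        scan_frequency="per-commit",
        requires_manual_review=True,
        security_champion="a.analyst",
        development_manager="d.manager",
    ),
]

# Simple consistency check: critical apps get both automation and human review.
for r in records:
    if r.risk_tier == "critical" and not r.requires_manual_review:
        raise ValueError(f"{r.app_name}: critical apps require manual review")
```
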
This dimension demands that you consider the roles and responsibilities that must be undertaken by various stakeholders within the enterprise (e.g., developer, security champion, risk manager, development manager, etc.). Also note that the specific activities that get executed when applications are on-boarded to the capability may change depending on the program’s overall maturity - for instance, during a pilot phase, there may be a greater emphasis on static analysis rule tuning to manage false positives (to establish a baseline rulepack) than during “business as usual” operations.

Vulnerability Management

Often the most overlooked and challenging dimension is vulnerability management. Why is this dimension important? Because it’s about ensuring governance, enforcing accountability, and establishing ways to measure the program so you can report on it and evolve it.

When evaluating this dimension, find answers to the following questions (a minimal gating sketch follows the list):

  • Which systems will be used to manage vulnerabilities?
  • Who will have access to vulnerability data and at what level of detail?
  • How will findings or applications be “scored” and how will scores affect code promotion (i.e., gating)?
  • How will false positives be managed?
  • What will be reported and to whom?
  • How does the sign-off or exception process work?
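
To make the gating question concrete, here’s a minimal sketch of a promotion gate that fails a pipeline stage when open findings exceed a severity threshold. The finding structure, statuses, and threshold are all assumptions for illustration:

```python
import sys

# Hypothetical open findings, e.g., exported from the vulnerability
# management system. Fields and statuses are illustrative assumptions.
findings = [
    {"id": "F-101", "severity": "High", "status": "open"},
    {"id": "F-102", "severity": "Low", "status": "open"},
    {"id": "F-103", "severity": "High", "status": "false_positive"},
]

# Hypothetical gate: block promotion on any open High finding;
# findings triaged as false positives don't count against the gate.
blocking = [
    f for f in findings
    if f["status"] == "open" and f["severity"] == "High"
]

if blocking:
    ids = ", ".join(f["id"] for f in blocking)
    print(f"Promotion blocked by {len(blocking)} open High finding(s): {ids}")
    sys.exit(1)  # non-zero exit fails the CI/CD pipeline stage

print("Gate passed: no blocking findings.")
```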

That should give you a feel for the highlights of each dimension. I’ll end this post with a few more thoughts on establishing effective static analysis capabilities.

Pre-game

  • Position your code review capability as a distinct practice within your organization’s broader software security initiative (SSI), not a one-off tool deployment exercise - be sure to plan and budget carefully
  • Understand that establishing an effective code review capability is a cross-cutting concern impacting people, process, and technology - often the people and process parts are more challenging than the technology piece
  • Recognize that success will be measured primarily by how effectively the capability influences and enables behavior change, not by how big a pile of security bugs it builds

Tools are useful but far from perfect

Static application security testing (SAST) tools need software security experts to run them effectively. If you want an in-house solution, you’ll need staff with specialized skills to manage - among other things - false positive rates. From a coverage perspective, tools suffer from the same drawback as any rules-based technology: you’ll only detect what the tool knows how to find. You’ll need to determine the right blend of tool-based and manual review for your portfolio to meet your specific assurance needs, and avoid falling prey to a false sense of security when the tool gives you a thumbs up. Sure, you can opt for a SaaS scanning model, but that doesn’t solve the false positive, coverage, or behavior change problems you’re really going after.
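
To make “managing false positive rates” concrete, the sketch below tallies hypothetical triage verdicts and computes a per-rule false positive rate - one signal for deciding which rules to tune or disable. The rule IDs, verdicts, and threshold are assumptions:

```python
from collections import defaultdict

# Hypothetical triage log: (rule_id, verdict) pairs from human review.
# Rule IDs and verdicts are illustrative assumptions.
triage_log = [
    ("sql_injection", "true_positive"),
    ("sql_injection", "true_positive"),
    ("xss_reflected", "false_positive"),
    ("xss_reflected", "false_positive"),
    ("xss_reflected", "true_positive"),
]

counts = defaultdict(lambda: {"tp": 0, "fp": 0})
for rule_id, verdict in triage_log:
    counts[rule_id]["tp" if verdict == "true_positive" else "fp"] += 1

# Flag rules whose false positive rate suggests tuning is needed.
FP_RATE_THRESHOLD = 0.5  # illustrative threshold
for rule_id, c in counts.items():
    fp_rate = c["fp"] / (c["tp"] + c["fp"])
    flag = "  <-- candidate for tuning" if fp_rate > FP_RATE_THRESHOLD else ""
    print(f"{rule_id}: FP rate {fp_rate:.0%}{flag}")
```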

Building a program

Expect to be in it for the long haul. Remember that we’re really talking about behavior change here, not technology execution. For most enterprises with a few hundred applications, it’s a two-year roadmap. You need a budget. You need executive-level support (if you don’t have it, plan to win it). You need to choose an implementation model. You’ll be creating new organizational roles, workflows, and policies - changing culture is the tough part. You need a roll-out plan to obtain organizational buy-in and market the program internally. And you need to measure and report meaningful metrics to evolve the program. If you’re a smaller company, you’ll have fewer hurdles and a shorter track, but the same fundamentals apply.
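
On the “measure and report meaningful metrics” point, one commonly tracked metric is mean time to remediate (MTTR) by severity. Here’s a minimal sketch that computes it from hypothetical closed findings; the data shape and dates are assumptions:

```python
from datetime import date
from statistics import mean

# Hypothetical closed findings exported from the vulnerability tracker.
closed_findings = [
    {"severity": "High", "reported": date(2015, 1, 5),  "fixed": date(2015, 1, 25)},
    {"severity": "High", "reported": date(2015, 2, 1),  "fixed": date(2015, 3, 10)},
    {"severity": "Low",  "reported": date(2015, 1, 10), "fixed": date(2015, 5, 1)},
]

def mttr_days(findings, severity):
    """Mean time to remediate, in days, for a given severity."""
    durations = [
        (f["fixed"] - f["reported"]).days
        for f in findings
        if f["severity"] == severity
    ]
    return mean(durations) if durations else None

for sev in ("High", "Low"):
    print(f"MTTR ({sev}): {mttr_days(closed_findings, sev)} days")
```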

As with establishing any new program capability, remember:

  • Take care to document your objectives up front - know what you want to achieve and ensure that the solution architecture you ultimately implement will meet your goals or overcome existing challenges
  • You need a budget and resources (e.g., staff, hardware, software)
  • Don’t underestimate the short- and long-term operational costs - estimate them based on your implementation model and overall program objectives

[1]: Such as an internal tool deployment of HP Fortify or IBM AppScan Source

[2]: Such as Veracode or HP Fortify On Demand
