Scanning Applications Faster - A Chicken vs. Egg Problem

Sunday, October 09, 2011

Rafal Los


I sat in on a fascinating OWASP AppSec USA panel recently.  The panelists read like a who's who of Application Security vendor luminaries... and I had high hopes for the discussion. 

The panel was called "Speeding Up Security Testing Panel" - and the idea was interesting... what can vendors do to perform security analysis on more code, more applications "faster"?

While I have a tremendous amount of respect for everyone on that panel, I'm disappointed that no one pointed out the obvious folly of the topic of 'testing faster'.  

While I believe there is merit in making security testing automation 'faster' to achieve results - I absolutely feel that need is far overshadowed by the need to fix faster/smarter.

What we have here is a classic chicken-and-egg problem.  On the one hand, we (in Application Security) want faster scanning technology so that the average portfolio of 1,200 applications has even a snowball's chance of being tested within the scope of a year... but what of it?  Even in the best-case scenario, what would that matter?

Let me give you a hypothetical scenario.  If tomorrow HP released a static or dynamic analysis tool that guaranteed 100% coverage and performed application security scans in 30 minutes, what would that achieve? 

Sure, you'd be able to scan all 1,000 of your theoretical applications in about 71 working days, or just over 3.5 months (conservatively speaking and factoring in time for human analysis)... so what? 
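For the curious, here's a minimal sketch of the math behind that figure.  The ~14% human-analysis overhead is an assumption, chosen so the numbers line up; everything else comes straight from the scenario above:

```python
# Back-of-the-envelope: scanning 1,000 apps at 30 minutes each.
APPS = 1_000
SCAN_MINUTES_PER_APP = 30
ANALYSIS_OVERHEAD = 0.14        # assumed extra human-analysis time (an estimate)
HOURS_PER_WORKING_DAY = 8
WORKING_DAYS_PER_MONTH = 20

raw_scan_hours = APPS * SCAN_MINUTES_PER_APP / 60        # 500 scan-hours
total_hours = raw_scan_hours * (1 + ANALYSIS_OVERHEAD)   # ~570 hours with analysis
working_days = total_hours / HOURS_PER_WORKING_DAY       # ~71 working days

print(f"~{working_days:.0f} working days, "
      f"~{working_days / WORKING_DAYS_PER_MONTH:.1f} months")
```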

You would likely have a pile of security defects that would overwhelm your application development team for years.  The amount of technical debt we've amassed in legacy applications cannot be paid back in one fell swoop... like the deficit we've racked up as a nation, we will be paying the technical debt off in installments for what I fear is decades to come. 

Even at the average of 30 severe security defects per 100,000 lines of code, a typical code base of well over a million lines yields roughly 300 severe defects, which is overwhelming.  My point is that even if we had the technology to scan applications at blistering speed, it wouldn't matter, because all we would be doing is collecting a massive backlog of required fixes.
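To put rough numbers on it, here's the arithmetic using the defect density above and the same 1,000-app hypothetical portfolio (both inputs are illustrative, not measurements):

```python
# Rough defect volume at 30 severe defects per 100,000 lines of code.
DEFECTS_PER_100_KLOC = 30
APP_SIZE_LOC = 1_000_000        # "well over a million lines of code"
APPS = 1_000                    # the same hypothetical portfolio as above

defects_per_app = DEFECTS_PER_100_KLOC * APP_SIZE_LOC / 100_000   # 300 per app
portfolio_total = defects_per_app * APPS                          # 300,000 overall

print(f"{defects_per_app:.0f} severe defects per app, "
      f"{portfolio_total:,.0f} across the portfolio")
```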

Now, don't get me wrong: I think we do need to keep pushing the boundaries of automation to tackle the problem we've created for ourselves, but I think there are now two distinct issues. 

The first issue is the billions of lines of legacy code: the code we've already written, or are writing right now, under the existing school of application development.  By the 'existing school' I mean the mentality of trying to 'test ourselves secure'... the notion that we can build an application and only later test it for security defects that may or may not be there. 

This is where we need all those super-fast, super-accurate scanners to tell us just how bad the hole we've managed to dig ourselves into has become.  Sadly, I suspect I already know the answer. 

The second issue is that we fundamentally need to re-think software development.  Security begins and ends with requirements, period, and scanning faster doesn't get us to that end.  If developers don't have clear goals and security requirements to code to, how can we reasonably expect them to produce resilient code? 

This is akin to a structural engineer being graded on whether his or her structure will withstand a meteor strike when the specifications initially did not call for it.  This clearly won't end well.

I'll address these two issues in more depth in a future series of posts; for now I simply want to take a moment to point out that I absolutely feel we're trying to solve the wrong problem.  Maybe that's because it's easier to build a faster scanner than it is to figure out a faster way to fix problems? 

I don't know... but I know I'm not alone.  I know Jeff Williams agrees because we've talked through this... and I'm sure that many of you reading this are nodding along too. 

Brian Chess gave a wonderful talk on gray-box testing... and he made a point that I've been making for a while as well: the most important evolutions in application security automation won't (only) find security defects faster, they'll help you fix what they find faster. 

Fix time is the more important velocity metric, not scan time.  Step one is for us as application security professionals to clearly understand that there are two parallel efforts, and it appears as though the wrong one is getting the focus right now.
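One way to see why: model the open-defect backlog as a simple flow.  If findings arrive faster than fixes ship, the pile grows no matter how fast the scanner is.  A toy model, with rates invented purely for illustration:

```python
# Toy backlog model: defects flow in from scanning, flow out via remediation.
# Both rates are made up for illustration; they are not real-world figures.
FIND_RATE_PER_WEEK = 500    # what fast scanning surfaces
FIX_RATE_PER_WEEK = 50      # what the development team actually remediates

backlog = 0
for week in range(52):      # one year of steady scanning and fixing
    backlog += FIND_RATE_PER_WEEK - FIX_RATE_PER_WEEK

print(f"Open defects after one year: {backlog:,}")
# A faster scanner only raises FIND_RATE; only a better FIX_RATE shrinks the pile.
```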

So please, the next time you're evaluating application security testing technology - ask yourself this... "how does this technology help my developers fix these issues faster?"  If you don't have a good answer, maybe that's a problem? 

We need to shift the security culture from "find bugs" to "fix bugs"... or else we're in deep, deep trouble.  Don't get me wrong, once the software industry has figured out how to write secure software by design, then we can worry about demanding bigger, better, and faster scanning automation. 

Until then, let's re-focus our efforts on finding a way to stop producing thousands of critical defects per application... and on paying down the technical debt we've amassed, so that at least we're not falling further behind.

Cross-posted from Following the White Rabbit
