Software Security Assurance: Figuring Out the Developers

Wednesday, July 18, 2012

Rafal Los


Lately, this blog has been all abuzz with DevOps, cloud topics and enterprise resiliency. Today I wanted to take us back to where this blog started - App Security.  

Sometimes you hear someone say something so controversial it sounds ridiculous, and your first reaction is ridicule and dismissal... but what happens when everything else isn't working?  

Software security is one of those topics that has haunted me, and likely many of you, for a very long time.  It's not that we're not producing better-quality (and more secure) code... it's just that we're not doing it enough.

A long while ago, as some of you may even remember, at OWASP AppSec Irvine one of the keynote speakers, who happens to be an industry analyst, said something controversial that made some people mad, while others simply dismissed it.  She talked about a move information security would have to make in order to overcome the continually poor security quality we're getting from our SDLCs year after year.

Personally, my belief wasn't quite in line with the analyst's, but it's tough to dismiss this seemingly crazy notion even today.  Can we really, finally, get better software security by getting completely out of the developers' way and simply creating mitigations on the tail end of the SDLC?  I still don't think so... but the alternatives, you have to admit, aren't doing a fantastic job.

As someone who was a sales engineer for a software security products group for almost four years, I can tell you I've seen every environment known to man.  From organizations that genuinely don't care about the security of their applications, to those that try to follow "best practice", to those that never stop spending money and trying to improve - they all have one thing in common: every one of them has experienced a security incident of some level of calamity.

How can it be that organizations that are spending literally millions of dollars, empowering developers, pushing tools and automation, and re-defining processes to accommodate a more secure release cycle still manage to get popped?

It all comes down to the developers.  Whether you subscribe to the philosophy that developers should just be left alone while their code is passed through a magic widget that takes in junk and gives back secure code, or whether you think that security begins at home in the developer's IDE and should be pushed there ... both approaches are really, really difficult.

One of the main problems is that organizations are not homogeneous - they don't all follow the same release cycles or best practices, use the same tools, or even report the same metrics.  This is where I believe much of the failure comes from.  Over the last few years I have watched two camps of belief emerge from the enterprise world.  

I'm not talking about vendors here, as we all have our own specific agendas and ideology - but I am referring to the enterprise management who are responsible for code quality and release schedules...

Integrated Development Approach

This approach tries to work 'security' into the SDLC.  It doesn't work in many organizations because of how difficult it is, both technically and politically.  You need to be able to motivate, empower, and then audit, all while having a mandate from "on high" (that is to say, senior management).  Motivation is key, and continues to be one of the big 'tricks' you have to pull off.  

Motivating developers to think about security can be as simple as financial incentives or as complex as negative feedback coupled with HR consequences for failing an audit.  There is no silver bullet, and no magic pixie dust.  It is becoming accepted that bringing a security expert into the architecture, development, and testing groups, independent of the security 'audit' function, is required - but the challenges there are far from simple.

Post-Development Feedback

I'm still encountering many organizations that are trying to test themselves secure.  I don't want you to laugh at this, because it's a valid approach in some places and it works for them... in whatever capacity they measure success.  Waiting for the developers to produce a final product, then testing rigorously and providing feedback both for immediate fixes (show-stoppers) and eventual incorporation into standard practice (systemic issues), seems to work.  

Don't knock it if you've not tried it.  I could make a great argument that this is more effective in a centralized security organization where there aren't a lot of opportunities to directly live inside the development organization (or many development organizations).
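The triage loop in that post-development model can be sketched in a few lines of code. This is a minimal, hypothetical sketch, not any vendor's tool: the rule names, severity scale, and the recurrence threshold are all assumptions you'd replace with your own.

```python
# Hypothetical post-development triage: split scan findings into
# show-stoppers (block the release now) and systemic issues
# (rules that recur often enough to feed back into standard practice).
from collections import Counter

def triage(findings, recurrence_threshold=3):
    """findings: list of dicts like {"rule": "sql-injection", "severity": "high"}.
    Returns (show_stoppers, systemic_rules)."""
    # Show-stoppers get fixed before release.
    show_stoppers = [f for f in findings if f["severity"] in ("critical", "high")]
    # Rules that keep appearing point at a systemic gap in coding practice.
    rule_counts = Counter(f["rule"] for f in findings)
    systemic = sorted(rule for rule, count in rule_counts.items()
                      if count >= recurrence_threshold)
    return show_stoppers, systemic
```

The point of separating the two lists is exactly the feedback split described above: one list drives the immediate fix cycle, the other drives changes to the standards the developers code against.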

I can think of at least one organization off the top of my head where each of these approaches works, just as I can think of at least one where each has failed.  

I guess what this teaches us is that organizations succeed in writing secure code in different ways... and that what works in one may not be valid in another.  What doesn't work is not doing anything.  What doesn't work is hoping that no one takes advantage of your poorly written applications.

Whether you're a DevOps shop, agile, or just a standard old-school waterfall organization - each of you will have your own approach to attaining secure code.  What's critical is that you agree on what an acceptable level of 'security' is, and what you're willing to spend in both time and effort to get there.
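That agreement on an acceptable level of security can be made concrete as a release-gate policy. A minimal sketch, assuming a simple severity scale and per-severity allowances that your organization would have to negotiate for itself:

```python
# Hypothetical release gate: the team agrees up front how many open
# findings of each severity are acceptable; the gate just enforces it.
# The limits below are illustrative assumptions, not a recommendation.
POLICY = {"critical": 0, "high": 0, "medium": 5}

def release_allowed(open_findings, policy=POLICY):
    """open_findings: dict mapping severity -> count of unresolved findings.
    Returns True only if every capped severity is within its agreed limit."""
    return all(open_findings.get(severity, 0) <= limit
               for severity, limit in policy.items())
```

Whether the limits are loose or strict matters less than the fact that they were agreed on in advance, so that "secure enough to ship" is a decision the organization made deliberately rather than one made by the release deadline.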

Cross-posted from Following the White Rabbit
