Exposing Unproven Enterprise Security

Wednesday, April 25, 2012

Rafal Los


There has been a lot of discussion on this topic lately, both on Twitter and in the hallways at the last two conferences I attended.

There was even a great panel at Information Security World 2012, which I wrote about, as did Robert Westervelt of SearchSecurity.com and Bill Brenner of CSO Magazine.

Let's be clear, though: while many of those discussions centered on an over-reliance on automation, I actually like automation and think it may very well be the best thing since sliced bread (I mean, who doesn't like sliced bread?). So I'm approaching this second article on the topic from a slightly different angle.

Something from that InfoSec World panel has stuck with me.  The notion of preparedness that one of the panelists brought up keeps haunting my thoughts, and I feel like I need to re-address it or it will just eat at me.

I recall when I put together my operational budget and plan for a security team I led deep in the bowels of a Fortune 50 enterprise.  Thinking back, everything seemed so perfect and so well thought out.

I had accounted for perimeter, network, and even data-centric security models.  I had budgeted for technology that would enable my scant security team to take on challenges that would otherwise be well beyond their capability, and I was looking to automation for support wherever possible.

I had worked hard to ensure that we had a compliance plan to minimize how many fire-drills we had to go through per year.  I even had an educational and awareness program in the works to keep the masses and my team vigilant against threats.  I was fairly comfortable defending my plan and how it would work once implemented and funded.

Looking back, this is where I can now see things break down.  Even with the greatest technology and automation in place, the best-laid plans and policies, and the most thoughtful training... none of it was ever tested in a real-world scenario to gauge effectiveness.

Yes, I had done several 'penetration tests' of our environment, but as someone recently pointed out, many of those were probably fairly useless given the quality of the testers.  This brings me to a much broader problem that I am starting to believe exists more globally, and that likely spells trouble for anyone who hasn't found religion in validation.

Here's the deal, before you call me a baseless alarmist: unless you've tested your defenses, you can't actually be sure with any degree of certainty that they work.  I don't mean this in a "can we ever be really sure?" philosophical sense - I mean it in a concrete "does this even work?" sense.

As an enterprise, you've easily poured a million dollars into endpoint and perimeter security technologies.  Wouldn't it be great to know how effective they are at stopping something like a 'determined attacker'... or, maybe more realistically, whether they give you any warning that you're being attacked?

How about all those dashboards and single panes of glass that you have sitting around and blinking?  Can you detect a real threat across your multiple operational silos? 

Are you sure?

After having at least a dozen discussions on this topic with some really, really smart people, here are three key defensive measures that you should actively test ...as soon as possible... to make sure they won't fail you when you need them most.

In order of importance:

Incident response policies/procedures - I hope by now you and I agree that one of the most critical things an enterprise security leader can do today is prepare the organization for the inevitable incident.  Whether you become the victim at 2pm on a Tuesday, in the middle of the night while you're on vacation, or during the busiest online day of your year - when lightning strikes, you have to have the buckets ready.  Want more proof of this?

Look in your office for the nearest fire extinguisher.  Check how often it's serviced to validate that it still works and will be ready when there is an actual fire someone needs to put out.  First, you should have a solid policy and set of procedures written down and approved should the unthinkable happen.

When I say that, I mean you actually need to think through nearly everything for your incident response - from a full-frontal DDoS attack, to a third-party breach, to someone stealing your last unencrypted media set from the data vault (ahem)... and then, once you're certain you've accounted for every plausible scenario and written solid response procedures for those situations, you need to run a drill.

The kind of drill where you execute the absolute worst-case scenario to see how well your incident response policy holds up.  Best case, everything goes the way it should on paper - although that's probably not going to happen.  When things fail and break down, you have the opportunity to learn from failure in a simulation, where there are no real dollars or lives at stake, and adjust your policies and procedures.

Adjusting now, when your hair isn't on fire, is a lot easier than when the servers are burning around you... believe me when I say this from experience.
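In the spirit of that fire-extinguisher check, here's a minimal sketch of how you might keep track of which response scenarios have actually been drilled recently.  The scenario names, the 180-day cadence, and the stale_scenarios helper are purely illustrative assumptions on my part, not a prescribed standard.

```python
from datetime import date, timedelta

# Hypothetical drill tracker: flag response scenarios that have never been
# drilled, or haven't been drilled recently. Scenario names and the 180-day
# threshold are assumptions for illustration only.
SCENARIOS = {
    "full-frontal DDoS": date(2011, 10, 3),
    "third-party breach": date(2011, 2, 14),
    "stolen unencrypted backup media": None,  # never drilled
}

MAX_AGE = timedelta(days=180)  # assumed drill cadence


def stale_scenarios(scenarios, today=None):
    """Return scenarios that have never been drilled or are overdue."""
    today = today or date.today()
    overdue = []
    for name, last_drill in scenarios.items():
        if last_drill is None or today - last_drill > MAX_AGE:
            overdue.append(name)
    return overdue


if __name__ == "__main__":
    for name in stale_scenarios(SCENARIOS):
        print(f"Run a worst-case drill for: {name}")
```

The point isn't the tooling - it's that "have we drilled this lately?" becomes a question you answer on a schedule, not when the servers are already burning.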

Cross-silo (dashboard) attack detection - I've recently had some cool (and unfortunate) first-hand experience with an incident that crossed silos and, because of those silos, ultimately wasn't caught until it was entirely too late.  Here's the scenario: say the Security, Applications, and Network Operations teams each have their own 'dashboard' for operations management.

At a particular point in time, each of those three dashboards picked up an 'anomalous' set of circumstances that was never investigated, as I'll explain.  The security dashboard caught some 'suspicious packets' from high-risk IP addresses to internal application servers, flagged as non-critical priority, but because the security team is always so busy, no one investigated (sound familiar?).

On the network dashboard, there were some small spikes in traffic during non-peak hours, but never really anything to write home about, so no one was alerted.  On the applications silo, things got interesting: the application servers were pegged for a bit during an off-peak window, during which database query times spiked to 100x their normal transaction response times, but the problem ultimately resolved itself (aka, went away) after 30 seconds, so no one investigated because the team was busy chasing poorly performing applications.

If someone had tied those three dashboards together, they would have noticed the massive SQL injection that had just happened, as I highlighted in a previous post.  Had someone taken the time to actively test these dashboard defenses, they would have noticed the massive failure and, hopefully, corrected it by linking the dashboards together using existing technologies that were likely already available, without having to buy anything more.
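For the sake of illustration, here's a rough sketch of what "tying those dashboards together" could look like at its simplest: correlating alerts from the three silos by time window.  The event feeds, field names, and five-minute window are assumptions I'm making for the example, not any particular product's API.

```python
from datetime import datetime, timedelta

# Hypothetical cross-silo correlation: group alerts from three dashboards
# that land within the same time window. Feeds and fields are illustrative.
WINDOW = timedelta(minutes=5)

security_alerts = [
    {"time": datetime(2012, 4, 20, 2, 13), "detail": "suspicious packets from high-risk IPs"},
]
network_alerts = [
    {"time": datetime(2012, 4, 20, 2, 14), "detail": "off-peak traffic spike"},
]
app_alerts = [
    {"time": datetime(2012, 4, 20, 2, 15), "detail": "DB query times 100x normal"},
]


def correlated(*feeds, window=WINDOW):
    """Yield groups of events, one per silo, that fall within one window."""
    for sec_event in feeds[0]:
        group = [sec_event]
        for feed in feeds[1:]:
            match = next(
                (e for e in feed if abs(e["time"] - sec_event["time"]) <= window),
                None,
            )
            if match:
                group.append(match)
        if len(group) == len(feeds):
            yield group


for group in correlated(security_alerts, network_alerts, app_alerts):
    print("Cross-silo incident candidate:", [e["detail"] for e in group])
```

Individually, each of those alerts looks like noise; correlated, they look like the SQL injection they were.  That's the failure an active test of the dashboards would have exposed.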

Automated active response - Do you have an IPS in monitor-only mode?  What about a web application firewall in monitor-only mode?  The idea of active response scares lots of security and IT leaders because, quite frankly, it's really easy to break application or business functionality and get severely reprimanded.  Here's where testing and validation can be your best friend.

The main question you should be able to answer is this: when you're not in the office or staring at the consoles, how well do your automated defenses keep the bad things out and let the good things through?  The main reason you care is not because you're going to stop a 'determined attacker' but because you could potentially save yourself a network calamity.

I can tell you about at least three full network outages at a previous job that could have been avoided if only we had tested our systems' ability to self-heal (or self-shut-down) the way we had programmed them to.  Automated response is the pre-programmed, pre-planned response to known bad things.

Like the IPS you have that gets advance warning that a 0day attack is possible and starts to proactively block attempts against that vulnerability... it won't stop everything, but it will give you some time to breathe.
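As a purely hypothetical sketch of that kind of pre-planned response, here's what a threshold-based blocker might look like, with a "monitor-only" dry-run mode you can exercise in a drill before ever letting it touch production.  The threshold value, the alert feed, and the use of iptables are assumptions for illustration; a real IPS or WAF has its own policy engine.

```python
import subprocess
from collections import Counter

# Hypothetical automated response: block a source IP once it exceeds an
# alert threshold. Threshold and iptables usage are illustrative assumptions.
THRESHOLD = 50  # assumed: alerts per interval before we block


def block_ip(ip, dry_run=True):
    """Drop traffic from `ip`; dry_run mimics 'monitor-only' mode."""
    cmd = ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"]
    if dry_run:
        print("MONITOR-ONLY: would run", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)


def respond(alert_source_ips, dry_run=True):
    """Count alerts per source IP and block the noisy ones."""
    counts = Counter(alert_source_ips)
    for ip, hits in counts.items():
        if hits >= THRESHOLD:
            block_ip(ip, dry_run=dry_run)


# Example drill: replay recorded alert data and verify the response fires
# as expected before ever flipping dry_run to False.
respond(["203.0.113.9"] * 60 + ["198.51.100.4"] * 3)
```

The dry-run switch is the whole point: you prove the response does what you planned, against traffic you control, before you let it act on its own.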

Hopefully you see where I'm going with this, and you now have at least three things you can go actively test as soon as time and budget permit.  Remember, you can't rely on defenses, policies, or procedures until you've tested them in a real-world or near-real-world scenario.

Until then... you're just "sorta sure" that you're protected and ready for when the smelly stuff hits the fan-blades.

Cross-posted from Following the White Rabbit
