Deploying code faster, on the order of multiple times per day, is the essence of the modern ultra-agile enterprise.
Nick Galbreath, who works over at Etsy, talks about this quite often, so much so that his latest presentation on SlideShare caught my attention. Take a look at slide 22... Nick is highlighting some age-old tactics that, unfortunately, few organizations employ even today.
A few things are true here. While you certainly can use velocity and frequency to detect potential vulnerabilities, and thus attacks, against a web application, a high frequency doesn't always mean an attack is underway or a vulnerability is present. On the other side of that coin, it is a fallacy to assume that a component (or "page") needs to show high frequency or velocity to signal that an attacker is targeting it.
Can we really learn anything here, or would setting this up be a simple waste of time and resources? Should organizations that care about the number of exploitable security defects in their applications simply stick to what they're doing today, and forgo this approach? The big question is this - is an "attacker-driven" security testing strategy just whack-a-mole and a colossal waste of time?
Nick's smarter than that, and he hints at his strategy in the very first bullet point... "...augments Etsy's proactive security measures" says it all. So where does this leave us? I think this brings us back to intelligently combining multiple capabilities in your enterprise, in stages, and making sense of the data (as the title of his talk suggests).
Let me outline what I think we can learn, and what I think we can bring to raise the bar on web application security.
- First off, no one sane is going to abandon tried-and-true security testing methods for web applications. That being said, there are lots of blueprints for what works in various organizations, and every time someone comes up with a novel approach that seems insane, it appears to work like a charm in some edge case...
- Attacker-driven security testing is a secondary testing measure that is more closely aligned with incident response than traditional security testing cycles. After all, you're taking an application that's running in production and using telemetry data from that running application to do on-the-fly diagnosis of possible flaws... that's a tough game.
- Lots of web apps have issues that only require 1 or 2 hits to exploit, primarily because they're obvious. Simple SQL Injection (SQLi), Cross-Site Scripting (XSS), or even something like Cross-Site Request Forgery (CSRF) can be found and automated with just a few page hits. I wish I had a dollar for every time an application was vulnerable to SQL Injection the first time I touched it, simply by adding 'or+'1'='1 to an input field.
- Assuming your organization is vetting simple attack vectors, it should take an increase in request frequency for an attacker to find and exploit web application defects in the wild. That fluctuation should be detectable by a reasonably tuned SIEM (take a look at ArcSight literature for more on how we at HP achieve this), and counter-measures could be launched to deter the potential attacker while the incident is investigated.
- Detecting variations in frequency and velocity of web page request patterns requires a relatively good baseline analysis - which isn't easy to do if you're generating terabytes of data per day... and patterns are hard to find and understand.
- The above brings me to the notion of 'big data' - and before you dismiss Big Data as a marketing buzzword think about the ability to detect patterns in massive data sets in near-real-time, which could be extremely useful in a situation like this where you need a baseline and the ability to detect a variant needle in a stack of needles.
- Tools are critical in situations where you have massive amounts of data, a strategy laid out, and have to perform an incredible amount of automated parsing and analysis. You'll notice that I said that tools are predicated on having a strategy laid out and knowing what you want to look for, and how.
- Your applications must have hooks for the telemetry systems that will do the data pulling and analysis. Your applications must generate robust logs and capture events, IP addresses, users, and all manner of errors in order for you to understand the application. Simply turning 'everything' on may be too much (in fact, I can almost guarantee that's true), but leaving only basic logging enabled will rarely provide the kind of robust data you need.
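To make the "1 or 2 hits" point above concrete, here is a minimal sketch of the classic tautology-based SQL injection the post mentions. The table, data, and queries are invented for illustration; only the payload pattern (' or '1'='1) comes from the post.

```python
import sqlite3

# Hypothetical in-memory database standing in for a real application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "x' or '1'='1"

# Vulnerable: user input concatenated straight into the query string.
# The injected OR '1'='1 clause is always true, so every row matches.
vulnerable = "SELECT secret FROM users WHERE name = '%s'" % payload
leaked = conn.execute(vulnerable).fetchall()

# Safe: a parameterized query treats the payload as literal data,
# so the malicious string matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()

print("vulnerable query returned:", leaked)
print("parameterized query returned:", safe)
```

One hit with a payload like this either leaks data or it doesn't, which is exactly why such defects produce so little traffic for a frequency-based detector to see.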
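The frequency-fluctuation idea above can be sketched with a very simple baseline test. This is a toy z-score check, not how a production SIEM works; the hourly request counts and the threshold are invented for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it sits more than `threshold` standard
    deviations above the mean of the historical hourly counts."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > threshold

# Hypothetical hourly request counts for one page over the past 12 hours.
baseline = [120, 135, 110, 128, 140, 122, 131, 125, 118, 133, 127, 130]

print(is_anomalous(baseline, 138))  # within normal fluctuation
print(is_anomalous(baseline, 900))  # a sudden spike worth investigating
```

A real deployment would need per-page baselines, seasonality handling, and tuning to keep false positives down, which is exactly the "terabytes of data" problem the next bullet raises.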
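As for the telemetry hooks in the last bullet, one common middle ground between "basic logging" and "everything on" is one structured event per request. The field names below are my own choices for illustration, not Etsy's or anyone's standard.

```python
import json
import logging

logger = logging.getLogger("app.telemetry")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_request(ip, user, path, status, error=None):
    """Emit one JSON object per request so downstream tools
    (a SIEM, a big-data pipeline) can parse it trivially."""
    event = {
        "ip": ip,
        "user": user,
        "path": path,
        "status": status,
        "error": error,
    }
    line = json.dumps(event)
    logger.info(line)
    return line

log_request("203.0.113.7", "alice", "/search", 500, error="unterminated quote")
```

Capturing IP, user, path, status, and the error text in one parseable record is what makes the baseline-and-anomaly analysis described earlier feasible at all.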
Nick's idea of attack-driven testing is interesting: look at where applications are (potentially) being targeted and send your crack security testing team at that part of the application to see whether there really is some new security defect there... or whether some cool feature is simply driving lots of traffic to that page.
What we're talking about is an exercise in integrating incident response and security testing with the development organization and operations - the essence of which is... DevOps! Personally, I think it would be very tough to make this work effectively, but this is one of the many cases where I can't wait to be proven a baseless skeptic.
This is a fantastic way of taking a limited pool of resources and providing guidance on what is really interesting... OR... it could simply be a great way to chase wild geese. I think it's worth a try if you've got a decent security program in place and your organization is ratcheting up the release rate for your web apps...
Cross-posted from Following the White Rabbit