Dynamic Application Security Testing (DAST) - When Not to Let Automation Drive
Dynamic Application Security Testing (DAST) is one of the long-standing staples of Software Security Assurance programs, and has been the anchor by which many organizations have bootstrapped their efforts to write better code.
Whether this is the correct approach is a debate for another time; suffice it to say that in the real world, this is how things go.
As the technology has evolved over the past decade, it's been interesting to watch organizations use (or attempt to use) DAST technology to supplement or entirely replace their own internal staff of testers... hoping that technology can completely do the job of a human being. They've almost all failed, and while the reasons may feel obvious to security professionals, it's important to make sure everyone is on the same page.
Let me start off by saying technology should never replace a human being; if that's what you're trying to accomplish, you're doing it wrong. Assume technology is a supplement to the human tester, and that automation and software better enable people and processes.
If either the people or the process component of this equation fails, automation (technology) cannot be counted on to succeed or take up the slack. Technology, no matter who makes, markets, or sells it, is not intelligent... and by that I mean having the ability to think and reason. This fact alone should clue you in to why automation alone can train-wreck in a big way.
Here are a few reasons, and places, where you should never let automation loose without first knowing what you're doing:
- Unknown application function - If you've never fully investigated the application - front-end and back-end - and gotten a well-grounded understanding of what it does, please do not point-and-shoot any DAST-based application security scanning technology at it. The results could be catastrophic.
- Registration applications/sites/pages - If the site or application has a registration component, make sure you've either excluded that piece from automated testing or appropriately scripted/configured that workflow so the tool doesn't exercise the registration process and cost you tens of thousands of dollars. An example: a user set up a "black box audit" of a credit-system front-end. The tool, following its standard configuration, performed the registration step several thousand times while testing the various parameters and page options... the tester ultimately found out that each registration - because it did a look-up through a credit agency - cost the company $1.00 USD. Not only did they then have to go clean out the production database, but there was also an extremely large fee to pay the credit agency for the test.
- Administration interfaces (production) - As a general rule, never blindly test administration interfaces for systems... especially in a production environment. Such a test could cause serious data loss, service disruption, or worse. Two terrible examples: a cloud management front-end, and a web-based database management interface (think MySQLAdmin). In the first instance, an IT organization was implementing a well-known cloud management framework, adding their own custom code to extend the available functionality, and the security team decided to enforce DAST "black box testing" on the project since no code had been reviewed during development. The application was about to go live; after some arguing, the security team got their wish and pointed a DAST tool at the customer BETA environment. The tool found an interesting bug... then accidentally exploited that bug to wipe out all the virtual machines in the cloud environment, causing a catastrophic outage. In the second example, a database administrator wanted to see if his DB admin front-end was secure before putting it out on the Intranet for other administrators to access. Using a black-box testing method, he successfully performed a drop-table on every database in the management console... needless to say, that caused some outages.
- Critical systems (anything connected to a critical system) - Because critical systems - whether we're talking about the power grid or a heart monitor at a hospital - can cause death when they fail, you should never blindly "black box test" applications connected to them. In a best-case scenario, the testing technology overwhelms the application and causes it to fail temporarily - a minor glitch if it's not a production system; in a worst-case scenario, you're potentially killing someone when you drop the main table of an application that hospice nurses use to track medication dispensation. In the latter case, even if you can recover the database from backup in <8 hours... how many patients does that severely impact?
- Cloud applications (or SaaS) - If you are renting or subscribing to a cloud software-based service, you absolutely need to make triple-sure that you have (a) the vendor's consent, and (b) a good understanding of the total impact of the tests you're trying to perform. Remember that in multi-tenant (multi-customer) environments, the defects you uncover probably don't just affect your application or data. This is, of course, all the more reason to test - but you have to do it in a sane environment. An unfortunate example: a security tester contacted me to complain that the technology they were evaluating (vendor shall go unnamed) had caused damage to their cloud environment, and that they were in the middle of a legal battle over liability from the impact of running a black-box DAST tool on their SaaS application. While the tester did first get permission to scan their own application instance, what they did not realize is that the same host and base URL served other applications connected to other services - which, when broken, caused serious outages and confusion. The vendor put an incident response plan into effect, the environment went into lock-down, and data was lost and downtime was incurred... all because someone ran a testing technology without first adequately understanding the consequences.
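To make the registration example above concrete, here's a minimal sketch of the kind of exclusion filter a scan harness can apply before its crawler touches a URL. This is a hypothetical helper, not any specific DAST product's API - the pattern list (`/register`, `/signup`, `/credit-check`) is purely illustrative of workflows that trigger real-world side effects or per-transaction fees.

```python
import re

# Hypothetical denylist of transactional workflows that an automated scan
# must never exercise (registration, sign-up, anything billed per lookup).
# Patterns here are illustrative, not taken from any real product config.
EXCLUDED_PATTERNS = [
    re.compile(r"/register(\.|/|$)", re.IGNORECASE),
    re.compile(r"/signup", re.IGNORECASE),
    re.compile(r"/credit-check", re.IGNORECASE),  # per-lookup fees add up fast
]


def is_safe_to_scan(url: str) -> bool:
    """Return False for any URL that matches an excluded workflow."""
    return not any(pattern.search(url) for pattern in EXCLUDED_PATTERNS)
```

The same idea applies whether your tool exposes exclusions as regexes, URL lists, or recorded workflow macros - the point is that the filter runs before the request is ever sent, not after the damage is done.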
That being said, here are some tips for conducting safe DAST testing, whether you're using our award-winning WebInspect engine or someone else's tech...
- Always prefer to test in a non-production environment - mandate it when possible. When this is not possible, make sure you understand, adequately convey, and get agreement on the potential impact. Over-communicate as a rule.
- Get backups of everything first, then test. This is rule #1.
- Understand the application profile - know what the application does, how it functions, and whether there are costs associated with transactions you will probably trigger... stub those out if possible! (let me know if you want more on this)
- Learn the tool you're using so that you can adequately configure it to be intelligent, purposefully omissive, and targeted.
- Limit testing parameters so the tool can't go off and test databases, applications, functionality, and domains you don't intend to test... this is particularly important if you're stuck testing a production or production-ready environment.
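The scope-limiting tip above can be sketched as an explicit allowlist check: the crawler may only follow a URL if both its host and its path prefix were declared in scope up front. Again, this is a hypothetical illustration - the host `app.example.com` and the path prefixes are invented for the example, and real tools express scope in their own configuration formats.

```python
from urllib.parse import urlparse

# Hypothetical scan scope, declared up front. Everything not listed here is
# out of bounds, so the crawler can't wander into other domains or admin
# functionality. Values are illustrative, not a real tool's config.
ALLOWED_HOSTS = {"app.example.com"}
ALLOWED_PATH_PREFIXES = ("/catalog/", "/search/")


def in_scope(url: str) -> bool:
    """True only if the URL's host and path fall inside the declared scope."""
    parsed = urlparse(url)
    return (parsed.hostname in ALLOWED_HOSTS
            and parsed.path.startswith(ALLOWED_PATH_PREFIXES))
```

Note the default is deny: a URL on an unlisted subdomain or an unlisted path is rejected even if it looks related, which is exactly the behavior you want when the environment next door is production.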
...of course, if you simply don't have the time to do the background work needed to perform a thorough, sane dynamic application security test - you can always outsource it to us.
If you're interested in that service, click here for more information on Fortify on Demand... your best friend when you need it done, and don't have the time/resources to do it yourself.
Cross-posted from Following the White Rabbit