I've always found sandboxes interesting, particularly from a cost-benefit analysis perspective.
As a developer you should be writing good code, period. But when the pace of developing new functionality outpaces the ability to do complete software security analysis, we see security organizations turning to sandboxing as a method of limiting the damage an exploited piece of code can do.
Just ask Adobe if you want a good example.
Does it make sense to spend time designing, coding, testing and deploying a sandbox, when the real issue is in the underlying application you're trying to protect the operating system from? I'll let you answer that for yourself.
An interesting example came from one of Black Hat Europe's speakers, Guillaume Lovet, in the form of a presentation titled "Breeding Sandworms: How to Fuzz Your Way Out of Adobe Reader's Sandbox".
I sat in on the session, particularly curious because Adobe has made such an effort to secure Reader X through sandboxing that I wanted to see just how the researchers got around its security features.
As it turns out, the bug he discussed is CVE-2011-1353, which was discovered back in September both by him and his co-researcher and by some X-Force team members. The moral of the story is that it was probably already being discussed or exploited "in the wild"... just an educated guess.
The idea is that a sandbox acts as a middle-man between a binary application and the operating system, effectively limiting the damage an exploit of that binary can do against the operating system. If you exploited Adobe Reader prior to the sandbox model, you would probably have read/write access to the OS and many other system-level calls, all completely under the attacker's control.
Now with the sandbox in place, even if you exploit Adobe Reader, you are still limited in damage scope by the sandbox's 'proxy' back into the OS layer. The sandbox, in theory, should be limiting the Reader binary from doing things it really shouldn't be doing to the system. Everything about this makes sense and the exploit bar is certainly raised as Guillaume explained...but it doesn't make it impossible.
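The broker-style model described above can be sketched in a few lines. This is a deliberately simplified illustration of the general pattern, not Adobe's actual implementation; the class, method, and path names are all hypothetical.

```python
# Minimal sketch of the broker model a sandbox like Reader X's uses.
# The sandboxed (low-rights) process cannot touch the OS directly; every
# privileged request is proxied through a broker that checks it against a
# policy before performing it on the process's behalf.
# (Hypothetical policy and API names for illustration -- not Adobe's code.)

class Broker:
    """Mediates privileged operations for a low-rights process."""

    def __init__(self, writable_prefixes):
        # Policy: the sandboxed process may only write under these paths.
        self.writable_prefixes = tuple(writable_prefixes)

    def request_write(self, path):
        # Allow writes inside the permitted directories, deny everything else.
        if path.startswith(self.writable_prefixes):
            return "ALLOW"
        return "DENY"


broker = Broker(writable_prefixes=["C:/Users/alice/AppData/LocalLow/"])

# A benign request inside the sandbox's writable area succeeds...
print(broker.request_write("C:/Users/alice/AppData/LocalLow/cache.tmp"))  # ALLOW
# ...while an attempt to touch system files is refused, even if the
# rendering process itself has been fully compromised.
print(broker.request_write("C:/Windows/System32/config/SOFTWARE"))        # DENY
```

The security of the whole scheme rests on that one policy check: compromise the renderer all you like, and you still only get what the broker is willing to do for you.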
You see, Guillaume explained what we all already know: finding flaws in Reader isn't all that difficult with a little work, and a sandbox is no panacea among security mechanisms.
As the presentation continued, Guillaume exploited a flaw in the way the sandbox's function calls and handlers interacted over the inter-process communication stream. By writing a registry key on the first pass of the exploit, he was able to disable the sandbox entirely and then do whatever he wanted on the system.
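To see why a flaw in the IPC layer is so damaging, here is a hedged sketch of the class of bug involved: a broker-side handler that forwards a registry write without properly validating the target key. The key names, the policy flag, and the handler itself are hypothetical, purely to illustrate the pattern; the real details of CVE-2011-1353 are in the researchers' presentation.

```python
# Sketch of the bug class: a broker IPC handler that can be coaxed into
# writing a registry key outside its policy. All names are hypothetical.

# Stand-in for the Windows registry; the flag controls whether the
# sandbox is enabled on the application's next launch.
REGISTRY = {"HKCU/Software/Example/bSandboxEnabled": 1}

def broker_handle_reg_write(key, value, validate=True):
    """Broker-side handler for a sandboxed process's registry-write request."""
    if validate and not key.startswith("HKCU/Software/Example/Cache/"):
        return "DENY"        # correct behavior: refuse keys outside policy
    REGISTRY[key] = value    # flawed path: the write goes straight through
    return "ALLOW"

# A crafted IPC message that reaches the unvalidated path can flip the
# flag that turns the sandbox off -- but only for the *next* launch,
# which is why the attack needs the victim to open the file twice.
broker_handle_reg_write("HKCU/Software/Example/bSandboxEnabled", 0, validate=False)
print(REGISTRY["HKCU/Software/Example/bSandboxEnabled"])  # 0
```

The two-stage shape of this sketch maps onto the scenario below: the first open disarms the sandbox, the second open runs unconfined.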
Now, to exploit this you have to have something people will click on more than once if the first try fails. We all know that never happens... right? I can see it now: a PDF file labeled "Employee performance review Q1-Q2 2012" is accidentally found on a USB drive in the parking lot.
An eager employee picks it up, plugs the drive in and sees the file. Of course the employee double-clicks the file, and Adobe Reader X crashes. What happens next? Of course, the eager employee tries again - but this time the system is fully owned by the trojaned PDF file... sounds too real to be fake, doesn't it?
The bottom line is that sandboxes aren't a panacea for security; they just raise the bar some. Determined attackers and researchers like Guillaume will continue to find holes in the armor of the code, and of the proxy code (the sandbox) through which it runs, until we figure out how to write secure code in the first place.
Oddly enough, even if we write a perfect sandbox protecting the operating system from exploit through the browser - we're still looking at mass exploits as data moves from your hard disk to the cloud.
It's getting less and less critical to exploit someone's local machine to gain access to critical data as everyone starts to access their files from 'the cloud' using services like Google Docs, DropBox and others... the implications here are staggering.
The solution, of course, is the same one we've been preaching for years: regardless of the presence of a sandbox (which, yes, I will admit adds a layer of complexity to the attack), software security is critical during the requirements, design, coding, and testing of any application. Only when we write good code in the first place will we have an easier time defending it once it goes live.
Lots of interesting buzz about this and many other topics in the hallways at Black Hat Europe. Wish you were there!
Cross-posted from Following the White Rabbit