Continuous Patching: Is it Viable in the Enterprise?

Tuesday, February 28, 2012

Rafal Los


David Mortman, a friend and colleague, posted a message on Twitter that caught my attention a few weeks ago, and I've been thinking about it ever since. 

David says he wants vendor companies to release patches more regularly (aka more often) but in smaller batches so administrators could simply do continuous patching rather than playing the monthly patch bingo they do today. 

Yeah, I'm not sure I buy that as a good idea, so I thought I'd challenge him to figure out what in the world he was thinking. What transpired was a series of emails back and forth, and it turned out like this...


I started a thread this morning on Twitter about wanting vendors and users of enterprise apps to take on more of a continuous deployment model of patching. In the web-scale world, which is admittedly different from enterprise software, continuous deployment has been demonstrated to be a very effective methodology for changing applications while maximizing the uptime, security, and stability of the application in question.

What I really want is two things:

  • Vendors to release smaller patches more often
  • As a result of #1, customers to become much more comfortable pushing patches faster and more frequently

Note that this applies to far more than security patches: all patches, change-sets, CPUs, upgrades, whatever you want to call them...

[Alright, he had my attention because he was either totally nuts or onto something. I wasn't quite sure which.]


David, that makes sense if you're writing the software as in an agile development methodology, but can you imagine what kind of madness would ensue if patching were a constant exercise?  As it is, most organizations barely have enough time to regression-test and deploy patches as they come out on a semi-regular basis... If we asked them to patch continuously, I think the business (and the IT folks) would revolt.

Let me play devil's advocate for a moment.  As a perfect example, many high-criticality enterprise applications (and specifically ones which I had the displeasure of working with in a previous life) count downtime (planned or unplanned) in minutes per year...  and a continuous patching regimen would blow that completely away. 

How would you justify that or what would we do?  I know you are saying this requires a 'new way of thinking' but is this more of a revolution?  Are enterprises set up for and maybe even ready for this?  Is it even a good idea?

Where do you find increased efficiency here?  What's the added value you're bringing by running patching... 'continuously'?


It may well be a revolution that is required but I think that's a red herring.

Well-architected apps don't require downtime to patch. At least one of the large SQL database vendors already supports patching without shutting down or rebooting. Linux usually only requires reboots for kernel changes. Web apps should be upgradeable node by node. That part isn't rocket science. Part of the reason huge regression tests are necessary is that the size of each change is so huge.
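The node-by-node idea David describes can be sketched in a few lines: take a node out of rotation, patch it, health-check it, and only put it back (and move on) if it passes. The node names and the patch/health-check functions below are placeholders, not any particular vendor's tooling.

```python
# Minimal sketch of a rolling, node-by-node patch cycle.
# Node names and the patch/health-check steps are illustrative only.

NODES = ["web-01", "web-02", "web-03"]

def apply_patch(node):
    """Stand-in for the real patch step (e.g., a package upgrade)."""
    print(f"patching {node}")

def health_check(node):
    """Stand-in for a real smoke test against the patched node."""
    return True

def rolling_patch(nodes):
    patched = []
    for node in nodes:
        # Drain: a real deployment would pull the node from the
        # load balancer here so it takes no traffic while patched.
        apply_patch(node)
        if not health_check(node):
            # A bad patch is caught on one node; halt before it spreads.
            raise RuntimeError(f"{node} failed health check; halting rollout")
        # Undrain: re-enable traffic, then move to the next node.
        patched.append(node)
    return patched
```

The point of the sketch is that each step has a natural stopping condition, so a bad patch costs one node, not the whole application.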

Let's look at an effective example of how this does work well today. To wit: AV, anti-spam, and anti-malware updates. People have gotten used to pushing them without testing because the perceived value is better than the perceived risk.

When I was at Siebel we pushed Windows patches very quickly, relatively speaking (within 24 hours), because we decided we'd rather break the network in ways that we understood than let the miscreants break it in ways that we didn't.


Fair enough David... I think the shift to a more 'on the fly' patching mechanism will require a few things then:

  • A re-architecture of some of the core applications businesses rely on (including those they wrote themselves) so that they are flexible and require downtime only in rare situations
  • Trust in the patching model, because applications will still fundamentally be breakable by the errant patch. We see this in the Java world all the time, where an update to the JVM has disastrous consequences for the application running on top of it; not only are organizations forced to 'back out' critical patches, but in certain cases they have to go without patches entirely due to the complexity of rebuilding to meet patch requirements. (This is a very real problem!)
  • Some proof that 'faster patching' equals better security. With exposed applications being the root cause of so many breaches, I'm just not ready to believe that yet... raw numbers would help; proof speaks loudly.

I'm still not convinced operating systems, databases and enterprise applications are ready for this kind of tectonic shift - but maybe it's time to start thinking this way.  It's really a risk equation, as you lay out... the risk of unknown effects versus the risk of known effects (i.e. downtime).  I'm wary though, because it takes just one major outage caused by this type of patching cycle to make the whole thing explode... how would you roll this out?
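The risk equation Rafal describes (unknown effects versus known effects) can be made concrete with simple expected-value arithmetic. Every number below is hypothetical, chosen only to show the shape of the comparison, not measured from any real environment.

```python
# Expected annual downtime = (probability of a bad patch per cycle)
#                          * (outage hours per bad patch)
#                          * (patch cycles per year).
# All figures are hypothetical, for illustration only.

def expected_downtime(p_bad_patch, outage_hours, cycles_per_year):
    return p_bad_patch * outage_hours * cycles_per_year

# Monthly batch patching: rarer but bigger changes, longer recovery
# because the culprit change is harder to isolate.
batch = expected_downtime(p_bad_patch=0.10, outage_hours=4.0,
                          cycles_per_year=12)

# Continuous patching: many small changes, each easier to diagnose
# and roll back, so shorter outages even though cycles are frequent.
continuous = expected_downtime(p_bad_patch=0.02, outage_hours=0.5,
                               cycles_per_year=100)

print(f"batch: {batch:.1f} h/yr, continuous: {continuous:.1f} h/yr")
```

With these made-up inputs the continuous model wins (1.0 vs. 4.8 hours/year), but the whole argument turns on whether smaller patches really do lower the per-cycle failure probability and outage length, which is exactly the data Rafal is asking for.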


1 & 2 - Completely agree here. This isn't something we can just switch to overnight. It also requires vendors to push smaller patches/upgrades for all of this to work well.

3 - I also agree; we need more data. There's lots of very interesting evidence coming out of the web-scale continuous deployment world that this works very well, but those are organizations that own the code being pushed. It's definitely time for some experimentation and data gathering to find out whether my idea will actually work.

Maybe I'm missing the point, but I don't buy it.  Continuous patching would be an additional burden in my eyes.  Then again, being able to work more efficiently with less to change (smaller patches) would be beneficial, because we could make better assumptions about what broke if a patch went sideways.  Once again, we're down to risks.

Is an organization willing to trade a steadier stream of effort on its systems and applications upkeep for a saner way of patching, with fewer patches going in at once?  Can you quantify the risk of unexpected downtime against expected downtime?  Isn't downtime just downtime?  Only time will tell, if and when some vendor tries it.

Personally, as I've stated before, I will sit back and wait for some raw numbers and metrics behind this. 

The way patching works right now (on all the different levels), I'm surprised anything works: we have operating system patches going in alongside application patches, and we're all just a heartbeat away from catastrophe.  Maybe more continuous patching can be our savior?

Cross-posted from Following the White Rabbit

Damion Waltermeyer I agree it's a worthy goal, but I'm not sure how to implement it. With testing now, it requires a lot of effort to coordinate the testing for each batch of updates in the test environment so we don't nuke production. We try to follow the patching schedule of all of our apps regularly, but it is the testing that takes longest, so you have a buffer from that catastrophe you reference. In a smaller environment with fewer apps it could work very well; in an environment where you need to regularly test over 100 apps, it seems harder to pull off.

Thinking on it more, I think maybe more apps need a diagnostic of sorts wherein they can self-test after each round of patching. If an app can go through its motions and detect errors in itself, it would speed testing and lower patch-deployment risk dramatically.
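Damion's self-test idea could be as simple as a small post-patch harness: each check exercises one piece of the application and the app only returns to service if everything passes. The individual checks here are placeholders for real application probes.

```python
# Sketch of a post-patch self-test harness. Each check returns
# True/False, and the harness reports pass/fail plus the names of
# any failing checks before the app is put back into service.
# The checks themselves are placeholders for real probes.

def check_database():
    """Placeholder: e.g., run a trivial read-only query."""
    return True

def check_login_page():
    """Placeholder: e.g., fetch the login page, expect HTTP 200."""
    return True

def self_test(checks):
    failures = [check.__name__ for check in checks if not check()]
    return {"passed": not failures, "failures": failures}

result = self_test([check_database, check_login_page])
```

Because failures come back by name, a bad patch points straight at the broken subsystem, which is exactly the faster diagnosis the smaller-patch argument depends on.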

Joe Stern When I deploy patches, it's usually through Group Policy, which requires users to reboot in order to install updates with elevated permissions before logon. They view frequent requests for reboots as intrusive, and I wouldn't want to require them more than once per week.
The views expressed in this post are the opinions of the Infosec Island member that posted this content. Infosec Island is not responsible for the content or messaging of this post.
