The Ultra-Legacy Problem - Systems so old...

Wednesday, December 19, 2012

Rafal Los


"That app is so old, they've found caves with the source code scrawled on them!" ... sounds funny right? It may be funny, unless you're the person heading up the organization which needs to "do something about" that ultra-legacy code or system in your environment.

Say you're a sizeable institution here in the United States. Say also that over the last two decades you've amassed lots of interesting technologies and platforms that run your business, acquired in a time before the Information Security organization did much more than install anti-virus on your desktops... and now that technical debt has come back to haunt you.

If you're not familiar with the notion of technical debt as applied to Information Security, it's a concept worth reading up on, and one you can certainly research further yourself. The problem with the masses of technical debt we've accumulated across industries is that it's not going away through "natural obsolescence". It also seems to concentrate heavily in finance, particularly in the "old financials" - companies that have been around for 30+ years as banks or other forms of financial services. These organizations have archaic systems that very few of us are even qualified to address anymore... remember FoxPro? There just aren't many people left who can decipher decades-old software and translate it into more modern systems without severe pain, or maybe a seance!

Let's take a system I was introduced to recently, where the problem isn't the desire to move off a 30-year-old platform - it's the ability to do so that's the show stopper. When a friend of mine took over this organization as CISO, he immediately noticed that there were systems out on his network that were... how should I put this... antiquated. How in the world do you migrate that FoxPro system over to your current web-based platform? There just aren't any simple roadmaps or recipes for this, so the ultra-legacy stuff keeps churning.

It makes sense that the biggest risks in any organization are the unknowns, and systems that are more than 10, 15 or even 20 years old definitely fall into that category. Odds are pretty good that not only are the original architects, developers and implementers no longer around, but the documentation and knowledge has left the building with Elvis. When you're not entirely sure what a system does, how it behaves, or what it depends on, you have no choice but to understand it - and the risks it may pose - before you do anything else. I've tackled one of these projects before, and it wasn't even this difficult. One day I'll have to write up a post about porting a Phion firewall to Check Point, but I digress.
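To make that first step concrete - mapping what an unknown box actually talks to - here's a minimal Python sketch (my own illustration, not a prescribed tool; it assumes you can collect periodic "netstat -an" output from the legacy host). It tallies established TCP connections so you at least learn which remote systems depend on which local service:

    import re
    import sys
    from collections import Counter

    # Matches lines like: "tcp  0  0 10.0.0.5:1521  10.0.0.9:40312  ESTABLISHED"
    CONN = re.compile(r"tcp\S*\s+\d+\s+\d+\s+(\S+):(\d+)\s+(\S+):(\d+)\s+ESTABLISHED")

    peers = Counter()
    for line in sys.stdin:
        match = CONN.search(line)
        if match:
            _local_ip, local_port, remote_ip, _remote_port = match.groups()
            # Tally who talks to which local service port
            peers[(remote_ip, local_port)] += 1

    for (remote_ip, service_port), count in peers.most_common():
        print("{:>15} -> :{:<6} ({} samples)".format(remote_ip, service_port, count))

Feed it snapshots on a schedule ("netstat -an | python inventory.py" - the file name is hypothetical) for a few weeks and you have a crude dependency map. Crude, but far better than the nothing you usually start with.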

Let's take a minute to reflect on why these ultra-legacy systems exist in the first place. It's generally one of two things that cause them to stick around - cost or knowledge... and sometimes both are the problem.

Cost is often the biggest reason, because migrating some of these ages-old systems to modern technology is enormously expensive. For example, migrating one simple timecard-tracking system on a shop floor (which was running Windows for Workgroups) was projected in 2003 to cost north of $300,000.00 USD to bring it up to 'modern' code standards. The result: the application was kept around until "it could no longer be reasonably maintained or used". The idea behind technical debt is that the longer a system sticks around, the higher the cost of modernizing its internals and technology, or even of fixing bugs. So after, say, 15 years you're facing financial ruin trying to migrate a system that should have been simple in the first place.

The other big problem is knowledge. Yes, that timecard system wasn't overly complex, but it was written in C and the original source is long gone - so to effectively duplicate it we just need to find the original business process documentation, right? Right... good luck. If only it were that easy. In places where you've got both problems, the costs tend to be higher and the life-support requirements longer.
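To put a toy number on that technical-debt "interest" - the growth rate here is purely an assumption for illustration, not data from any real migration - you can compound the remediation cost annually:

    def remediation_cost(initial_cost, annual_growth, years):
        # Projected cost to modernize after deferring for `years`
        return initial_cost * (1 + annual_growth) ** years

    # The $300k timecard migration above, deferred at a hypothetical
    # 12% annual growth in remediation cost:
    for years in (0, 5, 10, 15):
        print("year {:>2}: ${:,.0f}".format(years, remediation_cost(300000, 0.12, years)))

At 15 years of deferral that hypothetical $300k job is pushing $1.6 million - which is exactly how a 'simple' system ends up on permanent life support.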

What do you do when simply 'securing' a system isn't possible with the modern technology you have at your disposal? When you can't patch a system because it relies on technology that has been out of support since the Clinton presidency, you may have a problem. It's interesting to note that there are plenty of solutions out there that claim to solve many of your legacy problems with a new box on your network, or a proxy-based approach, or whatever... be careful when you turn to these types of 'solutions'.
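If you do go the isolation route, at least verify the fence yourself rather than taking the vendor's word for it. Here's a minimal sketch (the host address and port lists are hypothetical placeholders) that flags anything reachable on the legacy box beyond the one service the business actually sanctions:

    import socket

    LEGACY_HOST = "10.0.0.50"  # assumption: the unpatchable system
    ALLOWED_PORTS = {1521}     # assumption: the one sanctioned service
    CHECK_PORTS = [21, 22, 23, 80, 135, 139, 443, 445, 1521, 3389]

    def is_open(host, port, timeout=1.0):
        # True if a TCP connection to host:port succeeds within the timeout
        try:
            sock = socket.create_connection((host, port), timeout)
            sock.close()
            return True
        except socket.error:
            return False

    for port in CHECK_PORTS:
        if is_open(LEGACY_HOST, port) and port not in ALLOWED_PORTS:
            print("WARNING: {}:{} reachable - the fence has a hole".format(LEGACY_HOST, port))

Run it from a network segment that shouldn't be able to reach the box at all; zero output is the goal, and anything printed means the compensating control isn't actually compensating.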

What do you do, then, if you've got some of these ultra-legacy systems sitting around causing you nightmares and cold sweats? Stay tuned - that's coming in my next post in this series, because this issue isn't going to just 'go away' or solve itself.

Note: If you maintain, work with, or know first-hand of these types of ultra-legacy systems, I want to talk to you. I want to hear what you're doing to mitigate the risks, migrate the platform, or whatever else your strategy may involve. Heck, if there's a wrecking ball in the future, then even better! Either leave a comment, email me directly (Wh1t3Rabbit at hp), or find me on Twitter (@Wh1t3Rabbit)... I look forward to hearing from you!

Cross-posted from Following the White Rabbit

Simon Moffatt: Hi Rafal - do you think there's a case for arguing that old could be more secure? I'm looking at it from the point of view of less common coding languages, approaches to business logic, and so on. If the basics are covered (encryption, access control, etc.), could there be an argument that says it's lasted 20 years, so it must be good? Sometimes new doesn't always mean better or more secure. What do you think?
Rafal Los: Simon,
I suppose an argument could be made, but then again you have to keep in mind that, in the same breath, the 'good guys' don't know whether the system is behaving properly or not - and if it comes under attack, how they can right that ship.
I don't know if you can ever really make that argument stick, although I've seen and heard it done several times with varying degrees of success.

/Raf
Ian Tibble: It's a fair question, and one that comes up frequently. I can't think of even one data centre where there wasn't at least one Clinton-era box... NT 3.51, AIX 3.x... and the reason for this was typically the first one given in the post (cost) more than the second.

Obvious non-security support/infrastructure/manageability concerns aside, the argument gets filed under Security by Obscurity - but we don't really know how obscure... the Internet is as old as these pre-Holocene machines, and there could very well be zero-days out there. There are too many variables to safely assume that old means more secure.

Even new vulnerabilities can affect very old versions of the same flavour of product... e.g. Oracle listener poisoning is thought to affect Oracle database versions going back 13 years, and from what I've read I don't see any reason to think that version 7 wouldn't be affected - and yes, there are still version 7s out there (released in 1992).