What started this post is that I have recently received a number of calls and messages from clients and colleagues. The conversations have all gone basically the same way.
They were calling to tell me that their ASV had failed their vulnerability scan because the detected OS was unsupported, and they wondered whether I had encountered this before.
My first question was usually along the lines of: “So, what vulnerabilities did they detect?”
“None,” was the confused answer at the other end of the line.
“What? They must have detected at least one high, severe or critical vulnerability. That is the only way you can fail,” I would reply, now also confused.
“Nope. Nothing. Just the fact that the OS is unsupported,” I was told.
Do not get me wrong. I am not advocating the use of unsupported operating systems, particularly unsupported versions of Windows. The risk, of course, is that one or more vulnerabilities show up that the vendor will not fix because the OS is no longer supported. So there is good reason to avoid this situation. However, there are also situations where you simply have no other choice, whether due to your own organization’s issues and politics or to software vendor constraints.
This situation got me thinking and doing some research, since I did not remember ever seeing or being told that an unsupported OS was an automatic vulnerability scan failure. I no longer do external vulnerability scanning, so my recollections of training and working on the ASV side of our business are a bit fuzzy and very rusty. However, I had never failed a client for an unsupported OS. So when this issue came up, my first action was to determine what had changed.
The first thing I did was review the latest version of the PCI ASV Scanning Procedures, v1.1. I searched for terms such as ‘old’, ‘unsupported’, ‘out of date’, ‘OS’ and ‘operating system’. No matches. So there is nothing in the ASV scanning procedures that fails an organization for running an unsupported OS. Even the PCI DSS does not call out unsupported software, so procedurally, I was thinking there was nothing explicit about unsupported OSes causing a failed vulnerability scan.
So when I made the original posting, one of my readers commented and pointed me to the ASV Program Guide. Lo and behold, at the top of page 16 is the following:
“The ASV scan solution must be able to verify that the operating system is patched for these known exploits. The ASV scanning solution must also be able to determine the version of the operating system and whether it is an older version no longer supported by the vendor, in which case it must be marked as an automatic failure by the ASV.”
So there is no “magic” vulnerability I was missing as the PCI SSC does specify that a scan automatically fails if the OS is unsupported.
But that is not the entire story. The key to this whole process is that the vulnerability scanner used must be able to verify the operating system. While all vulnerability scanners attempt to identify the operating system, the reliability of this identification process is suspect at best.
I am not aware of any vendor of security testing tools that claims it will identify an operating system 100% of the time. This is because there are many, many things that can influence the OS signature that the tools cannot control, and these can greatly affect the tool’s ability to identify the OS, particularly when talking about external scanning.
And if an organization follows the OS security hardening guidelines, a lot of unsupported OSes will not be properly or reliably identified by vulnerability scanners. As a result, I find it hard to believe that the PCI SSC intended to have ASVs only rely on the results of a vulnerability scanner, but that seems to be the case.
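To illustrate why fingerprinting is so fragile, consider the TTL heuristic, one of several weak signals scanners combine. The function below is a hypothetical sketch for illustration, not any ASV's actual logic:

```python
# Hypothetical sketch of TTL-based OS guessing. Common default initial
# TTLs are 64 (Linux/Unix), 128 (Windows) and 255 (many network
# devices) -- but every router hop decrements the TTL, and firewalls
# or load balancers can rewrite it entirely, corrupting the guess.

def guess_os_family(observed_ttl: int) -> str:
    """Map an observed TTL back to the nearest common initial TTL."""
    for initial, family in ((64, "Linux/Unix"),
                            (128, "Windows"),
                            (255, "Network device")):
        if observed_ttl <= initial:
            # Guess: the packet started at `initial` and lost some hops.
            return family
    return "Unknown"

# A Windows host 20 hops away shows TTL 108 and is still guessed as
# Windows, but a middlebox normalizing TTL to 64 makes it look like Linux.
print(guess_os_family(108))  # Windows
print(guess_os_family(57))   # Linux/Unix
```

Note that even when the guess is right, nothing in this signal says anything about the OS version, let alone its support status, which is the crux of the automatic-failure rule.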
So with this clarification, I contacted our ASV personnel, and they told me that they too have been failing vulnerability scans when they run across unsupported operating systems. I asked: if the OS signature is inconclusive, is there no failure? Yes; if the scan comes back and does not identify the OS, they have nothing to go on to fail the scan, and the scan passes.
Given the difficulties vulnerability scanners can have identifying target operating systems, such as when scanning through network firewalls, Web application firewalls, load balancers and the like, I then asked whether they felt these identifications were reliable enough to fail a scan. I was told this is why they confirm the information with the client before issuing the report, so that the report is accurate. So if a client is not honest, they could influence the results of their scan? I was reluctantly told that is probably true.
Then there is the issue that not all operating systems are created equal. Operating systems such as MVS, VMS and MCP are nowhere near as risky, if they are even risky to begin with, as Windows and Linux. A lot of ASVs would argue that they never come across these operating systems running Web services.
However, all of them have the capability of running Web services and I personally know of a number of organizations that run their Web services from such environments. Organizations are running these older versions of operating systems mostly because of the financial considerations of migrating to something else.
However, I can guarantee that none of the dozens of vulnerability scanners I have used in the last 10 years would accurately identify any of these operating systems, let alone tell you the version, unless some service message header information was retrieved by the tool. And even then, most tools do not parse the header to determine the OS, so it would take human intervention to make that determination.
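The human step described above, pulling an OS hint out of a service banner, can be sketched roughly as follows. This is a hypothetical illustration; real banners vary wildly and are often suppressed or falsified, so it only works when the header is honest:

```python
import re
from typing import Optional

# Illustrative sketch only: extract a parenthesized OS/distro hint
# from an HTTP Server header, e.g. "Apache/2.2.3 (CentOS)".

def os_hint_from_server_header(header: str) -> Optional[str]:
    """Return the parenthesized OS hint from a Server header, if any."""
    match = re.search(r"\(([^)]+)\)", header)
    return match.group(1) if match else None

print(os_hint_from_server_header("Apache/2.2.3 (CentOS)"))  # CentOS
print(os_hint_from_server_header("Microsoft-IIS/6.0"))      # None
```

Note the second case: the banner names no OS at all, yet a human analyst knows that IIS 6.0 implies a particular generation of Windows Server. That inference is exactly the part the tools do not automate.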
Regardless of the failure, most ASVs do have a review or appeal process that allows organizations to dispute findings and to submit compensating controls to address any failures. So for organizations that cannot get rid of unsupported OSes, they can use a compensating control. Like compensating controls for the PCI DSS, the organization is responsible for writing the compensating control and the ASV needs to assess the compensating control to ensure that it will address the issues identified by the vulnerability scan.
So, if you can fail an organization over an unsupported OS, why is it that you do not automatically fail on unsupported application software? I went through the Program Guide and there are all sorts of other criteria for applications, but nothing about what to do if they, too, are unsupported. Applications such as IBM Websphere and Oracle Commerce can become unsupported just as easily as their OS brethren.
And in my experience, use of unsupported application software is even more prevalent than unsupported OSes, on the theory that if it is not broken and does not have vulnerabilities, why upgrade? When I asked our ASV group if they fail organizations on unsupported applications, I got silence, and then the response that they will fail an application only if the vulnerability scanner reports a high, severe or critical vulnerability.
To tell you the truth, while vulnerability scanners regularly return text header information for a lot of applications, I would be hard pressed without doing a lot of research to find out if the version being reported was unsupported. However, scanners could provide this feedback if they were programmed to provide it.
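As a sketch of the feedback scanners could provide, the check might look something like the following. The end-of-support table here is entirely made up for the example; a real scanner would need a maintained, per-vendor EOL feed, which is precisely the research burden described above:

```python
from datetime import date

# Illustrative only: product names, versions and dates below are
# placeholders, not authoritative end-of-life data.
END_OF_SUPPORT = {
    ("ExampleAppServer", "6.1"): date(2013, 9, 30),
    ("ExampleAppServer", "8.5"): date(2018, 4, 30),
}

def is_unsupported(product: str, version: str, today: date) -> bool:
    """Flag a product/version whose (assumed) support window has closed."""
    eol = END_OF_SUPPORT.get((product, version))
    return eol is not None and today > eol

print(is_unsupported("ExampleAppServer", "6.1", date(2014, 1, 1)))  # True
print(is_unsupported("ExampleAppServer", "8.5", date(2017, 1, 1)))  # False
```

The hard part is not the lookup; it is keeping the table accurate across every vendor's support policy, which is why scanners do not do this today.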
Then there are all of the conspiracy theories out there that say the PCI SSC and technology companies are working together to drive software sales by forcing organizations to upgrade, and there is a fair amount of anecdotal evidence that would seem to support this argument.
In reality it is not that the software companies are working together with regulators such as the PCI SSC so much as software companies operate this way in order to focus development and support resources on fewer, more current versions. As a result, it is just happenstance that regulations cause organizations to have to update their software.
The bottom line in all of this is that you have options to avoid a failing vulnerability scan because of an unsupported OS. The best method, and the one I most recommend, is do not use unsupported operating systems in the first place. However, as a former CIO, I do understand the real world and the issues IT departments face. As a result, I recommend all of the following which may or may not require you to develop a compensating control.
- Implement not only a network firewall, but also a Web application firewall (WAF) and make sure that the rules are extremely restrictive for servers running unsupported operating systems.
- Configure your firewalls to block the broadcasting of any OS signature information. Masking the OS signature provides the benefit of not advertising to the world that the OS running the application is unsupported. This is not a perfect solution, as, nine times out of ten, the application itself will advertise the fact that the underlying OS is unsupported. It is very important to note that this is only a stopgap measure, and you should still be actively in the process of migrating to a supported OS.
- Implement real-time monitoring of firewalls, servers and applications. Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are a false positive.
- Implement a host-based intrusion detection/prevention solution on any servers that run the unsupported OS. If using a HIPS solution, you may also want to consider using its preventative capabilities for certain critical incidents.
- Implement real-time log analysis for firewalls, servers and applications. Define very specific alerting criteria to ensure that any suspicious activity is immediately reported and operations personnel immediately follow up on any alerts to determine whether they are a false positive.
- Actively use your incident response procedures to address any incidents that are identified with any unsupported OS.
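The signature-masking recommendation above extends to application banners as well, since the application will often give away the underlying OS. As a sketch, assuming Apache httpd or nginx (adjust for whatever server you actually run):

```apacheconf
# Apache httpd: reduce the Server header to just "Apache" and drop
# the version footer on server-generated error pages.
ServerTokens Prod
ServerSignature Off
```

```nginx
# nginx: omit the version number from the Server header and error pages.
server_tokens off;
```

Again, this hides the version from casual banner grabs only; it is a stopgap that complements, rather than replaces, migrating off the unsupported platform.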