It’s Time to Re-evaluate Host-based Security

Wednesday, May 12, 2010

Andrew Baker


I’ve said it for a few years now: host-based antivirus is no longer working out.  Its focus on enumerating bad code and its reliance on signatures to detect malware cannot keep pace with the threat.

Recently, several prominent antivirus vendors have shipped faulty virus definitions; McAfee’s April 2010 update, which falsely flagged a critical Windows system file as malware, is only the most visible example.

Although all of these vendors have promised the obvious improvements to their QA and testing processes (and I have no reason to believe that they are insincere), there is no sign that these problems will diminish over time.  Instead, it is pretty clear that there will be more problems, as the massive increase in malware forces vendors to push out updates faster and faster.

There are several problems with malware protection which relies on signatures:

  1. Malware writers are using sophisticated toolkits which reduce the skill and time needed to produce effective malware – both new and variants.
  2. Polymorphic malware regularly gets around signature detection, forcing AV vendors to constantly push out new signatures – several times per day!
  3. There are many kinds of malware that are still not properly detected by up-to-date AV solutions with current definitions.

Where does this leave us?  Host-based antivirus products are using up more and more CPU cycles to process an ever-growing list of viruses, yet are still unable to keep up with the onslaught of new malware.   To make matters worse, the constant creation and release of new definition files is stressing the quality assurance (QA) process for antivirus vendors.   We have reached the place where IT professionals are considering turning off automatic AV updates, and deploying labs to test the updates before release.

In short, the odds of timely detection continue to drift downward ever so slowly, while the risk of friendly fire from the AV solution itself creeps upward ever so steadily.  (The McAfee update issue had an impact on its clients that rivaled a major virus attack.)

We are long overdue for a different approach.

Application Whitelisting

Companies such as Bit9, CoreTrace, and Lumension have been pushing application whitelisting for years now.  Microsoft has also provided this technology via AppLocker in Windows 7 and Windows Server 2008 R2 (earlier versions offered the less flexible Software Restriction Policies).  Even some of the major AV players have purchased or developed application whitelisting technology, but they have not been actively pushing it into the mainstream.  They need to start.

Better yet, we as IT leaders and professionals need to start evaluating and deploying the technologies that better address information security concerns in 2010 and beyond, allowing us to make better use of our limited budgets and resources.

Application whitelisting is a good idea because, in any given environment, there are far fewer items that fall into the “known good” category than there is bad code that you don’t want to run.  Just consider the difference between a firewall rule-set that assumes a "deny all that has not been explicitly opened" stance and one that tries to explicitly block every bad protocol and port.

The frequency of change in the “allow list”, particularly in corporate environments, will be greatly reduced as compared to the “bad list”.  This automatically minimizes the chance for error.   It also means that the processing power needed to evaluate the former list will be far less than that needed to evaluate the current lists of malware in today’s signature-based AV products.
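The processing argument can be made concrete with a minimal sketch. The names and digests below are purely illustrative, not any vendor’s actual product or API: a whitelist check is one hash computation plus one constant-time set lookup, no matter how large the threat landscape grows, whereas a signature scanner must test each file against an ever-growing pattern list.

```python
import hashlib

# Hypothetical allow list: SHA-256 digests of approved executables.
# (This digest is the hash of an empty file, used here for illustration.)
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_approved(path):
    # One hash plus one O(1) set membership test -- the cost does not
    # grow with the number of known malware samples in the world.
    return sha256_of(path) in APPROVED_HASHES
```

Real whitelisting products add publisher certificates, trusted paths, and update workflows on top of this, but the core lookup stays cheap.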

Mitigating Code-Enabled Data

I think that we really have to weigh the disadvantages of code-enabled data files and either abandon them outright (cue lots of whining), or at least ensure that there are centrally controlled configuration options for enabling or disabling the automation features of productivity applications.

For instance, consider how diminished the threat of macro-embedded documents has become since Microsoft enabled much better controls over macro security, including turning them off by default, and allowing them to be set via policies.  Remember when macro viruses were the most common threat vector?  We need to do the same for PDF exploits.

Getting a better handle on security at the host level entails not only controlling which applications can run, but also determining in what context, and with what functionality, they can run at any given time.  If we can get vendors to provide us with centralized controls regulating all of the features they integrate into their apps, then each person and each organization can determine what level of risk to assume for any given application – and in the event of an emergency, the problematic feature can be disabled or otherwise impaired as a stopgap.
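One way to picture such centralized controls is a simple deny-by-default policy table that an application consults before enabling a risky feature. The feature names and policy shape here are hypothetical, not any vendor’s actual configuration format; in practice the table would be delivered via Group Policy or a configuration management system, not hard-coded:

```python
# Hypothetical central policy: feature name -> allowed?
POLICY = {
    "office.macros": False,    # macros off by default
    "pdf.javascript": False,   # no script execution inside PDFs
    "browser.plugins": True,
}

def feature_enabled(feature, policy=POLICY):
    """Deny by default: a feature the policy doesn't mention stays off."""
    return policy.get(feature, False)

# During an outbreak, an administrator flips one switch centrally
# instead of patching every endpoint:
POLICY["browser.plugins"] = False
```

The deny-by-default lookup mirrors the whitelisting stance: anything not explicitly permitted is treated as off.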

All of these options will sufficiently mitigate external risks without simultaneously increasing risks from errors.  And they will consume less processing power and generate fewer application conflicts than our current antimalware solutions.

Using the Right Technology

Signature-based security devices still have their place within the enterprise – mostly at the perimeter.  (And even there, their days are numbered.)  But at the desktop, they are increasingly causing more pain than gain, and it is time for us to change our approach, lest we find ourselves slipping further and further behind the malware writers.

And whitelisting need not concern itself with every executable.  Each organization can determine just how much to watch and keep track of, balancing performance, productivity and security according to a risk profile that they select.

Yes, there will be a few challenges to address in order to see mainstream use of whitelisting technology – including the integration of such technology into the patch management process – but the gains will be well worth it.  Environments that have moved in this direction are already seeing significant ROI just in terms of recouping administrator time previously lost to managing the AV process and to cleaning up after broken antivirus definitions.

If you haven’t looked at Microsoft’s AppLocker technology, or at the technology from one of the other vendors, you owe it to yourself and your organization to start evaluating, testing and ultimately deploying.  Those who get ahead of the curve in the next 9-15 months will save themselves and their organizations significantly vs those who keep using the same old methods, even as the nature and intensity of the threat landscape has changed dramatically.

Blacklisting is out.  Whitelisting is in.  Please get with the program.

This was originally posted on Talking Out Loud with ASB 

Fred Williams What about anomaly-based detection systems rather than signature-based systems? The problem with signatures, as you mention, is the catch-up game. And catch-up is not as bad as users who refuse to turn on their auto updates.

An anomaly-based detection system detects anomalies in your system that could alert you to a potential problem, rather than scanning down a list of known signatures trying to identify an attack.

I think the anomaly-based systems are the better way to go.
Andrew Baker Hi Fred,

The problem is still one of enumerating "bad" behavior. That takes even more processing power and you have to keep track of what the potentially malignant software is doing for a longer period of time.

Not only that, but I've seen a lot of "good" software with bad behavioral characteristics. Back in 2007, when I was testing out a behavioral host-based security product, I found that a particular AV vendor used the HTTP protocol in a non-standard way to get definition updates. At a glance, it would seem as if the behavioral product was the problem, but it really was the AV vendor. And that is not an isolated situation.

In the end, we cannot keep up with the list of known bad attack vectors. Do you really want your application and data files to go through the virtual equivalent of going through an airport checkpoint *each* time you open them for use?

It's not a scalable approach, and we've been using it for years. Anomaly detection has its place, but more on the network than on the desktop. Plus, it takes a while to get your initial baseline...

Tom Murphy Fundamentally, chasing bad software or behavior is a reactive approach to a 20-year-old malware problem. As malware grows exponentially, becomes more complex, and is more targeted, application whitelisting becomes essential for endpoint security.

No longer can the AV vendors, who are defending the effectiveness of signature and behavioral defenses, hold off the angry mob of dissatisfied customers.

I wonder how many organizations still consider their AV vendor a "trusted advisor" when it comes to endpoint security?

Andrew Baker Unfortunately, Tom, the answer would be "too many".

That is not to say that current AV vendors cannot add other solutions to their portfolio in order to better mitigate host-based risk. But until they do so, their insistence on staying with an approach that is no longer effective puts the value of such a partnership into question.

Aaed Alqarta From my experience with malware, I can clearly say that blacklist-based AV software is losing the war. I've been recommending complementary solutions to fight the latest threats, like:

1. Application Whitelisting
2. Device Control
3. Security Configuration Management
4. Patch Management (OS + Apps)
5. Locked down, sandboxed browser
6. Strict Internet usage policy (URL Filtering)