New Malware Discovered Every Two Seconds

Thursday, July 29, 2010

Rahul Neel Mani


With a new piece of malware discovered roughly every two seconds, traditional protection techniques are falling flat, says Michael Sentonas, CTO, APAC, McAfee, in an interview with Geetaj Channana.

He argues that the industry has to look to two solutions to overcome this challenge: putting the research in the cloud and embracing whitelisting.


Q: Let's start by talking about a technology that Indian CTOs are looking at adopting – virtualisation. What kinds of security challenges can a CTO face in virtualised environments? How are they different from regular environments?

A: As far as security is concerned, virtualisation extends some of the traditional security issues and brings up new ones, adding its own complexity and uniqueness. The same threats you experience on a physical endpoint or server also exist on a virtualised server or endpoint: a vulnerability in a Windows or Adobe platform is the same regardless of whether the server is physical or virtual.

People always talk about the ultimate exploit in the virtual environment: the exploitation of the hypervisor. Vendors like VMware, Citrix and IBM do a great job of securing their hypervisors. In VMware's case, there has not been a critical vulnerability in the hypervisor to date.

When businesses buy expensive hardware and then a hypervisor, they want to install many virtual guests on top of that hypervisor, but they do not want the complexity of installing security on every single guest that runs on top of that hypervisor. They just want to protect it in a hassle-free way and yet be effectively covered.

We as a company share this vision. We ask: what can we do so that businesses can just plug into the hypervisor and secure everything at once?

We are also working on a product, due by the end of this year, that will arguably change the way security is approached in virtualised environments, because it reduces the footprint of the security agent that runs inside each virtual instance. This will make scanning more efficient while using fewer resources and less memory.


Q: There is talk that host-based security is not working as well now. Is there another option that organisations can use, such as whitelisting?

A: If you consider the traditional approach to security research, I agree that we are very quickly coming to a point where it will be too expensive to do the research and push it out to the endpoints. We detect around 47,000 new pieces of malware every day, and that's a lot: with 86,400 seconds in a day, it works out to almost one every two seconds. How many end users do you know who would update their protection once every two seconds? They can't, and they won't; it is too intensive for them. Typically it is done once a day.

Back in the early 2000s, we had a lot of time to work with. A vulnerability would be discovered and make its way onto the Internet, and some of the biggest attacks of the time, like Slammer or Love Letter, took six months to reach the end user. Around 2006-07, the term zero-day vulnerability emerged, because a vulnerability would now be exploited the same day it was discovered, and the time to respond was dramatically compressed.

In the early 2000s the focus was on blacklists. As the window compressed, the industry responded by building more predictive features into the technology. Now we are at an inflection point where there are so many new vulnerabilities that it becomes very hard to research them all, and it is very difficult to rely on predictive signatures because the possibility of getting them wrong is always there. So I think the whole industry has to look at two things: putting the research in the cloud, and whitelisting.

As a security vendor we have more than 350 people working 24x7, producing fixes, blacklists, predictive signatures, techniques and behavioural technologies that we put in the cloud, and we want our customers to leverage that.

One of the strategies we have pursued for the last eighteen months is that every solution McAfee provides is effectively a telemetry sensor. What this means is that we gather information from the desktops, servers, laptops and email systems we protect, and feed a private McAfee security cloud with information about every IP address and application they interact with.

The reason is that we are growing what we believe to be the largest reputation cloud in the industry, to help people start making reputation-based decisions about running an application or interacting with an IP address. Eighteen months ago we enabled our endpoint technologies to leverage that security cloud; now our firewalls, email security solutions and intrusion-prevention technologies communicate with it as well.
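To make the telemetry-and-reputation loop concrete, here is a minimal sketch in Python. The class, the scoring rule and the neutral default for unknown addresses are all invented for illustration; they are not the design of McAfee's actual service.

```python
from dataclasses import dataclass, field

@dataclass
class ReputationCloud:
    """Toy model: sensors report sightings; the cloud aggregates them into a score."""
    sightings: dict = field(default_factory=dict)  # ip -> list of malicious/clean verdicts

    def report(self, ip: str, malicious: bool) -> None:
        # Any protected desktop, server, or mail gateway acts as a sensor.
        self.sightings.setdefault(ip, []).append(malicious)

    def score(self, ip: str) -> float:
        # 0.0 = clean, 1.0 = bad; an IP nobody has seen gets a neutral 0.5.
        verdicts = self.sightings.get(ip)
        if not verdicts:
            return 0.5
        return sum(verdicts) / len(verdicts)

cloud = ReputationCloud()
cloud.report("203.0.113.7", malicious=True)  # a mail gateway saw spam from this IP
cloud.report("203.0.113.7", malicious=True)  # an endpoint saw a malware callback to it
print(cloud.score("203.0.113.7"))            # 1.0 -> other products can treat it as hostile
```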


Q: Can you illustrate how this is done?

A: You probably recall that over the July 4th weekend last year there was a highly targeted attack on US government organisations. That attack did not happen on July 4th: it started on May 29th; only the main payload fired on July 4th. The attackers had been planning it for a long time. We were monitoring the malicious activity and adjusting the reputation scores of the IP addresses linked to it. By the time the denial-of-service attack happened on July 4th, the firewalls connected to our security cloud already knew that the reputation of those IP addresses was bad and refused to accept traffic originating from them.
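A minimal sketch of that firewall-side decision, assuming reputation scores like those in the previous example; the addresses, scores and block threshold here are invented for illustration.

```python
# Reputation scores as they might have looked before July 4th (invented values).
scores = {"203.0.113.7": 0.95, "198.51.100.2": 0.10}

def admit_connection(src_ip: str, threshold: float = 0.8) -> bool:
    """Accept traffic only if the source IP's reputation score is below the block threshold."""
    return scores.get(src_ip, 0.5) < threshold  # unknown IPs default to neutral

print(admit_connection("203.0.113.7"))   # False: flagged weeks before the payload fired
print(admit_connection("198.51.100.2"))  # True: reputation still clean
```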

The next thing we talked about is whitelisting, and that is exactly where the industry needs to head in the next couple of years. We have just acquired a company called Solidcore that has a dynamic whitelisting technology. I think the proof of this technology came into the limelight with the Aurora attack on Google and other companies in January. At the time we knew that at least 30 companies were impacted; now we know the number was at least 500. The traditional technologies out there could not respond to that kind of attack: no one understood it, and even reputation services were not helpful because the IPs involved had no reputation rating. Our whitelisting technology, however, was able to stop the attack immediately, without prior knowledge of it. We have held many hacking seminars where we demonstrate the attack and how it could have been stopped. I am sure this is where the industry needs to go.
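For illustration, here is a minimal sketch of the whitelisting idea in Python; it is not Solidcore's actual implementation. The inversion is the point: instead of asking "is this file known-bad?", ask "is this file known-good?" and refuse everything else, which is why a never-before-seen attack can be blocked without prior knowledge of it.

```python
import hashlib

# SHA-256 digests of the binaries IT has sanctioned (the entry is invented here).
APPROVED = {hashlib.sha256(b"trusted-corporate-app-v1").hexdigest()}

def may_execute(binary: bytes) -> bool:
    """Allow execution only if the binary's hash is on the approved list."""
    return hashlib.sha256(binary).hexdigest() in APPROVED

print(may_execute(b"trusted-corporate-app-v1"))   # True: on the whitelist
print(may_execute(b"never-seen-before-malware"))  # False: blocked, no signature needed
```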


Q: Let me move to another pain point. We've seen a move towards virtualisation and the cloud, but there is certainly a security problem when people migrate from their current systems. What, according to you, are the biggest challenges?

A: Migration is certainly a big challenge. Obviously, when you are migrating you must have a contingency plan, a risk-mitigation plan and a disaster-recovery plan to deal with such scenarios. People have the misconception that they can simply turn one system off and turn the other one on, which is obviously not going to happen. Most cloud operators are sensitive to this and have good migration processes to ensure the least amount of outage.


Q: People have a lot of concerns about security when they are moving to a public cloud environment. What according to you are the biggest challenges and solutions?

A: There are no standards today that help people look at security in a unified manner globally; different standards are used in different countries. People use regular audits – quarterly, half-yearly or yearly – to manage this situation, but the problem is that the security landscape changes every minute. We at McAfee are working to help create these standards. What we are trying to do is create a cloud security programme whereby an organisation not only goes through a periodic audit but also through a daily assessment of its technology, to make sure it is free from malware and security holes.


Q: How different is it from having regular security patches?

A: Regular security patches do not tell you whether you are vulnerable. What we are talking about is a process by which organisations can continually assess their technology and gauge where the holes are. Once they know this, they know how to protect their environment, even before a patch has been released: they can take countermeasures to protect the environment before the vulnerability is patched.
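A sketch of what such a daily assessment could look like, assuming a hypothetical feed of known-vulnerable versions with suggested interim countermeasures; none of the product names, versions or workarounds here come from a real McAfee feed.

```python
# What is actually installed in the environment (illustrative inventory).
installed = {"adobe-reader": "9.3.2", "apache": "2.2.14"}

# Hypothetical vulnerability feed: product -> (vulnerable version, interim countermeasure).
vulnerable = {
    "adobe-reader": ("9.3.2", "disable JavaScript in PDF rendering until patched"),
}

def daily_assessment(installed: dict, vulnerable: dict) -> None:
    """Flag exposed software and suggest a countermeasure before a patch is applied."""
    for product, version in installed.items():
        bad_version, workaround = vulnerable.get(product, (None, None))
        if version == bad_version:
            print(f"{product} {version} is exposed; interim fix: {workaround}")

daily_assessment(installed, vulnerable)
```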


Q: Another important issue is the introduction of mobile devices on the network. What kinds of security risks are associated with them?

A: There has always been a lot of hype around security concerns with mobile devices, but the reality is that no major attacks have yet emerged from these devices.

With a lot of intellectual property and sensitive information sitting on these devices, however, the risk is now more severe. Because these devices are small and cheap, they are easily lost and easily replaced, but the data on them cannot be; it is important to be able to manage a device after it is lost. On top of that, in the last 18 months or so there has been an influx of applications for these devices. Hundreds of thousands of applications are floating around on the internet, many of which are not installed from authorised sources and are not approved.

Cross-posted from The CTO Forum
