The Evolving Value of Event Data for Effective Security

Tuesday, November 26, 2013

Danny Banks


Innovative enterprise security teams realize the importance of security event data and are changing the way they collect and store information for greater insight into the threat landscape.

The data logs generated by IT system and network traffic activities are no longer simply of interest to IT managers and CIOs. These logs now contain valuable clues about suspicious activities taking place across an organization’s users, devices and networks.

Time-stamped log records – also known as event logs – have always been collected in a limited form, specifically to drive real-time monitoring of IT infrastructure. As threats became harder to spot in real time, more event data was selectively collected and sampled on an ad-hoc basis. While most organizations felt this approach was good enough, it is time-consuming and puts a lot of pressure on the security professional, who has to know what they are looking for in order for the system to deliver the data they need.

In the last two years, proactive security practitioners have started to apply statistical analysis to these security events to gain a continuously improving view of their security posture. While security practitioners have changed their approach, their tool set has not evolved as rapidly as they have.

In order to achieve their goals, these metrics-minded security professionals want to collect and analyze massive amounts of event data – not just what they think they need but whatever might help them establish behavioral patterns and richer context of activities. In addition, they want to store that data for long periods in order to do historical analysis or forensic investigations. They want to answer questions like: “What was the typical behavior of that system over the last six months? What happened that appears to be different? What is driving that variance?”
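
As a rough illustration of the kind of baseline-and-variance analysis these questions imply, here is a minimal Python sketch; the input file, field names, and deviation threshold are hypothetical and would differ in any real deployment.

```python
# Hypothetical sketch: baseline a system's daily event volume over six months
# and flag days that deviate sharply from that baseline. The input file, the
# 'timestamp' field name, and the threshold are illustrative assumptions.
import csv
import statistics
from collections import Counter
from datetime import datetime

def daily_counts(path):
    """Count events per day from a CSV with an ISO 8601 'timestamp' column."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            day = datetime.fromisoformat(row["timestamp"]).date()
            counts[day] += 1
    return counts

def flag_variances(counts, threshold=3.0):
    """Return days whose event count sits more than `threshold` standard
    deviations away from the historical mean."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # avoid division by zero
    return [(day, n) for day, n in counts.items()
            if abs(n - mean) / stdev > threshold]

if __name__ == "__main__":
    counts = daily_counts("auth_events_last_six_months.csv")  # hypothetical file
    for day, n in sorted(flag_variances(counts)):
        print(f"{day}: {n} events deviates from the six-month baseline")
```

The point of the sketch is simply that answering "what is driving that variance?" presumes the raw events are still available at full fidelity for the whole six-month window.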

Many IT and Security teams are realizing that these questions can’t be answered easily and that their traditional SIEM solutions were simply not designed to address the sophisticated, massive-scale event data collection and analysis challenge. Here are just a few of the barriers they encounter in legacy systems:

  • Variety (and Volume and Velocity) – Security data is a unique class of data, with its own characteristics that pose challenges in how it is collected and in the variety of formats it takes. Over the years, legacy systems have relied on special agents, connectors, or regular-expression search terms to extract the key elements from that data (a simplified example of this kind of extraction appears after this list), but the volume and velocity of event data generated today can easily bring those systems to their knees.
  • Data Filtering – To reduce the amount of event data that needs to be stored, filters are created to scale down both the number of events and the amount of data kept for each event (a sketch of such a filter follows this list). While this strategy lets event data storage span longer ranges of time, it carries a risky side effect: filtering assumes that the nature of all future searches is known in advance. Any unanticipated security query may therefore require data the system has already discarded, sharply limiting the value of the event data that is stored.
  • Limited Time Range Queries – Typical security analytics are artificially limited in time scope by storage capacity. While successful threat detection requires analysis of data over longer time horizons, most SIEM solutions can keep only 30-90 days of data online, with the rest pushed offline.
  • Two-tier Storage Architecture – To alleviate the high cost of relational database management system (RDBMS) storage, aged events are removed from the database and archived into lower-cost compressed storage. Should events from the archive be needed, they must be uncompressed and restored to the database (a simplified archive-and-restore sketch appears after this list). Removing and restoring event data in an RDBMS is time-consuming, creates resource contention with other operational data loading, and often becomes a manual operation requiring DBA and system administration resources.
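
To make the variety point concrete, here is a minimal sketch of the kind of regular-expression extraction a connector typically performs; the SSH failed-login format and the pattern itself are illustrative assumptions, not any vendor's actual parser.

```python
# Hypothetical sketch of connector-style field extraction: pull key elements
# out of a syslog-style SSH failure line with a hand-written regular expression.
# Every new log format requires another pattern like this, which is part of
# why connector maintenance struggles to keep up with data variety and volume.
import re

SSH_FAIL = re.compile(
    r"(?P<month>\w{3})\s+(?P<day>\d{1,2})\s+(?P<time>[\d:]{8})\s+"
    r"(?P<host>\S+)\s+sshd\[\d+\]:\s+Failed password for (?:invalid user )?"
    r"(?P<user>\S+) from (?P<src_ip>[\d.]+) port (?P<src_port>\d+)"
)

line = ("Nov 26 09:14:02 web01 sshd[2211]: Failed password for "
        "invalid user admin from 203.0.113.7 port 52144")

match = SSH_FAIL.search(line)
if match:
    event = match.groupdict()
    print(event["user"], event["src_ip"], event["host"])
```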
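
The filtering trade-off can be sketched as a simple pipeline that drops low-severity events and strips fields before storage; the severity cut-off and the whitelist of kept fields are assumptions chosen in advance, which is exactly why later, unanticipated queries come up empty.

```python
# Hypothetical sketch of event filtering before storage: keep only events at
# or above an assumed severity, and keep only a fixed set of fields per event.
# Anything filtered out here is unavailable to future, unanticipated queries.
KEPT_FIELDS = ("timestamp", "host", "user", "action")   # assumed field whitelist
MIN_SEVERITY = 5                                        # assumed cut-off

def filter_events(events, min_severity=MIN_SEVERITY, kept=KEPT_FIELDS):
    """Yield slimmed-down copies of the events that pass the severity filter."""
    for event in events:
        if event.get("severity", 0) < min_severity:
            continue  # event discarded entirely; no later query can recover it
        yield {k: event[k] for k in kept if k in event}

sample = [
    {"timestamp": "2013-11-26T09:14:02", "host": "web01", "user": "admin",
     "action": "login_failed", "severity": 7, "src_ip": "203.0.113.7"},
    {"timestamp": "2013-11-26T09:14:05", "host": "web01", "user": "bob",
     "action": "login_ok", "severity": 2, "src_ip": "198.51.100.4"},
]

for slim in filter_events(sample):
    print(slim)  # note: src_ip and the low-severity event are gone
```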
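
Finally, a two-tier archive-and-restore cycle might look roughly like the following; the table, column names, and the use of SQLite with gzip-compressed JSON are assumptions standing in for whatever RDBMS and archive format a given SIEM actually uses.

```python
# Hypothetical sketch of a two-tier event store: aged rows are exported from a
# relational table into a gzip-compressed JSON archive, then must be reloaded
# into the database before they can be queried again (the slow, manual step).
import gzip
import json
import sqlite3

def archive_old_events(conn, cutoff, path):
    """Move events older than `cutoff` (ISO date string) into a compressed file."""
    rows = conn.execute(
        "SELECT ts, host, message FROM events WHERE ts < ?", (cutoff,)
    ).fetchall()
    with gzip.open(path, "wt", encoding="utf-8") as f:
        for ts, host, message in rows:
            f.write(json.dumps({"ts": ts, "host": host, "message": message}) + "\n")
    conn.execute("DELETE FROM events WHERE ts < ?", (cutoff,))
    conn.commit()
    return len(rows)

def restore_archive(conn, path):
    """Reload an archive so its events can be queried again."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            e = json.loads(line)
            conn.execute(
                "INSERT INTO events (ts, host, message) VALUES (?, ?, ?)",
                (e["ts"], e["host"], e["message"]),
            )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts TEXT, host TEXT, message TEXT)")
conn.execute("INSERT INTO events VALUES ('2013-08-01T00:00:00', 'web01', 'old event')")
conn.execute("INSERT INTO events VALUES ('2013-11-25T00:00:00', 'web01', 'recent event')")
archived = archive_old_events(conn, "2013-09-01", "events_archive.json.gz")
print(f"archived {archived} event(s); they are invisible to queries until restored")
restore_archive(conn, "events_archive.json.gz")
```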

Unfortunately, IT and Security teams often don't have the time and resources to see the forest for the trees when it comes to these kinds of challenges, and so they run into them when it's too late. Finding a solution with the right foundational components, such as massive-scale collection and flexible event analysis, can eliminate headaches today and prevent them from ever appearing in the future.

About the Author: Danny Banks is VP Solutions Engineering and Emerging Markets for Hexis Cyber Solutions
