Infosec Island Latest Articles

Security and the Cloud: Closing the Gap as the Market Grows Mon, 22 Sep 2014 14:50:02 -0500 The cloud is a major presence in technology news and a trending topic that seems to pop up everywhere these days. The cloud certainly has the potential to transform computing across the spectrum, from individuals to SMBs to multinational corporations, and is rapidly becoming an essential part of the way companies do business.

Many IT decision makers feel pressure to adopt the cloud for the sake of not being left behind. But moving to the cloud is usually easier said than done. As with any technology advancement, there are growing pains. Each company has a unique portfolio of IT assets as well as specialized business objectives, all of which adds complexity to the process of moving from legacy systems to the cloud.

The Cloud Services Market is Growing

The market is strong for public cloud services; recent Gartner research projects the public cloud IaaS market to grow cumulatively to $128 billion from 2014 to 2018. That works out to a robust compound annual growth rate (CAGR) of 35 percent. In addition, private cloud adoption is forecast to reach 72 percent in 2014. In just a couple of years, private cloud will give way to hybrid cloud; by 2016 at least 50 percent of large enterprises will have a hybrid cloud solution in place. The cloud is big business, with seemingly little in the way to stop its forward progress, and everyone wants in on the action. So it’s not surprising that service providers, system integrators and others are scrambling to capitalize on the hyper growth of cloud.

Interestingly, while these numbers show tremendous growth in cloud, there are other data points that show a different side. In another study, also by Gartner, the size of the market for cloud security appears to be trailing other cloud services. Cloud security was a minimal $2.1 billion in 2013 and is estimated to grow to only $3.1 billion by 2015. While this represents solid growth, it doesn’t come close to the CAGR of the overall cloud market. We could draw a lot of conclusions from the data, but one thing we know for sure is that while the cloud is growing at a breakneck pace, it’s not without barriers to entry. Security concerns, and a lack of secure solutions, could easily put a damper on cloud growth.

In a separate report published by KPMG, survey data relates that 48 percent of enterprise leaders are concerned about general loss of control in the move to the cloud, while 42 percent are concerned that there isn’t an optimal method for migrating corporate data and workloads to the cloud. In fact, 42 percent related that moving existing infrastructure is too complex. Finally, 39 percent have concerns related to the loss of data and privacy.

Taken together, these data points clearly indicate a common fear amongst leadership: by going to cloud, businesses are worried about losing corporate intellectual property and wasting resources. It isn’t a great leap to hypothesize that the lack of investment in cloud security innovation could be hindering the growth of cloud adoption. From the executive’s perspective, the value proposition of moving to the cloud isn’t always clear. If risk factors are deemed too significant, the potential benefits of cloud adoption become a moot point, however enticing they may be.

Tackling the Migration of Workloads to the Cloud

Disparities between the desired state of the cloud and the enterprise-class cloud services currently available from service providers come to be viewed as flaws in the technology. Adoption slows as IT decision-makers wait for integrated and complete solutions they can trust. For example, one major component still not universally available is automated migration of workloads to cloud. Commonly referred to as cloud onboarding, this is the process of moving a workload from its source environment into a cloud. Most providers are still onboarding customer workloads using manual methods that are extremely expensive and labor-intensive; it can cost thousands of dollars to move a single workload.

Today, there are a few companies tackling the challenge of streamlining the migration of workloads to the cloud. These are SaaS-based solutions that automate the core processes of cloud migration. Until recently, these SaaS solutions required the workload to be extracted from the source environment and moved into the control plane environment in order to execute the conversion process. Unfortunately, with this approach, all workloads would have to traverse the public Internet in order to be converted and deployed into the target cloud—creating a significant vulnerability. In hybrid cloud models, workloads frequently move between private and public clouds; clearly, a secure methodology is critical.

Besides the obvious risk involved in moving any data across the public Internet, compliance requirements and legal standards play a significant role in cloud security concerns. When migrating workloads to the cloud, there are a variety of acts and policies that need to be considered and adhered to with regards to data security. For instance, the Health Insurance Portability and Accountability Act (HIPAA) stipulates that all sensitive patient information must be kept private and that specific steps must be taken to ensure data security at all times. Likewise, Electronic Medical Record compliance mandates that cloud servers require proper authentication to access medical data.

Any business that processes sales and payments online must use Payment Card Industry (PCI) compliant technology. Businesses must also consider Sarbanes-Oxley (SOX) compliance, including requirements around securely maintaining and backing up appropriate log files and documentation.

Many enterprises have additional considerations specific to their industry, supply chain relationships, or contractual obligations. Beyond being able to prove complete security coverage for compliance purposes, a strong security posture protects brand reputation and provides a competitive advantage.

Closing the Cloud Migration Security Gap

Unique SaaS-based solutions are emerging that will close the cloud migration security gap. In this approach, a source modeler (cloud appliance) is deployed into the target private or public cloud.

Leveraging an existing direct connection between the source and target cloud environments, the workload attributes are collected and sent to the SaaS control plane. Based on the attributes, a set of VMs matching the source is created and deployed to the target cloud datacenter. The workload data is then collected directly from the source and overlaid onto the target VMs, which are booted and deployed into the cloud.
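The flow described above can be sketched in code. Everything here is illustrative: the class and function names are invented for this sketch and do not correspond to any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class WorkloadAttributes:
    """Attributes a source modeler might collect from a source server."""
    name: str
    cpus: int
    memory_gb: int
    disk_gb: int

@dataclass
class TargetVM:
    attrs: WorkloadAttributes
    data_overlaid: bool = False
    booted: bool = False

def onboard(workloads, deploy_vm, transfer_data):
    """Model the three phases: collect attributes, provision matching VMs,
    then overlay data and boot.

    deploy_vm and transfer_data stand in for cloud-provider calls that run
    over the direct source-to-target connection (never the public Internet).
    """
    deployed = []
    for attrs in workloads:
        vm = deploy_vm(attrs)          # provision a VM matching the source
        transfer_data(attrs.name, vm)  # copy workload data over the trusted link
        vm.data_overlaid = True
        vm.booted = True               # boot the finished VM in the target cloud
        deployed.append(vm)
    return deployed
```

The point of the sketch is the ordering: attributes travel to the control plane first, and the bulk workload data only ever moves over the trusted connection.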

By moving data within the trusted network connection, the need to leverage the public Internet to transfer server data is completely avoided. Such an approach mitigates security concerns associated with migrating workloads from a source datacenter into public and private clouds, as well as issues associated with data sovereignty, which in and of itself represents another gap in cloud technology.

In addition to maintaining a high level of security throughout the migration process, this approach increases the speed with which workloads can be moved. It is no longer necessary to open tickets with network administrators to edit WAN settings in order to access source servers. The control plane has enough information to identify bottlenecks and trouble spots in the migration process, further streamlining the process and ensuring a higher global quality of service.

Bridging the Technology Gap

Innovations in cloud migration security will be a boon to enterprises eager to begin migrating workloads from a source datacenter into private or trusted private clouds, but concerned about security and compliance issues. Faster, automated, and secure migration solutions will accelerate the growth of the private cloud market by enhancing efficiency and building confidence in a fairly new and often complex process. Bridging technology gaps paves the way for increased cost savings, enterprise agility, and further innovation.


John M. Hawkins is a Senior Director of Services at RiverMeadow Software. Hawkins has more than 20 years of Software IT/Consulting experience.

Copyright 2010 Respective Author at Infosec Island
Parallels Among the Three Most Notorious POS Malware Attacking U.S. Retailers Mon, 22 Sep 2014 11:41:00 -0500 By: Marion Marschalek

POS stands for ‘Point-of-Sale’ as in the point-of-sale devices used by retailers at checkout stands worldwide. These devices are an appealing and lucrative target for cyber criminals because these days, they are, as bank robber Willie Sutton famously said, “where the money is.” POS devices process financial card data at cash desks all around the world and, as the slew of recent breaches reveals, they require better security. Cyphort Labs took a look at three different POS malware families to reveal whether they are connected or not, where the connections are and what their unique features are.

After POS malware achieved its first major success with the breach of Target Corporation in late 2013, the number of POS device infections in the wild skyrocketed. BlackPOS malware, which was used in the Target breach, had the biggest impact in terms of the number of systems breached and money stolen. Next in line is FrameworkPOS, the malware used in the Home Depot breach, discovered in September 2014. Both show a targeted nature. The third malware family of interest, Backoff, appears to have been widely used in a more general attack approach.
Cyphort Labs dissected variants of all three families to create an accurate picture of state-of-the-art POS malware. We are confident that we see fingerprints from three different actors behind the mentioned families, but we can also say with certainty that one was inspired by the others.

We will be sharing more of these results in person on our Most Wanted Malware webinar at 9AM PDT, Thursday, September 25. You can register on our website. This blog post shares an “early bird” look at our findings, along with our insights on how point-of-sale systems are breached and how to better apply security measures in the future.

BlackPOS was used in the data breach of the Target Corporation in December 2013. An estimated 40 million credit and debit cards were exfiltrated from Target’s POS systems.

BlackPOS malware consists of multiple components meant to infect either the POS machine itself or a server on the local network. The POS component is multithreaded and each of the components installs a service on the infected machine to ensure persistence and frequent operation.

IP addresses and server names are hardcoded in the binaries, which suggests the malware was clearly tailored for that specific operation. It also shows the attacker had a perfect understanding of the victim’s network. Both binaries include debug information that points to a cybercrime actor named Rescator.

Both involved components install services on the infected machines to carry out their malicious activity. The operation of BlackPOS can be summed up as follows:

  •  The POS component constantly searches for the pos.exe process in memory
  •  It scrapes card data from pos.exe’s memory and appends it to a dump file
  •  A dedicated thread watches for changes to the dump file and, when triggered, pushes it to a Samba share on the local network
  •  The server component fetches the dump file from the share and transfers it via FTP to a remote server every 10 minutes
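The scraping step above, and a defender’s search for the resulting dump files, both come down to the same pattern matching: find Track 2-like strings and validate the candidate card number with the Luhn checksum. A minimal Python sketch of that logic follows; the Track 2 regex is deliberately simplified for illustration.

```python
import re

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number]
    checksum = 0
    # Double every second digit from the right; subtract 9 if the result > 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

# Track 2 data looks roughly like ";<PAN>=<expiry><service code>...?"
TRACK2 = re.compile(r";(\d{13,19})=(\d{4})")

def find_card_data(buffer: str):
    """Yield PANs from a memory or file buffer that pass the Luhn check."""
    for match in TRACK2.finditer(buffer):
        pan = match.group(1)
        if luhn_valid(pan):
            yield pan
```

Running this over a dump-file candidate quickly separates real card data from random digit runs, which is why both scrapers and forensic tools rely on the Luhn check.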

Figure 1 - BlackPOS appending card data to the dump file

At the beginning of September 2014, Home Depot confirmed a massive data breach of (to date) unknown dimensions. In terms of functionality, the malware found on Home Depot’s network resembles the BlackPOS malware used to hack the Target Corporation.

Compared to BlackPOS, the malware that struck Home Depot is simpler and more straightforward. Execution is more linear, and key functionalities are implemented differently. Similarities, including the general way of operation and the memory scraping method, are undeniable, but still, we can say with certainty that the authors are not the same.
Figure 2 - FrameworkPOS service disguised as a McAfee service
An interesting feature of FrameworkPOS is how it disguises itself as a McAfee Anti-Virus service to hide inconspicuously on the system. Also notable are hidden messages the author included in the code, in the form of links to news articles and cartoons. The content of the messages is clearly anti-American, dealing with America’s role in foreign conflicts in Syria and Ukraine.
Figure 3 - Links to news articles and pictures in memory

An estimate from the US CERT says that in the United States alone more than 1,000 businesses are affected by the Backoff malware family. Backoff seems to be the most aggressive strain of POS malware, being less focused on specific victims and acting more like common malware. It uses a runtime packer, hides in the file system, adds multiple ways to guarantee persistence, and does not rely on local infrastructure in the victim’s network.

Backoff also adds a keylogging module, which neither of the other families provides. It does not install a service on the infected machine, but achieves persistence by creating a remote thread in explorer.exe that will restart Backoff if the malware stops running.
Figure 4 - Command processing by Backoff
As opposed to the other two POS malware families Backoff shows standard bot behavior. It receives commands from a CnC server, protects its executable and can update itself.

According to the US CERT there are now at least five different versions of Backoff in the wild:

  •        1.55 “backoff”
  •        1.55 “goo”
  •        1.55 “MAY”
  •        1.55 “net”
  •        1.56 “LAST”

Clearly, all three families give away very interesting insights. Backoff looks like real-world malware: it is packed, hides its executable in %APPDATA%, uses registry keys for persistence, and takes commands from a CnC server. This behavior is typical of a common bot, just this time coming with a POS scraping feature.
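A defender can turn those persistence traits into a simple triage check. The sketch below works on a simulated snapshot of Run-key values rather than the live registry (a real implementation on Windows would read them via the standard winreg module), and the path heuristic is deliberately simplistic.

```python
import re

# Paths commonly abused for persistence; %APPDATA% is where Backoff
# variants were reported to hide their executable.
SUSPICIOUS_DIRS = re.compile(r"\\AppData\\Roaming\\|%APPDATA%", re.IGNORECASE)

def triage_autoruns(run_entries: dict) -> list:
    """Flag Run-key values whose target lives under a user-writable
    application-data directory.

    run_entries maps value names to command lines, as a snapshot of e.g.
    HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Run.
    """
    flagged = []
    for name, command in run_entries.items():
        if SUSPICIOUS_DIRS.search(command):
            flagged.append((name, command))
    return flagged
```

Legitimate software occasionally autoruns from these directories too, so the output is a triage list for an analyst, not a verdict.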

FrameworkPOS and BlackPOS, on the other hand, are anything but off-the-shelf software; they are tailored specifically for dedicated targets. They are most likely not from the same authors, but FrameworkPOS leaves the strong impression of a copycat attack following earlier POS malware incidents (i.e., Target). Basic principles and ideas are identical: creating a service, scanning chunks of memory, pushing data to a local SMB server or hiding the data in a fake binary file in system root.

Still, the implementation methods look very different. FrameworkPOS is very linear, no multi-threading is performed and the data exfiltration is controlled by time intervals rather than coordinated by two threads. Also, FrameworkPOS scans multiple processes, while BlackPOS limits itself to the pos.exe process of the infected POS device. Interestingly, all three families show slightly different memory scraping methods.

Criminals will always be where the money is. We go to great lengths to protect cash in the brick-and-mortar world; the question now is, why aren’t we doing the same online? POS malware shows once again that enterprises need to operate a solid baseline security while focusing their security solutions heavily on their most valuable assets. Identification and proper risk assessment of a company’s intellectual property is the first step towards prevention of data breaches. Knowing what to look for is equally important, which is why we took the time to share our analysis. For a deeper dive on these three malware variants, we again invite you to attend our next webinar at 9AM PDT, Thursday, September 25. I would like to thank Paul Kimayong and the rest of the Cyphort Labs team for their help with this analysis.

This was cross-posted from the Cyphort Labs blog.

Poisoning the Well: Why Malvertising is an Enterprise Security Problem Thu, 18 Sep 2014 14:54:00 -0500 Malware distributors are increasingly embedding attack code in online advertisements, or malvertisements, in order to infect Internet users. These are typically delivered via ad networks that unwittingly place them on reputable websites operated by recognizable brands.  This practice does more than expose customers to fraud and personal data theft – it damages the brand equity and customer loyalty of the companies who own the websites involved.  To put this problem in perspective, a single malvertising campaign can quickly infect over 10% of the Internet’s top 1,000 trafficked sites.

Malvertising Ecosystem

The malvertising problem stems from the fact that when an organization places an online advertisement, it is typically placed by an ad network. Often, ad networks will resell unfilled ad spaces to other networks — basically doing anything to avoid unused real estate. Meanwhile, an ad is typically sent directly from the servers of the ad network that inherits the space, and is out of the advertising organization’s control.

This multi-level online advertising supply chain has any number of weak links that an attacker can exploit to slip malware into legitimate ads or even take out their own ads. Advertiser vetting by the ad networks is usually limited to the credit checks needed to assure payment for ad placement. There are no integrated controls, industry-enforced standards, or end-to-end accountability across the supply chain. And if an ad network did suspect that a particular ad was malicious and blocked it, it might or might not be correct about that ad, but it would certainly lose all revenue from that ad.

Meanwhile, even if an ad network wanted to scan every ad it handles, it is unlikely to be equipped to handle the sophistication of today’s attackers, who use tools to disguise their malware code from traditional signature scanners.  Attackers also randomly alter the domain names of their command and control infrastructure so that known sources of malware can’t be blocked, and make sure their payment collection network is constantly shifting.
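The domain randomization described above is typically done with a domain generation algorithm (DGA). The toy generator below is purely illustrative and does not model any real malware family; it only shows why blocklisting known domains fails when bot and operator can both derive a fresh daily list from a shared seed.

```python
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 5, tld: str = ".net"):
    """Toy domain generation algorithm (DGA), for illustration only.

    Both the bot and its operator compute the same daily domain list from
    a shared seed, so blocking yesterday's domains does not help tomorrow.
    """
    domains = []
    for i in range(count):
        # Mix seed, date, and index, then hash to get a pseudorandom label.
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + tld)
    return domains
```

A defense that relies on a static blocklist is always one day behind such a scheme, which is why behavioral detection matters.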

According to Cisco’s midyear threat report:

Malvertising is becoming more prevalent, and advertisers are able to launch highly targeted campaigns.  A malvertiser who wants to target a specific population at a certain time—for example, soccer fans in Germany watching a World Cup match—can turn to a legitimate ad exchange to meet their objective.

The problem is only getting worse, as attackers have discovered that planting their malware onto brand-name sites gives the demands of the pop-up windows more credibility and legitimacy with less sophisticated users.

RiskIQ has found that the rate of increase in malvertising has been skyrocketing, increasing 29% in 2012 and then a staggering 225% in 2013. The increase is likely to continue accelerating in 2014.

On mobile websites malvertising has also been growing, and often appears as in-app advertising, typically on Android devices. If users respond to the malware they often end up downloading more unwanted apps.

It’s assumed that victimized consumers will increasingly resort to the defense most readily available to them—ad blocking software. But if that becomes a major trend it will cut into the ad revenue generated by the Internet for all parties. Meanwhile, malware attacks that are initiated from a website owned by the organization or an ad placed by the organization will damage the trust the company has built with its customers.

Fighting Back

Protecting an organization’s brand from being poisoned by malvertising is complicated. Because today’s malvertisements use sophisticated anti-detection technology, stopping them requires intelligent, continuous scanning of the actual behavior of ads after they reach users’ browsers or mobile devices. Since neither the ad exchanges nor the owners of the websites that carry the ads are equipped for this task, it is best handled by third parties that possess the necessary expertise.

Using cloud-based crawling technology, it is possible to navigate websites or mobile apps, “clicking” banner ads so that their associated software will react as if it were being viewed by a user. Malware and other malicious behavior can then be detected. By examining the behavior of malware rather than looking for its signatures, whatever cloaking technology it uses to avoid detection becomes irrelevant.
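Once an ad’s behavior has been observed this way, flagging it can be as simple as scoring the events seen in the browser. The event names and weights below are invented for this sketch; a production system would use a far richer model.

```python
# Illustrative behavioral scoring of an ad's observed actions after a
# simulated "click". Event names and weights are hypothetical.

SUSPICIOUS_EVENTS = {
    "drive_by_download": 10,   # payload fetched without user consent
    "hidden_iframe": 4,        # zero-size iframe injected into the page
    "offsite_redirect": 2,     # landing chain leaves declared domains
    "obfuscated_eval": 5,      # runtime-decoded script executed
}

def score_ad(events, threshold: int = 8):
    """Sum the weights of observed events; flag the ad if the total
    crosses the threshold.

    Because this judges what the ad actually *did* in the browser,
    cloaking the payload's bytes does not hide it from detection.
    """
    score = sum(SUSPICIOUS_EVENTS.get(e, 0) for e in events)
    return score, score >= threshold
```

Ads scoring above the threshold would be removed, while borderline cases go to manual review, matching the workflow described above.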

Ads that are conclusively malicious can be removed, while questionable ones can be examined manually. Global scanning can uncover tens of thousands of malvertisements on a daily basis regardless of the specific operating systems, browser types, or geographic regions that are being targeted.

So while customers won’t know or care which ad network delivered a malicious ad, they will blame the organization that owns the website or placed the ad that attacked them. This downstream impact explains why many organizations now consider malvertising to be an enterprise security problem.

About the Author: Elias Manousos is CEO of RiskIQ, Inc.

Related: Windows and Mac Users Targeted in Malvertising Campaign

How Many Auditors Does It Take … Thu, 18 Sep 2014 12:52:58 -0500 The title of this post sounds like the start of one of those bad jokes involving the changing of light bulbs.  But this is a serious issue for all organizations because, in today’s regulatory environment, it can be a free-for-all of audit after audit after assessment after assessment: a never-ending cascade of interruptions to their business operations as they go through audits and assessments, all in the name of ensuring controls are designed and functioning properly.

But another reason I have written this post is because of all of the comments that I have received that seem to paint my position as a reason why QSAs are not needed to conduct PCI DSS assessments.  I wanted to clarify for everyone my rationale for my position.

Besides those reasons, the larger reason this issue needs to be brought up and discussed is that the PCI SSC is pushing for organizations to adopt business as usual (BAU).  For those of you who did not read the preamble of the PCI DSS v3, BAU is the integration of relevant portions of the PCI DSS into an organization’s everyday activities.  While BAU is a noble goal and only a recommendation at this time, one has to believe that it will at some point become part of the PCI DSS in a future version.

Any organization that takes the time to implement BAU is going to want to assess their implementation of BAU.  They will do this through internal/external audit activities, automated real-time monitoring via dashboards and other internal assessment processes.  Why bother with BAU if you are not going to use it to spot control issues before they become major problems?  That is, after all, the whole point of BAU.

Which brings me back to this year’s Community Meeting and the question I asked about reliance on other auditors’/assessors’ work.  The reason for the question is to minimize, as best we can, the disruptive effects of the myriad of audits/assessments that some organizations are required to undergo.  The answer provided by the Council was an emphatic “NO!” followed by some backtracking after the audience apparently showed its displeasure to the Council members on stage over their take-it-or-leave-it answer.

The reason for the audience’s displeasure, though, is genuine.  A lot of organizations question the number of times user management controls, such as identification of generic UIDs, last password change date, last logon date and the like, need to be tested before such activities are deemed adequate.  How many times do facilities people need to be interrupted to prove that video monitoring is performed and the video is retained?  How many times do facilities have to be visited and reviewed for physical access controls?  There are numerous areas in all control assessment programs where those programs cover the same ground in varying levels of detail and focus.  It is these areas of commonality where the most pain is felt and we hear the lament, “Why do I have to keep covering this ground over and over with every new auditor that comes through?”

It is not like the PCI DSS cornered the market on control assessments.  Organizations have to comply with ISO, HIPAA, GLBA, FISMA, NIST and a whole host of other security and privacy control audits or assessments.  All of these audits/assessments share certain common controls for user management, physical security, facilities management, etc.  What differentiates the programs is the focus of what they are trying to protect.

One easy approach to address this situation is to combine audit/assessment meetings with personnel in physical security, facilities management, user management and the like.  Each auditor/assessor can ask their specific questions and gather evidence and conduct testing as they need.  Unfortunately, due to timing of reporting requirements, having common meetings might not always be possible.

But another approach would be to use internal auditors performing testing monthly, quarterly, etc. and then the QSA reviewing those results during their annual PCI assessment process.  There might be some independent testing required by the QSA for areas such as device configurations, change control and application development changes, but the sample sizes of any testing could be greatly reduced because of the testing done throughout the year due to the implementation of BAU.

If we as QSAs work with other auditors/assessors and agree to common criteria in our respective work programs that satisfy our common controls then we will not have to interrupt an organization to ask the same questions and alienate people as we do today.

Success of compliance programs is the result of making them as unintrusive and automatic as possible.  BAU is a great idea, but it will only succeed if the Council understands how BAU will be implemented in the real world and then adjusts their compliance programs and assessment approach to take BAU into account.  The quickest way to kill BAU is to make it painful and cumbersome, which the Council is doing very effectively at the moment.

This was cross-posted from the PCI Guru blog.

A Fresh Approach to Building an Application Security Program Thu, 18 Sep 2014 12:34:40 -0500 Ben Tomhave and Ramon Krikken at Gartner have released a paper called "Application Security: Think Big, Start with What Matters," which describes concrete steps on how to cost-effectively deploy an app sec program. We highly recommend that organizations seeking to build an app sec program read the report.

Krikken and Tomhave have defined a realistic set of guiding principles that can be leveraged to “prioritize the use, growth and maturity of each given framework component.” In our view, the framework is valuable because of the following realities in most organizations:

  •  There aren’t enough application security experts available to rely on manual activities. From the report, in regards to the “Cost-efficiency and agility require automation” component: “There is no reasonable way to scale manual human activities relative to appsec without exploding costs. As a result, it’s important to leverage automation in order to embed and scale appsec practices in a cost-effective manner.”

  •  Defining security requirements is fundamental to achieving secure software development. One recommendation from the report is to “Start by implementing application security testing (AST) and creating basic security feature requirements.”

  •  Security isn’t what drives business revenue or operating efficiencies: features do. Software teams are self-optimized to produce business value, and application security programs need to adapt to this rather than the other way around.

All too often, we have seen organizations invest in application security testing and education as the only two components of their application security programs. The net result is an expensive “patch and fix” approach that self-optimizes only for the risks that scanners are able to catch. Tomhave and Krikken point out that: “Anecdotally, it is believed that SAST [Static Application Security Testing] only covers up to 10% to 20% of the codebase, DAST [Dynamic Application Security Testing] another 10% to 20% (minimal overlap with SAST), with the end conclusion being that traditional AST really only covers about 40% or less of your codebase.”

Organizations often struggle to move past education and testing because they haven’t found solutions that scale with a limited security staff. The authors also dispel the myth that it’s impossible to automate early-phase secure SDLC activities: “Automated incorporation of security requirements into the overall requirements management process should be sought out and leveraged wherever possible.”

Overall, “Application Security: Think Big, Start With What Matters” should be on your short-list of reference material if you’re looking to lower the costs and decrease the risk of software security.

Cross-posted from the SC Labs blog.


No Quick Fixes for Home Depot After Record Cyberattack Wed, 17 Sep 2014 12:16:36 -0500 Home Depot fixes America’s household problems. If you’re planning a do-it-yourself project, whether it’s repairing a leaky faucet or installing new linoleum flooring, you’re probably going to visit a Home Depot to buy your materials or get some advice.

America’s largest home improvement retailer seems to have a repair for everything, but after news that its payment systems had been breached, Home Depot has a lot of work ahead to get its own house in order. It faces a long road as it repairs its reputation, its relationships with customers and its network security.

In what the New York Times speculated could be the “largest known breach of a retail company’s computer network,” a massive breach affected more than 2,000 Home Depot locations in the U.S. and Canada between April and Labor Day, exposing the credit card information of an estimated 60 million customers.

These are unprecedented numbers, topping the infamous Target breach of last holiday season. By comparison, that attack did not last as long (three weeks), affected fewer stores (about 1,500) and resulted in fewer victims (40 million).

The information security press has been quick to criticize Home Depot for its handling of the advanced persistent threat (APT) attack, particularly for its slow response. Eric W. Cowperthwaite, vice president of Core Security, told the Times, “This is not how you handle a significant security breach, nor will it provide any sort of confidence that Home Depot can solve the problem going forward.”

Lessons from the Target Breach

In KrebsOnSecurity’s original report of a possible breach earlier this month, Brian Krebs reported that Home Depot registers had been infected by “BlackPOS” – the same strain of malware found on Target point-of-sale systems last winter.

And the parallels don’t stop there.

After both network security breaches, customer data surfaced on Rescator, a black market website that peddles stolen credit card information. And what’s more, both Target and Home Depot were attacked when their sales usually spike – Target during the holiday season and Home Depot during the spring, which this year produced a record number of transactions.

Both retailers have also taken similar steps to address the attacks publicly. Just as Target did, Home Depot is offering “free identity protection services, including credit monitoring” to any customer who shopped at the store from April 2014 onward.

What’s still unclear is how hackers were able to breach Home Depot’s computer network. In the case of Target, attackers gained remote access to its network by finding a vulnerable point-of-entry in the form of one of the retailer’s HVAC contractors. If that’s also the case here, as it’s been with other prominent companies that have been attacked, it’s yet another reminder of the need for more secure remote access.

Any time a mobile employee or endpoint accesses a corporate network remotely, instead of working within the safer confines of the immediate network, there’s a greater chance that an attacker could exploit a vulnerability if the proper network security measures aren’t in place. In order for a network administrator to map out a complete view of the network, including remote users, tools like centrally managed VPNs are critical. This way, if a breach is detected, an administrator can take immediate steps to halt the attack, from deprovisioning users to revoking network access.
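The deprovision-and-revoke flow described above can be sketched in a few lines. Everything here is hypothetical: `VpnDirectory` and its methods stand in for whatever management API a real centrally managed VPN exposes.

```python
# Minimal sketch of centrally managed remote-access control.
# VpnDirectory is a hypothetical stand-in for a real vendor's management API.

class VpnDirectory:
    def __init__(self):
        self.users = {}  # username -> {"active": bool, "networks": set}

    def provision(self, user, networks):
        self.users[user] = {"active": True, "networks": set(networks)}

    def revoke_network(self, user, network):
        """Narrower step: revoke access to one network segment only."""
        self.users[user]["networks"].discard(network)

    def deprovision(self, user):
        """Disable the account entirely -- the broadest response to a breach."""
        self.users[user]["active"] = False
        self.users[user]["networks"].clear()

    def can_reach(self, user, network):
        entry = self.users.get(user)
        return bool(entry and entry["active"] and network in entry["networks"])

directory = VpnDirectory()
directory.provision("hvac-contractor", ["billing", "pos-segment"])

# Breach detected on the POS segment: revoke just that network first...
directory.revoke_network("hvac-contractor", "pos-segment")
# ...then deprovision the account while the incident is investigated.
directory.deprovision("hvac-contractor")
```

The point is not the data structure but the single control point: one administrator action cuts off a compromised remote account everywhere at once.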

As Home Depot rebuilds its network security infrastructure, this is just one of many steps it will need to take to prevent another attack.

This was cross-posted from the VPN HAUS blog.

Copyright 2010 Respective Author at Infosec Island]]>
3 Things To Consider When You Revisit Your Backup System Wed, 17 Sep 2014 12:12:00 -0500 What’s expected of you in your role as a CISO is expanding as companies rely heavily on more complicated information systems. There is a barrage of threats and ever more reliance on technology as businesses leave pencil and paper behind. The status quo is not an option with so much change occurring within the IT industry, so let’s cover one aspect that is often overlooked: an effective backup solution. When all is well, there is nothing to worry about. A poorly configured backup system, however, can make life more than a little tricky when you can’t restore your files effectively or efficiently. Let’s cover just a few aspects that help relieve the worry. We can start with…

Strong Client-Side Encryption:

CSE is a measure to keep data secure when it leaves your computer and travels to another destination. Data is encrypted before it leaves your machine, and its contents can only be decrypted with a key that you alone control. As an Information Officer, you know how valuable your company’s information is. This added layer of protection ensures that the management of data security is controlled from your location. Pairing CSE with protocols to manage the security of the keys themselves is important too. So that’s one side of it — encrypting data on the client side. But data is always on the move, so read on.
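To make the idea concrete, here is a standard-library toy of encrypt-then-MAC client-side encryption. It is a teaching sketch only; a real backup system should use a vetted library (libsodium, python-cryptography or similar) rather than hand-rolled crypto like this.

```python
# Toy illustration of client-side encryption (encrypt-then-MAC) using only
# the standard library. Teaching sketch only -- not production cryptography.
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream from the key, a nonce, and a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    cipher = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + cipher, hashlib.sha256).digest()
    return nonce + cipher + tag          # this is what leaves the client

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, cipher, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + cipher, hashlib.sha256).digest()):
        raise ValueError("tampered data or wrong key")
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, nonce, len(cipher))))

key = secrets.token_bytes(32)            # stays with you; never leaves your site
blob = encrypt(key, b"accounts receivable, Q3")
```

Whoever stores `blob` off-site sees only ciphertext; without `key`, decryption fails loudly instead of silently returning garbage.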

Verified and Encrypted Transit:

Computers are just things. They don’t know each other, or even that they exist — unless we humans tell them so. When two of them are first introduced (by us), they should constantly check to make sure they are still who they say they are. So when moving data around, the source and destination should always be verified to ensure that the information remains only between them. Adding encryption to this process adds yet another layer of protection. Both the clients and servers should use a secure transport like SSL/SSH with host authentication that uses private/public keys. Using things like SMB or NFS for transferring backups is not recommended. Whatever your information is, somebody else wants it. If you effectively patrol the data, you have the upper hand. But things happen. Systems are made up of computers, and computers are vulnerable to many things. So how can you recover when data is lost due to some unforeseen event?
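That "are you still who you say you are" check is usually implemented by pinning a host key fingerprint, the idea behind SSH's known_hosts file. A minimal sketch, with made-up key material standing in for real server public keys:

```python
# Sketch of host verification by pinned key fingerprint (the known_hosts idea).
# The key bytes below are invented placeholders, not real key material.
import hashlib
import hmac

PINNED = {}  # hostname -> expected SHA-256 fingerprint

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def pin_host(hostname: str, public_key: bytes) -> None:
    """First introduction: record the host's key fingerprint."""
    PINNED[hostname] = fingerprint(public_key)

def verify_host(hostname: str, presented_key: bytes) -> bool:
    """Every later connection: the host must present the same key."""
    expected = PINNED.get(hostname)
    return expected is not None and hmac.compare_digest(expected, fingerprint(presented_key))

backup_server_key = b"fake-ed25519-public-key-material"
pin_host("backup.example.internal", backup_server_key)
```

A host that suddenly presents a different key, or a host you have never met, fails verification; that is exactly the alarm you want before shipping backups anywhere.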

Tested, Well-Documented and Rapid Recovery:

There are a lot of things to consider when designing your backup system. What’s your recovery time objective? What’s your budget? What exactly are you going to back up? Should you test it daily, weekly, monthly? The difficult task is finding a balance that works for you, and when you do — test it! There’s nothing worse than having a failsafe measure, well… fail. Document your backup results and tweak protocol to optimize your solution. This way, a potential catastrophic event can instead be a minor disturbance. Remember, a backup is not a backup until you have tried restoring it!
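The "a backup is not a backup until you have tried restoring it" rule can itself be automated. A minimal sketch of a restore test, using temporary paths and a checksum manifest (in a real setup the manifest would live alongside the backups):

```python
# Sketch of a restore test: back a file up, restore it, and verify the
# restored copy against a checksum manifest. Paths here are temporary.
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

workdir = Path(tempfile.mkdtemp())
source = workdir / "customers.db"
source.write_bytes(b"id,name\n1,Acme\n")

# "Back up" the file and record its checksum in a manifest.
backup = workdir / "backup" / "customers.db"
backup.parent.mkdir()
shutil.copy2(source, backup)
manifest = {backup.name: sha256_of(source)}

# The actual drill: restore, then compare against the manifest.
restored = workdir / "restore" / "customers.db"
restored.parent.mkdir()
shutil.copy2(backup, restored)
restore_ok = sha256_of(restored) == manifest[restored.name]
```

Run on a schedule and logged, a check like this turns "we think the backups work" into a documented result.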

And There’s More…

Let’s face it. You have a lot on your plate. The increasing demands on information security make your objectives more challenging and your goals further from reach. Having a well-defined backup solution in place is a facet of data management that cannot be ignored. Consider all of the data that your business relies on — accounts receivable/payable, sales and customer databases, supplier information, so on and so forth. It’s unimaginable to lose any of that important info — and to what…human error? Hardware failure? Software crash? Take a good look at your system, revisit your strategy and make sure the solution is efficacious in restoring your files.

If you are implementing or building your own solution by bundling different tools, you should explore these:

• zbackup

• rzbackup

• rsync



Any combination of these tools should serve you well. If you’d like to take a deeper look into revisiting your backup solution, take a look at Jarl’s blog post: A Word About Backup Solutions.
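As a sketch of how those tools might be chained: tar feeds zbackup for deduplicated, encrypted archives, then rsync mirrors the repository off-site over SSH. The paths and host names are placeholders, and the exact invocations are illustrative; check each tool's documentation before relying on them.

```python
# Illustrative only: assemble the shell pipeline that pairs zbackup with
# rsync-over-SSH. Paths and hosts are placeholders; verify flags against
# the tools' man pages before use.
import shlex

repo = "/var/backups/zbackup-repo"
offsite = "backup@backup.example.internal:/srv/backups/"

def backup_pipeline(data_dir: str, snapshot_name: str) -> str:
    # tar the data, feed it to zbackup, then mirror the repo off-site.
    steps = [
        f"tar -C {shlex.quote(data_dir)} -cf - . | "
        f"zbackup backup {shlex.quote(repo)}/backups/{shlex.quote(snapshot_name)}",
        f"rsync -a -e ssh {shlex.quote(repo)}/ {shlex.quote(offsite)}",
    ]
    return " && ".join(steps)

cmd = backup_pipeline("/srv/data", "nightly-2014-09-17")
```

Generating the command string from code (rather than hand-typing it nightly) makes the pipeline reviewable and easy to drop into cron.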

Cross-posted from the SC Labs blog.

Copyright 2010 Respective Author at Infosec Island]]>
2014 ICS Cyber Security Conference: Register Today to Hold Your Spot Wed, 17 Sep 2014 08:43:28 -0500 2014 ICS Cyber Security Conference Discount Code

Atlanta Oct. 20-23, 2014 - Georgia Tech Hotel and Conference Center

Following a sold out event in 2013, the 2014 ICS Cyber Security Conference is expected to attract more than 250 professionals from around the world and again sell out.

Attendees can register online and pay just $1,895 for a full conference registration, which includes all four days AND the workshops on Monday.

Since 2002, the ICS Cyber Security Conference has gathered ICS cyber security stakeholders across various industries, attracting operations and control engineers, IT, government, vendors and academics.

As the longest-running cyber security-focused conference for the industrial control systems sector, the event will cater to the energy, utility, chemical, transportation, manufacturing, and other industrial and critical infrastructure organizations.

The conference will address the myriad cyber threats facing operators of ICS around the world, and will address topics covering ICSs, including protection for SCADA systems, plant control systems, engineering workstations, substation equipment, programmable logic controllers (PLCs), and other field control system devices.

The 14th ICS Cyber Security Conference will have 5 major themes:

• Actual ICS cyber incidents

• ICS cyber security standards

• ICS cyber security solutions

• ICS cyber security demonstrations

• ICS policy issues

The majority of conference attendees are control systems users, working as control engineers, in operations management or in IT. Industries represented include defense, power generation, transmission and distribution, water utilities, chemicals, oil and gas, pipelines, data centers, medical devices, and more. Other attendees work for control systems vendors, security products and services companies, associations, universities and various branches of the US and foreign governments.

Representatives from these organizations and many more have attended the ICS Cyber Security conference in the past:

ICS Conference Attendees

Copyright 2010 Respective Author at Infosec Island]]>
The Big Three Part 3: Incident Response Tue, 16 Sep 2014 10:41:33 -0500 It’s been a couple of busy months since we posted parts one and two of this series, so I’ll recap briefly here. Part one talked about the failure of information security programs to protect private data and systems from compromise. It showed that despite tighter controls and better security applications, there are more data security compromises now than ever. This was the basis for suggesting an increased emphasis on incident detection, incident response and user education and awareness; the Big Three.

Part two in the series discussed information security incident detection and how difficult it is to implement effectively. It related the sad statistic that less than one out of five serious data breaches is detected by the organization affected, and that a disturbing number of breaches go undetected for months before finally being uncovered. Part two also touted a combination of well configured security tools coupled with human monitoring and analysis as one of the best solutions to the problem. In this installment, we’ll discuss the importance of accompanying incident detection with an effective, well-practiced incident response plan.

Say an ongoing malware attack on your systems is detected: would your staff know just what to do to stop it in its tracks? If they don’t do everything quickly, correctly and in the right order, what could happen? I can think of a number of possibilities right off the bat. Perhaps all of your private customer information is compromised instead of just a portion of it. Maybe your customer-facing systems become inoperable instead of just running slow for a while. Possibly your company faces legal and regulatory sanctions instead of just having to clean up and reimage the system. Maybe evidence of the event is not collected and preserved correctly and the perpetrator can’t be sued or punished. Horrible consequences like these are the reason effective incident response is increasingly important in today’s dangerous computing environment.

Developing and implementing an incident response plan is very much like the fire drills that schools carry out or the lifeboat drills everyone has to go through as part of a holiday cruise. It is really just a way to prepare in case some adverse event occurs. It is deconstructing all the pieces that make up security incidents and making sure you have a way to deal with each one of them.

When constructing your own incident response plan, it is wise to go about it systematically and to tailor it to your own organization and situation. First, consider the threats that menace your business. If you have been conducting risk assessments, those threats should already be listed for you. Then pick the threats that seem the most realistic and think about the types of information security incidents they could cause at your organization. These will be the events that you plan for.
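Picking "the most realistic" threats can be as simple as ranking the risk register by likelihood times impact. A minimal sketch, with invented threat names and scores:

```python
# Sketch of ranking threats from a risk assessment so the most realistic
# ones drive the incident response plan. Names and scores are illustrative.
threats = [
    {"name": "phishing-led credential theft", "likelihood": 5, "impact": 4},
    {"name": "POS malware",                   "likelihood": 3, "impact": 5},
    {"name": "insider data exfiltration",     "likelihood": 2, "impact": 4},
    {"name": "website defacement",            "likelihood": 2, "impact": 2},
]

def rank(threats):
    # Simple risk score: likelihood x impact, highest first.
    return sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True)

plan_for = [t["name"] for t in rank(threats)[:3]]  # the events you plan for
```

The scoring model can be as elaborate as your risk assessment supports; the point is that the plan's scenarios come from data, not guesswork.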

Next, look over incident response plans that similar organizations employ and read the guidance that is readily available out there (just plug “information security incident response guidelines” into a search engine and see what you get – templates and implementation advice jump right off the page at you!). Once you have a good idea of what a proper incident response plan looks like, pick the parts that fit your situation best and start writing. This process produces the incident response policies needed for your plan.

After your policies are set, the next step I like to tackle is putting together the incident response team. These individuals are the ones that will have most of the responsibility for developing, updating and practicing the incident response procedures that are the meat of any incident response plan. Armed with the written policies that were developed, they should be an integral part of deciding who does what, when it gets done, where they will meet, how evidence is stored, etc. Typically, an incident response team is made up of management personnel, security personnel, IT personnel, representative business unit personnel, legal representatives and sometimes expert consultants (such as computer forensics specialists).

Once all the policies, personnel and procedures are in place, the next (and most overlooked part of the plan) is regular practice sessions. Just like the fire drills mentioned above, if you don’t actually practice the plan you have put together and learn from the results, it will never work right when you actually need it. In all my time doing this sort of work, I have never seen an incident response practice exercise that didn’t expose flaws in the plan. We recommend picking real-world scenarios when planning your practice exercises and surprising the team with the exercise just as they would be in an actual event.

In the fourth and final installment of this series, we will discuss user education and awareness – another vital component in recognizing and fighting data breaches and system security compromises. 

Thanks to John Davis for this post.

This was cross-posted from the MSI State of Security blog. Copyright 2010 Respective Author at Infosec Island]]>
Shining a Light on Industrial Control Networks with Purpose Built Intrusion Detection Systems Tue, 16 Sep 2014 05:15:27 -0500 With their reliance on open networking technologies and increased connectivity, industrial control systems (ICS) are at great risk of cyber attacks against their hardware and software components. Announcements of newly discovered cyber weaknesses in ICS are now commonplace. Public and private sectors across the ICS landscape are greatly concerned about the exploitation of these vulnerabilities and are working collectively to develop defensible postures through regulation, supply chain standards and guidelines for implementation and operation.

ICS connectivity and publicized vulnerabilities are on the rise. For example:

· The number of industrial products with Ethernet connectivity grew 350% (30% CAGR) between 2007 and 2012, with 4.5 million connected products in 2012. (Source: VDC Research).

· The number of ICS vulnerability disclosures grew 600% between 2010 and 2012. (Sources: NSS Labs, DHS).

· The cyber attack surface is immense. There are 45 million connected SCADA devices and millions of connected Smart Grid devices installed worldwide. (Source: Mocana)

Similarly, cyber attacks are on the rise:

· In the six-month period ending June 2012, nation-state cyber attackers targeted 23 US pipeline companies. One company had remote access to 60% of pipelines in North America. The attackers stole password lists and control system credentials. (Report “Active Cyber Campaigns Against the US Energy Sector” DHS, ICS-CERT)

· In August 2012, hacktivists using the Shamoon virus attacked Saudi Aramco in an effort to halt production. The main IT network (~30,000 workstations) and corporate website were shut down for more than a week (some services were down for even longer). Saudi Aramco stated that the cyber attack was aimed at production, though it failed to disrupt it.

· Researchers using internet-facing honeypots mimicking ICS systems recorded 74 intentional attacks in five months. Eleven of the attacks modified the control system. (Trend Micro).

Of particular interest is the growing involvement of hacktivists and nation states in infrastructure cyber attacks. Hacktivists such as Anonymous, through their #OpPetrol campaign, have selectively targeted ICS assets for attack to protest perceived social and political injustice. Covert nation state-sponsored cyber attacks against critical infrastructure are also occurring. These include the suspected Bush administration’s Operation Olympic Games, which targeted Iran’s Natanz nuclear facility, and the sweeping infrastructure attacks in Georgia prior to the Russian invasion, both of which remain formally unacknowledged.

The combination of increased ICS connectivity and the ongoing rise in vulnerability disclosures indicates that cyber security incidents will become more frequent and complex over the coming years. The main question for those analyzing the risk of an ICS security incident is no longer if such an incident will occur, but when. And when it does occur, how will they ensure that they address the full range of people, process and technology to minimize the impact and cost of the breach?

For years, one of the most effective ways enterprise IT departments have addressed the problem is by leveraging Intrusion Detection Systems (IDS).  Operational Technology (OT) groups can now take advantage of similar protections against cyber attacks that can bring down the industrial network, compromise data, or reveal sensitive intellectual property.  While the Department of Homeland Security ICS-CERT has long advocated using IDS as a key preventative measure, the key to a successful implementation is using an IDS that has been designed and built to meet the key security, technical, and business requirements of industrial networks.  For simplicity, efficiency, and security efficacy, IDS should be a key component of an industrial next gen firewall solution.

The right solution must include an industrial-focused IDS (vs. an enterprise IDS) because industrial attacks can easily bypass enterprise IDS.  For example, attackers can:

Use smaller messages to bypass traditional IDS – Many attacks evade enterprise IDS when they are broken into segments that the IDS cannot reassemble properly because it does not understand the industrial protocol. For example, consider this scenario:

(1) Allow “aaabbbccc”

(2) Allow “dddeeefff”

(3) Deny “bbbcccddd”

Without understanding industrial protocols, the sensor may see a message segment that reads “bbbcc.” Although the segment’s content is visible, the IDS does not know whether it is the second portion of the first “Allow” message or the first portion of the “Deny” message. Without the ability to understand the significance or potential impact of a message, tuning an IDS to block an attack is virtually impossible without an exorbitant number of false positives.
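The scenario above can be reduced to a toy demo. The segment boundaries are attacker-chosen, and the two matchers are deliberately simplistic stand-ins for a signature engine with and without protocol-aware reassembly:

```python
# Toy demo of segmentation evasion: the deny pattern never appears inside
# any single segment, so a per-segment matcher misses it; reassembling the
# stream first (as a protocol-aware DPI engine would) catches it.
DENY = "bbbcccddd"

segments = ["aaabb", "bcccd", "ddeee", "fff"]  # attacker-chosen split

def per_segment_match(segments, pattern):
    # What a protocol-unaware IDS effectively does.
    return any(pattern in seg for seg in segments)

def reassembled_match(segments, pattern):
    # What a protocol-aware engine does after reassembly.
    return pattern in "".join(segments)

naive_alert = per_segment_match(segments, DENY)           # attack slips by
protocol_aware_alert = reassembled_match(segments, DENY)  # attack caught
```

Real reassembly must of course follow the protocol's actual framing rules; the string join here just illustrates why inspecting fragments in isolation fails.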

Leverage legitimate protocol functionality for illegitimate reasons – Attackers can use functions of a control protocol’s intended feature set for malicious purposes. Consider the damage that can be done to uptime and production if any of the following were used inappropriately: turning devices off, changing IP addresses, modifying names, altering settings, modifying firmware, restarting devices, and more. For example, a subcontractor that performs a small portion of a larger process has misconfigured gear that communicates with your equipment. The misconfigured gear can be used to modify coils, outputs, tags, and other parameters. Without the context to know who (or which device) is permitted to use a particular function, operators of traditional IDS are left with one option: opening or closing a port, an all-or-nothing approach that is impractical and unusable.

Bypass exploit signatures – Exploits normally have short life cycles, so vendors of enterprise IT IDS take easy shortcuts in developing signatures.  These signatures are very good at detecting known exploits, but insufficient in detecting the source vulnerability that led to the exploit.  There is therefore a clear danger that attackers can easily modify an exploit to bypass the signatures. For example, many bad IDS signatures contain a pattern such as "\x41\x41\x41\x41,” which is really just a sequence of "AAAA" that the researcher was using to fill space arbitrarily. An intermediate attacker can recognize this pattern and replace the 'A's with 'B's or another letter/number to bypass the exploit-specific protection. Without understanding the software flaw that led to the security concern, full protection is impossible. So, what is the meaning behind the actual data? Is it the number of 'A's that leads to the problem? Perhaps the application only expects to receive 2 characters, but getting 4 causes it to crash. Does the number in that section of the message have any limits? The letters "AAAA" are the same as the number 1094795585 from the computer's perspective, so perhaps that number is not supposed to be above 70,000. Does that part of the message even matter for the attack? The sequence "AAAA" may just separate two more important sections of the message, or pad it to the correct length, and not actually matter. Is the key just one of these items, a combination, or all of the above? Without knowing these kinds of details, IDS vendors are always in catch-up mode.
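A minimal sketch of that difference, assuming a hypothetical device that crashes when a field exceeds two bytes: the exploit signature matches only the published proof-of-concept's filler bytes, while a vulnerability-based check catches trivial variants too.

```python
# Toy contrast between an exploit signature and a vulnerability check.
# Assumed flaw (hypothetical): the device parses a field into a fixed
# 2-byte buffer, so any payload longer than 2 bytes triggers it.
import re

exploit_signature = re.compile(rb"\x41\x41\x41\x41")  # matches only "AAAA"

def signature_detects(payload: bytes) -> bool:
    # What an exploit-specific signature does.
    return exploit_signature.search(payload) is not None

def vulnerability_detects(payload: bytes) -> bool:
    # Checks the root cause (oversized field), not one exploit's filler bytes.
    return len(payload) > 2

original = b"AAAA"   # the published proof-of-concept
variant  = b"BBBB"   # trivial A-to-B swap bypasses the signature
```

The vulnerability check keeps working no matter which filler bytes the attacker chooses, which is exactly why signatures written against the flaw outlast signatures written against one exploit.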

These represent the key attack scenarios that can bypass enterprise IDS and threaten industrial networks.  Because enterprise IT networks and industrial networks differ in these ways, their security solutions must account for the differences to provide the protection ICS environments require. To combat attacks on industrial networks, system operators therefore need an IDS with specific protections against industrial attacks.

To counter the above attacks, an industrial next gen firewall featuring industrial IDS must have the following:

A Deep Packet Inspection (DPI) engine is designed to understand the industrial protocols relevant to industrial control systems.  Some protocol examples include PROFINET or CIP for industrial automation, IEC 60870-5-104 or IEC 61850 for electrical substation automation, and many others.  Once the IDS understands a protocol, it has the intelligence to properly reassemble segments into meaningful messages.  And it is with these messages that the industrial IDS enables organizations to make properly informed security decisions.

Granular policy controls set specific parameters to determine when communication is allowed.  Actual parameters are highly specific to the industrial protocol. These parameters include items that determine: (1) “Who” – IP addresses, MAC addresses, protocol addressing information (i.e. slave/station address in Modbus), and more; (2) “How” – function codes, operations, data types, and primitive types; and (3) “About what” – coil/IO numbers, memory addresses, tag names, and allowed values. Understanding these parameters in conjunction with the protocol used and the specific context gives system operators the visibility to take action on illegitimate use of functions and commands.
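As a sketch of what "who / how / about what" rules might look like in practice: the addresses, function names and coil ranges below are invented, loosely modeled on Modbus, and are not any vendor's actual rule syntax.

```python
# Sketch of granular, protocol-aware policy. Each rule says who may invoke
# how (function) about what (coil range). All values are illustrative.
RULES = [
    # (source_ip, function, coil_range, action)
    ("10.0.5.20", "write_coil", range(100, 200), "allow"),  # engineering station
    ("10.0.5.20", "read_coils", range(0, 1000), "allow"),
    ("*",         "read_coils", range(0, 1000), "allow"),   # reads open to plant net
]

def decide(source_ip: str, function: str, coil: int) -> str:
    for ip, func, coils, action in RULES:
        if ip in ("*", source_ip) and func == function and coil in coils:
            return action
    return "deny"  # default-deny anything not explicitly described

# The misconfigured subcontractor gear from the example above tries a write:
verdict = decide("10.0.9.77", "write_coil", 150)
```

Contrast this with a port-level firewall: the subcontractor's reads still work, but its unauthorized writes are denied, with no all-or-nothing port decision required.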

Protection against vulnerabilities instead of protection against exploits is needed to ensure long-lasting security. Industrial gear is designed to be in service for decades with minimal interaction from system operators, and device firmware may stay on older revisions for extended periods.  Protection therefore needs high security efficacy to alleviate concerns about patch frequency. When considering an industrial IDS, ensure that the vendor has the people, expertise, and experience to fully understand the vulnerability when creating signatures for the DPI engine. Also, work with vendors that have the key relationships that give them access to full vulnerability information from device vendors, government sources, third-party independent researchers, and the original researcher who found the vulnerability. In addition, an ICS-focused IDS will ensure that proper prioritization and research resources are dedicated to understanding each vulnerability, whereas an IT-focused IDS would give ICS vulnerabilities low priority.

Protection profiles provide in-depth mitigation processes.  Many times, a signature is not enough.  There will be times when more is needed: a patch, an update on configuring the IDS, additional background information and more. Guidance and direction are therefore needed for the additional mitigation steps.  When considering an effective IDS solution, ensure that protections extend beyond signatures alone. For example, protections should include the following:

· Policy enforcement ensures that set policy prevents system attacks or misuse that can impact system productivity and reliability

· IDS signatures for vulnerabilities help secure the system from the root vulnerability, defending it against any exploit that may try to take advantage of the weakness. This results in greater accuracy for broader protection and security efficacy, even while using fewer signatures

· Patching updates are recommended for vulnerable systems to ensure that proper versions address security concerns

And of course, all industrial IDS functionality needs to be easily deployed and managed.  IDS is a key component of an industrial next gen firewall, so both must be deployed on the same firewall device.  IT security staff may lack resources or experience with industrial equipment, and OT teams may not have the security expertise.  So, regardless of whether the IT team or OT team is taking point, the right solution needs to offer simplified security administration with an easy-to-use graphical interface (i.e. no command-line interface required) to enhance management and deliver visibility across the network.

In summary, there are differences between industrial control systems and enterprise IT networks, resulting in different security needs. Since current enterprise IDS solutions are not designed to protect industrial networks, system operators must opt for an industrial next gen firewall with an IDS that fully understands industrial protocols and the specific context of each industrial command. Knowing that industrial networks are difficult and costly to patch, an industrial next gen firewall “must-have” is protection against vulnerabilities rather than exploits, to ensure long-lasting, effective security. For new installations and for upgrade projects, be sure to include security budget for an industrial next gen firewall to effectively protect your company’s assets, productivity, and revenues.

About the Author: Nate Kube is founder and Chief Technology Officer at Wurldtech Security Technologies.

Related: Register for the 2014 ICS Cyber Security Conference

Copyright 2010 Respective Author at Infosec Island]]>
Cyber Security and the Electric Grid – It IS a Problem Mon, 15 Sep 2014 11:31:26 -0500 Politico ran an article, “U.S. grid safe from large-scale attack, experts say,” and Digital Bond hosted a discussion of it. Below is my response, as I don’t believe the “experts” understand the issues, including Aurora:

The electric grid has been, and continues to be, susceptible to unintentional and malicious cyber incidents. There have already been 4 major cyber-related electric outages in the US. I am currently supporting the US DOD on Aurora hardware mitigation and so have a pretty good idea of the issues. Aurora is the cyber exploitation of the physical gap in protection of the electric grid affecting EVERY substation.

The DOD program has installed Aurora hardware mitigation at 2 utilities and we are starting to get data which will be presented at the October ICS Cyber Security Conference in Atlanta. As best as I can tell, there has been at least one Aurora attack that destroyed a power plant overseas. Aurora has several unique features- it can defeat predictive maintenance programs, it can cause multiple failures either simultaneously or over time, and it uses the electric grid to attack the equipment connected to the grid.  Now consider that DHS essentially provided a hit list of critical infrastructure that can be destroyed by Aurora including refineries, water systems, and gas pipelines. This is a very big problem.

This was cross-posted from the Unfettered blog.

Copyright 2010 Respective Author at Infosec Island]]>
Using Network Intelligence to Turn the Table on Hackers Mon, 15 Sep 2014 11:24:01 -0500 As technological innovations transform every aspect of modern life, innovations in cybersecurity threats are evolving in lockstep and often even outpace the IT department’s ability to protect critical data. The ingenuity of malicious actors knows no bounds, exploiting vulnerabilities in open source software, unleashing ransomware and looking for inroads to access organizational and customer data. With attacks that seem to come from all directions, IT security teams have the difficult task of creating new strategies that will secure their organization’s data.

Attackers often employ a pattern of steps, each of which leaves a distinct trail for security teams in the know. Teams that possess the proper threat intelligence about their organizations’ extended networks can typically discover and halt an attack before damage occurs. Sources of that intelligence include commercially available information, ongoing analysis of user behavior and native intelligence from within the organization. Teams that use intelligence inherent in the network will gain insight into how cyber actors operate and how to quickly shut them down.

A Pragmatic Approach

To quickly stop an attack and secure vital data, organizations must implement a security strategy that outwits attackers and addresses the extended network.

Addressing the attack continuum – before, during and after – is a logical approach and a cyclical process for anyone in the security profession. It is helpful to examine each of these stages in detail:

Before an incident occurs, cybersecurity staff are vigilant, watching for any area of vulnerability. Historically, security has been all about defense. Today, teams are setting up ways to halt intruders more intelligently by gaining total visibility into their environments, including, but not limited to, physical and virtual hosts, operating systems, applications, services, protocols, users, content and network behavior. Defenders can use this knowledge to take action before an attack has even begun.

During an incident, time is of the essence. Security professionals must be able to identify and understand threats and how to stop them quickly to minimize exposure. Tools including content inspection, behavior anomaly detection, context awareness of users, devices, location information and applications are critical to understanding an attack, as it is occurring. Security teams also need visibility into where, what and how users are connected to applications and resources.

After an incident, security staff must understand what happened and how to reduce any damage that may have occurred. Advanced forensics and assessment tools help security teams learn from attacks. Where did the attacker come from? Where did they find a vulnerability? Could anything have been done to prevent the breach? More importantly, retrospective security enables an infrastructure that can continuously gather and analyze data to create security intelligence. Compromises that would have gone undetected for weeks or months can be identified, scoped, contained and remediated.

Inside Intelligence

It makes sense then that intelligence and understanding are critical elements of any defensive strategy. Cybersecurity teams are constantly trying to learn more about malicious actors, why and how they are attacking.  This is where the extended network provides unexpected value – delivering a depth of intelligence that cannot be attained anywhere else in the computing environment. Similar to counterterrorism, intelligence is key to stopping attacks before they happen.

As sometimes occurs on the actual battlefield, the success of attacks in cyberspace cannot be predicted solely according to the amount of resources on each side.

Relatively small adversaries with limited means can inflict disproportionate damage on larger adversaries. In these asymmetric environments, intelligence is one of the most important assets for addressing threats. But intelligence alone is of little benefit without an approach that optimizes the organizational and operational use of intelligence.

An example of optimizing intelligence would serve us well here. With network analysis techniques that enable the collection of IP network traffic as it enters or exits an interface, security teams can correlate identity and context. This allows security teams to combine what they learn from multiple sources of information to help identify and stop threats. Sources include what they know from the Web, what they know that’s happening in the network and a growing amount of collaborative intelligence gleaned from exchange with public and private entities.  
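A toy version of that correlation, assuming flow records have already been reduced to (user, destination, bytes) tuples: the hosts, users and threshold below are invented for illustration only.

```python
# Sketch of correlating NetFlow-style records with identity to flag
# anomalies. Records, known hosts, and the threshold are all illustrative.
from collections import defaultdict

flows = [
    # (user, destination, bytes_out)
    ("alice", "crm.internal",        120_000),
    ("alice", "mail.internal",        80_000),
    ("bob",   "crm.internal",         90_000),
    ("bob",   "203.0.113.9",      52_000_000),  # large transfer to unknown host
]

KNOWN_HOSTS = {"crm.internal", "mail.internal"}
EXFIL_THRESHOLD = 10_000_000  # bytes sent to an unknown destination

def suspicious(flows):
    # Group oversized transfers to unknown destinations by user identity.
    per_user = defaultdict(list)
    for user, dest, nbytes in flows:
        if dest not in KNOWN_HOSTS and nbytes > EXFIL_THRESHOLD:
            per_user[user].append((dest, nbytes))
    return dict(per_user)

alerts = suspicious(flows)
```

The value comes from the join: raw flow volume alone means little, but volume tied to a user identity and an unknown destination is an actionable lead.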

Best practices for optimal security include a comprehensive threat containment strategy for the entire attack continuum – before, during and after – to discover threats and defend and remediate against them. Ultimately, threat intelligence used at the organizational and operational level will provide a more comprehensive security posture.

About the author:

Greg Akers is senior vice president, Advanced Security Initiatives Group - Security and Trust Organization at Cisco Systems.



Copyright 2010 Respective Author at Infosec Island]]>
The JPMorgan Chase Breach: How Are Hackers Stealing Your Credentials? Thu, 11 Sep 2014 12:49:15 -0500 If you needed more proof that authentication attacks are on the rise, look no further than the recent JP Morgan Chase data breach. The investigation is still underway, but Proofpoint security researchers have analyzed the 150,000 phishing emails that hit JP Morgan Chase customers to find that attackers are rolling out more than one way to exploit stolen credentials.

In a campaign Proofpoint has dubbed ‘Smash & Grab’, the emails not only ask users to submit their credentials; the spoofed page also redirects users to a RIG exploit kit via a malicious iframe. RIG checks a machine to see if it’s vulnerable, and then installs the banking Trojan Dyre on the user’s machine.

Symantec reported that RIG checks a computer for certain vulnerabilities found in:

  • Microsoft’s IE (CVE-2013-2551, CVE-2014-0322)
  • Silverlight (CVE-2013-0074)
  • Adobe Flash (CVE-2014-0497)
  • Java (CVE-2013-2465, CVE-2012-0507)


This means even if a user doesn’t give away their password, there’s a chance their machine may be infected anyway. If a user enters their credentials into the spoofed page, they’re served an error report telling them they need a Java update. Users are then prompted to download a fake Java executable file, which effectively installs the Dyre Trojan on their machine.

Dyre is a Remote Access Trojan (RAT) designed to steal passwords and data. First reported in June, this newly discovered Trojan can bypass SSL within the browser to steal credentials, targeting a list of U.S. and international banks including Bank of America, NatWest, Citibank, RBS and Ulster Bank. One distinction of this malware is that it doesn’t encrypt the data it POSTs back to the attacker’s command and control (C&C) servers.

Other credential-harvesting email methods include PDF and zip attachments that attempt to install Dyre once users open them.

Whew - that’s a lot of ways attackers are trying to steal your credentials.

These attacks underscore how heavily attackers prize stolen credentials: they risk being found out by using multiple different methods, whether malware or traditional social engineering, to procure legitimate usernames and passwords.

As the SANS Institute Editor’s Note by William Hugh Murray pointed out in the SANS NewsBites Vol. 16 Num. 69 e-newsletter:

The layered security architectures of money center banks are the targets of daily and resourceful attacks. Almost by definition, some of these attacks enjoy at least limited success. If there were no success at all, the attackers would tire, retire, or seek softer targets. That said, such success should not, as in this case, include “gigabytes of sensitive data.” That it did so suggests insufficient layers and monitoring. Strong Authentication should be the first layer.

At Duo Security, we’re always keeping a close eye on the latest data breach news and how two-factor authentication can play a role in protecting consumers and businesses. By enabling two-factor authentication as the first layer of defense, online banking and financial firms can protect themselves and their users from attacks that steal passwords and successfully authenticate from a remote location and device.

Two-factor authentication is highly recommended by the Federal Financial Institutions Examination Council (FFIEC), the governing body working to secure and standardize web-based financial services for the industry. The FFIEC states that single-factor authentication (only a username and password) is not adequate for:

  • Sensitive communications
  • High-dollar value transactions or
  • Privileged user access (i.e., network administrators)

They also recommend using out-of-band authentication (OOBA) for transactions on the premise that it is more secure than other two-factor authentication methods.
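For illustration only (neither the FFIEC guidance nor this article prescribes a particular mechanism), the server-side check behind the most common second factor, a time-based one-time password per RFC 6238, can be sketched as follows. OOBA differs in that the code travels over a separate channel such as a phone call or push notification, but the verification logic on the server is similar:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_second_factor(secret_b32, submitted, now=None):
    """A password alone is not enough: the submitted one-time code must
    also match before authentication succeeds."""
    return hmac.compare_digest(totp(secret_b32, now), submitted)

# RFC 6238 test vector: secret "12345678901234567890", T=59s -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, for_time=59, digits=8) == "94287082"
```

Because the code is derived from a shared secret and the current time window, a phished password on its own is not enough to authenticate from the attacker's machine.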

Interested in learning more? You can find out more about the standards and learn more about OOBA in Answer to OTP Bypass: Out-of-Band Two-Factor Authentication.

About Thu Pham

Thu Pham covers current events in the tech industry with a focus on information security. Prior to joining Duo Security, Pham covered security and compliance for the infrastructure as a service (IaaS) industry at Online Tech. Based in Ann Arbor, Michigan, she earned her BS in Journalism from Central Michigan University.

Copyright 2010 Respective Author at Infosec Island]]>
Challenges with MSSPs? Thu, 11 Sep 2014 11:44:00 -0500 Let’s get this out of the way: some MSSPs REALLY suck! They have a business model of “we take your money and give you nothing back! How’d you like that?”

A few years ago (before Gartner), I heard from one MSSP client who said “I guess our MSSP is OK; it is not too expensive. However, they never call us – we need to call them [and they don’t always pick up the phone].” This type of FAIL is not as rare as you might think, and there are managed security services providers that masterfully create an impression in their clients’ minds along the lines of “security? we’ll take it from here!” and then deliver – yes, you guessed right! – nothing.

At the same time, I admit that I need to get off the high horse of “you want it done well? do it yourself!” Not everyone can boast about their expansive SOC with gleaming screens and rows of analysts fighting the evil “cyber threats”, backed up by solid threat intelligence and dedicated teams of malware reversers and security data scientists. If you *cannot* and *will not* do it yourself, an MSSP is of course a reasonable option. Also, lately there have been a lot of interesting hybrid models of MSSP+SIEM that work well … if carefully planned, of course. I will leave all that to later posts as well as my upcoming GTP research paper.

So let’s take a hard look at some challenges with using an MSSP for security:

  1. Local knowledge – be it of their clients’ business, IT (both systems and IT culture), users, practices, etc – there is a lot of unwritten knowledge necessary for effective security monitoring and a lot of this is very hard to transfer to an external party (in our MSSP 2014 MQ we bluntly say that “MSSPs typically lack deep insight into the customer IT and business environment”)
  2. Delineation of responsibilities – “who does what?” has led many organizations astray since gaps in the whole chain of monitoring/detection/triage/incident response are, essentially, deadly. Unless joint security workflows are defined, tested and refined, something will break.
  3. Lack of customization and “one-size-fits-all” – most large organizations do not look like “a typical large organization” (ponder this one for a bit…) and so benefiting from “economies of scale” with security monitoring is more difficult than many think.
  4. Inherent “third-partiness” – what do you lose if you are badly hacked? Everything! What does an MSSP lose if you, their customer, are badly hacked? A customer… This sounds like FUD, but this is the reality of different position of the service purchaser and provider, and escaping this is pretty hard, even with heavy contract language and SLAs.

In essence, MSSP may work for you, but you need to be aware of these and other challenges as well as to plan how you will work with your MSSP partner!

So, has your MSSP caused any challenges? Hit the comments or contact me directly.

This was cross-posted from the Gartner blog.

Copyright 2010 Respective Author at Infosec Island]]>
CERT Pudding and the War on Bad SSL Wed, 10 Sep 2014 11:23:03 -0500 By: Craig Young

On August 21, the CERT Coordination Center at the Software Engineering Institute at Carnegie Mellon University released a MiTM analysis system called CERT Tapioca. CERT/CC is now using this tool to help address a rapidly growing problem many researchers have been taking note of, including myself.

There are many Android applications putting personal data in jeopardy due to improper SSL validation. SSL implementation problems exist in apps of all shapes, sizes and function, ranging from those with little sensitive data and few users to apps with millions of active users handling some of our most sensitive data, such as financial transactions and account login information.

There are in fact too many applications for even a well-staffed team of researchers to manually identify and report all of these issues in a reasonable time frame. Will Dormann, a CERT/CC researcher involved with Tapioca, has responded to this problem by developing an automated process using CERT Tapioca along with a virtualized Android environment to simulate application use and identify whether communication channels are at risk.

His work has led to a spreadsheet listing several libraries and hundreds of applications prone to various SSL implementation problems. Deep in the list of vulnerable apps is one I rate as an extremely critical risk. The PHONE for Google Voice & GTalk application (com.moplus.gvphone) is a useful app that I had been using for more than a year before March 2014, when I noticed Google account credentials showing up in the logs from my VPN-based SSL MiTM environment.

As it turns out, although parts of this application do in fact validate SSL certificates, the critical piece of code that requests Google SID tokens did nothing to verify that it had received a legitimate Google certificate before sending credentials. This means that an attacker could intercept Google credentials, as well as returned session ID values, giving the attacker access to any data associated with the account. Fortunately (for me), the compartmentalized approach to security I employ meant that the credentials for my ‘real’ Gmail account were never exposed.
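The app's actual code is Java, but the flawed validation path is, in essence, the TLS-client equivalent of the first function below; this is a hedged Python sketch of the anti-pattern, not the app's source:

```python
import ssl

def insecure_client_context():
    """What the vulnerable code path effectively does: skip both chain
    and hostname verification, so a MiTM proxy's forged certificate is
    accepted and credentials are sent straight to the attacker."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False      # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def secure_client_context():
    """Correct behavior: the default context validates the certificate
    chain against trusted roots and checks the hostname."""
    return ssl.create_default_context()
```

The fix is rarely exotic cryptography; it is usually just refusing to override the platform's default validation, as `secure_client_context` does.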

In general, I won’t use applications if they expect me to enter Google account credentials, since it is difficult to guarantee that the credentials are not uploaded to some shady developer. However, sometimes an app offers functionality I cannot find elsewhere, so I create a new Gmail account just for the app. This protected me, but that may not be the case for the million-plus other users who have downloaded the app.

Upon discovering this issue, I reached out to Mitre and the app developer (Mo+), but only received a response from Mitre, which assigned CVE-2014-2566 on March 21, 2014. I later flagged the application in the Google Play store, but to date I have not received a response from Mo+ or Google regarding the app, which according to CERT/CC was still vulnerable as of September 3, 2014.

Now that it is listed in the CERT/CC spreadsheet, the cat is out of the bag regarding this popular calling app. So, I would urge caution to anyone using the Mo+ PHONE for Google Voice and GTalk app. Due to the security risks, consumers should either remove or avoid the application entirely, or limit its use to ‘trusted’ networks only. Attackers can easily exploit this vulnerability with a WiFi Pineapple or equivalent rogue access point.

I would also encourage anyone who is interested in testing the safety of a specific app to take a look at my VPN based SSL testing strategy documented here.

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
Who Will Foot the Bill for BYOD? Wed, 10 Sep 2014 06:58:10 -0500 The concept of "Bring Your Own Device" seems so simple. Employees can just tote their personal phone or tablet with them to the office – which they're probably doing anyway – and use it for work. Or, they access the corporate network remotely, from home or while on-the-go. BYOD and remote access have always seemed like a win-win arrangement – employers pay less hardware costs and employees gain convenience.

Of course, it's never really been that simple or straightforward. And now, following a ruling by the California Second District Court of Appeal, BYOD looks poised to become even more complicated.

Last month, the court ruled that companies in the state must reimburse employees who use their personal phones for work purposes. Specifically, the ruling covers voice call expenses, and reimbursement is not contingent on an employee's phone plan – even if the employee has unlimited minutes, for example, the employer must reimburse a "reasonable percentage" of the bill.

The consensus in IT circles is that the ruling muddies the water around BYOD. Now that there's a legal precedent for voice call reimbursement, mandatory data reimbursement could be the next shoe to drop. And why wouldn't it? Americans rack up more expenses for mobile data consumption than they do for voice calls. Should the law evolve, and if the California ruling sets a national precedent for other states, many companies may find BYOD no longer saves them that much money.

DataHive Consulting's Hyoun Park has said that the ruling would be a "deal killer" for many companies, while Forrester Research's David Johnson told Computerworld that BYOD could now be "sidetracked" for some companies as IT and business leaders scrum over how the ruling affects their own policies.

The 'Rights' of Employees

The reimbursement issue is one of many that have been whittling away at BYOD's appeal. Also high on that list are security concerns. Employers worry that many workers who participate in BYOD do not use any security features beyond whatever came as the default with the device.

In response, employers have clamped down by adding more security, through supplemental applications and software. This not only undermines the whole concept of BYOD – since the devices are no longer fully the employees' "own" – but there has already been a backlash by employees. Half have said they would stop using a personal mobile device for work if their employer forced them to install security applications. That seems like a very clear line in the sand.

Some have even called for ground rules to dictate the relationship between workers and employers as it relates to BYOD and remote access. Webroot has gone as far as to call for a "BYOD Bill of Rights." Among its eight principles, employees' personal information would remain private, security applications would not degrade the speed or performance of a device, and employees would be able to choose whether to use their personal device for work.

One way for employers to create a secure BYOD environment, without infringing on any of the "rights" employees have defined for themselves, is through a VPN with central management capabilities, in combination with container solutions like Samsung Knox or OpenPeak Secure Workspace.

Network administrators can adopt VPNs to create a secure network tunnel through which devices connect to the corporate network. Central management functionality allows a network administrator to take action as soon as a breach is detected, whether that means revoking network access or deprovisioning a user.
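As a hypothetical sketch of that workflow (no real VPN product's API is quoted here; every class and method name below is invented for illustration), central management boils down to tracking which device belongs to which user and being able to revoke either on a breach event:

```python
# Hypothetical central-management logic: on a detected breach, revoke the
# affected device's VPN access, optionally deprovisioning the user's other
# devices as well. All names are invented for illustration.

class CentralManager:
    def __init__(self):
        self.active = {}                      # device_id -> owning user

    def provision(self, device_id, user):
        """Grant a device access to the VPN tunnel."""
        self.active[device_id] = user

    def on_breach(self, device_id, deprovision_user=False):
        """Revoke the breached device; optionally drop every device
        belonging to the same user. Returns the affected user, if any."""
        user = self.active.pop(device_id, None)
        if user is not None and deprovision_user:
            self.active = {d: u for d, u in self.active.items() if u != user}
        return user

mgr = CentralManager()
mgr.provision("phone-1", "alice")
mgr.provision("tablet-1", "alice")
mgr.provision("phone-2", "bob")
mgr.on_breach("phone-1", deprovision_user=True)   # alice fully deprovisioned
```

The key design point is that enforcement lives at the network edge, not on the personal device itself, which is what keeps this approach compatible with the employee "rights" described above.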

The only way BYOD and remote access will continue to grow is if employers and workers are able to achieve consensus and compromise along the security-convenience spectrum.

Copyright 2010 Respective Author at Infosec Island]]>