Threat Intelligence: Knowledge is Power Tue, 26 May 2015 11:24:47 -0500 Organizations have made massive investments in a variety of security solutions over the years. It is important to understand what investments have been made in security technologies in order to understand the successes and possible challenges they face.

The initial focus was to secure the perimeter by investing in firewalls and intrusion detection systems, and to secure the endpoint by investing in anti-virus solutions to protect the user base.

The issue with firewalls and IDS systems is that they need to be continuously updated, require lots of human intervention, and have no visibility into unknown or zero-day attacks.

The issue with AV solutions and scheduled scanning is that they typically miss malware threats that are stealthy in nature. While these technologies are still must-haves as part of an organization's security portfolio, their focus is on known attacks; they have no visibility into new or unknown attacks.

What followed was the massive investment in SEM/SIM/SIEM solutions. The problem with these solutions, even today, is that customers are not properly prepared to take on large-scale deployments because of a lack of defined processes and, more importantly, a lack of trained people to support them.

Finding the needle in the stack of needles continues to be a major issue with SIEM solutions. In addition, SIEM is only as good as the level of auditing and logging of the reporting devices/systems.

Those same systems require human intervention to keep them current, and they also lack visibility into emerging threats, which can undermine the value of SIEM. Organizations now realize that perimeter defense and feeding logs into SIEM are not enough, and are now investing in solutions that focus on post-prevention/post-compromise.

When the focus shifts to post-prevention, having a forensics capability is important, as is an understanding of what additional data may be required to answer the question of how bad is bad. Customers are now asking questions such as: How was I attacked? When did it happen? Is it still happening? And, most important, who attacked me?

Many post-prevention solutions focus on Advanced Persistent Threats. These solutions focus on advanced targeted attacks and advanced malware.

These advanced attacks are designed to bypass the traditional signature-based solutions mentioned above, which often require user intervention (a "people" issue) to keep them up to date and are only effective when the threat is known.

One example of an Advanced Threat Solution that customers have begun to implement over the last few years is Network Forensic Full Packet Capture (FPC) solutions.

These solutions have been implemented over the last few years in an attempt to combat advanced persistent threats by collecting and performing deep packet inspection on every packet that enters and exits the network.

These solutions are great but require a lot of storage if one is to leverage the data for both real-time and forensic analysis.

In addition, FPC solutions require the analyst (people) to have a thorough understanding of their network environment in order to establish a baseline of known good, and that baseline will need to be updated as new threats emerge.

In addition, the analyst must have a deep understanding of indicators of compromise that could alert them of the unknown threats and advanced targeted attacks and the various techniques that are used.

Another example of an Advanced Threat Solution that customers have invested in is Advanced Malware Detection, which often includes sandbox/simulation technology.

These solutions are built to handle a high volume of data; if something unknown is found, it is sent to the sandbox environment for further analysis. These solutions are not bulletproof, and the analyst must wait for the results of the analysis before action can be taken.

There is also no guarantee that what was found in the simulated environment will directly map to the production environment.

In addition, the intelligence that powers these solutions relies heavily on what is gleaned from the vendor's install base. There are other solutions that customers have invested in, but I wanted to point out some of the more popular ones.

Successful technology deployments center around a balance of people, process and technology. In many cases, the issues customers have faced over the years have centered around people and process.

Organizations have struggled in the past with having enough people and the proper processes in place to ensure that the mean time to remediation when a breach occurs is as short as possible.

Many of these security solutions require full time resources dedicated to the upkeep and maintenance of each solution. Considering the ever-changing threat landscape and the need to perform forensics post compromise, organizations must continue to invest in training of their security teams.

Another way to strengthen security teams is to add Threat Intelligence to enhance visibility. Having early indications of potential threats before they result in compromise keeps security teams better informed. Automated live threat intelligence could help shorten the time it takes to identify potential threats and potentially minimize the frequency of security incidents.

Given the investments that customers have made in security solutions to date, adding Threat Intelligence into the mix is the next logical step.

Live Threat Intelligence can provide security teams with pivotal information about potential threats, as well as insight into the motivation behind some of the more targeted attacks, which security teams need to focus on first.

Gone are the days when you implement a solution, wait for it to alert you of a potential threat, and then begin incident response. Organizations need to take a proactive approach to incident response.

Adding Threat Intelligence into existing processes could improve monitoring, and once the threat intelligence data source is trusted, the data could be used to perform active inline blocking, stopping potential threats before compromise.
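As a minimal sketch of that inline-blocking idea, assuming a hypothetical feed format of one malicious IPv4 address per line (not any particular vendor's format), a trusted feed can be distilled into a blocklist that is consulted before a connection is allowed:

```python
# Minimal sketch: turn a trusted threat-intelligence feed into a blocklist.
# The feed format (one IPv4 address per line, '#' for comments) is a
# hypothetical assumption for illustration only.

def parse_feed(feed_text):
    """Return the set of indicator IPs from a newline-delimited feed."""
    blocklist = set()
    for line in feed_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            blocklist.add(line)
    return blocklist

def allow_connection(src_ip, blocklist):
    """Inline decision: drop traffic from any IP on the blocklist."""
    return src_ip not in blocklist

feed = "# sample feed\n203.0.113.10\n198.51.100.7\n"
blocklist = parse_feed(feed)
print(allow_connection("203.0.113.10", blocklist))  # False: blocked
print(allow_connection("192.0.2.55", blocklist))    # True: allowed
```

In practice the "trusted" part matters most: a real deployment would refresh the feed on a schedule and score indicators before acting on them automatically.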

This was cross-posted from the Dark Matters blog.

Copyright 2010 Respective Author at Infosec Island
10 Ways to Detect Employees Who Are a Threat to PHI Tue, 26 May 2015 11:19:52 -0500 It’s why I get up in the morning.

Most people who don't work in security assume that the field is very technical, yet really it's all about people. Data security breaches happen because people are greedy or careless. 100% of software vulnerabilities are bugs, and most of those are design bugs which could have been avoided or mitigated by 2 or 3 people talking about the issues during the development process.

I’ve been talking to several of my colleagues for years about writing a book on “Security anti-design patterns” – and the time has come to start. So here we go:

Security anti-design pattern #1 – The lazy employee

Lazy employees are often misdiagnosed by security and compliance consultants as being stupid.

Before you flip the bozo bit on a customer's employee as being stupid, consider that education and IQ are not reliable indicators of dangerous employees who are a threat to the company's assets.

Lazy employees may be quite smart but they’d rather rely on organizational constructs instead of actually thinking and executing and occasionally getting caught making a mistake.

I realized this while engaging with a client who has a very smart VP – he’s so smart he has succeeded in maintaining a perfect record of never actually executing anything of significant worth at his company.

As a matter of fact – the issue is not smarts but believing that organizational constructs are security countermeasures in disguise.

So – how do you detect the people (even the smart ones) who are threats to PHI, intellectual property and system availability:

  1. Their hair is better organized than their thinking.
  2. They walk around the office with a coffee cup in their hand and when they don’t, their office door is closed.
  3. They never talk to peers who challenge their thinking.   Instead they send emails with a NATO distribution list.
  4. They are strong on turf ownership.  A good sign of turf ownership issues is when subordinates in the company have gotten into the habit of not challenging the coffee-cup-holding VP's thinking.
  5. They are big thinkers.    They use a lot of buzz words.
  6. When an engineer challenges their regulatory/procedural/organizational constructs – the automatic answer is an angry retort “That’s not your problem”.
  7. They use a lot of buzz-words like “I need a generic data structure for my device log”.
  8. When you remind them that they already have a generic data structure for their device log and they have a wealth of tools for data mining their logs – amazing free tools like Elasticsearch and R….they go back and whine a bit more about generic data structures for device logs.
  9. They seriously think that ISO 13485 is a security countermeasure.
  10. They'd rather schedule a corrective action session 3 weeks after a serious security event instead of fixing the issue the next day and documenting the root causes and changes.

If this post pisses you off (or if you like it), contact me, Danny Lieberman. I'm always interested in challenging projects with people who challenge my thinking.

This was cross-posted from the Software Associates blog.

SSL and TLS Update Tue, 26 May 2015 11:16:28 -0500 At the beginning of March, a new vulnerability in SSL and TLS was announced, called FREAK. This compounded last fall's announcement of POODLE, which caused the PCI SSC to abruptly declare SSL and "early" TLS (i.e., TLS versions 1.0 and 1.1) no longer acceptable as secure communication encryption.

In April, the PCI SSC issued v3.1 of the PCI DSS and gave us their take on how to address POODLE. Their plan is to have organizations remediate SSL and “early” TLS as soon as possible but definitely by June 30, 2016. While remediating SSL and “early” TLS, organizations are required to have developed mitigation programs for these protocols until they are remediated. There are some exceptions to the June 30, 2016 deadline for devices such as points of interaction (POI) but those exceptions are few and far between and still require some form of mitigation.

Reading the explanations of the POODLE and FREAK vulnerabilities, while they are technically possible to exploit over the Internet, they are much more likely to be exploited successfully internally. As such, these vulnerabilities are more likely to be used as part of an attacker's toolkit when compromising a network from the inside. This is not good news, as an organization's internal network is much more vulnerable: a lot of appliances and software have SSL and TLS baked into their operation and will not be quickly remediated by vendors, if they are remediated at all (i.e., you will need to buy a new, upgraded appliance). As a result, organizations need to focus on their internal usage of SSL and "early" TLS as well as external usage.

The remediation of these vulnerabilities on the Internet-facing side of your network should be quick: stop supporting SSL and TLS versions 1.0 and 1.1 for secure communications. While I know of a few rare situations where such action cannot be taken, most organizations can simply turn off SSL and TLS v1.0/1.1 and be done with the remediation.
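For an application you control, one way to enforce that policy is a minimal sketch using Python's standard ssl module (assuming Python 3.7 or later for the TLSVersion enum); server frameworks and appliances each have their own equivalent setting:

```python
# A sketch, not a full remediation: build a server-side TLS context that
# refuses SSL and "early" TLS outright.
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Anything older than TLS 1.2 (SSLv3, TLS 1.0, TLS 1.1) is rejected during
# the handshake, before any application data is exchanged.
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

The same context would then be passed to whatever server wraps your sockets; the point is that the protocol floor is set once, centrally, rather than negotiated down by old clients.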

As I pointed out earlier, it is the internal remediation that is the problem, because of all of the appliances and software solutions that use SSL/TLS whose vendors are not addressing those issues as quickly. As a result, the only approach is to mitigate the issues with appliances that are at risk. Mitigation can be as simple as monitoring the appliances for any SSL or TLS v1.0/1.1 connections through log data, or using proxies to proxy those connections.
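A minimal sketch of that log-monitoring idea, assuming a hypothetical log format in which the negotiated protocol appears as the second whitespace-separated field (real appliance logs vary and would need their own parsing):

```python
# Minimal sketch: scan connection logs for SSL and "early" TLS handshakes.
# The log format below is a hypothetical assumption for illustration.

LEGACY = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.0", "TLSv1.1"}

def find_legacy_connections(log_lines):
    """Return (client, protocol) pairs for connections using SSL or early TLS."""
    hits = []
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 2 and fields[1] in LEGACY:
            hits.append((fields[0], fields[1]))
    return hits

logs = [
    "10.0.0.5 TLSv1.0 appliance-a",
    "10.0.0.9 TLSv1.2 appliance-a",
    "10.0.1.3 SSLv3 appliance-b",
]
print(find_legacy_connections(logs))
# → [('10.0.0.5', 'TLSv1.0'), ('10.0.1.3', 'SSLv3')]
```

A report like this at least tells you which internal clients and appliances are still negotiating legacy protocols, which is the inventory you need before you can mitigate or replace them.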

The answer to SSL and TLS vulnerabilities is to remediate as soon as possible. If you are unable to remediate, then you need to mitigate the risk until you can.

This was cross-posted from the PCI Guru blog. 

Cloud Security Monitoring … Revisited (aka It Is Not 2012 Anymore!) Tue, 26 May 2015 11:14:35 -0500 My next project, now that I am done with security analytics for now, is to revisit our cloud security monitoring work. Specifically, some of you remember my 2012 (!) paper “Security Monitoring of Public Cloud Assets”, where I presented these three monitoring architecture choices for your public cloud assets:

  1. Most Monitoring On-Premises – this is essentially about monitoring the cloud environments by using your traditional on-premises tools, sending cloud logs to your SIEM, etc.
  2. Most Monitoring on Monitored IaaS – this is about deploying your monitoring tools inside the monitored cloud (only works for IaaS, naturally)
  3. Most Monitoring via SaaS or Other Third Party [or another cloud] – this one is about using another cloud to monitor your cloud, such as cloud log manager or another monitoring tool (like cloud SWG?)

In reality, back in 2012-2013, by far the most common approach to security monitoring of the public cloud assets was … not to do any. Indeed, while we have seen a tiny number of clients who practiced one or more of the above architectural approaches, most of the rest practiced cloud computing with no security – and thus with no security monitoring. While loud, obnoxious screams “Security FAIL!!” may be heard, the reality is that many organizations used public clouds for stuff that just didn’t matter much, and “no security” was probably about the right amount of security needed. At the same time, industry research seemed to confirm that CSPs were not the source of damaging incidents and “data breaches.”

Boy, have the times changed! The IT media would have us believe that 2010-2012 was the time when “everybody flocked to the cloud” – and I can tell you right away that this is a complete lie. Even now is not the time when everybody uses public cloud computing, and it is most definitely NOT the time when everybody uses cloud for important and business critical stuff. Sure, make no mistake, the use of cloud computing has grown, but mature approaches to security monitoring of the cloud assets are still really, really rare…

Still, I think this research is worth a revisit. Here is what I think really changed – and I would very much welcome your feedback:

  • CASB has risen [no, this is not related to Easter at all :-)] – overall cloud monitoring using the “in-between approach” has matured and has (I think) become a primary approach to be added to the above 3, especially for SaaS
  • Cloud logging has improved: one word – CloudTrail (one SIEM vendor told me that this was the most requested data source to integrate in the entire history of their device integration team)
  • Monitoring agents to be baked into cloud instances have not become mainstream – while I intend to do more research on this, it seems like “monitor IaaS from the agent” has fizzled [it seemed very promising to me in 2012; BTW, if you are a vendor who can prove me wrong on this one, I am happy to be so proven]
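The CloudTrail point above is really architecture #1 in action: cloud logs flowing into your SIEM. As a minimal sketch, the snippet below flattens a CloudTrail-style JSON record (the field names eventTime, eventName, sourceIPAddress and userIdentity are real CloudTrail fields; the values are invented) into the handful of fields a correlation rule typically keys on:

```python
# Minimal sketch of "cloud logs into your SIEM": normalize a CloudTrail-style
# record into a flat event. Values below are invented sample data.
import json

def to_siem_event(record):
    """Extract the fields most SIEM correlation rules key on."""
    return {
        "time": record.get("eventTime"),
        "action": record.get("eventName"),
        "source_ip": record.get("sourceIPAddress"),
        "user": record.get("userIdentity", {}).get("userName"),
    }

raw = json.dumps({
    "eventTime": "2015-05-26T11:14:35Z",
    "eventName": "ConsoleLogin",
    "sourceIPAddress": "203.0.113.42",
    "userIdentity": {"userName": "admin"},
})
event = to_siem_event(json.loads(raw))
print(event["action"])  # ConsoleLogin
```

A real pipeline would pull records from the CloudTrail delivery bucket and ship them over your SIEM's ingestion API; this sketch only shows the normalization step.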

So, got more ideas? Thoughts?

Vendors, want to showcase your relevant technology? Enterprises, got a fun “how I monitored the cloud?” story?

This was cross-posted from the Gartner blog.

Microsoft and the Software Lifecycle Mon, 25 May 2015 14:28:11 -0500 By: Tyler Reguly

For some reason, Europe’s ‘The Final Countdown’ was playing in my head as I sat and pondered this write-up. I suppose that’s fitting given that we are about to cross the 60-day mark until Windows Server 2003 goes End-of-Life.

The concept of product EOL can be confusing, especially given the frequent cross-contamination that exists within Microsoft products. Since I suspect a number of people will spend the next 60 days rushing to polish the legacy Server 2003 systems they can't migrate, to ensure the best possible situation once the EOL occurs, I wanted to address something that people may notice.

I’ve spoken to a few people about it with regard to past Microsoft bulletins, but MS15-048 raises the point again today, so it’s worth visiting.

Something that you may notice in the bulletin is a reference to a .NET 1.1 update for Windows Server 2003 without a matching update for .NET 1.1 on Windows Server 2003 x64. This may become more noticeable if you run a vulnerability management product like Tripwire IP360 and notice that this vulnerability is reported for .NET 1.1 on Windows Server 2003 x64 but can’t find the proper patch to install. Welcome to Microsoft and EOL products.

.NET 1.1 is officially EOL, and it has been since 2013. This means that Microsoft no longer provides updates for .NET Framework 1.1.

This is where your mind says, “But wait, Tyler… there’s a .NET Framework 1.1 patch in MS15-048.”

Your mind is correct. However, it’s not considered to be a .NET Patch; it’s technically a Windows Server 2003 update.

You see, Windows Server 2003 shipped with .NET 1.1, but Windows Server 2003 x64 did not. If you wanted .NET Framework 1.1 on Server 2003 x64, you had to install it yourself. This was done via a stand-alone installer, the one that has been EOL since 2013.

A quick way to tell if an update is available for the version of .NET Framework that shipped with Windows or the standalone version is to look at the update’s filename. Updates for the standalone will start with ‘NDP,’ while updates for the Windows Version will begin with ‘Windows.’
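That filename check can be sketched in a few lines; the sample filenames below are illustrative placeholders, not actual Microsoft package names:

```python
# Sketch of the filename check described above: standalone .NET Framework
# updates start with 'NDP', updates for the copy that shipped with Windows
# start with 'Windows'. Sample filenames are illustrative only.

def update_origin(filename):
    if filename.startswith("NDP"):
        return "standalone .NET Framework installer"
    if filename.startswith("Windows"):
        return "shipped with Windows"
    return "unknown"

print(update_origin("NDP1.1sp1-KB0000000-X86.exe"))
print(update_origin("WindowsServer2003-KB0000000-x86.exe"))
```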

This is a key difference between patch management and vulnerability management. While patch management will tell you which patches are missing, vulnerability management will tell you the vulnerabilities that exist. So, regardless of the vulnerability management tools you use, keep in mind that there may not always be a patch to solve the problem they are reporting.

In this case, the security-conscious individual would want to uninstall .NET Framework 1.1 from their Windows Server 2003 x64 systems in order to remediate the vulnerabilities.

This was cross-posted from Tripwire's The State of Security blog. 

Should MAD Make its Way Into the National Cyber-Security Strategy? Mon, 25 May 2015 14:21:57 -0500 Arguably, Mutually Assured Destruction (MAD) has kept us safe from nuclear holocaust for more than half a century. Although we have been on the brink of nuclear war more than once and the Doomsday clock currently has us at three minutes ‘til midnight, nobody ever seems ready to actually push the button – and there have been some shaky fingers indeed on those buttons! 

Today, the Sword of Damocles hanging over our heads isn't just the threat of nuclear annihilation; now we have to include the very real threat of cyber Armageddon. Imagine hundreds of coordinated cyber-attackers using dozens of zero-day exploits and other attack mechanisms all at once. The consequences could be staggering! GPS systems failing, power outages popping up, banking software failing, ICS systems going haywire, distributed denial of service attacks on hundreds of web sites, contradictory commands everywhere, bogus information popping up and web-based communications failures could be just a handful of the likely consequences. The populace would be hysterical!

So, keeping these factors in mind, shouldn't we be working diligently on developing a cyber-MAD capability to protect ourselves from this very real threat vector? It has a proven track record, and we already have decades of experience in running, controlling and protecting such a system. That experience would ease the public's very justifiable fear of creating a Frankenstein that may be misused to destroy ourselves.

Plus think of the security implications of developing cyber-MAD. So far in America there are no national cyber-security laws, and the current security mechanisms used in the country are varied and less than effective at best. Creating cyber-war capabilities would teach us lessons we can learn no other way. To the extent we become the masters of subverting and destroying cyber-systems, we would reciprocally become the masters of protecting them. When it comes right down to it, I guess I truly believe in the old adage “the best defense is a good offense”.

Thanks to John Davis for this post.

This was cross-posted from the MSI State of Security blog. 

Will Your Contractors Take Down Your Business? Mon, 25 May 2015 10:17:56 -0500 Do you know how well your vendors, business associates, and contracted third parties (whom I will collectively call "contractors") are protecting the information with which you've entrusted them to perform some sort of business activity? You need to know.

Late last year, a study of breaches in the retail industry revealed 33 percent of them were from third party vendor access vulnerabilities. The largest healthcare breach in 2014 was from a business associate (the contractor of a hospital system) and involved the records of 4.5 million patients.

The list of breaches caused by contractors throughout all industries could fill a large book. The damage that your third parties can cause to your business can be significant. Do you know the risks that your contractors and other third parties bring to your organization? Or, will your contractors take down your business because of their poor security and privacy practices?

I've led over 300 contractor information security and privacy assessments. I've seen a lot of crazy things, risky things, and downright incredibly stupid things. I've also seen a lot of common information security and privacy problems that contractors bring to those hiring them.

As a start to your contractor information security and privacy management activities, here are five things to check on when contracting another company to perform services on your behalf, especially those involving personal information.

      1.  Documented information security and privacy policies and procedures need to exist. And not only exist; the employees also need to know they exist, and they need to be actually following them. The policies and procedures also need to be kept updated to address changes in the business environment and risk environment, and to meet changes in legal requirements. A large portion of the contractors I've assessed said they had policies and procedures, but when I asked to see them they've replied something to the effect of, "Oh, they are undocumented but understood policies. We are a small company; we share our policies by word of mouth."

You need to make sure they have documented policies and procedures. If they aren’t documented they don’t exist.

      2.  They need to understand their obligations to appropriately safeguard personal information. In the past year I’ve actually had over a dozen contractors state that they did not believe that they needed to safeguard personal information if that information is discoverable online. What blockheads are continuing to spread this horrible advice? Worse yet, some of these contractors with this belief were even selling the personal information to create another revenue path.

You need to make sure your contractors understand that they must appropriately secure, and not share, the personal information you’ve entrusted to them.

    3.  They need to provide training or awareness activities. Many of the activities contractors say they do for training are not training. One contractor I assessed said their training was the message they sent to their employees telling them to read the information security policies; this is *not* training. Another contractor copied, verbatim, the entire HIPAA regulatory text, pasted it into ~300 PowerPoint slides, and then told their workers to "view" the "training" slides. This is not training. Information security and privacy training, and awareness communications, must actually provide educational value!

You need to make sure your contractors provide regular information security and privacy training to their workers, and regularly send awareness reminders.

     4. They don't perform risk assessments. A large percentage of the contractors I've assessed, around 25 – 30 percent, had never performed a risk assessment. An additional percentage, also around 25 – 30 percent, had performed a risk assessment once, and that was it. Some of those solitary risk assessments were performed 5, 10, or even, in one case, 17 years ago. Yes, these two types of contractors together represent around half of all contractors. You cannot effectively secure information if you do not know where your risks are located, and what kind of risks you have. These types of contractors are leaving your organization vulnerable.

You need to make sure your contractors have a risk management process in place.

    5.  They don't use basic security tools. Encryption, audit logs, mobile computing security tools, patch management, and other basics are not used by many contractors, even contractors providing IT services. Over the years I've found a large majority of contractors did not use encryption on their web sites, even for forms where they were collecting personal information on behalf of the client who contracted them. They also often do not have their mobile devices encrypted, and most also don't encrypt sensitive information they send using emails and text messages. There is also a significant portion not logging access to personal information, and not logging major security events. And surprisingly, many still do not use comprehensive anti-malware tools or firewalls on personal devices. Even if such basic security was required within their SLA, that requirement was often not communicated to those who would need to implement such tools.

Make sure your contractors have basic, expected security tools implemented, beyond just including within the contract and/or SLA. Your contractors need to use basic security tools to protect the information you’ve entrusted to them.
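One of those basic checks is easy to automate when you have a list of the form submission URLs a contractor uses: flag any that would collect personal information without encryption. A minimal sketch (the URLs are hypothetical examples):

```python
# Minimal sketch: flag form submission URLs that do not use HTTPS.
# The URLs below are hypothetical examples for illustration.
from urllib.parse import urlparse

def unencrypted_forms(form_urls):
    """Return the form action URLs that do not use HTTPS."""
    return [u for u in form_urls if urlparse(u).scheme != "https"]

forms = [
    "https://contractor.example/intake",
    "http://contractor.example/newsletter-signup",
]
print(unencrypted_forms(forms))  # ['http://contractor.example/newsletter-signup']
```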

You Cannot Outsource Your Responsibility

This is also a very important thing to know: generally, a hold harmless clause in the contract you use to try to relieve yourself of responsibility for the bad things a contractor may cause will *not* alleviate your responsibility and accountability for breaches and other bad things that occur as a result of your vendors' actions, vulnerabilities, or unaddressed threats. I've heard this from well over half of the organizations I've spoken with and done projects for in the past five or so years.

I am still hearing way too many organizations state something very similar to: “We outsourced so we wouldn’t be liable for the security of the information when it is under the care of the outsourced entity.” It simply does not work that way, folks; for many reasons. But bottom line, your responsibility for securing and using information appropriately follows that information to whomever you have contracted.


Organizations will be judged by the company they keep … the businesses they contract. If organizations don’t want to become proactive about their oversight of those contracted entities, I have a question for them: Are they ready to pay for the security and privacy sins of their contracted entities?

This was cross-posted from the Privacy Professor blog. 

More Possible Common Threads in Major ICS Cyber Incidents – Unintended System Interactions Mon, 25 May 2015 10:10:10 -0500 One of the most important aspects of addressing ICS cyber security is the concept of "systems of systems". Unlike IT, where you can test a box and label it and the system secure, control system cyber security requires testing the overall system. That is because even if the individual boxes are secure, the communication protocols may not be secure, or there may be unintended system interactions.

The crash of an Airbus A400M airlifter that killed four people on May 9 may have been caused by new software that cut off the engine-fuel supply. The aircraft featured new software that would trim the fuel tanks allowing the aircraft to fly certain military maneuvers.

This is not the first instance where unintended ICS interactions have affected the overall system. A number of years ago, a utility connected a dispatch system to the plant distributed control system (DCS) of its most economic unit. Both software systems were tested, but there was no testing of the integrated system. When the systems were connected, the unit was ramped back and forth across its full load range at the maximum ramp rate. The DCS effectively maintained the control variables within constraints, so the operator was not aware of the turbine cycling. This is really important, as operators often rely on the DCS to protect the plant, and in this case it was the DCS causing the problem with no indications.

However, the turbine rotor was subjected to significant stress. The analysis showed the turbine SIGNIFICANTLY exceeded the design stress curves 3 times in the 3-hour period. The event impacted the turbine rotor's lifetime and the dispatch status of the unit, reducing the revenue generated from the ancillary services market. The situation threatened the viability of the unit to compete in the marketplace and will result in the utility having to repair or replace the turbine rotor earlier than expected.

Another case of unintended system interaction was the implementation of a turbine vendor’s security patch. The turbine vendor did not coordinate the new security functionality with the existing engineering design.  The “uncoordinated” patch resulted in the loss of view and loss of control of the turbine.

There have been many other examples of significant system impacts from the unintended consequences of system interactions. To date, most of these problems have been unintentional. However, the impacts have been very significant.

Furthermore, it wouldn’t be that difficult to cause these issues intentionally with small chance of detection.

This was cross-posted from the Unfettered blog.

Traffic Intelligence: Open vs. Closed Crowd-Sourced Thu, 21 May 2015 13:30:56 -0500 In my last blog, we compared cyber reputation to physical real-world reputation. In keeping with the theme of comparisons, I recently travelled 4,400 miles over the holidays and became more familiar than I'll admit with the famous I-95 Interstate Highway traffic patterns.

I started out using Waze, a user-friendly, gamification-based social networking tool that uses open crowd-sourcing of highway traffic. Ultimately, I graduated to the Garmin Premium Traffic Subscription to enrich and automate my intelligence.

On roads with a reasonable amount of traffic, Waze can provide a fair amount of real-time intelligence and situational awareness while driving. The app works nicely with an iPhone and a data plan, giving significant warning of accidents, traffic and even police traps.

Either a co-pilot or the driver can generate alerts for the app to warn other users of the system. While driving you can swipe your hand over the phone to generate an alert, though it takes a few alerts to register (I didn't see my alert post while I was in traffic), and at times there are false positives.

In short, Waze is free and a good companion on a trip and I think that over time Google will improve the user experience (read simplify).

On one side, I don't think reporting while driving is safe, as it causes a distraction at a critical time when a road-side or traffic event should be demanding the attention of the driver.

On the other side, having real-time audio warnings of traffic at a halt ahead did make driving less stressful. The gamification aspect of this app does keep folks who were hooked on Farmville entertained while driving.

Where I felt Waze came up short was in finding alternate routes and providing suggestions with an estimated time savings for the detour. If the feature exists, other Wazers were not using it, as the road stayed clogged. I must admit, many people use the technology.

I also didn’t like the map overlay, and found that any kind of manipulation of the map/app while driving could easily lead to an accident, as my co-pilot had trouble navigating through the screens.

Garmin functions off of a closed crowd-sourcing mechanism, which seemed to be on par or better for traffic notifications but really shined when it came to being a map company.

It automatically suggested better routes, displayed speed limits within the interface and offered guidance on which lane to be in when driving through Washington D.C.; those elements turned out to be where my attention was placed.

Waze was just a little too busy and therefore was used for audio warnings. Garmin just does a much better job of integrating and not taking over the entire driving experience with clutter.

On a number of occasions, I mentally justified paying extra for the premium service as I drove down an empty highway next to the bumper to bumper Wazers on I95.

Aside from the user interface, although Waze is free as an app, the data plan needed while roaming is not, whereas the Garmin is a fixed fee. My copilot enjoyed using Waze on the long trip; I have been using the Garmin even in areas where I don’t need it.

This comparison is quite analogous to the differences I’ve experienced between simple threat feeds and premium threat intelligence offerings. I’ll let you draw your own conclusion as to how.

This was cross-posted from the Dark Matters blog. 

The Logjam Attack: What You Need to Know
Thu, 21 May 2015 13:24:48 -0500

A group of security researchers and computer scientists have recently uncovered a vulnerability in how a Diffie-Hellman key exchange is deployed on the web.

Dubbed Logjam, the vulnerability affects home users and corporations alike, and over 80,000 of the top one million domains worldwide were found to be vulnerable. The original report on Logjam can be found here.

What is Diffie-Hellman?
Diffie-Hellman is used to establish session keys that are a shared secret between two communicating parties. Protocols like SSH (used for secure shell access) or TLS (a common protocol used to secure data on the web) can implement Diffie-Hellman session keys in order to transport data securely.

Common examples of where Diffie-Hellman may be used include securing bank transactions, e-mail communication, and VPN connections, just to name a few.
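
To make the exchange concrete, here is a minimal sketch of the Diffie-Hellman math using Python’s built-in modular exponentiation. The prime used is a toy 32-bit value chosen only for readability; real deployments need well-vetted groups of 2048 bits or more, which is exactly what Logjam is about:

```python
import secrets

# Public parameters agreed in the clear: a prime modulus p and a generator g.
p = 4294967291  # largest 32-bit prime -- far too small for real use
g = 2

# Each side picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent
A = pow(g, a, p)                   # Alice sends A to Bob
B = pow(g, b, p)                   # Bob sends B to Alice

# Both sides derive the same shared secret without ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees only p, g, A and B; recovering the shared secret requires solving a discrete-logarithm problem, which is what makes the group size matter.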

What did the researchers discover?
Unfortunately, many implementations of Diffie-Hellman on web servers use weak or commonly shared parameters.

The researchers behind the Logjam attack found these web servers to be vulnerable, allowing an attacker to read or alter data on a secure connection. According to their report:

We identify a new attack on TLS, in which a man-in-the-middle attacker can downgrade a connection to export-grade cryptography. This attack is reminiscent of the FREAK attack [6], but applies to the ephemeral Diffie-Hellman ciphersuites and is a TLS protocol flaw rather than an implementation vulnerability

While much of the research is performed against a Diffie-Hellman 512-bit key group, the researchers behind the Logjam discovery also speculate that 1024-bit groups could be vulnerable to those with “nation-state” resources, making a suggestion that groups like the NSA might have already accomplished this. A comprehensive look at all of their research can be found here.
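
To see why small groups are dangerous, note that recovering the secret exponent is a discrete-logarithm problem whose cost grows with the group size. For a toy group it can be brute-forced instantly; the 512-bit case the researchers broke requires the far more sophisticated precomputation their paper describes, but the principle is the same. A deliberately simplified sketch:

```python
def discrete_log(g, target, p):
    """Brute-force the exponent x such that g**x % p == target."""
    value = 1
    for x in range(p):
        if value == target:
            return x
        value = (value * g) % p
    return None

# Toy prime group; real groups are hundreds of digits long.
p, g = 10007, 5
secret = 1234
public = pow(g, secret, p)

# An eavesdropper who can solve the discrete log recovers the secret
# exponent, and with it the session's shared key.
recovered = discrete_log(g, public, p)
assert recovered == secret
```

With a 512-bit group the loop above is hopeless, but the number-field-sieve precomputation the researchers performed plays the same role: once done for a widely shared group, individual connections using it fall quickly.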

This all sounds great. What do I need to do?
You can use the link above to discover if your browser is vulnerable. If it is, the test page will display a warning image.


At the time of this writing, patches are still in the works for all the major web browsers, including Chrome, Firefox, Safari, and Internet Explorer. They should be released in the next day or two, so ensure your browser updates correctly once the patch is released. These updates should reject Diffie-Hellman key lengths of less than 1024 bits.

In the meantime, you may want to use a Virtual Machine and avoid entering sensitive information into website forms.

For those running web servers that implement Diffie-Hellman, make sure the key group is 1024-bit or larger. There is also a help page that can be found here.

This was cross-posted from the Malwarebytes blog. 

Microsoft Patching: Don’t Forget to Read the Fine Print
Thu, 21 May 2015 09:58:38 -0500
By: Lane Thames

During my career, I have built and managed hundreds of production-level client and server systems, and nothing can be more worrisome than when it comes time to apply patches and upgrades to software. Why? Because things can, and oftentimes do, go wrong during patch and upgrade cycles.

According to a few reports, it is possible that system administrators will have some minor side effects to deal with after applying this month’s patches. I cannot really comment on the accuracy of the failure reports that are surfacing. However, I can say that Microsoft’s May 2015 Patch Tuesday contained a few complexities that, if nothing else, could result in confusion for administrators.

So, let me explain. First, let’s look at the overall bulletin numbers.

Microsoft released 13 bulletins: MS15-043 through MS15-055. These thirteen bulletins covered 47 unique CVE IDs. With 47 unique CVE IDs, we can assume that at least 47 vulnerabilities were addressed, since a single CVE ID is sometimes used to track more than one vulnerability.

Further, these 13 bulletins touched a slew of products and subsystems, including kernel, kernel mode drivers, Microsoft Office, .NET, Silverlight, Lync, SharePoint, SCM, JScript, VBScript, MMC, Schannel, and, of course, Internet Explorer. Indeed, it was a big patch cycle for system admins to deal with.

Second, we have MS15-052 and MS15-055. MS15-052 addressed a security feature bypass in the Windows kernel, whereas MS15-055 addressed an information disclosure vulnerability in Schannel (Secure Channel).

One potential area of confusion for admins, as well as a source of potential patch installation errors, related to these two bulletins is that KB3061518 in MS15-055 actually supersedes KB3050514 in MS15-052. According to Microsoft, manual installation of these patches requires that administrators install MS15-052 first, before installing MS15-055.
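
That ordering constraint can be thought of as a tiny dependency graph: a superseded patch must land before the patch that supersedes it. The sketch below is purely illustrative (it models only the two KB numbers from these bulletins, not a real deployment tool):

```python
# Map each patch to the patches it supersedes. Per the MS15-052/MS15-055
# note, a superseded patch must be installed before its superseder when
# installing manually.
supersedes = {
    "KB3061518": ["KB3050514"],  # MS15-055's update supersedes MS15-052's
    "KB3050514": [],
}

def install_order(graph):
    """Return an order where every superseded patch precedes its superseder."""
    order, seen = [], set()

    def visit(kb):
        if kb in seen:
            return
        seen.add(kb)
        for dep in graph.get(kb, []):
            visit(dep)          # install what it supersedes first
        order.append(kb)

    for kb in graph:
        visit(kb)
    return order

print(install_order(supersedes))  # ['KB3050514', 'KB3061518']
```

For two patches this is overkill, but the same depth-first idea scales to the larger supersession chains that patch-management tools track automatically.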

One of the reports surfacing is related to machines not being able to contact licensing servers after installing the Schannel patch. I don’t suspect that issue is related to the MS15-052/MS15-055 supersession and upgrade sequence; it is more likely due to some other software dependency. Software dependencies are a huge factor that must be considered when developing, testing and deploying any type of patch or upgrade.

Lastly, we have MS15-044. MS15-044 was a beast of a bulletin.

One area of confusion results from several of the updates provided by MS15-044 shipping update files identical to those provided by other bulletins released in the same cycle.

For example, MS15-049 addressed an elevation of privilege vulnerability in Silverlight, whereas MS15-044 addressed, amongst other things, a remote code execution vulnerability in Silverlight due to improper processing of TrueType fonts. However, both of these bulletins shipped the exact same patch set given as KB3056819.

This is not a huge deal but could cause confusion for those who choose to install patches manually and who don’t read the fine print.

This was cross-posted from Tripwire's The State of Security blog.

Shock Therapy for Medical Device Malware
Wed, 20 May 2015 13:00:54 -0500

Israel has over 700 medical device vendors. Sometimes it seems like half of them are attaching to the cloud and the other half are developing mobile apps for all kinds of crazy, innovative applications (for example, Visual Input Turned Into Powerful Medical Insight; translation: an app that lets you do urine analysis using your smart phone).

But let’s not forget that many medical devices, such as bedside monitors, MRI, nuclear medicine and catheterization devices, reside on today’s hospital enterprise network.

An enterprise hospital network is a dangerous place.

Medical devices based on Microsoft Windows can be extremely vulnerable to attack from hackers and malware that penetrate the hospital network and exploit typical weaknesses such as default passwords.

More importantly – medical devices that are attached to a hospital network are a significant threat to the hospital network itself since they may propagate malware back into the network.

While a thorough software security assessment of the medical device and appropriate hardening of the operating system and user-space code is the best way to secure a medical device in a hostile hospital network – this is not usually an option for the hospital once the medical device is installed.

Taking a page out of side-channel attacks and using the technique to detect malware, University of Michigan researchers have developed WattsUpDoc, a system designed to detect malware on medical devices by noting small changes in their power consumption.

The researchers say the technology could give hospitals a quick way to identify medical devices with significant vulnerabilities.

The researchers tested WattsUpDoc on an industrial-control workstation and on a compounder, which is used to mix drugs.

The malware detector first learned the devices’ normal power-consumption patterns. Then it tested machines that had been intentionally infected with malware. The system was able to detect abnormal activity more than 94 percent of the time when it had been trained to recognize the malware, and up to 91 percent of the time with previously unknown malware. The researchers say the system could alert hospital IT administrators that something is wrong, even if the exact virus is never identified.
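
The general approach (learn a baseline power profile, then flag deviations) can be sketched with simple statistics. This is my own deliberately simplified illustration, not the researchers’ actual classifier, which used machine-learning features over the power traces:

```python
import statistics

def train_baseline(samples):
    """Learn the normal power draw (watts) from clean-device readings."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(reading, mean, stdev, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from baseline."""
    return abs(reading - mean) > threshold * stdev

# Hypothetical readings from an uninfected device during normal operation.
clean = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]
mean, stdev = train_baseline(clean)

# Malware doing extra computation typically raises the device's power draw.
print(is_anomalous(20.0, mean, stdev))  # False: within the normal range
print(is_anomalous(24.5, mean, stdev))  # True: well outside the baseline
```

The appeal of this side-channel approach is that nothing runs on the medical device itself; the monitor only needs to sit on the power line.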

For the full article see WattsUpDoc.

This was cross-posted from the Software Associates blog. 


Banking Customers Under Attack Beyond the Perimeter
Wed, 20 May 2015 10:21:01 -0500

Although banking and financial services continue to be the target of increasingly effective cyber-attacks, the main security strategy for most banks is based on a perimeter that ends at their internal network’s firewall. While this approach might stave off some direct attacks, it can fail to protect a bank’s customers or brand equity from attackers leveraging web assets and mobile apps that sit well outside the traditional security perimeter.

Cyber criminals have figured out there are plenty of ways to defraud your customers without having to pierce your well-defended firewall – two recent examples are:

In 2014, the Dyre Wolf attack used automated social engineering tactics to gain access to depositor accounts. A spear phishing email was used to deliver the malware, which altered the display of a bank’s website, tricking customers into calling a fake customer service number and giving up their account credentials. Not only did this attack not require any direct breach of the bank’s perimeter, it completely side-stepped all the two-factor authentication methods being added to secure customer sessions.

In 2015, the Dridex banking Trojan used macro-infested XML files within Excel attachments to phishing emails posing as remittance or payment notifications. When users opened the attachments, the malware mimicked banking websites, capturing user credentials that were later used for theft of funds.

While these specific attacks leveraged consumer trust in banking websites and support lines, this problem also affects mobile devices. An earlier survey by RiskIQ determined that out of 350,000 banking-related Android apps, over 40,000 (11%) were confirmed to contain malware or were flagged as suspicious by a consortium of 70+ AV vendors, with roughly 50% of those having signatures consistent with mobile-based Trojan malware.

To provide a quantitative assessment of these threats, RiskIQ performed a security survey of the web assets and mobile apps associated with each of the top 35 banks and financial service firms.

The survey scanned over 260,000 web assets and uncovered numerous unsecured assets, exploitable components, and misconfigured websites:

  • 100% had web assets hosted outside of internal networks managed by their IT group
  • 61% of the web assets were actually outside the firewall
  • 80% relied on one or more external web servers outside the firewall
  • All but 2 had one or more embedded analytics or tracking services
  • Over 30% used 10 or more third-party JavaScript libraries, averaging 7 per site
  • 97% had a minimum of 13 broken SSL certificates, averaging 431 per bank
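
As a small illustration of one class of check behind numbers like the broken-certificate count above, a certificate’s notAfter field can be compared against a point in time using Python’s standard library. This sketch works offline on an already-retrieved certificate field; an expired certificate is only one of several ways a certificate can be "broken," and a real survey would fetch and validate certificates at scale:

```python
import ssl
import time

def cert_expired(not_after, now=None):
    """Return True if a certificate's notAfter timestamp is in the past."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry < (now if now is not None else time.time())

# Checked against a fixed "current" moment so the result is deterministic.
jan_1_2015 = ssl.cert_time_to_seconds("Jan 01 00:00:00 2015 GMT")
print(cert_expired("Jun 26 21:41:46 2014 GMT", now=jan_1_2015))  # True
print(cert_expired("Jun 26 21:41:46 2018 GMT", now=jan_1_2015))  # False
```

Scaled across hundreds of thousands of web assets, even simple checks like this surface the kind of hygiene gaps the survey reports.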

On mobile platforms, over 1,777 bank-related apps were scanned with similar results:

  • There were an average of 51 apps per bank, even though most had only one or two official apps
  • 94% were outside official app stores, making updates and patches problematic.
  • 80% required 10 or more permissions, opening numerous security holes for users.

The truth is that with banking transactions now spreading out across websites and mobile apps, defense should not end at your firewall; instead, you must consider every web asset, social media site or mobile app under your brand to be at risk of breaching customer trust. What you must add to your defense posture is your customers and brand, not just your depositor funds and employees.

This was cross-posted from the RiskIQ blog. 

DDoS Attacks Spiked in Q1 2015: Akamai
Wed, 20 May 2015 10:14:34 -0500

The state of affairs when it comes to dealing with distributed denial-of-service attacks is not particularly rosy, according to Akamai Technologies' Q1 2015 State of the Internet - Security Report.

The first quarter of the year set a record for the number of DDoS attacks observed across Akamai's Prolexic network, with the total number of attacks being more than double the number recorded in the first quarter of 2014. The number of attacks also represented a jump of more than 35 percent compared to the final quarter of last year.

Eric Kobrin, Akamai's director of information security responsible for adversarial resilience, tied the increase to the continued exploitation of network devices and associated protocols for reflection attacks, as well as the growing popularity of DDoS-for-hire sites.

"(It's) very inexpensive to launch DDoS attacks, which has led to DDoS being used as a preferred attack method for malicious actors," he said. "As a result, we have noticed an increase within our customer base, as well as an overall higher demand for cloud-based DDoS mitigation services."

The typical DDoS attack during the first quarter of 2015 was less than 10 gigabits per second (Gbps) and lasted for more than 24 hours. There were also eight "mega-attacks" during the quarter that exceeded 10 Gbps, the report noted.

Read the rest of this article on 

FBI Says Researcher Admitted Hacking Airplane in Mid-Flight
Tue, 19 May 2015 12:25:46 -0500

A researcher who specializes in aircraft security admitted hacking into an airplane’s systems during a flight and successfully sending a climb command to one of the engines, according to an FBI search warrant application.

Chris Roberts, security researcher and founder of enterprise security assessment and consulting firm One World Labs, was featured in news reports last month after he posted a tweet about hacking into the communication system and EICAS (Engine-Indicating and Crew-Alerting System) of the United Airlines flight he was on.

When he landed, the FBI detained him for questioning and seized his electronics. A few days later, when he attempted to board a United Airlines flight, he was banned from getting on the plane.

An FBI search warrant application related to the incident was obtained last week by Canada-based APTN. In the document, FBI Special Agent Mark Hurley revealed that Roberts stated during interviews that he identified vulnerabilities in the in-flight entertainment (IFE) systems of Boeing and Airbus aircraft.

According to Hurley, the researcher said he had compromised IFE systems 15-20 times between 2011 and 2014. The expert said he exploited IFE vulnerabilities while in flight.

Roberts apparently hacked the IFE systems on planes by connecting his laptop through a Cat 6 ethernet cable to the Seat Electronic Box (SEB) located under the passenger seat. FBI agents inspected the SEB located under the expert’s seat after a flight he took from Chicago to Philadelphia and determined that it was tampered with.

Read the rest of this article on

Top Three Attack Vectors for SAP Systems
Tue, 19 May 2015 09:27:20 -0500

A new study based on the assessment of hundreds of SAP implementations found that over 95% of SAP systems were exposed to vulnerabilities that could lead to full compromise of an organization’s critical data.

The study reveals the three most common attack vectors for compromising SAP systems at the application layer which put intellectual property, financial data, customer/supplier information, and databases at risk.

SAP has been implemented by over 250,000 customers worldwide, including 87% of Global 2000 companies and 98% of the 100 most valued brands, yet these systems are not being adequately secured against threats by traditional security solutions.

“The big surprise is that SAP cybersecurity is falling through the cracks at most companies due to a ‘responsibility’ gap between the SAP Operations team and the IT Security team. The truth is that most patches applied are not security-related, are late or introduce further operational risk,” said Mariano Nunez of Onapsis.

“Breaches are happening every day but still many CISOs don’t know because they don’t have visibility into their SAP applications.”

The top three SAP attack vectors identified in the study include:

  • Pivoting Between SAP Systems, where the attack begins on a system with lower security and pivots to a critical system in order to execute remote function modules in the destination system
  • Portal Attacks, where backdoor users are created in the SAP J2EE User Management Engine, giving an attacker access to SAP Portals and Process Integration platforms and their connected, internal systems
  • Database Warehousing Attacks through SAP proprietary protocols, where an attacker executes operating system commands under the privileges of a particular user and exploits vulnerabilities in the SAP RFC Gateway to gain access to the SAP database

The researchers also found that the majority of companies experience protracted patching cycles which average 18 months or more, with 391 security patches released by SAP in 2014 – an average of more than 30 per month – with nearly half ranked as being “high priority” by SAP.

“Companies today are looking ahead at the opportunities presented by moving systems to the cloud, enabling user adoption through mobile devices and big data. The challenge is that most of these new possibilities rely on legacy systems such as SAP,” said Renee Guttmann of Accuvant.

“In a connected world, it is essential that critical business applications be protected. Securing a company’s crown jewels is a board-level discussion. Information security professionals need to re-evaluate how SAP is protected from cybersecurity threats.”

This was cross-posted from the Dark Matters blog. 
