Enterprise Security Pros Embracing Threat Intelligence, But Question Reliability: Survey Tue, 31 Mar 2015 13:16:59 -0500 Awareness of the role threat intelligence can play in improving cyber security may be growing, but some still remain unsold on its effectiveness, a new study has shown.

In a new report from the Ponemon Institute commissioned by Webroot, 80 percent of the IT professionals surveyed that had experienced a breach during the past two years said they felt threat intelligence would have helped prevent or minimize the consequences of the attack. The stat is telling, as 40 percent of the 693 people participating in the survey said their organization had been breached during that period.

However, the overall numbers tell a slightly different story. While 53 percent said threat intelligence was critical to having a strong security posture, 47 percent did not agree. According to the report, this may be due to the quality of threat intelligence, which in some cases has not evolved to the point where some consider it a critical component of IT security strategy.

In fact, later in the survey, many organizations indicated that while they are increasing the amount of intelligence data they consume, much of it is not considered all that useful. Although 45 percent of respondents said they are receiving more intelligence data, just nine percent classified the accuracy of that intelligence as "very reliable." In addition, on a scale of one to 10, with 10 being the best, 36 percent rated the accuracy of their intelligence as a 3 or a 4.


Copyright 2010 Respective Author at Infosec Island
NIST: Internet of Things Hampered by Lack of Effective Timing Signals Tue, 31 Mar 2015 11:51:36 -0500 As the rapid expansion of connected devices continues unabated, one small issue may prove to be a major challenge for the Internet of Things (IoT) – the lack of effective methods to integrate accurate timing systems with devices and networks.

The National Institute of Standards and Technology (NIST) has released a new report that examines how timing can affect the way systems are designed to process and exchange data, and what impact it will have on discrete processors and mechanical devices that are linked through information networks.

“Applications, computers, and communications systems have been developed with modules and layers that optimize data processing but degrade accurate timing. State-of-the-art systems now use timing only as a performance metric,” the authors of the report said.

“Correctness of timing as a metric cannot currently be designed into systems independent of hardware and/or software implementations. To enable the massive growth predicted, accurate timing needs cross-disciplinary research to be integrated into these existing systems.”

Accurate timing mechanisms will be crucial for the continued development of a range of products and applications, such as driverless cars, the smart electrical grid, and advanced remote controlled systems that need to be able to make split-second decisions while communicating through network pathways.

“The trouble is that these applications frequently will depend on precision timing in computers and networks, which were designed to operate optimally without it,” writes NIST’s Chad Boutin.

“For example, for a driverless car to decide whether what it senses ahead is a plastic bag blowing in the wind or a child running, its decision-making program needs to execute within a tight deadline. Yet modern computer programs only have probabilities on execution times, rather than the strong certainties that safety-critical systems require.”

In the report, the authors present an overview of several different areas that depend upon accurate timing signals and recommend cross-disciplinary research to improve current technologies and approaches, including clock design, the use of timing in networking systems, hardware and software architecture, and application design.

“Imagine writing a letter to your friend saying it is now 2:30 p.m., and then sending it by snail mail so he can synchronize his watch with yours,” said NIST’s Marc Weiss.

“That’s the equivalent of how accurate the timing of messages are in computers and systems right now. The transfer delay must be accounted for to do the things that are expected of the IoT.”
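Weiss’s snail-mail analogy describes exactly the problem that network time protocols solve by accounting for transfer delay. As a minimal sketch (not from the NIST report), here is the classic NTP-style offset/delay calculation, which assumes the network delay is roughly symmetric in each direction:

```python
def clock_offset(t0, t1, t2, t3):
    """Classic NTP-style offset/delay calculation.

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    """
    # Total time on the wire, excluding the server's processing time.
    round_trip_delay = (t3 - t0) - (t2 - t1)
    # How far ahead the server clock is, assuming symmetric transit.
    offset = ((t1 - t0) + (t2 - t3)) / 2
    return offset, round_trip_delay

# Hypothetical example: server clock 100 s ahead, 5 s of transit each way.
offset, delay = clock_offset(0, 105, 106, 11)  # → (100.0, 10)
```

If the letter (the timestamp) carries enough information to measure its own transit time, the receiver can correct for the delay rather than naively setting his watch to 2:30 p.m.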

Currently, systems need to be upgraded and recalibrated as they become obsolete, so networked devices that need to coordinate time-sensitive processes will need to be designed from scratch with the ability to handle frequent updates to computer applications and networks as the systems change and grow.

“The kind of growth in the IoT that is expected to happen will be severely hampered without these improvements,” Weiss says. “It won’t be able to grow the way people want.”

The full NIST report is available here (PDF).

This was cross-posted from the Dark Matters blog. 

Are You Prepared for Runaway Deprovisioning in Your IAM Environment? A True-Life Story… Tue, 31 Mar 2015 11:46:51 -0500 By: Walt Witucki

A colleague shared this story:

“It was one of ‘those’ mornings – overslept, running late, traffic, no close-in parking left. Not even time for the morning coffee stop. Then it happened. Upon arriving, a co-worker said: ‘You are not going to have a good day.’ Turns out the automatic deprovisioning routines in IAM had received bad data feeds from the HR system and took action.”

And, yes, you guessed the rest of the story – not a good day.

Most are probably thinking – Why didn’t they consider this possibility?  In fact they did.  The solution for this use case was already developed, heading into test, and was 2 weeks away from deployment. Close but no cigar, as they say. True story.

What is your nightmare runaway deprovisioning use case?  Have you considered the possibility?  Here are a few tips that you may want to include in your planning if you have not developed a response plan for your IAM environment:

1.  Manual Restoration.

If you catch the auto deprovisioning fast enough, this may be the least disruptive solution. Additionally, if your automation really means the automated creation of a work ticket for someone, then your AD administrator may raise a red flag if they are suddenly flooded by “remove account” work tickets.

2. Restore Active Directory (AD).

If your IAM (de-)provisioning use cases only involve creating/removing user accounts in AD, then restoring AD from a backup is a solution – disruptive but a solution.

3.  Custom Recovery Script.

Perhaps your use cases are crisp enough that a script can be prefabricated. Understanding exactly what was done would enable a script to be pre-staged for quick use. Again, you would need very well-defined use cases. Most likely, however, this approach would be a partial solution and need to be paired with another recovery approach.

4.  Identify Recovery Data.

Your IAM logs, ticketing system, or even custom reports could be a valuable source of “what happened and to whom”. Prepare now to have that data easily accessible if and when an event occurs. Pre-position scripts, create reports, or increase event logging now if necessary.
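As a sketch of the “pre-position scripts” idea, a few lines like these could pull “what happened and to whom” out of an exported IAM event log. The column names here are hypothetical – adjust them to whatever your IAM product actually emits:

```python
import csv
import io

def deprovisioned_since(log_text, cutoff):
    """Return (account, timestamp) pairs for deprovision events at/after cutoff.

    Expects CSV with hypothetical columns: timestamp,event,account.
    Timestamps are ISO 8601 strings, so plain string comparison orders them.
    """
    return [
        (row["account"], row["timestamp"])
        for row in csv.DictReader(io.StringIO(log_text))
        if row["event"] == "DEPROVISION" and row["timestamp"] >= cutoff
    ]

log = """timestamp,event,account
2015-03-31T01:00:00,DEPROVISION,jsmith
2015-03-31T01:00:05,PROVISION,newhire
2015-03-31T01:00:09,DEPROVISION,mjones
"""
hits = deprovisioned_since(log, "2015-03-31T00:00:00")
# → [('jsmith', '2015-03-31T01:00:00'), ('mjones', '2015-03-31T01:00:09')]
```

Having even a crude report like this pre-staged turns “what did the runaway routine actually do?” from hours of forensics into a single query.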

5.  Expanded Entitlement Recovery.

If your IAM solution includes more than just account and email creation, and has expanded into entitlement (de-)provisioning for applications, SharePoint sites, databases, network appliances, etc., then recovery becomes much more of a challenge. The recovery team expands beyond the IDM group and the recovery strategy spans more of the IT environment. Many of the same strategies above are still applicable, but now they must be established for each affected resource, not just AD.

6. Early Retirement.

No, just joking, but then again….

7.  Prevention!

I never liked being in any type of recovery – whether it was error recovery in some code I wrote, or recovery from a failed system implementation at 3 AM on a Sunday morning. Have your IAM team brainstorm with the HR system folks on how runaway deprovisioning could be (1) detected and then (2) prevented. The answer could be as simple as establishing a threshold for deprovisioning. For example, if your IAM system usually only processes 20 deprovision requests a night, then anything greater could be a sign of a problem. Consider inserting a new routine into your workflow that “counts” the number of requests before they are actually processed. The logic might look something like this:

- How many requests are there?

- What is the threshold number (stored in an updatable database)?

- If over the threshold, abnormally terminate processing and generate a trouble ticket and call-out.

This puts the human factor back in the equation when needed. Is this OK? Should the threshold be temporarily increased for this one run (say up to 21 or 22)? What is the reason for double the number of requests tonight? All good questions that can’t be scripted or assumed to be “normal”. Of course, the threshold number should be easily modifiable for unusual but planned events involving a larger number of deprovisioning requests.
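The counting logic above can be sketched in a few lines. Everything here is hypothetical – the `alert` hook stands in for whatever ticketing or call-out mechanism your workflow engine actually provides:

```python
def gate_deprovisioning(requests, threshold, alert):
    """Halt the batch and raise a human alert if the volume looks abnormal.

    requests:  the pending deprovision requests for tonight's run
    threshold: the expected nightly maximum (kept in an updatable store)
    alert:     any callable -- e.g. a ticketing or call-out hook
    """
    if len(requests) > threshold:
        alert(f"Deprovisioning halted: {len(requests)} requests "
              f"exceed the threshold of {threshold}")
        return []  # abnormal termination: process nothing tonight
    return requests

alerts = []
# Normal night: two requests against a threshold of 20 -- processed as usual.
batch = gate_deprovisioning(["u1", "u2"], 20, alerts.append)
# Bad HR feed: 500 requests -- nothing is processed, and a human is paged.
runaway = gate_deprovisioning([f"u{i}" for i in range(500)], 20, alerts.append)
```

Because the threshold lives in an updatable store rather than in the code, it can be raised temporarily for a planned, larger-than-usual event and then lowered again.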

8.  Stay Generic, Stay Flexible.

One final tip: keep analyzing your deprovisioning workflows for other situations that could enable runaway deprovisioning. Consider that unexpected field values may come through your deprovisioning routines. How will your Boolean logic respond? Keep questioning – perhaps with each major patch cycle or upgrade.
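As an illustration of the “how will your Boolean logic respond?” question, consider a naive leaver check against a hypothetical HR record (the field names are invented for this sketch):

```python
def should_deprovision_naive(hr_record):
    # Anything that is not exactly "active" is treated as a leaver -- so a
    # missing, null, or misspelled status field deprovisions the account.
    return hr_record.get("status") != "active"

def should_deprovision_strict(hr_record):
    # Act only on values we explicitly recognize; anything unexpected is
    # routed to human review instead of silently removing the account.
    status = hr_record.get("status")
    if status == "terminated":
        return True
    if status == "active":
        return False
    raise ValueError(f"Unrecognized HR status {status!r}: route to review")

should_deprovision_naive({"status": None})  # True -- a bad feed becomes a removal
```

With the naive check, a corrupted HR feed where every status comes through empty deprovisions every account – exactly the runaway scenario in the story above. The strict version fails loudly instead.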

This was cross-posted from the Identropy blog. 

Should Infosec Professionals Hack To Understand the Mind of the Attacker? Tue, 31 Mar 2015 10:23:23 -0500 The fact that cyber threats are increasing in both variety and number is placing greater and greater demands on information security professionals, who are trying to stay one step ahead of the attackers. To anticipate where and how an attacker might strike next, security professionals are realizing the importance of being able to understand the mind of the attacker and what they value in a target.

To gain insight on an attacker’s perspective, some argue it is useful for aspiring security professionals to engage in black hat hacking. The reasoning goes that hacking would provide professionals firsthand experience in thinking and acting like an attacker, enabling them to develop an extensive foundation in offensive maneuvering that they could then use to defend and protect. Under this model, the contributions these professionals could make to security would, therefore, outweigh the downsides of their black hat exploits.

However, despite the benefits one might derive from hacking a company, some security experts agree that any and all malicious computer activity that goes unreported, especially conduct that stems from security professionals, is counterproductive to security.

More than that, however, hacking does not in any way help security professionals become better at protecting users, a viewpoint with which Tim Erlin, Director of Product Management, Security and IT Risk Strategist at Tripwire, agrees:

“The logic of this premise is fundamentally flawed. We don’t believe that law enforcement officers need to try out being criminals to understand how a criminal might think. Understanding the tools and techniques of your adversary is important to establishing an effective defense, but it doesn’t require an immersion into some shady underworld.”

Lane Thames, a software development engineer and security researcher with Tripwire’s Vulnerability and Exposure Research Team (VERT), is even firmer in rejecting undocumented hacking as a potential security tool.

“You want to learn how to hack, and you think it is ok to go hacking on someone’s (or some organization’s) website? Absolutely not,” Thames concludes. “This type of activity is purely malicious and should never be done without the organization’s (or person’s) permission.”

That is not to diminish the value of being able to understand the mind of an attacker. On the contrary, these viewpoints merely shift the conversation to various tools and solutions of which aspiring security professionals can take advantage without having to worry about causing harm to another company.

If a security professional is interested in learning about offensive computer measures at their own pace, they can turn to virtualization technology as a means to hack a computer system in an isolated environment.

“In a world where virtualization is freely available, there’s little to stop the average security analyst from setting up a few target systems and attacking them,” Erlin observes.

Thames is of the same mindset: “Learning the art of hacking is a good thing. Just remember that it is ‘how’ you hack that determines whether or not you are categorized as a black hat. Don’t be a black hat.”

There are also a variety of safe hacking resources open to individuals who learn better in more team and group settings.

Dwayne Melancon, CISA and Tripwire’s Chief Technology Officer, explains more:

“When it comes to learning about information and system security, I love using simulations, ‘capture the flag’ events, and red team / blue team exercises as a way to understand the mindset of an attacker. They also help you practice your defenses in a more realistic environment.”

These kinds of simulations are now readily accessible at conferences and in training classes, including SANS, Black Hat conference trainings, and SensePost.

“In these scenarios, you learn a lot quickly, then take that learning back to your day job where you can apply the principles without having to engage in any questionable activities,” said Melancon.

Additionally, security professionals who are interested in learning more about hacking can seek to join a pentesting team at their workplace, an opportunity of which Irfahn Khimji, CISSP and Senior Information Security Engineer at Tripwire, is a firm advocate.

“Many companies offer penetration testing type roles where the sole goal of the team is to find new exploits. Google’s Project Zero is a great example,” Khimji observes. “Their goal is to discover and responsibly disclose vulnerabilities they find in various products. An aspiring security professional can join a role like this to get a better understanding.”

Whether one pursues virtualization technologies, simulations, or a junior pentesting position, all of these resources convey the same message: security is not a zero-sum game. A security professional might derive some benefit from hacking a company. However, the losses borne by the victim would not only outweigh those benefits; they would also undermine the role of the security professional as one who can be trusted to protect users online.

This was cross-posted from Tripwire's The State of Security blog. 

3 Things I Learned Talking to InfoSec People About Crime Mon, 30 Mar 2015 12:54:35 -0500 Over the last several years, I have given many talks about the behavior of criminal rings, how the criminal underground operates and black market economics. I wanted to share with my audiences some of the lessons I have learned about crime. Many people responded well and were interested in the content. Some replied with the predictable, “So what does this have to do with my firewall?” kind of response. One older security auditor even went so far as to ask me point blank: “Why do you pay attention to the criminals? Shouldn’t you be working on helping people secure their networks?” I tried to explain that understanding bad actors was a part of securing systems, but she wouldn’t hear of it…

That’s OK. I expected some of that kind of push back. Often, when I ask people what they want to hear about, or where my research should go, the responses I get back fall into two categories: “more of the same stuff” and “make x cheaper”, where x is some security product or tool. Neither is what I had in mind… :)

Recently, I announced that I was taking this year off from most public speaking. I don’t think I will be attending as many events or speaking beyond my podcast and webinars. Mostly, this is to help me recover some of my energy and spend more time focused on new research and new projects at MicroSolved. However, I do want to close out the previous chapter of my focus on Operation Aikido and crime with 3 distinct lessons I think infosec folks should focus on and think about.

1. Real-world – i.e. “offline” – crime is something that few infosec professionals pay much attention to. Many of them are unaware of how fraud and black markets work, or how criminals launder money/data around the world. They should pay attention to this, because “offline” crime and “online” crime are often strongly correlated. Sadly, when approached with this information, much of the response was: “I don’t have time for this, I have 156,926 other things to do right now.”

2. Infosec practitioners still do not understand their foes. There is a complete disconnect between the way most bad guys think and operate and the way many infosec folks think and operate. So much so, that there is often a “reality gap” between them. In a world of so many logs, honeypots, new techniques and data analysis, the problem seems to be getting worse instead of better. Threat intelligence has been reduced to lists of IOCs by most vendors, which makes it seem like knowledge of a web site URL, hash value or IP address is “knowing your enemy”. NOTHING could be farther from the truth….

3. Few infosec practitioners can appreciate a global view of crime and see larger-scale impacts in a meaningful way. Even those infosec practitioners who do get a deeper view of crime seem unable to formulate global-level impacts or nuance influences. When asked how geo-political changes would impact various forms of crime around the world, more than 93% of those I polled could only identify “increases in crime” as an impact. Only around 7% of those polled could identify specific shifts in the types of crime or criminal actors when asked about changes in the geo-political or economic landscapes. Less than 2% of the respondents could identify or correlate accurate trends in response to a geo-political situation like the conflict in Ukraine. Clearly, most infosec folks are focused heavily ON THEIR OWN STUFF and not on the world and threats around them.

I’m not slamming infosec folks. I love them. I want them to succeed and have devoted more than 20 years of my life to helping them. I will continue to do so. But, before I close my own chapter on this particular research focus, I think it is essential to level set. This is a part of that. I hope the conversation continues. I hope folks learn more and more about bad actors and crime. I hope to see more people doing this research. I hope to dig even deeper into it in the future.

Until then, thanks for reading, stay safe out there, and I will see you soon – even if I won’t be on stage at most events for a while. ;)

P.S. Thanks to all of the wonderful audiences I have had the pleasure to present to over the years. I appreciate and love each and every one of you! Thanks for all the applause, questions and, most of all, thanks for being there!

This was cross-posted from the MSI State of Security blog. 

How to Manage Mac and Mobile Devices in Your Existing Infrastructure Mon, 30 Mar 2015 12:38:17 -0500 Live Webcast: Tuesday, March 31st at 1PM ET

Please join us on Tuesday, March 31 at 1PM ET for a special webcast: How to Manage Mac and mobile devices leveraging your existing infrastructure, presented by Centrify.

The growth of Mac in the enterprise is undeniable. Apple's success with the iPhone and iPad is bleeding over into end user preference for laptops.

In this webcast we will look at how to effectively manage Macs in the enterprise as well as mobile devices leveraging your existing IT infrastructure.

Register Now

Can't make the live event? Register now and we'll send you an email with a link to watch it on demand.

PCI SWOT Analysis Mon, 30 Mar 2015 11:51:42 -0500 I had someone ask me for my thoughts on this sort of analysis of the PCI DSS. While these comments are PCI focused, I found that they actually apply to all security frameworks.


Strengths

The biggest strength in any security framework, PCI DSS included, is that they are all based on “best practices” from a wide variety of leading experts and organizations. Essentially, security frameworks are the shared knowledge base of what it takes to have basic security. We talk today about sharing breach information better and potentially in near real time, but security frameworks are the original method of sharing such information.


Weaknesses

Unfortunately, I see a number of weaknesses with security frameworks.

The largest weakness with security frameworks I see is that most people, including a lot of security professionals, seem to believe that complying with the framework is all it takes to be secure. With the PCI DSS, a lot of this misinformation can be laid at the feet of the card brands. It was the card brands that originally marketed the PCI DSS as the “be-all, end-all” for securing the payment process.

The unfortunate fact of life for security frameworks is that they only minimize and manage security risks, they rarely ever eliminate them. Therefore, even following the PCI DSS to the letter is no guarantee that an organization could not be breached. Yet this concept of risk minimization, risk management and the fact that security is not perfect consistently gets missed by executives. So when the inevitable breach occurs, executives go after the security people for supposedly misleading them.

Another area of weakness is the time it takes to make an update to a framework. In October 2014, the National Institute of Standards and Technology (NIST) issued a bulletin on secure sockets layer (SSL) indicating that they had found a flaw in the protocol and that they no longer considered the protocol secure. A few weeks later the Internet was introduced to POODLE and SSL was declared insecure. It took a few months for the PCI SSC to react to this and officially declare that SSL was no longer to be relied upon for secure communications. It took vulnerability scanners almost a month to begin flagging SSL implementations as high vulnerabilities because the CVE had not yet been updated. And we were recently informed that it will be April at the earliest before we will get the latest version of the PCI DSS. In the meantime, all of this administrivia did not stop attackers from using POODLE to their advantage.
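Individual applications did not have to wait for the frameworks to catch up: SSL 3.0 could be refused directly at the socket layer. As a minimal sketch using Python's standard `ssl` module (the version-pinning attribute assumes Python 3.7 or later):

```python
import ssl

# PROTOCOL_TLS_CLIENT negotiates the best mutually supported TLS version
# and enables hostname checking and certificate verification by default.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Refuse SSL 3.0 and early TLS outright, regardless of what a peer offers,
# which removes the POODLE downgrade path for this client entirely.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

A context like this can then be passed to any socket or HTTP client that accepts an `SSLContext`, closing the hole locally while the standards bodies deliberate.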

The final weakness I see with security frameworks is that organizations find it impossible to execute them consistently at near 100%, 24×7. In theory, the PCI DSS will provide reasonable security against all but the most dedicated attacks, such as advanced persistent threats (APTs). For an organization to achieve basic security, it would have to execute the requirements of the PCI DSS at 95%+ and would have to remediate any issues within a few days. Unfortunately, as we have seen in the recently released Merchant Acquirer Committee study, merchants are typically only compliant with the PCI DSS between 39% and 64% of the time – far from 95%+. Verizon’s recently released PCI report backs this up with their findings. The bottom line is that most organizations lack the discipline to execute any security framework consistently enough to achieve basic information security.


Opportunities

The biggest opportunity I see for the PCI DSS is that it gives organizations the impetus to simplify their environments. The biggest reason for the failure to execute the PCI DSS consistently is that a lot of organizations have technology environments that mimic a Rube Goldberg cartoon. Only by simplifying that environment will an organization have a reasonable chance of securing it.

Another opportunity this gives organizations is a reason to enhance their security operations. Most organizations run bare bones security operations no different than other areas. However, what PCI compliance assessments typically point out is that those security operations are grossly understaffed and not capable of ensuring an organization maintains its compliance at that 95%+ level.

Related to these two opportunities is what the PCI SSC calls business as usual (BAU). BAU is the embedding of the relevant PCI requirements into an organization’s business processes to make it easier to identify non-compliance as soon as possible so that the non-compliance situation can be rectified. BAU is primarily designed to address the execution weakness but can also have a significant effect on the other weaknesses.

Finally, the last opportunity is to address the failings of an organization’s security awareness program. Organizations eventually come to the realization that all it takes to defeat their expensive security technology is human error. The only way to address human error is extensive security awareness training. No one likes this, but in the end it is the only thing that remains once you have implemented all of the requisite security technology.


Threats

The obvious threat that will never go away is the attackers. As long as we have our interconnected and networked world, attackers will continue their attacks.

The final threat is complacency. A lot of organizations think that once they achieve PCI compliance that their work is done and that could not be further from the truth. Security is a journey not something you achieve and then move on to the next issue. The reason is that no organization is static. Therefore security must constantly evolve and change to address organizational change.

There are likely even more items that could be added to each of these categories. However, in my humble opinion, these are the key points.

This was cross-posted from the PCI Guru blog. 

The Government Says It Has a Policy on Disclosing Zero-Days, But Where Are the Documents to Prove It? Mon, 30 Mar 2015 11:43:10 -0500 We have known for some time that the U.S. intelligence and law enforcement community looks to find and exploit vulnerabilities in commercial software for surveillance purposes. As part of its reluctant, fitful transparency efforts after the Snowden leaks, the government has even officially acknowledged that it sometimes uses so-called zero-days. These statements are intended to reassure the public that the government nearly always discloses vulnerabilities to software vendors, and that any decision to instead exploit the vulnerability for intelligence purposes is a thoroughly considered one.

But now, through documents EFF has obtained from a Freedom of Information Act (FOIA) lawsuit, we have learned more about the extent of the government’s policies, and one thing is clear: there’s very little to back up the Administration’s reassuring statements. In fact, despite the White House’s claim that it had “reinvigorated” its policies in spring 2014 and “established a disciplined, rigorous and high-level decision-making process for vulnerability disclosure,” none of the documents released in response to our lawsuit appear to be newer than 2010.

Last spring, the Office of the Director of National Intelligence (ODNI) issued a strong denial of press reports that the NSA knew about and exploited the Heartbleed vulnerability in the OpenSSL library. As part of that denial, the ODNI described the “Vulnerabilities Equities Process” (VEP), an “interagency process for deciding when to share vulnerabilities” with developers. EFF submitted a FOIA request to ODNI and NSA to learn more about the VEP and then sued to force the agencies to release documents.

ODNI has now finished releasing documents in response to our suit, and the results are surprisingly meager. Among the handful of heavily redacted documents is a one-page list of VEP “Highlights” from 2010. It briefly describes the history of the interagency working group that led to the development of the VEP and notes that the VEP established an office called the “Executive Secretariat” within the NSA. The only other highlight left unredacted explains that the VEP “creates a process for notification, decision-making, and appeals.”

And that’s it. This document, which is almost five years old, is the most recent one released. So where are the documents supporting the “reinvigorated” VEP 2.0 described by the White House in 2014? Nor do the documents we have seen do much to back up the claim that VEP 1.0 ever functioned as a guide for helping the government decide whether to disclose zero-days. Meanwhile, reports describing the CIA’s annual hacker “jamboree” instead suggest that there’s little stopping the government from exploiting vulnerabilities it comes across. Indeed, none of the documents describing the CIA’s jamboree contain the slightest suggestion that the VEP was actively considered.

Writing about the newly released documents in Wired, Kim Zetter places them in the context of the government's development of the Stuxnet worm:

We know that Stuxnet, a digital weapon designed by the U.S. and Israel to sabotage centrifuges enriching uranium for Iran’s nuclear program, used five zero-day exploits to spread between 2009 and 2010—before the equities process was in place. One of these zero-days exploited a fundamental vulnerability in the Windows operating system that, during the time it remained unpatched, left millions of machines around the world vulnerable to attack. Since the equities process was established in 2010, the government has continued to purchase and use zero days supplied by contractors.

The older documents [.pdf] released to EFF by ODNI are so thoroughly redacted that it’s difficult to glean much from them, though they seem mainly to report progress made by the working group developing the VEP over the course of several months in 2008. One suggests that the working group recognized different considerations between the government’s “Offense” and “Defense” functions in dealing with zero-days. Another tantalizingly mentions that the working group asked stakeholders to begin “drafting of scenarios (vignettes)” to illustrate the policy issues involved in the VEP, but of course any such vignettes in the documents are redacted.

The core of the concern over the government’s use of zero-days is that these vulnerabilities often exist in products that are used widely by the general public. If the government keeps a vulnerability secret for intelligence purposes, it does not notify the developer, which would likely otherwise issue a patch and protect users from online adversaries such as identity thieves or foreign governments who may also be aware of the zero-day. Nevertheless, the Snowden leaks have shown that the government apparently routinely sits on zero-days, something that President Obama’s own Review Group strongly recommended against [.pdf]. The VEP is supposedly an answer to these concerns, but right now it looks like just so much vaporware.

All the documents released in response to EFF’s FOIA suit so far are available here. We’re still awaiting documents from NSA due to be released in the next three weeks. 

This was cross-posted from EFF's DeepLinks blog. 

SXSW: Three Cybersecurity, Remote Access Takeaways from Austin Mon, 30 Mar 2015 10:50:54 -0500 The South by Southwest (SXSW) Interactive Festival wrapped up last week in Austin, Texas, where 65,000 industry movers and shakers learned about some of the most innovative technology expected to hit the market over the next few years. What was on the minds of presenters, panelists, and attendees alike? “The Future” – all of its possibilities and its promise.

Given all of these technology advancements, it makes sense that some of the panels and conversations happening in Austin took on a more cautious tone and focused on the surrounding cybersecurity concerns. We’ve identified three panels from SXSW that addressed cybersecurity directly – or brought to light security issues that weren’t on the agenda – and we provide lessons for each.

1. ‘Everything is Connected, Everything is Vulnerable’

Marc Goodman is hardly the first network security expert to predict that cyberthreats will become increasingly pervasive and damaging in the coming years. But few people have gone into as much detail about these threats as Goodman did during his SXSW panel, “Future Crimes of the Digital Underworld.”

Goodman, the author of “Future Crimes: Everything Is Connected, Everyone Is Vulnerable,” brought with him to Austin a laundry list of possible new targets for hackers, including but not limited to Internet of Things devices like pacemakers, baby monitors, insulin dispensers, and even drone aircraft. He warned, “We’re not going to solve these problems by burying our heads and pretending they don’t exist.”

For network administrators, that means acknowledging that these devices could enter their workplace, and then taking steps to neutralize any threat they may pose. As we’ve written before when discussing the Internet of Things, there’s no such thing as too much network security. At a minimum, enterprises should make sure that all devices connected to their network have up-to-date firmware and that they’re all protected by a shield of interconnected network security technologies, including VPNs, firewalls and intrusion prevention systems.

2. Yahoo Reveals Encrypted, Password-less Email

Since launching in 1997, Yahoo Mail has been a market leader, and now, nearly two decades later, the service is again evolving – this time, to become more secure. During SXSW, Chief Information Security Officer Alex Stamos gave the first public demo of Yahoo’s new encrypted email service, which would make users’ messages more private. Stamos said the service would be available to users before the end of the year.

Network security advocates should also be encouraged by another announcement Yahoo made during SXSW – that Yahoo Mail users are now able to forego traditional passwords in favor of one-time passwords sent to their mobile devices each time they want to access their email.

Chris Stoner, Yahoo’s Director of Product Management, wrote on Yahoo’s Tumblr page that the on-demand password feature is intended to make logging into email “less anxiety-inducing.” This new feature also better protects Yahoo Mail accounts, even though it doesn’t go quite as far as the more secure two-factor authentication option.
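An on-demand password flow can be sketched in a few lines of Python. Everything here – the code length, the five-minute expiry, the hashing scheme – is an assumption for illustration, not Yahoo’s actual implementation:

```python
# Hypothetical on-demand (one-time) password flow: generate a short code,
# deliver it out-of-band (e.g., via SMS), store only its hash, verify once.
import hashlib
import hmac
import secrets
import string
import time

def generate_code(length: int = 8) -> str:
    # A short, random, single-use code sent to the user's mobile device.
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def store_code(code: str):
    # The server keeps a hash plus a short expiry, never the code itself.
    return hashlib.sha256(code.encode()).digest(), time.time() + 300

def verify_code(submitted: str, stored_hash: bytes, expires_at: float) -> bool:
    if time.time() > expires_at:
        return False  # expired codes are never accepted
    digest = hashlib.sha256(submitted.encode()).digest()
    return hmac.compare_digest(digest, stored_hash)  # constant-time compare

code = generate_code()
stored_hash, expires_at = store_code(code)
assert verify_code(code, stored_hash, expires_at)
```

The design point is that each code is short-lived and single-purpose, so there is no long-lived password to phish, reuse, or forget.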

3. Gladwell, Gurley Spar over Driverless Cars

In February, supporters of connected cars had to tap the brakes on their enthusiasm, following a report by German researchers that found more than 2 million BMWs were vulnerable to remote access hacks. The reason the security community was startled by this news is that there’s growing momentum not only behind connected vehicles – those with network access capabilities – but also those that are completely autonomous.

The topic came up at SXSW during a spirited conversation between author Malcolm Gladwell and prominent venture capitalist Bill Gurley. Gurley said he was skeptical of driverless cars, because the public would be “less tolerant of a machine error causing a death than a human error causing a death.” Gladwell came down on the other side, claiming that the number of lives saved by driverless cars – since drunk driving would be reduced – justifies their existence.

What their discussion reveals is that we’re not quite ready for fully autonomous vehicles yet. As a society, we first need to make sure we fully understand cybersecurity and secure machine-to-machine (M2M) communication, before these vehicles take over the nation’s highways.

Innovation and Caution Can Co-Exist

It’s encouraging that despite all the excitement in Austin around new technology and other advancements, the SXSW crowd still got a good sense of the necessary steps to ensure cybersecurity is not jeopardized by new innovations. There’s no reason not to exercise a bit of caution while boldly stepping into the future.

This was cross-posted from the VPN Haus blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Threat Intelligence: A Force Multiplier for Security Pros Thu, 26 Mar 2015 11:52:25 -0500 With all of the threats and exploits that are discovered on a daily basis we, as security professionals, are always looking for an advantage.

The advantages that we seek when securing our infrastructures range from hardware and software solutions to education and training, services, and either adding headcount or engaging third-party staff augmentation.

These advantages give us a leg up in our ability to maintain our current responsibilities, or free us up to take on new ones. I have seen this occur under many conditions: hardware and software deployed to pick up slack because of a lack of staffing, staffing added to pick up where priorities have shifted, or any combination of these to assist in securing our infrastructures.

Recently I have been engaged with several organizations that are looking to increase their success rate on the security battlefield. I have heard from CISOs, security directors, practitioners and responders how their biggest problem is not the individual fight, but the volume of fights that are occurring simultaneously.

This has become a common theme across security organizations. Small, medium and large corporate security organizations are fighting multiple security events and need force multipliers.

I have been hearing the term “force multiplier” a lot lately, and I have even added it to my vernacular. Apologetically, I’ll add that my wife has overheard me using it more times than I care to admit, and I think it is starting to wear on her.

I have done this because it seems to be the most appropriate term for what we do in Threat Intelligence. I conducted a little bit of research by Googling the term “force multiplier” to gain better insight. I am going to use the Wikipedia definition, as it was the first hit:

A force multiplier refers to a factor that dramatically increases (hence “multiplies”) the effectiveness of an item or group. Some common force multipliers are morale and technology.

I wanted to make sure I was using the term correctly, and to validate the assessments being made by industry professionals, practitioners and security leaders. The term captures what security practitioners need: help in reducing the number of exploits and threats they deal with on a daily basis.

Threat Intelligence is a force multiplier. It has also become a necessary part of any security solution. Threat Intelligence can serve as a stand-alone solution or can be incorporated into a variety of security infrastructures to add to or become the full value of that solution.

Take, for example, the firewall. A firewall blocks and allows traffic based on a set of rules. Blocking known-bad traffic implicitly, before it ever reaches the firewall, makes Threat Intelligence a stand-alone force multiplier.

Conversely, if we are talking about a Gen 2 firewall, adding Threat Intelligence can enhance the performance and value of the protection the Gen 2 technology offers. This is a force multiplier of a higher magnitude.
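A minimal sketch of the stand-alone case might look like the following; the class name and feed contents are hypothetical, not any vendor’s actual API:

```python
# Hypothetical stand-alone threat-intelligence filter: drop connections
# from known-bad networks before they ever reach the firewall rule engine.
# The feed entries below are illustrative, not a real vendor feed.
import ipaddress

class ThreatIntelFilter:
    def __init__(self, bad_networks):
        # A live, vetted feed would refresh these entries continuously.
        self.blocklist = [ipaddress.ip_network(n) for n in bad_networks]

    def should_drop(self, src_ip: str) -> bool:
        # Drop any source address that falls inside a known-bad network.
        addr = ipaddress.ip_address(src_ip)
        return any(addr in net for net in self.blocklist)

ti = ThreatIntelFilter(["203.0.113.0/24", "198.51.100.7/32"])
assert ti.should_drop("203.0.113.44")    # known-bad: blocked implicitly
assert not ti.should_drop("192.0.2.10")  # unknown: passed on to the firewall
```

Connections dropped here never consume firewall rule evaluation, which is exactly the multiplying effect described above.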

The gain from Threat Intelligence as a force multiplier is visible in many ways. Think about the ability to block known threats before they even hit: these same threats are often tested on alternate networks and systems before they are aimed at the corporate enterprise, which is how they become known in the first place.

Think about the knowledge that can be derived from Threat Intelligence delivered via an appliance: vetted and validated lists that create a real threat solution, and the ability to stage and emulate an environment to determine your exposure to targeted attacks.

Imagine a significant reduction in the need to respond to host based events because these events don’t hit the internal hosts. The response focus is smaller and more manageable, and the cleanup costs are reduced greatly.

I am not speaking about a simple “threat feed,” but of actionable Live Threat Intelligence – there is a difference.

The biggest difference lies in gathering the information before it becomes a payload, analyzing the data and assigning it a risk level, and maintaining an ongoing evaluation of that data.

This represents the who, what, why, where, when and how – or, more precisely, the actor, developer, motivation, build, delivery and payload data delivered by Threat Intelligence.

The information that is delivered through Threat Intelligence is not a cobbled-together feed gathered from customers, information that is dated or, worse, publicly available lists that can be, and have been, poisoned by our adversaries.

Threat Intelligence, real Threat Intelligence, has value, changes quickly, and can be a complete independent solution or an enhancement to an existing security solution. Threat Intelligence is a force multiplier.

I would encourage every security professional to reach out to your vendors, attend the conferences, trade shows and webinars, read all of the blogs, white papers and books, and learn how Threat Intelligence can become a force multiplier in your battles against the adversaries that look to gain access to your internal network.

Challenge your vendors, challenge yourself, challenge your executives and continue persevering – the battle isn’t over, it is just beginning.

This was cross-posted from the Dark Matters blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Premera Breach Emphasizes Risk to Holders of Medical Records — and the Importance of Network Security Thu, 26 Mar 2015 11:00:54 -0500 The recently announced breach of Premera, following so closely on the heels of Anthem, should set off alarm bells to other organizations in the healthcare industry, as it is an unfortunate likelihood that we will soon hear of other compromised healthcare companies.  In both of these cases, the actual breach took place long before it was discovered, meaning every other healthcare company should be actively working to ensure their network is secure.

This attack on Premera’s health insurance data has been identified as the second-largest breach of its kind on record, with about 11 million customers, employees and others affected and some records dating back to 2002. In February, Anthem’s system was hacked, resulting in information theft from around 80 million current and former customers.

While the Anthem breach was much larger in terms of the total number of records, the Premera Blue Cross breach is believed to include medical records along with personally identifiable information (PII), which could unlock the potential for significant medical fraud.  If insurance plan information is stolen along with identity information, data thieves would have a good indicator on which identities hold a higher value, based on the value of the insurance plan.  If thieves focus on the individuals with the highest plan costs, these are likely to be people who are more established in their lives, have families, higher incomes and better credit, meaning their identities are worth even more on the black market.

In addition, with the full medical records, someone who is committing ID fraud can target known issues with unscrupulous doctors and submit plausible, albeit fraudulent, claims. Imagine if a cancer patient's records were stolen, for example, and the thief had enough information to pose as that individual. They could then work with a corrupt medical practice and submit claims for expensive chemotherapy sessions (which are never actually provided). Since the real patient is a known cancer patient, this might not even set off any audit flags. This is just one example of how medical fraud could occur.

This breach again calls into focus the reality that data security is not limited to the processing of payments and credit cards. The same day that Premera publicly announced its breach, a relatively small dental company in Oregon announced it was also breached, and over 150,000 names, Social Security numbers and other PII were stolen. Compared to Anthem or Premera, this breach seems minor, but it highlights the vast sources of data hackers have to choose from. Businesses of all kinds and across all industries must act to protect sensitive information stored in their systems.

The problem is that data security is boring and tedious, making it easy to become the task we push off until tomorrow, and the next day, and the next day. There needs to be a broad understanding that in order to be truly protected, enterprises must become proactive in securing network access, encrypting data and auditing security methods on a regular basis. While larger enterprises are potential targets for highly sophisticated attacks, it is often the simple things that get missed. Ensuring that every system has up-to-date security patches, that configurations are kept current, that passwords are changed often and never reused across systems, and that two-factor authentication is in place: all of these measures were possible in each of these breaches, and all are reasonable steps for companies of all sizes.

After the fact, audits of breaches often discover a number of possible security issues and may or may not accurately identify the true source of the breach in question. However, what they do point out every time is that it only takes one mistake—one insecure server, one password that was reused on an insecure system and exposed, one employee who mistakenly clicks on the link in an email, one firewall that wasn’t configured properly—to become the next compromised company in the headlines.

Kevin Watson is CEO of managed IT services provider Netsurion.


Copyright 2010 Respective Author at Infosec Island]]>
China Named Top Originator of Attack Traffic in Q4 2014: Akamai Thu, 26 Mar 2015 09:50:19 -0500 A new report from Akamai Technologies names China as the top source of attack traffic on the Web.

In its 'Fourth Quarter, 2014 State of the Internet Report', Akamai cited China as the originator of 41 percent of observed attack traffic. According to the report, during the fourth quarter of last year Akamai observed attack traffic originating from 199 unique countries and regions. Out of the 199, China was the clear leader of the pack, accounting for more than triple the amount originating from the U.S.

"Akamai maintains a distributed set of unadvertised agents deployed across the Internet to log connection attempts that the company classifies as attack traffic," according to the report. "Based on the data collected by these agents, Akamai is able to identify the top countries from which attack traffic originates, as well as the top ports targeted by these attacks."

China and the U.S. were again the only two countries to originate more than 10 percent of the observed attack traffic during the fourth quarter, with the remaining regions and countries all below five percent. Germany (1.8 percent) and Hong Kong (1.3 percent) joined the top 10, while Indonesia and Venezuela fell off. India was the only remaining top 10 country to see observed traffic percentages decline, dropping from 2.9 percent in the third quarter to 2.4 percent in the last few months of the year.

"The overall concentration of observed attack traffic decreased in the fourth quarter, with the top 10 countries/regions originating 75% of observed attacks, down from 84% and 82% in the second and third quarters, respectively," according to the report.

Read the rest of this article on


Copyright 2010 Respective Author at Infosec Island]]>
Understanding Internet Protocol Security (IPsec) Wed, 25 Mar 2015 12:36:00 -0500 IPsec is the most popular form of VPN used today. It is important to understand how IPsec works in order to troubleshoot issues with IPsec tunnels. IPsec is an end-to-end security scheme, meaning data is encrypted on one end of the connection and decrypted on the other.

IPsec uses ESP (Encapsulating Security Payload) or AH (Authentication Header) to protect traffic; ESP encrypts the data, while AH provides only authentication and integrity.
ESP is the more popular method and is used in Barracuda VPNs.

Encapsulating Security Payload (ESP)

In tunnel mode, ESP encrypts the original IP header and payload, then adds a new IP header.

In transport mode, ESP does not create a new header; it simply encrypts the payload.

In both modes, an ESP header is added to the packet.
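That ESP header (RFC 4303) consists of just two 32-bit fields, the Security Parameters Index (SPI) and a sequence number, prepended to the protected data. A minimal sketch of packing it:

```python
# Pack the two fixed ESP header fields (RFC 4303): SPI and sequence number.
import struct

def build_esp_header(spi: int, seq: int) -> bytes:
    # Both fields are 32 bits, sent in network byte order; the encrypted
    # payload, padding, and integrity check value follow this header.
    return struct.pack("!II", spi, seq)

hdr = build_esp_header(spi=0x1000ABCD, seq=1)
assert len(hdr) == 8  # the ESP header itself is 8 bytes
```

The SPI is how the receiver selects the right security association, and the sequence number is what defeats packet replay.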

ESP divides the data into blocks, which it encrypts with the negotiated cipher, such as AES.

Phase one: IKE (Internet Key Exchange) phase 1

ISAKMP SA (phase1)

Phase 1 occurs on UDP port 500.
The server and client negotiate the encryption and authentication algorithms that will protect the exchange of the keys used during the transfer of data.

This phase requires the following:

  • An encryption algorithm – this determines the depth and type of encryption. (Keep in mind: the stronger the encryption, the slower the connection.) Common choices include:
    • AES (Advanced Encryption Standard) – the key size used for an AES cipher specifies the number of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of rounds is as follows:
      1. 10 rounds for 128-bit keys.
      2. 12 rounds for 192-bit keys.
      3. 14 rounds for 256-bit keys.
    • AES256 – considered the top encryption cipher, with the most combinations of possible keys.
    • 3DES (Triple DES) – applies the DES algorithm three times to each block.
    • DES – a 56-bit algorithm that is susceptible to brute-force attacks.
    • Blowfish – a symmetric-key block cipher; a good cipher that encrypts in blocks.
    • CAST – another symmetric-key block cipher.
  • A hashing algorithm:
    • SHA-1 (Secure Hash Algorithm) – a 160-bit hash.
    • MD5 (Message-Digest Algorithm) – a 128-bit hash.
  • A Diffie-Hellman (DH) group, also known as a MODP group on other site-to-site VPNs:
    • Group 1 – 768-bit.
    • Group 2 – 1024-bit.
    • Group 5 – 1536-bit.
  • Lifetime – the rekeying interval, typically expressed in seconds.
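The Diffie-Hellman groups listed above differ only in the size of the prime modulus; the exchange itself is simple modular exponentiation. It can be illustrated with a deliberately tiny modulus (real IKE groups use 768- to 1536-bit primes from the RFCs; the values below are toys, not an actual group):

```python
# Toy Diffie-Hellman exchange: both peers derive the same shared secret
# without ever transmitting it. NOT a real IKE MODP group -- the modulus
# is deliberately tiny, for illustration only.
import secrets

p = 4294967291          # a small prime (2**32 - 5), stand-in for a MODP prime
g = 2                   # generator

a = secrets.randbelow(p - 2) + 1   # initiator's private value
b = secrets.randbelow(p - 2) + 1   # responder's private value

A = pow(g, a, p)        # initiator sends g^a mod p
B = pow(g, b, p)        # responder sends g^b mod p

shared_initiator = pow(B, a, p)    # (g^b)^a mod p
shared_responder = pow(A, b, p)    # (g^a)^b mod p
assert shared_initiator == shared_responder
```

Only the public values A and B cross the wire, which is why a larger group (a bigger prime) directly buys more resistance to an eavesdropper recovering the shared secret.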

Phase two: IKE phase 2


Phase two also occurs over UDP port 500.

Phase two requires the same information that phase one needs and also provides one more layer of security in the form of authentication.

Authentication can take place through the following means:

  • PSK (pre-shared key) – the most commonly used method; simply a passphrase shared between the two ends.
  • Client certificate

After the two IKE phases have completed, data is transported through the established VPN tunnel.
The best way to troubleshoot IPsec is to look at a packet capture; the IPsec exchange is easy to see and identify there.

Here is a sample packet capture showing the ISAKMP information you will need when troubleshooting both ends of a site-to-site VPN tunnel.
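Each of those UDP/500 packets begins with a fixed 28-byte ISAKMP header (RFC 2408), which is where Wireshark reads the exchange type from. A rough sketch of parsing it in Python; the sample bytes below are fabricated for illustration, not taken from a real capture:

```python
# Parse the fixed 28-byte ISAKMP header (RFC 2408) from a UDP/500 payload.
import struct

def parse_isakmp_header(data: bytes) -> dict:
    (i_cookie, r_cookie, next_payload, version,
     exch_type, flags, msg_id, length) = struct.unpack("!8s8sBBBBII", data[:28])
    return {
        "initiator_cookie": i_cookie.hex(),
        "responder_cookie": r_cookie.hex(),
        "exchange_type": exch_type,  # 2 = Identity Protection (Main Mode)
        "flags": flags,
        "length": length,
    }

# Fabricated sample header with exchange type 2 (Identity Protection).
sample = struct.pack("!8s8sBBBBII", b"\x01" * 8, b"\x00" * 8, 1, 0x10, 2, 0, 0, 28)
hdr = parse_isakmp_header(sample)
assert hdr["exchange_type"] == 2
```

An all-zero responder cookie, as in this fabricated sample, is what the very first packet from the initiator looks like before the responder has answered.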

The packet you want to look at is “Identity Protection”; it shows all of the phase one settings used to negotiate phase one between the two ends of the tunnel. There are several packets of this type, so you will need to look at each until you find the one with the IKE attributes.

After opening the packet, expand the payload so you can see how each end is set up. You can also expand each IKE attribute to see its individual settings.
In this packet the initiating server proposes the listed protocols; it will be followed by an informational packet from the receiving server either accepting or rejecting them.

In the following packet the proposal was not chosen. This means the IKE phase 1 settings are not the same on both devices, and the receiving device is rejecting the proposed settings.

Understanding how IPsec works will help you set up and troubleshoot your site-to-site and client-to-site connections.

Packet captures are displayed in Wireshark.

This was cross-posted from Barracuda Networks' blog. 

Copyright 2010 Respective Author at Infosec Island]]>
CSIS Issues Recommendations for Threat Intelligence Sharing Wed, 25 Mar 2015 11:23:45 -0500 The Center for Strategic and International Studies (CSIS), a non-profit think tank which conducts research and analysis to develop policy initiatives, has issued a set of recommendations (PDF) for Congress and the Obama Administration regarding the steps that should be taken in order to increase the level of threat information sharing between the government and private sector.

After years of political wrangling, apprehensions about corporate liability, and a host of data privacy objections, Congress finally moved on the passage of some key cybersecurity legislation last December.

But the four bills that were approved in December did not address all of the top concerns, namely the creation of an information-sharing platform that would enable better exchange of information about cyber-based threats between the public and private sectors.

Similar legislation had died in the Senate last year, but President Obama opened the door for new proposals in his recent State of the Union address, and key Congressional committee members in both the House and Senate are introducing new legislation.

The main obstacle to the passage of information-sharing legislation is the concern that businesses may share too much private information about their customers with the government, an issue that has some civil liberties groups lined up to oppose any such legislation.

“Information sharing empowers organizations to take individual as well as collective action to reduce risks, deter attackers, and enhance overall resilience. Initially, cyber threat information sharing was conducted in an informal, ad hoc manner,” CSIS said.

“Today, sharing of cyber threat information between private companies and with government is more structured, frequent, and regular. However, there are still several outstanding legal and structural challenges to improved sharing, such as concerns about privacy, risk of liability, and the appropriate role of government.”

CSIS has issued the following recommendations to Congress and the Administration on how best to proceed in building a reliable and effective threat information sharing platform:

  • Recommendation 1: Sectors and industries have different risk profiles for cybersecurity. Stronger information-sharing arrangements with the government are appropriate for some private entities but not others.
    • “Each sector has unique needs for government involvement, and operates in a different regulatory environment. Information-sharing arrangements between government and private entities should be informed by a cost-benefit analysis that takes into account industry and sector risk profiles.”
  • Recommendation 2: Private-to-private sharing with a minimal role for government can help promote voluntary information sharing and alleviate privacy concerns.
    • “For most organizations, particularly those that store or process large amounts of personal information and communications, day-to-day sharing of cyber threat information should focus on private-to-private sharing without government involvement.”
  • Recommendation 3: Entities should make reasonable efforts to eliminate personal information that is irrelevant to the threat prior to sharing.
    • “Legislation should require companies to make reasonable efforts to eliminate PII that is irrelevant to the threat prior to sharing and provide liability protections to companies that take such measures.”
  • Recommendation 4: Build upon existing information-sharing organizations and mechanisms.
    • “Rather than creating duplicate entities for sharing, government should support operationalizing and maturing existing information-sharing organizations. For critical infrastructure sectors where ISACs do not exist, government should encourage private-sector efforts to form information sharing and analysis organizations.”
  • Recommendation 5: Streamline procedures for companies to share cyber threat information with the government as well as within and among sectors.
    • “Legislation should establish a standardized and streamlined process for companies to enter into collaborative information-sharing arrangements with the government.”
  • Recommendation 6: Cyber threat information shared voluntarily with the government should be protected from disclosure through Freedom of Information Act (FOIA) requests and barred from use in civil litigation or regulatory purposes.
    • Legislation should provide clear protection of voluntarily shared cyber threat information from disclosure through FOIA requests and from use in regulatory actions.
  • Recommendation 7: Identify ways for information sharing models to demonstrate value for all parties involved.
    • Effective cyber threat information must be actionable. It should be timely, accurate, relevant to the recipient, and specific enough for the recipient to take action in response to the threat.
  • Recommendation 8: Centralized and decentralized models for information sharing each have unique benefits. Government should encourage both models for sharing.
    • “Government should encourage both types of sharing and avoid prescribing one over the other.”
  • Recommendation 9: Information-sharing arrangements should take into account the type of information being shared. Sharing technical threat indicators poses little risk to privacy, disclosure of sensitive business information, or regulatory exposure. Sharing more sensitive contextual threat information poses a greater risk to individual privacy and to companies.
  • Recommendation 10: Permissible law enforcement uses of cyber threat information shared by companies with the government should be restricted to cybersecurity purposes and a limited set of other activities.
    • “Cyber threat information voluntarily shared by private entities with government should be limited to use for cybersecurity purposes and to a limited set of other circumstances, such as to prevent or mitigate imminent threat of death or bodily harm.”
  • Recommendation 11: Legislation should authorize monitoring and sharing of cyber threat information, and provide a safe harbor from civil and criminal liability for good-faith actions in conducting such activities.
    • “Legislation should provide explicit authorization to share cyber threat information and a safe harbor from liability for sharing in good faith. It should also seek to reduce legal uncertainty around lawful countermeasures.”
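Recommendation 3’s “reasonable efforts” to strip irrelevant personal information could be as simple as filtering known PII fields out of an indicator before it is shared. The field names below are invented for illustration and are not drawn from any sharing standard:

```python
# Hedged sketch of Recommendation 3: remove PII that is irrelevant to the
# threat before sharing an indicator. Field names are hypothetical.
PII_FIELDS = {"victim_email", "employee_name", "customer_id"}

def scrub_indicator(indicator: dict) -> dict:
    """Return a copy of the indicator with known PII fields removed."""
    return {k: v for k, v in indicator.items() if k not in PII_FIELDS}

raw = {
    "malicious_ip": "203.0.113.99",
    "attack_type": "credential phishing",
    "victim_email": "alice@example.com",  # irrelevant to the threat itself
}
shared = scrub_indicator(raw)
assert "victim_email" not in shared
```

A deny-list like this is only a starting point; a production pipeline would also scan free-text fields, but even this minimal step changes the liability calculus the recommendation describes.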

“Improved cyber threat information sharing has many benefits, but information sharing only provides a means for achieving specific goals and outcomes; it is not an end in itself,” CSIS concluded.

“As such, government and companies should articulate the objectives and goals for information sharing, and tailor mechanisms for information sharing to achieve those goals.”

This was cross-posted from the Dark Matters blog. 

Copyright 2010 Respective Author at Infosec Island]]>
5 Social Engineering Attacks to Watch Out For Wed, 25 Mar 2015 11:21:24 -0500 By: David Bisson

We have become all too familiar with the type of attacker who leverages their technical expertise to infiltrate protected computer systems and compromise sensitive data. We hear about this breed of hacker in the news all the time, and we are motivated to counter their exploits by investing in new technologies that will bolster our network defenses.

However, there is another type of attacker who can use their tactics to skirt our tools and solutions. They are the social engineers, hackers who exploit the one weakness that is found in each and every organization: human psychology. Using a variety of media, including phone calls and social media, these attackers trick people into offering them access to sensitive information.

Social engineering is a term that encompasses a broad spectrum of malicious activity. For the purposes of this article, however, we will focus on the five most common attack types that social engineers use to target their victims: phishing, pretexting, baiting, quid pro quo and tailgating.

Phishing scams might be the most common type of social engineering attack used today. Most phishing scams demonstrate the following characteristics:

  • Seek to obtain personal information, such as names, addresses and Social Security numbers.
  • Use link shorteners or embedded links that redirect users to suspicious websites via URLs that appear legitimate.
  • Incorporate threats, fear and a sense of urgency in an attempt to manipulate the user into acting promptly.
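The second characteristic, disguised or shortened links, lends itself to a simple heuristic check; the shortener list below is a small assumed sample, not an exhaustive blocklist:

```python
# Illustrative heuristic: flag a link when it uses a known URL shortener,
# or when the visible link text is itself a URL whose host differs from
# the real destination. The shortener set is a small assumed sample.
from urllib.parse import urlparse

SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co"}

def looks_suspicious(display_text: str, href: str) -> bool:
    host = (urlparse(href).hostname or "").lower()
    if host in SHORTENERS:
        return True
    if display_text.startswith("http"):
        shown = (urlparse(display_text).hostname or "").lower()
        if shown != host:
            return True  # text shows one site, link goes to another
    return False

assert looks_suspicious("https://yourbank.example", "http://bit.ly/x9z2")
assert not looks_suspicious("Click here", "https://yourbank.example/login")
```

Mail filters use far richer signals, but the text-versus-destination mismatch alone catches a surprising share of phishing links.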

Some phishing emails are more poorly crafted than others, to the extent that their messages often exhibit spelling and grammar errors, but these emails are no less focused on directing victims to a fake website or form where login credentials and other personal information can be stolen.

A recent scam sent phishing emails to users after they installed cracked APK files from Google Play Books that were pre-loaded with malware. This specific phishing campaign demonstrates how attackers commonly pair malware with phishing attacks in an effort to steal users’ information.

Pretexting is another form of social engineering where attackers focus on creating a good pretext, or a fabricated scenario, that they can use to try and steal their victims’ personal information. These types of attacks commonly take the form of a scammer who pretends that they need certain bits of information from their target in order to confirm their identity.

More advanced attacks will also try to manipulate their targets into performing an action that enables them to exploit the structural weaknesses of an organization or company. A good example of this would be an attacker who impersonates an external IT services auditor and manipulates a company’s physical security staff into letting them into the building.

Unlike phishing emails, which use fear and urgency to their advantage, pretexting attacks rely on building a false sense of trust with the victim. This requires the attacker to build a credible story that leaves little room for doubt on the part of their target.

Pretexting attacks are commonly used to gain both sensitive and non-sensitive information. Back in October, for instance, a group of scammers posed as representatives from modeling agencies and escort services, invented fake background stories and interview questions in order to have women, including teenage girls, send them nude pictures of themselves.

Baiting is in many ways similar to phishing attacks. However, what distinguishes baiting from other types of social engineering is the promise of an item or good that hackers use to entice victims. Baiters may offer users free music or movie downloads if they surrender their login credentials to a certain site.

Baiting attacks are not restricted to online schemes, either. Attackers can also focus on exploiting human curiosity via the use of physical media.

One such attack was documented by Steve Stasiukonis, VP and founder of Secure Network Technologies, Inc., back in 2006. To assess the security of a financial client, Steve and his team infected dozens of USBs with a Trojan virus and dispersed them around the organization’s parking lot. Curious, many of the client’s employees picked up the USBs and plugged them into their computers, which activated a keylogger and gave Steve access to a number of employees’ login credentials.

Similarly, quid pro quo attacks promise a benefit in exchange for information. This benefit usually assumes the form of a service, whereas baiting frequently takes the form of a good.

One of the most common types of quid pro quo attack involves fraudsters who impersonate IT service people and spam-call as many direct company numbers as they can find. These attackers offer IT assistance to each of their victims, promising a quick fix in exchange for the employee disabling their antivirus program and installing malware, disguised as software updates, on their computer.

It is important to note, however, that attackers can use much less sophisticated quid pro quo offers than IT fixes. As real world examples have shown, office workers are more than willing to give away their passwords for a cheap pen or even a bar of chocolate.

Another social engineering attack type is known as tailgating or “piggybacking.” These types of attacks involve someone who lacks the proper authentication following an employee into a restricted area.

In a common type of tailgating attack, a person impersonates a delivery driver and waits outside a building. When an employee gains security’s approval and opens the door, the attacker asks the employee to hold it, thereby gaining access through someone who is authorized to enter the company.

Tailgating does not work in all corporate settings, such as in larger companies where all persons entering a building are required to swipe a card. However, in mid-size enterprises, attackers can strike up conversations with employees and use this show of familiarity to successfully get past the front desk.

In fact, Colin Greenless, a security consultant at Siemens Enterprise Communications, used these same tactics to gain access to several different floors, as well as the data room, at an FTSE-listed financial firm. He was even able to base himself in a third-floor meeting room, out of which he worked for several days.

Hackers who engage in social engineering attacks prey on human psychology and curiosity to compromise their targets’ information. With this human-centric focus in mind, it is up to users and employees to counter these types of attacks.

Here are a few tips on how users can avoid social engineering schemes:

  • Do not open any emails from untrusted sources. If you receive a message from a friend or family member that seems out of character, contact them in person or by phone to verify it before acting on it.
  • Do not give offers from strangers the benefit of the doubt. If they seem too good to be true, they probably are.
  • Lock your laptop whenever you are away from your workstation.
  • Purchase anti-virus software. No AV solution can defend against every threat that seeks to jeopardize users’ information, but it can help protect against many.
  • Read your company’s privacy policy to understand under what circumstances you can or should let a stranger into the building.

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
Air-Gapped Computers Can Communicate Through Heat: Researchers Tue, 24 Mar 2015 11:41:16 -0500 Researchers at the Ben Gurion University in Israel have demonstrated that two computers in close proximity to each other can communicate using heat emissions and built-in thermal sensors.

In an experimental scenario involving two devices placed up to 15 inches apart, the researchers managed to transmit up to 8 bits of data per hour, which is enough to exfiltrate sensitive data such as passwords and secret keys, and to send commands. This novel attack method has been dubbed BitWhisper.
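To put that 8-bits-per-hour rate in perspective, a quick back-of-the-envelope calculation shows why even so slow a channel suffices for small secrets. The secret sizes below are illustrative assumptions, not figures from the research:

```python
# Estimate BitWhisper exfiltration times at the reported rate of 8 bits/hour.
BITS_PER_HOUR = 8

def hours_to_exfiltrate(num_bytes: int) -> float:
    """Return the hours needed to transmit num_bytes over the covert channel."""
    return (num_bytes * 8) / BITS_PER_HOUR

# An 8-character ASCII password is 8 bytes (64 bits): about 8 hours.
print(hours_to_exfiltrate(8))    # 8.0
# A 128-bit AES key is 16 bytes: about 16 hours.
print(hours_to_exfiltrate(16))   # 16.0
```

Overnight, in other words, an attacker could plausibly leak a password or a symmetric key from an air-gapped machine, which is why the researchers consider the low bandwidth adequate for its purpose.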

It is not uncommon for organizations that handle highly sensitive information to isolate certain computers in order to protect valuable assets. Air-gap security is often used for industrial control systems (ICS) and military networks. However, as it has been demonstrated before, such as in the case of the notorious Stuxnet worm which targeted Iranian nuclear facilities, air-gap security can be breached.

Over the past months, Ben Gurion University researchers have analyzed several techniques that can be leveraged to exfiltrate data from an air-gapped computer, including by using radio signals emitted by a device’s graphics card, and by using a multifunctional printer to receive and transmit data.

Read the rest of this story on 

Copyright 2010 Respective Author at Infosec Island]]>