Hacker Myths Debunked
https://www.infosecisland.com/blogview/24042-Hacker-Myths-Debunked.html
Mon, 20 Oct 2014 12:12:23 -0500
By: David Bisson

We’ve been hearing a lot about hackers recently, mostly in connection to serious data breaches. We think of hackers compromising the nude photographs of popular female celebrities, including Jennifer Lawrence and Kate Upton. We think of them stealing 56 million Home Depot customers’ credit card information. Or using Backoff malware to infiltrate Kmart or Dairy Queen.

All of these incidents teach us to think of hackers as nefarious individuals. They will stop at nothing to degrade our privacy, steal our identities, and ruin our experiences in cyberspace. Their craft is dishonorable, and so they deserve to be hated—and feared.

But is this stereotype accurate? Are all hackers like this?

In honor of National Cyber Security Awareness Month, which aims to improve user awareness about cyber threats online, below we problematize some of the most common hacker stereotypes we’ve come to learn and love. We do this in an effort to appreciate hacking for the complicated, variable and highly individualized practice that it is.

   Myth #1: Hackers Are Maladjusted Young People Who Live In Their Mothers’ Basements

We all know this one quite well. Some of the most dangerous hackers—the myth goes—wear black T-shirts, have long hair and are under 30 years of age. They spend all of their time on the computer – a passion which they use to isolate themselves from the rest of society. They are weird and maladjusted, which helps to explain why they want to do what they do.

Sure, there might be hackers that fit this stereotype, but countless others do not. Take the idea that hackers spend endless hours at the computer—this is a common misperception of computer scientists that, despite its wide appeal, doesn’t hold any water. In fact, many hackers have balanced relationships with their computers, while others even have “day jobs” and just hack on the side.

Hackers can have healthy relationships with their peers and families and have proven records of academic excellence in school. Some may be young, but others are not, having spent decades accumulating their technical expertise. Many are well-adjusted to society, which in one light could make some hackers more dangerous.

John Walker, CTO of the Cytelligence Cyber Forensics OSINT Platform and a Blogger for Tripwire, explains: “There are [some] in our midst equally dangerous and very well accomplished over a number of years in which they have learned their trade, honed their skills, and could just be that guy sitting next to you in your office – so think again, don’t make too many preconceived judgements, and remember to consider the ‘Unusual Suspect Factor.’”

Myth #2: Hacking Is A “Boys Only” Club

Hacking may be a predominantly male activity, but that doesn’t mean that there aren’t female hackers out there. For instance, a loose, 22-year-old club of female hackers known as Haecksen (German for “witch”) helped organize the Chaos Computer Club (CCC) Congress in 2010.

Other female hackers have spoken at DefCon or write viruses that destroy information instead of stealing it. We might hear the most about male hackers, but women are just as active in hacking communities.

Myth #3: All Hackers Are Masters of Their Craft

The way we paint hackers today elevates them to a level of unmatched technical prowess. Using this platform of expertise, they compromise any system they want with ease, regardless of whatever security protocols may be in place. As a result, we information security professionals are forced to play defense against these computer masters.

Mark Stanislav, Security Project Manager at Duo Security, explains this is not always the case: “Manipulation of systems is often as predictable as watching the sunrise from the east every morning. After enough practice and/or education, a hacker of a specific context can likely say, ‘Oh, I’d totally try to do XYZ to hack that’ given a scenario.”

Additionally, not all hackers are necessarily skilled computer programmers. Sometimes all hackers need to know is where to look with respect to a particular system configuration or maybe they let a tool do that for them, despite having minimal understanding of how the tool works. Ultimately, we all know that it doesn’t take a computer expert to break into a network.

Myth #4: All Hacking Is Bad

The notion that all hackers intend to cause harm is one of the biggest hacking myths today. Lamar Bailey, Director of Security R&D at Tripwire, says:

“Hacking systems to gain access to data or features that are denied to the current user is the most popular definition that most people think of when it comes to hackers, but it goes much deeper. Hacking hardware to add new features has become a very popular way to extend the life and increase the security of all devices in our homes.”

Ultimately, hacking has less to do with compromising data than with developing creative solutions to technical problems. Ken Westin of Tripwire rightly notes this fact: “Hacking is about understanding the underlying nature of technology—knowing specifically how things work from a high level all the way down to its most granular components. When you fully understand how things work, there is power in being able to manipulate it, shape it and utilize it in ways it may not have been intended to.”

In this sense, hacking, like many other things, comes down to intentions. Ethical hacking can improve the security of various products, whereas malicious hacking seeks to undermine data integrity. It’s how people hack which shapes the nature of a particular incident.

Hacking In All Its Colors

We hear a lot about hackers these days, but mainly those who are after people’s personal and financial information. The majority of hackers out there aren’t social miscreants who are technical masters bent on shutting down the Internet. They may be less knowledgeable, or they may be in the hacking business for the sake of computer security. The sooner we realize hacking’s variability, the sooner we can champion the whitehats who are helping to protect us, and the sooner we can broaden our focus to target those who threaten our security online.

This was cross-posted from Tripwire's The State of Security blog.

The ASV Process Is Broken – Part 1
https://www.infosecisland.com/blogview/24040-The-ASV-Process-Is-Broken--Part-1.html
Mon, 20 Oct 2014 10:34:25 -0500

The topic of ASV scanning came up as usual at the 2014 PCI Community Meeting.  The questions all seemed to revolve around how to obtain a passing scan.  What the Council representatives suggested is that multiple scans can be put together to create a passing scan.  Unfortunately, what the Council keeps suggesting as the solution is impossible to implement and here is why.

In a typical environment, an ASV customer logs onto their account with the ASV and schedules their ASV scans of their PCI in-scope assets.  The customer may also add or remove IP addresses from the scans as the scope of their external environment changes.  Depending on a number of factors, there may be one scan or multiple scans.  The vulnerability scans are executed on the schedule and the results are returned to the customer.

If there are false positive results or results the customer does not agree with, they can appeal to the ASV to have those results removed.  If there are actual vulnerabilities, the customer can explain to the ASV how they have mitigated those vulnerabilities, and the ASV can either accept those mitigations and give the customer a passing scan or let the results stand.

So where are the problems?

Whether the Council acted on facts that cheating was occurring or only on anecdotal evidence is unknown.  But because of the potential for cheating by customers, the Council mandated a number of years ago that ASVs lock down their scanning solutions so that customers cannot modify anything regarding testing other than the IP addresses involved.  The ASV Program Guide v2.0, on page 11, states:

“However, only an authorized ASV employee is permitted to configure any settings (for example, modify or disable any vulnerability checks, assign severity levels, alter scan parameters, etc), or modify the output of the scan.  Additionally, the ASV scan solution must not provide the ability for anyone other than an authorized ASV employee to alter or edit any reports, or reinterpret any results.”

So right off the bat, the Council’s recommendation of “putting together multiple reports” is not as easily accomplished based on their earlier directives.  That is because it will require the ASV’s customer to get the ASV to agree to put together multiple reports so that they can achieve a passing scan.  That implies that the ASV’s solution will even accommodate that request, but then the ASV needs to be agreeable to even do that task.  Based on the Council’s concerns regarding manipulation of scanning results and the threat of the Council putting ASVs in remediation, I do not believe the ASVs will be agreeable to combining reports as that would clearly be manipulating results to achieve a passing scan.

But it gets worse.  As a lot of people have experienced, they can scan one day and get a passing scan and then scan a day or even hours later and get a failing scan.  The reason this happens is that the vulnerability scanning vendors are adding vulnerabilities to their signature sets as soon as they can, sometimes even before vendors have a patch.  As a result, it is very easy to encounter different results from scan to scan including failing due to a vulnerability that does not yet have a solution or the vendor only just provided a patch.

But if that is not enough, it gets even worse.  Statistically, the odds of getting a passing scan are slim to none and get even worse if you are only doing quarterly scanning.  A review of the National Vulnerability Database (NVD) shows that 94% of vulnerabilities from 2002 to 2014 have a common vulnerability scoring system (CVSS) score of 4.0 or greater.  That means that it is almost impossible to obtain a passing vulnerability scan, particularly if you are only scanning quarterly, when vulnerabilities are announced almost daily and vendors such as Microsoft are coming out monthly with patches.  Those of you scanning monthly can attest that even on a 30 day schedule, a passing scan is nearly impossible to get.
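To make the math concrete, here is a minimal sketch (my own illustration, not any ASV’s actual code) of the pass/fail rule at work: under the ASV program, any detected vulnerability with a CVSS base score of 4.0 or higher fails the scan for that component.  The findings list below is hypothetical.

    # Sketch of the ASV pass/fail logic described above: any finding with a
    # CVSS base score of 4.0 or higher fails the scan. Data is illustrative.
    FAILING_CVSS_THRESHOLD = 4.0

    def scan_passes(findings):
        """Return True only if every finding scores below the threshold."""
        return all(f["cvss"] < FAILING_CVSS_THRESHOLD for f in findings)

    if __name__ == "__main__":
        results = [
            {"ip": "203.0.113.10", "cve": "CVE-2014-0160", "cvss": 5.0},  # Heartbleed
            {"ip": "203.0.113.11", "cve": "CVE-2014-3566", "cvss": 4.3},  # POODLE
        ]
        print("Passing scan" if scan_passes(results) else "Failing scan")
        # With ~94% of published CVEs scoring 4.0 or above, one unpatched issue
        # anywhere in scope is enough to turn the whole quarter's scan red.

With new vulnerabilities disclosed almost daily, the only practical way to keep that list empty at scan time is continuous patching or documented mitigations the ASV will accept.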

For an organization that has only one Web site, this situation is likely not a problem.  But when organizations have multiple Web sites, as a lot of organizations large and small do, getting passing scans across all of them becomes a real struggle.

But let us add insult to injury.  A lot of organizations have their eCommerce environments running on platforms such as Oracle eCommerce or IBM WebSphere.  In those cases, this situation becomes a nightmare.

Platforms such as those from Oracle and IBM may run on Windows or Linux, but Oracle and IBM do not allow the customer to patch those underlying OSes as they choose.  These vendors ship, quarterly, semi-annually or on some other schedule, a full update that patches not only their eCommerce frameworks but also the underlying OS.  The vendors test the full compatibility of their updates to ensure that the update will not break their frameworks.  In today’s 24x7x365 world, these vendors can run into serious issues if eCommerce sites stop functioning due to an update.  However, that also means there is the possibility that critical patches may be left out of an update for compatibility and stability reasons.  As a result, it is not surprising that in some updates vulnerabilities may still be present, both new ones and ones that have been around for a while.

But if Oracle and IBM are not patching on 30 day schedules, that means there is a high likelihood that the scans will not be passing.  This means that the customer must go to their ASV with compensating controls (CCW) to mitigate these vulnerabilities to obtain passing scans.

The bottom line is that the deck is stacked against an organization obtaining a passing scan.  While the Council and the card brands do not recognize this, the rest of the world sure has come to that determination.

In Part 2, I will discuss the whole ASV approach and how I believe the drive to be the cheapest has turned the ASV process into a mess.

This was cross-posted from the PCI Guru blog.

Last Chance to Register for 2014 ICS Cyber Security Conference
https://www.infosecisland.com/blogview/24038-Last-Chance-to-Register-for-2014-ICS-Cyber-Security-Conference.html
Fri, 17 Oct 2014 13:21:00 -0500

ATLANTA--(BUSINESS WIRE)--On Monday, October 20, 2014, attendees from around the world will gather in Atlanta, Georgia for the 2014 Industrial Control Systems (ICS) Cyber Security Conference.


Held from October 20 – 23, 2014 at the Georgia Tech Hotel and Conference Center, the conference will offer hundreds of professionals a robust exchange of technical information, actual incidents, insights, and best practices to help protect critical infrastructures from cyber attacks.

As the longest-running cyber security-focused conference for the industrial control systems sector, the event will cater to the energy, utility, chemical, transportation, manufacturing, and other industrial and critical infrastructure organizations.

Produced by SecurityWeek, the conference will address the myriad cyber threats facing operators of ICS around the world, covering topics that include protection for SCADA systems, plant control systems, engineering workstations, substation equipment, programmable logic controllers (PLCs), and other field control system devices.

The conference is unique in that it has historically focused on control system end-users from various industries and on what cyber vulnerabilities mean for control system reliability and safe operation. It also has a history of frank discussion of actual ICS cyber incidents.

Online registration is still available, and full conference passes include educational workshops on Monday.

“The IT community and ICS community are now bridging the security gap,” said Joe Weiss, founder of the ICS Cyber Security Conference, which was recently acquired by SecurityWeek. “It has become paramount that critical infrastructures balance the needs of ICS reliability and safety with cyber security,” Weiss added.

Sponsors of the conference include Honeywell, Ultra Electronics 3eTI, BAE Systems, Lockheed Martin, Siemens, Waterfall Security, Symantec, Check Point Software Technologies, PFP Cybersecurity and Kaspersky Lab.

For more information and online registration visit:
http://www.icscybersecurityconference.com

About the ICS Cyber Security conference:

The ICS Cyber Security Conference is the conference where ICS users, ICS vendors, system security providers and government representatives meet to discuss the latest cyber-incidents, analyze their causes and cooperate on solutions. Since its first edition in 2002, the conference has attracted a continually rising interest as both the stakes of critical infrastructure protection and the distinctiveness of securing ICSs become increasingly apparent.

About SecurityWeek:
The mission of SecurityWeek is to help information security professionals do their jobs better by providing timely and insightful news, information, viewpoints, analysis and experiences from experts in the trenches of information security. With content written by industry professionals and a seasoned news team, SecurityWeek produces and distributes insightful and useful content and data to information security professionals around the globe. Visit www.securityweek.com for more information.

The Chinese Truly are Attacking our Critical Infrastructure
https://www.infosecisland.com/blogview/24037-The-Chinese-Truly-are-Attacking-our-Critical-Infrastructure.html
Fri, 17 Oct 2014 09:10:16 -0500

There have been many reports of the Chinese and others attacking our critical infrastructure. Last year, Kyle Wilhoit from Trend Micro developed a control system honeypot representing a small water utility in rural Missouri and then identified the attackers, some of whom were from China. Bob Radvanovsky from Infracritical took a similar approach and the results are astounding. He acquired some RuggedCom switches from eBay and set up a network emulating a well pumping station. Within two hours of connecting the systems, he was being attacked, primarily from China. This is even more interesting when you realize that, when the attacks started, the honeypot had not yet appeared on Shodan. This shows the level of monitoring going on in China. Bob will be discussing these results October 21st at the ICS Cyber Security Conference in Atlanta.

 

Acting on MSSP Alerts
https://www.infosecisland.com/blogview/24036-Acting-on-MSSP-Alerts.html
Thu, 16 Oct 2014 10:24:00 -0500

Have you seen the funnel lately?

[Image: the alert funnel (source: https://flic.kr/p/fxKbT)]

In any case, while you are contemplating the funnel, also ponder this:

What do you get from your MSSP: ALERTS or INCIDENTS? [If you are getting LOGS from them, please reconsider paying any money for their services.]

What’s the difference? Security incidents call for an immediate incident response (by definition), while alerts need to be reviewed via an alert triage process in order to decide whether they indicate an incident, a minor “trouble” to be resolved immediately, a false alarm or a cause to change the alerting rules in order to not see it ever again. Here is an example triage process:

[Image: example alert triage workflow (source: http://bit.ly/1wCI0dt)]
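In code, the same decision could be sketched roughly like this (my own simplification, assuming four outcomes and deliberately simplified decision inputs; it is not the exact logic of the diagram):

    # Rough sketch of the triage outcomes described above. The boolean inputs
    # are stand-ins for a real analyst's judgment calls.
    from enum import Enum

    class Outcome(Enum):
        DECLARE_INCIDENT = "activate the incident response process"
        FIX_MINOR_TROUBLE = "resolve immediately, no IR needed"
        TUNE_RULE = "change the alerting rule so it never fires again"
        FALSE_ALARM = "close with no action"

    def triage(confirmed_malicious: bool, business_impact: bool,
               recurring_noise: bool) -> Outcome:
        if confirmed_malicious and business_impact:
            return Outcome.DECLARE_INCIDENT
        if confirmed_malicious:
            return Outcome.FIX_MINOR_TROUBLE
        if recurring_noise:
            return Outcome.TUNE_RULE
        return Outcome.FALSE_ALARM

    if __name__ == "__main__":
        print(triage(confirmed_malicious=False, business_impact=False,
                     recurring_noise=True))  # Outcome.TUNE_RULE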

Now, personally, I have an issue with a situation when an MSSP is tasked with declaring an incident for you. As you can learn from our incident response research, declaring a security incident is a big decision made by several stakeholders (see examples of incident definitions here). If your MSSP partner has a consistent history of sending you the alerts that always lead to incident declaration (!) and IR process activation – this is marvelous. However, I am willing to bet that such a “perfect” track record is achieved at a heavy cost of false negatives i.e. not being informed about many potential problems.

So, it is most likely that you get ALERTS. Now, a bonus question: whose job is it to triage the alerts to decide on the appropriate action?


Think harder – whose job is it to triage the alerts that MSSPs sends you?

Once you have figured out that it is indeed the job of the MSSP customer, how do you think they should go about it? Some of the data needed to triage the alert may be in the alert itself (such as a destination IP address), while some may be in other systems or data repositories. Some of said systems may be available to your MSSP for access (example: your Active Directory) and some are very unlikely to be (example: your HR automation platform). So, a good MSSP will actually triage the alerts coming from their technology platform to the best of their ability – they do have the analysts and some of the data, after all. So, think of MSSP alerts as “half-triaged” alerts that require further triage (a rough code sketch of these fidelity tiers follows the example list below).

For example:

  • NIPS alerts + firewall log data showing all sessions between the IP pair + logs from an attack target + business role of a target (all of these may be available to the MSSP) = high-fidelity alert that arrives from the MSSP; it can probably be acted upon without much analysis
  • NIPS alerts + firewall log data (these are often available to the MSSP) = “half-triaged” alerts that often need additional work by the customer
  • NIPS alerts (occasionally these are the only data available) = study this Wikipedia entry on GIGO.
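The three tiers above can be expressed as a crude fidelity score (a sketch of my own, with made-up field and context names, not any vendor's schema); the point is simply that fidelity rises with the context the MSSP can attach:

    # Crude mapping of available enrichment context to alert fidelity.
    from dataclasses import dataclass, field

    @dataclass
    class MsspAlert:
        source: str                                   # e.g. "NIPS"
        dest_ip: str
        context: set = field(default_factory=set)     # enrichment data attached

    def fidelity(alert: MsspAlert) -> str:
        full = {"firewall_sessions", "target_logs", "business_role"}
        if full <= alert.context:
            return "high: can probably be acted upon without much analysis"
        if "firewall_sessions" in alert.context:
            return "medium: half-triaged, the customer must finish the triage"
        return "low: garbage in, garbage out - gather more data first"

    if __name__ == "__main__":
        alert = MsspAlert(source="NIPS", dest_ip="10.1.2.3",
                          context={"firewall_sessions"})
        print(fidelity(alert))  # medium: half-triaged, ...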

A revelation: MSSPs are in the business of … eh… business. So, MSSP analysts are expected to deliver on the promise of cost-effectiveness. Therefore, the quality of their triage will depend on the effectiveness of their technology platform, available data (that customers provide to them!), skills of a particular analyst and – yes! – expected effort/time to be spent on each alert (BTW, fast may mean effective, but it may also mean sloppy. Slow may mean the same…)

Another revelation: MSSP success with alert triage will heavily depend on the data available to their tools and analysts. As a funny aside, will you go into this business: I will send you my NIDS alerts only (and provide no other data about my IT, business, etc.) and then offer to pay you $1,000,000 if you only send me the alerts that I really care about and that justify an immediate action in my environment. Huh? No takers?

So, how would an MSSP customer triage those alerts? They need (surprise!):

  • People i.e. security analysts who are willing and capable of triaging alerts
  • Data i.e. logs, flows, system context data, business context, etc that can be used to understand the impact of the alerts.

The result may look like this:

[Image: MSSP alert handling workflow]

Mind you that some of the systems that house the data useful for alert triage (and IR!) are the same systems you can use for monitoring – but you outsourced the monitoring to the MSSP. Eh… that can be a bit of a problem :-) That is why many MSSP clients prefer to keep their own local log storage inside a cheap log management tool – not great for monitoring, but handy for IR.

Shockingly, I have personally heard about cases where MSSP clients were ignoring 100% of their MSSP alerts, had them sent to an unattended mailbox or hired an intern to delete them on a weekly basis (yup!). This may mean that their MSSP was no good, or that they didn’t give them enough data to do their job well… As a result, your choice is:

  • you can give more data to an MSSP and [if they are any good!] you will get better alerts that require less work on your behalf, or
  • you can give them the bare minimum and then complain about poor relevance of alerts (in this case, you will get no sympathy from me, BTW)

And finally, an extra credit question: if your MSSP offers incident response services that costs extra, will you call them when you have an incident that said MSSP failed to detect?! Ponder this one…

This was cross-posted from the Gartner blog.

When Remote Access Becomes Your Enemy
https://www.infosecisland.com/blogview/24035-When-Remote-Access-Becomes-Your-Enemy.html
Thu, 16 Oct 2014 10:15:13 -0500

As convenient as it would be for businesses to have all their IT service providers working on-site, just down the hall, that’s not always possible. That’s why secure remote access is a component frequently found in the digital toolboxes of service providers that offer maintenance, troubleshooting and support from locations other than where the product or system is being used.

This arrangement makes sense: It saves enterprises time and money.

Yet, that doesn’t mean remote access is always foolproof. Although it’s long been possible to securely implement remote access, sloppy work and carelessness have increasingly created critical vulnerabilities.

In April 2013, for example, it became possible to damage Vaillant Group ecoPower 1.0 heating systems by exploiting a highly critical security hole in the remote maintenance module. The vendor advised customers to simply pull the network plug and wait for the visit of a service technician.

About one year later, AVM, the maker of the Fritz!Box router, also suffered a security vulnerability. For a time, it was possible to gain remote access to routers and, via the phone port functionality, to make phone calls that were sometimes extremely expensive. Only remote access users were affected.

Then, in August 2014, Synology, a network attached storage (NAS) supplier, was affected. In this case, it was possible to gain control over the entire NAS server data through a remote access point.

Finally, at this year’s Black Hat conference in August, two security researchers revealed that up to 2 billion smartphones could be easily attacked through security gaps in software.

It’s clear that these attacks and vulnerabilities are all part of a trend – and they speak to the importance of businesses eliminating remote access security gaps.

Who is Responsible for Securing Remote Access?

There’s no doubt that remote access is an important network feature. IT support speed and troubleshooting capability would be greatly hampered without remote access. It is also needed for mobile workers to establish connections to their corporate networks via a VPN.

VPNs by design are secure and when users implement, maintain and utilize them properly, the technology works perfectly. However, security lapses may occur in cases where a user is unaware that secure remote access has been provided, i.e. it’s more or less a hidden feature, or he does not show any interest in it.

In the Fritz!Box case, the critical issue of increasing digitization in private environments could be seen very clearly. Despite the problem being reported by numerous media outlets and the vendor quickly releasing a firmware update, tens of thousands of routers were still affected, many of them weeks later.

Unfortunately for IT administrators responsible for network security, not every Internet user reads computer magazines and stays up-to-date with information from various news services. Not every router owner has the tech savvy or feels comfortable updating device firmware. They may do the bare minimum – understand the purpose of a VPN and comply with the necessary security policies – but what if they don’t? Or what if they aren’t even aware of security measures?

The value of VPN solutions is that they provide a layer of security protection for when users unknowingly create security vulnerabilities. This means IT administrators are responsible for improving the security of remote access by using up-to-date, approved technology and implementing automated update procedures that fix reported bugs quickly and without user intervention.

This was cross-posted from the VPN HAUS blog.

 

Security Companies Hit Hikit Backdoor Used by APT Group
https://www.infosecisland.com/blogview/24033-Security-Companies-Hit-Hikit-Backdoor-Used-by-APT-Group.html
Thu, 16 Oct 2014 01:30:00 -0500

[SecurityWeek] - A coordinated effort by security companies has struck a blow against malware tools used by a cyber-espionage group known as Hidden Lynx.

Hidden Lynx is believed to be based in China and has been tied to attacks against U.S. defense contractors and other organizations around the world. In a collaboration dubbed 'Operation SMN', researchers from a number of companies joined forces to target the Hikit backdoor and other malware used by the group.

The effort was coordinated by security firm Novetta as part of Microsoft's new Coordinated Malware Eradication program, and also involved Symantec, Cisco Systems, FireEye, F-Secure, iSight Partners, ThreatConnect, Tenable, Microsoft, ThreatTrack Security and Volexity. A report with technical details about the effort is set to be released Oct. 28.

"We felt it was important to take action proactively in coordination with our coalition security industry partners," said Novetta CEO Peter B. LaMontagne, in a statement. "The cumulative effect of such coordinated approaches could prove quite disruptive to the adversaries in question and mitigate some of the threat activity that plagues the joint customer base of this coalition."

Read the full report on SecurityWeek. 

Spying Flashlight Apps Reveal User Inattentiveness to Cyber Security
https://www.infosecisland.com/blogview/24032-Spying-Flashlight-Apps-Reveal-User-Inattentiveness-to-Cyber-Security.html
Wed, 15 Oct 2014 12:27:00 -0500
By: David Bisson

Our smartphones have become a tool that most of us admit we could not live without. After only a few taps on our screen, we can monitor our inbox, our bank account, our social media networks and now, even our homes.

What we often don’t realize, however, is the amount of personal information our phones actually store and how easily accessible we make this data, not only for ourselves, but for others, too. A recent Android study proves many of us are likely not careful enough.

A group of researchers at Snoopwall—a company whose technology detects and blocks spyware and malware on a variety of platforms—found that the most widely used flashlight apps are furtively stealing personal information stored on users’ mobile devices.

According to the company’s Threat Assessment Report, the top 10 most-searched flashlight apps in the Google Play Store all perform functions that go far beyond what a flashlight app needs to do.

[Image: screenshot of a “flashlight app” keyword search in Google Play, displaying the malicious flashlight apps detected thus far]

These seemingly harmless apps, which have accumulated half a billion downloads, have put the privacy and security of users at risk simply by requesting overzealous permissions that users unknowingly agree to, including permission to:

  • Modify or delete the contents of your USB storage
  • Change system display settings
  • Precise location (GPS and network-based)
  • Write Home settings and shortcuts
  • View all network connections

For Ken Westin, a security researcher at Tripwire, this is all too familiar: “There is little vetting of applications before they are deployed. When you install an Android app, it shows you what it has permissions to access, but most people ignore it and just click next to get the app installed. There are a lot of free apps that have permissions on devices they shouldn’t, even ‘security’ applications.”
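As a rough illustration of the kind of check a curious user or reviewer could perform (this is my own sketch, not anything from the report; the manifest snippet and the "expected" permission set are assumptions), a script can diff an app's declared permissions against what a flashlight plausibly needs:

    # Compare an app's declared Android permissions against what a simple
    # flashlight plausibly needs. The manifest content is a made-up example.
    import xml.etree.ElementTree as ET

    ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

    MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
      <uses-permission android:name="android.permission.CAMERA"/>
      <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
      <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
      <uses-permission android:name="android.permission.INTERNET"/>
    </manifest>"""

    # On 2014-era Android, driving the LED generally needs little more than
    # the camera/flashlight permissions.
    EXPECTED_FOR_FLASHLIGHT = {"android.permission.CAMERA",
                               "android.permission.FLASHLIGHT"}

    def declared_permissions(manifest_xml):
        root = ET.fromstring(manifest_xml)
        return {p.get(ANDROID_NS + "name") for p in root.iter("uses-permission")}

    if __name__ == "__main__":
        extra = declared_permissions(MANIFEST) - EXPECTED_FOR_FLASHLIGHT
        for perm in sorted(extra):
            print("Questionable for a flashlight app:", perm)

Anything left over in that "extra" set is exactly the sort of request worth pausing over before tapping Install.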

Some users might have felt safe downloading the apps because they installed them using Google Play and not a third-party site, but as Tripwire CTO Dwayne Melancon explains, that doesn’t make an app any more secure.

“Android is pretty ‘Wild Wild West’ because the apps are not well curated,” said Melancon. “People often misunderstand the warning not to download apps from unknown or untrusted sources. They’ll say, ‘I got it off the Play store—I trust that source’ without realizing the unknown and untrusted author of the app is the actual source.”

For the short term, users are encouraged to uninstall any of the malicious flashlight apps listed here. If your app is able to modify your phone’s storage and/or write settings, it is recommended that you reset your phone. A factory reset and/or complete wipe might be necessary.

Going forward, users are recommended to follow a number of best practices that optimize both their privacy and security on their mobile devices, such as:

  1. Disabling GPS, except when traveling or in the event of an emergency
  2. Disabling Near Field Communications (or iBeacon for Apple devices) permanently
  3. Disabling Bluetooth, except when making a hands-free call while driving
  4. Covering the microphone and/or webcam with tape when neither is in use

Most importantly, however, users need to begin looking at the permissions their apps request of them more closely. We should all be using common sense to ask whether a particular app needs access to the information it wants. If it doesn’t, we’re better off doing some research online and looking for safer alternatives, like this privacy flashlight developed by Snoopwall.

Common sense goes a long way in protecting ourselves online and on our phones, and it’s up to us to accept that responsibility.

This was cross-posted from Tripwire's The State of Security blog.

Introducing the Shoulders of InfoSec Project
https://www.infosecisland.com/blogview/24031--Introducing-the-Shoulders-of-InfoSec-Project-.html
Wed, 15 Oct 2014 12:19:49 -0500

"If I have seen further it is by standing on the shoulders of giants"

[Image: portrait of Sir Isaac Newton by Sir Godfrey Kneller]
Most famously attributed to Sir Isaac Newton, this quote reflects the sentiment of a new project.  In InfoSec we all stand on the shoulders of giants.

It was just supposed to be a talk at DerbyCon, but as I dug into the topic I realized it needed to be more than just one talk.

Another relevant quote is George Santayana’s oft-misquoted:

“Those who cannot remember the past are condemned to repeat it.”

In information security we have a very bad habit of ignoring the past; many times it isn’t even a failure to remember, it is a failure to ever have known who and what came before.

Thus, the Shoulders of InfoSec Project.  It is an attempt to compile a lot of information about early figures in InfoSec (and hopefully it will move beyond just the early figures).  There are some great resources out there already, notably the University of Minnesota's Charles Babbage Institute which includes a great set of oral histories of security luminaries.  The goal is not to compete with, but to complement and highlight other relevant projects.

A note about the name: the project’s name is “Shoulders…”, not “Giants…”, because you do not need to be a giant to offer a shoulder to help others see further. Many people who are not giants have still offered shoulders for the rest of us to stand on.

There are two components to the project at this time: a low-volume blog and the wiki.  The project wiki is a work in progress; it includes an ever-expanding list of names, each with a dedicated page including links to relevant information, and will hopefully gain some more color and context as the project develops.  The wiki also includes a references and resources page which has links to several related sites and projects.

The presentation I delivered at DerbyCon is up on Adrian Crenshaw’s Irongeek site if you would like to see some of the ideas and people featured in this project.

Suggestions and contributions are welcome, see the wiki for information about contribution to the project.

This was cross-posted from the Uncommon Sense Security blog.

SandWorm Hacking Team Exploited 0-day Against NATO and Other Government Entities
https://www.infosecisland.com/blogview/24030-SandWorm-Hacking-Team-Exploited-0-day-Against-NATO-and-Other-Government-Entities.html
Tue, 14 Oct 2014 17:05:26 -0500

The firm iSIGHT Partners uncovered a Russian hacking team, dubbed SandWorm, that was running a cyber espionage campaign against NATO and other government entities.

According to a new report issued by the cyber security firm iSIGHT Partners, a group of Russian hackers has been exploiting a previously unknown flaw in Microsoft’s Windows operating system to spy on NATO, the Ukrainian government, a U.S. university researcher and many other entities. The researchers at iSIGHT dubbed the hacking group SandWorm because of references discovered in its code to the science-fiction novel “Dune.”

The experts at iSIGHT Partners have worked in close collaboration with Microsoft during the investigation. The company announced the discovery of a zero-day vulnerability affecting all supported versions of Microsoft Windows and Windows Server 2008 and 2012. The vulnerability has been assigned the identifier CVE-2014-4114 and, according to iSIGHT, has been exploited on a large scale in this cyber espionage operation by a Russian hacking team. The nature of the targets and the tactics, techniques, and procedures (TTP) adopted lead the experts to believe that this is the work of state-sponsored hackers.

“This is consistent with espionage activity,” said iSight Senior Director Stephen Ward. “All indicators from a targeting and lures perspective would indicate espionage with Russian national interests.”

Microsoft is already working on a security update for CVE-2014-4114 that will be available with the next round of patch updates on October 14th.

According to the report issued by iSIGHT, the APT has been active since at least 2009. Its targets in the recent campaign also included a Polish energy firm, a Western European government agency and a French telecommunications firm.

[Image: iSIGHT Partners SandWorm campaign timeline, 13 October 2014]

The experts began the investigation in late 2013 when the NATO alliance was targeted by the SandWorm hacking team with exploits other than the zero-day, but they discovered the critical zero-day  in August when the group targeted the Ukrainian government in the lead-up to the NATO summit in Wales.

“In late August, while tracking the Sandworm Team, iSIGHT discovered a spear-phishing campaign targeting the Ukrainian government and at least one United States organization. Notably, these spear-phishing attacks coincided with the NATO summit on Ukraine held in Wales.” states the report published by iSIGHT.

Security experts speculated that the intensification of the cyber dispute between Russia and Ukraine could have increased the likelihood of discovering operations that had flown under the radar for so long.


Below are the chronological details provided by the researchers on the SandWorm activity:

  • The NATO alliance was targeted as early as December 2013 with exploits other than the zero-day
  • GlobSec attendees were targeted in May of 2014 with exploits other than the zero-day
  • June 2014
    • Broad targeting against a specific Western European government
    • Targeting of a Polish energy firm using CVE-2013-3906
    • Targeting of a French telecommunications firm using a BlackEnergy variant configured with a Base64-encoded reference to the firm

The SandWorm hacking team sent spear-phishing emails with malicious attachments to compromise the victims’ machines. The lures referenced a global security forum in Russia and a purported list of Russian terrorists.

Another element that suggests Russia is responsible for the cyber espionage campaign is the command-and-control (C&C) server, located in Germany, which had not been properly secured and contained Russian-language computer files that had been uploaded by the hackers.

“They could have closed it off, and they didn’t,” Ward said of the server. “It was poor operational security.”

The investigators also noticed that SandWorm apparently re-engineered malware previously used by other APT groups, probably to disguise its own campaigns.

This was edited and cross-posted from the Security Affairs blog.

Security Lessons from Complex, Dynamic Environments
https://www.infosecisland.com/blogview/24029--Security-Lessons-from-Complex-Dynamic-Environments-.html
Tue, 14 Oct 2014 10:12:41 -0500

Security is hard.

Check that- security is relatively hard in static environments, but when you take on a dynamic company environment security becomes unpossible. I'm injecting a bit of humor here because you're going to need a chuckle before you read this.

Some of us in the security industry live in what's referred to as a static environment. Low rate of change (low entropy) means that you can implement a security control or measure and leave it there, knowing that it'll be just as effective today as tomorrow or the day after. Of course, this takes into account the rate at which effectiveness of security tools degrades, and understanding whether things were effective in the first place. It also means that you don't have to worry about things like a new system showing up on the network very often or a new route to the Internet. And when these do happen, you can be relatively sure something is wrong.

Early on in my career I worked for a technical recruiting firm. Computers were just a tool and companies having websites was a novelty. The ancient Novell NetWare 3.11 systems had not seen a reboot in literally half a decade but nothing was broken so everything just kept running and slowly accumulating inches of dust in the back room. When I worked there we modernized to NT 3.51 (don't laugh, I'm dating myself here) and built an IIS-based web page for external consumption. That place was a low entropy environment. We changed out server equipment never, and workstations every 5 years. If all of a sudden something new showed up in the 30 node network, I'd immediately suspect something was amiss. At the time, nothing that exciting ever happened.

Fast forward a few years and I'm working for a financial start-up. It's the early 2000's and this company is the polar opposite of a static company. We have at least 1 new server coming online a day, typically 5-10 new IP addresses showing up that no one can identify. We get by because we have one thing going for us. That one thing is the on-ramp to the Internet. We have a single T1 which connects us to the rest of the world. We drop in a firewall and an IDS (I think we used an early SNORT version, maybe, plus a SonicWall firewall). When that changed and our employees started to go mobile, and thus VPN in, things got a little hairy.

Fast forward another few years and I'm working at one of the world's largest companies on arguably one of the most complex networks mankind has ever seen. Forget trying to understand or know everything - we're struggling to keep track of the few things we DO know. Heck, we spent 4 weeks NMap'ing (and accidentally causing a minor crisis, oops) our own IP subnets to find all the NT4 systems when support finally, and seriously for real this time, ran out.

Now let's look at security in the context of this article (and reported breach) - http://www.nextgov.com/cybersecurity/2014/10/dhs-attackers-hacked-critical-manufacturing-firm-months/96317/. Let me highlight a few key quotes for you-

"The event was complicated by the fact that the company had undergone corporate acquisitions, which introduced more network connections, and consequently a wider attack surface. The firm had more than 100 entry and exit points to the Internet."

You may chuckle at that, but I bet you have pretty close to this at your organization. Sure, maybe the ingress/egress points you control are few, and well protected, but it's the ones you don't know about which will hurt you. Therein lies the big problem - the disconnect between business risk and information security ("cyber") risk. If information security isn't a part of the fabric of your business, and a part of the core of the business decision-making process you're going to continue to fail big, or suffer by a thousand papercuts.

While not necessarily as sexy as that APT Defender Super Deluxe Edition v2.0 box your vendor is trying to sell you, network and system configuration management, change management and asset management are things you absolutely must get right, and must be involved in as a security professional for your enterprise. The alternative is you have total chaos wherein you're trying to plug each new issue as you find out about it, while the business has long forgotten about the project and has moved on. This sort of asynchronous approach is brutal in both human effort and capital expenditure.

Now let's focus on another interesting quote from the article. Everyone likes to offer advice to breach victims, as if they have any clue what they're saying. This one is a gem-

"Going forward, “rearchitecting the network is the best approach to ensure that the company has a consistent security posture across its wide enterprise," officials advised."

What sort of half-baked advice is that?! Those of you who have worked incidents in your careers, have you ever told someone that the best thing to do with your super-complex network is to totally rearchitect it? How quickly would you get thrown out of a 2nd story window if you did? While this advice sounds sane to the person who's saying it - and likely has never had to follow the advice - can you imagine being given the task of completely rearchitecting a large, complex network in-place? I've seen it done. Once. And it took super-human effort, an army of consultants, more outages than I'd care to admit, and it was still cobbled together in some places for "legacy support".

Anyway, somewhere in this was a point about how large, complex networks and dynamic environments are doomed to security failure unless security is elevated to the business level and becomes an executive priority. I recognize that not every company will be able to do this because it won't fit their operating and risk models - but if that's the case you have to prepare for the fallout. In the cases where risk models say security is a business-level issue you have a chance to "get it right"; this means you have to give a solid effort and align to business, and so on.

Security is hard, folks.

This was cross-posted from the Follow the Wh1t3 Rabbit blog.

Lawyer Or Security Professional?
https://www.infosecisland.com/blogview/24028-Lawyer-Or-Security-Professional.html
Mon, 13 Oct 2014 13:05:11 -0500

“It depends upon what the meaning of the word ‘is’ is. If ‘is’ means ‘is and never has been’ that’s one thing – if it means ‘there is none’, that was a completely true statement.” –President of The United States of America, William Clinton

It has been an interesting time as the December 31, 2014 deadline approaches and version 2 of the PCI DSS comes to its end of life.  I have started to notice that there are a lot of security professionals and others that are closet lawyers based on the discussions I have had with some of you regarding compliance with the PCI DSS.

The first thing I want to remind people of is that if you do not want to comply with one or more of the PCI DSS requirements, all you have to do is write a position paper defining for each requirement you find onerous, why it is not relevant or not applicable for your environment and get your management and acquiring bank to sign off on that paper.  But stop wasting your QSA’s or ISA’s time with your arguments.  It is not that we do not care, but without such approval from your management and acquiring bank, QSAs and ISAs cannot let you off the hook for any requirement.

With that said, the first lawyerly argument we are dealing with these days revolves around the December deadline.  We continue to get into arguments over what the deadline actually means.

It appears that the PCI SSC and card brands’ repeatedly saying that version 2 is done as of December 31, 2014 was not clear enough for some of you.  And further clarifications from them that any reports submitted after that date must be under version 3 are also apparently too much for some of you to handle.  I do not know how there could be any misinterpretation of ‘DEADLINE’, ‘DONE’ or “AFTER THAT DATE’ but apparently, there are a lot of people out in the world that do not understand such words and phrases.  Then there are the amazing contortions that some people will go to in a twisted dance to the death to get around this deadline.

Where have you been?  How could you have missed this deadline?  It has been known since the PCI SSC announced their change when standard updates would be issued back with the release of the PCI DSS v2 more than three years ago.  But even assuming you were not involved back then, the PCI SSC announced the deadline over a year ago with the release of PCI DSS v3.  Either way, it certainly should not have been a surprise as there has been plenty of warning.

But then do not take this out on your QSA.  QSAs are just the messenger in this process and had nothing to do with setting the deadline.  The PCI SSC and the card brands set that deadline.  You have a problem with the deadline, complain to them.  But if you are willing to listen, I can save you that discussion.  They will politely tell you the deadline is the deadline.  You are out of luck.  If you do not like that answer, then stop taking credit/debit cards for payment for your organization’s goods and services.

The next lawyerly argument is around the June 30, 2015 deadlines for requirements 6.5.10, 8.5.1, 9.9, 11.3 and 12.9.  Again, it is as though these dates were kept from you, which they were not.  I even wrote a post about these requirements titled ‘Coming Attractions’ back in September 2013.

For those that are calendar challenged, June 30, 2015 is practically just around the corner in business terms.  If you had years to get ready for the PCI DSS v3, what makes you think that you can just turn something on in a year and a half?  Yet we continually see people arguing that until that date, they are not going to address any of these requirements.  All as though, like a light switch, something magical will occur on July 1, 2015 that will meet those requirements.

For merchants, requirements 9.9 and 11.3 are going to be huge issues particularly for those of you with large networks and lots of retail outlets.  If you have not gotten started on these requirements now, there is no way you will be compliant with these requirements by July 1.  Both of these require thought, planning and training.  They cannot just be started overnight resulting in compliance.

For requirement 11.3, the new approach required for penetration testing is resulting in vulnerabilities being uncovered.  Organizations that did not want to get caught flat footed are finding that their network segmentation is not as segmented as they once believed.  They are also finding new “old” vulnerabilities because of these network segmentation issues.  The bottom line is that these early adopters are scrambling to address their penetration testing issues.  In some cases ACLs need to be adjusted, but I have a few that have found they need to re-architect their networks in order to get back to compliance.  Obviously the latter is not an overnight kind of fix.

Requirement 9.9 is all about ensuring the security of points of interaction (POI), as card terminals are now referred to.  Because of all of the POI tampering and hacks that have occurred, the Council has added the requirements in 9.9 to minimize that threat.  The biggest problem early adopters are running into is getting their retail management and cashiers trained so that they understand the threats and know how to deal with them.  This requires creating new procedures for daily, or more frequent, inventorying of the POIs and visually inspecting them to ensure they have not been tampered with.  Companies are rolling out serialized security tape that must be applied to the seams of POIs so that any opening of the case can be visually determined.  Locking cradles are being installed for every POI to secure them to the counter.  Then there is implementing the new procedures for doing at least daily inspections, deciding what to do if you suspect tampering, and informing corporate of potential issues.  Again, this is not something that just happens and works on day one.

For service providers, besides 11.3, requirement 8.5.1 is going to be their biggest issue.  This requires the service provider to use different remote access credentials for every customer.  This is in response to the breaches that occurred at a number of restaurants in Louisiana a few years ago as well as more recent breaches.

The problem that early adopters of 8.5.1 are finding is that implementing enterprise-wide credential vaults is not as simple as it appears.  The biggest impact with these implementations is that service providers start missing their service level agreements (SLA).  Missing SLAs typically costs money.  So these service providers are not only incurring the costs related to implementing the credential vault solution, but they are suffering SLA issues that just pile on the injuries.
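To illustrate what 8.5.1 is actually asking for, here is a toy sketch of my own (the vault interface below is a stand-in, not any real product's API): each customer gets its own remote access secret pulled from a central vault, instead of one password shared across the customer base.

    # Toy stand-in for an enterprise credential vault: one distinct,
    # randomly generated remote access secret per customer.
    import secrets

    class CredentialVault:
        def __init__(self):
            self._store = {}

        def credential_for(self, customer_id):
            if customer_id not in self._store:
                self._store[customer_id] = secrets.token_urlsafe(24)
            return self._store[customer_id]

    if __name__ == "__main__":
        vault = CredentialVault()
        a = vault.credential_for("customer-A")
        b = vault.credential_for("customer-B")
        # Compromising one customer's credential no longer exposes the rest.
        print("Credentials differ:", a != b)

Retrofitting that model onto hundreds of existing customer connections, without blowing SLAs, is where the pain described above comes from.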

But the final straw is all of the people that closely parse the PCI DSS and only the DSS.  You saw this with some of the questions asked at the latest Community Meeting.  You also see it in the questions I get on this blog and from the prospects and clients I deal with daily.  These people are hunting for a way to get around complying with a particular requirement.

This occurs because people only read the DSS and not the Glossary, information supplements and other documents provided by the Council.  At least with v3 of the DSS the Council included the Guidance for each of the requirements.  Not that adding Guidance makes a whole lot of difference based on the arguments laid out by some people.  The Council could do us all a favor if they generally published the Reporting Template with all of the other documents.  Not so much that people would necessarily read it, but it would give QSAs and ISAs more ammunition to use when these discussions come up.

Successful security professionals understand the purpose of security frameworks.  These frameworks are meant to share the collective knowledge and lessons learned regarding security with everyone so that everyone can have a leg up and know ways of detecting and mitigating threats.  Successful security professionals use these frameworks to get things done, not waste their time developing scholarly legal arguments or twisting the English language as to why they do not need to meet some security requirement.  They put their heads down, review the frameworks, develop plans to implement the changes necessary to improve security, work the plan and deliver results.  Do those plans always meet requirement deadline dates?  Not always, but they are close or as close as they can get given other business issues.

The bottom line is that security professionals are not lawyers and good security professionals certainly do not sound like lawyers.  But if you constantly find yourself sounding like a lawyer digging so deep to split legal hairs, in my very humble opinion, you really need to re-examine your career or lack thereof.  I say lack thereof because, in my experience, security professionals that operate like lawyers do not have long careers.  They move around a lot because once people realize that they cannot deliver, they are forced to move on.  Eventually a reputation is developed and after that point these people end up forced to find a new career because the security community knows their modus operandi.

This was cross-posted from the PCI Guru blog.

How to Build Up Your Secure Development
https://www.infosecisland.com/blogview/24027-How-to-Build-Up-Your-Secure-Development.html
Mon, 13 Oct 2014 10:55:56 -0500
By: Andrew Wagner

At some point, your company is going to get the security wake-up call. Whether it’s a breach or an inquiry from an important customer that triggers it, your executives are going to call you one morning, demanding you focus on security in the development of your product.

The goal is simple enough – fewer vulnerabilities and agility in responding to issues – but the development of a Secure Software Development Life Cycle (SSDLC) is not an overnight kind of thing. Think about what it takes to be consistently developing bulletproof products and turning around fixes on a dime.

We’re talking knowledgeable, trained staff. We’re talking automation. We’re talking penetration tests and time allocated to respond to them.

Given all of the investment required, the natural question is, “Where do I start?” Most likely, your first impulse is to go after training – after all, you can’t write secure software without engineers who know how to write secure software, right?

Sure – training is never a bad idea, and there are certainly options out there, both online and live, but it comes with many challenges:

  • Acquiring training outside of your company can be hit-and-miss and expensive. Did you get a good instructor? Did you pick the right course? Are you willing to pay thousands of dollars to give it a shot?
  • Building it inside your company might be more expensive. What’s the actual cost of having your senior engineer produce a training course on cross-site-scripting? What about the opportunity costs?
  • If your engineers don’t use the information, they won’t retain the information. Remember all of those history classes you took in college? Yeah – neither do I. How are you going to make sure the information sticks?

Rather than beginning with the training, start with the doing. While it certainly isn’t true for all engineers, it is often true that engineers learn by doing. Why not focus on applied activities that force awareness and drive your engineers to seek training where necessary? Here are a few suggestions:

Threat Modeling

Your developers consider themselves professionals, so what do professional developers do? They design!  Threat modeling is an avenue of design that is centered on security. Developers are asked to think about the system and draw it out not with respect to functionality or scale, but with respect to avenues of potential attack.

By asking your developers to design to prevent attacks, you are essentially asking them to learn about those attacks and best practices for mitigation. This challenges them to learn and seek security information in order to solve a problem, which is what developers enjoy doing. If you’re successful, you won’t have to force training down their throats – they’ll identify what they need and bring it to you.
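
To make the exercise concrete, here is a minimal sketch (in Python, with purely illustrative component names and mitigations, not taken from any particular product) of how a team might record the output of a threat modeling session using the familiar STRIDE categories:

    from dataclasses import dataclass

    # Illustrative sketch only: record each identified threat so the team can
    # review it alongside the design diagram.
    @dataclass
    class Threat:
        component: str    # where the attack surface lives
        stride: str       # Spoofing, Tampering, Repudiation, Information disclosure,
                          # Denial of service, or Elevation of privilege
        description: str
        mitigation: str

    threat_model = [
        Threat("login endpoint", "Spoofing",
               "Credential stuffing against the password form",
               "Rate limiting plus account lockout"),
        Threat("report export", "Information disclosure",
               "Unauthenticated download of generated reports",
               "Authorization check on every export URL"),
    ]

    for t in threat_model:
        print(f"[{t.stride}] {t.component}: {t.description} -> {t.mitigation}")

Even a simple list like this gives developers something to chase: every mitigation they write down is a topic they now have a reason to go learn.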

Identification (and Testing) of Security Mechanisms

This activity is aimed squarely at your quality assurance (QA) organization and represents a subtle change in the way that requirements are defined. By requiring the identification of security mechanisms, you’re shifting from a common way of creating requirements – “we need users to authenticate using Lightweight Directory Access Protocol (LDAP)” – to a manner that recognizes the actual security goals – “we need to prevent unauthorized users from accessing sensitive data, and the mechanism we will use is LDAP.”

The result is that your testers are driven to no longer think about “security testing” functional mechanisms, which will lead to nebulous results at best. They’re led to think about functionally testing security mechanisms – and functional testing against known requirements is something every tester knows how to do. It will also encourage research and training to ensure that the goal is actually being met with the mechanisms in place – is that data really inaccessible?
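
As a rough illustration (the endpoint, the URL, and the use of Python's requests library in a pytest-style test are all assumptions for the sake of example, not details from any particular product), a functional test of a security mechanism might look something like this:

    import requests

    BASE_URL = "https://app.example.com"   # hypothetical application under test

    def test_sensitive_data_requires_authentication():
        # Goal: unauthorized users must not reach customer data.
        # Mechanism: LDAP-backed authentication in front of the API.
        # With no session cookie or token, the request must be refused or redirected.
        response = requests.get(f"{BASE_URL}/api/customers", allow_redirects=False)
        assert response.status_code in (401, 403, 302), \
            "Unauthenticated request reached sensitive data"

The test reads like any other functional test, which is exactly the point: the tester is verifying a stated security goal against a known mechanism, not poking at the product blindly.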

Requiring the identification of security mechanisms plays nicely into the Threat Modeling exercise. As your team identifies potential threats, they will need to identify the mechanisms to mitigate the threats and the behavior of those mechanisms. Having QA in the room for those discussions can be invaluable.

Penetration Testing

I know. I know. Testing security into your product will never work. By the time your product has hit the pen testers it is already ridiculously vulnerable. You need to have the development practices built-in ahead of time.

But then again… when your developers get overrun with issues from a competent pen testing group, they’re going to learn a lot about security in a big hurry. The training program you sent them to might have shown them how to make an alert box appear on a website vulnerable to an XSS attack, but the pen testers just owned their application using the same mechanism. Which is going to get their attention?
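
For a sense of how little effort that takes, here is a hedged sketch of the kind of probe a pen tester automates (the target URL and parameter name are placeholders, and it should only ever be pointed at systems you are authorized to test):

    import requests

    MARKER = "<script>alert('xss-probe')</script>"

    def reflects_unescaped(url: str, param: str = "q") -> bool:
        # Send a marker through a query parameter and check whether the response
        # echoes it back without encoding, a hint of reflected XSS.
        response = requests.get(url, params={param: MARKER}, timeout=10)
        return MARKER in response.text

    if __name__ == "__main__":
        # Placeholder URL: point this only at systems you are authorized to test.
        if reflects_unescaped("https://staging.example.com/search"):
            print("Marker reflected unescaped; investigate for reflected XSS.")

When a finding like that comes back against their own code, developers tend to go looking for the training they need without being told.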

Testing in this way is not cheap but if you have the budget to provide this kind of wake-up call to your developers, it’s not a bad way to start.

Starting with any of these activities sends the message of, “I’m going to expect you to apply this. Tell me what you need to know in order to do so.” For many engineers, the doing is what makes it stick.

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island
First Look at Suits and Spooks DC 2015: 3 Hot Workshops and Over 20 Talks and Panels https://www.infosecisland.com/blogview/24026-First-Look-at-Suits-and-Spooks-DC-2015-3-Hot-Workshops-and-Over-20-Talks-and-Panels.html https://www.infosecisland.com/blogview/24026-First-Look-at-Suits-and-Spooks-DC-2015-3-Hot-Workshops-and-Over-20-Talks-and-Panels.html Sat, 11 Oct 2014 12:38:00 -0500 Early bird registration is now open for Suits and Spooks - Washington, DC. We've expanded it to three days so as to include one optional day of training (Wednesday Feb 4). Since this is Suits and Spooks and not your typical Security conference, you've never had training like this before:

  • A Cyber Intelligence Analyst's Workshop: Connecting More Dots With Carmen Medina
  • A Cyber Security Entrepreneur's Workshop: Transitioning from a Spook to a Suit (taught by Barbara Hunt, Rick Holland, and to-be-announced panelists)
  • The PRC People's Liberation Army Information Warfare Infrastructure Workshop by Mark Stokes (Project 2049 Institute)

The training will be given in a tiered classroom setting with microphones at every seat and two large projection screens behind the instructor.

On Thursday (Feb 5) and Friday (Feb 6) our DC collision event will be held with a unique collection of speakers that includes John Robb (military strategist, futurist, and author of Open Source Warfare), Zachary Tumin (Deputy Commissioner for Strategic Initiatives at the NYPD), Thomas Rid (Professor at King's College London and author of "Cyber War Will Not Take Place"), John Holland (CISO of the Risk Division of Credit Suisse), and Ben Milne (founder of Dwolla).

You'll also get a very rare, inside look at how one of the world's largest defense contractors defends its global network, learn about Bitcoins and how at least one international bank is dealing with them, engage in a Q&A with a US Assistant District Attorney (invited), and much, much more.

Register Now

Our Early Bird discount is $675 for all three days or $575 without the workshops. GOV/MIL rates are $395/$325. This event always sells out, so register early.

Copyright 2010 Respective Author at Infosec Island
Kmart Says Hackers Breached Payment System https://www.infosecisland.com/blogview/24024-Kmart-Says-Hackers-Breached-Payment-System.html https://www.infosecisland.com/blogview/24024-Kmart-Says-Hackers-Breached-Payment-System.html Fri, 10 Oct 2014 17:28:58 -0500 [SecurityWeek] - Kmart is the latest large U.S. retailer to experience a breach of its payment systems, joining a fast-growing club of companies dealing with successful hack attacks that have resulted in the exposure of customer data and payment card information.

The company said that on Thursday, Oct. 9, its IT team detected that its payment data systems had been breached, prompting it to quickly launch an investigation.

The company believes debit and credit card numbers have been compromised.

Read the full story at SecurityWeek

Copyright 2010 Respective Author at Infosec Island
Shellshock Leaves Deep Impact on Network Security https://www.infosecisland.com/blogview/24023-Shellshock-Leaves-Deep-Impact-on-Network-Security.html https://www.infosecisland.com/blogview/24023-Shellshock-Leaves-Deep-Impact-on-Network-Security.html Thu, 09 Oct 2014 16:20:31 -0500 For decades, a flaw in a widely used piece of software quietly sat dormant as a security vulnerability – but now that news of the exploit has gone public, the network security community has been sent into reaction mode.

The Shellshock vulnerability can be traced back to Bash, a command shell that is commonly used across the Internet on Linux and UNIX platforms. Bash translates user commands into instructions a computer can understand and act upon. In the case of Shellshock, attackers can abuse the way Bash processes specially crafted environment variables to execute arbitrary commands, potentially allowing them to take control of affected systems.
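
For the curious, the widely published local check for the original flaw (CVE-2014-6271) is simple. The sketch below is one way to run it from Python; the /bin/bash path and the use of subprocess are assumptions for illustration. It smuggles a command in after a function definition exported through the environment, which a vulnerable Bash will execute on startup:

    import os
    import subprocess

    # CVE-2014-6271: a vulnerable Bash executes the code that trails a function
    # definition ("() { :; };") exported through an environment variable.
    env = dict(os.environ, PROBE="() { :; }; echo VULNERABLE")

    result = subprocess.run(
        ["/bin/bash", "-c", "echo probe complete"],
        env=env,
        capture_output=True,
        text=True,
    )

    if "VULNERABLE" in result.stdout:
        print("This Bash still executes trailing code from the environment (Shellshock).")
    else:
        print("No sign of the original Shellshock behavior on this Bash.")

On a patched Bash the extra command is ignored and only the probe output appears.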

In the immediate aftermath of Shellshock’s discovery, security experts claimed the exploit had surpassed last spring’s Heartbleed as the worst software vulnerability of all time. One reason is that Shellshock’s reach could be even greater than that of Heartbleed, which only affected software using the OpenSSL cryptographic library. Shellshock could even extend to Internet of Things devices, since many of them run embedded Linux and may rely on Bash scripts.

For the last few weeks, website administrators have been making the necessary updates to protect users. Within a week of the vulnerability going public, Amazon, Google and Apple responded with patches and internal server updates.

Even so, it will take some time for the fallout from Shellshock to subside.

The Year of the Cyberattack Continues

This year has not been kind to the network security community. Although the Target breach occurred in 2013, the fallout has continued well into this year. Then came attacks at Neiman Marcus, eBay and, just last month, Home Depot. And, of course, Heartbleed and Shellshock.

Even in the last few weeks, news broke that more than 200 stores in the Jimmy John’s sandwich chain were breached by a remote hacker who stole customer credit and debit card information. And just like in the Target breach, where hackers infiltrated the network through an HVAC contractor, a third party was to blame in the Jimmy John’s case as well – attackers gained network access and login credentials from a point-of-sale vendor.

The Jimmy John’s attack provides yet another example of why network security isn’t as straightforward as guarding against attacks just on the immediate network. Every network endpoint is a potential attack vector, whether it’s part of the direct network or operated by a third party who only accesses the network occasionally. This is why it’s so critical for network administrators to implement secure VPNs as part of a comprehensive, layered defense-in-depth approach to network security.

Now, there have been reports that some VPNs could be vulnerable to attacks launched through the Shellshock exploit, but it’s important to note that these remote attacks apply only to servers running OpenVPN. VPNs using the proven IPsec standard, on the other hand, ensure privacy, shield remote users from a range of malicious attacks, and serve as another line of defense.

And in the fight against Shellshock, users need every defense mechanism they can get their hands on.

This was cross-posted from the VPN HAUS blog

Copyright 2010 Respective Author at Infosec Island