Regin: A Malicious Platform Capable Of Spying on GSM Networks https://www.infosecisland.com/blogview/24104-Regin-A-Malicious-Platform-Capable-Of-Spying-on-GSM-Networks.html https://www.infosecisland.com/blogview/24104-Regin-A-Malicious-Platform-Capable-Of-Spying-on-GSM-Networks.html Tue, 25 Nov 2014 09:28:00 -0600 Kaspersky Lab's Global Research and Analysis Team has published its research on Regin - the first cyber-attack platform known to penetrate and monitor GSM networks in addition to other "standard" spying tasks. The attackers behind this platform have compromised computer networks in at least 14 countries around the world.

Quick facts:

  • The main victims of this actor are: telecom operators, governments, financial institutions, research organisations, multinational political bodies and individuals involved in advanced mathematical/cryptographical research.
  • Victims of this actor have been found in Algeria, Afghanistan, Belgium, Brazil, Fiji, Germany, Iran, India, Indonesia, Kiribati, Malaysia, Pakistan, Syria and Russia.
  • The Regin platform consists of multiple malicious tools capable of compromising the entire network of an attacked organization. The Regin platform uses an incredibly complex communication method between infected networks and command and control servers, allowing remote control and data transmission by stealth.
  • One particular Regin module is capable of monitoring GSM base station controllers, collecting data about  GSM cells and the network infrastructure.  
  • Over the course of a single month in April 2008 the attackers collected administrative credentials that would allow them to manipulate a GSM network in a Middle Eastern country.
  • Some of the earliest samples of Regin appear to have been created as early as 2003.  

In spring 2012 Kaspersky Lab experts became aware of Regin malware, which seemed to belong to a sophisticated espionage campaign. For almost three subsequent years Kaspersky Lab's experts tracked this malware all over the world. From time to time, samples would appear on various multi-scanner services, but they were all unrelated to each other, cryptic in functionality and lacking context. However, Kaspersky Lab experts were able to obtain samples involved in several real world attacks, including those against governmental institutions and telecom operators, and this provided enough information to research more deeply into this threat.

The in-depth study found that Regin is not just a single malicious program, but a platform - a software package, consisting of multiple modules, capable of infecting the entire networks of targeted organisations to seize full remote control at all possible levels. Regin is aimed at gathering confidential data from attacked networks and performing several other types of attacks.

The actor behind the Regin platform has a well-developed method to control the infected networks. Kaspersky Lab experts observed several compromised organisations in one country, but only one of them was programmed to communicate with the command and control server located in another country.

However, all the Regin victims in the region were joined together in a peer-to-peer VPN-like network and were able to communicate with each other. Thus, the attackers turned compromised organisations into one vast unified victim and were able to send commands and steal information via a single entry point. According to Kaspersky Lab's research, this structure allowed the actor to operate silently for years without raising suspicions.

The most original and interesting feature of the Regin platform, though, is its ability to attack GSM networks. According to an activity log on a GSM Base Station Controller obtained by Kaspersky Lab researchers during the investigation, attackers were able to obtain credentials that would allow them to control GSM cells in the network of a large cellular operator. This means that they could have had access to information about which calls are processed by a particular cell, redirect these calls to other cells, activate neighbour cells and perform other offensive activities. At the present time, the attackers behind Regin are the only ones known to have been capable of doing such operations.

"The ability to penetrate and monitor GSM networks is perhaps the most unusual and interesting aspect of these operations. In today's world, we have become too dependent on mobile phone networks which rely on ancient communication protocols with little or no security available for the end user. Although all GSM networks have mechanisms embedded which allow entities such as law enforcement to track suspects, other parties can hijack this ability and abuse it to launch different attacks against mobile users," said Costin Raiu, Director of Global Research and Analysis Team at Kaspersky Lab.

Source: Kaspersky Lab

Related: Symantec Uncovers Stealthy Nation-State Cyber Attack Platform

Related: Cyberspying Tool Could Have US, British Origins

 

Copyright 2010 Respective Author at Infosec Island
3 Internet of Things Security Nuances You May Not Have Considered https://www.infosecisland.com/blogview/24100-3-Internet-of-Things-Security-Nuances-You-May-Not-Have-Considered.html https://www.infosecisland.com/blogview/24100-3-Internet-of-Things-Security-Nuances-You-May-Not-Have-Considered.html Tue, 25 Nov 2014 08:49:50 -0600 Over the past 18 months, I’ve been in a variety of situations where I had the opportunity to discuss the Internet of Things (IoT) with various industry professionals, developers and journalists. It intrigued me to realize that many of my viewpoints often differed from others discussing the topic in articles or event presentations.

With that in mind, I wanted to share three topics that I find are discussed less frequently. Regardless, these topics should be an important aspect of the conversations we need to be having on this rapidly growing subject.

1. Crowd funding and IoT Go Hand-in-Hand

For most people, IoT will be defined as crowd-funded products throughout the next few years. While the most successful products will surely be acquired by a behemoth, or find venture capital funding, or go private, their start will have come from the cutting-edge technology buyers among us.

For example, Samsung acquired the IoT hub company SmartThings this August for a reported $200 million, less than two years after it gathered $1.2 million in crowdfunding. Not to be outdone, LIFX – a connected light-bulb company – raised $1.3 million before receiving $12 million of investment from Sequoia Capital.

Why does this matter?

Because the people initially developing these connected products likely have little-to-no experience in information security across mobile, embedded, cloud and web applications – all of which are likely core to their business.

This is not a shot at these two companies but rather a reflection of the state of affairs when people are simply trying to get enough money to get their idea off the ground, let alone spend a chunk of their budget on security audits and security engineering staff.

Although big companies make security mistakes, it is all but guaranteed that two or three smart folks quickly pushing a connected product to the market will fumble security. LIFX, for instance, did unfortunately have some early security issues.

2. Technology Tedium and Human Needs

I’ve admittedly asked this question many times at events this year, but when was the last time you upgraded your home router’s firmware? How about your parents’ router?

For all but the very technically-inclined, the answer may be “never.” This is due to two primary reasons: nobody cares to do it and/or they don’t know how to do it.

Think five years into the future when your light bulbs, electrical outlets, switches, cameras, watches and children’s toys are connected devices. What is your honest thought about upgrading one to two dozen devices, maybe every week or bi-weekly, depending on update cycles?

For me, this sounds like a hellish level of tedium resulting in a waste of my time, effort and sanity.

Further, the means to upgrade firmware varies widely from device to device, which leaves users confused about how to handle the process for each IoT offering. This results in even technical folks having to keep track of processes at scale to ensure they aren’t missing out on security updates across their home- or office-of-things.

I, personally, am a big advocate of auto-upgrading firmware. While this adds some engineering overhead, it could dramatically reduce the future risks people will deal with when bugs are discovered across their many devices. If large organizations have a hard time with inventory, threat and vulnerability management, what makes us think consumers can do any better handling their technology if people still can’t seem to get their Windows machines upgraded?!
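To make the auto-upgrade idea concrete, here is a minimal sketch of what an automatic update check on a device might look like. The update URL, manifest fields and versioning scheme are assumptions for illustration only, not any vendor's actual API, and a production updater would verify a vendor signature over the manifest rather than just a hash.

```python
# A minimal sketch of an automatic firmware update check, assuming a hypothetical
# update server and manifest format (not any real vendor's API).
import hashlib
import json
import urllib.request

UPDATE_URL = "https://updates.example-vendor.com/bulb/manifest.json"  # hypothetical endpoint
CURRENT_VERSION = (1, 4, 2)

def check_and_download():
    with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
        manifest = json.load(resp)  # e.g. {"version": "1.5.0", "url": "...", "sha256": "..."}

    available = tuple(int(part) for part in manifest["version"].split("."))
    if available <= CURRENT_VERSION:
        return None  # already up to date

    with urllib.request.urlopen(manifest["url"], timeout=60) as resp:
        image = resp.read()

    # Integrity check against the manifest; a real device would also verify a vendor
    # signature over the manifest with a baked-in public key before flashing anything.
    if hashlib.sha256(image).hexdigest() != manifest["sha256"]:
        raise ValueError("firmware image failed integrity check")
    return manifest["version"], image

if __name__ == "__main__":
    update = check_and_download()
    print("up to date" if update is None else f"downloaded firmware {update[0]}")
```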

3. We’re Not Waiting for Massive Failure to Start Fixing

The last thought I want to leave you with is a bit more of a plug than unbiased insight, but the core message is still notable.

Earlier this year, Zach Lanier and I formed an initiative dubbed BuildItSecure.ly. The initiative has the broader goal of putting IoT development on a safer, more security-driven path by educating engineering teams through content and hacking the devices to provide direct security improvement.

While the underpinnings here are basically what already happen in the security research community, the means to accomplish these goals is different.

BuildItSecure.ly is focused on having vendors effectively opt-in to this initiative, providing their hardware to handpicked security researchers that we vet for competency and professionalism. We also only focus on IoT to give extra attention to this nascent and malleable world of technology.

This effort has thus far resulted in five vendors, including big names like Belkin and Dropcam, being part of leading the charge for IoT that considers security a first-class citizen rather than an afterthought in the development cycle.

We’re just getting started in a lot of ways, but the relationships being built, the goodwill for the security community, and the number of devices now being developed more securely due to our efforts are already making a big impact.

There’s a lot of work to do, but we’re at least focused on giving IoT a fair shot at being the poster child of security and not the antithesis of it.

 

About the Author: Mark Stanislav (@markstanislav) is a Security Project Manager for Duo Security.

This was cross-posted from Tripwire's State of Security blog.

Copyright 2010 Respective Author at Infosec Island
You Need to Know About Ransomware https://www.infosecisland.com/blogview/24099-You-Need-to-Know-About-Ransomware.html https://www.infosecisland.com/blogview/24099-You-Need-to-Know-About-Ransomware.html Tue, 25 Nov 2014 08:40:41 -0600 When was the last time you made a backup of all your data? How often do you make incremental backups? Do you keep these backups on a separate storage device that is disconnected from (or firewalled away from) the rest of your network?

“Say, why do you ask?”

The primary reason I’m asking right now is because ransomware is growing rapidly in occurrences; up over 700% from last year.  Three of the best ways you can help defend against it are:

1)    Making backups of all your data and software on a separate storage device that is not attached to your network or computer except when backups are being made (during which time you should be offline). Have you done this lately? How often do you back up your critical data? Are your backups of your operating system and applications good and able to work when you need them? (A small backup sketch follows this list.)

2)    Using effective and constantly updated anti-malware tools. When was the last time you updated your anti-malware tools? Do they check for zero-day types of malware? Do they check for signs of ransomware? If they don’t, consider getting an anti-malware tool that does.

3)    Not falling victim to phishing attempts. Educate yourself, and your co-workers, friends and family, about ransomware; how to spot it and how to avoid becoming a victim. Show them this article as a start. Provide ongoing reminders, and more formal training as appropriate.
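As a rough illustration of point 1, here is a minimal backup sketch that writes a dated archive of selected folders to a separately attached drive. The folder and mount paths are placeholder assumptions, and it deliberately stays simple; a real routine should add encryption, rotation and restore testing.

```python
# A minimal sketch of point 1: write a dated archive of selected folders to a
# separately attached backup drive. Paths are placeholder assumptions.
import datetime
import pathlib
import tarfile

SOURCE_DIRS = [pathlib.Path.home() / "Documents", pathlib.Path.home() / "Pictures"]
BACKUP_DRIVE = pathlib.Path("/mnt/backup")  # attach this drive only while the backup runs

def run_backup() -> pathlib.Path:
    stamp = datetime.date.today().isoformat()
    archive = BACKUP_DRIVE / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for src in SOURCE_DIRS:
            if src.exists():
                tar.add(src, arcname=src.name)  # archive the whole folder
    return archive

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
    # Detach or power off the backup drive once the archive has been written.
```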

What is ransomware?

Ransomware is malware crooks use to encrypt your hard drive. They then require you to pay them a ransom to decrypt it. What I’ve seen as typical is a requirement to pay them $500 if you pay quickly, $1,000 if you take longer than a few days, and after a specific period of time they will delete everything from your hard drive.

How do they load this ransomware on a computer?

Typically the crooks will trick you into clicking a link or downloading a file through a phishing message via email or text. For example, in one that has been effective, they will send you an email stating you’ve been caught by a red light camera or a speeding camera and now need to pay a fine. They then provide a link to a website where they claim you can see the video and tell you that you can pay the fine with your credit card. Such sites often even try to replicate a valid site by requiring you to make a captcha entry. IBM provides a good overview of captcha here if you aren’t familiar with it.

Once you’ve entered any preliminary information, the malware is downloaded to your computer, and your entire computer (this includes smartphones) is encrypted. A message is then shown on your screen directing you where to go and how to pay, typically via a prepaid money card or Bitcoin, to get the decryption software and your computer contents back. The problem is, if you pay the ransom, it is likely there are still links from the crooks to your computer, so they can continue to hold you ransom whenever they want.

Ransomware is also spread through malicious adware (this is called “malvertising”) on legitimate sites, such as Yahoo, AOL and Match.Com, so you really need to be careful what you click, even on sites you trust.

Who do they target?

A statement I’ve heard from literally hundreds (if not thousands) of small to midsize businesses, not to mention a large portion of the general public, is: “I’m not large enough for hackers or crooks to target.” This is a dangerous, and completely false, belief. Crooks target ANYONE (businesses of all sizes, cities and other government agencies and individuals) with their digital crimes. Why? Because the more they target, the more victims they’ll get; and with unlimited digital crime paths it is really easy to target literally millions of people and businesses.

How successful have the crooks been?

This has been a particularly lucrative crime in a comparatively short period.

As more crooks see how much money their buddies are making, you will see more and more types of ransomware being launched, putting you and your business at risk if you are not on the lookout for the signs of such a crime.

Bottom line for organizations of all sizes…

Every business, no matter how small and in many ways even more so if they are small, needs to be aware of this current and growing criminal activity. Make your co-workers aware, and take the necessary precautions, including making frequent and full backups and using effective anti-malware tools, to keep from becoming a victim.

This was cross-posted from the Privacy Professor blog.

Copyright 2010 Respective Author at Infosec Island
Avoiding the Bait: Helpful Tips to Protect Yourself Against Phishing Scams https://www.infosecisland.com/blogview/24098-Avoiding-the-Bait-Helpful-Tips-to-Protect-Yourself-Against-Phishing-Scams.html https://www.infosecisland.com/blogview/24098-Avoiding-the-Bait-Helpful-Tips-to-Protect-Yourself-Against-Phishing-Scams.html Mon, 24 Nov 2014 09:40:01 -0600 Phishing scams come in all shapes and sizes. But one thing is for certain: they are all around us.

Fraudsters who craft and disseminate these fake emails are often experienced in their methodology, incorporating newsworthy items, such as the Ebola outbreak and the upcoming holiday season into their ploys to maximize their appeal.

Today, phishing scams know no bounds when it comes to the type of operating system we use or the type of perpetrator—whether it be Windows, Linux, or Apple, every user is susceptible to phishes.

In a recent study performed by Google, the company found that the most expertly crafted scams were successful 45 percent of the time. Additionally, even those that were obviously fake, as evidenced by misspelled words and poor grammar, worked on 3-14 percent of users.

Phishing continues to be leveraged by criminals as a ‘tried and true’ tactic for deceiving everyday users and organizations. To combat this ongoing cyber threat, many enterprises have developed anti-phishing solutions to protect companies when, not if, an employee clicks on a malicious link.

These tools often consist of monitoring, predictive modeling and malware analysis features to catch a phish before it wreaks too much havoc on a company’s network.

But this is only half the battle. Phishing intrinsically plays on human curiosity and weakness. As such, any anti-phishing strategy must incorporate a degree of security awareness if it wishes to be effective.

Earlier this month, we outlined several tips to help spot a “good” phish from a “bad” one. To continue our effort of bringing anti-phishing awareness, especially with the holidays approaching, here is a short video with a few tips for users to protect themselves against phishes.

Equipped with these strategies, users can know what to look for in a phishing scam.

Ultimately, phishers may be clever with their tactics to trick users into clicking on a suspicious link. But when pitted against a healthy dose of security awareness, they don’t stand a chance.

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island
Face It, You Are A Poor Judge Of Risk https://www.infosecisland.com/blogview/24097--Face-It-You-Are-A-Poor-Judge-Of-Risk.html https://www.infosecisland.com/blogview/24097--Face-It-You-Are-A-Poor-Judge-Of-Risk.html Mon, 24 Nov 2014 09:15:37 -0600

“The oldest and strongest emotion of mankind is fear, and the oldest and strongest kind of fear is fear of the unknown.” HP Lovecraft

We have a pop quiz today.

  1. Are you more likely to die from an alligator attack or a shark attack?
  2. Are you more likely to win the PowerBall lottery jackpot or become a movie star?
  3. Are you more likely to die in a vending machine accident or from a lightning strike?
  4. Are you more likely to be elected President of the United States or to date a supermodel?
  5. Are you more likely to die from influenza or from drowning?
  6. Are you more likely to catch influenza or Ebola?

The purpose of this pop quiz is to demonstrate how poorly we humans evaluate and understand risks. I have to admit I got caught on a couple of these as I did the research.

If anything, the Ebola discussion has brought this issue of risk judgment to the forefront, given the unfounded fear people have of Ebola. As a mathematician by schooling, it has fascinated me as I watch the media reports and government officials cave in to the spread of fear over something highly unlikely to occur to anyone in the general population.

Do not get me wrong. If I were a health care worker anywhere in the world, I would have concerns about my risk of catching Ebola. After all, they are on the front line and Ebola has around a 50% fatality rate. Add into that the informative, but frightening, video that Dr. Sanjay Gupta of CNN did on the difficulty of removing a containment suit without potentially infecting yourself, and it confirms the threat a health care worker should be feeling if confronted with a potential Ebola patient that is symptomatic.

But for anyone outside of health care, there should be little if any reason to be concerned. Yet a good percentage of the public is irrational when it comes to Ebola regardless of the fact that it requires contact with a symptomatic person’s bodily fluids in order to be infected. But unlike a person with influenza, an Ebola infected person that is contagious does not have the mobility required to have contact with people unless those people come to them. As a result, all of these mental gymnastics that people go through about the possibility that an infection could occur on a bus or the subway are silly because the person with Ebola when they are contagious would look worse than a zombie off of ‘The Walking Dead’, assuming they could even walk at that point.

I am sure you are all saying that this is all good and well, but what is the point here in regards to PCI?

Glad you asked. I bring this up because the PCI DSS is heading more and more to be driven by risk and the assessment of that risk. Yet as I have hopefully shown by my quiz questions, people and their organizations are poor at understanding and determining risks. So organizations need to get much better at performing risk assessments (if they are performed at all) so that they can truly understand and manage risks. That said, a risk assessment does not have to be, nor should it be, a huge “death march” of a project. A proper risk assessment should answer the following questions.

  • What are the risks to the organization? This does not have to be an exhaustive, all-inclusive list as you find in the various risk assessment methodology frameworks, but it should include all of the most likely risks. For PCI compliance, this risk assessment only needs to address the risks to those things that are in scope for the assessment. However, most organizations need the risk assessment for other reasons, so it often contains all risks, not just PCI risks. If it does contain risks outside of PCI, you should add columns for your other requirements so you can filter the list by PCI, HIPAA, GLBA, FISMA or any other risk framework.
  • What is the likelihood of the risk occurring? Typically, I use a scale of 1 to 5, where 1 means the risk occurs infrequently and 5 means it occurs often. If something never occurs, then it should be removed from the list.
  • If the risk occurs, what is the impact on the organization? Here I use a scale of 1 to 3 where 1 is low, 2 is moderate and 3 is high.
  • Multiply the likelihood by the impact and you get the risk rating.
  • Sort the risk ratings from highest to lowest and your risk assessment ratings are complete (a short scoring sketch follows this list).
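A small sketch of that arithmetic, with made-up example risks and an assumed acceptance threshold (ratings below 4 are accepted, matching the example discussed below):

```python
# A minimal sketch of the scoring above: rating = likelihood (1-5) x impact (1-3),
# sorted highest first. The example risks, scores and threshold are made up.
risks = [
    {"risk": "Phishing leads to compromised credentials", "likelihood": 5, "impact": 3},
    {"risk": "Lost laptop containing cardholder data",    "likelihood": 4, "impact": 3},
    {"risk": "Data center power failure",                 "likelihood": 2, "impact": 2},
]

RISK_THRESHOLD = 4  # ratings below this are formally accepted by management

for entry in risks:
    entry["rating"] = entry["likelihood"] * entry["impact"]

for entry in sorted(risks, key=lambda e: e["rating"], reverse=True):
    action = "accept" if entry["rating"] < RISK_THRESHOLD else "mitigate and manage"
    print(f'{entry["rating"]:>2}  {action:<20} {entry["risk"]}')
```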

But hold on, you are not done just yet. Now you need to set your organization’s risk threshold. This will likely be a very contentious discussion as you will find that people within the organization have widely differing views on the level of risk they are willing to accept. However, it is important to capture the highlights of this discussion so that you have documentation to refer back to as you review future risk assessment results and reset the organization’s risk threshold.

Risks that fall below a certain risk rating are accepted and management formally agrees to accept them. Those above that level you develop methods of mitigating and managing those risks. Under my rating system, the lowest score that can be achieved is 1 and the highest score is 15. A lot of organizations might say that a total score of below 4 is to be accepted. For some organizations a better approach to accepting risk is sometimes to only accept those risks that have an impact of ‘Low’ (i.e., equal to 1). Therefore, all moderate and high impact risks are mitigated and managed.

Once you have your analysis done you will have a list of risks that require mitigation and management through monitoring and other methods.

Answers

  1. According to the Florida Museum of Natural History, between 1948 and 2005 there were 391 alligator attacks resulting in 18 fatalities, whereas there were 592 shark attacks with 9 fatalities. That makes the alligator fatality rate almost three times as high as the shark fatality rate.
  2. The odds of winning the PowerBall are around one in 175M. While still incredibly long, the odds of becoming a movie star are significantly better at one in 1.5M.
  3. Lightning is more deadly but do not underestimate that vending machine. According to the US National Oceanic and Atmospheric Administration (NOAA), the odds of being hit by lightning in the US are one in 1.9M. According to the US National Safety Council, there is a one in 112M chance of dying in a vending machine accident.
  4. The odds are in your favor if you are interested in dating a supermodel. Even better than becoming a movie star. You have a one in 88K chance of dating a supermodel according to Ask the Odds. The odds of being elected President are slim at one in 10M.
  5. The US Centers for Disease Control (CDC) estimate that the odds of drowning are one in 31.4. The CDC estimates that the odds of dying from influenza are around one in 345K.
  6. The CDC estimates that one in eight people will catch the flu in any given year and as seen in a previous answer, there is a one in 345K chance that a person will die as a result. Given the population of the US is around 315M and only four people have actually caught the Ebola virus in the US, there is around a one in 78M chance of catching Ebola in the US but that could change slightly if more infected people enter the US.

This was cross-posted from the PCI Guru blog.

Copyright 2010 Respective Author at Infosec Island
Security or Checking a Box? https://www.infosecisland.com/blogview/24095-Security-or-Checking-a-Box.html https://www.infosecisland.com/blogview/24095-Security-or-Checking-a-Box.html Thu, 20 Nov 2014 12:21:00 -0600 “Better to remain silent and be thought a fool than to speak out and remove all doubt.” Abraham Lincoln

 

What is your organization interested in?  Security or checking a box?

Not surprisingly, most people answer “security” and then go on to prove with their actions and words that they are only interested in checking a box.

All of you out there that argue ad nauseam about the meaning of PCI DSS testing requirements and the requisite documentation are interested in one thing and one thing only: checking a box.  I am not talking about the few that have honest differences of opinion on a few of the requirements and how a QSA is interpreting them and assessing them.  I am talking about those of you that fight constantly with your QSA or acquiring bank on the process as a whole.

If you were to step back and listen to your arguments, you would hear someone that is splitting hairs in a vain attempt to avoid having to do something that would improve your organization’s security posture.  In essence, you want to only be judged PCI compliant, not actually be secure.

To add insult to injury, these are also typically the people that argue the most vehemently over the fact that the PCI DSS is worthless because it does not make an organization secure.  Wow!  Want to have your cake and eat it too!  Sorry, but you cannot have it both ways.

Everyone, including the Council, has been very clear that the PCI DSS is a bare minimum for security, not the “be all to end all” for securing an organization.  Organizations must go beyond the PCI DSS to actually be secure.  This is where these people and their organizations get stumped because they cannot think beyond the standard.  Without a detailed road map, they are totally and utterly lost.  And heaven forbid they should pay a consultant for help.

But I am encountering a more insidious side to all of this.  As you listen to the arguments, a lot of you arguing about PCI compliance appear to have no interest in breaking a sweat and doing the actual work that is required.  More and more I find only partially implemented security tools, only partially implemented monitoring and only partially implemented controls.  And when you dig into it as we must do with the PCI assessment process, it becomes painfully obvious that when it got hard is when the progress stopped.

 

“It’s supposed to be hard. If it wasn’t hard, everyone would do it.” Jimmy Dugan – A League Of Their Own

 

Security guru Bruce Schneier was speaking at a local ISSA meeting recently and when asked about why security is not being addressed better he stated that one of the big reasons is that it is hard and complex at times to secure our technology.  And he is right, security is hard.  It is hard because of our poor planning, lack of inclusion, pick the reason and I am sure there is some truth to it.  But he went on to say that it is not going to get any easier any time soon.  Yes, we will get better tools, but the nature of what we have built and implemented will still make security hard.  We need to admit it will be hard and not sugar coat that fact to management.

Management also needs to clearly understand that security is not perfect.  The analogy I like to use is banks.  I point out to people the security around banks.  They have one or more vaults with time locks.  They have video cameras.  They have dye packs in teller drawers.  Yet, banks still get robbed.  But, the banks only stock their teller drawers with a minimal amount of money so the robber can only get a few thousand dollars in one robbery.  Therefore to be successful, a robber has to rob many banks to make a living, which increases the likelihood they will get caught.  We need to do the same thing with information security and recognize that breaches will still occur, but ensure we have controls in place that minimize the amount or type of information an attacker can obtain.

 

“There’s a sucker born every minute.” David Hannum

 

Finally, there is the neglected human element.  It is most often neglected because security people are not people, people.  A lot of people went into information security so that they did not have to interact a lot with people – they wanted to play with the cool tools.  Read the Verizon, Trustwave, etc. breach analysis reports and time and again, the root cause of a breach comes down to human error, not a flaw in one of our cool tools.  Yet what do we do about human error?  Little to nothing.  The reason being that supposedly security awareness training does not work.  Security awareness training does not work because we try to achieve success only doing it once per year not continuously.

To prove a point, I often ask people how long it took them to get their spouse, partner or friend to change a bad habit of say putting the toilet seat down or not using a particular word or phrase.  Never in my life have I ever gotten a response of “immediately”, “days” or “months”, it has always been measured in “years”.  And you always get comments about the arguments over the constant harping about changing the habit.  So why would any rational person think that a single annual security awareness event is going to be successful in changing any human habits?  It is the continuous discussion of security awareness that results in changes in people’s habits.

Not that you have to harp or drone on the topic, but you must keep it in the forefront of people’s mind.  The discussion must be relevant and explain why a particular issue is occurring, what the threat is trying to accomplish and then what the individual needs to do to avoid becoming a victim.  If your organization operates retail outlets, explaining a banking scam to your clerks is pointless.  However, explaining that there is now a flood of fraudulent coupons being generated and how to recognize phony coupons is a skill that all retail clerks need to know.

  • Why are fraudulent coupons flooding the marketplace? Because people need to reduce expenses and they are using creative ways to accomplish that, including fraudulent ways.
  • What do the fraudulent coupons do to our company? People using fraudulent coupons are stealing from our company.  When we submit fraudulent coupons to our suppliers for reimbursement, they reject them and we are forced to absorb that as a loss.
  • What can you do to minimize our losses? Here are the ways to identify a fraudulent coupon.  [Describe the characteristics of a fraudulent coupon]  When in doubt, call the store manager for assistance.

Every organization I know has more than enough issues to make it easy to come up with a topic for these sorts of messages at least once a week.  Information security personnel need to work with their organization’s Loss Prevention personnel to identify those issues and then write them up so that all employees can act to prevent becoming victims.

Those of you who are closet box checkers need to give it up.  You are doing your organizations a huge disservice because you are not advancing information security; you are advancing a check in a box.

This was cross-posted from the PCI Guru blog. 

 

 

 

Copyright 2010 Respective Author at Infosec Island
Access Governance 101: Job Changes and Elevated Permissions https://www.infosecisland.com/blogview/24094-Access-Governance-101-Job-Changes-and-Elevated-Permissions.html https://www.infosecisland.com/blogview/24094-Access-Governance-101-Job-Changes-and-Elevated-Permissions.html Thu, 20 Nov 2014 12:07:00 -0600 By: Fernando Labastida 

Identity and Access Management (IAM) has become an increasingly important discipline, from its tentative beginnings in the early 90s to the multi-disciplinary, cloud-based process it is today. With 2.4 million Google results, IAM has gotten lots of ink.

But the growing discipline of Identity and Access Governance, an essential component of IAM projects, hasn’t seen as much coverage in the news and blogosphere. So we thought we’d focus on Access Governance for a few posts.

Today [Nov. 4] we’re launching a series called Identity and Access Governance 101, with the aim of clarifying the discipline for those new to the practice. We wanted to explain it not so much from a technology perspective, but from a business process perspective.

Today we cover how Access Governance handles job changes and privileged access.

Keep Tabs On Your Users and Their Access

How do you keep tabs on whether your employees, contractors, partners and customers continue to have appropriate access to the resources they need, and are kept from the technology they shouldn’t have access to?

Depending on the functionality and importance of your applications, databases and document folders, access should be reviewed periodically to ensure your organization is secure.

The Job Change Scenario

In today’s fluid corporate environment, your employees get promoted, make lateral moves, become contractors, quit, get laid-off, come back again, and experience every scenario in between.

These commonplace activities can sometimes be fraught with peril.

For example, in a large enterprise scenario some employees acquire privileged access or elevated permissions. These are typically domain admins: network administrators, database administrators or application owners. Like most people, they have regular access to your standard corporate applications, but they also have elevated permissions to the applications they own.

During job changes, this elevated access can be overlooked.

For example, Craig just changed job functions. In his previous job Craig was a DBA in the HR department. But now he’s an application administrator in the manufacturing department. If Craig’s DBA access is not revoked during the job change process, he’ll have lingering DBA permissions.

Could Craig shut down a database or cause even worse damage if he wanted to? Yes. Would Craig do that? You never know. People do some funky stuff.

The Job Change Access Review

Enter the job change access review, a key process for avoiding lingering elevated permissions.

How do you define the job change?

Each firm has its own criteria. It could be a department change, or a job core change. The employee might get a new manager, or move to a new location. It really depends on your particular situation.

But when your chosen criteria are met (and your IAG system sets off the appropriate alerts), it’s time to schedule an access review for that person.

If an employee moves from finance to manufacturing, the manufacturing manager might look at the access of the person now reporting to him and say: “Hey, why does he have this general ledger access? He doesn’t need this!”

The manager will then give the order to revoke the new employee’s general ledger access.
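To sketch how such a trigger might look in practice, here is a tiny example that compares an old and a new HR record against a set of chosen criteria; the field names, criteria and records are assumptions for illustration, not any particular IAG product's schema.

```python
# A minimal sketch of a job-change trigger: if any chosen criterion changes between
# the old and new HR records, schedule an access review. Fields are illustrative.
TRIGGER_FIELDS = ("department", "job_code", "manager", "location")

def changed_criteria(before: dict, after: dict) -> list:
    """Return the criteria that changed and should trigger an access review."""
    return [field for field in TRIGGER_FIELDS if before.get(field) != after.get(field)]

before = {"user": "craig", "department": "HR", "job_code": "DBA", "manager": "alice", "location": "HQ"}
after = {"user": "craig", "department": "Manufacturing", "job_code": "App Admin", "manager": "bob", "location": "HQ"}

changes = changed_criteria(before, after)
if changes:
    print(f"Schedule access review for {after['user']}: changed {', '.join(changes)}")
    # The reviewing manager would then revoke entitlements that no longer fit the new
    # role, such as Craig's lingering DBA permissions from his old HR position.
```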

Elevated Access Review

The job change scenario is a great start, and is perfect for employees with standard-level access to applications. But for privileged access, why wait for somebody to change positions? 

Enter the Elevated Access Review. This is a periodic review for domain admins, and is typically scheduled more frequently than quarterly. Monthly or even weekly are appropriate. Again it depends on your organization.

How do you review elevated access? Here are a few scenarios you could choose from:

1.      Manager Review

The employee’s immediate manager is responsible for reviewing elevated permissions, ensuring his or her direct reports have appropriate access according to their job functions.

2.      Application Owner Review

Some organizations feel the manager may not know what access his direct reports should have, and so defer to the application owner. For example, if employee A has financial application access, the owner of the financial application should review employee A’s access to his applications and have the power to allow or revoke that access.

3.      Two-Step Scenario.

You might want to combine one and two above using the two-step scenario. First, the manager could say “yes, my employee should have access to the financial application.”  Then there’s an additional sign-off from the application owner. He could say “Ok, her manager said she should have access to this financial app, but as the app owner I don’t think it’s required.” He has the higher level say-so, and he can revoke her access.

It’s entirely up to you which scenario you choose for elevated access approval.
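As one way to picture the two-step scenario, here is a very small sketch in which an elevated entitlement survives only if both the manager and the application owner certify it; the function and data shapes are assumed for illustration.

```python
# A minimal sketch of the two-step elevated-access review: the entitlement is kept
# only if both the manager and the application owner approve it. Illustrative only.
def review_elevated_access(entitlement: str, manager_approves: bool, app_owner_approves: bool) -> str:
    if not manager_approves:
        return f"revoke {entitlement} (manager did not certify it)"
    if not app_owner_approves:
        return f"revoke {entitlement} (application owner overruled the manager)"
    return f"keep {entitlement}"

# The app owner has the higher-level say-so, so manager approval alone is not enough.
print(review_elevated_access("financial application admin", manager_approves=True, app_owner_approves=False))
```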

Conclusion

So far we’ve covered rules and processes for ensuring your employees have access to the appropriate technology resources, including job change triggers and regular reviews for those with elevated access.

In the next post in this series we’ll cover role-based access reviews.

This was cross-posted from the Identropy blog. 

Copyright 2010 Respective Author at Infosec Island
Operation Onymous Challenges Tor to Strengthen Its Security https://www.infosecisland.com/blogview/24093-Operation-Onymous-Challenges-Tor-to-Strengthen-Its-Security-.html https://www.infosecisland.com/blogview/24093-Operation-Onymous-Challenges-Tor-to-Strengthen-Its-Security-.html Wed, 19 Nov 2014 11:22:19 -0600 By: David Bisson 

Earlier in November, Europol, the FBI and the Department of Homeland Security coordinated a global sting against the “Dark Web” drug trade.

Codenamed “Operation Onymous,” the international legal effort arrested 17 people and shut down a number of drug and contraband Internet underground websites, including Topix, Cloud 9, Black Market and the infamous Silk Road 2.0.

In all, the operation seized more than 400 websites with the “.onion” domain, which belongs to the anonymity service Tor that grants users access to public networks without requiring the forfeiture of their privacy.

How the domains were found currently remains a mystery—a degree of uncertainty that has many Tor users worried about the security of the service’s anonymity shield.

To understand the full implications of this takedown, a little background on Tor is helpful.

AN ONION SURFS THE WEB

Tor, otherwise known as “The Onion Routing” program, was a project originally designed by the U.S. Naval Research Laboratory. Its intention was to protect governmental communications by safeguarding interlocutors’ identity.

Today, Tor has expanded into the private sector. All kinds of users, from cyber drug lords to journalists and dissidents who wish to conceal their online activities from repressive governments, allegedly employ the anonymizing service.

Put simply, onion routing encrypts users’ data that is sent through the web in multiple layers, thereby mimicking the layers of an onion, and transmits user traffic through several different computers.
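To make the "layers of an onion" picture concrete, here is a small conceptual sketch using symmetric keys from the third-party cryptography package. Real Tor circuits negotiate a key with each relay and carry routing information in every layer, so treat this purely as an illustration of the layering idea, not the actual protocol.

```python
# Conceptual sketch of onion-style layered encryption (not the real Tor protocol):
# the payload is wrapped once per relay, and each relay can peel off only its own layer.
from cryptography.fernet import Fernet  # pip install cryptography

relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit relays

def wrap(payload: bytes, keys) -> bytes:
    # Encrypt for the exit relay first (innermost layer), then wrap outward to the entry relay.
    for key in reversed(keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def peel(onion: bytes, keys) -> bytes:
    # Each relay in turn removes one layer with its own key.
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"GET / HTTP/1.1", relay_keys)
assert peel(onion, relay_keys) == b"GET / HTTP/1.1"
```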

The service’s functionality depends on a unique infrastructure of middle relays, bridges and end relays. Any user can supply a middle router from the comfort of their own home and not fear retaliation from law enforcement. Bridges go a level deeper, acting as private relays that are protected from those who wish to block users’ IP addresses.

End relays, by contrast, are the final relays in a chain of connections and, as such, are often targeted by law enforcement and copyright holders.

OPERATION ONYMOUS: THE DARK SIDE OF TOR

Many have celebrated Tor for its Browser Bundle package, which does not require users to download any software, and for its multi-language interface.

Additionally, human rights advocates approve of the service because of its roundabout accessibility in states that censor the web. If network firewalls block users from accessing Tor’s website, even in states like China and Iran, users can send a message to a particular email address, from which a reply message will be sent to them with installation instructions.

These benefits to users notwithstanding, Operation Onymous has revealed that Tor – once thought to be impenetrable – can successfully be infiltrated by government agencies.

To some, including Craig Young, a security researcher at Tripwire, this does not come as much of a surprise:

“The FBI has generally demonstrated in recent years that they can and will go after cybercriminals operating in the relative anonymity of the TOR network. Although the legality of some of the law enforcement tools has been called into question at times, there is no denying the effectiveness with which US law enforcement has been able to identify and shutdown illegal services provided over the dark web.”

Meanwhile, others, including those who help run the Tor Project, are still trying to figure out how law enforcement agencies located and took control of so many hidden services.

In a letter posted to users, the editors of the Tor blog propose a number of possible attack vectors that may have been used in the takedown, such as operational security shortcomings on the part of the affected websites, SQLi attack and Bitcoin deanonymization.

Regardless of the method of exploit, the fact that the international community broke into Tor is, in a larger sense, a testament to the dangers of the service’s growing popularity.

“Until recently, Tor was mainly utilized by the technically savvy and security communities,” said Valerie Thomas, Principal Consultant at Securicon. “Now that Tor is widely known, it has caught the attention of several organizations and federal agencies.”

The level of anonymity afforded by Tor, when exercised in a networked society, constitutes an unacceptable threat to those charged with defending national security. This explains why the NSA has a surveillance program called X-Keyscore that collects information on people who have used or been invited to install anonymizing services, such as Tor.

But many users are unaware of these risks to their anonymity and privacy, with most assuming that their use of Tor’s services is enough to protect them online.

According to Chris Czub, Security Research Engineer at Duo Security, that’s just not the case. “The major issue with Tor is that it can’t protect people from operational security or software error,” he explains. “This makes lay-users feel a sense of security and privacy that isn’t necessarily justified.”

The insecurities of Tor are perhaps best evident in our fluid understanding of the “Dark Web,” as John Walker, CTO of Cytelligence, suggests:

“When we use the tagline ‘Dark Web,’ we need to take care that we are not placing our subject in a box that limits its characterization to either this or that. . . .We can conceive of the Dark Web as anything from a closed environment that uses dynamic URL to share information system-to-system with the support of securely encapsulated lines, to a full blown space residing in a public cloud, to an environment of an unwitting company that has allowed unauthorised and illicit hosting to occur.”

These variable manifestations of the Dark Web mean that the same issues that plague the regular web are still issues for Tor. If attackers were to compromise a web app hosted on a Tor service, Czub explains, this could potentially lead to a breach in user data and perhaps even deanonymization.

With this in mind, Tor comes down to the issue of trust and whether users feel their privacy and anonymity are safe in the hands of others.

THE FUTURE OF ONION ROUTING

Operation Onymous, in the words of Young, “is a great example of how 20th century law enforcement tactics and undercover operations are still viable in the 21st century.”

Undoubtedly, the international community’s seizure of 400 hidden websites has rocked Tor users and advocates of web anonymity.

Even so, that doesn’t mean Tor is out for the count. In fact, those who maintain the Tor Project can learn from this experience to make its networks stronger and more secure.

As the service is open-source, one recommendation is to periodically host bug bounty competitions. Lamar Bailey, Director of Security Research and Development for Tripwire, is a strong proponent of this idea:

“It’s obvious from the recent takedowns that TOR users are very aware they’re targets for law enforcement. Starting a bug bounty program is an interesting counter measure. If issues can be fixed before they are exploited by law enforcement, it will help keep their users’ privacy more secure.”

Another option is for Tor to continue to partner with popular websites, such as Facebook, to make it easier for users to access the sites they love. This could lead to more users installing Tor, which would translate into additional bridges and relays, thereby making the service more secure for all.

Ultimately, it is the role of Tor’s users and admins to learn from Operation Onymous. As noted by Claus Houmann Cramon, an information security curator and librarian, “There shouldn’t be any need to be an OPSEC expert to be able to have a reasonable expectation of security and privacy online. We as information security experts need to build our devices and software securely by default. Once we have, we need to enforce this to prevent future attacks.”

This was cross-posted from Tripwire's The State of Security blog.  Copyright 2010 Respective Author at Infosec Island
Centralization: The Hidden Trap https://www.infosecisland.com/blogview/24092-Centralization-The-Hidden-Trap.html https://www.infosecisland.com/blogview/24092-Centralization-The-Hidden-Trap.html Wed, 19 Nov 2014 10:57:18 -0600 Everything is about efficiency and economies of scale nowadays. That’s all we seem to care about. We build vast power generation plants and happily pay the electrical resistance price to push energy across great distances. We establish large central natural gas pipelines that carry most of the gas that is eventually distributed to our homes and factories. And we establish giant data centers that hold and process enormous amounts of our private and business information; information that if lost or altered could produce immediate adverse impacts on our everyday lives.

Centralization like this has obvious benefits. It allows us to provide more products and services while employing fewer people. It allows us to build and maintain fewer facilities and less infrastructure while keeping our service levels high. It is simply more efficient and cost effective. But the “cost” that is more “effective” here is purely rated in dollars. How about the hidden “cost” in these systems that nobody seems to talk about?

What I am referring to here is the vulnerability centralization brings to any system. It is great to pay less for electricity and to avoid some of the local blackouts we used to experience, but how many power plants and transmission towers would an enemy have to take out to cripple the whole grid? How many pipeline segments and pumping stations would an enemy have to destroy to widely interrupt gas delivery? And how many data centers would an enemy need to compromise to gain access to the bulk of our important records? The answer to these questions is: not as many as yesterday, and the number becomes smaller every year.

However, I am not advocating eschewing efficiency and economies of scale; they make life in this overcrowded world better for everyone. What I am saying is that we need to realize the dangers we are putting ourselves in and make plans and infrastructure alterations to cope with attacks and disasters when they come. These kinds of systems need to have built-in redundancies and effective disaster recovery plans if we are to avoid crisis.

Common wisdom tells us that you shouldn’t put all your eggs in one basket, and Murphy’s Law tells us that anything that can go wrong eventually will go wrong. Let’s remember these gems of wisdom. That way our progeny cannot say of us: “those that ignore history are doomed to repeat it.”

Thanks to John Davis for this post.

This was cross-posted from the MSI State of Security blog. Copyright 2010 Respective Author at Infosec Island
Launching in 2015: A Certificate Authority to Encrypt the Entire Web https://www.infosecisland.com/blogview/24091-Launching-in-2015-A-Certificate-Authority-to-Encrypt-the-Entire-Web.html https://www.infosecisland.com/blogview/24091-Launching-in-2015-A-Certificate-Authority-to-Encrypt-the-Entire-Web.html Tue, 18 Nov 2014 11:23:57 -0600 Today EFF is pleased to announce Let’s Encrypt, a new certificate authority (CA) initiative that we have put together with Mozilla, Cisco, Akamai, Identrust, and researchers at the University of Michigan that aims to clear the remaining roadblocks to transition the Web from HTTP to HTTPS.

Although the HTTP protocol has been hugely successful, it is inherently insecure. Whenever you use an HTTP website, you are always vulnerable to problems, including account hijacking and identity theft; surveillance and tracking by governments, companies, and both in concert; injection of malicious scripts into pages; and censorship that targets specific keywords or specific pages on sites. The HTTPS protocol, though it is not yet flawless, is a vast improvement on all of these fronts, and we need to move to a future where every website is HTTPS by default. With a launch scheduled for summer 2015, the Let’s Encrypt CA will automatically issue and manage free certificates for any website that needs them. Switching a webserver from HTTP to HTTPS with this CA will be as easy as issuing one command, or clicking one button.

The biggest obstacle to HTTPS deployment has been the complexity, bureaucracy, and cost of the certificates that HTTPS requires. We’re all familiar with the warnings and error messages produced by misconfigured certificates. These warnings are a hint that HTTPS (and other uses of TLS/SSL) is dependent on a horrifyingly complex and often structurally dysfunctional bureaucracy for authentication.

 

[Example certificate warning] Let's Encrypt will eliminate most kinds of erroneous certificate warnings

 

The need to obtain, install, and manage certificates from that bureaucracy is the largest reason that sites keep using HTTP instead of HTTPS. In our tests, it typically takes a web developer 1-3 hours to enable encryption for the first time. The Let’s Encrypt project is aiming to fix that by reducing setup time to 20-30 seconds. You can help test and hack on the developer preview of our Let's Encrypt agent software or watch a video of it in action here:

Let’s Encrypt will employ a number of new technologies to manage secure automated verification of domains and issuance of certificates. We will use a protocol we’re developing called ACME between web servers and the CA, which includes support for new and stronger forms of domain validation. We will also employ Internet-wide datasets of certificates, such as EFF’s own Decentralized SSL Observatory, the University of Michigan’s scans.io, and Google's Certificate Transparency logs, to make higher-security decisions about when a certificate is safe to issue.
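ACME was still being designed when this was written, so the following is only a conceptual sketch of the general idea behind automated domain validation: prove control of a domain by publishing a CA-supplied token at a well-known URL that the CA then fetches. The path, function names and flow are assumptions for illustration, not the actual protocol.

```python
# Conceptual sketch of automated domain validation (the general idea behind ACME,
# not the actual protocol): the CA hands out a random token, the applicant's web
# server publishes it at a well-known URL, and the CA checks it before issuing.
import secrets
import urllib.request

def ca_issue_challenge() -> str:
    return secrets.token_urlsafe(32)  # random token the applicant must publish

def ca_verify(domain: str, token: str) -> bool:
    url = f"http://{domain}/.well-known/ca-challenge/{token}"  # hypothetical path
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode().strip() == token
    except OSError:
        return False

token = ca_issue_challenge()
# The site's agent would now write `token` to that path on the web server; once
# ca_verify("example.com", token) returns True, the CA can issue a certificate
# for example.com without any manual paperwork.
```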

The Let’s Encrypt CA will be operated by a new non-profit organization called the Internet Security Research Group (ISRG). EFF helped to put together this initiative with Mozilla and the University of Michigan, and it has been joined for launch by partners including Cisco, Akamai, and Identrust.

The core team working on the Let's Encrypt CA and agent software includes James Kasten, Seth Schoen, and Peter Eckersley at EFF; Josh Aas, Richard Barnes, Kevin Dick and Eric Rescorla at Mozilla; and Alex Halderman and James Kasten at the University of Michigan.

This was cross-posted from EFF's DeepLinks blog. 

Copyright 2010 Respective Author at Infosec Island
Cyber Threats in 2015: New Attack Vectors, More Severe Incidents https://www.infosecisland.com/blogview/24090-Cyber-Threats-in-2015-New-Attack-Vectors-More-Severe-Incidents.html https://www.infosecisland.com/blogview/24090-Cyber-Threats-in-2015-New-Attack-Vectors-More-Severe-Incidents.html Tue, 18 Nov 2014 11:03:47 -0600 One year ago today, Target was gearing up for Black Friday sales and projecting a strong end to the year. That was the company’s primary focus. The same could be said for Neiman Marcus and Home Depot. And no one had even heard of Heartbleed or Shellshock yet.

Needless to say, much has changed in the last year.

If 2014 ends up going down in the history books as the “Year of the Cyberattack,” then what does 2015 have in store for network administrators? We’ve already started to see the predictions roll in, the first coming from the report, “The Invisible Becomes Visible,” by Trend Micro.

The report paints the new network security threat landscape as becoming much more broad and diverse than it has ever been, evolving beyond the advanced persistent threats (APTs) and targeted attacks that have been the favorite weapon of hackers.

Trend Micro CTO Raimund Genes told InfoSecurity that cyberattack tools now require less expertise to use and don’t cost as much. He listed “botnets for hire … downloadable tools such as password sniffers, brute-force and cryptanalysis hacking programs … [and] routing protocols analysis” as just a few of hackers’ new favorites.

Given these new threats, how can network administrators shore up their network security for 2015 and beyond?

The ‘Three-Legged Stool’ of Network Security

As network administrators build out their network security infrastructure, it’s best to focus on the so-called “three-legged stool” approach – prevention, detection and response. Network security cannot be limited to simply installing prevention measures and hoping for the best. Why? Because there is no one universal, surefire way to prevent an attack, especially as attackers diversify and escalate their efforts.

Even if network administrators are cautious to the point where they assume their network could be hacked at any minute, some endpoints could still be exploited. Or, employees might not follow network security protocol.

In the event that these prevention measures are not entirely successful, organizations need to have a plan, and that means putting in place strong detection and response protocols – these are the two other “legs” in the stool. What do they look like in practice?

In the case of VPN management, central management capabilities within the technology provide network administrators with a single view of all remote access endpoints, allowing them to quickly launch a response when an attack is detected, often by deprovisioning the vulnerable device.

With these three elements working in tandem, network administrators will be prepared and armed for any threat 2015 might bring to their network security.

This was cross-posted from the VPN HAUS blog. 

Copyright 2010 Respective Author at Infosec Island
MSSP Client Onboarding – A Critical Process! https://www.infosecisland.com/blogview/24088-MSSP-Client-Onboarding--A-Critical-Process.html https://www.infosecisland.com/blogview/24088-MSSP-Client-Onboarding--A-Critical-Process.html Mon, 17 Nov 2014 10:55:54 -0600 Many MSSP relationships are doomed at the on-boarding stage when the organization first becomes a customer. Given how critical the first 2-8 weeks of your MSSP partnership are, let’s explore it a bit.

Here are a few focus areas to note (this, BTW, assumes that both sides are in full agreement about the scope of services and can quote from the SOW if woken up at 3AM):

  • Technology deployment: unless MSSP sensors are deployed and are able to capture logs, flows, packets, etc., you don’t yet have a monitoring capability. Making sure that your devices log – and that those logs actually reach the MSSP sensor – is central to this (this also implies that you are in agreement on what log messages they need for their analysis – yes, I am talking about you, authentication success messages :-)). See the sketch after this list for a quick way to verify forwarding.
  • Access methods and credential sharing: extra-critical for device management contracts, no amount of SLA negotiation will help your partner apply changes faster if they do not have the passwords (this also implies that you log all remote access events by the MSSP personnel and then send these logs to …. oops!)
  • Context information transfer: lists of assets (and, especially, assets considered critical by the customer), security devices (whether managed by the MSSP or not), network diagrams, etc. all make a nice information bundle to share with the MSSP partner
  • Contacts and escalation trees: critical alerts are great, but not if the only person whose phone number was given to the MSSP is on a 3 week Caribbean vacation… Escalation and multiple current contacts are a must.
  • Process synchronization: now for the fun part: your risk assessment (maybe) and incident response (likely) processes may now be “jointly run” with your MSSP, but have you clarified the touch points, dependencies and information handoffs?
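
On the technology-deployment point above, the simplest onboarding smoke test is to push a known event at the MSSP sensor and have the MSSP confirm it arrived. A minimal sketch, assuming a hypothetical collector address (substitute whatever your MSSP actually gives you):

```python
# Onboarding smoke test: send a recognizable event to the MSSP sensor, then
# ask the MSSP analyst to confirm receipt. The hostname and port below are
# assumptions for illustration only.
import logging
import logging.handlers

MSSP_SENSOR = ("sensor.mssp.example.com", 514)  # hypothetical collector (UDP syslog)

logger = logging.getLogger("mssp-onboarding-check")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.SysLogHandler(address=MSSP_SENSOR))

# Cover the event types agreed in the SOW -- including the easy-to-forget
# authentication success messages mentioned above.
logger.info("ONBOARDING-TEST: authentication success event forwarded for verification")
```

Run it from each log source (or from whatever relays on their behalf) and close the loop with the MSSP: if they never see the test event, no amount of SLA negotiation will save the monitoring service later.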

If you, the MSSP client, fail to follow through with these, the chance of success is severely diminished. Now, as my research on MSSP progresses, the amount of sad hilarity I am encountering piles on – and you don’t want to be part of that! For example, an MSSP asks a client: “To improve our alert triage, can we please get the list of your most critical assets?” The client response? “Damn, we’d like to know that too!” When asked for their incident response plan, another client sheepishly responded that they don’t have it yet, but can we please create it together – that is, only if it doesn’t cost extra…. BTW, if your MSSP never asked you about your IR plans during on-boarding, run, run, run (it is much better to actually walk through an incident scenario together with your MSSP at this stage).

In another case, a client asked an MSSP “to monitor for policy violations.” When asked for a copy of their most recent security policy, the client responded that it has not been created yet. On the other hand, a sneaky client once scheduled a pentest of their network during the MSSP onboarding period – but after their sensors were already operational. You can easily imagine the painful conversations that transpired when the MSSP failed to alert them…. Note that all of the above examples and quotes are fictitious, NOT based on real clients and are entirely made up (which is the same as fictitious anyway, right? Just wanted to make sure!)

Overall, our recent poll of MSSP clients indicated that most wished they’d spent more time on-boarding their MSSPs. Expect things to be very much in flux for at least several weeks – your MSSP should ask a lot of questions, and so should you! While your boss may be tempted by the promises of fast service implementation, longer on-boarding often means better service for the next year. Of course, not every MSSP engagement starts with a 12-week hardcore consulting project involving 4 “top shelf” consultants, but such a timeline for a large, complex monitoring and management effort is not at all unreasonable. In fact, one quality MSSP told me that they can deploy the service much faster than it takes their clients to actually fulfill their end of the bargain (share asset info, contacts, deploy sensors, tweak the existing processes, etc.).

This was cross-posted from the Gartner blog. 

Copyright 2010 Respective Author at Infosec Island]]>
The Arrogance of the US Nuclear Power Industry - We Don't Want to Look at Everything https://www.infosecisland.com/blogview/24087-The-Arrogance-of-the-US-Nuclear-Power-Industry-We-Dont-Want-to-Look-at-Everything.html https://www.infosecisland.com/blogview/24087-The-Arrogance-of-the-US-Nuclear-Power-Industry-We-Dont-Want-to-Look-at-Everything.html Mon, 17 Nov 2014 10:53:41 -0600 The Nuclear Energy Institute (NEI) in support of the US nuclear utilities has filed a request for rulemaking with the Nuclear Regulatory Commission (NRC) to modify the nuclear plant cyber security rule (www.nrc.gov, Docket ID NRC-2014-0165). The gist of the draft rulemaking is NEI and the nuclear utilities feel the NRC is making the industry spend too much money by looking at too many of the systems and components in a nuclear power plant.

In today’s environment, with nuclear plants being prime cyber targets, industry should be looking at more, not less. New ICS cyber vulnerabilities that affect control systems, including those used in nuclear power plants, are being identified on what seems like a weekly basis. The Chinese, Russians, Iranians and others continue to launch cyber attacks against our infrastructure – nuclear plants are certainly on their list of targets. DHS is holding cleared briefings on Havex and BlackEnergy, both of which can affect control system HMIs in nuclear plants.

The NEI petition keeps the following in the existing rule – systems and components necessary to:

- “…prevent significant core damage and spent fuel sabotage; or

- Whose failure would cause a reactor scram.”

However, the petition wants to explicitly exclude the following categories in the existing rule:

-“safety-related and important-to-safety functions,

- security functions,

- emergency preparedness functions, including off-site communications,

- and support systems and equipment, which if compromised, would adversely impact safety, security, or emergency preparedness functions.”

The perception is that the nuclear utilities want to reduce cyber security, not increase it. Considering that the categories they want to exclude have already contributed to core melt and nuclear plant scrams, and that there is so much focus on cyber security today, why are NEI and the utilities doing this now?

This was cross-posted. 

Copyright 2010 Respective Author at Infosec Island]]>
Tips for Writing Good Security Policies https://www.infosecisland.com/blogview/24085-Tips-for-Writing-Good-Security-Policies.html https://www.infosecisland.com/blogview/24085-Tips-for-Writing-Good-Security-Policies.html Thu, 13 Nov 2014 13:58:19 -0600 Almost all organizations dread writing security policies. When I ask people why this process is so intimidating, the answer I get most often is that the task just seems overwhelming and they don’t know where to start. But this chore does not have to be as onerous or difficult as most people think. The key is pre-planning and taking one step at a time.

First, you should outline all the policies you are going to need for your particular organization. Now, this step itself is what I think intimidates people most. How are they supposed to ensure that they have all the policies they should have, without going overboard and burdening the organization with too many or too restrictive policies?

There are a few steps you can take to answer these questions:

  • Examine existing information security policies used by other, similar organizations and open source information security policy templates such as those available at SANS. You can find these easily online. However, you should resist simply copying such policies and adopting them as your own. Just use them for ideas. Every organization is unique and security policies should always reflect the culture of the organization and be pertinent, usable and enforceable across the board.
  • In reality, you should have information security policies for all of the business processes, facilities and equipment used by the organization. A good way to find out what these are is to look at the organization’s business impact analysis (BIA). This most valuable of risk management studies will include all essential business processes and equipment needed to maintain business continuity. If the organization does not have a current BIA, you may have to interview personnel from all of the different business departments to get this information.
  • If the organization is subject to information security or privacy regulation, such as financial institutions or health care concerns, you can easily download all of the information security policies mandated by these regulations and ensure that you include them in the organization’s security policy. 
  • You should also familiarize yourself with the available information security guidance such as ISO 27002, NIST SP 800-53, the Critical Security Controls for Effective Cyber Defense, etc. This guidance will give you a pool of available security controls that you can apply to fit your particular security needs and organizational culture.

Once you have the outline of your security needs in front of you, it is time to start writing. You should begin with broad-brush, high-level policies first and then add detail as you go along. Remember, information security “policy” really includes policies, standards, guidelines and procedures. I’ve found it a very good idea to write “policy” in just that order.
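
As an illustration only (the titles below are hypothetical examples, not recommended policy names), that top-down ordering can be sketched as a simple outline structure, with each high-level policy accumulating its standards, guidelines and procedures as the detail gets filled in:

```python
# Illustrative outline structure for writing "policy" top-down:
# high-level policies first, then standards, guidelines and procedures.
from dataclasses import dataclass, field


@dataclass
class Policy:
    title: str
    standards: list[str] = field(default_factory=list)
    guidelines: list[str] = field(default_factory=list)
    procedures: list[str] = field(default_factory=list)


outline = [
    Policy(
        title="Acceptable Use Policy",               # broad brush first
        standards=["Password complexity standard"],  # detail added later
        guidelines=["Choosing a strong passphrase"],
        procedures=["Requesting a password reset"],
    ),
]

for policy in outline:
    print(policy.title, "-", len(policy.standards), "standard(s) so far")
```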

Remember to constantly refer back to your outline and to consult with the business departments and users as you go along. It will take some adjustments and rewrites to make your policy complete and useable. Once you reach that stage, however, it is just a matter of keeping your policy current. Review and amend your security policy regularly to ensure it remains useable and enforceable. That way you won’t have to go through the whole process again!

 

Thanks to John Davis for this post.

This was cross-posted from the MSI State of Security blog. 

Copyright 2010 Respective Author at Infosec Island]]>
How Can ICS Cyber Security Risk be Quantified and What Does it Mean to Aurora https://www.infosecisland.com/blogview/24084-How-Can-ICS-Cyber-Security-Risk-be-Quantified-and-What-Does-it-Mean-to-Aurora.html https://www.infosecisland.com/blogview/24084-How-Can-ICS-Cyber-Security-Risk-be-Quantified-and-What-Does-it-Mean-to-Aurora.html Thu, 13 Nov 2014 13:55:28 -0600 I will be giving a lecture on ICS cyber security risk at the Fraunhofer Institute December 2nd in Germany. In preparation for the lecture, I was looking into the recent HAVEX and BlackEnergy malware attacks and how they can affect ICS cyber risk. Risk is defined as frequency times consequence. There is little information on frequency of ICS cyber attacks.
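
As a quick worked example (the numbers are hypothetical, chosen only to show the arithmetic), the frequency-times-consequence definition looks like this:

```python
# Risk = frequency x consequence, with made-up inputs for illustration.
def annual_risk(frequency_per_year: float, consequence_usd: float) -> float:
    """Expected annual loss: events per year times loss per event."""
    return frequency_per_year * consequence_usd


# Hypothetical ICS scenario: an attack estimated at once per 10 years
# (frequency 0.1/year) with a $5M consequence.
print(annual_risk(frequency_per_year=0.1, consequence_usd=5_000_000))  # 500000.0
```

The catch is that the frequency term is exactly the input we rarely have for ICS cyber attacks, which is why the rest of the discussion turns to consequence.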

The next issue is how you define consequence. HAVEX and BlackEnergy have been targeting selected ICS vendor HMIs that could be used to give remote access to the attackers. Even though the purpose of HAVEX and BlackEnergy appears to be exfiltrating information, there is nothing to stop the attackers from taking other actions. (Stuxnet initially was thought to be only about exfiltrating information.) It is possible that attackers could log in and send commands to the computer. Once your computer is owned, there is not much the attacker can’t do. This brings up the question of how consequence is defined.

The Aurora event can be initiated by the remote closing and reopening of breakers by the SCADA HMI. If the attackers “own” the HMIs, there are avenues for initiating Aurora events. Aurora has yet to be adequately mitigated by the utility industry. Moreover, much of the classified information on Aurora has been made public by DHS. As the information on Aurora is public and there may be unauthorized access to ICS HMIs, I would consider this situation to be a significant risk to our critical infrastructures.

This was cross-posted from the Unfettered blog. 

Copyright 2010 Respective Author at Infosec Island]]>
How to Steal Data From an Airgapped Computer Using FM Radio Waves https://www.infosecisland.com/blogview/24083-How-to-Steal-Data-From-an-Airgapped-Computer-Using-FM-Radio-Waves-.html https://www.infosecisland.com/blogview/24083-How-to-Steal-Data-From-an-Airgapped-Computer-Using-FM-Radio-Waves-.html Wed, 12 Nov 2014 15:07:52 -0600 By: Graham Cluley

More and more organisations today have some airgapped computers, physically isolated from other systems with no Internet connection to the outside world or other networks inside their company.

Security teams may have disconnected these computers from other networks in order to better protect them, and the data they have access to, from Internet attacks and hackers.

Of course, a computer which can’t be reached by other computers is going to be a lot harder to attack than one which is permanently plugged into the net. But that doesn’t mean it’s impossible.

Take, for instance, the case of the Stuxnet worm, which reared its ugly head in 2010. Stuxnet is thought to have caused serious damage to centrifuges at an Iranian uranium enrichment facility after infecting systems via a USB flash drive and a cocktail of Windows vulnerabilities.

Someone brought an infected USB stick into the Natanz facility and plugged it into a computer – allowing it to spread and activate its payload.

And it’s not just Iran. In the years since, we have heard of other power plants taken offline after being hit by USB-aware malware spread via sneakernet.

So, we accept that although it may be more difficult to infect isolated airgapped computers, it isn’t impossible.

But what about exfiltrating data from computers which have no connection with the outside world?

Researchers from Ben-Gurion University in Israel think they have found a way to do it, hiding data in radio emissions surreptitiously broadcast via a computer’s video display unit, and picking up the signals on nearby mobile phones.

And, to prove their point, they have released a YouTube video demonstrating their proof-of-concept attack in action.

In the video, which has no sound, the researchers first demonstrate that the targeted computer has no network or Internet connection.

Next to it is an Android smartphone, again with no network connection, that is running special software designed to receive and interpret radio signals via its FM receiver.

Proof-of-concept malware, dubbed “AirHopper,” running on the isolated computer ingeniously transmits sensitive information (such as keystrokes) in the form of FM radio signals by manipulating the video display adaptor.

Meanwhile, AirHopper’s receiver code is running on a nearby smartphone.

“With appropriate software, compatible radio signals can be produced by a compromised computer, utilizing the electromagnetic radiation associated with the video display adapter. This combination, of a transmitter with a widely used mobile receiver, creates a potential covert channel that is not being monitored by ordinary security instrumentation.”

As the researchers revealed in their white paper, the phone receiving the data can be in another room.

Now, you may think that if AirHopper is fiddling with the targeted computer’s screen, this could be noticed by any operator sitting in front of the device. However, the researchers say they have devised a number of techniques to disguise any visual clues that data may be being transmitted, such as waiting until the monitor is turned off, waiting until a screensaver kicks in, or determining (as a screensaver does) that there has been no user interaction for a certain period of time.

It’s all quite ingenious—and although I have explained before how high frequency sound can be used to exfiltrate data from an airgapped computer, this new method could work even if a PC’s speaker has been detached.

No sound on a computer you can live with, but removing monitors seems impractical.

Of course, it’s important that no-one should panic. The technique is elaborate, and at the moment—as far as we can tell—only exists within research laboratories.

It’s important to understand the various steps that have to be taken to exfiltrate data from an airgapped computer.

Firstly, malware has to be introduced to the isolated PC—not a simple task in itself, and a potential hurdle that may prove impossible if proper defences are in place.

Secondly, a mobile device carrying the receiver software needs to be in close proximity to the targeted computer (this would require either an accomplice, or infection of an employee’s mobile device with the malware).

The data then has to be transmitted from the mobile phone itself, back to the attackers.

Finally, this may not be the most efficient way to steal a large amount of data. The AirHopper experiment showed that data could be transmitted from targeted isolated computers to mobile devices up to 7 metres (23 feet) away, at a rate of 13-60 bytes per second. That’s equivalent to less than half a tweet per second.
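
To put that rate in perspective, here is a quick back-of-the-envelope sketch (the payload sizes are hypothetical examples):

```python
# How long would exfiltration take at AirHopper's reported 13-60 bytes/second?
def exfil_hours(payload_bytes: int, rate_bytes_per_sec: float) -> float:
    """Hours needed to move a payload at the given rate."""
    return payload_bytes / rate_bytes_per_sec / 3600


for label, size in [("a day of keystrokes (~20 KB)", 20_000),
                    ("a 1 MB document", 1_000_000)]:
    best, worst = exfil_hours(size, 60), exfil_hours(size, 13)
    print(f"{label}: {best:.2f} to {worst:.2f} hours")
```

Small keystroke logs move in minutes, while anything bulkier is exactly the overnight-or-weekend job described below.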


Despite that, it’s still easy to imagine that a determined hacker who has gone to such lengths would be happy to wait for a sizeable amount of data to be transmitted, perhaps as the isolated computers are left unattended overnight or at weekends.

If this all sounds like too much of an effort, think again. The researchers’ paper notes that, although complex, the attack is not beyond modern attackers:

“The chain of attack is rather complicated, but is not beyond the level of skill and effort employed in modern Advanced Persistent Threats (APTs)”

Which leads us to what you should do about it, and there is a familiar piece of advice to underline: tightly control who has access to your computers, what software they are able to install on them, and what devices they are permitted to attach.

The AirHopper attack cannot steal any data from your airgapped computers at all, if no-one ever manages to infect them in the first place.

It will be interesting to see whether others build on this research, and what methods are devised to counter this type of attack in the future.

This was cross-posted from Tripwire's The State of Security blog. 

Copyright 2010 Respective Author at Infosec Island]]>