Infosec Island Latest Articles
https://www.infosecisland.com

Security or Checking a Box?
https://www.infosecisland.com/blogview/24095-Security-or-Checking-a-Box.html
Thu, 20 Nov 2014 12:21:00 -0600

“Better to remain silent and be thought a fool than to speak out and remove all doubt.” – Abraham Lincoln

 

What is your organization interested in?  Security or checking a box?

Not surprisingly, most people answer “security” and then go on to prove with their actions and words that they are only interested in checking a box.

All of you out there who argue ad nauseam about the meaning of PCI DSS testing requirements and the requisite documentation are interested in one thing and one thing only: checking a box.  I am not talking about the few who have honest differences of opinion on a few of the requirements and how a QSA is interpreting and assessing them.  I am talking about those of you who fight constantly with your QSA or acquiring bank over the process as a whole.

If you were to step back and listen to your arguments, you would hear someone that is splitting hairs in a vain attempt to avoid having to do something that would improve your organization’s security posture.  In essence, you want to only be judged PCI compliant, not actually be secure.

To add insult to injury, these are also typically the people who argue most vehemently that the PCI DSS is worthless because it does not make an organization secure.  Wow!  You want to have your cake and eat it too!  Sorry, but you cannot have it both ways.

Everyone, including the Council, has been very clear that the PCI DSS is a bare minimum for security, not the be-all and end-all of securing an organization.  Organizations must go beyond the PCI DSS to actually be secure.  This is where these people and their organizations get stumped, because they cannot think beyond the standard.  Without a detailed road map, they are totally and utterly lost.  And heaven forbid they should pay a consultant for help.

But I am encountering a more insidious side to all of this.  As you listen to the arguments, a lot of those arguing about PCI compliance appear to have no interest in breaking a sweat and doing the actual work that is required.  More and more I find only partially implemented security tools, only partially implemented monitoring and only partially implemented controls.  And when you dig into it, as we must do with the PCI assessment process, it becomes painfully obvious that progress stopped the moment the work got hard.

 

“It’s supposed to be hard. If it wasn’t hard, everyone would do it.” Jimmy Dugan – A League Of Their Own

 

Security guru Bruce Schneier was speaking at a local ISSA meeting recently, and when asked why security is not being addressed better, he said that one of the big reasons is that securing our technology is hard and at times complex.  And he is right, security is hard.  It is hard because of our poor planning, our lack of inclusion – pick a reason, and there is probably some truth to it.  But he went on to say that it is not going to get any easier any time soon.  Yes, we will get better tools, but the nature of what we have built and implemented will still make security hard.  We need to admit it will be hard and not sugar-coat that fact to management.

Management also needs to clearly understand that security is not perfect.  The analogy I like to use is banks.  I point out to people the security around banks.  They have one or more vaults with time locks.  They have video cameras.  They have dye packs in teller drawers.  Yet banks still get robbed.  But banks stock their teller drawers with only a minimal amount of money, so a robber can get away with only a few thousand dollars in one robbery.  Therefore, to make a living, a robber has to rob many banks, which increases the likelihood of getting caught.  We need to do the same thing with information security: recognize that breaches will still occur, but have controls in place that minimize the amount or type of information an attacker can obtain.

 

“There’s a sucker born every minute.” David Hannum

 

Finally, there is the neglected human element.  It is most often neglected because security people are not people, people.  A lot of people went into information security so that they did not have to interact much with people – they wanted to play with the cool tools.  Read the Verizon, Trustwave and similar breach analysis reports, and time and again the root cause of a breach comes down to human error, not a flaw in one of our cool tools.  Yet what do we do about human error?  Little to nothing.  The excuse is that security awareness training supposedly does not work.  Security awareness training does not work because we try to achieve success by doing it once per year instead of continuously.

To prove a point, I often ask people how long it took them to get their spouse, partner or friend to change a bad habit, say putting the toilet seat down or not using a particular word or phrase.  Never in my life have I gotten a response of “immediately,” “days” or “months”; it has always been measured in years.  And you always hear about the arguments caused by the constant harping about changing the habit.  So why would any rational person think that a single annual security awareness event is going to succeed in changing any human habit?  It is the continuous discussion of security awareness that results in changes in people’s habits.

Not that you have to harp or drone on the topic, but you must keep it in the forefront of people’s minds.  The discussion must be relevant and explain why a particular issue is occurring, what the threat is trying to accomplish and then what the individual needs to do to avoid becoming a victim.  If your organization operates retail outlets, explaining a banking scam to your clerks is pointless.  However, explaining that there is now a flood of fraudulent coupons being generated, and how to recognize phony coupons, is a skill that all retail clerks need.

  • Why are fraudulent coupons flooding the marketplace? Because people need to reduce expenses, and they are using creative ways to accomplish that, including fraudulent ones.

  • What do fraudulent coupons do to our company? People using fraudulent coupons are stealing from our company.  When we submit fraudulent coupons to our suppliers for reimbursement, they reject them, and we are forced to absorb the loss.

  • What can you do to minimize our losses? Here are the ways to identify a fraudulent coupon.  [Describe the characteristics of a fraudulent coupon]  When in doubt, call the store manager for assistance.

Every organization I know has more than enough issues to supply a topic for these sorts of messages at least once a week.  Information security personnel need to work with their organization’s Loss Prevention personnel to identify those issues and then write them up so that all employees can act to avoid becoming victims.

Those of you who are closet box checkers need to give it up.  You are doing your organizations a huge disservice because you are not advancing information security; you are merely advancing a check in a box.

This was cross-posted from the PCI Guru blog.

Copyright 2010 Respective Author at Infosec Island
Access Governance 101: Job Changes and Elevated Permissions
https://www.infosecisland.com/blogview/24094-Access-Governance-101-Job-Changes-and-Elevated-Permissions.html
Thu, 20 Nov 2014 12:07:00 -0600
By: Fernando Labastida

Identity and Access Management (IAM) has become an increasingly important discipline, from its tentative beginnings in the early 90s to the multi-disciplinary, cloud-based process it is today. With 2.4 million Google results, IAM has gotten lots of ink.

But the growing discipline of Identity and Access Governance, an essential component of IAM projects, hasn’t seen as much coverage in the news and blogosphere. So we thought we’d focus on Access Governance for a few posts.

Today [Nov. 4] we’re launching a series called Identity and Access Governance 101, with the aim of clarifying the discipline for those new to the practice. We wanted to explain it not so much from a technology perspective, but from a business process perspective.

Today we cover how Access Governance handles job changes and privileged access.

Keep Tabs On Your Users and Their Access

How do you keep tabs on whether your employees, contractors, partners and customers continue to have appropriate access to the resources they need, and are kept from the technology they shouldn’t have access to?

Depending on the functionality and importance of your applications, databases and document folders, access should be reviewed periodically to ensure your organization is secure.

The Job Change Scenario

In today’s fluid corporate environment, your employees get promoted, make lateral moves, become contractors, quit, get laid-off, come back again, and experience every scenario in between.

These commonplace activities can sometimes be fraught with peril.

For example, in a large enterprise scenario some employees acquire privileged access or elevated permissions. These are typically domain admins: network administrators, database administrators or application owners. Like most people, they have regular access to your standard corporate applications, but they also have elevated permissions to the applications they own.

During job changes, this elevated access can be overlooked.

For example, Craig just changed job functions. In his previous job Craig was a DBA in the HR department. But now he’s an application administrator in the manufacturing department. If Craig’s DBA access is not revoked during the job change process, he’ll have lingering DBA permissions.

Could Craig shut down a database or cause even worse damage if he wanted to? Yes. Would Craig do that? You never know. People do some funky stuff.

The Job Change Access Review

Enter the job change access review, a key process for avoiding lingering elevated permissions.

How do you define the job change?

Each firm has its own criteria. It could be a department change, or a job core change. The employee might get a new manager, or move to a new location. It really depends on your particular situation.

But when your chosen criteria are met (and your IAG system sets off the appropriate alerts), it’s time to schedule an access review for that person.
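The trigger logic itself can be simple. As a minimal sketch (the field names here are hypothetical, not from any particular IAG product), a job change can be detected by diffing two snapshots of an identity record:

```python
# Hypothetical sketch: detect a job change by comparing identity-record snapshots.
# Which fields count as a "job change" is each firm's own policy decision.
TRIGGER_FIELDS = ("department", "job_code", "manager", "location")

def job_change_triggers(before, after, fields=TRIGGER_FIELDS):
    """Return the list of trigger fields that changed between two snapshots."""
    return [f for f in fields if before.get(f) != after.get(f)]

def needs_access_review(before, after):
    """A non-empty trigger list means an access review should be scheduled."""
    return len(job_change_triggers(before, after)) > 0

# Example: Craig moves from HR (DBA) to manufacturing (application admin).
craig_before = {"department": "HR", "job_code": "DBA",
                "manager": "Ann", "location": "HQ"}
craig_after = {"department": "Manufacturing", "job_code": "AppAdmin",
               "manager": "Raj", "location": "HQ"}

print(job_change_triggers(craig_before, craig_after))
# ['department', 'job_code', 'manager']
```

A real IAG system would feed these trigger events into its alerting and review-scheduling workflow; the point is that the comparison itself is configurable policy, not magic.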

If an employee moves from finance to manufacturing, the manufacturing manager might look at the access of the person now reporting to him and say: “Hey, why does he have this general ledger access? He doesn’t need this!”

The manager will then give the order to revoke the new employee’s general ledger access.

Elevated Access Review

The job change scenario is a great start, and is perfect for employees with standard-level access to applications. But for privileged access, why wait for somebody to change positions? 

Enter the Elevated Access Review. This is a periodic review for domain admins, and is typically scheduled more frequently than quarterly. Monthly or even weekly are appropriate. Again it depends on your organization.

How do you review elevated access? Here are a few scenarios you could choose from:

1. Manager Review

The employee’s immediate manager is responsible for reviewing elevated permissions, ensuring each direct report has appropriate access for his or her job function.

2. Application Owner Review

Some organizations feel the manager may not know what access his direct reports should have, and so defer to the application owner. For example, if employee A has financial application access, the owner of the financial application should review employee A’s access to his applications and have the power to allow or revoke that access.

3. Two-Step Scenario

You might want to combine one and two above using the two-step scenario. First, the manager could say “yes, my employee should have access to the financial application.”  Then there’s an additional sign-off from the application owner. He could say “Ok, her manager said she should have access to this financial app, but as the app owner I don’t think it’s required.” He has the higher level say-so, and he can revoke her access.

It’s entirely up to you which scenario you choose for elevated access approval.
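As a rough sketch of the two-step scenario (the function and its return values are illustrative, not any vendor's API), the decision logic might look like this:

```python
# Hypothetical sketch of the two-step elevated access review:
# the manager certifies first, then the application owner has the final say.
def two_step_review(manager_approves: bool, app_owner_approves: bool) -> str:
    """Return 'keep' only if both reviewers certify the entitlement;
    a rejection at either step means the access is revoked."""
    if not manager_approves:
        return "revoke"   # manager says the access is not needed
    if not app_owner_approves:
        return "revoke"   # app owner overrides the manager's approval
    return "keep"

print(two_step_review(True, True))    # keep
print(two_step_review(True, False))   # revoke: the owner has the final say
print(two_step_review(False, True))   # revoke: never reaches the owner
```

The ordering encodes the point made above: the application owner's sign-off sits after the manager's, so the owner can veto an approval but a manager's rejection already ends the review.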

Conclusion

So far we’ve covered rules and processes for ensuring your employees have access to the appropriate technology resources, including job change triggers and regular reviews for those with elevated access.

In the next post in this series we’ll cover role-based access reviews.

This was cross-posted from the Identropy blog. 

Copyright 2010 Respective Author at Infosec Island
Operation Onymous Challenges Tor to Strengthen Its Security
https://www.infosecisland.com/blogview/24093-Operation-Onymous-Challenges-Tor-to-Strengthen-Its-Security-.html
Wed, 19 Nov 2014 11:22:19 -0600
By: David Bisson

Earlier in November, Europol, the FBI and the Department of Homeland Security coordinated a global sting against the “Dark Web” drug trade.

Codenamed “Operation Onymous,” the international legal effort arrested 17 people and shut down a number of drug and contraband Internet underground websites, including Topix, Cloud 9, Black Market and the infamous Silk Road 2.0.

In all, the operation seized more than 400 websites with the “.onion” domain, which belongs to the anonymity service Tor that grants users access to public networks without requiring the forfeiture of their privacy.

How the domains were found currently remains a mystery—a degree of uncertainty that has many Tor users worried about the security of the service’s anonymity shield.

To understand the full implications of this takedown, a little background on Tor is helpful.

AN ONION SURFS THE WEB

Tor, otherwise known as “The Onion Routing” program, was a project originally designed by the U.S. Naval Research Laboratory. Its intention was to protect governmental communications by safeguarding interlocutors’ identity.

Today, Tor has expanded into the private sector. All kinds of users, from cyber drug lords to journalists and dissidents who wish to conceal their online activities from repressive governments, allegedly employ the anonymizing service.

Put simply, onion routing encrypts users’ data that is sent through the web in multiple layers, thereby mimicking the layers of an onion, and transmits user traffic through several different computers.

The service’s functionality depends on a unique infrastructure of middle relays, bridges and end relays. Any user can supply a middle router from the comfort of their own home and not fear retaliation from law enforcement. Bridges go a level deeper, acting as private relays that are protected from those who wish to block users’ IP addresses.

End relays, by contrast, are the final relays in a chain of connections and, as such, are often targeted by law enforcement and copyright holders.
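The layering idea can be illustrated with a toy sketch. Real Tor negotiates a separate key with each relay on the circuit and uses real ciphers (not XOR, which stands in here purely for readability); the point is only that the sender wraps the message once per relay and each relay peels exactly one layer:

```python
# Toy illustration of onion routing's layered encryption.
# Repeating-key XOR is a stand-in for real cryptography, NOT secure.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Keys shared with each relay on the chosen circuit (entry -> middle -> exit).
circuit = [b"entry-key", b"middle-key", b"exit-key"]

def wrap(message: bytes, keys) -> bytes:
    """Sender side: encrypt for the exit relay first, so the entry
    relay's layer ends up outermost, like the skin of an onion."""
    for key in reversed(keys):
        message = xor(message, key)
    return message

def peel(onion: bytes, key: bytes) -> bytes:
    """Relay side: remove one layer (XOR is its own inverse)."""
    return xor(onion, key)

onion = wrap(b"hello tor", circuit)
for key in circuit:           # each relay removes exactly one layer
    onion = peel(onion, key)
print(onion)  # b'hello tor'
```

Because each relay holds only its own key, it can remove only its own layer: the entry relay learns who is talking but not what is said, and the exit relay learns the payload but not who sent it.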

OPERATION ONYMOUS: THE DARK SIDE OF TOR

Many have celebrated Tor for its Browser Bundle package, which does not require users to download any software, and for its multi-language interface.

Additionally, human rights advocates approve of the service because of its roundabout accessibility in states that censor the web. If network firewalls block users from accessing Tor’s website, even in states like China and Iran, users can send a message to a particular email address, from which a reply message will be sent to them with installation instructions.

These benefits to users notwithstanding, Operation Onymous has revealed that Tor – once thought to be impenetrable – can successfully be infiltrated by government agencies.

To some, including Craig Young, a security researcher at Tripwire, this does not come as much of a surprise:

“The FBI has generally demonstrated in recent years that they can and will go after cybercriminals operating in the relative anonymity of the TOR network. Although the legality of some of the law enforcement tools has been called into question at times, there is no denying the effectiveness with which US law enforcement has been able to identify and shutdown illegal services provided over the dark web.”

Meanwhile, others, including those who help run the Tor Project, are still trying to figure out how law enforcement agencies located and took control of so many hidden services.

In a letter posted to users, the editors of the Tor blog propose a number of possible attack vectors that may have been used in the takedown, such as operational security shortcomings on the part of the affected websites, SQLi attack and Bitcoin deanonymization.

Regardless of the method of exploit, the fact that the international community broke into Tor is, in a larger sense, a testament to the dangers of the service’s growing popularity.

“Until recently, Tor was mainly utilized by the technically savvy and security communities,” said Valerie Thomas, Principal Consultant at Securicon. “Now that Tor is widely known, it has caught the attention of several organizations and federal agencies.”

The level of anonymity afforded by Tor, when exercised in a networked society, constitutes an unacceptable threat to those charged with defending national security. This explains why the NSA has a surveillance program called X-Keyscore that collects information on people who have used or been invited to install anonymizing services, such as Tor.

But many users are unaware of these risks to their anonymity and privacy, with most assuming that their use of Tor’s services is enough to protect them online.

According to Chris Czub, Security Research Engineer at Duo Security, that’s just not the case. “The major issue with Tor is that it can’t protect people from operational security or software error,” he explains. “This makes lay-users feel a sense of security and privacy that isn’t necessarily justified.”

The insecurities of Tor are perhaps best evident in our fluid understanding of the “Dark Web,” as John Walker, CTO of Cytelligence, suggests:

“When we use the tagline ‘Dark Web,’ we need to take care that we are not placing our subject in a box that limits its characterization to either this or that. . . .We can conceive of the Dark Web as anything from a closed environment that uses dynamic URL to share information system-to-system with the support of securely encapsulated lines, to a full blown space residing in a public cloud, to an environment of an unwitting company that has allowed unauthorised and illicit hosting to occur.”

These variable manifestations of the Dark Web mean that the same issues that plague the regular web are still issues for Tor. If attackers were to compromise a web app hosted on a Tor service, Czub explains, this could potentially lead to a breach in user data and perhaps even deanonymization.

With this in mind, Tor comes down to the issue of trust and whether users feel their privacy and anonymity are safe in the hands of others.

THE FUTURE OF ONION ROUTING

Operation Onymous, in the words of Young, “is a great example of how 20th century law enforcement tactics and undercover operations are still viable in the 21st century.”

Undoubtedly, the international community’s seizure of 400 hidden websites has rocked Tor users and advocates of web anonymity.

Even so, that doesn’t mean Tor is out for the count. In fact, those who maintain the Tor Project can learn from this experience to make its networks stronger and more secure.

As the service is open-source, one recommendation is to periodically host bug bounty competitions. Lamar Bailey, Director of Security Research and Development for Tripwire, is a strong proponent of this idea:

“It’s obvious from the recent takedowns that TOR users are very aware they’re targets for law enforcement. Starting a bug bounty program is an interesting counter measure. If issues can be fixed before they are exploited by law enforcement, it will help keep their users’ privacy more secure.”

Another option is for Tor to continue to partner with popular websites, such as Facebook, to make it easier for users to access the sites they love. This could lead to more users installing Tor, which would translate into additional bridges and relays, thereby making the service more secure for all.

Ultimately, it is the role of Tor’s users and admins to learn from Operation Onymous. As noted by Claus Houmann Cramon, an information security curator and librarian, “There shouldn’t be any need to be an OPSEC expert to be able to have a reasonable expectation of security and privacy online. We as information security experts need to build our devices and software securely by default. Once we have, we need to enforce this to prevent future attacks.”

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island
Centralization: The Hidden Trap
https://www.infosecisland.com/blogview/24092-Centralization-The-Hidden-Trap.html
Wed, 19 Nov 2014 10:57:18 -0600

Everything is about efficiency and economies of scale nowadays. That's all we seem to care about. We build vast power generation plants and happily pay the electrical resistance price to push energy across great distances. We establish large central natural gas pipelines that carry most of the gas that is eventually distributed to our homes and factories. And we establish giant data centers that hold and process enormous amounts of our private and business information; information that, if lost or altered, could produce immediate adverse impacts on our everyday lives.

Centralization like this has obvious benefits. It allows us to provide more products and services while employing fewer people. It allows us to build and maintain fewer facilities and less infrastructure while keeping our service levels high. It is simply more efficient and cost effective. But the "cost" that is more "effective" here is purely rated in dollars. How about the hidden "costs" in these systems that nobody seems to talk about?

What I am referring to here is the vulnerability centralization brings to any system. It is great to pay less for electricity and to avoid some of the local blackouts we used to experience, but how many power plants and transmission towers would an enemy have to take out to cripple the whole grid? How many pipeline segments and pumping stations would an enemy have to destroy to widely interrupt gas delivery? And how many data centers would an enemy need to compromise to gain access to the bulk of our important records? The answer to these questions is: not as many as yesterday, and the number becomes smaller every year.

However, I am not advocating eschewing efficiency and economies of scale; they make life in this overcrowded world better for everyone. What I am saying is that we need to realize the dangers we are putting ourselves in and make plans and infrastructure alterations to cope with attacks and disasters when they come. These kinds of systems need to have built-in redundancies and effective disaster recovery plans if we are to avoid crisis.

Common wisdom tells us that you shouldn't put all your eggs in one basket, and Murphy's Law tells us that anything that can go wrong eventually will go wrong. Let's remember these gems of wisdom. That way our progeny cannot say of us: "Those who ignore history are doomed to repeat it."

Thanks to John Davis for this post.

This was cross-posted from the MSI State of Security blog.

Copyright 2010 Respective Author at Infosec Island
Launching in 2015: A Certificate Authority to Encrypt the Entire Web
https://www.infosecisland.com/blogview/24091-Launching-in-2015-A-Certificate-Authority-to-Encrypt-the-Entire-Web.html
Tue, 18 Nov 2014 11:23:57 -0600

Today EFF is pleased to announce Let’s Encrypt, a new certificate authority (CA) initiative that we have put together with Mozilla, Cisco, Akamai, Identrust, and researchers at the University of Michigan that aims to clear the remaining roadblocks to transition the Web from HTTP to HTTPS.

Although the HTTP protocol has been hugely successful, it is inherently insecure. Whenever you use an HTTP website, you are always vulnerable to problems, including account hijacking and identity theft; surveillance and tracking by governments, companies, and both in concert; injection of malicious scripts into pages; and censorship that targets specific keywords or specific pages on sites. The HTTPS protocol, though it is not yet flawless, is a vast improvement on all of these fronts, and we need to move to a future where every website is HTTPS by default.

With a launch scheduled for summer 2015, the Let’s Encrypt CA will automatically issue and manage free certificates for any website that needs them. Switching a webserver from HTTP to HTTPS with this CA will be as easy as issuing one command, or clicking one button.

The biggest obstacle to HTTPS deployment has been the complexity, bureaucracy, and cost of the certificates that HTTPS requires. We’re all familiar with the warnings and error messages produced by misconfigured certificates. These warnings are a hint that HTTPS (and other uses of TLS/SSL) is dependent on a horrifyingly complex and often structurally dysfunctional bureaucracy for authentication.

 

[Image: example certificate warning] Let's Encrypt will eliminate most kinds of erroneous certificate warnings

 

The need to obtain, install, and manage certificates from that bureaucracy is the largest reason that sites keep using HTTP instead of HTTPS. In our tests, it typically takes a web developer 1-3 hours to enable encryption for the first time. The Let’s Encrypt project is aiming to fix that by reducing setup time to 20-30 seconds. You can help test and hack on the developer preview of our Let's Encrypt agent software or watch a video of it in action here:

Let’s Encrypt will employ a number of new technologies to manage secure automated verification of domains and issuance of certificates. We will use a protocol we’re developing called ACME between web servers and the CA, which includes support for new and stronger forms of domain validation. We will also employ Internet-wide datasets of certificates, such as EFF’s own Decentralized SSL Observatory, the University of Michigan’s scans.io, and Google's Certificate Transparency logs, to make higher-security decisions about when a certificate is safe to issue.

The Let’s Encrypt CA will be operated by a new non-profit organization called the Internet Security Research Group (ISRG). EFF helped to put together this initiative with Mozilla and the University of Michigan, and it has been joined for launch by partners including Cisco, Akamai, and Identrust.

The core team working on the Let's Encrypt CA and agent software includes James Kasten, Seth Schoen, and Peter Eckersley at EFF; Josh Aas, Richard Barnes, Kevin Dick and Eric Rescorla at Mozilla; and Alex Halderman and James Kasten at the University of Michigan.

This was cross-posted from EFF's DeepLinks blog. 

Copyright 2010 Respective Author at Infosec Island
Cyber Threats in 2015: New Attack Vectors, More Severe Incidents
https://www.infosecisland.com/blogview/24090-Cyber-Threats-in-2015-New-Attack-Vectors-More-Severe-Incidents.html
Tue, 18 Nov 2014 11:03:47 -0600

One year ago today, Target was gearing up for Black Friday sales and projecting a strong end to the year. That was the company’s primary focus. The same could be said for Neiman Marcus and Home Depot. And no one had even heard of Heartbleed or Shellshock yet.

Needless to say, much has changed in the last year.

If 2014 ends up going down in the history books as the “Year of the Cyberattack,” then what does 2015 have in store for network administrators? We’ve already started to see the predictions roll in, the first coming from the report, “The Invisible Becomes Visible,” by Trend Micro.

The report paints the new network security threat landscape as becoming much more broad and diverse than it has ever been, evolving beyond the advanced persistent threats (APTs) and targeted attacks that have been the favorite weapon of hackers.

Trend Micro CTO Raimund Genes told InfoSecurity that cyberattack tools now require less expertise to use and don’t cost as much. He listed “botnets for hire … downloadable tools such as password sniffers, brute-force and cryptanalysis hacking programs … [and] routing protocols analysis” as just a few of hackers’ new favorites.

Given these new threats, how can network administrators shore up their network security for 2015 and beyond?

The ‘Three-Legged Stool’ of Network Security

As network administrators build out their network security infrastructure, it’s best to focus on the so-called “three-legged stool” approach – prevention, detection and response. Network security cannot be limited to simply installing prevention measures and hoping for the best. Why? Because there is no one universal, surefire way to prevent an attack, especially as attackers diversify and escalate their efforts.

Even if network administrators are cautious to the point where they assume their network could be hacked at any minute, some endpoints could still be exploited. Or, employees might not follow network security protocol.

In the event that these prevention measures are not entirely successful, organizations need to have a plan, and that means putting in place strong detection and response protocols – these are the two other “legs” in the stool. What do they look like in practice?

In the case of VPN management, central management capabilities within the technology provide network administrators with a single view of all remote access endpoints, allowing them to quickly launch a response when an attack is detected, often by deprovisioning the vulnerable device.

With these three elements working in tandem, network administrators will be prepared and armed for any threat 2015 might bring to their network security.

This was cross-posted from the VPN HAUS blog. 

Copyright 2010 Respective Author at Infosec Island
MSSP Client Onboarding – A Critical Process!
https://www.infosecisland.com/blogview/24088-MSSP-Client-Onboarding--A-Critical-Process.html
Mon, 17 Nov 2014 10:55:54 -0600

Many MSSP relationships are doomed at the on-boarding stage, when the organization first becomes a customer. Given how critical the first 2-8 weeks of your MSSP partnership are, let’s explore it a bit.

Here are a few focus areas to note (this, BTW, assumes that both sides are in full agreement about the scope of services and can quote from the SOW if woken up at 3AM):

  • Technology deployment: unless MSSP sensors are deployed and are able to capture logs, flows, packets, etc, you don’t yet have a monitoring capability. Making sure that your devices log – and sending logs to the MSSP sensor – is central to this (this also implies that you are in agreement on what log messages they need for their analysis – yes, I am talking about you, authentication success messages :-))
  • Access methods and credential sharing: extra-critical for device management contracts, no amount of SLA negotiation will help your partner apply changes faster if they do not have the passwords (this also implies that you log all remote access events by the MSSP personnel and then send these logs to …. oops!)
  • Context information transfer: lists of assets (and, especially, assets considered critical by the customer), security devices (whether managed by the MSSP or not), network diagrams, etc. all make a nice information bundle to share with the MSSP partner
  • Contacts and escalation trees: critical alerts are great, but not if the only person whose phone number was given to the MSSP is on a 3-week Caribbean vacation… Escalation paths and multiple current contacts are a must.
  • Process synchronization: now for the fun part: your risk assessment (maybe) and incident response (likely) processes may now be “jointly run” with your MSSP, but have you clarified the touch points, dependencies and information handoffs?
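As a rough illustration, the focus areas above can be tracked as a simple readiness checklist. This is a hypothetical sketch; the item names are my own shorthand, not part of any standard SOW or MSSP tooling:

```python
# Hypothetical on-boarding checklist mirroring the five focus areas above.
# False = still incomplete at the start of the engagement.
ONBOARDING_ITEMS = {
    "technology_deployment": False,   # sensors deployed, devices logging to them
    "access_credentials": False,      # passwords shared for managed devices
    "context_transfer": False,        # asset lists, diagrams, critical assets
    "escalation_tree": False,         # multiple current contacts on file
    "process_sync": False,            # IR touch points and handoffs agreed
}

def onboarding_gaps(items):
    """Return the focus areas that are still incomplete, sorted by name."""
    return sorted(name for name, done in items.items() if not done)

# Example: only technology deployment and contacts are done so far.
status = dict(ONBOARDING_ITEMS, technology_deployment=True, escalation_tree=True)
print(onboarding_gaps(status))
# → ['access_credentials', 'context_transfer', 'process_sync']
```

Until the list comes back empty, the monitoring service is not really operational, whatever the contract start date says.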

If you, the MSSP client, fail to follow through on these, the chance of success is severely diminished. Now, as my research on MSSPs progresses, the amount of sad hilarity I am encountering piles on – and you don’t want to be part of that! For example, an MSSP asks a client: “To improve our alert triage, can we please get the list of your most critical assets?” The client response? “Damn, we’d like to know that too!” When asked for their incident response plan, another client sheepishly responded that they don’t have one yet, but can we please create it together – that is, only if it doesn’t cost extra…. BTW, if your MSSP never asked you about your IR plans during on-boarding, run, run, run (it is much better to actually walk through an incident scenario together with your MSSP at this stage).

In another case, a client asked an MSSP “to monitor for policy violations.” When asked for a copy of their most recent security policy, the client responded that it had not been created yet. On the other hand, a sneaky client once scheduled a pentest of their network during the MSSP onboarding period – but after the sensors were already operational. You can easily imagine the painful conversations that transpired when the MSSP failed to alert them…. Note that all of the above examples and quotes are fictitious, NOT based on real clients and are entirely made up (which is the same as fictitious anyway, right? Just wanted to make sure!)

Overall, our recent poll of MSSP clients indicated that most wished they’d spent more time on-boarding their MSSPs. Expect things to be very much in flux for at least several weeks – your MSSP should ask a lot of questions, and so should you! While your boss may be tempted by the promises of fast service implementation, longer on-boarding often means better service for the next year. Of course, not every MSSP engagement starts with a 12-week hardcore consulting project involving 4 “top shelf” consultants, but such a timeline for a large, complex monitoring and management effort is not at all offensive. In fact, one quality MSSP told me that they can deploy the service much faster than it takes their clients to actually fulfill their end of the bargain (share asset info, contacts, deploy sensors, tweak the existing processes, etc).

This was cross-posted from the Gartner blog. 

Copyright 2010 Respective Author at Infosec Island]]>
The Arrogance of the US Nuclear Power Industry - We Don't Want to Look at Everything https://www.infosecisland.com/blogview/24087-The-Arrogance-of-the-US-Nuclear-Power-Industry-We-Dont-Want-to-Look-at-Everything.html https://www.infosecisland.com/blogview/24087-The-Arrogance-of-the-US-Nuclear-Power-Industry-We-Dont-Want-to-Look-at-Everything.html Mon, 17 Nov 2014 10:53:41 -0600 The Nuclear Energy Institute (NEI), in support of the US nuclear utilities, has filed a request for rulemaking with the Nuclear Regulatory Commission (NRC) to modify the nuclear plant cyber security rule (www.nrc.gov, Docket ID NRC-2014-0165). The gist of the draft rulemaking is that NEI and the nuclear utilities feel the NRC is making the industry spend too much money by examining too many of the systems and components in a nuclear power plant.

In today’s environment, with nuclear plants being prime cyber targets, industry should be looking at more, not less. New ICS cyber vulnerabilities that affect control systems, including those used in nuclear power plants, are being identified almost weekly. The Chinese, Russians, Iranians, etc. continue to launch cyber attacks against our infrastructures - nuclear plants are certainly on their list of targets. DHS is holding cleared briefings on Havex and BlackEnergy, which can affect control system HMIs in nuclear plants.

The NEI petition keeps the following in the existing rule – systems and components necessary to:

- “…prevent significant core damage and spent fuel sabotage; or

- Whose failure would cause a reactor scram.”

However, the petition wants to explicitly exclude the following categories in the existing rule:

-“safety-related and important-to-safety functions,

- security functions,

- emergency preparedness functions, including off-site communications,

- and support systems and equipment, which if compromised, would adversely impact safety, security, or emergency preparedness functions.”

The perception is that the nuclear utilities want to reduce cyber security, not increase it. Considering that the categories they want to exclude have already contributed to core melts and nuclear plant scrams, and that there is so much focus on cyber security, why are NEI and the utilities doing this now?

This was cross-posted. 

Copyright 2010 Respective Author at Infosec Island]]>
Tips for Writing Good Security Policies https://www.infosecisland.com/blogview/24085-Tips-for-Writing-Good-Security-Policies.html https://www.infosecisland.com/blogview/24085-Tips-for-Writing-Good-Security-Policies.html Thu, 13 Nov 2014 13:58:19 -0600 Almost all organizations dread writing security policies. When I ask people why this process is so intimidating, the answer I get most often is that the task just seems overwhelming and they don’t know where to start. But this chore does not have to be as onerous or difficult as most people think. The key is pre-planning and taking one step at a time.

First you should outline all the policies you are going to need for your particular organization. Now this step itself is what I think intimidates people most. How are they supposed to ensure that they have all the policies they should have, without going overboard and burdening the organization with too many or overly restrictive policies?

There are a few steps you can take to answer these questions:

  • Examine existing information security policies used by other, similar organizations and open source information security policy templates such as those available at SANS. You can find these easily online. However, you should resist simply copying such policies and adopting them as your own. Just use them for ideas. Every organization is unique and security policies should always reflect the culture of the organization and be pertinent, usable and enforceable across the board.
  • In reality, you should have information security policies for all of the business processes, facilities and equipment used by the organization. A good way to find out what these are is to look at the organization’s business impact analysis (BIA). This most valuable of risk management studies will include all essential business processes and equipment needed to maintain business continuity. If the organization does not have a current BIA, you may have to interview personnel from all of the different business departments to get this information. 
  • If the organization is subject to information security or privacy regulation, such as financial institutions or health care concerns, you can easily download all of the information security policies mandated by these regulations and ensure that you include them in the organization’s security policy. 
  • You should also familiarize yourself with available information security guidance such as ISO 27002, NIST SP 800-53, the Critical Security Controls for Effective Cyber Defense, etc. This guidance will give you a pool of available security controls that you can apply to fit your particular security needs and organizational culture.

Once you have the outline of your security needs in front of you, it is time to start writing. You should begin with broad-stroke, high-level policies first and then add detail as you go along. Remember that information security “policy” really includes policies, standards, guidelines and procedures. I’ve found it a very good idea to write “policy” in just that order.

Remember to constantly refer back to your outline and to consult with the business departments and users as you go along. It will take some adjustments and rewrites to make your policy complete and useable. Once you reach that stage, however, it is just a matter of keeping your policy current. Review and amend your security policy regularly to ensure it remains useable and enforceable. That way you won’t have to go through the whole process again!

 

Thanks to John Davis for this post.

This was cross-posted from the MSI State of Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
How Can ICS Cyber Security Risk be Quantified and What Does it Mean to Aurora https://www.infosecisland.com/blogview/24084-How-Can-ICS-Cyber-Security-Risk-be-Quantified-and-What-Does-it-Mean-to-Aurora.html https://www.infosecisland.com/blogview/24084-How-Can-ICS-Cyber-Security-Risk-be-Quantified-and-What-Does-it-Mean-to-Aurora.html Thu, 13 Nov 2014 13:55:28 -0600 I will be giving a lecture on ICS cyber security risk at the Fraunhofer Institute December 2nd in Germany. In preparation for the lecture, I was looking into the recent HAVEX and BlackEnergy malware attacks and how they can affect ICS cyber risk. Risk is defined as frequency times consequence. There is little information on frequency of ICS cyber attacks.
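To make the definition concrete, here is a minimal sketch of the frequency-times-consequence formula; the figures below are illustrative assumptions, not real ICS incident data:

```python
# Minimal sketch of the risk formula above: risk = frequency x consequence.
def annualized_risk(frequency_per_year, consequence_usd):
    """Annualized loss expectancy for a single threat scenario."""
    return frequency_per_year * consequence_usd

# Hypothetical example: an attack estimated at once every 10 years
# (frequency 0.1/year) with a $2M consequence per occurrence.
print(annualized_risk(0.1, 2_000_000))  # → 200000.0
```

The point of the article stands out immediately in this form: without defensible values for frequency, the left-hand factor is essentially a guess, and the whole risk number inherits that uncertainty.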

The next issue is how to define consequence. HAVEX and BlackEnergy have been targeting selected ICS vendor HMIs that could give the attackers remote access. Even though the purpose of HAVEX and BlackEnergy appears to be exfiltrating information, there is nothing to stop the attackers from taking other actions. (Stuxnet was initially thought to be only about exfiltrating information.) It is possible that attackers could log in and send commands to the computer. Once your computer is owned, there's not much the attacker can't do.

The Aurora event can be initiated by the remote closing and reopening of breakers from the SCADA HMI. If the attackers “own” the HMIs, there are avenues for initiating Aurora events. Aurora has yet to be adequately mitigated by the utility industry. Moreover, much of the classified information on Aurora has been made public by DHS. Because the information on Aurora is public and there may be unauthorized access to ICS HMIs, I would consider this situation a significant risk to our critical infrastructures.

This was cross-posted from the Unfettered blog. 

Copyright 2010 Respective Author at Infosec Island]]>
How to Steal Data From an Airgapped Computer Using FM Radio Waves https://www.infosecisland.com/blogview/24083-How-to-Steal-Data-From-an-Airgapped-Computer-Using-FM-Radio-Waves-.html https://www.infosecisland.com/blogview/24083-How-to-Steal-Data-From-an-Airgapped-Computer-Using-FM-Radio-Waves-.html Wed, 12 Nov 2014 15:07:52 -0600 By: Graham Cluley

More and more organisations today have some airgapped computers, physically isolated from other systems with no Internet connection to the outside world or other networks inside their company.

Security teams may have disconnected these machines from other networks in order to better protect them, and the data they have access to, from Internet attacks and hackers.

Of course, a computer which can’t be reached by other computers is going to be a lot harder to attack than one which is permanently plugged into the net. But that doesn’t mean it’s impossible.

Take, for instance, the case of the Stuxnet worm, which reared its ugly head in 2010. Stuxnet is thought to have caused serious damage to centrifuges at an Iranian uranium enrichment facility after infecting systems via a USB flash drive and a cocktail of Windows vulnerabilities.

Someone brought an infected USB stick into the Natanz facility and plugged it into a computer – allowing it to spread and activate its payload.

And it’s not just Iran. In the years since, we have heard of other power plants taken offline after being hit by USB-aware malware spread via sneakernet.

So, we accept that although it may be more difficult to infect isolated airgapped computers, it isn’t impossible.

But what about exfiltrating data from computers which have no connection with the outside world?

Researchers from Ben-Gurion University in Israel think they have found a way to do it, hiding data in radio emissions surreptitiously broadcast via a computer’s video display unit, and picking up the signals on nearby mobile phones.

And, to prove their point, they have released a YouTube video, demonstrating their proof-of-concept attack in action:

 

In the video, which has no sound, the researchers first demonstrate that the targeted computer has no network or Internet connection.

 

Next to it is an Android smartphone, again with no network connection, that is running special software designed to receive and interpret radio signals via its FM receiver.

Proof-of-concept malware, dubbed “AirHopper,” running on the isolated computer ingeniously transmits sensitive information (such as keystrokes) in the form of FM radio signals by manipulating the video display adaptor.

Meanwhile, AirHopper’s receiver code is running on a nearby smartphone.

“With appropriate software, compatible radio signals can be produced by a compromised computer, utilizing the electromagnetic radiation associated with the video display adapter. This combination, of a transmitter with a widely used mobile receiver, creates a potential covert channel that is not being monitored by ordinary security instrumentation.”

As the researchers revealed in their white paper, the phone receiving the data can be in another room.

Now, you may think that if AirHopper is fiddling with the targeted computer’s screen, this could be noticed by any operator in front of the device. However, the researchers say they have devised a number of techniques to disguise any visual clues that data is being transmitted, like waiting until the monitor is turned off, waiting until a screensaver kicks in, or determining (as a screensaver does) that there has been no user interaction for a certain period of time.

It’s all quite ingenious—and although I have explained before how high frequency sound can be used to exfiltrate data from an airgapped computer, this new method could work even if a PC’s speaker has been detached.

No sound on a computer you can live with, but removing monitors seems impractical.

Of course, it’s important that no-one should panic. The technique is elaborate, and at the moment—as far as we can tell—only exists within research laboratories.

It’s important to understand the various steps that have to be taken to exfiltrate data from an airgapped computer.

Firstly, malware has to be introduced to the isolated PC—not a simple task in itself, and a potential hurdle that may prove impossible if proper defences are in place.

Secondly, a mobile device carrying the receiver software needs to be in close proximity to the targeted computer (this would require either an accomplice, or infection of an employee’s mobile device with the malware).

The data then has to be transmitted from the mobile phone itself, back to the attackers.

Finally, this may not be the most efficient way to steal a large amount of data. The AirHopper experiment showed that data could be transmitted from targeted isolated computers to mobile devices up to 7 metres (23 feet) away, at a rate of 13-60 bytes per second. That’s equivalent to less than half a tweet per second.
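A quick back-of-envelope calculation based on the quoted 13-60 bytes-per-second figures shows what that rate means in practice (the 1 MB target size is an arbitrary example, not from the paper):

```python
# How long would exfiltrating a given amount of data take at the
# best-case (60 B/s) and worst-case (13 B/s) AirHopper rates?
def exfil_hours(num_bytes, rate_bytes_per_sec):
    """Hours needed to transmit num_bytes at the given rate."""
    return num_bytes / rate_bytes_per_sec / 3600

one_mb = 1_000_000
print(round(exfil_hours(one_mb, 60), 1))  # → 4.6 hours at the best-case rate
print(round(exfil_hours(one_mb, 13), 1))  # → 21.4 hours at the worst-case rate
```

In other words, a single overnight or weekend window is plenty for a megabyte of keystrokes or credentials, which is exactly the scenario described below.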

Transmission rate

Despite that, it’s still easy to imagine that a determined hacker who has gone to such lengths would be happy to wait for a sizeable amount of data to be transmitted, perhaps as the isolated computers are left unattended overnight or at weekends.

If this all sounds like too much of an effort, think again: the researchers’ paper says that although complex, the attack isn’t beyond the capability of modern attackers:

“The chain of attack is rather complicated, but is not beyond the level of skill and effort employed in modern Advanced Persistent Threats (APTs)”

Which leads us to what you should do about it, and there is a familiar piece of advice to underline: tightly control who has access to your computers, what software they are able to install on them, and what devices they are permitted to attach.

The AirHopper attack cannot steal any data from your airgapped computers at all, if no-one ever manages to infect them in the first place.

It will be interesting to see if others take this research and devise more methods to counter this type of attack in the future.

This was  cross-posted from Tripwire's The State of Security blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Three Danger Signs I Look for when Scoping Risk Assessments https://www.infosecisland.com/blogview/24082-Three-Danger-Signs-I-Look-for-when-Scoping-Risk-Assessments.html https://www.infosecisland.com/blogview/24082-Three-Danger-Signs-I-Look-for-when-Scoping-Risk-Assessments.html Wed, 12 Nov 2014 09:10:11 -0600 Scoping an enterprise-level risk assessment can be a real guessing game. One of the main problems is that it’s much more difficult and time-consuming to do competent risk assessments of organizations with shoddy, disorganized information security programs than of organizations with complete, well-organized programs. There are many reasons why this is true, but generally it is because attaining accurate information is more difficult and because one must dig more deeply to ascertain the truth. So when I want to quickly judge the state of an organization’s information security program, I look for “danger” signs in three areas.

First, I’ll find out what kinds of network security assessments the organization undertakes. Is external network security assessment limited to vulnerability studies, or are penetration testing and social engineering exercises also performed on occasion? Does the organization also perform regular vulnerability assessments of the internal network? Is internal penetration testing also done? How about software application security testing? Are configuration and network architecture security reviews ever done?

Second, I look to see how complete and well maintained their written information security program is. Does the organization have a complete set of written information security policies that cover all of the business processes, IT processes and equipment used by the organization? Are there detailed network diagrams, inventories and data flow maps in place? Does the organization have written vendor management, incident response and business continuity plans? Are there written procedures in place for all of the above? Are all of these documents updated and refined on a regular basis? 

Third, I’ll look at the organization’s security awareness and training program. Does the organization provide security training to all personnel on a recurring basis? Is this training “real world”? Are security awareness reminders generously provided throughout the year? If asked, will general employees be able to tell you what their information security responsibilities are? Do they know how to keep their work areas, laptops and passwords safe? Do they know how to recognize and resist social engineering tricks like phishing emails? Do they know how to recognize and report a security incident, and do they know their responsibilities in case a disaster of some kind occurs?

I’ve found that if the answer to all of these questions is “yes”, you will have a pretty easy time conducting a thorough risk assessment of the organization in question. All of the information you need will be readily available and employees will be knowledgeable and cooperative. Conversely, I’ve found that if the answer to most (or even some) of these questions is “no”, you are going to have more problems and delays to deal with. And if the answers to all of these questions are “no”, you should really build in plenty of extra time for the assessment. You will need it!
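As a rough illustration of the idea, the scoping effort can be modeled as a function of how many “no” answers the three danger-sign areas produce. The baseline, per-“no” penalty, and counts below are entirely hypothetical; they only demonstrate the shape of the estimate, not any real scoping methodology:

```python
# Hypothetical effort model: each "no" answer from the three danger-sign
# areas adds time on top of a baseline assessment duration.
def assessment_weeks(no_answers, base_weeks=2, penalty_per_no=0.5):
    """Estimate total assessment duration from counts of 'no' answers per area."""
    return base_weeks + penalty_per_no * sum(no_answers.values())

# Example counts of "no" answers in each of the three areas discussed above.
answers = {"network_testing": 3, "written_program": 4, "awareness_training": 2}
print(assessment_weeks(answers))  # → 6.5
```

The exact numbers are invented, but the monotone relationship is the article’s point: every missing control or document translates directly into extra digging during the assessment.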

Thanks to John Davis for this post.

This was cross-posted from the MSI State of Security blog.

Copyright 2010 Respective Author at Infosec Island]]>
ISPs Removing Their Customers' Email Encryption https://www.infosecisland.com/blogview/24081-ISPs-Removing-Their-Customers-Email-Encryption.html https://www.infosecisland.com/blogview/24081-ISPs-Removing-Their-Customers-Email-Encryption.html Tue, 11 Nov 2014 12:46:00 -0600 Recently, Verizon was caught tampering with its customers' web requests to inject a tracking super-cookie. Another network-tampering threat to user safety has come to light from other providers: email encryption downgrade attacks. In recent months, researchers have reported ISPs in the US and Thailand intercepting their customers' data to strip a security flag—called STARTTLS—from email traffic. The STARTTLS flag is an essential security and privacy protection used by an email server to request encryption when talking to another server or client.

By stripping out this flag, these ISPs prevent the email servers from successfully encrypting their conversation, and by default the servers will proceed to send email unencrypted. Some firewalls, including Cisco's PIX/ASA, do this in order to monitor for spam originating from within their network and prevent it from being sent. Unfortunately, this causes collateral damage: the sending server will proceed to transmit plaintext email over the public Internet, where it is subject to eavesdropping and interception.
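As an illustration (a toy sketch, not the actual behavior of any particular firewall or ISP middlebox), the downgrade amounts to rewriting the server's multi-line EHLO reply so the STARTTLS capability is never advertised, leaving the client to fall back to plaintext:

```python
# Toy model of a STARTTLS downgrade: a man-in-the-middle filters the
# capability line out of the SMTP server's EHLO response, so the client
# never learns that encryption is available and proceeds unencrypted.
def strip_starttls(ehlo_response: str) -> str:
    """Remove any STARTTLS capability line from a multi-line EHLO reply."""
    kept = [line for line in ehlo_response.splitlines()
            if "STARTTLS" not in line.upper()]
    return "\r\n".join(kept)

server_reply = "250-mail.example.com\r\n250-STARTTLS\r\n250 SIZE 10240000"
print(strip_starttls(server_reply))
# The rewritten reply contains no STARTTLS line at all.
```

Because the EHLO exchange itself is unauthenticated plaintext, nothing in the protocol lets the client detect that this filtering happened; that is exactly the weakness the article describes.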

This type of STARTTLS stripping attack has mostly gone unnoticed because it tends to be applied to residential networks, where it is uncommon to run an email server. STARTTLS was also relatively uncommon until late 2013, when EFF started rating companies on whether they used it. Since then, many of the biggest email providers have implemented STARTTLS to protect their customers. We continue to strongly encourage all providers to implement STARTTLS for both outbound and inbound email. Google's Safer email transparency report and starttls.info are good resources for checking whether a particular provider does.

Several Standards for Email Encryption

The SMTP protocol, the underpinning of email, was not originally designed with security in mind. But people quickly started using it for everything from shopping lists and love letters to medical advice and investigative reporting, and soon realized their mail needed to be protected from prying eyes. In 1991, Phil Zimmermann implemented PGP, an end-to-end email encryption protocol that is still in use today. Adoption of PGP has been slow because of its highly technical interface and difficult key management. S/MIME, which has properties similar to PGP's, was developed in 1995. And in 2002, STARTTLS for email was defined by RFC 3207.

While PGP and S/MIME are end-to-end encryption, STARTTLS is server-to-server. That means that the body of an email protected with, e.g., PGP can only be read by its intended recipient, while email protected with STARTTLS can be read by the owners of the sending server and the recipient server, plus anyone else who hacks or subpoenas access to those servers. However, STARTTLS has three big advantages: First, it protects important metadata (subject lines and To:/From:/CC: fields) that PGP and S/MIME do not. Second, mail server operators can implement STARTTLS without requiring users to change their behavior at all. And third, a well-configured email server with STARTTLS can provide Forward Secrecy for emails. The two technologies are entirely compatible and reinforce each other. The most secure and private approach is to use PGP or S/MIME with a mail service that uses STARTTLS for server-to-server communication.

There are several weak points in the STARTTLS protocol, however. The first weakness is that the flag indicating that a server supports STARTTLS is not itself encrypted, and is therefore subject to tampering, which can prevent that server from establishing an encrypted connection. That type of tampering is exactly what we see today. EFF is working on a set of improvements to STARTTLS, called STARTTLS Everywhere, that will make server-to-server encryption more robust by requiring encryption for servers that are already known to support it.

It is important that ISPs immediately stop this unauthorized removal of their customers' security measures. ISPs act as trusted gateways to the global Internet and it is a violation of that trust to intercept or modify client traffic, regardless of what protocol their customers are using. It is a double violation when such modification disables security measures their customers use to protect themselves.

This was cross-posted from EFF's DeepLinks blog. 

Copyright 2010 Respective Author at Infosec Island]]>
First Victims of the Stuxnet Worm Revealed https://www.infosecisland.com/blogview/24079-First-Victims-of-the-Stuxnet-Worm-Revealed.html https://www.infosecisland.com/blogview/24079-First-Victims-of-the-Stuxnet-Worm-Revealed.html Tue, 11 Nov 2014 09:24:00 -0600 Kaspersky Lab today announced that after analyzing more than 2,000 Stuxnet files collected over a two-year period, it can identify the first victims of the Stuxnet worm. After Stuxnet was discovered over four years ago as one of the most sophisticated and dangerous malicious programs, Kaspersky Lab researchers can now provide insight into the question: what were the goals of the Stuxnet operation?

Initially security researchers had no doubt that the whole attack had a targeted nature. The code of the Stuxnet worm looked professional and exclusive; there was evidence that extremely expensive zero-day vulnerabilities were used. However, it wasn’t yet known what kind of organizations were attacked first and how the malware ultimately made it right through to the uranium enrichment centrifuges in the particular top secret facilities.

Kaspersky Lab analysis sheds light on these questions. All five of the organizations that were initially attacked are working in the ICS area in Iran, developing ICS or supplying materials and parts. One of the more intriguing organizations was the one attacked fifth, since among other products for industrial automation, it produces uranium enrichment centrifuges. This is precisely the kind of equipment that is believed to be the main target of Stuxnet.

Apparently, the attackers expected that these organizations would exchange data with their clients – such as uranium enrichment facilities – and this would make it possible to get the malware inside these target facilities. The outcome suggests that the plan was indeed successful.

“Analyzing the professional activities of the first organizations to fall victim to Stuxnet gives us a better understanding of how the whole operation was planned. At the end of the day this is an example of a supply-chain attack vector, where the malware is delivered to the target organization indirectly via networks of partners that the target organization may work with,” said Alexander Gostev, chief security expert, Kaspersky Lab.

Kaspersky Lab experts made another interesting discovery: the Stuxnet worm did not only spread via infected USB memory sticks plugged into PCs. That was the initial theory, and it explained how the malware could sneak into a place with no direct Internet connection. However, data gathered while analyzing the very first attack showed that the first worm’s sample (Stuxnet.a) was compiled just hours before it appeared on a PC in the first attacked organization. This tight timetable makes it hard to imagine that an attacker compiled the sample, put it on a USB memory stick and delivered it to the target organization in just a few hours. It is reasonable to assume that in this particular case the people behind Stuxnet used other techniques instead of a USB infection.

The latest technical information about some previously unknown aspects of the Stuxnet attack can be read on Securelist and journalist Kim Zetter’s new book, “Countdown to Zero Day.” The book includes previously undisclosed information about Stuxnet.

Copyright 2010 Respective Author at Infosec Island]]>
7 Security Threats You May Have Overlooked https://www.infosecisland.com/blogview/24078-7-Security-Threats-You-May-Have-Overlooked.html https://www.infosecisland.com/blogview/24078-7-Security-Threats-You-May-Have-Overlooked.html Tue, 11 Nov 2014 09:21:43 -0600 If there’s been a silver lining to the string of devastating cyberattacks against some of the biggest organizations in the world over the last year, it’s that the list of “what not to do” has continued to grow, putting other companies on notice.

If you use a third-party vendor, for example, make sure their networks are just as secure as your own. When there are known security vulnerabilities, reconsider running end-of-life operating systems like Windows XP on your devices.

These are some of the most prominent recent lessons, but there are plenty of other threats to network security lurking just below the surface. And these are the vulnerabilities that attackers will look to exploit. After all, why would they target a well-defended vector when there may be an easier point-of-entry somewhere else? That would be like a burglar trying to break down a locked door, instead of checking first to see if maybe a window was left cracked open.

In today’s business environment, the list of overlooked network security threats is endless. Information security professionals are modern-day gladiators, tasked with defending corporate data and networks against both known and unknown threats, but no matter how skilled they are, there will always be new threats to their networks. Here are seven to think about:

  1. Rogue Employees
  2. Delayed Device Deprovisioning
  3. A Single, Vulnerable Security Vendor
  4. Out-of-Date Software
  5. Failure to Adapt to New Technology
  6. Security Solutions and Policy Misalignment
  7. Shadow IT

Most working environments would be lucky to be vulnerable to only one of these. The reality is, these threats are so pervasive that many information security professionals are bound to face multiple iterations of each, all simultaneously. They’re fighting an ongoing war on several fronts, in which the enemy’s resources are never fully depleted. And in some ways, the enemy continues to gain the upper hand – the average data breach costs about $145 per compromised record, up 9 percent from two years ago.

Yet, it’s not a losing battle. Information security professionals can emerge victorious. The best approach, after uncovering the threats, is to develop and execute a sound approach to network security, as well as enforcement of these policies. Security, flexibility and ease of management all have to work in sync to maximize success. It’s how you train your employees. It’s the technology you choose to adopt. It’s the processes that tie all of your security initiatives together.

So if you’re an information security professional, don’t be afraid to find and eliminate these threats.

Go ahead, be a hero.

Copyright 2010 Respective Author at Infosec Island]]>
Preventing and Recovering From Cybercrime https://www.infosecisland.com/blogview/24077-Preventing-and-Recovering-From-Cybercrime-.html https://www.infosecisland.com/blogview/24077-Preventing-and-Recovering-From-Cybercrime-.html Mon, 10 Nov 2014 11:32:08 -0600 Cybercrime is considered one of the most dangerous threats to the development of any state; it has a serious impact on every aspect of a country's growth. Government entities, non-profit organizations, private companies and citizens are all potential targets of the cyber criminal syndicate.

The “cybercrime industry” operates exactly like a legitimate business working on a global scale, with security researchers estimating the overall losses at billions of dollars each year. Compared with other sectors, it has the capability to react quickly to new business opportunities, benefiting from the global crisis that – in many contexts – caused a significant reduction in spending on information security.

The prevention of cyber criminal activities is the most critical aspect in the fight against cybercrime. It’s mainly based on the concepts of awareness and information sharing. A proper security posture is the best defense against cybercrime. Every single user of technology must be aware of the risks of exposure to cyber threats, and should be educated about the best practices to adopt in order to reduce their “attack surface” and mitigate the risks.

Education and training are essential to create a culture of security that assumes a fundamental role in the workplace. Every member of an organization must be involved in the definition and deployment of a security policy and must be informed on the tactics, techniques and procedures (TTPs) belonging to the cyber criminal ecosystem.

Prevention means securing every resource involved in the business processes, including personnel and IT infrastructure. Every digital asset and network component must be examined through continuous and evolving assessment. Government entities and private companies must cooperate to identify cyber threats and their actions—a challenging task that can only be achieved through information sharing among law enforcement, intelligence agencies and private industry.

Fortunately, like any other phenomenon, criminal activities follow identifiable patterns and trends, more or less strictly. Based on this consideration, it is possible to adopt an efficient prevention strategy by implementing threat intelligence analysis processes.
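The simplest form of threat intelligence analysis is indicator matching: comparing observed events against a feed of known-bad indicators of compromise. A minimal sketch, where every indicator and event below is a hypothetical placeholder (the IPs come from documentation-reserved ranges):

```python
# Hypothetical indicators from a shared threat feed.
bad_ips = {"203.0.113.7", "198.51.100.23"}
bad_hashes = {"d41d8cd98f00b204e9800998ecf8427e"}

# Hypothetical observed events from local logs.
events = [
    {"src_ip": "203.0.113.7", "file_hash": None},
    {"src_ip": "10.0.0.5", "file_hash": "d41d8cd98f00b204e9800998ecf8427e"},
    {"src_ip": "10.0.0.9", "file_hash": None},
]

def match_indicators(events, bad_ips, bad_hashes):
    """Return events matching at least one known indicator of compromise."""
    hits = []
    for event in events:
        if event["src_ip"] in bad_ips or event["file_hash"] in bad_hashes:
            hits.append(event)
    return hits

alerts = match_indicators(events, bad_ips, bad_hashes)
print(len(alerts))  # 2 of the 3 events match known indicators
```

Production systems add context, confidence scores and expiry to indicators, but the underlying pattern-matching idea is the same.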

Security must be addressed with a layered approach, ranging from “security by design” for every digital asset to the use of sophisticated predictive systems for forecasting criminal events.

Additionally, sharing threat information is another fundamental pillar of prevention, allowing organizations and private users to access data related to cyber threats and to the threat actors behind them.

At the last INTERPOL-Europol conference in October, security experts and law enforcement officers highlighted the four fundamentals in combating cybercrime as:

  1. Prevention
  2. Information Exchange
  3. Investigation
  4. Capacity Building

Executive Director of the INTERPOL Global Complex for Innovation (IGCI) Noboru Nakatani and head of Europol’s European Cybercrime Centre (EC3) Troels Oerting closed the conference, acknowledging that the engagement and input from delegates had served to increase understanding and encourage greater interaction between the various sectors involved.


INTERPOL-Europol conference, Singapore, October 2014

In September 2014, Troels Oerting announced the creation of the Joint Cybercrime Action Taskforce (J-CAT) with the following statement, which underscores the necessity of efficient collaboration among the entities involved, not excluding Internet users:

“Today is a good day for those fighting cybercrime in Europe and beyond. For the first time in modern police history a multi-lateral permanent cybercrime taskforce has been established in Europe to coordinate investigations against top cybercriminal networks.

The Joint Cybercrime Action Taskforce will operate from secure offices in Europol’s HQ assisted by experts and analysts from the European Cybercrime Centre. The aim is not purely strategic, but also very operational. The goal is to prevent cybercrime, to disrupt it, catch crooks and seize their illegal profits.

This is a first step in a long walk towards an open, transparent, free but also safe Internet. The goal cannot be reached by law enforcement alone, but will require a consolidated effort from many stakeholders in our global village.”

Prevention activities must be complemented by effective incident response and by a recovery strategy to mitigate the effects of cyber incidents.

Once an incident occurs, it is crucial to restore the operation of the affected organization and its IT systems. Recovery from cybercrime comprises the overall activities associated with repairing and remediating the impacted systems and processes. Typically, recovery includes the restoration of damaged or compromised data and any other IT assets.


Percentage of cost by activity to resolve a cyber attack.

According to the data proposed in the last report issued by the Ponemon Institute, “2014 Global Report on the Cost of Cyber Crime”, recovery is one of the most costly internal activities. On an annualized basis, detection and recovery costs combined account for 53 percent of the total internal activity cost.
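To make that 53 percent figure concrete, here is some illustrative arithmetic. The individual cost numbers below are hypothetical (the Ponemon report only supports the combined detection-plus-recovery share); they are chosen to show how such a breakdown is computed:

```python
# Hypothetical internal activity costs (in $K), chosen so that detection
# and recovery together account for 53% of the total, as in the cited figure.
costs = {
    "detection": 320,
    "recovery": 210,
    "containment": 160,
    "investigation": 150,
    "incident_mgmt": 160,
}

total = sum(costs.values())  # 1000
share = (costs["detection"] + costs["recovery"]) / total
print(f"{share:.0%}")  # 53%
```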

An effective incident response procedure includes the following steps:

  • Identification of the threat agent that hit the infrastructure.
  • Containment of the threat, preventing it from moving laterally within the targeted infrastructure.
  • Forensic investigation to identify the affected systems and how the threat agent penetrated them.
  • Remediation and recovery, restoring the IT infrastructure to production once the forensic investigation is complete.
  • Reporting to senior management, and sharing data on the incident through dedicated platforms that allow rapid exchange of threat data with law enforcement and other companies.
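The steps above form an ordered pipeline: containment must precede recovery, and reporting comes last. A minimal sketch of that ordering, where every stage is a hypothetical placeholder (real implementations would call forensic tooling, ticketing systems and sharing platforms):

```python
# Each stage takes the incident record, enriches it, and passes it on.
def identify(incident):
    incident["threat_agent"] = "unknown-actor"   # hypothetical finding
    return incident

def contain(incident):
    incident["contained"] = True                 # e.g. isolate affected hosts
    return incident

def investigate(incident):
    incident["affected_systems"] = ["web-01"]    # hypothetical finding
    return incident

def recover(incident):
    incident["restored"] = True                  # bring systems back online
    return incident

def report(incident):
    incident["shared"] = True                    # share indicators with peers
    return incident

# Order matters: contain before recover, report last.
PIPELINE = [identify, contain, investigate, recover, report]

def handle(incident):
    """Run every response stage in order on the incident record."""
    for stage in PIPELINE:
        incident = stage(incident)
    return incident

result = handle({"id": "INC-001"})
print(result["contained"] and result["restored"])  # True
```

Encoding the workflow as an explicit pipeline is one way to address the article’s complaint below: manual, ad hoc response is slow, while a codified sequence can be automated and audited.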

Unfortunately, the process described is rarely followed. Up until now, containment and remediation have been primarily manual processes, which makes them slow and inefficient.

We must be conscious that it is nearly impossible to recognize every cyber criminal activity before it affects the targeted entities. For this reason, it is crucial to have a mature approach to cyber security that emphasizes early detection and recovery.

An efficient incident response plan, for example, can improve a system’s resilience to cyber attacks and allow quick recovery from an incident.

The prevention and recovery processes described above must be continuously improved by any entity that operates digital assets or systems exposed to the Internet. Security requires an improvement approach that preserves every link in the security chain.

Never let your guard down, cybercrime never sleeps!

This was cross-posted from Tripwire's The State of Security blog.
