Connecting Bellwether Metrics to the Business
https://www.infosecisland.com/blogview/23889-Connecting-Bellwether-Metrics-to-the-Business-.html
Mon, 28 Jul 2014 14:07:52 -0500
By: Katherine Brocklehurst

In my last post, we discussed the basics of Benchmark and the metrics that make it efficient. In my concluding post, we’ll discuss what Benchmark can teach organizations, the importance of Bellwether metrics and how vulnerability scanning can benefit your business.

WHAT CAN COMPANIES LEARN FROM BENCHMARK?

Benchmark’s key benefits help security professionals:

  • Know how well they’re doing individually in comparison to their peers
  • Align security investments with business goals
  • Optimize security
  • Share concrete evidence with the C-suite and Boards

Did you know that organizations that lead their industry tend to scan more often? There is a natural correlation between scanning and reduced Common Vulnerability Scoring System (CVSS) severity. In other words, simply by scanning more frequently, most of the best organizations see a reduction in the count of their high- and medium-severity CVSS findings, and hence an underlying improvement in their overall security posture.

However, even though scan frequency is a leading indicator for reducing CVSS severity, the C-suite will probably not find CVSS scores and trends useful in the way they view the business. Most CEOs, CIOs and Boards of Directors find a comparison with key competitors far more compelling and informative.

Without a tool like Benchmark, this data just isn’t available. Benchmark provides statistics on vulnerability management, failed login attempts, average host risk score and patching frequency, among many other comparison metrics such as patch management, anti-virus and configuration audit, and it offers an in-depth look at the security performance of industry leaders. By building a competitive comparison, and in some industries illustrating competitive advantage, it helps the C-suite understand security in the context of the business.

WHAT ARE BELLWETHER METRICS?

A Bellwether is an indication or prediction. For example, one could say that college campuses are often “a bellwether of change.” In the financial sector, housing is often considered a bellwether metric. A sustainable and growing housing market indicates an improving economy and vice versa.

Benchmark also supports organizations that want to define and track their own Bellwether metrics, comparing only against their own internal goals and trends. In Benchmark, vulnerability scanning frequency is a Bellwether metric because consistent scanning tends to lower CVSS severity scores. An organization could start with an internal goal to scan every 30 days and track what happens to its CVSS severity trends. It can then compare week over week, month over month, and so on, and ideally these trends will show the Bellwether metric of scan frequency driving an improvement in the overall security posture.
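
As a rough illustration of that kind of internal trend tracking, here is a small Python sketch that tallies high- and medium-severity findings per monthly scan cycle and reports the month-over-month change. The scan-result format and the sample values are hypothetical, not output from Benchmark or any particular scanner.

from collections import Counter

# Hypothetical scan results: (scan month, severity) pairs exported from a scanner.
findings = [
    ("2014-05", "high"), ("2014-05", "medium"), ("2014-05", "medium"),
    ("2014-06", "high"), ("2014-06", "medium"),
    ("2014-07", "medium"),
]

# Count high/medium findings per monthly scan cycle.
per_month = Counter(month for month, sev in findings if sev in ("high", "medium"))

# Report the month-over-month change as a simple trend indicator.
months = sorted(per_month)
for prev, curr in zip(months, months[1:]):
    delta = per_month[curr] - per_month[prev]
    direction = "down" if delta < 0 else "up" if delta > 0 else "flat"
    print(f"{prev} -> {curr}: {per_month[curr]} findings ({direction} {abs(delta)})")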

Other key bellwether metrics for your organization may be:

  • How frequently and thoroughly do you patch?
  • What type of configuration auditing do you do on critical systems?
  • What is your password hygiene policy and how frequently are password changes enforced?
  • How often do you update your AV or IDS/IPS signatures?
  • What percentage of login attempts fail?

WHY ARE AVERAGE HOST RISK SCORE AND AVERAGE DAYS SINCE LAST SCAN GOOD VULNERABILITY MANAGEMENT METRICS?

Some metrics tend to move together, and correlation is the key. Research has shown that if the average days since last scan is low, host risk scores tend to be low as well. A similar connection can be made with failed login attempts: a rise in failed login attempts can be an indicator of brute-force attacks in your environment. Without some indicator of failed login attempts, your organization may be blind to this commonly used breach method.
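
A minimal way to check whether two such metrics actually move together is to compute a correlation coefficient across your hosts. The sketch below does this in plain Python; the per-host numbers are invented for illustration and are not drawn from any real data set.

from math import sqrt

# Hypothetical per-host data: (days since last scan, host risk score).
hosts = [(3, 120), (7, 180), (14, 260), (30, 410), (45, 530), (60, 700)]

days = [d for d, _ in hosts]
scores = [s for _, s in hosts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A coefficient near +1 suggests that scanning less often tracks with higher host risk.
print(f"correlation: {pearson(days, scores):.2f}")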

Let’s take it further. The annual Verizon Data Breach Investigations Report and the Mandiant M-Trends report both show that breaches can happen in seconds to minutes, whereas detection can take weeks to months.

Mandiant’s April 2014 M-Trends report indicates that organizations take an average of 229 days, roughly eight months, to detect a breach. The U.S. Government has determined that in-depth continuous monitoring (often referred to as CM or Continuous Diagnostics and Mitigation [CDM]) decreases the time to detect breaches and can thereby reduce the likelihood of serious damage. Vulnerability scanning, along with other security measures such as security configuration audit and management and log and event management, contributes to the methods used in continuous monitoring.

CDM and data breach damage are connected, and these correlated security metrics are good for organizations to track as they begin to understand, calibrate and trend their security posture.

HOW IMPORTANT IS FREQUENT VULNERABILITY SCANNING?

Frequent vulnerability scanning is incredibly useful in reducing threats within your environment. Key industry standards have noted that frequent vulnerability scanning brings down high CVSS risk scores and significantly aids in preventing breaches and detecting them early.

Many industry standards call for frequent vulnerability scanning. The Council on CyberSecurity’s 20 Critical Security Controls recommends scanning every 7 days for vulnerabilities. Even PCI DSS 3.0 requires regular, frequent vulnerability assessment.

If you’re not scanning for vulnerabilities, what are you doing?

This was cross-posted from Tripwire's The State of Security blog.

The Dilemma of PCI Scoping - Part 1
https://www.infosecisland.com/blogview/23888-The-Dilemma-of-PCI-Scoping-Part-1.html
Mon, 28 Jul 2014 12:00:04 -0500

Based on recent email comments, there are apparently a lot of you out there who really do not like the Open PCI Scoping Toolkit.  I am not sure exactly which post mentioning the Toolkit got you all wound up, but I have definitely hit a nerve.  From the comments in these messages, it is painfully obvious that the reason the SIG failed was that none of us are in agreement about how much risk we are willing to accept.  And that is why no two PCI assessments are ever the same: organizations, and even QSAs from the same QSAC, can have different levels of risk tolerance.

I, too, have to admit that I think the Toolkit needs some work, but it is the best framework we have to start a discussion on the topic.  And that is the problem, the topic.  Until the Toolkit appeared, scoping discussions had no real framework and everyone had their own definitions.  And as I have pointed out before, while there are a lot of people out there that might not know the nuances of the PCI DSS, it seems that everyone “knows” what is in scope and what is out of scope.

As a result, QSAs have found out through the “School of Hard Knocks” that everyone has their own view of scope, and there has been no good guide to explain how or why to draw the line, let alone discuss the topic civilly in some cases.  I view the Toolkit as the bare minimum.  If an organization wants to get even more restrictive and have more categories, great, that is their prerogative.  However, if they want to go less than the Toolkit, in my very humble opinion, they can do it without me.  The bottom line is, regardless of whether you are using the Toolkit or have your own approach, document the definitions of your categories and provide examples so that everyone can understand your rationale, and then discuss the impacts on your organization’s PCI scope.  Without such a document, we are not going to have productive discussions on scope.  That is why I lean toward the Toolkit: it gives me a starting point for a productive discussion.

We seem to all be able to agree on the Category 1 and 3 systems, because those are clear and easy to identify.  Category 1 systems are always in the cardholder data environment (CDE) because they directly process, store or transmit cardholder data or define the CDE and are therefore always in-scope.  Category 3 systems never, ever process, store or transmit cardholder data and are therefore always out of scope.

It’s those pesky Category 2 systems (the ones that connect in some way to/from the CDE) that get everyone’s undies in a bunch.  The group that developed the Toolkit did their best to break them out in ways that made sense but were still understandable and easy to use.  The more that I have thought about it, the more I think they came up with the best compromise. In my opinion, if you start adding any more categories or sub-categories to the existing definitions you will lose almost everyone due to complexity, including security people.  However, I also don’t believe that simplifying Category 2 is an answer either.
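
To make those category definitions concrete, here is a small sketch that encodes the Toolkit’s three top-level categories as a simple classification rule. The attribute names are my own shorthand for illustration, not terms taken from the Toolkit itself, and real scoping decisions involve far more nuance than three booleans.

def pci_category(handles_chd: bool, defines_cde: bool, connects_to_cde: bool) -> int:
    """Rough mapping of a system to the Open PCI Scoping Toolkit's top-level categories."""
    if handles_chd or defines_cde:
        return 1  # processes, stores or transmits CHD, or defines the CDE: always in scope
    if connects_to_cde:
        return 2  # connects to/from the CDE in some way: scope must be assessed
    return 3      # never touches CHD and has no connectivity to the CDE: out of scope

# Example: a monitoring server that polls devices inside the CDE.
print(pci_category(handles_chd=False, defines_cde=False, connects_to_cde=True))  # prints 2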

But if the discussion about Category 2 is tough, the fact that the Toolkit allows Category 3 systems to exist on networks with Category 2 systems sends some security purists right over a cliff.  Their rationale is that Category 3 systems could be easily attacked and would therefore provide a beachhead for compromising Category 2 systems.  While this is true, totally isolating Category 2 systems is not realistic for most organizations because of the ramifications of such a decision.

Why Isolation Is Not An Option

Security purists seem to think isolation of the CDE is the answer.  From an outsourcing perspective, that would provide isolation.  But in my experience, even outsourcing is not as isolated as one would think.  Here is why I think that isolation does not work whether doing it internally or through outsourcing.

Isolation means physically and logically separate directory systems with no trust relationships between the isolated environment and normal production.  I have seen all sorts of technical gymnastics to secure directory systems inside the CDE that can still leave too many holes in firewalls so that the trust relationship can exist.  If you are truly serious about isolation, then you need to have true isolation and that means physically and logically separate directory systems.  This also means duplication of credential management and introducing the possibility of errors when provisioning accounts.

The idea of leveraging your existing solutions for network and application management must be rethought as well.  This means separate security event and information management (SEIM) solutions, separate network management and monitoring, separate application management and monitoring, etc.  I think you get the idea.

Of course separate firewalls, routers, switches, intrusion detection/prevention, load balancers and other infrastructure are also going to be needed.  If you use RADIUS or TACACS+ for authentication, you will have to have separate systems for authentication to the infrastructure as well.  You will also need separate DNS and DHCP servers if you intend to provide those services inside the CDE.  Of course all of this duplicated infrastructure adds to the likelihood that mistakes will be made in configuration changes that could result in a breach of that isolation.

There is no “out of band” or virtual terminal access into your pristine isolated environment.  So you will need to provide separate PCs for operations and network personnel so that they have access to the isolated environment, plus another physically separate system for your organization’s normal work environment.  Internal users with access to cardholder data (CHD) will also be required to have physically separate PCs for accessing the CDE.  This will also mean ensuring security of network switches inside the CDE by using MAC filtering or “sticky” MAC approaches to ensure that only the PCs that should have access to the CDE do have access.  And of course wireless networking is totally out of the question.

But wait, you will also have to invest in some sort of portable media solution so that you can get data from the isolated environment to the “normal” production environment and vice versa.  No connected databases or application integration because that will require holes into and out of the isolated environment.  This is where outsourcing for isolation also comes up short.  But without application and data integration, the economies of scale shrink almost exponentially as more and more data must be manually moved between environments.  This drives the cost of isolation almost through the roof and typically makes isolation too expensive for most organizations.

Various government entities have all tried this approach with mixed results as far as breaches are concerned.  So in practice, the isolation approach will still leave your organization with some amount of risk that must be managed.

So if isolation is not the answer what is the answer?  In Part 2 I’ll discuss what I think works.

This was cross-posted from the PCI Guru blog.

Cyphort Detects Surge in Ad Network Infections, a.k.a. “Malvertising”
https://www.infosecisland.com/blogview/23887-Cyphort-Detects-Surge-in-Ad-Network-Infections-aka-Malvertising.html
Thu, 24 Jul 2014 13:11:32 -0500
By: McEnroe Navaraj, security researcher at Cyphort

Over the July 4th long weekend, we noticed a surge of exploit packs being served from DMO (Destination Marketing Organization) websites through an ad network operated by simpleviewinc.com. Cyphort Labs reached out to Simpleviewinc.com on July 2, but as of today we have not received a response or acknowledgement.

Serving malware and exploits through ad networks has been a common problem in recent years, and threat actors take a special interest in DMO ad networks during summer holidays and long weekends because more users are looking for travel information at those times, providing a large audience for exploitation. The issue is serious enough that the US Senate discussed the hazards of this malware delivery mechanism and its implications for consumer security in a recent report.

With the increasing complexity of ad syndication and dynamic content creation, we anticipate more incidents of infection delivered through ad networks. We strongly encourage ad network providers to take steps to enhance their security monitoring of ad content in order to build a more secure ecosystem for all. If you have any interaction with ad networks or DMO sites, we encourage you to read and share this post, and if you do business with Simpleviewinc.com, whether they respond to us or not (and we hope they do), encourage them to address our findings.

Here’s how the attack works:

Each tourist destination is promoted by a DMO, usually a government organization or a government-subsidized organization. Most of the content on a DMO website is provided by backend providers such as Destination Travel Network (DTN).


[Image: DMO-1]

We analyzed a few of the incidents where malicious ads were injected into DMO websites and other leisure-activity websites. The exploit delivery pattern is common across all the injections. In all of these incidents, we noticed that the actors used a single central server to deliver exploits from their “cluster of domains.” We were able to correlate this pattern with other, non-leisure website infections too. The actors have very good control over various ad networks, and some domains from Italy and the UK also served exploits from the same “cluster of domains.”


[Image: DMO-2]

List of DMOs that served malware around the July 4 holiday weekend:


[Image: Screen Shot 2014-07-21 at 9.18.40 PM]

List of DMOs using Simpleviewinc’s ad servers:


  1. www.seemonterey.com

  2. www.visittucson.org

  3. www.visitmyrtlebeach.com

  4. www.southshorecva.com

  5. www.tourismvictoria.com

  6. www.visitokc.com

  7. www.catchdesmoines.com

  8. www.denver.org

  9. www.fortworth.com

  10. www.gowichita.com

  11. www.maconga.org

  12. www.thisiscleveland.com

  13. www.tourismrichmond.com

  14. www.valleyforge.org

  15. www.visitaggieland.com

  16. www.visitdallas.com

  17. www.visitestespark.com

  18. www.visitgreenvillesc.com

  19. www.visithamiltoncounty.com

  20. www.visitpittsburgh.com

  21. www.visitrichmondva.com

  22. www.visitrochester.com

  23. www.visitsaltlake.com

  24. www.visittucson.org


So it is very likely that a number of the above DMO websites also served the exploits around the same time. Other websites affected in the same infection campaign:


[Image: Screen Shot 2014-07-21 at 9.21.01 PM]

We believe the actors behind these infection sites are from the same group. They share a common infection pattern and their infection chain uses the same servers.


Technical Details:


The exploit pack fingerprints Java/PDF/Flash versions and delivers exploits based on which applications are vulnerable. It delivers multiple exploits for all of the vulnerable applications in an attempt to maximize the chance of infection. It is built from the Nuclear Pack exploit kit.


www.seemonterey.com infection chain:


[Image: DMO-3]

www.visittucson.org infection chain:


[Image: DMO-4]

It infects machines running the following application versions:

  1. JRE 6

  2. JRE 7u17 and less

  3. JRE 7u21

  4. Flash 11.9.900.170

  5. Flash 12.0.0.38

  6. Flash 12.0.0.43

  7. Flash 13.0.0.206

  8. Adobe Reader 8

  9. Adobe Reader 9.3

  10. IE 8/9/10

The vulnerabilities it tries to exploit include:

Java - CVE-2013-2465 and others

SWF - CVE-2014-0515

PDF - CVE-2010-0188

IE     - CVE-2013-2551

The MD5 hashes of the droppers:

  • 1937039ABC019DE0A7AB9FEC2A89AE29

  • E1768CE2A08FD4116A16961E5158E284   (Win32.Cidox)

As of this writing, both of these droppers from the exploit chain are detected by AV vendors.
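
For defenders, one quick way to use indicators like these is to hash suspect files and compare them against the list above. The Python sketch below does that with only the standard library; the directory path is a placeholder, and the hash set simply mirrors the two MD5s listed in this post.

import hashlib
from pathlib import Path

# MD5s of the droppers listed above.
KNOWN_BAD_MD5 = {
    "1937039abc019de0a7ab9fec2a89ae29",
    "e1768ce2a08fd4116a16961e5158e284",  # Win32.Cidox
}

def md5_of(path: Path) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder directory to sweep; point this at a download or quarantine folder.
for p in Path("./suspect_files").rglob("*"):
    if p.is_file() and md5_of(p) in KNOWN_BAD_MD5:
        print(f"IOC match: {p}")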

The sample dropped through www.visittucson.org (MD5: E1768CE2A08FD4116A16961E5158E284) is a rootkit that overwrites the MBR and NTFS loader. Once executed, it overwrites part of the NTFS loader, reboots the machine and loads a driver to control various processes. We see behavior similar to that described in this blog. The payload decodes a “shellcode” from its resource section into memory and executes it.


[Image: DMO-5]

 

It is decoded using the following operation:

[Image: DMO-6]

This “shellcode” uses the process hollowing technique to create another process that carries out the malicious activity.

00410B37   50              PUSH EAX                       ; UNICODE "C:\sample\exe.exe"
00410B38   53              PUSH EBX
00410B39   FF95 2CFEFFFF   CALL DWORD PTR SS:[EBP-1D4]    ; kernel32.CreateProcessW
….
00410B58   FFB5 48FEFFFF   PUSH DWORD PTR SS:[EBP-1B8]
00410B5E   FF95 3CFEFFFF   CALL DWORD PTR SS:[EBP-1C4]    ; kernel32.GetThreadContext

It copies data to the remote process using WriteProcessMemory:

[Image: DMO-7]

It copies itself into the suspended process using WriteProcessMemory:

00410C25   FF95 54FEFFFF   CALL DWORD PTR SS:[EBP-1AC]             ; kernel32.WriteProcessMemory

It then uses SetThreadContext and ResumeThread to start the new process.


[Image: DMO-8]

The MD5 hash of the second process/file is b0ee70b4c5f46fd61aa7d5e35feac801. It overwrites the MBR/NTFS loader.


[Images: DMO-9, DMO-10]

Again: with the increasing complexity of ad syndication and dynamic content creation, we anticipate more incidents of infection delivered through ad networks. We strongly encourage ad network providers to take steps to enhance their security monitoring of ad content in order to build a more secure ecosystem for all.

I would like to thank Abhijit Mohanta and other Cyphort Labs colleagues for helping me analyze this campaign.

This was cross-posted from the Cyphort blog.

Israeli Military and Hamas trade Hacking Attacks
https://www.infosecisland.com/blogview/23886-Israeli-Military-and-Hamas-trade-Hacking-Attacks.html
Thu, 24 Jul 2014 12:30:31 -0500

As Israeli ground forces push into Gaza to remove militant Islamic troops and missiles, hacking teams from both sides ply their trades. Multiple site hacks, denial-of-service attacks and defacements are being reported by both camps in this struggle for Israel’s right to exist.

Israeli Attacks on Hamas

According to The Jerusalem Post, Israeli hackers have either taken down or blocked numerous Hamas and Palestinian websites. Though several seem to be back up now, these included:

·         Qudstv.ps, Hamas’ official website

·         Felesteen.ps

·         Gaza Alan

·         Shehab.ps

Visiting Al Qassam’s website today will reveal the screenshot taken above with the message, “This website is subjected to intense attacks“.

Hamas and Hacktivist attacks on Israel

Pro-Hamas hackers and militant Islamic hacktivist groups have targeted several military and civilian Israeli sites. Earlier this month the IDF’s Twitter page was compromised by the Syrian Electronic Army.

This week, part of Israel’s channel 10 TV was hacked by the Hamas military wing causing some viewers to see a pro-Hamas message as seen below:

http://www.youtube.com/watch?v=qD1R2v3CAyQ

And on the lighter side, Domino’s Israel’s Facebook page was hacked last week with posts claiming, “Today will strike deep in Israel, Tel Aviv, Haifa, Jerusalem, Ashkelon, Ashdod more than 2000 rockets. We’ll start at 7. Counting back towards the end of Israel … Be warned!”

Which brought an Israeli response of, “Hey, please reserve a missile for me with jalapenos, green olives, extra cheese, and mushrooms. You have my address. Tell the delivery boy to activate the alarm when it is arriving, so I know to put my pants on.“

It would seem that even Domino’s pizza took it light-heartedly; after they regained control of the page, they apparently posted a picture of a Hamas militant with the caption, “You cannot defeat….The Israeli hunger for pizza!”

Pro-Hamas hackers haven’t just focused on Israeli targets; a synagogue in Philadelphia was also recently hacked.

Conclusion

The violence and loss of life as Israel struggles against militant Islamic aggression is a tragedy. Further dividing of Israel will not bring peace; all it has done so far is provide new launching areas for militant rockets. Providing a “Two State” solution is also not the answer, as it was already tried and failed. The British Mandate for Palestine and Trans-Jordan provided a “Two State” solution (with Arab leaders’ blessing) in the 1920s. Under this deal, Palestine was divided between Jews and Muslims, with Arabs getting 80% of the land and Israel only 20%.

This did not bring peace, as militant Islamists demand even more land from Israel, even though 80% of the territory was given for Muslim Palestinians to live in. The truth is, they do not want Israel’s land; they do not want Israel to exist.

As  Golda Meir once said, “Peace will come when the Arabs will love their children more than they hate us.”

This was cross-posted from the Cyber Arms blog.

Security and the Internet of Things
https://www.infosecisland.com/blogview/23885-Security-and-the-Internet-of-Things.html
Thu, 24 Jul 2014 00:27:00 -0500

Cyber-attacks continue to become more innovative and sophisticated. Unfortunately, while organizations are developing new security mechanisms, cybercriminals are cultivating new techniques to circumvent them. Alongside the growth in the sophistication of cyber-attacks, our dependence on the Internet and technology has grown as well.

The Internet of Things

The day when practically every electronic device will be connected to the Internet is not that far away. According to Cisco, there are approximately 15 billion connected devices worldwide, and Dell forecasts that we may see upwards of 70 billion connected devices by 2020 -- meaning 10 devices per human, talking to each other and sending out messages.

The Internet of Things (IoT) sensation holds the potential to empower and advance nearly every individual and business. In today’s global society, we’re always on and always receiving data from a variety of sources. This is the heart of the IoT: everything is connected and speaking to everything else. Warming our cars on a cold morning, regulating the thermostats in our homes and determining what your husband took from the refrigerator during his midnight snack will all be carried out from mobile devices.

Moving forward, IoT devices will help businesses track remote assets and integrate them into new and existing processes. They will also provide real-time information on asset status, location and functionality that will improve asset utilization and productivity and aid decision making. But the security threats of the IoT are broad and potentially devastating, and organizations must ensure that technology for both consumers and companies adheres to high standards of safety and security.

The IoT at Home…and at Work

With the growth of the IoT, we’re seeing the creation of tremendous opportunities for enterprises to develop new services and products that will offer increased convenience and satisfaction to their consumers. The rise of objects that connect themselves to the Internet is releasing an outpouring of new opportunities for data gathering, predictive analytics and IT automation.

The rapid uptake of Bring Your Own Device (BYOD) is increasing an already high demand for mobile applications for both work and home. To meet this increased demand, developers working under intense pressure, and on paper-thin profit margins, are sacrificing security and thorough testing in favor of speed of delivery and the lowest cost. This will result in poor-quality products that can be more easily hijacked by criminals or hacktivists.

The information that individuals store on mobile devices already makes them attractive targets for hackers, specifically “for fun” hackers, and criminals. At the same time the amount of apps people download to their personal and work devices will continue to grow. But do the apps access more information than necessary and perform as expected? Worst case scenario, apps can be infected with malware that steals the user’s information – tens of thousands of smartphones are thought to be infected with one particular type of malware alone. This will only worsen as hackers and malware providers switch their attention to the hyper-connected landscape of mobile devices.

With Potential Comes Risk

As I’ve said, the IoT has great potential for the consumer as well as for businesses. While the IoT is still in its infancy, we have a chance to build in new approaches to security if we start preparing now. Security teams should take the initiative to research security best practices to secure these emerging devices, and be prepared to update their security policies as even more interconnected devices make their way onto enterprise networks.

Enterprises with the appropriate expertise, leadership, policy and strategy in place will be agile enough to respond to the inevitable security lapses. Those who do not closely monitor the growth of the IoT may find themselves on the outside looking in.

About the Author: Steve Durbin is managing director of the Information Security Forum (ISF). His main areas of focus include the emerging security threat landscape, cyber security, BYOD, the cloud, and social media across both the corporate and personal environments. Previously, he was senior vice president at Gartner. 

EBS Encryption: Enhancing the Amazon Web Services Offering with Key Management
https://www.infosecisland.com/blogview/23882-EBS-Encryption-Enhancing-the-Amazon-Web-Services-Offering-with-Key-Management.html
Wed, 23 Jul 2014 16:22:31 -0500

Amazon Web Services is making great strides in securing its customers' stored data, their “data at rest.” We have seen two recent announcements:

  • Amazon announced S3 Server-Side Encryption with Customer-provided keys (which goes by the not-quite catchy acronym SSE-C). Previously, a user could tell S3 to encrypt data as soon as the data is stored, but Amazon managed the encryption keys and they were never exposed to the customer. With this new feature, users can specify those keys and Amazon will use them when “touching” the data, but will not keep the keys.
  • In another blog post, Amazon announced that Elastic Block Store (EBS) volumes can now be encrypted, but cryptographic keys are managed by Amazon.

Both of these announcements make it easier to encrypt data at rest, improving the security of cloud applications. But something is clearly missing from the second announcement, and the press was quick to point it out: in two words – key management.

Many, perhaps most, AWS customers use EBS volumes to store sensitive data: databases, image files, what have you. With Amazon's new solution, customers will be able to encrypt this data. But the encryption keys will be persisted somewhere on Amazon's infrastructure. This creates a couple of irresistible targets for hackers.

  • One point worthy of attention is a bit of a doomsday scenario. The AWS key storage is a “single point of secrets” holding keys for all customers, for the duration of their disk volumes' lifetime. If someone, a rogue AWS insider or a hacker, could get access to the key storage, the results would be catastrophic. They would be able to decrypt any of the encrypted EBS volumes, of any Amazon customer!
  • Another, less sweeping but perhaps more likely scenario, is if the attacker obtains credentials to a customer account, and is able to snapshot an EBS disk and attach it to a new EC2 instance. Despite the EBS disk being encrypted, once attached to an EC2 instance it can be copied out in the plain: the instance will be automatically provisioned by AWS with the decryption keys. This scenario can be surprising, since many customers believe that encryption should protect them from such a simple attack.

In contrast, when customer-side key management is supported, the customer can decide how to protect their data encryption keys based on customer-specific risk assessment. If needed, different keys can be protected differently. For example, some highly sensitive keys may be kept off-line when not in use.

Customers can decide whether to use hardware-based key management solutions (Hardware Security Modules, HSMs) or whether they prefer pure software approaches that rely on cryptographic techniques to secure the keys. Also some interesting new mathematical approaches, such as Homomorphic Key Management or Split Key Encryption, are becoming available. Customers can establish access control policies that fit the way they are doing business. Keys can be farmed out to specific users, user groups or indeed to customers of Amazon's customers.

Amazon Web Services have clearly gotten the message that customers require more control of their encryption keys, and added this capability into the S3 infrastructure. In fact the S3 solution is extremely easy to use, and can be integrated with key management solutions in a matter of minutes. So we can definitely hope AWS will move in the same direction with EBS.
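
As a rough illustration of how little code customer-provided keys require on S3, here is a hedged sketch using the boto3 SDK. The bucket and object names are placeholders, and in a real deployment the key would be retrieved from (and protected by) your own key management solution rather than generated inline.

import os
import boto3

s3 = boto3.client("s3")

# A 256-bit data key the customer controls; shown here as a throwaway random value.
customer_key = os.urandom(32)

# Placeholder bucket and object names.
bucket, key = "example-bucket", "reports/q2.csv"

# Upload: S3 encrypts server-side with the supplied key and does not retain the key.
s3.put_object(
    Bucket=bucket, Key=key, Body=b"sensitive,data\n",
    SSECustomerAlgorithm="AES256", SSECustomerKey=customer_key,
)

# Download: the same key must be supplied again, or the object cannot be read.
obj = s3.get_object(
    Bucket=bucket, Key=key,
    SSECustomerAlgorithm="AES256", SSECustomerKey=customer_key,
)
print(obj["Body"].read())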

Full disk encryption is becoming more and more popular in cloud settings, and some of the smaller clouds like Google Compute Engine have supported it for a while. Amazon is a bit late to this game and should lead the way in enabling customer control of encryption keys. Some customers will never move sensitive data to the cloud. At the other extreme, there are cloud customers who would prefer to leave everything to the cloud provider, even at the cost of reduced security and loss of control. But surveys show that the majority of security-aware customers are somewhere in the middle: they would like to get the benefits of a well-managed cloud infrastructure, along with the flexibility of managing their own data security.

White House Website Includes Unique Non-Cookie Tracker, Conflicts With Privacy Policy
https://www.infosecisland.com/blogview/23884-White-House-Website-Includes-Unique-Non-Cookie-Tracker-Conflicts-With-Privacy-Policy.html
Wed, 23 Jul 2014 13:59:12 -0500

Yesterday, ProPublica reported on new research by a team at KU Leuven and Princeton on canvas fingerprinting. One of the most intrusive users of the technology is a company called AddThis, which is employing it in “shadowing visitors to thousands of top websites, from WhiteHouse.gov to YouPorn.com.” Canvas fingerprinting allows sites to get even more identifying information than we had previously warned about with our Panopticlick fingerprinting experiment.

Canvas fingerprinting exploits the fact that different browsers have slightly different algorithms, parameters, and hardware for turning text into pictures on your screen (or, more specifically, into an HTML5 canvas object that the tracker can read). According to the research by Gunes Acar, et al., AddThis draws a hidden image containing the unusual phrase “Cwm fjordbank glyphs vext quiz” and observes the way the pixels turn out differently on different systems.

While YouPorn quickly removed AddThis after the report was published, the White House website still contains AddThis code.  Some White House pages obviously include the AddThis button, such as the White House Blog, and a link to the AddThis privacy policy.

Other pages, like the White House’s own Privacy Policy, load javascript from AddThis, but do not otherwise indicate that AddThis is present. To pick the most ironic example, if you go to the page for the White House policy for third-party cookies, it loads the “addthis_widget.js.” This script, in turn, references “core143.js,” which has a “canvas” function and the tell-tale “Cwm fjordbank glyphs vext quiz” phrase.

The White House cookie policy notes that, “as of April 18, 2014, content or functionality from the following third parties may be present on some WhiteHouse.gov pages,” listing AddThis.  While it does not identify which pages, we have yet to find one without AddThis, whether open or hidden.

On the same page that is loading the AddThis scripts, the White House third-party cookie policy makes a promise: “We do not knowingly use third-party tools that place a multi-session cookie prior to the user interacting with the tool.” There is no indication that the White House knew about this function before yesterday's report.

Nevertheless, the canvas fingerprint goes against the White House policy. It may not be a traditional cookie, but it fills the same function as a multi-session cookie, allowing the tracking of unique computers across the web. While the AddThis privacy policy does not mention the canvas fingerprint by that name, it notes that it sometimes places “web beacons” on pages, which would load prior to the user interacting with the AddThis button.

The main distinction is that the canvas fingerprint can’t be blocked by cookie management techniques, or erased with your other cookies. This is inconsistent with the White House’s promise that “Visitors can control aspects of website measurement and customization technologies used on WhiteHouse.gov.” The website’s How To instructions are no help, because they are limited to traditional cookies and flash cookies.  AddThis’ opt out is no more helpful, as it only prevents targeting, not tracking: “The opt-out cookie tells us not to use your information for delivering relevant online advertisements.”

The White House is far from alone. According to the researchers, over 5,000 sites include the canvas fingerprinting, with the vast majority from AddThis.

What You Can Do to Protect Yourself From Canvas Fingerprinting

Fortunately, some solutions are available. You can block trackers like AddThis using an algorithmic tool such as EFF’s Privacy Badger, or a list-based one like Disconnect. Or if you're a fairly knowledgeable user and are willing to do some extra work, you can use a manually controlled script blocker such as NoScript to only run JavaScript from domains you trust.

This was cross-posted from EFF's DeepLinks blog.
Crypto Locker Down, But NOT Out
https://www.infosecisland.com/blogview/23883-Crypto-Locker-Down-But-NOT-Out.html
Wed, 23 Jul 2014 10:32:54 -0500

So, the US govt and law enforcement claim to have managed the disruption of CryptoLocker. Officials are touting it either as a total victory or, more realistically, as a slowdown of the criminals leveraging the malware and botnets.

Even as the govt was touting their takedown, threat intelligence companies around the world (including MSI) were already noticing that the attackers were mutating, adapting and re-building a new platform to continue their attacks. The attackers involved aren’t likely to stay down for long, especially given how lucrative the crypto locker malware has been. Many estimates exist for the number of infections and the amount of payments received, but most of them are, in a word, staggering. With that much money on the line, you can expect a return of the nastiness and you can expect it rather quickly.

Takedowns are effective for short term management of specific threats, and they make great PR, but they do little, in most cases, to actually turn the tide. The criminals, who often escape prosecution or real penalties, usually just re-focus and rebuild. 

This is just another reminder that even older malware remains a profit center. Mutations, variants and enhancements can turn old problems like Zeus, back into new problems. Expect that with crypto locker and its ilk. This is not a problem that is likely to go away soon and not a problem that a simple takedown can solve.

This was cross-posted from the MSI's State of Security blog.

The Unisys Ponemon study – Is It Actually Relevant to ICSs
https://www.infosecisland.com/blogview/23881-The-Unisys-Ponemon-study--Is-It-Actually-Relevant-to-ICSs.html
Tue, 22 Jul 2014 11:00:18 -0500

Unisys sponsored a report by the Ponemon Institute: “Critical Infrastructure: Security Preparedness and Maturity”. The front of the report shows control systems in a process facility. Consequently, the implication is that this report addresses control systems.

It is important to understand the validity of the observations and conclusions, as this report is being widely quoted. The report states that 57% of the respondents felt that ICS/SCADA were more at risk, and 67% claim that they had cyber compromises over the past year involving either confidential information or disruption to operations. Yet from Pie Chart 2, at most 20% of the respondents were directly responsible for control systems. Many of the questions that were asked do not make sense for ICSs, and it is also not clear to me how a number of the questions can have answers that total more than 100%. It is also not clear how many of the SCADA/ICS networks were even being monitored. If there were disruptions to Operations, the impacts should be obvious, with potential physical damage.

To me, the real question is whether these are Corporate network issues rather than control system issues. Some of the questions strongly imply that control system networks have been connected to Corporate networks. For example, why ask questions about e-mail servers? The way some of the questions were asked leads me to believe that the IT organizations may be responsible for some of the control system compromises. Certainly the issue of “maturity” needs to be asked in a different way – how mature are these Corporate organizations in what they are doing TO the ICSs.

This is the second Ponemon report dealing with critical infrastructure that did not have significant ICS input. Consequently, I have discussed my concerns with Larry Ponemon about the need for a report on ICS that has significant ICS involvement and asks the appropriate questions for ICS cyber security.

This was cross-posted from the Unfettered blog.

Black Hat Conference Talk on How to Break Tor Cancelled
https://www.infosecisland.com/blogview/23880-Black-Hat-Conference-Talk-on-How-to-Break-Tor-Cancelled.html
Tue, 22 Jul 2014 10:52:01 -0500

Organizers of the Black Hat security conference that's scheduled to take place next month in Las Vegas announced that a presentation detailing how the Tor network's users can be de-anonymized has been cancelled.

Michael McCord and Alexander Volynkin, both researchers at Carnegie Mellon University's CERT, were scheduled to give a talk titled "You Don't Have to be the NSA to Break Tor: Deanonymizing Users on a Budget." The abstract of the presentation, which has been removed from the official Black Hat website, revealed that the researchers have found a way to break the anonymity network by "exploiting fundamental flaws in Tor design and implementation." The experts claim to be able to identify the IP addresses of Tor users and even uncover the location of hidden services with an investment of less than $3,000.

"In our analysis, we've discovered that a persistent adversary with a handful of powerful servers and a couple gigabit links can de-anonymize hundreds of thousands Tor clients and thousands of hidden services within a couple of months," the researchers said in the abstract of their presentation.

However, according to the event's organizers, they had to remove the briefing from their schedule after the legal counsel for the Software Engineering Institute (SEI) and Carnegie Mellon University informed them that "Mr. Volynkin will not be able to speak at the conference since the materials that he would be speaking about have not yet been approved by CMU/SEI for public release."

Roger Dingledine, one of the original developers of the Tor Project, clarified on Monday that the organization doesn't have anything to do with the decision to cancel the talk.

"We did not ask Black Hat or CERT to cancel the talk. We did (and still do) have questions for the presenter and for CERT about some aspects of the research, but we had no idea the talk would be pulled before the announcement was made," Dingledine said. "In response to our questions, we were informally shown some materials. We never received slides or any description of what would be presented in the talk itself beyond what was available on the Black Hat Webpage."

Dingledine also took the opportunity to encourage researchers who find vulnerabilities in Tor to disclose them responsibly.

"Researchers who have told us about bugs in the past have found us pretty helpful in fixing issues, and generally positive to work with," he explained. 

About the Author: Eduard Kovacs is a reporter for SecurityWeek

Keeping it Simple - Part 1
https://www.infosecisland.com/blogview/23878-Keeping-it-Simple-Part-1.html
Mon, 21 Jul 2014 13:16:27 -0500

Apparently, I struck a nerve with small business people trying to comply with PCI.  In an ideal world, most merchants would be filling out SAQ A, but we do not live in an ideal world.  As a result, I have collected some ideas on how merchants can make their lives easier.

Do Not Store Cardholder Data

It sounds simple, but it amazes me how many small businesses are storing cardholder data (CHD).  In most cases, it is not like they wanted to store CHD, but the people in charge just did not ask vendors that one key question, “Does your solution store cardholder data?”  If a vendor answers “Yes”, then you should continue your search for a solution that does not store CHD.

Even when the question is asked of vendors, you may not get a clear answer.  That is not necessarily because the vendor is trying to hide something, but more likely because the salespeople have never been asked this question before.  As a result, do not be surprised if the initial answer is, “I’ll have to get back to you on that.”  If you never get an answer or the answer is not clear, then you should move on to a different vendor that does provide answers to such questions.

If your organization cannot find a solution that does not store CHD, then at least you are going into a solution with your eyes open.  However, in today’s payment processing application environment, most vendors are doing all that they can to avoid storing CHD.  If the vendors you are looking at for solutions are still storing CHD, then you may need to get creative to avoid storing CHD.

That said, even merchants that only use points of interaction (POI) such as card terminals can also end up with CHD being stored.  I have encountered a number of POIs that were delivered from the processor configured such that the POI was storing full PAN.  Apparently, some processors feel it is the responsibility of the merchant to configure the POI securely even though no such instructions were provided indicating that fact.  As a result, you should contact your processor and have them walk you through the configuration of the POI to ensure that it is not storing the PAN or any other sensitive information.

Then there are the smartphone and tablet solutions from Square, Intuit and a whole host of other mobile solution providers.  While the PCI SSC has indicated that such solutions will never be considered PCI compliant, mobile POIs continue to proliferate with small businesses.  The problem with most of these solutions is when a card will not work through the swipe/dip and the CHD is manually keyed into the device.  It is at that point when the smartphone/tablet keyboard logger software captures the CHD and it will remain in the device until it is overwritten which can be three to six months down the road.  In the case of EMV, the device can capture the PIN if it is entered through the screen thanks to the built in keyboard logger.  As a result, most EMV solutions use a signature and not a PIN.  The reason Square, Intuit and the like get away with peddling these non-compliant POI solutions is that they also serve as the merchant’s acquiring bank and are accepting the risk of the merchant using a non-compliant POI.

The bottom line here is that merchants need to understand these risks and then make appropriate decisions about which risks they are willing to accept in regards to the explicit or implicit storage of CHD.

Mobile Payment Processing

The key thing to know about these solutions is that the PCI Security Standards Council has publicly stated that these solutions will never be considered PCI compliant.  Yes, you heard that right; they will never be PCI compliant.  That is mostly because of the PCI PTS standard regarding the security of the point of interaction (POI) for PIN entry and the fact that smartphones and tablets have built in keyboard loggers that record everything entered into these devices.  There are secure solutions such as the Verifone PAYware line of products.  However, these products only use the mobile device as a display.  No cardholder data is allowed to be entered into the mobile device.

So why are these solutions even available if they are not PCI compliant?  It is because a number of the card brands have invested in the companies producing these solutions.  As a result, the card brands have a vested interest in allowing them to exist.  And since the companies offering the solutions are also acting as the acquiring bank for the merchant, they explicitly accept the risk that these solutions present.  That is the beauty of the PCI standards, if a merchant’s acquiring bank approves of something, then the merchant is allowed to do it.  However, very few merchants using these solutions understand the risk these solutions present to them.

First is the risk presented by the swipe/dip device.  Some of these devices encrypt the data at the swipe/dip but not all.  As a result, you should ask the organization if their swipe/dip device encrypts the information.  If it does encrypt, then even if the smartphone/tablet comes in contact with the information, it cannot read it.  If it is not encrypted, I would move on to the next mobile payments solution provider.

The second risk presented is the smartphone/tablet keyboard logger.  This feature is what allows your mobile device to guess what you want to type, what songs you like and a whole host of other conveniences.  However, these keyboard loggers also remember anything typed into them, such as primary account numbers (PAN), driver’s license numbers and any other sensitive information they come into contact with.  They can remember this information as long as it is not overwritten in the device’s memory.  Depending on how much memory a device has, this can be anywhere from weeks to months.  One study a few years back found that information could be found on mobile devices for as long as six months, and an average of three months.

While encrypting the data at the swipe/dip will remove the risk that the keyboard logger has CHD, if you manually key the PAN into the device, then the keyboard logger will record it.  As a result, if you are having a high failure rate with swiping/dipping cards, you will have a lot of PANs contained in your device.

The bottom line is that if you ever lose your mobile device or trade it in, you risk exposing CHD if you do not properly wipe the device.  It is not that these solutions should not be used, but the purveyors of these solutions should be more forthcoming about the risks of using them so that merchants can make informed decisions beyond the cheap interchange fees.

There are more things merchants can do to keep it simple and I will discuss those topics in a future post.

This was cross-posted from the PCI Guru blog.

The Five Stages of Vulnerability Management
https://www.infosecisland.com/blogview/23877-The-Five-Stages-of-Vulnerability-Management-.html
Mon, 21 Jul 2014 09:12:37 -0500
By: Irfahn Khimji

The key to having a good information security program within your organization is having a good vulnerability management program. Most, if not all, regulatory policies and information security frameworks advise having a strong vulnerability management program as one of the first things an organization should do when building its information security program.

The Council on Cyber Security specifically lists it as number four in the Top 20 Critical Security Controls.

Over the years, I’ve seen a variety of different vulnerability management programs and worked with many companies with various levels of maturation in their VM programs. This post will outline the five stages of maturity based on the Capability Maturity Model (CMM) and give you an idea as to how to take your organization to the next level of maturity.

WHAT IS THE CAPABILITY MATURITY MODEL?

The CMM is a model that helps develop and refine a process in an incremental and definable method. More information on the model can be found here. The five stages of the CMM are:

 

[Image: CMMI Staged Approach]

Source: http://www.tutorialspoint.com/cmmi/cmmi-maturity-levels.htm

STAGE 1: INITIAL

In the ‘Initial’ stage of a vulnerability management program there are generally minimal processes and procedures, if any. Vulnerability scans are done by a third-party vendor as part of a penetration test or an external scan. These scans are typically done one to four times per year, at the request of an auditor or to meet a regulatory requirement.

The vendor who does the audit will provide a report of the vulnerabilities within the organization. The organization will typically remediate any ‘Critical’ or ‘High’ risks to ensure that they remain compliant. The remaining information gets filed away once a passing grade has been given.

I recently wrote a post on how security is not just a check box anymore. If you are still in this stage, you are a prime target for an attacker. It would be wise to begin maturing your program if you haven’t started already.

STAGE 2: MANAGED

In the ‘Managed’ stage of a vulnerability management program, vulnerability scanning is brought in-house. The organization defines a set of procedures for vulnerability scanning, purchases a vulnerability management solution and begins to scan on a weekly or monthly basis. Unauthenticated vulnerability scans are run, and the security administrators begin to see vulnerabilities from an exterior perspective.

Most organizations I see in this stage do not have support from their upper management, leaving them with a limited budget. This results in purchasing a relatively cheap solution or using a free open source vulnerability scanner. While the lower-end solutions do provide a basic scan, they are limited in the reliability of their data collection, business context and automation.

Using a lower-end solution could prove to be problematic in a couple of different ways. The first is in the accuracy and prioritization of your vulnerability reporting. If you begin to send reports to your system administrators with a bunch of false positives, you will immediately lose their trust. They, like everyone else these days, are very busy and want to make sure they are maximizing their time effectively. Having the trust of the system administrators is a crucial component of an effective vulnerability management program.

The second problem is that even if you verify that the vulnerabilities are in fact real, how do you prioritize which ones should be fixed first? Most solutions offer a High, Medium, Low rating or a 1-10 score. With the limited resources system administrators have, they realistically can only fix a few vulnerabilities at a time. How do they know which 10 is the most urgent 10, or which High is the highest priority? Without appropriate prioritization, this can be a daunting task.
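
One common way out of that bind is to rank findings by more than the raw score, for example by weighting severity with an asset-criticality value the business assigns. The short sketch below illustrates the idea; the hosts, scores and criticality values are made up for the example.

# Hypothetical findings: (host, CVSS score, asset criticality 1-5 assigned by the business).
findings = [
    ("web-dmz-01", 9.8, 5),
    ("dev-test-04", 9.8, 1),
    ("db-core-02", 7.5, 5),
    ("kiosk-17",   7.5, 2),
]

# Weight raw severity by how much the asset matters; highest combined risk first.
ranked = sorted(findings, key=lambda f: f[1] * f[2], reverse=True)

for host, cvss, crit in ranked:
    print(f"{host:12s} cvss={cvss:4.1f} criticality={crit} risk={cvss * crit:.1f}")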

Luckily, this section isn’t all doom and gloom! If you’re looking for a great way to start a reliable and actionable vulnerability management program, we at Tripwire offer a small version of our Enterprise level scanner for free. Check out Secure Scan if you haven’t already. It’s not a free trial, but a free license for up to 100 IPs!

STAGE 3: DEFINED

In the ‘Defined’ stage of a vulnerability management program the processes and procedures are well-characterized and understood throughout the organization. The information security team has support from their executive management, as well as trust from the system administrators.

At this point, the information security team has proven that the vulnerability management solution they chose is reliable and safe for scanning on the organization’s network. Authenticated vulnerability scans are run on a daily or weekly basis with audience-specific reports being delivered to various levels in the organization. The system administrators receive specific vulnerability reports, while management receives vulnerability risk trending reports.

Vulnerability management state data is shared with the rest of the information security ecosystem to provide actionable intelligence for the information security team.

The majority of organizations I’ve seen are somewhere between the ‘Managed’ and the ‘Defined’ stage. As I noted above, a very common problem is gaining the trust of the system administrators. If the solution that was initially chosen did not meet the requirements of the organization, it can be very difficult to regain their trust.

STAGE 4:  QUANTITATIVELY MANAGED

In the ‘Quantitatively Managed’ stage of a vulnerability management program, the specific attributes of the program are quantifiable and metrics are provided to the management team. The following is a summary of the automation metrics recommended by the Council on Cyber Security:

  1. What is the percentage of the organization’s business systems that have not recently been scanned by the organization’s vulnerability management system?
  2. What is the average vulnerability score of each of the organization’s business systems?
  3. What is the total vulnerability score of each of the organization’s business systems?
  4. How long does it take, on average, to completely deploy operating system software updates to a business system?
  5. How long does it take, on average, to completely deploy application software updates to a business system?

These metrics can be viewed holistically as an organization or broken down by the various business units to see which business units are reducing their risk and which are lagging behind.
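
As a rough illustration of how those numbers might be produced, here is a minimal sketch that rolls a hypothetical scanner export up into the first three metrics. The CSV layout, file name and 30-day window are assumptions made for the example; they are not part of the Council on Cyber Security guidance, and any mature vulnerability management product will report these figures natively.

    # Hypothetical export: hostname,business_unit,last_scan_date(YYYY-MM-DD),total_cvss
    cutoff=$(date -d '30 days ago' +%Y-%m-%d)    # GNU date, as found on most Linux distributions

    # Metric 1: percentage of business systems not scanned within the window
    awk -F, -v cutoff="$cutoff" 'NR > 1 { total++; if ($3 < cutoff) stale++ }
        END { printf "Not scanned in last 30 days: %.0f%%\n", 100 * stale / total }' scan_export.csv

    # Metrics 2 and 3: average and total vulnerability score per business unit
    awk -F, 'NR > 1 { n[$2]++; sum[$2] += $4 }
        END { for (bu in n) printf "%-15s avg %.1f  total %.0f\n", bu, sum[bu] / n[bu], sum[bu] }' scan_export.csv

Run per business unit, this kind of rollup makes it obvious which units are reducing risk and which are lagging behind.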

STAGE 5: OPTIMIZING

Lastly, in the ‘Optimizing’ stage, the metrics defined in the previous stage are targeted for improvement. Optimizing each of the metrics will ensure that the vulnerability management program continuously reduces the attack surface of the organization. The information security team should work together with the management team to set attainable targets for the vulnerability management program. Once those targets are met consistently, new and more aggressive targets can be set with the goal of continuous process improvement.

CONCLUSION

As one of the first four of the Top 20 Critical Security Controls, vulnerability management should be among the earliest things implemented in a successful information security program. Ensuring the ongoing maturation of your vulnerability management program is key to reducing the attack surface of your organization. If we can each reduce the surface attackers have to work with, we can make this world more secure, one network at a time!

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island
Cached Domain Credentials in Vista/7 (AKA Why Full Drive Encryption is Important) https://www.infosecisland.com/blogview/23875-Cached-Domain-Credentials-in-Vista7-AKA-Why-Full-Drive-Encryption-is-Important.html https://www.infosecisland.com/blogview/23875-Cached-Domain-Credentials-in-Vista7-AKA-Why-Full-Drive-Encryption-is-Important.html Thu, 17 Jul 2014 11:09:00 -0500 By: Ronnie Flathers 

Recently, I was conducting a security policy audit of a mid-size tech company and asked if they were using any form of disk encryption on their employees’ workstations. They were not; however, they pointed me to a policy document that required all “sensitive” files to be stored in an encrypted folder on the user’s desktop. They assumed that this was adequate protection against the files being recovered should a laptop be lost or stolen.

Unfortunately, this is not the case. Without full disk encryption (like BitLocker), sensitive system files will always be available to an attacker, and credentials can be compromised. Since Windows file encryption is based on user credentials (either local or AD), once these creds are compromised, an attacker would have full access to all “encrypted” files on the system. I will outline an attack scenario below to stress the importance of full drive encryption.

BACKGROUND

If you are not familiar with it, Windows has a built-in file encryption feature called Encrypting File System (EFS) that has been around since Windows 2000. If you right-click on a file or folder and go to Properties->Advanced, you can check a box called “Encrypt contents to secure data”. When this box is checked, Windows will encrypt the folder and its contents using EFS, and the folder or file will appear green in Explorer to indicate that it is protected:

[Screenshot: encrypted directory shown in green in Explorer]

Now only that user will be able to open the file; even Administrators will be denied access. Here a Domain Admin (‘God’) is attempting to open the encrypted file that was created by a normal user (‘nharpsis’):

[Screenshot: the Domain Admin ‘God’ is denied access to secret.txt]

According to Microsoft’s TechNet article on EFS, “When files are encrypted, their data is protected even if an attacker has full access to the computer’s data storage.” Unfortunately, this is not quite true. The encrypted file above (“secret.txt”) will be decrypted automatically and viewable whenever ‘nharpsis’ logs in to the machine. Therefore to view the files, an attacker only needs to compromise the ‘nharpsis’ account.

THE ATTACK

In this attack scenario, we will assume that a laptop has been lost or stolen and is powered off. There are plenty of ways to mount an online attack against Windows or to extract credentials and secret keys straight from memory; tools like mimikatz or the Volatility Framework excel at these attacks.

For a purely offline attack, we will boot from a live Kali Linux image and mount the Windows hard drive. As you can see, even though we have mounted the Windows partition and have read/write access to it, we are unable to view files encrypted with EFS:

[Screenshot: “Permission denied” when reading the EFS-encrypted file as root in Kali]

Yes, you read that right. We are root, and we are seeing a “Permission denied”.
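
For anyone recreating this, the offline setup is nothing exotic. A minimal sketch, assuming the Windows system partition is /dev/sda2 and using /mnt/windows as the mount point (both are assumptions, as is the user path; check lsblk on your own hardware first):

    lsblk -f                                      # identify the NTFS system partition
    mkdir -p /mnt/windows
    mount -t ntfs-3g /dev/sda2 /mnt/windows       # ntfs-3g gives read/write access to the Windows volume
    ls -l /mnt/windows/Users/nharpsis/Desktop     # the EFS-encrypted file still comes back "Permission denied"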

Commercial forensic tools like EnCase have functionality to decrypt EFS, but even they require the username and password of the user who encrypted it. So the first step will be to recover Ned Harpsis’s credentials.

Dumping Credentials

There are numerous ways to recover or bypass local accounts on a Windows machine. SAMDUMP2 and ‘chntpw’ are included with Kali Linux and do a nice job of dumping NTLM hashes and resetting account passwords, respectively. However, in this instance, and in the case of the company I was auditing, these machines are part of a domain, and AD credentials are used to log in.

Windows caches domain credentials locally to facilitate logging in when the Domain Controller is unreachable. This is how you can log in to your company laptop when traveling or on a different network. If any domain user, including an admin, has logged in to the machine, that user’s username and a hash of their password will be stored in the local registry (the SECURITY hive).

Kali Linux includes the tool ‘cachedump’, which is intended for exactly this purpose. Cachedump is part of a larger suite of awesome Python tools called ‘creddump’ that is available in a public svn repo: https://code.google.com/p/creddump/

Unfortunately, creddump has not been updated in several years, and you will quickly realize when you try to run it that it does not work on Windows 7:

[Screenshot: cachedump failing against Windows 7 registry hives]

This is a known issue and is discussed on the official Google Code project.

As a user pointed out, the issue carried over to the Volatility project, and an issue was raised there as well. A helpful user released a patch file to make the cachedump program work with Windows 7 and Vista.

After applying the patches and fixes I found online, as well as some minor adjustments for my own sanity, I got creddump working on my local Kali machine.

For convenience’s sake, I have forked the original Google Code project and applied the patches and adjustments. You can find the updated and working version of creddump on the Neohapsis Github:

https://github.com/Neohapsis/creddump7

Now that I had a working version of the program, it was just a matter of getting it on to my booted Kali instance and running it against the mounted Windows partition:

[Screenshot: creddump running against the mounted Windows partition]

Bingo! We have recovered two hashed passwords: one for ‘nharpsis’, the user who encrypted the initial file, and ‘god’, a Domain Admin who had previously logged in to the system.
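
For anyone following along, the sequence behind that screenshot looks roughly like the following. Treat it as a sketch rather than gospel: the hive paths depend on your mount point, creddump is Python 2-era code, and the exact cachedump.py arguments (particularly the trailing Vista/7 flag) may differ between versions, so check the repository’s README.

    git clone https://github.com/Neohapsis/creddump7
    cd creddump7

    # Registry hives under the mounted Windows partition (mount point from earlier)
    SYSTEM=/mnt/windows/Windows/System32/config/SYSTEM
    SECURITY=/mnt/windows/Windows/System32/config/SECURITY

    # Dump the cached domain credentials (mscash2); the final flag selects Vista/7-style hashes
    # Argument order and flag name are assumptions here -- verify against the README
    python cachedump.py "$SYSTEM" "$SECURITY" true > dcc2_hashes.txt
    cat dcc2_hashes.txt    # one username:hash line per cached domain user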

Cracking the Hashes

Unlike locally stored credentials, these are not NT hashes. Instead, they are in a format known as ‘Domain Cached Credentials 2’ or ‘mscash2’, which uses PBKDF2 to derive the hashes. Unfortunately, PBKDF2 is a computation-heavy function, which significantly slows down the cracking process.

Both John and oclHashcat support the ‘mscash2’ format. When using John, I recommend sticking to a relatively short wordlist rather than attempting a pure brute force.

If you want to try a large wordlist with some transformation rules, or run a pure brute force, use a GPU cracker with oclHashcat and still be prepared to wait a while.
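
To give a concrete idea of what that looks like, a typical wordlist run with John is something along these lines. The file names are placeholders (the hash file is whatever cachedump produced), and depending on your John build you may need to massage the dump into the input format it expects:

    john --format=mscash2 --wordlist=wordlist.txt dcc2_hashes.txt
    john --format=mscash2 --show dcc2_hashes.txt     # display whatever has been cracked so far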

To prove that cracking works, I used a wordlist I knew contained the plaintext passwords. Here’s John cracking the domain hashes:

[Screenshot: John the Ripper cracking the cached domain hashes]

Note the format is “mscash2”. The Domain Admin’s password is “g0d”, and nharpsis’s password is “Welcome1!”

I also extracted the hashes and ran them on our powerful GPU cracking box here at Neohapsis. For oclHashcat, each line must be in the format ‘hash:username’, and the hash mode for mscash2 is ‘-m 2100’:

[Screenshot: oclHashcat cracking the mscash2 hashes]
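
In rough terms, the command behind that run is the one below. The binary name varies by build (oclHashcat then, plain hashcat now), and newer releases may want the DCC2 hashes in a slightly different input format than the hash:username lines described above, so take this as a sketch:

    hashcat -m 2100 dcc2_hashes.txt wordlist.txt     # -m 2100 = Domain Cached Credentials 2 (mscash2)
    hashcat -m 2100 dcc2_hashes.txt --show           # list the cracked hashes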

Accessing the Encrypted Files

Now that we have the password for the user ‘nharpsis’, the simplest way to retrieve the encrypted file is just to boot the laptop back into Windows and log in as ‘nharpsis’. Once you are logged in, Windows kindly decrypts the files for you, and we can just open them up:

[Screenshot: secret.txt opens normally after logging in as ‘nharpsis’]

Summary

As you can see, if an attacker has physical access to the hard drive, EFS is only as strong as the user’s login password. Since this is a purely offline attack, an attacker has unlimited time to crack the password and then access the sensitive information.

So what can you do? Enforce full drive encryption. When BitLocker is enabled, everything on the drive is encrypted, including the registry hives that hold the cached credentials. Yes, there are attacks against BitLocker encryption, but they are much more difficult than attacking a user’s password.

In the end, I outlined the above attack scenario to my client and recommended they amend their policy to include mandatory full drive encryption. Hopefully this straightforward scenario shows that solely relying on EFS to protect sensitive files from unauthorized access in the event of a lost or stolen device is an inadequate control.

This was cross-posted from the Neohapsis blog.

Copyright 2010 Respective Author at Infosec Island
Snowden Continues to Expose Allied Cyber Tactics https://www.infosecisland.com/blogview/23874-Snowden-Continues-to-Expose-Allied-Cyber-Tactics.html https://www.infosecisland.com/blogview/23874-Snowden-Continues-to-Expose-Allied-Cyber-Tactics.html Thu, 17 Jul 2014 10:52:42 -0500 NSA whistleblower and Putin poster boy Edward Snowden apparently released yet another document, this one exposing UK cyber spying techniques allegedly used by the GCHQ.

The document, according to The Intercept, lists multiple tools that the UK intelligence agency used to spy on social media accounts, interrupt or modify communications, and even modify online polls.

Tools like:

  • UNDERPASS – Change outcome of online polls
  • SILVERLORD – Disruption of video-based websites hosting extremist content
  • ANGRY PIRATE – Permanently disables a target’s account on a computer
  • PREDATORS FACE – Targeted Denial Of Service against Web Servers
  • And several others.

The release again leaves me scratching my head.

From ancient times, countries have spied on each other, even their allies. Only the most naive would assume this practice has magically stopped in the online age. I do love how shocked governments appeared in the media when they found out that the NSA was snooping on them. What a joke.

And in this case, several of the tools listed sound like they are geared more towards fighting or countering enemy use of online communications, possibly by Islamic militants.

One would have to ask: does this release from Snowden make the people of the UK or the US safer from government snooping, or does it more likely tell enemy nations exactly what tools have been and will be used against them?

Again with Snowden, one would have to ask: is he a champion of internet privacy, or simply a traitor to the US and her allies, exposing tools and techniques used against foreign nations and in the war on terror?

With Snowden pushing for an extension of his stay in Russia, it would seem the latter is correct.

 This was cross-posted from the Cyber Arms blog.

Copyright 2010 Respective Author at Infosec Island
Compliance and Security Seals from a Different Perspective https://www.infosecisland.com/blogview/23873-Compliance-and-Security-Seals-from-a-Different-Perspective.html https://www.infosecisland.com/blogview/23873-Compliance-and-Security-Seals-from-a-Different-Perspective.html Wed, 16 Jul 2014 12:04:12 -0500 Compliance attestations. Quality seals like “Hacker Safe!” All of these things bother most security people I know because to us, these provide very little insight into the security of anything in a tangible way. Or do they? I saw this reply to my blog post on compliance vs. security which made an interesting point. A point, I dare say, I had not really put front-of-mind but probably should have.

Ron Parker was of course correct… and he touched on a much bigger point that this comment was part of. Much of the time, compliance and security badges (aka “security seals”) on websites aren’t done for the sake of making the website or product actually more secure; they’re done to assure the customer that the site or entity is worthy of their trust and business. This is contrary to conventional thinking in the security community.

Think about that for a second.

With that frame of reference, all the push to compliance and all the silly little “Hacker Safe!” security seals on websites make sense. Maybe they’re not secure, or maybe they are, but the point isn’t to demonstrate some level of absolute security. The point is to reassure you, the user, that you are doing business with someone who thought about your interests. Well…at least they pretended to. Whether it’s privacy, security, or both… the proprietors of this website or that store want to give you some way to feel safe doing business with them.

All this starts to bend the brain a bit, around the idea of why we really do security things. We need to earn someone’s business through his or her trust. The risks we take on the road to earning that business… well, that’s up to us to worry about. Who do you suppose is more qualified to make the assessment of ‘appropriate risk level’: you or your customers? With some notable exceptions, the answer won’t be your customers.

Realistically, you don’t want your customers trying to decide for themselves what is or isn’t an appropriate level of security. Frankly, I wouldn’t be comfortable with that either. The reality behind this thinking is that the customer typically doesn’t know any better and would likely make the wrong decision given the chance. So it’s up to you to decide, and that’s fair. Of course, this assumes that you as the proprietor have the customer’s interests in mind, and have some clue how to do risk assessments and balance risk/reward. Lots to assume, I know. Also, you know what happens when you ass-u-me, right?

So let’s wind back to my point. Compliance and security seals are a good thing. Before you pick up that rock to throw at me, think about this again. The problem isn’t that compliance and “security seals” exist, but that we’re misunderstanding their utility. The answer isn’t to throw these tools away and create something else, because that something else will likely be just as complicated (or useless) and needlessly waste resources on a problem that is already partway to being solved. Instead, let’s look to make compliance and security seals more useful to the end customer so you can focus on making that risk equation balance in your favor. I don’t quite know what that solution would look like yet, but I’m going to investigate it with some smart people. I think ultimately there needs to be some way to convey the level of security ‘effort’ by the proprietor, which becomes binding, so the owner can be held liable for providing false information or stretching the truth.

With this perspective, I think we could take these various compliance regulations and align them with the expectations customers have, while tying them to real security and risk goals. This makes more sense than what I see being adopted today. The goal isn’t to be compliant… well, I mean, it is, but it’s not to be compliant and call that security. It’s to be compliant as a result of being more secure. Remembering that compliance and security seals are for your customers is liberating, and lets you focus on the bigger picture of balancing risk/reward for your business.

This was cross-posted from the Follow the Wh1t3 Rabbit blog.

Copyright 2010 Respective Author at Infosec Island
Security: Not Just a Checkbox Anymore https://www.infosecisland.com/blogview/23871-Security-Not-Just-a-Checkbox-Anymore.html https://www.infosecisland.com/blogview/23871-Security-Not-Just-a-Checkbox-Anymore.html Tue, 15 Jul 2014 10:30:00 -0500 By: Irfahn Khimji

There have been many publicized victims of breaches recently. There can often be a lot of conjecture as to what happened, how it happened, and why it happened.

Did they have security controls in place? Are they getting accurate information? Is the information they are getting actionable? Is anyone actually actioning this actionable information?

These are all questions that we, as security practitioners, should be asking ourselves on a daily basis. All of them are proof that security and compliance cannot just be checkbox items anymore.

For example, within my organization, my security team and I may have met all of the audit requirements: reporting on vulnerabilities, reporting on changes in my environment, and logging all events of interest. However, what is happening with that information?

Are those reports just being filed away so that, when the audit team rolls around, they can give me my customary passing check mark, or are the findings actually being remediated?

What systems am I covering in my organization? Am I only covering the 10% of my systems that are within scope of my audit? What if an attacker leverages an out-of-scope system within my organization as a stepping stone towards my more critical assets? Do I even know what systems are on my network? Do I know what software is installed on those systems? Is that software patched and secured?

As a security practitioner in your organization, I encourage you to take a minute and answer these questions to yourself. Answering these questions is a great first step towards building a deeper understanding of the surface area of risk within your organization.
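
If the inventory questions are the ones you can’t answer, even a rough sweep is a start. A minimal sketch, assuming a 10.0.0.0/24 range (substitute your own addressing, and only scan networks you are authorized to scan):

    nmap -sn 10.0.0.0/24 -oG - | awk '/Up$/ { print $2 }' > live_hosts.txt   # ping sweep for live hosts
    nmap -sV --top-ports 100 -iL live_hosts.txt -oA inventory_scan           # basic service/version inventory

It is crude compared to a real asset inventory, but comparing the results against what you think is on the network is an eye-opening first pass.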

Let’s take a minute to look at this from the keyboard of an attacker. If I were to target your organization, I’d be looking for the low-hanging fruit:

  • What systems has this organization forgotten about?
  • What vulnerabilities are on these systems?

Chances are that if they have been forgotten, there are some vulnerabilities I can exploit with great ease!

Is this organization monitoring for changes on their network? If not, I can turn off logging and create my own back doors without anyone noticing!

What We Need to do as Defenders:

As defenders of our organizations, we need to ensure that we are establishing a culture of secure technology within them. As more and more breaches are publicized, business owners are becoming more aware of the risk associated with poor security practices.

Now is a great time to leverage a framework such as the Top 20 Critical Security Controls to get the support of key executives.

For more information, check out this post on Demonstrating Enterprise Commitment to Best Practice and Using the Top 20 Critical Security Controls to get Your CFO’s Attention.

If we treat information security as more than just a checkbox, we can make this world more secure one network at a time!

This was cross-posted from Tripwire's The State of Security blog.

Copyright 2010 Respective Author at Infosec Island