Webcast: Segmentation Beyond VLANs, Subnets, and Zones https://www.infosecisland.com/blogview/24645-Webcast-Segmentation-Beyond-VLANs-Subnets-and-Zones.html Tue, 01 Sep 2015 10:31:39 -0500 Live Webcast: Wednesday, Sept. 2nd, 2015 at 1:00 pm ET

You already know the power of application segmentation to deliver data center and cloud security—now you can take segmentation to the next level. Nano-segmentation is finally a reality.

In 15 minutes, we’ll show you how nano-segmentation delivers the most granular, adaptive security across your data centers and public clouds.

Join Illumio and SecurityWeek for this interactive webcast to find out how to:

- Reduce your data center and cloud attack surface by 99%

- Quarantine compromised servers in seconds

- Achieve compliance in hours

Register Now

Can't Make the Live Event? Register now and we'll email you a link to watch on demand.

Copyright 2010 Respective Author at Infosec Island
A Guide to AWS Security https://www.infosecisland.com/blogview/24642-A-Guide-to-AWS-Security.html Thu, 20 Aug 2015 09:30:17 -0500 If you’re looking to migrate your business applications to a public cloud, the chances are that you’ve looked into Amazon Web Services. With its higher capacity and wide range of cloud services, AWS has become the most popular choice for businesses looking for the scalability and cost-effective storage that cloud computing offers.

Security in AWS is based on a shared responsibility model: Amazon provides and secures the infrastructure, and you are responsible for securing what you run on it. This provides you with greater control over your traffic and data, and encourages you to be proactive. However, before you go ahead and migrate your applications to AWS, here are some tips on how to manage and enforce security for maximum protection across your AWS and on-premise environments.

Understanding security groups

Amazon offers a virtual firewall facility for filtering the traffic that crosses your cloud network segment; but the way that AWS firewalls are managed differs slightly from the approach used by traditional firewalls.  The central component of AWS firewalls is the ‘security group’, which is essentially what other firewall vendors would call a policy, i.e. a collection of rules.  However, there are key differences between security groups and traditional firewall policies that need to be understood.

First, in AWS, there is no ‘action’ in the rule stating whether the traffic is allowed or dropped.  This is because all rules in AWS are positive and always allow the specified traffic – unlike traditional firewall rules. 

Second, AWS rules let you specify the traffic source, or the traffic destination – but not both in the same rule. For inbound rules, there is a source that states where the traffic comes from, but no destination telling it where to go.  For outbound rules it is the other way around: you can specify the destination but not the source. The reason for this is that the AWS security group always sets the unspecified side (source or destination, as the case may be) as the instance to which the security group is applied.
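As a rough illustration of that rule model, here is a minimal boto3 (Python) sketch; the group ID and CIDR range are placeholders, not values from the article. Note that the ingress call names only a source, carries no allow/deny action, and leaves the destination implicit (the instances the group is attached to):

import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder security group ID

# Inbound rule: only the *source* CIDR is specified. The destination is
# implicitly every instance the security group is attached to, and the
# rule is always an allow -- there is no action field.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office range"}],
    }],
)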

AWS is flexible in how it allows you to apply these rules. Single security groups can be applied to multiple instances, in the same way that you can apply a traditional security policy to multiple firewalls.  AWS also allows you to do the reverse: apply multiple security groups to a single instance, meaning that the instance inherits the rules from all the security groups that are associated with it.  This is one of the unique features of the Amazon offering, allowing you to create security groups for specific functions or operating systems, and then mix and match them to suit your business’ needs.
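To illustrate that mix-and-match approach, the sketch below (boto3 again, with placeholder instance and group IDs) attaches two security groups to one instance, which then inherits the union of their rules:

import boto3

ec2 = boto3.client("ec2")

# Replace the instance's current set of security groups with a generic
# OS-hardening group plus an application-specific group; the instance
# inherits the combined rules of both.
ec2.modify_instance_attribute(
    InstanceId="i-0abc1234def567890",  # placeholder instance ID
    Groups=["sg-0123456789abcdef0", "sg-0fedcba9876543210"],  # placeholder IDs
)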

Managing outbound traffic

AWS does manage outbound traffic, but there are some differences in how it does this compared to conventional approaches that you need to be aware of.  With AWS, the user is not automatically guided through the settings for outbound traffic during the initial setup process.  The default setting is that all outbound traffic is allowed, in contrast to the default setting for inbound traffic which denies all traffic until rules are created.

Clearly, this is an insecure setting which can leave your organisation vulnerable to data loss, so it’s advisable to create rules that allow only specific outbound traffic, and protect data that is truly critical.  Because the AWS setup wizard doesn’t automatically take you through the outbound settings, you will need to create these rules manually and apply them. 
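A hedged boto3 sketch of that manual step might look like the following (IDs and CIDR ranges are placeholders): remove the default allow-all egress rule, then add back only the outbound traffic you actually need.

import boto3

ec2 = boto3.client("ec2")
SG_ID = "sg-0123456789abcdef0"  # placeholder security group ID

# Drop the default "allow all outbound traffic" rule...
ec2.revoke_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# ...and allow only the specific outbound traffic that is required,
# for example HTTPS to an internal update mirror.
ec2.authorize_security_group_egress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.10.0/24"}],
    }],
)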

Auditing and compliance

Once you start using AWS in production, you need to remember that these applications are now subject to regulatory compliance and internal audits. Amazon does offer a couple of built-in features that help with this: Amazon CloudWatch, which acts as a health monitor and log server for your instances, and Amazon CloudTrail, which records and audits your API calls. However, if you are running a hybrid data centre environment, you will require additional compliance and auditing tools.
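For example, CloudTrail’s event history can be queried programmatically when gathering audit evidence; a small boto3 sketch (the event name shown is just one of many you might review) could pull the last day of security group changes:

import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

# Pull the last 24 hours of security group rule additions for audit review.
response = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "AuthorizeSecurityGroupIngress",
    }],
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
)
for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])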

Depending on which industry you’re in and what type of data you handle, your business will be subject to different regulations – for example, if you process credit card information, you will be subject to PCI. So if you want to use your AWS cloud platform for this sensitive data, then you will need the right third-party security management products in place to provide you with the same reporting facilities that you would get with a conventional firewall.

The most important thing you need from a third-party solution is visibility of the policies from all security groups and of your whole hybrid estate, together with the same analysis and auditing capabilities as an on-site infrastructure, giving you a holistic view and unified management of your security environment.

Ultimately, it is your responsibility to secure everything that you put onto an AWS environment.  Considering these points and following the steps I’ve outlined will help to ensure that you protect your data and comply with regulatory requirements when migrating to AWS.

About the Author: Professor Avishai Wool is CTO of security policy management provider AlgoSec.

Copyright 2010 Respective Author at Infosec Island
Critical iOS "Quicksand" Vulnerability Lets Malicious Mobile Apps to Grab Enterprise Credentials https://www.infosecisland.com/blogview/24641-Critical-iOS-Quicksand-Vulnerability-Lets-Malicious-Mobile-Apps-to-Grab-Enterprise-Credentials.html https://www.infosecisland.com/blogview/24641-Critical-iOS-Quicksand-Vulnerability-Lets-Malicious-Mobile-Apps-to-Grab-Enterprise-Credentials.html Thu, 20 Aug 2015 08:41:41 -0500 Mobile security firm Appthority says it has identified a critical security flaw in the iOS mobile operating system that affects all iPhone, iPod touch, iPad devices running iOS 7 and later.

Dubbed "Quicksand" by the the security firm, the sandbox security vulnerability enables a malicious mobile app, or a bad actor who gains access to a physical device, to read other installed mobile apps' managed preferences, giving cybercriminals the ability to harvest credentials and exfiltrate other sensitive corporate data. 

Apple has fixed the vulnerability in the most recent iOS 8.4.1 security update, and users should ensure both corporate and employee-owned devices are running the most current iOS version.

  Read More at SecurityWeek

Copyright 2010 Respective Author at Infosec Island
Inadequate Processing Parameters Add More Chinks in the EMV Armor https://www.infosecisland.com/blogview/24640-Inadequate-Processing-Parameters-Add-More-Chinks-in-the-EMV-Armor.html Wed, 19 Aug 2015 09:52:35 -0500 A few weeks ago Inteller reported its findings on a vulnerability which enables fraudsters to take data stolen from the magnetic stripe of an EMV card, convert it to the data format used by the smart chip, encode it on the chip and use the card successfully in EMV transactions.

Fortunately, this vulnerability only affected an older implementation of EMV known as Static, or SDA. As this implementation is supposedly being phased out, and as any recent adoption of EMV would most likely involve the “Dynamic” (or DDA) implementation which is not vulnerable to this scheme, it seemed that this vulnerability should have a rather limited impact – and EMV remains strongly secured.

However, Inteller has received information from well-informed industry sources that banks have recently observed several new schemes targeting EMV which were successful in circumventing the standard. While these particular incidents were also limited to SDA, unlike the previously-reported case, they were made possible due to elements in the transaction authorization process that could, theoretically, also impact Dynamic EMV implementation in the long run.

In the first observed incident, criminals manipulated the ARQC, the cryptogram on the card that is used for authorization requests. The criminals sent a wrong ARQC in the transaction to the switch (the processor), which, instead of pushing the authorization request to the bank, approved the transaction as normal for a chip card. In a second incident, criminals manipulated the counter on the card. The counter is a value which changes after every transaction and is used both to track how many transactions are done with the card per day and to search for any inconsistencies in the count that may suggest the card has been cloned. In the incident, a lower counter value was sent and somehow (again, we don’t know how) forced the transaction out of EMV and into magstripe mode. In this mode, the terminal instructs the teller to use the magnetic stripe instead of the smart chip, effectively nullifying EMV.

A third incident was also observed, but it is quite different from the other two, and seems to target the older and more vulnerable Static implementation. In the scheme, which we call “Reverse Conversion”, fraudsters used skimmers to steal data from the smart chips of EMV cards, then converted it to magnetic stripe data, which they encoded onto fake cards. Using these cards, they were able to steal money from ATMs, as they obtained the PIN code as well from the skimmers. This scheme is somewhat the reverse of the one we’ve already discussed in our previous blog post. An interesting question comes up from this incident – how were fraudsters able to use these magnetic stripe cards when a smart chip doesn’t contain the CVV value required to successfully complete a transaction?

Despite the fact that the third incident is quite different from the first two, all three seem to have a common denominator: in all three, most likely (as, again, there is a lot we still don’t know), the culprit was inadequate processing parameters.

Processing parameters dictate the logic of how transactions are processed and authorized. All banks have a huge set of parameters that define when a transaction gets approved, when it gets declined, in which scenarios to ignore characteristics that would otherwise raise a red flag, and so on. These parameters can be seen as the source code of the financial transactions world and, much like source code, they are very complex and keep changing. Much like source code, they too have “bugs” that cybercriminals learn about and exploit. All it takes is one inadequate or missing parameter to lower the bank’s defenses in one specific scenario, and when fraudsters find out about this vulnerability they will recreate this scenario every time they perform a fraudulent transaction in order to exploit the rule. This seems to be the case here as well. As the transactions were done in online mode (meaning data was sent to the bank, which approved the transaction), the banks theoretically could have identified and blocked the transactions, but the fraudsters knew which parameters to exploit.
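As a purely illustrative sketch, and not a description of any real issuer’s logic, the kind of parameter these incidents hinge on might look like a host-side rule that the chip’s application transaction counter (ATC) must only move forward; the field names and rules below are invented for the example:

# Hypothetical issuer-side parameter check: the counter (ATC) reported by
# the chip must move strictly forward. A lower or repeated value suggests a
# cloned card or a manipulated counter, so the transaction is declined
# instead of being approved "as normal for a chip card".
last_seen_atc = {}  # card number -> highest counter value observed so far

def check_counter(pan: str, reported_atc: int) -> str:
    previous = last_seen_atc.get(pan, -1)
    if reported_atc <= previous:
        return "DECLINE"          # counter went backwards: treat as suspect
    last_seen_atc[pan] = reported_atc
    return "CONTINUE_AUTH"        # hand off to cryptogram (ARQC) validation

print(check_counter("4111111111111111", 120))  # CONTINUE_AUTH
print(check_counter("4111111111111111", 118))  # DECLINE -- possible clone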

Inadequate processing parameters are nothing new. Back in 2005, multiple banks, including several big ones, completely ignored any CVV mismatches. When fraudsters discovered this vulnerability, they heavily attacked these banks, creating phishing attacks that requested the credit card number, expiration date and PIN code of their victims’ cards. Without the CVV to worry about, this data was enough for them to construct a working cloned card. The PIN code was used to steal money out of ATMs. Now, parameters that ignore CVV mismatches may rear their ugly heads again, as they might be the reason for the ability to exploit “Reverse Conversion”. Other inadequate parameters most likely enabled the other two incidents.

Some readers may note that it isn’t the EMV technology that has been breached, but rather specific banks’ or switches’ rules. While true, one must remember that EMV is a system with multiple parts, and processing parameters play an integral part in it. If the parameters are faulty then the technology which makes EMV secure can no longer be effective, and the end result is working cloned EMV cards being successfully used in fraudulent transactions. While the aforementioned incidents are currently limited to the SDA implementation, inadequate processing parameters can also, in theory, authorize DDA transactions. After all, they govern which transactions get approved and which don’t, what to look for in a transaction and what to ignore – for SDA, DDA and non-EMV implementations alike.

It seems that fraudsters have learned that the way to survive EMV implementation is not to “break the technology”, but instead do what they have always done – find vulnerabilities in this vast system and exploit them. Even if circumventing EMV becomes a specific-bank or specific-BIN kind of game, depending on where the faulty authorization rules lie, this would not be a deterrent to fraudsters. They already have a massive social infrastructure called the Underground Economy, through which they can share intelligence on these vulnerabilities. These incidents are evidence that EMV is not going to be an impenetrable wall for cybercriminals, it just means that they’ll have to look harder for the cracks. Fortunately for them, this is exactly the kind of thing they know how to do very well.

Copyright 2010 Respective Author at Infosec Island
Microsoft Patches Critical IE Flaw Exploited in the Wild https://www.infosecisland.com/blogview/24639-Microsoft-Patches-Critical-IE-Flaw-Exploited-in-the-Wild.html Tue, 18 Aug 2015 18:41:18 -0500 Microsoft issued an emergency out-of-band update on Tuesday to fix a critical vulnerability (CVE-2015-2502) being actively exploited in the wild and affecting all versions of Internet Explorer from IE 7 through 11.

The flaw is a remote code execution vulnerability that exists when Internet Explorer improperly accesses objects in memory, and if exploited could corrupt memory and allow an attacker to execute malicious code on a system with the access rights of the current user.

The flaw could be combined with other vulnerabilities to elevate to administrator privileges, Bobby Kuzma, systems engineer at Core Security, told SecurityWeek.

Read More at SecurityWeek

Copyright 2010 Respective Author at Infosec Island
Get Physical with your Physical Space https://www.infosecisland.com/blogview/24636-Get-Physical-with-your-Physical-Space.html Mon, 17 Aug 2015 06:18:17 -0500 There are many false presumptions about physical controls, and the old adage in the cyber world is that physical possession is the law. Current social engineering practice has gone beyond email phishing scams, and there is a high probability of a malicious presence within the place of work.

The social factors of cyber threat organizations lead me to believe that a high-profile business could be infiltrated physically by actors wishing to retrieve or control data sources through the installation of additional components that are not approved by policy.

Small companies may have techs who know the systems and whether or not components should be present, but this depends on the proactive nature of the group. There are measures that can be put in place to detect when something new has been added to the environment. However, these automated processes may or may not be reviewed by the proper staff, who would then notify people to track down rogue additions to the physical infrastructure.

I asked Jayson Street, a colleague in my area, how he is able to put a device on a network infrastructure that puts the site at risk of analysis and additional remote threats. He basically said he has a story and sticks to it. People believe stories, and many are not trained to be observant and suspicious of the risk. Most would not even recognize the threat the device poses, but if someone says it speeds up web browsing, they could feel compelled to let anything be done.

This would not be the vast majority of threats faced. Most shops allow USB devices to be connected. I’ve seen some pretty small USB NICs that can be hidden in the back of desktops. Most users are unaware of what should and should not be on their systems, and if the threat were on the company payroll, management would probably not be notified. The actors would have additional inside knowledge of the environment and some ability to control it, as well as how it is perceived.

A USB data key could also be a low-profile device attached to a PC, collecting passwords, intellectual property or other sensitive information. Plug something like this into an RFID management system and it could allow a full compromise of the controls governing physical access to restricted areas. Rogue access points such as a home router, phone or laptop are always a hazard and very hard to track down (e.g., powered on and locked in a desk drawer).

I would recommend that no IT shop feel secure in its place of business simply because the physical controls give the impression that nobody can compromise their hardware. This should provoke a desire to have trusted people check all physical devices and storage areas in a facility to verify that everything that needs to be plugged in is, and that nothing suspicious is going on.

Observation and creating baselines are essential. Know what should be there so you know what shouldn’t be. Make sure the staff knows what to look for and when to put boots on the ground to track down suspicious additions.

Copyright 2010 Respective Author at Infosec Island
Businesses Should Take a Pass on Traditional Password Security https://www.infosecisland.com/blogview/24634-Businesses-Should-Take-a-Pass-on-Traditional-Password-Security.html Tue, 04 Aug 2015 12:28:31 -0500 In today’s connected world, authentication is ubiquitous. Whether it’s a website, mobile app, laptop, car, hotel door lock, retail kiosk, ATM, or video game console, security is essential to all networked systems. Even individuals must use authentication through state-issued ID cards to validate their identities within the network of a city or state.

Whether virtual or physical, the improper access obtained from failed authentication has tangible effects, ranging from stolen identities and fraudulent transactions to intellectual property theft, data manipulation, network attacks, and state-sponsored espionage. These consequences have the potential to cost companies millions of dollars, ruin the reputations of individuals, and disrupt business.

Authentication in the Internet Age

Let’s be honest. Historical forms of authentication were never meant for the networked landscape we live in today. The first passwords were adequate authentication solutions only because the systems they secured were isolated. Unfortunately, the isolated systems that pervaded the early days of the computer revolution have set the foundation for authentication in the Internet Age.

Within just a few years, the global computer market transitioned from a disconnected world of isolated computers to a fragmented world connected by the cloud. Not only are computers now interconnected; the devices themselves and the applications running on them are as mobile as the users who own them. No longer are applications restricted to specific machines or data centers; they can be distributed, dispersed, or local to mobile devices. The security of any individual system or user now affects the security of those systems networked to it.

The Internet has been ingrained in global culture and commerce to such a drastic degree that every new day increases the risk and impact of improper authentication. And with the impending Internet of Everything — that is, the millions or billions of devices, sensors, and systems that will connect to the Internet — not only is the need for secure authentication exponentially rising, the landscape is also changing.

Today, the tempo of security breaches directly related to stolen passwords and bypassed authentication is increasing along with the severity of their consequences. Further compounding these issues, past breaches are creating a snowball effect resulting in subsequent attacks being easier, quicker, and more widespread than their predecessors.

A new approach to authentication and authorization is required to face the new generation of modern security challenges. 

Houston…We Have a Password Problem

I believe that passwords aren’t simply used incorrectly today; they’re fundamentally insecure and present problems for device authentication in the future.

Traditionally, the primary form of user authentication in networked systems has been the username and password combination. More recently, the concept of strong authentication has become popular whereby an additional factor of authentication is used on top of the password layer for added assurance. Unfortunately, neither passwords nor strong authentication built on top of passwords are bulletproof solutions for today’s security challenges.

As we begin to consider an Internet of Things (IoT) world of connected devices, it’s easy to see how passwords are incompatible with the vast majority of smart objects that constitute the future of our networked world. The in-band centralized nature of passwords requires that users input credentials into the requesting application. However, most devices, such as sensors, door locks, and wearables don’t have an attached keyboard for password input. This means that authentication must happen out of band. Instead of the user supplying a device with credentials, that device must obtain authorization externally in a decentralized manner.

The Problem with Two-Factor Authentication

Security experts have long recommended strong authentication to compensate for the weakness of passwords. While strong authentication is the correct approach to take, the traditional method, known colloquially as two-factor authentication, is inadequate.

Let’s take a look at a few of the key issues:

Architectural Vulnerabilities

Shared secret architectures involve a token or one-time password (OTP) that is sent to a mobile phone or generated on a fob that the end user owns. This OTP is compared with a token generated by the application being secured.

The symmetric key cryptography that this process relies on is an inferior security approach because if either the user’s device or the application is compromised, the shared secret can be obtained, thereby allowing an attacker to generate their own correct token. Additionally, since the user’s token must be transposed or delivered back to the application for comparison, there is a risk that the token can be intercepted by a hacker, malware, or observer in a man-in-the-middle (MITM) attack.
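To make the shared secret concrete, here is a minimal RFC 6238-style TOTP sketch in Python (the secret shown is an arbitrary example, not a real credential): the user’s device and the application each derive the code from the same key, so compromising either side is enough to mint valid tokens.

import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Both the user's device and the server run this same function on the
    same shared secret -- whoever holds the secret can produce valid codes."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval            # time-based counter
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

SECRET = "JBSWY3DPEHPK3PXP"  # example secret only
print(totp(SECRET))          # the "user" side
print(totp(SECRET))          # the "server" side computes the identical value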

Password Layer Remains Unresolved

Traditional two-factor authentication retains the in-band password layer which means the core password problems remain unresolved. The application still holds on to the “bait” that hackers and malware are after, keeping the application layer in the crosshairs of any attack on the authentication layer.

Poor User Experience 

Transposing tokens that quickly expire may be considered an annoying user experience that many users will opt to avoid in favor of a smoother authentication flow. OTP flows that rely on SMS are unreliable and inconsistent. End users’ preference for convenience over security means traditional two-factor authentication implementations like OTP may go unused. For organizations and applications, traditional two-factor authentication means sending their users outside of the branded experience that they control.

Additionally, traditional two-factor authentication approaches involve sending the end user to a third-party application. Often, this involves a company or online service forwarding their users to mobile apps or hardware with unaffiliated branding and user experience. Such an approach is often unacceptable, especially for consumer-facing organizations.

Use Cases Are Limited

Authentication is integral to more use cases than login forms. Whether a user wants to approve a purchase in real time, sign for a package, verify their identity, or access a secure corporate office, authentication plays a critical role. In many of these scenarios, an input form to submit credentials like a password and OTP token isn’t available, thereby placing such scenarios out of the scope of traditional two-factor authentication.

High Cost

Many two-factor authentication solutions represent a tangible cost and logistical burden. A single hardware token can cost as much as $100 or more, making a two-factor authentication solution that only satisfies a limited subset of use cases unrealistic.

Time to Move Beyond the Password

Password-based authentication is no longer capable of meeting the demands of modern security. Passwords are inherently insecure as a method of authentication, and their efficacy relies on end users, developers, system administrators, and the applications themselves, all of which are vulnerable to a wide variety of attack vectors currently being exploited by cyberattacks around the world today.

Traditional strong authentication methods like two-factor authentication built on top of passwords do nothing to address the liability and risk of the insecure password layer, while their shared secret architecture (e.g. OTP) is cryptographically inferior, vulnerable to many attack vectors, and creates a cumbersome experience that users dislike and often avoid. Furthermore, both passwords and the strong authentication built on top of them are incompatible with many of the devices and remote “things” that will require user authentication in the future, but lack the requisite input mechanisms like keyboards and forms to use them.

Organizations and applications must remove the vulnerability and liability that passwords have created while implementing more secure authentication methods that account for an evolving and diversified landscape of use cases, end users, and threats.

About the Author: Geoff Sanders is Co-Founder and CEO of LaunchKey.

Copyright 2010 Respective Author at Infosec Island
Hackers and Threats: Cybercrime Syndicates Go Global https://www.infosecisland.com/blogview/24633-Hackers-and-Threats-Cybercrime-Syndicates-Go-Global.html Tue, 04 Aug 2015 10:29:00 -0500 The pace and scale of information security threats continue to accelerate, endangering the integrity and reputation of today’s most trusted organizations.

The stakes are higher than ever before, and we’re not just talking about personal information and identity theft anymore. High level corporate secrets and critical infrastructure are constantly under attack around the globe and organizations of all sizes need to be aware of the important trends that have emerged or shifted over the past few years. With the speed and complexity of the security threat landscape changing on a daily basis, those organizations that don’t prepare will be left behind, most likely in the wake of reputational and financial damage.

Crime Syndicates are Taking a Quantum Leap

Organizations are struggling to cope with the quantum speed and sophistication of global cyber-attacks being carried out by organized cyber-criminal syndicates. Moving forward, businesses need to prepare to be targeted at any time, and any place, by multiple assailants. Organizations that wish to keep pace with these developments, and remain financially viable, need to take action now, or face the consequences.

Criminal organizations are becoming more sophisticated, more mature and are migrating their activities online at greater pace than ever before. They are beginning to develop complex hierarchies, partnerships and collaborations that mimic large private sector organizations and are taking their activities worldwide. They are also basing their operations where political and law enforcement structures are weak and malleable, and where they can conduct their activities relatively undisturbed. This is forcing domestic organizations to adapt their security strategies and fortify their internal business operations in order to protect themselves from the inevitable data breach.

So how much does a data breach actually cost an organization?

Total Cost of a Data Breach

According to the Ponemon Institute’s 2015 Cost of Data Breach Study, the average consolidated total cost of a data breach is $3.8 million. The study also found that the cost incurred for each lost or stolen record containing sensitive and confidential information increased six percent from a consolidated average of $145 to $154. Ponemon also found that 47% of all breaches in this year’s study were caused by malicious or criminal attacks and the average cost per record to resolve such an attack is $170. In contrast, system glitches cost $142 per record and human error or negligence is $134 per record.

Now, let’s take a look at another area of loss that is affecting organizations of all sizes. The cost associated with lost business has been progressively increasing over the past few years and potentially has the most severe financial consequences for an organization. Ponemon found that the cost of lost business increased from a total average cost of $1.33 million last year to $1.57 million in 2015. This cost component includes the abnormal turnover of customers, increased customer acquisition activities, reputation losses and diminished goodwill. The growing awareness of identity theft and consumers’ concerns about the security of their personal data following a breach has contributed to the increase in lost business.

Cyber Crime Increases as Malspace Matures

I mentioned earlier how criminal organizations are becoming more sophisticated and mature. In addition, crime syndicates are aligning commercially and diversifying their enterprises, seeking profits from moving more of their activities online. They are basing their operations where political and law enforcement structures are weak and malleable, and where they can conduct their activities relatively undisturbed. This is forcing domestic organizations to adapt their security strategies and fortify their internal business operations.

In a criminal marketplace with a global talent pool, professionalization will encourage specialization. Different criminal business units will focus on what they do best, and strategy development and market segmentation will follow best practice from the private sector. Malware development will be a prominent example of specialization. Profits will allow crime syndicates to steadily diversify into new markets and fund research and development from their revenue. Online expansion of criminal syndicates will result in increased Crime-as-a-Service (CAAS) along with distributed bulletproof hosting providers that sell services and turn a blind eye to the actions of malicious actors.

In today’s global, connected society, businesses must prepare for the unknown so they have the flexibility to withstand unexpected and high impact security events. With the growth of the Internet of Things (IoT), we’re seeing the creation of tremendous opportunities for enterprises to develop new services and products that will offer increased convenience and satisfaction to their consumers. The rapid uptake of Bring Your Own Device (BYOD) is increasing an already high demand for mobile applications for both work and home.

Smartphones are already the control center for the IoT, creating a prime target for malicious actors. The information that individuals store on mobile devices already makes them attractive targets for hackers, specifically “for fun” hackers, and criminals. Unauthorized users will target and siphon sensitive information from these devices via insecure mobile applications. The level of hyper-connectivity means that access to one app on the smartphone can mean access to all of a user’s connected devices.

But do the apps access more information than necessary and perform as expected?

Worst case scenario, apps can be infected with malware that steals the user’s information – tens of thousands of smartphones are thought to be infected with one particular type of malware alone. This will only worsen as hackers and malware providers switch their attention to the hyper-connected landscape of mobile devices.

I’ve touched upon mobile and the IoT so let’s shift gears for a moment to the supply chain. Here’s a question for you: Do you know if your suppliers are protecting your company’s sensitive information as diligently as you would protect it yourself?  This is one duty you can’t simply outsource because it’s your liability. By considering the nature of your supply chains, determining what information is shared, and assessing the probability and impact of potential breaches, you can balance information risk management efforts across your enterprise.

Organizations need to think about the consequences of a supplier providing accidental, but harmful, access to their corporate information. Information shared in the supply chain can include intellectual property, customer or employee data, commercial plans or negotiations, and logistics. Caution should not be confined to manufacturing or distribution partners. It should also embrace your professional services suppliers, all of whom share access, often to your most valuable data assets.

To address information risk in the supply chain, organizations should adopt robust, scalable and repeatable processes – obtaining assurance proportionate to the risk faced. Supply chain information risk management should be embedded within existing procurement and vendor management processes, so supply chain information risk management becomes part of regular business operations.

Reducing the Risk of Attack

Today, risk management largely focuses on achieving security through the management and control of known risks. The rapid evolution of opportunities and risks in cyberspace is outpacing this approach and it no longer provides the required protection. Cyber resilience requires recognition that organizations must prepare now to deal with severe impacts from cyber threats that are impossible to predict. Organizations must extend risk management to include risk resilience, in order to manage, respond and mitigate any negative impacts of cyberspace activity.

Cyber resilience also requires that organizations have the agility to prevent, detect and respond quickly and effectively, not just to incidents, but also to the consequences of the incidents. This means assembling multidisciplinary teams from businesses and functions across the organization, and beyond, to develop and test plans for when breaches and attacks occur. This team should be able to respond quickly to an incident by communicating with all parts of the organization, individuals who might have been compromised, shareholders, regulators and other stakeholders who might be affected.

Cyber resilience is all about ensuring the sustainability and success of an organization, even when it has been subjected to the almost inescapable attack. By adopting a realistic, broad-based, collaborative approach to cyber security and resilience, government departments, regulators, senior business managers and information security professionals will be better able to understand the true nature of cyber threats and respond quickly and appropriately.

Inside and Out: Preparing Your People

Organizations continue to heavily invest in developing human capital. Let’s be honest. No CEO’s presentation or annual report would be complete these days without stating its value. Leaders, now more than ever, demand return on investment forecasts for the projects that they have to choose between, and awareness and training are no exception. Evaluating and demonstrating their value is becoming a business imperative.

Many organizations recognize their people as their biggest asset, yet many still fail to recognize the need to secure the human element of information security. In essence, people should be an organization’s strongest control. But, instead of simply making people aware of their information security responsibilities and how they should respond, the answer for organizations is to embed positive information security behaviors that will result in their behavior becoming a habit and part of an organization’s information security culture. While many organizations have compliance activities which fall under the general heading of ‘security awareness’, the real commercial driver should be risk, and how new employee behaviors can reduce that risk.

We’ve discussed preparing for an incident, but what about external communication once a breach occurs? Due to the ever-increasing velocity of the 24/7 news cycle, it has become virtually impossible for organizations to control the public narrative around an incident. Responding to unwelcome information released on someone else’s terms is a poor strategy, and a defensive posture plays poorly with customers whose personal details have just been compromised.

The perspective that disclosure will be more damaging than the data theft itself is a guaranteed way to damage customer trust. However, advance planning is often lacking, as are the services of tech-literate public relations departments. The lesson that we tell our members is to carefully consider how to respond, because your organization can’t control the news once it becomes public. This is particularly true as data breaches are happening with greater frequency and as the general public pays greater attention to information security. I also recommend running simulations with your PR firm so that you are better prepared to respond following a breach.

Have Standard Security Measures in Place

Business leaders recognize the enormous benefits of cyberspace and how the Internet greatly increases innovation, collaboration, productivity, competitiveness and engagement with customers. Unfortunately, they have difficulty assessing the risks versus the rewards. One thing that organizations must do is ensure they have standard security measures in place.

The Information Security Forum (ISF) has designed its tools to be as straightforward to implement as possible. These ISF tools offer organizations of all sizes an “out of the box” approach to address a wide range of challenges – whether they be strategic, compliance-driven, or process-related. For example, the ISF’s Standard of Good Practice for Information Security (the Standard) enables organizations to adopt good practices in response to evolving threats and changing business requirements. The Standard is used by many organizations as their primary reference for information security. The Standard is updated annually to reflect the latest findings from the ISF’s Research Program, input from our global member organizations, and trends from the ISF Benchmark, along with major external developments including new legislation.

Another example that organizations can use is the ISF’s Information Risk Assessment Methodology version 2 (IRAM2). IRAM2 has many similarities to other popular risk assessment methodologies. However, whereas many other methodologies end at risk evaluation, IRAM2 covers a broader scope of the overall risk management lifecycle by providing pragmatic guidance on risk treatment. The IRAM2 risk assessment methodology can help businesses of all sizes with each of its six phases detailing the steps and key activities required to achieve the phase objectives while also identifying the key information risk factors and outputs.

As information risks and cyber security threats increase, organizations need to move away from reacting to incidents and toward predicting and preventing them. Developing a robust mechanism to assess and treat information risk throughout the organization is a business essential. IRAM2 provides businesses of all sizes with a simple and practical, yet rigorous risk assessment methodology that helps businesses identify, analyze and treat information risk throughout the organization.

Don’t Find Yourself in Financial and Reputational Ruin

In preparation for making your organization more cyber resilient, here is a short list of next steps that I believe businesses should implement to better prepare themselves:

  • Focus on the Basics
  • Prepare for the Future
  • Change your Thinking About Cyber Threats
  • Re-assess the Risks to Your Organization and its Information from the Inside Out
  • Revise Information Security Arrangements

Organizations of all sizes need to ensure they are fully prepared to deal with these ever-emerging challenges by equipping themselves to better deal with attacks on their business as well as their reputation. It may seem obvious, but the faster your response, the better your outcome will be.

Copyright 2010 Respective Author at Infosec Island
The Technical Limitations of Lloyd’s Cyber Report on the Insurance Implications of Cyberattack on the US Grid https://www.infosecisland.com/blogview/24631-The-Technical-Limitations-of-Lloyds-Cyber-Report-on-the-Insurance-Implications-of-Cyberattack-on-the-US-Grid.html Fri, 31 Jul 2015 04:43:59 -0500 The recent Lloyd’s report on the cyber implications of the electric grid serves a very important need: to understand the insurance implications of a cyber attack against the electric grid. There have already been more than 250 control system cyber incidents in the electric industry, including 5 major cyber-related electric outages in the US. There have been numerous studies on the economic impact of various outage durations, but they have not addressed issues associated with malicious causes. Consequently, there is a need to address the missing “malicious” aspects of grid outages. Unfortunately, I believe the technical aspects of the hypothesized attack in the Lloyd’s study are too flawed to be used.

According to the Lloyd’s report, “the Erebos Cyber Blackout Scenario is an extreme event and is not likely to occur. The report is not a prediction and it is not aimed at highlighting particular vulnerabilities in critical national infrastructure. Rather, the scenario is designed to challenge assumptions of practitioners in the insurance industry and highlight issues that may need addressing in order to be better prepared for these types of events…. On the given day, the malware is activated and 50 generators are damaged in rapid succession.”

The Erebos Cyber Blackout Scenario is essentially the Aurora vulnerability combined with the 2003 Northeast outage. Following the 2007 Idaho National Laboratory Aurora test, CNN published an unclassified report on the Aurora test (http://www.cnn.com/2007/US/09/26/power.at.risk/index.html).

Aurora is not malware but a physical gap in the protection of the electric grid that causes an out-of-phase condition. Out-of-phase conditions are a known problem for grid equipment, and the IEEE has a committee dedicated to them. Consequently, it shouldn’t be that difficult to understand what happened to the equipment, though it may be very difficult to identify attribution.

The classified Aurora information was declassified in July 2014 and is available on a number of hacker websites. Without the specific Aurora hardware mitigation that very few utilities have employed, Aurora can damage or destroy generators, transformers, and rotating AC equipment connected to the affected substations. Damaging generators or other large equipment is very expensive and can take a significant amount of time and resources to repair or replace. This could be as long as many months to recover assuming the equipment is available, appropriate staff is available to make the repairs or replacements, and transportation can be arranged.

With 50 generators damaged (no mention of transformers which would also be damaged by an Aurora event), the probability that equipment and trained staff will be available on-site on a timely basis is rather low. The 2003 Northeast Outage was only 2-3 days because there was no damage to generators or other critical equipment. With 50 generators damaged, the probability that the grid will be available in 7 days, or even a few weeks, is really, really low.

There are other questions the report did not address. Were all of the generators from one utility or even one region?  That would help identify the potential geographic scope of the outage. How large were each of the generators? Depending on the size of the generator, there may not be requirements for any cyber protection or cyber monitoring. The same goes for the nearby substations connected to the generators. With no cyber monitoring, how will you have any attribution?

Several years ago, I participated in a NERC High Impact/Low Frequency (HILF) workshop. I believe the “Erebos” event could be a High Impact event because of its potential impact to the grid. However, because of the declassified DHS information, I do not believe it is a Low Frequency event.

As the report states, “a cyber attack of this severity is an unlikely occurrence, but we believe that it is representative of the type of extreme events that insurers should assess in order to understand potential exposures.” As mentioned, Aurora has been public since 2007 with the details unclassified in 2014. Based on actual control system cyber events that have already occurred and available knowledge of hacking control systems, I believe the grid and other critical infrastructures are at considerable risk to “frequent” cyber threats.

There is a need for the insurance industry to quantify control system cyber security risks to critical infrastructures. Unfortunately, the technical basis for the Lloyd’s case badly misses the boat.

Copyright 2010 Respective Author at Infosec Island
Debunking Myths: Application Security Checklists Suck https://www.infosecisland.com/blogview/24629-Debunking-Myths-Application-Security-Checklists-Suck.html Fri, 31 Jul 2015 04:38:30 -0500 There is a pervasive sentiment amongst the security community about checklists: they suck. We’ve all seen inflexible audit checklists that seem to be highly irrelevant to the specific system being audited.

Moreover, we are all too aware of organizations doing the bare minimum to meet a checklist item on an audit report, even at the expense of achieving *real* security. It’s no wonder that so many IT professionals react with disdain when they see yet another checklist. They are often generic because auditors want an extensible assessment process that doesn’t become out of date when technologies change. Other times they are rigid or outdated because they are too low level and don’t allow for compensating controls.

The problem is, not all checklists suck.

Dr. Atul Gawande and his team have shown that a simple 6-step checklist can reduce deaths due to surgical defects by over 40%. It’s not only surgery; Dr. Gawande found that pilots, engineers and investment fund managers have all recorded measurable gains from using checklists. If checklists are so successful in other industries, why do people abhor them in software security? The reasons are simple:  

  1. Software is dynamic. Each application is unique, and a “one size fits all” checklist just won’t apply to different types of software with different features. For example, a security audit checklist for an online banking web application won’t reflect the risks of a mobile healthcare application. Effective software security checklists themselves need to be dynamic and tailored to the threats of a specific application.
  2. Technology changes rapidly. A static technical checklist can fall out-of-date quickly. Development teams may think checklists that reference old technology are outdated and not applicable to their projects. Effective software security checklists need to be fluid and updated with changes to technology.
  3. Generic isn’t good enough. A simple generic checklist just doesn’t cut it. For example, many organizations refer to the OWASP Top 10 as a simple software security checklist. Digging beneath the surface, the  OWASP Top 10 simply enumerates major classes of application security risks. Digging deeper, a risk like “A3: Broken Authentication and Session Management” can mean over a dozen different requirements for a single application. Only experienced assessors understand the lower-level, technical risks contained within “A3: Broken Authentication and Session Management” and there may even be discrepancies between assessors. Effective software security checklists need to be specific.
  4. Process overhead. Developers working in large companies may already be flooded with process checklists. They often see a new checklist as another distraction from actually building software. At the same time, most developers don’t see functional requirements in the same light as checklists. That’s because requirements lists are made specifically for an application and communicate needs from stakeholders. No software security checklist is as effective as a tailored set of security requirements.

Dynamic, low-level, tailored software security requirements are more effective than any static, generic software security checklist. In fact, effectively designed software security requirements can predict 97% of common software security flaws.  

Cross-posted from the SC Labs blog.  

Copyright 2010 Respective Author at Infosec Island
How to Tell a Landscaper From a Thief https://www.infosecisland.com/blogview/24626-How-to-Tell-a-Landscaper-From-a-Thief.html Mon, 20 Jul 2015 21:11:00 -0500 If I can see a person standing in front of a neighboring house inspecting the windows and the doors, should I call the police?

Maybe it is an air-conditioning technician looking for the best place to install a new air-conditioning unit, or maybe it is a robber doing reconnaissance and checking for the easiest way to get into the house. It is hard to tell!

Now what if I can see a user sending requests to non-existing pages in my application? 

Maybe these are broken links created mistakenly by that user, or maybe this is attack reconnaissance: pre-attack activity carried out by a malicious user. It is also hard to tell!

A key objective for any security team is to make sure that organizational assets -- whether these are servers, applications or data -- are protected. Therefore, preliminary attack reconnaissance activity that targets non-existing assets may be casually dismissed due to a lack of interest, human resources or even proper security controls. Moreover, attack reconnaissance activity may look like legitimate user traffic when inspected in the wrong context.

From a threat intelligence point of view, this casually dismissed attack reconnaissance should be considered valuable information and treated as such. In many cases this reconnaissance activity is the first, or even the only, opportunity to detect malicious activity before it slips under the detection radar.

This article presents an example of how threat intelligence analysis that utilizes a cloud network and inspects requests for non-existing Web application pages can help predict upcoming brute force attacks. In other words, how to catch a robber just before he tampers with the door lock.

The Good, The Bad and The Ugly

Brute force Web attackers attempt to gain privileged access to a Web application by sending a very large set of login attempts. In many cases brute force attacks will start with a preliminary reconnaissance process of finding the login pages in the targeted Web application.

While trying to find those login pages, attackers will use a dictionary of possible login pages. Not all of these login pages exist on the targeted Web application; therefore accessing those non-existent pages will result in a Web application failure.

There are 3 questionable scenarios for failures: Good, Bad and Ugly. 

The Good

Q: What if we detect someone trying to access a non-existing page “login.aspx” on the Web application?

A: If it happened just once, this kind of activity should be ignored. It is possible that it is a mistake made by a legitimate user trying to access the wrong page. There is not enough information to determine that this is a malicious attempt.

The Bad

Q: What if we detect many attempts to access different files on the same Web application (“login.aspx”, “log-in.asp”…), all resulting in failure?

A: It seems this attacker is looking for the login page of the Web application and is just one step from launching a brute force attack. The attacker may use a “slow & low” attack technique in order to evade detection by security controls.

The Ugly 

Q: What if we detect “bad” activity of many attempts to access different files, but this time across many different Web applications?

A: It seems the attacker is planning to launch a distributed, multi-target attack against many applications, executing reconnaissance on several Web applications at the same time in order to scale up and increase the attack surface.

Learning from the cross-targeted activity of the attack leads us to one unavoidable conclusion – it is going to be Ugly!
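To make the Good/Bad/Ugly distinction concrete, here is an illustrative Python sketch; the log format, path hints and thresholds are assumptions made for the example rather than part of the original analysis. It scores each client IP by its failed requests to login-like pages, within one application and across many:

from collections import defaultdict

LOGIN_HINTS = ("login", "log-in", "signin", "admin")  # assumed dictionary hints

def classify(log_records, bad_threshold=5, ugly_threshold=3):
    """log_records: iterable of (client_ip, host, path, status) tuples."""
    misses = defaultdict(set)  # ip -> set of (host, path) pairs that returned 404
    for ip, host, path, status in log_records:
        if status == 404 and any(hint in path.lower() for hint in LOGIN_HINTS):
            misses[ip].add((host, path))

    verdicts = {}
    for ip, hits in misses.items():
        hosts = {host for host, _ in hits}
        if len(hosts) >= ugly_threshold:
            verdicts[ip] = "Ugly: reconnaissance across many applications"
        elif len(hits) >= bad_threshold:
            verdicts[ip] = "Bad: likely pre-brute-force reconnaissance"
        else:
            verdicts[ip] = "Good: probably a user mistake"
    return verdicts

# Example: one IP probing login pages on three different sites is flagged "Ugly".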

Summary

If the attacker knows the location of the login page, looking at failures in the reconnaissance activity won’t work, but everybody makes mistakes – even (especially) attackers. It is up to security teams to be patient and attentive in detecting similar failures and mistakes, leading to the detection of a variety of web attacks.

The accuracy of combining those reconnaissance activities into reliable insights relies on the diversity of the analyzed data. In the example presented above, the reconnaissance across many different Web applications was the differentiator in the threat intelligence analysis. Therefore, it is only natural that cloud networks should turn the rich, diverse and continuous data streaming through their infrastructure into threat intelligence.

And when a suspicious person is wandering around your neighborhood inspecting the doors and windows of houses, it is time to call the police!

About the Author: Or Katz is a security researcher at Akamai Technologies.

Copyright 2010 Respective Author at Infosec Island
Universities at risk of Data Breaches: Is it Possible to Protect Them? https://www.infosecisland.com/blogview/24625-Universities-at-risk-of-Data-Breaches-Is-it-Possible-to-Protect-Them.html Fri, 17 Jul 2015 09:49:55 -0500 Harvard University recently announced that on June 19 an intrusion into the Faculty of Arts and Sciences and Central Administration information technology networks was discovered. According to the announcement on Harvard’s website, this breach affected eight different schools and is thought to have exposed students’ log-in credentials. University IT staff denied that any personal data or information from the internal email system had been exposed.

An advisory on the website urges people affiliated with the affected institutions to change their passwords. A further password change could be required soon as part of the security measures to protect Harvard’s systems.

It is not the first time Harvard has been hacked. Earlier this year the AnonGhost group hacked the website of the Institute of Politics at Harvard, and in 2012 Harvard was attacked by the GhostShell team, which also took responsibility for hacking the servers of 100 major universities, including Stanford, the University of Pennsylvania and the University of Michigan.

Higher education is certainly one of the industries most commonly targeted by cyber-attacks. The increased attention to the security of educational institutions derives from the fact that universities are generally less secure than enterprises, while college ERP systems contain data that is no less valuable, and often in greater volume, which means a large number of potential victims in a single attack. The detailed reasons why both cybercriminals and security specialists focus on this area are described below.

Why are university systems a perfect target for cyber-attacks?

The first and main reason lies in the environment of campus systems. University networks have a large number of users: thousands of freshmen arrive every year, and it is hard to imagine any business hiring that many new employees on a regular basis. College systems store personal information, payment information, and medical records of current and former students and employees, and such a large amount of sensitive information always attracts attempts to steal it. Moreover, the exposure of this information may have long-term consequences, as some students of the top universities are likely to hold key positions in the near future.

University systems also supported BYOD (bring your own device) long before the term appeared in the business world. Students are quick to adopt the latest technologies, and file sharing, social media, and adult content are common sources of malware and viruses. If a student’s device synced with the college network is compromised, it is not only the student who is affected but the university as well. You can find more information on mobile application security and mobile device management security in our article.

Universities have to provide easy access to their systems for all these students and personnel, which makes incident investigation more difficult than it is in a typical business environment.

Finally, such systems can store not only educational and personal information but also governmental and even military research materials. University systems are therefore an attractive target for state-sponsored hackers, as this data can be used for industrial or state espionage.

What happened? Was Harvard breached via a vulnerability in PeopleSoft?

Harvard has not disclosed any technical details about the breach, so it is fertile ground for speculation and baseless conclusions. The only things we can say for sure are that PeopleSoft applications are installed in multiple Harvard colleges (as is known from public sources) and that real examples of universities being hacked via PeopleSoft vulnerabilities have occurred in the last few years.

Several cases of data breaches related to vulnerabilities in Oracle PeopleSoft applications have been reported in the media since 2007, when two students faced 20 years in prison after they hacked a California state university’s PeopleSoft system. In August 2007, three students installed keylogging software on computers at Florida A&M University and used the passwords they gleaned to gain access to the school’s PeopleSoft system and modify grades. In 2012, a student at the University of Nebraska was able to break into a database associated with the university's PeopleSoft system, exposing Social Security numbers and other sensitive information on about 654,000 students, alumni and employees. In March 2013, Salem State University in Massachusetts alerted 25,000 students and staff that their Social Security numbers may have been compromised in a database breach. And this is far from the full list of university attacks, and these are only the incidents involving PeopleSoft systems.

PeopleSoft systems are widely used in higher education; they are implemented in more than 2,000 universities and colleges around the world. ERPScan’s research revealed that 236 servers related to universities are accessible on the internet (including the Harvard server). This means that at least 13% of universities running PeopleSoft have systems accessible from the Internet, compared with about 3-7% of enterprises, depending on the industry. 78 of these universities are vulnerable to the TokenChpoken attack presented at the Hack In Paris conference by Alexey Tyurin, and 7 of them are among America’s top 50 colleges according to Forbes, making them a real treasure for cybercriminals.

The TokenChpoken attack allows an attacker to find the correct key for the authentication token, log in under any account and gain full access to the system. In most cases, it takes no more than a day to brute-force the token key using a dedicated program running on a recent GPU that costs about $500. It is almost impossible to identify that this attack has taken place, as the attacker uses common, legitimate system functionality: he brute-forces the token password remotely, after downloading a token from a web page, and then all he needs to do is log in to the system.
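As a rough back-of-the-envelope illustration of why a single consumer GPU is enough, the short calculation below estimates how quickly an assumed SHA-1 guessing rate exhausts a short alphanumeric password. The hash rate, alphabet and length limit are assumptions for illustration only, not figures from the Hack In Paris presentation.

    # Back-of-the-envelope estimate of offline brute-force time for a short
    # password. All numbers below are assumptions for illustration.
    GPU_GUESSES_PER_SECOND = 2_000_000_000   # ~2 billion SHA-1 guesses/s, assumed
    ALPHABET_SIZE = 36                       # lowercase letters plus digits, assumed
    MAX_LENGTH = 8                           # assumed maximum password length

    keyspace = sum(ALPHABET_SIZE ** n for n in range(1, MAX_LENGTH + 1))
    seconds = keyspace / GPU_GUESSES_PER_SECOND

    print(f"Keyspace: {keyspace:,} candidates")
    print(f"Worst-case time: {seconds / 3600:.1f} hours")

Under these assumptions the whole keyspace falls in well under a day; a longer or more complex key pushes the cost up by orders of magnitude, while a default or trivial one (as in the 12 universities mentioned below) removes the cost entirely.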

Other universities (besides the 78 mentioned above) are also potentially vulnerable, but in their case only users with access to the internal university PeopleSoft system can exploit the vulnerability and obtain administrative rights.

Moreover, 12 universities still use a default password for the token, so even an unskilled attacker can successfully perform the attack.

What should we learn from the hacks?

First, we should admit that higher education institutions face risks that can actually result in espionage, blackmail, and fraud.

PeopleSoft is clearly the leader in higher education, though there are other university ERP vendors such as Three Rivers Systems, Ellucian, Jenzabar, Redox, and others.

University networks are complex, consist of numerous modules and contain numerous vulnerabilities, so protecting them can seem like a nightmare for any IT team. Cybersecurity is not a set of separate steps taken from time to time but an ongoing process. Of course, no one can prevent every threat and attack, so safety lies in continuous monitoring and mitigation of risks.

Awareness of Oracle PeopleSoft security is even lower than awareness of SAP security, where a lack of awareness also exists but is gradually decreasing. As for PeopleSoft, there are real examples of vulnerabilities and breaches, yet hardly anyone pays attention to them.

Related Reading: Many Organizations Using Oracle PeopleSoft Vulnerable to Attacks

Copyright 2010 Respective Author at Infosec Island]]>
Understanding the Strengths and Limitations of Static Analysis Security Testing (SAST) https://www.infosecisland.com/blogview/24620-Understanding-the-Strengths-and-Limitations-of-Static-Analysis-Security-Testing-SAST.html https://www.infosecisland.com/blogview/24620-Understanding-the-Strengths-and-Limitations-of-Static-Analysis-Security-Testing-SAST.html Fri, 17 Jul 2015 09:44:21 -0500 Many organizations invest in Static Analysis Security Testing (SAST) solutions like HP Fortify, IBM AppScan Source, Checkmarx or Coverity to improve application security. Properly used, SAST solutions can be extremely powerful: they can detect vulnerabilities in source code during the development process rather than after it, thereby greatly reducing the cost of fixing security issues compared with dynamic analysis or run-time testing. They can also discover kinds of vulnerabilities that dynamic analysis tools are simply incapable of finding. Because they are automated, SAST tools can scale across hundreds or thousands of applications in a way that is simply impossible with manual analysis alone.

After investing in SAST, some organizations refrain from making further investments in application security. Stakeholders in these organizations often believe that static analysis covers the vast majority of software security weaknesses, or that it covers the most important high-risk items like the OWASP Top 10 and is therefore “good enough”. Instead of building security into software from the start, these organizations are content with getting a “clean report” from their scanning tools before deploying an application to production. This mindset is highly risky because it ignores the fundamental limitations of SAST technologies.

The book “Secure Programming with Static Analysis” describes the fundamentals of static analysis in detail. The book’s authors, Brian Chess and Jacob West, were two of the key technologists behind Fortify Software, which was later acquired by HP. In the book, the authors state that “half [of security mistakes] are built into the design” of the software, rather than the code. They go on to list classes of software security issues, including context-specific defects that are visible in code, and … They also acknowledge that “no one claims that source code review is capable of identifying all problems”.

Static analysis tools are complex. To function properly they need a semantic understanding of the code, its dependencies, configuration files, and many other moving pieces that may not even be written in the same programming language. They must do this while juggling speed against accuracy and keeping the number of false positives low enough to be usable. Their effectiveness is greatly challenged by dynamically-typed languages like JavaScript and Python, where simply inspecting an object at compile time may not reveal its class or type. This means that finding many software security weaknesses is either impractical or impossible for these tools.
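As a contrived illustration of that last point (it is not taken from any particular scanner's documentation), the following Python snippet resolves its handler class at run time from the request itself. A static tool that cannot resolve the globals() lookup either misses the command-injection path in ShellHandler or has to flag every such call site, which is exactly the accuracy-versus-false-positive trade-off described above.

    import subprocess

    class SafeHandler:
        def process(self, value):
            # Harmless: simply echoes the value back to the caller.
            return f"echoing {value!r}"

    class ShellHandler:
        def process(self, value):
            # Dangerous sink: the value is interpolated into a shell command.
            result = subprocess.run(f"grep {value} /var/log/app.log",
                                    shell=True, capture_output=True, text=True)
            return result.stdout

    def handle(request):
        # Which class handles the request -- and therefore whether tainted data
        # ever reaches the shell -- is only known at run time.
        handler = globals()[request["handler"]]()
        return handler.process(request["query"])

    print(handle({"handler": "SafeHandler", "query": "status=200"}))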

The NIST SAMATE project sought to measure the effectiveness of static analysis tools to help organizations improve their use of the technology. They performed both static analysis and manual source code review on open source software packages and compared the results. Their analysis showed that between one-eighth and one-third of all discovered weaknesses were “simple”. They further discovered that the tools only found “simple” implementation bugs and did not find any vulnerability requiring a deep understanding of code or design. When run on the popular open source tool Tomcat, the tools produced warnings for only 4 of the 26 (15.4%) Common Vulnerabilities and Exposures entries. These statistics mirror Gartner’s findings in the paper “Application Security: Think Big, Start with What Matters”, in which the authors suggest that “anecdotally it is believed that SAST only covers up to 10% to 20% of the codebase DAST another 10% to 20%”. To put this in perspective, if an organization had built a tool like Tomcat itself and run it through static analysis as its primary approach to application security, it would be deploying an application with 22 of the 26 vulnerabilities left intact and undetected.

Dr. Gary McGraw classifies many of the kinds of security issues that static analysis cannot find as flaws rather than bugs. While the nature of flaws varies by application, some of the kinds of issues that static analysis is not reliably capable of finding include (the first item is illustrated in the sketch after this list):

  • Storage and transmission of confidential data, particularly when that data is not programmatically discernible from non-confidential data
  • Issues related to authentication, such as susceptibility to brute force attacks, effectiveness of password reset, etc.
  • Issues related to entropy for randomization of non-standard data
  • Issues related to privilege escalation and insufficient authorization
  • Issues related to data privacy, such as data retention and other compliance (e.g. ensuring credit card numbers are masked when displayed)
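To illustrate the first bullet, here is a small hypothetical example. The code is syntactically clean, touches no classic dangerous sink and would pass most scanners, yet it writes a free-text field into an ordinary application log. Whether the "notes" field routinely carries card numbers or health details is a design-level, data-classification question that no pattern in the code itself reveals.

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("orders")

    def record_order(customer_id: str, notes: str) -> None:
        # No SQL, no shell, no deserialization: nothing here matches a classic
        # vulnerability signature. But if `notes` contains confidential data,
        # this log line is a data-handling flaw that only a reviewer who knows
        # the data classification can spot.
        logger.info("order for %s accepted, notes=%s", customer_id, notes)

    record_order("cust-42", "deliver after 6pm, card ending 4242 on file")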

Contrary to popular belief, many of the coverage gaps of static analysis tools carry significant organizational risk. This risk is compounded by the fact that organizations may not always have access to source code, that the SAST tool may be incapable of understanding a particular language or framework, and by the challenge of simply deploying the technology at scale and dealing with false positives.

While static analysis is a very valuable technology for secure development, it is clearly no substitute for building applications with security in mind from the start. Organizations that embed security into the requirements and design and then validate with a combination of techniques including static analysis are best positioned to build robust secure software.  

Cross-posted from the SC Labs blog.  

Copyright 2010 Respective Author at Infosec Island]]>
Cloud Security: It’s in the Cloud - But Where? (Part III) https://www.infosecisland.com/blogview/24622-Cloud-Security-Its-in-the-Cloud-But-Where-Part-III.html https://www.infosecisland.com/blogview/24622-Cloud-Security-Its-in-the-Cloud-But-Where-Part-III.html Mon, 06 Jul 2015 09:59:00 -0500 In Part II, I discussed how organizations can enable cloud resilience and why it’s necessary to secure the cloud provider.

Today, let’s look at the need to institute a cloud assessment process and the four actions that organizations of all sizes can take to better prepare themselves as they place their sensitive data in the cloud.

While the cost and efficiency benefits of cloud computing services are clear, organizations cannot afford to delay getting to grips with their information security implications. In moving their sensitive data to the cloud, all organizations must know whether the information they are holding about an individual is Personally Identifiable Information (PII) and, if so, whether it has adequate protection.

There are many types of cloud-based services and options available to an organization. Each combination of cloud type and service offers a different range of benefits and risks to the organization. Privacy obligations do not change when using cloud services – and therefore the choice of cloud type and cloud service requires detailed consideration before being used for PII.

Unfortunately, there is often a lack of awareness of information risk when moving PII to cloud-based systems. In particular, business users purchasing a cloud-based system often have little or no idea of the risks they are exposing the organization to and the potential impact of a privacy breach. In some cases, organizations are unaware that information has been moved to the cloud. Other times, the risks are simply being ignored. This is at a time when regulators, media and customers are paying more attention to the security of PII.

Here are four key issues:

  • Business users often have little or no knowledge of privacy regulation requirements because privacy regulation is a complex topic which is further complicated by the use of the cloud
  • Business users don’t necessarily question the PII the application will collect and use
  • Business users rarely consider cloud-based systems to be different from internal systems from a security perspective, and thus expect them to have the same level of protection built in
  • Application architects and developers often collect more PII than the applications need.

These issues often expose the organization to risks that could be completely avoided or significantly reduced.

The Cloud Assessment Process

Not to sound like a broken record, but putting private information into the cloud inevitably creates risk, and that risk must be understood and managed properly. Organizations may have little or no visibility over the movement of their information, as cloud services can be provided by multiple suppliers moving information between data centers scattered around the world. If the data being moved is subject to privacy regulations and the data centers are in different jurisdictions, this can trigger additional regulations or result in a potential compliance breach.

The decision to use cloud systems should be accompanied by an information risk assessment that’s been conducted specifically to deal with the complexities of both cloud systems and privacy regulations; it should also be supported by a procurement process that helps compel the necessary safeguards. Otherwise, the relentless pressure to adopt cloud services will increase the risk that an organization will fail to comply with privacy legislation.

The objective of the ISF cloud assessment process is to determine whether a proposed cloud solution is suitable for business-critical information. When assessing risk, here are a few questions that you should ask of your business (a simple way of recording the answers is sketched after the list):

1. Is the information business critical?
2. Where is it?
3. What is the potential impact?
4. How will it be used?
5. How does it need to be protected?
6. What sort of cloud will be used?
7. How will the cloud provider look after it?
8. How will regulatory requirements be satisfied?
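As a purely illustrative sketch (the question set comes from this article, not from an official ISF artifact), the answers can be captured as a simple structured record so that every proposed cloud service is assessed and compared in the same way:

    # Illustrative only: a lightweight record of the eight assessment questions,
    # so answers can be stored, compared and revisited for each cloud proposal.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class CloudAssessment:
        business_critical: bool
        data_location: str            # where the information will reside
        potential_impact: str         # e.g. regulatory fine, reputational damage
        intended_use: str
        protection_required: str      # e.g. encryption at rest, access logging
        cloud_type: str               # e.g. public SaaS, private IaaS
        provider_controls: str        # how the provider will look after the data
        regulatory_requirements: str  # which jurisdictions and regulations apply

    assessment = CloudAssessment(
        business_critical=True,
        data_location="EU data centers only (assumed requirement)",
        potential_impact="privacy breach affecting customer PII",
        intended_use="HR records processing",
        protection_required="encryption at rest and in transit, role-based access",
        cloud_type="public SaaS",
        provider_controls="ISO 27001 certified, contractual breach notification",
        regulatory_requirements="EU and local data protection law",
    )

    print(json.dumps(asdict(assessment), indent=2))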

Managing information risk is critical for all organizations to deliver their strategies, initiatives and goals. Consequently, information risk management is relevant only if it enables the organization to achieve these objectives, ensuring it is well positioned to succeed and is resilient to unexpected events. As a result, an organization’s risk management activities – whether coordinated as an enterprise-wide program or at functional levels – must include assessment of risks to information that could compromise success.

Better Preparation

Demand for cloud services continues to increase as their benefits change the way organizations manage their data and use IT. Here are four actions that organizations of all sizes can take to better prepare:

  • Engage in cross-business, multi-stakeholder discussions to identify cloud arrangements
  • Understand clearly which legal jurisdictions govern your organization’s information
  • Adapt existing policies and procedures to engage with the business
  • Align the security function with the organization’s approach to risk management for cloud services

With increased legislation around data privacy, the rising threat of cyber theft and the simple requirement to be able to access your data when you need it, organizations need to know precisely to what extent they rely on cloud storage and computing.

But remember: privacy obligations don’t change when information moves into the cloud. This means that most organizations’ efforts to manage privacy and information risk can be applied to cloud-based systems with only minor modifications, once the cloud complexity is understood. This can provide a low-cost starting point to manage cloud and privacy risk.

Copyright 2010 Respective Author at Infosec Island]]>
Challenges and Solutions of Threat and Vulnerability Sharing in 2015 https://www.infosecisland.com/blogview/24621-Challenges-and-Solutions-of-Threat-and-Vulnerability-Sharing-in-2015.html https://www.infosecisland.com/blogview/24621-Challenges-and-Solutions-of-Threat-and-Vulnerability-Sharing-in-2015.html Mon, 29 Jun 2015 11:40:00 -0500 The Evolution of Information Sharing for Banks

Overcoming the challenges that information sharing presents will require greater collaboration across the financial industry and a focus on combined efforts rather than individual protection

The concept of threat and vulnerability sharing is not new. The practice has been around for decades now, taking on various forms. But the cyber-attacks on JPMorgan Chase and several other financial institutions this past year have caused a major push for improved bank security in 2015.

Information sharing programs should reach across sectors to increase accessibility and enhance the conversation between different companies about emerging cybersecurity threats and security enhancements. When an embarrassing breach occurs, the last thing a bank wants to do is share the details and appear vulnerable to its competitors, but doing so will in the end help prevent further attacks across the entire financial sector.

Although more banks are starting to share cybersecurity threats with peer institutions, several challenges still stand in the way of adopting the practice as a standard. Institutions need to take the initiative and get involved with information sharing programs such as Soltra Edge and FS-ISAC, a financial services information sharing organization sponsored by the federal government. Banks should continue to contribute to and involve themselves in these types of initiatives in order to be proactive about their data safety in the future.

Data collaboration has evolved over time from loose relationships into more formal methods of communication between humans and machine-assisted system updates. New efforts are changing operations, especially the automation and reporting process, which is becoming more machine centric. Much like the IPS movement a decade ago took the alerts from IDS systems and acted on them, this will allow large organizations to recognize threats rapidly. These tools enable quicker detection, allowing banks to combine efforts and identify single actors that are affecting several different financial institutions. By shifting their security vision to encompass the entire ecosystem, banks will see their competitors as peers and work collaboratively to eliminate cyber attacks as a whole. As offensive tools are used, this type of coordination can help shrink their useful life span from hours and days to seconds.
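As a simplified, library-free sketch of what “machine centric” sharing makes possible (real exchanges such as Soltra Edge use richer structured formats like STIX over TAXII), a bank can match indicators published by a peer against its own activity within seconds of receiving them. The records below are made-up examples.

    # Minimal sketch: matching peer-shared indicators against local events.
    shared_indicators = [
        {"type": "ip", "value": "198.51.100.23", "source": "peer-bank-A"},
        {"type": "sha256", "value": "ab12" + "0" * 60, "source": "peer-bank-B"},
    ]

    local_events = [
        {"kind": "outbound-connection", "ip": "198.51.100.23", "host": "teller-07"},
        {"kind": "file-execution", "sha256": "ffff" + "0" * 60, "host": "atm-gw-02"},
    ]

    def match(indicators, events):
        wanted = {(i["type"], i["value"]): i["source"] for i in indicators}
        hits = []
        for event in events:
            for field in ("ip", "sha256"):
                if field in event and (field, event[field]) in wanted:
                    hits.append((event["host"], field, event[field],
                                 wanted[(field, event[field])]))
        return hits

    for host, field, value, source in match(shared_indicators, local_events):
        print(f"ALERT {host}: {field} {value} matches an indicator shared by {source}")

The faster such a match can be made and pushed back to the community, the shorter the useful life span of the attacker’s tools and infrastructure.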

Many people have raised questions about government regulation and whether or not officials are doing enough to protect banks in the coming year. Although Congress has made failed attempts at passing legislation that would encourage information sharing among banks, financial institutions need to be less reliant on the Fed and more reliant on themselves for security. When government regulates or legislates technical solutions, it can dampen innovation in finding new ways to handle problems directly and can even create unintended consequences. At the same time, too many companies keep details of breaches to themselves, making these attacks effective for a longer span of time. Fortunately, more and more banks are seeing the value of this information and offering up their threat and vulnerability experiences more willingly. Industry and government can foster this by supporting information release and helping set up trusted forums.

In 2015, we should see a surge in new cooperative efforts among concerned companies and government organizations to share details and act to put an end to some of the more persistent problems in cybersecurity today. In many cases, the same malware is being used to attack different institutions across all sectors, and sharing this information could help protect an extremely wide range of companies. The vision for cybersecurity in the financial sector in 2015 should be collaborative and proactive. The financial industry should embrace and adopt information sharing practices and take back control from threat actors, which will lead to less devastating breaches and cyber attacks. With continued support, similar collaborative efforts will use information sharing to help level the cyber playing field.

About the Author: Shawn Masters is Senior Technical Director, Novetta Solutions

Copyright 2010 Respective Author at Infosec Island]]>
Enterprises See 30 Percent Rise in Phone Fraud: Report https://www.infosecisland.com/blogview/24619-Enterprises-See-30-Percent-Rise-in-Phone-Fraud-Report.html https://www.infosecisland.com/blogview/24619-Enterprises-See-30-Percent-Rise-in-Phone-Fraud-Report.html Thu, 25 Jun 2015 12:57:17 -0500 Based on data from its “telephony honeypot,” anti-fraud company Pindrop Security has determined that the number of scam calls aimed at enterprises has increased by 30 percent since 2013.

According to the State of Phone Fraud 2014-2015 report published by Pindrop on Wednesday, financial institutions are the most attractive target for fraudsters.

The company says card issuers are the most impacted, with a fraud call rate of 1 in every 900 calls. Banks and brokerages report fraud call rates of 1 in every 2,650 and 1 in every 3,000 calls, respectively.

“The higher rate of phone fraud for card issuers can be attributed to the fact that credit cards are one of the most common ways for the public to complete transactions, and thus card numbers are at greater risk for theft,” Pindrop noted in its report. “Compared to credit card numbers, banking or brokerage account numbers are less widely distributed. The less an account number is distributed across channels, the less likely it is to be at risk for fraud.”

Financial institutions risk exposing an average of $7-15 million to phone fraud every year, the report shows.

Retailers, including online retailers, are also targeted by scammers. The average fraud call rate in the case of retailers is 1 in 1,000 calls, but the rate increases for products that are easy to resell, Pindrop Security noted.

Consumers are an attractive target for phone scammers. In the United States, consumers receive 86.2 million scam calls every month, with 2.5 percent getting at least one robocall each week. Pindrop says 36 million of the scam calls made to US consumers can be traced to one of the 25 most common schemes, such as technical support and IRS scams.

Read the rest of this article on SecurityWeek.com.

Copyright 2010 Respective Author at Infosec Island]]>