Infosec Island Latest Articles (https://www.infosecisland.com)

Weak SOHO Router Default Passwords Leave Tens-of-Thousands at Risk
https://www.infosecisland.com/blogview/24468-Weak-SOHO-Router-Default-Passwords-Leave-Tens-of-Thousands-at-Risk.html
Sun, 19 Apr 2015 22:04:46 -0500

Security researcher Viktor Stanchev has publicly disclosed that Bell’s SOHO modem/routers are shipped with extremely weak default passwords that can be cracked in a matter of days, leaving tens of thousands of users at risk of network intrusion and sensitive data loss.

Stanchev says that the Bell modems are equipped with router features and ship pre-configured with WPA wifi enabled. Each unit carries a sticker that tells the user the SSID and default password: the SSID is shown as BELLXXX – where XXX is a three-digit number – and the password as eight hexadecimal characters, each with only 16 possible values.

“It’s easy to calculate the total possible passwords. They are 16 ^ 8 ~= 4 billion. Naturally, I fired up HashCat to see how many WPA passwords I could guess per second. Based on a 4 year old article 100,000 hashes/second is the speed a reasonable attacker could guess hashes at,” Stanchev wrote.

“This means that it would take less than 12 hours to crack with a good graphics card. My mid-range graphics card can guess 13,000 hashes per second. In theory, it should take up to 4 days to guess the password. In practice, it took me three days.”
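For readers who want to sanity-check the arithmetic, here is a minimal Python sketch that reproduces the numbers quoted above (the hash rates are the figures Stanchev cites, not measurements of any particular card):

    # Keyspace for an 8-character hexadecimal WPA passphrase.
    keyspace = 16 ** 8                        # 16 possibilities per character
    print(f"total candidates: {keyspace:,}")  # 4,294,967,296 (~4.3 billion)

    # Worst-case crack times at the two hash rates quoted in the article.
    for label, rate in [("good GPU, 100,000 H/s", 100_000),
                        ("mid-range GPU, 13,000 H/s", 13_000)]:
        seconds = keyspace / rate
        print(f"{label}: up to {seconds / 3600:.1f} hours ({seconds / 86400:.1f} days)")

The output matches the quoted estimates: roughly 12 hours at 100,000 hashes per second and a little under four days at 13,000.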

While users do have the option to replace the weak default passwords with stronger ones that are less likely to be cracked, the problem is that statistically a large percentage of users do not do so, leaving them vulnerable to attack.

Once an attacker has gotten onto the wifi network, they can use the free Internet connection, they can use the target’s source IP address to launch attacks against websites, and they can download illegal content or post threats, implicating the owner of the device.

In most cases they can also log into the router itself, because the default username/password on the management interface is usually admin/admin. Once logged in, they can change the DNS servers, flash the firmware, and alter any other settings, including the wifi password.

They can also “perform man-in-the-middle attacks using ARP spoofing or various methods available if they have the username and password for the router… They can backdoor any executable downloaded from the Internet and take over any of the machines connected to the network, they can downgrade HTTPS connections to HTTP, [and] they can replace TLS certificates and intercept traffic if the user clicks through the error,” Stanchev warned.

Stanchev first alerted Bell to his concerns on March 5th and made multiple attempts to get in contact with officials at the company, which finally resulted in brief conversations with @Bell_Support, but he never received any official statement on the issue, prompting him to disclose the vulnerability publicly.

“I hope you fix this insecure default. I don’t think there is any cheap way to do that at this point. Maybe you have omniscient backdoor access into the routers. If so, you can use that to get a list of customers who are using the default passwords and call them to make them set their own SSIDs and passwords,” Stanchev said in a statement directed to Bell.

“For new routers you need to increase the character set from 16 to 62 (upper case, lower case, numbers) and the length to 10 to get 62 ^ 10 = 800 quadrillion possible passwords. While you are at it, make sure you have a good source of entropy when generating the passwords.”
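A minimal sketch of that recommendation in Python, assuming nothing about Bell’s actual provisioning process; it simply draws a 10-character alphanumeric password from a cryptographically secure source and shows the keyspace difference:

    import secrets
    import string

    # Keyspace comparison: 8 hex characters vs. 10 alphanumeric characters.
    print(f"hex, length 8:           {16 ** 8:.2e} candidates")   # ~4.3e9
    print(f"alphanumeric, length 10: {62 ** 10:.2e} candidates")  # ~8.4e17

    # Draw the default password from a CSPRNG rather than a weak entropy source.
    alphabet = string.ascii_letters + string.digits               # 62-character set
    password = "".join(secrets.choice(alphabet) for _ in range(10))
    print("example generated password:", password)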

This was cross-posted from the Dark Matters blog.

Three Things That Need Spring Cleaning in InfoSec
https://www.infosecisland.com/blogview/24467-Three-Things-That-Need-Spring-Cleaning-in-InfoSec.html
Sun, 19 Apr 2015 21:50:29 -0500

Spring is here in the US, and that brings with it the need to do some spring cleaning. So, here are some things I would like to see the infosec community clean out with the fresh spring air!

1. The white male majority in infosec. Yes, I am a white male, also middle aged…. But, seriously, infosec needs more brains with differing views and perspectives. We need a mix of conservative, liberal and radical thought. We need different nationalities and cultures. We need both sexes in equity. We need balance and a more organic talent pool to draw from. Let’s get more people involved, and open our hearts and minds to alternatives. We will benefit from the new approaches!

2. The echo chamber. It needs some fresh air. There are a lot of dropped ideas and poor choices lying around in there, so let’s sweep that out and start again. I believe echo chamber effects are unavoidable in small focused groups, but honestly, can’t we set aside our self-referential shouting, inside jokes, rock star egos and hubris for just one day? Can’t we open a window and sweep some of the aged and now decomposing junk outside? Then, maybe, we can start again with some fresh ideas and return to loving/hating each other in the same breath. As a stopgap, I am nominating May 1, a Friday this year, as Global Infosec Folks Talk to Someone You Don’t Already Know Day (GIFTTSYDAKD). On this day, ignore your peers in the echo chamber on social media and actually go out and talk to some non-security people who don’t have any idea what you do for a living. Take them to lunch. Discuss their lives, what they do when they aren’t working, and how security and technology impact their day to day. Just for one day, drop out of the echo chamber, celebrate GIFTTSYDAKD, and see what happens. If you don’t like it, the echo chamber can come back online with a little fresh air on May 2 at 12:01 AM EST. How’s that? Deal? :)

3. The focus on compliance over threats. Everyone knows in their hearts that this is wrong. It just feels good. We all want a gold star, a good report card or a measuring stick to say when we got to the goal. The problem is, crime is an organic thing. Organic, natural things don’t really follow policy, don’t stick to the script and don’t usually care about your gold star. Compliant organizations get pwned – A LOT (read the news). Let’s spring clean the idea of compliance. Let’s get back to the rational idea that compliance is the starting point: the level of mutually assured minimal controls. Then you have to build on top of it, holistically and completely custom to your environment. You have to tune, tweak, experiment, fail, succeed, re-vamp and continually invest in your security posture. FOREVER. There is no “end game”. There is no “Done!”. The next “bad thing” that visits the world will be either entirely new or a new variant, and it will be capable of subverting some subset or an entire set of controls. That means new controls. Lather, rinse, repeat… That’s how life works. To think otherwise is irrational and likely dangerous.

That’s it. That’s my spring cleaning list for infosec. What do you want to see changed around the infosec world? Drop me a line on Twitter (@lbhuston) and let me know your thoughts. Thanks for reading, and I hope you have a safe, joyous and completely empowered Spring season!

This was cross-posted from the MSI State of Security blog.

The Current State of Insecurity: Strategies for Inspecting SSL Traffic
https://www.infosecisland.com/blogview/24466-The-Current-State-of-Insecurity-Strategies-for-Inspecting-SSL-Traffic.html
Fri, 17 Apr 2015 07:27:58 -0500

Encrypted traffic accounts for a large and growing percentage of all network traffic. While the adoption of SSL and its successor, Transport Layer Security (TLS), should be cause for celebration – since encryption improves confidentiality and message integrity – it also puts organizations at risk. This is because hackers can leverage encryption to conceal their exploits from security devices that do not inspect SSL traffic. Attackers are wising up and taking advantage of this gap in corporate defenses.

Organizations that do not inspect SSL communications are providing an open door for attackers to infiltrate defenses undetected and steal data. To prevent cyber-attacks, enterprises need to inspect all traffic, and in particular encrypted traffic, to avoid advanced threats.

SSL traffic is growing and it will continue to increase in the foreseeable future due to concerns about privacy and government snooping. Many leading websites today, including Google, Facebook, Twitter and LinkedIn, encrypt application traffic. But it’s not just the web giants that are encrypting communications; 48 percent more of the million most popular websites used SSL in 2014 than a year earlier, according to Netcraft's January 2014 Web Server Survey.

To mitigate these risks, organizations are increasingly deploying dedicated SSL inspection platforms. But if they acquire these platforms in haste, they might be blindsided later by escalating SSL bandwidth requirements, deployment demands or regulatory implications. Therefore, organizations must carefully evaluate the features and performance of SSL inspection platforms before selecting a solution.

If your organization is looking at SSL inspection platforms, you should consider the following five criteria before selecting a solution.

Performance

Performance is perhaps the most important evaluation criterion for SSL inspection platforms. Organizations that thoroughly evaluate performance benchmarks should be able to avoid surprises in their production environments. Organizations must assess their current Internet bandwidth requirements and ensure that their SSL inspection platform can handle future SSL throughput requirements. Testing SSL decryption speeds without considering the impact of deep packet inspection (DPI), URL classification or other features will not provide a clear picture of real-world performance.

Compliance

While IT security teams have deployed a wide array of products to detect attacks, data leaks and malware – and rightfully so – they must walk a thin line between protecting employees and intellectual property, and violating employees’ privacy rights. Privacy and regulatory concerns have emerged as one of the top hurdles preventing organizations from inspecting SSL traffic. To address regulatory requirements such as HIPAA, Federal Information Security Management Act (FISMA), Payment Card Industry Data Security Standard (PCI DSS) and Sarbanes-Oxley (SOX), an SSL inspection platform should be able to bypass sensitive traffic, such as traffic to banking and healthcare sites. By bypassing sensitive traffic, IT security teams can rest easy knowing that confidential banking or healthcare records will not be sent to security devices or stored in log management systems.
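As a rough illustration of what such a bypass policy looks like in practice, here is a hedged sketch; the category map and helper function are hypothetical and stand in for the URL-classification feed a real platform would consult:

    # Hypothetical bypass policy: never decrypt traffic in sensitive categories.
    BYPASS_CATEGORIES = {"banking", "healthcare"}

    # In a real deployment this mapping comes from a URL-classification service.
    SITE_CATEGORIES = {
        "bank.example.com": "banking",
        "clinic.example.org": "healthcare",
        "news.example.net": "news",
    }

    def should_decrypt(server_name: str) -> bool:
        """Return False for traffic that must be passed through encrypted."""
        category = SITE_CATEGORIES.get(server_name, "uncategorized")
        return category not in BYPASS_CATEGORIES

    for host in ("bank.example.com", "news.example.net"):
        action = "inspect" if should_decrypt(host) else "bypass (leave encrypted)"
        print(f"{host}: {action}")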

Support for Complex Networks

Organizations must not only contend with security threats from external factors but also from disgruntled employees. To safeguard their digital assets, organizations have deployed an ever increasing number of security products to stop intrusions, attacks, data loss, malware and more.

Some of these security products are deployed inline, while others are deployed out of band as passive network monitors. Some analyze all network traffic, whereas others focus on specific applications, such as web or email protocols.

Many organizations wish to deploy best-of-breed security products from multiple vendors; they do not want to get locked into a single vendor solution. The security landscape constantly evolves to combat emerging threats. In one or two years, organizations may want to provision new security products and they need to make sure that their SSL inspection platform will interoperate with these products.

As a result, SSL inspection platforms should interoperate with a diverse set of security products from multiple vendors. They should support transparent deployment and be able to route traffic from one security device to another with traffic steering.

By selecting an SSL inspection platform that supports flexible deployment, traffic steering and granular traffic controls, organizations will be able to provision their choice of security solutions in the future.

Maximize Capacity and “Uptime”

Organizations depend on their security infrastructure to block cyber-attacks and prevent data exfiltration. If their security infrastructure fails, threats may go undetected and users may be unable to perform business-critical tasks, resulting in loss of revenue and brand damage.

While firewalls have increased their capacity over time, they often cannot keep up with network demand, especially when multiple security features such as IPS, URL filtering and virus inspection are enabled.

Therefore, SSL inspection platforms should not just offload SSL processing from security devices. They should also maximize the uptime and performance of those devices, and maximize the overall capacity of the security infrastructure through load balancing and integrated high availability. Only then can organizations unlock the full potential of their SSL inspection platforms.

Securely Manage Certificates and Keys

Whether providing visibility to outbound or inbound SSL traffic, SSL inspection devices must securely manage SSL certificates and keys.  When SSL inspection devices are deployed in front of corporate applications to inspect inbound traffic, they may need to manage tens, hundreds or even thousands of certificates. As the number of SSL key and certificate pairs grows, certificate management becomes more challenging. 

Organizations constantly add, remove or redeploy servers to meet business needs. This fluid and dynamic environment makes it difficult for organizations to account for all SSL certificates at any given time and ensure that certificates have not expired.
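A minimal expiry audit along these lines can be scripted with Python's standard ssl module; the host list below is hypothetical and this is only a sketch, not a substitute for proper certificate lifecycle tooling:

    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(host: str, port: int = 443) -> float:
        """Fetch the server certificate and return days until its notAfter date."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
        expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), timezone.utc)
        return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

    for host in ("www.example.com", "portal.example.com"):   # hypothetical inventory
        try:
            print(f"{host}: certificate expires in {days_until_expiry(host):.0f} days")
        except (OSError, ssl.SSLError) as exc:
            print(f"{host}: check failed ({exc})")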

SSL certificates and keys form the basis of trust for encrypted communications. If they are compromised, attackers can use them to impersonate legitimate sites and steal data.

Conclusion

IT security teams face their own set of challenges as they tackle threats such as cyber-attacks and malware – threats that can use encryption to bypass corporate defenses. 

Privacy concerns are propelling SSL usage higher; businesses face increased pressure to encrypt application traffic and keep data safe from hackers and foreign governments. In addition, because search engines such as Google rank HTTPS websites higher than standard HTTP sites, application owners are clamoring to encrypt traffic. 

With SSL accounting for nearly a third of enterprise traffic and with more applications supporting 2048-bit and 4096-bit SSL keys, organizations can no longer avoid the cryptographic elephant in the room. If they wish to prevent devastating data breaches, they must gain insight into SSL traffic. And to accomplish this goal, they need a dedicated SSL inspection platform.

About the Author: Kasey Cross is a senior product marketing manager for A10 Networks, a provider of application networking technologies.

PCI DSS 3.1 Sets Deadline for SSL Migration
https://www.infosecisland.com/blogview/24465-PCI-DSS-31-Sets-Deadline-for-SSL-Migration.html
Thu, 16 Apr 2015 17:15:20 -0500

The PCI Security Standards Council (PCI SSC) has released the latest version of the PCI Data Security Standard (PCI DSS) with an eye towards addressing security concerns related to the Secure Sockets Layer (SSL) protocol.

PCI DSS v3.1 is available here. The update marks the latest critique of SSL security, which has taken a number of hits due to vulnerabilities such as FREAK and POODLE. Under the rules of the revision, SSL and early versions of the Transport Layer Security (TLS) protocol are no longer considered examples of "strong cryptography." The standard defines early TLS as TLS v1.0 as well as version 1.1 in some cases.

The move by the council follows the National Institute of Standards and Technology (NIST) identifying SSL v3.0 as not being acceptable for data protection purposes due to "inherent weaknesses" within the protocol. As a result, the council decided to update the PCI standard.

According to the new rules, companies have until June 30, 2016, to update to a more recent version of TLS. Prior to this date, existing implementations using SSL and/or early TLS must have a formal risk mitigation and migration plan in place. Effective immediately, all new implementations must not use SSL or early TLS.

Point-of-sale (POS) and point-of-interaction (POI) terminals, such as magnetic card readers or chip card readers that enable a consumer to make a purchase, may continue using these protocols as a security control after the June 30, 2016 deadline, provided they can be verified as not being susceptible to any known exploits for SSL and early TLS.

Read the rest of this story at SecurityWeek.com.

2015 Verizon DBIR and the Human Attack Surface
https://www.infosecisland.com/blogview/24464-2015-Verizon-DBIR-and-the-Human-Attack-Surface.html
Thu, 16 Apr 2015 12:54:41 -0500

By: Mandy Huth

Verizon’s annual Data Breach Investigations Report (DBIR) provides analysis of and insight into the prior year’s security incidents and confirmed data breaches. As a security practitioner, I look to this report as a bellwether for our own security practices – what patterns are emerging, and what should my immediate takeaways be to better protect my organization?

The DBIR assessed nearly 80,000 security incidents this year, two-thirds of those occurring in the US. As I reviewed this year’s data, the primary factor that jumped out at me was that people account for the majority of incidents.

“The common denominator across the top four patterns – accounting for nearly 90% of all incidents – is people.”

You might then ask: what are you going to focus on to help secure the humans? There are four areas that I targeted based on the new data.

PERFECT OUR ANTI-PHISHING SKILLS
Phishing is the pivot by which threat actors gain entrance into and begin their stealthy march inside the network. As Dwayne Melancon points out in his review, phishing attacks are becoming more sophisticated and overwhelming than ever for many organizations. The best way to defeat these attempts is to hone our skills at identifying when we may be targets.

We remind our employees often that they are all targets of phishing. But it takes more than singular reminders for practice to become a habit. Something we might consider beyond the reminders is a war games exercise within the company – something to both raise awareness and educate at the same time? Publicizing our results, and trending our improvement over time with potentially even awards for high performing organizations? Definitely on my radar for this year.

PROACTIVE STEPS TOWARD PROTECTING CREDENTIALS
We are reminded in the Key Takeaways that getting into our networks is often only the first step in attacks and there is usually a secondary victim/target. With that in mind, how can we prevent the progression of the attack if we do inadvertently provide credentials to threat actors?

“Over 95% of these [web app] incidents involve harvesting credentials from customer devices, then logging into web applications with them.”

Two considerations come to mind. First, adding a second authentication factor to your most critical services will stop threat actors in their tracks, or potentially raise the bar high enough that they quickly move on. Requiring a password and a second form of proof will go a long way toward slowing or completely stopping infiltration efforts.

Second, segment your networks. Flat is easy, but creates a jackpot for the threat actor. Take time to create “bubbles” in your network and put your most critical information there. If you already have these mechanisms in your organization, congratulations! Make sure you take time to audit them and extend them as needed.

FASTER WORKFORCE REPORTING OF LOSS/STOLEN DEVICES
Does everyone in your organization know what to do if their laptop or phone is stolen? Most theft is opportunistic and can happen to any of your employees. 55% of incidents happen within an employee’s work area. How will you respond?

“15% of incidents still take days to discover. Incentivize your workforce to report all incidents within a certain number of hours.”

OUT OF SIGHT, OUT OF MIND
IT and IT Security teams typically run fairly lean, and it seems we never have enough time or resources. Our world revolves around immediate and critical priorities needing to be addressed often within minutes and hours. It’s therefore sometimes easy to forget about important work that takes more time, analysis, and thoughtful decision-making.

We can tend to defer these important but less immediate tasks. When important tasks become “archived” in our minds (and this can happen to IT teams just as with any corporate group), we can lose track of key items, which can leave us open to attack.

“Ten CVEs account for almost 97% of the exploits observed in 2014.”

“The DBIR indicates that 71% of known vulnerabilities had a patch available for more than a year prior to the breach.”

The takeaway for me here is to not forget the old ones! Each week we get a new list of vulnerabilities from US-CERT, and often there are 50-100 high- and medium-severity vulnerabilities to work on, so ensuring that we also promptly find time for the older and lower-severity vulnerabilities will be an important consideration in my resource choices this year.
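One hedged way to keep the old ones visible is to flag any open finding whose fix has been available for more than a year, the threshold the DBIR calls out. The finding records below are made up for illustration:

    from datetime import date

    # Hypothetical open findings: (CVE ID, severity, date a patch became available).
    findings = [
        ("CVE-2012-0158", "high",   date(2012, 4, 10)),
        ("CVE-2014-0160", "medium", date(2014, 4, 7)),
        ("CVE-2015-1234", "low",    date(2015, 3, 2)),
    ]

    STALE_AFTER_DAYS = 365          # "patch available for more than a year"
    today = date(2015, 4, 16)       # report date; use date.today() in practice

    for cve, severity, patched_on in findings:
        age = (today - patched_on).days
        if age > STALE_AFTER_DAYS:
            print(f"{cve} ({severity}): patch available for {age} days -- prioritize")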

SUMMARY
Many security practitioners find themselves struggling to balance a lean team with the many demands that come with a strong security posture. The 2015 Verizon DBIR certainly helps IT security practitioners focus on the trends and patterns seen within emerging threats having high probabilities for our environments – allowing us to get ahead of them and reduce our threat landscape.

Thanks for the insights, Verizon – and this summarizes a few takeaways for the IT Security Practitioner:

  • Educate the workforce on phishing characteristics;
  • Authentication is required to prove that we are who we say we are;
  • Request timely reporting (hours, not days) of lost corporate assets;
  • Remind our IT teams of older vulnerabilities to ensure we’re as current as can be with patch maintenance.

These practices will be high on my list this year. How about you?

This was cross-posted from Tripwire's The State of Security blog. 

ASV Guidance for SSL/TLS Vulnerabilities
https://www.infosecisland.com/blogview/24463-ASV-Guidance-for-SSLTLS-Vulnerabilities.html
Thu, 16 Apr 2015 09:16:11 -0500

Hidden by all of the news about v3.1 of the PCI DSS being published is a notice that was sent to all PCI approved scanning vendors (ASVs) from the PCI SSC regarding how to handle SSL and “early TLS” vulnerabilities.

Regarding the “early TLS” comment, the Council did define the term by pointing everyone to NIST SP 800-52 rev1. That NIST document essentially tells the reader that while TLS 1.1 is allowed, TLS 1.2 should be the only version used whenever possible. In fact, NIST strongly recommends that all government entities move to TLS 1.2 by January 1, 2016.

FYI, TLS 1.3 is in draft at the IETF as we speak. I would expect that we will see TLS 1.3 released by the time the PCI SSC’s June 30, 2016 deadline arrives.

With that covered, what is an ASV to do with a scanning customer’s SSL and TLS 1.0/1.1 issues?

According to the letter sent to the ASVs:

“Prior to 30 June 2016: Entities that have not completed their migration should provide the ASV with documented confirmation that they have implemented a risk mitigation and migration plan and are working to complete their migration by the required date. Receipt of this confirmation should be documented by the ASV as an exception under “Exceptions, False Positives, or Compensating Controls” in the ASV Scan Report Executive Summary and the ASV may issue a result of “Pass” for that scan component or host, if the host meets all applicable scan requirements.”

The key here is that you must be mitigating the vulnerability and working to migrate to TLS 1.2.

So what would a mitigation plan look like? Most likely, you would monitor for SSL or TLS 1.0/1.1 connections to those of your devices that still only support those protocol versions.
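One hedged way to back that monitoring with evidence is to periodically probe your own endpoints and record which legacy protocol versions they still accept. A rough Python sketch (the host is hypothetical, and the available protocol constants vary by build, hence the getattr):

    import socket
    import ssl

    LEGACY_PROTOCOLS = ["PROTOCOL_SSLv3", "PROTOCOL_TLSv1", "PROTOCOL_TLSv1_1"]

    def accepted_legacy_protocols(host: str, port: int = 443):
        """Return the names of legacy protocols the server will still negotiate."""
        accepted = []
        for name in LEGACY_PROTOCOLS:
            proto = getattr(ssl, name, None)    # not every build ships SSLv3 support
            if proto is None:
                continue
            ctx = ssl.SSLContext(proto)
            try:
                with socket.create_connection((host, port), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host):
                        accepted.append(name)
            except (OSError, ssl.SSLError):
                pass                            # handshake refused -- good
        return accepted

    print(accepted_legacy_protocols("www.example.com"))   # hypothetical host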

For those of you that are not going to be able to migrate to TLS 1.2, the Council gives ASVs guidance there as well.

“After 30 June 2016: Entities that have not completely migrated away from SSL/early TLS will need to follow the Addressing Vulnerabilities with Compensating Controls process to verify the affected system is not susceptible to the particular vulnerabilities. For example, where SSL/early TLS is present but is not being used as a security control (e.g. is not being used to protect confidentiality of the communication).”

The reason the Council has to provide a path forward past June 30, 2016 is that, as I understand it, a lot of comments were received about “baked in” SSL that was going to require wholesale replacement of devices to correct the problem. A lot of those devices are IP-based point-of-interaction (POI) devices. ASVs have been instructed on the process to use to reduce the CVSS score so that the vulnerability is no longer considered “high”.

If you have any further questions regarding this announcement, I would discuss it with your ASV. As with all things PCI, every ASV will interpret this pronouncement with some variation based on its own risk aversion.

This was cross-posted from the PCI Guru blog. 

Preview: Suits and Spooks London - May 6-7, 2015
https://www.infosecisland.com/blogview/24462-Preview-Suits-and-Spooks-London-May-6-7-2015.html
Thu, 16 Apr 2015 07:37:00 -0500

With less than three weeks to go until Suits and Spooks London 2015 kicks off, the agenda is nearly finalized. The first 2-day Suits and Spooks international event will host experts in cyber warfare, intelligence, advanced persistent threats, sophisticated malware, and political issues.

The first day kicks off with a presentation by Jeffrey Carr, president and CEO of Taia Global, who will reveal details on Cyclone, an FSB case study on technology acquisition.

CERT research scientist Dr. Char Sample will then discuss culture and computer network attack behaviors. Dr. Sample will talk about the findings of cross-discipline research combining anthropology with cyber security and statistical analysis. The research, which has various applications, quantitatively characterizes cyber behaviors by country.

Adrian Nish, cyber threat intelligence team lead at BAE Systems, will shed light on the activities of a sophisticated threat actor in the Middle East, including a Suits and Spooks exclusive disclosure of a new cluster of activity which may be the most advanced Middle Eastern threat found to date.

Later in the day, Roderick Jones, CEO of Concentric Advisors, will give a talk on the future of the individual's right to bear arms in cyberspace, and Kaspersky Lab senior security researcher Juan Andrés Guerrero-Saade will hold a briefing on the cyber threat landscape in Latin America.

Day one will also include interesting presentations from former US Defense Intelligence Agency CTO Lewis Shepherd, who is currently a consultant and vice chair at the AFCEA Intelligence Committee, and Thomas Rid, professor of security studies at Kings College in London and author of “Cyber War Will Not Take Place.”

On day two of Suits and Spooks, Costin Raiu, director of the global research and analysis team at Kaspersky Lab, will discuss state-sponsored malware. The expert will share insight on why sophisticated threats are discovered faster, and why it isn’t a very good idea for a nation state to use the same pieces of malware to target both terrorists and allies.

The second part of the day will focus on Russia. More precisely, Paul Joyal will discuss how Russia uses cyber operations as a new form of active measure, and Marina Litvinenko, the widow of former KGB spy Alexander Litvinenko, will offer an off the record interview on her husband’s alleged assassination.

“Thanks to some great talks by people like Marina Litvinenko whose husband (a former FSB officer) was allegedly assassinated in London by agents of the Russian government, we expect to have a good turnout from Britain's intelligence community for this event,” said Carr, founder of Suits and Spooks.

Suits and Spooks is a limited attendance, single-track event that brings together experts in the public and private sectors. The low speaker-to-attendee ratio encourages debate and discussion, and participants are given the opportunity to challenge speakers at any moment during their talk.

Suits and Spooks London 2015 takes place on May 6-7 in association with TechUK, an organization that represents the interests of more than 850 members of the tech industry in the United Kingdom. TechUK members, which range from innovative startups to major companies featured in the Financial Times Stock Exchange (FTSE) 100 index, employ roughly 700,000 people, or about half of the jobs in the UK tech sector.

The event will be held at the TechUK facility on 10 St Bride Street. A limited number of seats are still available and attendees can register online to hold a spot. 

Sponsors of Suits and Spooks London include Kaspersky Lab, Lookingglass Cyber Solutions, LogRhythm, Intercede, Norse and ANRC Services.

Healthcare Industry Challenged by Data Breaches, Compliance
https://www.infosecisland.com/blogview/24461-Healthcare-Industry-Challenged-by-Data-Breaches-Compliance.html
Wed, 15 Apr 2015 13:15:29 -0500

Compliance may be a key focus of the healthcare industry, but that hasn't always translated into secure environments.

In fact, in some cases, compliance efforts appear to be falling short. In a new report from Vormetric focused on healthcare organizations, almost half (48 percent) of the IT decision makers from the U.S. said their organization either failed a compliance audit or experienced a data breach in the last year.

The statistic comes from the 2015 Vormetric Insider Threat report, which is based on a survey of 818 IT decision makers in healthcare organizations around the world, including 102 from the United States. According to Vormetric, 92 percent of the U.S. respondents said their organizations are either somewhat or more vulnerable to insider threats. Forty-nine percent said they felt very or extremely vulnerable.

Some 62 percent of respondents identified privileged users – those who have access to all resources available from systems they manage – as the most dangerous group of insiders. Partners with internal access and contractors ranked second and third, respectively.

The report did not say specifically why so many organizations failed compliance audits. Regardless, the fact that they did indicates organizations are failing at basic data protection, opined Alan Kessler, CEO of Vormetric.

"Compliance requirements evolve slowly, while threats to data undergo rapid change," said Kessler. "Time and again, organizations that were compliant have been breached in the last few years."

Read the rest of this story on SecurityWeek.com. 

What Threat Intelligence Data Can Tell Us: The Sad Story of WF
https://www.infosecisland.com/blogview/24460-What-Threat-Intelligence-Data-Can-Tell-Us-The-Sad-Story-of-WF.html
Wed, 15 Apr 2015 10:19:31 -0500

People differ in how they approach data analytics. One camp prefers to postulate a theory and find data that supports or negates that theory. Another camp prefers to let the data tell the story.

I’m in the latter camp – not simply because I like to avoid being biased by pre-formed opinion, but also largely because I like surprises and data is always full of those.

One of those surprises concerned which countries had the highest saturation rates of malicious IP addresses. The Norse threat intelligence platform is extremely dynamic.

To gauge saturation, I took snapshots of Darklist data at regular intervals throughout December 2014.

The ppm (parts per million) and prevalence rate were calculated for each country for each interval, with the final results being the median ppm and median rate for each country overall.

(Big shout out to Kurt Stammberger who suggested ppm as a unit of measure – it turned out to be an excellent comparison methodology.)
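For the curious, here is a minimal sketch of that calculation with made-up counts (the real inputs were Darklist snapshots and per-country Internet population figures):

    from statistics import median

    def ppm(malicious: int, total_ips: int) -> float:
        """Malicious addresses per million addresses for a country."""
        return malicious / total_ips * 1_000_000

    # Hypothetical per-snapshot counts for one country: (malicious IPs, total IPs).
    snapshots = [(97, 50_000), (104, 50_000), (95, 50_000)]

    rates = [ppm(bad, total) for bad, total in snapshots]
    median_ppm = median(rates)
    prevalence = 1_000_000 / median_ppm        # the "1 in N" form the article uses
    print(f"median saturation: {median_ppm:.0f} ppm (about 1:{prevalence:.0f})")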

Wallis and Futuna (WF), a small island collective located in the South Pacific Ocean, has a GDP ranking of 223/229 in the world. According to the CIA World Factbook, total population in 2014 was 15,561.

Total Internet population as of December 2013 was 1337 – an ironic match for leetspeak, the alternative English alphabet used on the Internet.

Ironic because – despite its tiny comparative size – Wallis and Futuna had the second highest global rate of malicious IP addresses in December 2014:

Globally, the median prevalence rate was 1:521 or 1921 ppm. The United States had a prevalence rate of 1:2864 or 349 ppm. The United Kingdom was 1:2293 or 436 ppm.  France (of which Wallis and Futuna is a collectivity) was 1:1377 or 726 ppm.

The following are the top ten lowest saturations in December 2014:

Of course, whether ranking higher or lower, the rates don’t imply these are deliberately malicious actors.

To illustrate this, we can look again at the island chain of Wallis and Futuna – all of the malicious traffic from that country resulted from bot-infected computers.

Of the entire top ten most saturated, only Wallis and Futuna and the island of Guam were the result of bot-infected computers and nothing else.

The intersection of Internet-emerging countries and Internet risks can present unique challenges. In the case of Wallis and Futuna, the CIA World Factbook reports the literacy rate of Wallis and Futuna is only 50% and 89% of the population speak only localized dialects.

Obviously, Wallis and Futuna’s high saturation rate has negligible impact on the rest of the world due to its tiny size, but the 1337 Internet-enabled residents there have already overcome significant adverse odds just to get online.

Having those herculean efforts rewarded by an epidemic bot infection is the sad story uncovered in our data.

This was cross-posted from the Dark Matters blog. 

Real-Time Bidding and Malvertising: A Case Study
https://www.infosecisland.com/blogview/24459-Real-Time-Bidding-and-Malvertising-A-Case-Study.html
Wed, 15 Apr 2015 10:03:19 -0500

Malvertising continues to be one of the biggest and most effective infection mechanisms, one which, for the most part, relies on rogue advertisers inserting malicious ads into legitimate advertising networks.

We have written several stories on this blog about this subject, but today we wanted to get into a particular concept behind all of this: real-time bidding (RTB).

In this post, we will share some of the details behind a malvertising campaign we observed recently and show how it was made possible through RTB.

Real-time bidding

There was a time when ads were bought in bulk and displayed on specific publishers’ websites. This model has changed and been replaced by real-time bidding (RTB) where advertisers compete in a real-time auction for specific targets and audiences.

RTB is less costly, more efficient and flexible than bulk sales. The barrier to entry for new advertisers or those with a small budget is also lowered because they can literally assign a budget and only win the bids that matter to them.
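A toy sketch of the auction mechanics, under the common assumption of a second-price rule (the winner pays just above the runner-up's bid); the impression data and bidders are invented, and real exchanges run this in milliseconds per impression:

    # Toy second-price auction for a single ad impression.
    impression = {"country": "US", "category": "finance"}   # hypothetical targeting data

    bids = [
        {"advertiser": "loan-offer-ads", "cpm": 2.40},
        {"advertiser": "travel-deals",   "cpm": 1.90},
        {"advertiser": "shoe-retailer",  "cpm": 1.10},
    ]

    bids.sort(key=lambda b: b["cpm"], reverse=True)
    winner, runner_up = bids[0], bids[1]
    clearing_price = runner_up["cpm"] + 0.01

    print(f"impression {impression} won by {winner['advertiser']} at "
          f"${clearing_price:.2f} CPM; the winner's creative is served to the visitor")

The point of the sketch is the last line: whoever wins the auction decides what actually renders in the visitor's browser.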

Ad networks act as middlemen between publishers (the websites we browse wishing to monetize their traffic with ads) and advertisers looking to promote a brand on those sites.

There are a few select top ad networks and a myriad of ad agencies which all try to get their piece of the pie (a multi-billion dollar industry). Advertisers typically interact with a second level ad agency, which acts like a broker with the likes of Google, Yahoo!, AOL, etc…

The problem with RTB is that malicious actors are easily abusing the system and for the most part getting away with it.

Rogue advertisers bid on impressions just like anybody else, except that their ads are laced with malicious code intended to redirect users to fake alerts/software or exploit kits.

Malvertising via an ad platform

In this example, we look at an ad agency that is used as a self serve platform for advertisers. It boasts that it is connected to several ad exchanges guaranteeing that you will never run short of traffic.

An advertiser started bidding for one of their creatives (the “Loan up” advert you see in the picture below) and won it. Now, when you go to a certain site that meets the criteria of the advertiser (demographic, geolocation, etc) this ad will appear.

Now we see one problem with how the advert is loaded, or more particularly where it is loaded from. Rather than it being retrieved from the ad agency’s servers, it is linked from the third-party advertiser.

This is an issue because the third-party has full control over what is going to be loaded into the visitor’s browser.

Case in point: this advertiser was not legitimate, and in addition to loading the regular advert, it also side-loaded a malicious iframe pointing to an exploit kit landing page. Think of the ad as the Trojan horse…


Third-party ad servers

It didn’t take us too long to figure out that the third party advertiser looked a little bit shady.

Both domains it was using (bndtrk.com and marialoantracker.com) are registered via an anonymizer service:

Registrant Organization: Whois Privacy Corp. 
Registrant Street: Ocean Centre, Montagu Foreshore, East Bay Street 
Registrant City: Nassau 
Registrant State/Province: New Providence Registrant 
Postal Code: 0000 
Registrant Country: BS

And yet, without too much trouble, this advertiser was able to sign up and start loading their creative into a well connected ad platform.

Popular publishers who trusted the ad agency were essentially taken advantage of and their visitors were exposed to malware.

Here’s a short list of some popular sites that loaded the malicious ad:

srv.bndtrk.com/adsrv.js?dt=wealthbrokerage.com&pid={removed}
srv.bndtrk.com/adsrv.js?dt=answers.com&pid=openxnat_{removed}
srv.bndtrk.com/adsrv.js?dt=newegg.com&pid=openxnat_{removed}
srv.bndtrk.com/adsrv.js?dt=weather.com&pid=aol_{removed}
srv.bndtrk.com/adsrv.js?dt=commerce.cnet.com&pid=smartadserver_{removed}
srv.bndtrk.com/adsrv.js?dt=blind.appnexus.adnetwork&pid=aol_{removed}
srv.bndtrk.com/adsrv.js?dt=blind.stanmoremedia.adnetwork&pid=pubmatic_{removed}
srv.bndtrk.com/adsrv.js?dt=blind.mg8.adnetwork&pid=pulsepoint_{removed}
pub.marialoantracker.com/508613968.js?domain=twcc.com&pubid=aol_{removed}
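For defenders, one hedged use of indicators like these is to sweep web proxy logs for requests to the ad-serving domains involved. The log format and entries below are assumptions for illustration:

    from urllib.parse import urlparse

    # Domains observed serving the malicious creative (from the list above).
    BAD_AD_HOSTS = {"srv.bndtrk.com", "pub.marialoantracker.com"}

    def flag_requests(log_lines):
        """Yield log lines whose requested URL sits on a known bad ad domain."""
        for line in log_lines:
            url = line.split()[-1]             # assumes the URL is the last field
            if urlparse(url).hostname in BAD_AD_HOSTS:
                yield line

    sample_log = [                             # hypothetical proxy log entries
        "2015-04-14T10:01:02 10.0.0.5 GET http://srv.bndtrk.com/adsrv.js?dt=answers.com",
        "2015-04-14T10:01:03 10.0.0.5 GET http://www.example.com/index.html",
    ]
    for hit in flag_requests(sample_log):
        print("possible malvertising exposure:", hit)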

Exploit kit and malware infections

Following a successful redirection to the malicious iframe, the victims face the infamous Angler exploit kit, a cyber weapon designed to exploit any outdated browser or plugin in order to load malware.

With great precision and limited costs, the bad guys were able to expose thousands of users of popular sites to malware.

Putting all the bad guys in the same basket would be a bit inaccurate, though; for this operation, different groups were most likely involved.

The rogue advertisers are usually known as traffers and their business is to buy and sell traffic.

In this case, they acquire traffic by posing as ‘advertisers’ so they can later resell that traffic to their own customers, which could be exploit kit operators or even scammers (e.g. fake tech support scam pop-ups).

Whack-a-mole game

If the ad agency identifies them as a source of problems, they may suspend their account. However, if the advertiser is a big customer, the ad agency might simply warn them and give them a slap on the wrist.

Worst case scenario, if the rogue account does get terminated, the bad guys could easily open up a new one there or elsewhere and start this all over again.

This is one of the many reasons why malvertising remains a huge problem despite large amounts of money being spent to fight it.

RTB safeguards

Real-time bidding is here to stay and every second of the day, auctions take place for advertisers to display their creatives on publishers’ websites.

Some top-level ad agencies claim to process billions of impressions each day, which means the volume of ads going through is enormous. Inspecting each and every ad becomes a gargantuan task, leaving room for bad actors to jump on the bandwagon.

To reduce malicious activity in RTB, some loose ends need to be tied up. For one, advertisers should be screened more carefully, and anonymous accounts should raise a red flag.

Creatives (the adverts themselves) should ideally be loaded from the ad agency itself, not from the third-party advertiser. If the Flash banner resides in-house, there is less chance of it loading additional scripts or being compromised.
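Here is a hedged sketch of the kind of origin check that implies, flagging any creative URL that is not hosted on the agency's own domain (all names are illustrative):

    from urllib.parse import urlparse

    AGENCY_DOMAIN = "ads.agency.example"       # hypothetical in-house ad-serving domain

    def creative_is_in_house(creative_url: str) -> bool:
        """True only if the creative is hosted on the agency's own servers."""
        host = urlparse(creative_url).hostname or ""
        return host == AGENCY_DOMAIN or host.endswith("." + AGENCY_DOMAIN)

    for url in ("https://ads.agency.example/creatives/loanup.swf",
                "http://srv.bndtrk.com/adsrv.js"):
        status = "OK (in-house)" if creative_is_in_house(url) else "flag: third-party host"
        print(f"{url}: {status}")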

Many ad agencies struggle with being able to identify bad actors quickly, despite investing a lot to be proactive. One of the reasons for this is that the bad guys are becoming more and more creative and sneaky with how they abuse legitimate services.

For example, an advert may only be malicious to certain users of a particular country at a specific time of the day. Even the most advanced QA and security test lab in the world is going to have a problem catching those.

We have been writing about malvertising attacks on this blog with a dual goal: to report attacks to the ad networks while also protecting our users.

Malvertising is everybody’s responsibility. Certainly, ad networks should not allow malicious ads to be loaded in the first place. But the reason the bad guys keep doing this is that they can still infect people’s computers, which are often not patched or secured properly.

The malvertising war is far from over, but battles have been won already and changing the landscape will force cyber-criminals to look elsewhere.

This was cross-posted from the Malwarebytes blog. 

California Privacy Advocates Urge Defeat of Federal Data Breach Notice Bill
https://www.infosecisland.com/blogview/24458-California-Privacy-Advocates-Urge-Defeat-of-Federal-Data-Breach-Notice-Bill.html
Tue, 14 Apr 2015 20:48:38 -0500

Six California privacy and consumer groups have called on members of the US House Energy and Commerce Committee to oppose federal legislation that would wipe out California's landmark data breach notification laws. The House Committee may hear the Data Security and Breach Notification Act of 2015 as early as Wednesday, April 15. California Congress members on the House committee are Lois Capps, Tony Cardenas, Anna Eshoo, Doris Matsui and Jerry McNerney. Privacy Rights Clearinghouse, Consumer Federation of California, Consumer Watchdog, World Privacy Forum, The Utility Reform Network (TURN), and Consumer Action are urging the members of Congress to defeat the proposed federal bill.

California was the first state to implement a data breach notice law in 2003, and has since amended the law several times to address changing threats. It is among the strongest such laws in the country, and offers Californians significant consumer protections. It has served as a model for legislation enacted in dozens of states. The inadequate standards of the federal Data Security and Breach Notification Act of 2015 would preempt and replace the more protective state breach notice laws.

Consumer rights advocates point out that the proposed federal legislation:

  • Contains a significantly narrower definition of personal information than existing California law, eliminating several categories of personally identifiable data, including certain sensitive medical and health insurance information.
  • Ties a breach notification to a breached business' subjective guess work regarding possible financial harm, rather than California's requirement of a consumer notice whenever records are likely acquired by unauthorized people.
  • Eliminates the California requirement that a breached entity provide notice to the California Attorney General.
  • Strips California security breach victims of the right to sue to recover damages.
  • Removes a California requirement for breached entities to provide identity theft prevention and mitigation services to residents whose private information is hacked or exposed.

SOURCE Consumer Federation of California

FFIEC Issues Guidance on Destructive Malware Attacks
https://www.infosecisland.com/blogview/24457-FFIEC-Issues-Guidance-on-Destructive-Malware-Attacks.html
Tue, 14 Apr 2015 14:54:38 -0500

The Federal Financial Institutions Examination Council (FFIEC) released two documents with guidance for financial institutions on mitigating risks from the increase in cyber attacks that compromise user credentials or employ destructive software.

The FFIEC says that comprehensive resilience for organizations depends on their ability to identify security events and quickly minimize any potential damage, to recover data, and to restore operations following attacks involving destructive malware infecting critical information systems.

“To ensure that critical backup data are not destroyed or corrupted by destructive malware, financial institutions and their technology service providers should ensure that recovery strategies address the potential for simultaneous cyber attacks on backup data centers (e.g., mirrored sites ) or the potential for corrupted data to replicate to backup systems,” the first advisory states (PDF).

The second advisory addresses the increase in attacks designed to obtain large volumes of account credentials, such as passwords, usernames, email addresses, and other forms of authentication used by customers, employees, and third parties, as well as the theft of system credentials such as digital certificates.

“The theft of each type of user credential presents distinct risks. Stolen customer credentials may give an attacker access to customers’ account information to commit fraud and identity theft,” the advisory said (PDF).

“Stolen system credentials may also be used to gain access to internal systems and data to further distribute malware or impersonate the financial institution to facilitate fraud such as accessing payment processing systems for automated clearing house transactions.”

The FFIEC had previously issued revised Business Continuity Planning (BCP) (PDF) guidelines for the financial services sector in February, which for the first time included risk mitigation strategies to foster cyber-resilience in the face of escalating attacks targeting the industry.

The revised guidelines are part of the FFIEC Information Technology Examination Handbook (IT Handbook) and include a new appendix on Strengthening the Resilience of Outsourced Technology Services, which underscores an organization’s responsibility to help manage the risks of third-party service providers (TSPs).

The specific risks outlined in the guidelines include those from malware, insider threats, data or systems destruction and corruption, and communications infrastructure disruptions like denial of service attacks.

“This is the first step for the new cybersecurity/risk guidance. Because cybersecurity concerns surrounding third parties and banking institutions’ internal defenses are noted in this appendix, it indicates that the agencies believe these are the most important elements of the CA [cyber-exam] results,” said Stephanie Collins of the Office of the Comptroller of the Currency.

“The OCC, along with the FFIEC member agencies, have and will continue to emphasize the importance of comprehensive resilience and security controls for financial institutions.”

This was cross-posted from the Dark Matters blog. 

The CISO Role in Cybersecurity: Solo or Team Sport?
https://www.infosecisland.com/blogview/24456-The-CISO-Role-in-Cybersecurity-Solo-or-Team-Sport.html
Tue, 14 Apr 2015 10:07:09 -0500

The average length of time in the commercial sector between a network security breach and the detection of that breach is more than 240 days, according to Gregory Touhill, deputy assistant secretary of Cybersecurity Operations and Programs for the Department of Homeland Security. What could happen to your company during that eight-month period? Could your company survive?

This alarming statistic is just one of the reasons why the National Cybersecurity Institute at Excelsior College (NCI) undertook the task of surveying the nation’s chief information security officers. With the support of social media campaigns from Dell cybersecurity and the International Information Systems Security Certification Consortium, also known as (ISC)², NCI was able to collect a statistically significant number of responses across eight industry verticals. Although a formal analysis of the data is still being conducted, some important early revelations have already been identified.

While the overall survey broadly covered the domain, one of the most interesting insights for me came from a high-level response from just three questions:

  • What are the top three items/resources you need to accomplish your job?
  • Which of the following are the top five sources of application security risk within your organization?
  • Which of the following five skill sets best prepares someone to become a chief information security officer?

The survey designers worked hard not to focus just on the technical aspects of the CISO role. To that end, respondents had to choose from nine job resources, 10 security risk options and 11 specific skill sets. They also enjoyed the option of writing in a response. Although every option on each of these three questions had some takers, the most predominant answers were:

  • The top resource needed to accomplish the CISO job is the support of other management leaders.
  • The top source of application security risk is a lack of awareness of application security issues within the organization; and
  • The best skill set for preparing someone to become a CISO is a statistical tie between business knowledge and knowledge of IT security best practices.

Some may find it surprising that neither technical knowledge, technical skills nor the technology itself is an overwhelming favorite for the surveyed professionals. So with that observation, what truths can we learn from this answer set?

To be sure, additional analysis and rigor are needed, but from a personal point of view this early data hints that technical knowledge is not the primary CISO skill requirement. It also tips a hat toward the need for robust internal education as well as a focus on reducing application security risks. For me, it also shows that a good CISO must be a collaborative and communicative teacher across his or her organization. Is it me or do these traits describe a team leader or coach?

If you are a CISO, do these traits describe you? Are education and collaboration a core part of your company’s cybersecurity plan? Have you enabled management to give you the support needed for your own success? Can you describe yourself as the cyber team coach?

This was cross-posted from the Cloud Musings blog. 

The Cost of a Non-Malicious Control System Cyber Incident – More Than $1 Billion
https://www.infosecisland.com/blogview/24455-The-Cost-of-a-Non-Malicious-Control-System-Cyber-Incident--More-Than-1Billion.html
Tue, 14 Apr 2015 09:59:57 -0500

There is a tendency by many in the cyber security community to only care about malicious cyber attacks as opposed to unintentional cyber incidents.

On April 9, 2015, the California Public Utilities Commission fined Pacific Gas & Electric (PG&E) $1.6 BILLION for the September 2010 San Bruno natural gas pipeline rupture that killed eight people and destroyed a neighborhood (there are also 28 federal criminal charges and numerous other fines and penalties). This was not a malicious cyber attack but an unintentional control system cyber incident. The incident occurred following scheduled PG&E maintenance on the local SCADA system that resulted in the over-pressurization of a pipeline with a previously unknown weakness.

Because PG&E could not immediately locate the required manual shut-off valves following the pipe rupture, it has now installed more than 200 gas valves that can be controlled remotely. Remote shut-off valves increase the attack surface. Considering San Bruno was not the first cyber-related pipeline rupture, there is a need to consider cyber and physical security protections for all pipelines using remote, automated shut-off valves. This should include addressing known cyber vulnerabilities that affect pipeline operations, such as Aurora, as well as appropriate control system cyber security policies and procedures.

This was cross-posted from the Unfettered blog. 

Law Enforcement, Security Firms Team Up to Disrupt Simda Botnet
https://www.infosecisland.com/blogview/24454-Law-Enforcement-Security-Firms-Team-Up-to-Disrupt-Simda-Botnet.html
Mon, 13 Apr 2015 12:19:31 -0500

More than a dozen command and control (C&C) servers used by the Simda botnet were seized last week by law enforcement authorities coordinated by Interpol.

Officers from the United States Federal Bureau of Investigation (FBI), the Dutch National High Tech Crime Unit (NHTCU), the Police Grand-Ducale Section Nouvelles Technologies in Luxembourg, and the Cybercrime Department “K” of the Russian Ministry of the Interior took part in the operation. Technical support was provided by Microsoft, Kaspersky Lab, Trend Micro, and Japan’s Cyber Defense Institute.

Authorities disrupted the Simda botnet’s activities on Thursday by seizing a total of 14 C&C servers, ten of which were located in the Netherlands. Other servers were found in the United States, Poland, Luxembourg, and Russia.

According to Interpol, the malware powering the Simda botnet, detected as Backdoor.Win32.Simda, Simda.AT and BKDR_SIMDA, has infected over 770,000 computers in more than 190 countries over the past six months. The United States is one of the most affected countries, with roughly 90,000 new infections being detected in the first two months of 2015 alone.

Tools designed to help Simda victims clean up their computers are available from Microsoft, the Cyber Defense Institute, Trend Micro and Kaspersky.

Read the rest of this story on SecurityWeek.com.

10 Steps to Improve Your Layered Defense Strategy
https://www.infosecisland.com/blogview/24453-10-Steps-to-Improve-Your-Layered-Defense-Strategy.html
Mon, 13 Apr 2015 11:06:34 -0500

By: Isiah Jones

We have a problem in the security community – or maybe within the modern information age in general. That problem is that we see security as a technology, policy, privacy or people issue, instead of an integrated combination of all of them. Despite standards, laws, best practices, lessons learned and new technology, we continue to practice defense-in-depth incorrectly.

We still treat security as an IT problem. We still treat risk and compliance as a paperwork exercise. We lack a true security culture that reaches every member of an organization. We continue to believe that IDS, SIEM and anti-virus are enough. And we still think that audits, compliance checks and tiered operations-center help desks add up to true defense-in-depth, especially while each is operated in the same old silos.

If we truly want to improve security today, we need to take some steps to improve how people see, define and handle these security issues.

This is not an exhaustive list, but in my experience, here are some items we can change immediately to improve our layered defense strategy:

1. CREATE THE ROLE OF THE CSRO
A Chief Security and Risk Officer (CSRO) role should be created, answering directly to the CEO, the Board of Directors, political appointees, etc., as the organization's chief independent voice for all security and risk issues.

This would include emergency, life safety and physical security issues, privacy issues, and cybersecurity issues. The traditional CISO, CSO, deputy CIO or security director model is not working for the current landscape. This role should not be subordinate to the CFO, COO, CIO or CTO, but should replace the CISO, CSO, CRO, etc.

2. ESTABLISH THE CSRO TEAM
Create a cross-functional team, led by the CSRO and/or his or her deputy, that serves as the sole authoritative body for all security and risk decisions, response coordination, accountability, leadership and policy enforcement in the organization.

This team should meet at least weekly. It should have a well-defined charter, each member should have voting power, and its authority should be granted in writing by the highest-ranking official in the organization. At a minimum, the team should consist of the following subject matter experts (SMEs):

  1. Senior IT security SMEs.
  2. Senior Legal counsel rep.
  3. Senior Privacy Officer.
  4. Senior HR rep.
  5. Senior audit and financial rep from the CFO/COO part of the organization.
  6. Senior Physical Security and Life safety manager/SME.
  7. Senior Program/Project Manager and Operations Management rep.
  8. Senior Technical Engineers.
  9. Applicable business area/data/information/system owners as needed.
  10. Key external partners, suppliers and customer stakeholders as needed.

3. TAKE ON AN ACTIVE DEFENSE STRATEGY
The overall strategy must include an offensive element in the form of active defense. This does not mean that the organization needs to outright attack those it believes targeted it. It does mean that honeypots, non-malicious droppers and other methods for studying attackers, obtaining credible attribution, and deterring or derailing adversarial efforts are possible and should be used. Outright attacks should be left to those with existing jurisdiction in the kinetic or physical world, such as the military, intelligence agencies and law enforcement.
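To make the honeypot idea concrete, here is a minimal sketch of a low-interaction decoy listener in Python. The decoy port, fake banner and log file are assumptions chosen for illustration, not anything prescribed in the article:

import socket
from datetime import datetime, timezone

LISTEN_PORT = 2222          # assumed decoy port posing as an SSH service
LOG_FILE = "honeypot.log"   # assumed local log destination

def run_honeypot():
    # Accept connections on a decoy port and record who knocks, as raw
    # material for later attribution and deterrence decisions.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", LISTEN_PORT))
        srv.listen(5)
        while True:
            conn, addr = srv.accept()
            with conn:
                entry = f"{datetime.now(timezone.utc).isoformat()} probe from {addr[0]}:{addr[1]}\n"
                with open(LOG_FILE, "a") as log:
                    log.write(entry)
                conn.sendall(b"SSH-2.0-OpenSSH_6.6\r\n")  # harmless banner to keep the prober engaged

if __name__ == "__main__":
    run_honeypot()

In practice a decoy like this would sit on an isolated segment and feed its log into the same monitoring pipeline as every other sensor.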

4. PRACTICE DEFENSE-IN-DEPTH
All layers of the OSI model, as well as the human layer, must be covered in the defense-in-depth approach of the organization.

For example, network IDS and IPS, web content filtering, web application firewalls, malware analysis tools, vulnerability scanners, host-level IPS with DLP, eDiscovery and forensics tools, encryption at rest and in transit, device-recovery (LoJack-style) tools, SIEM and machine data-mining tools must all be stacked and layered from the application layer down to the physical layer. Mobile application and data security, cloud security with sound SLAs, and wireless protection should also be included.
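One lightweight way to keep track of that stacking is a simple coverage checklist keyed by layer. The layer names below follow the article's framing; the data structure and the example control assignments are illustrative assumptions:

REQUIRED_LAYERS = [
    "application", "presentation", "session", "transport",
    "network", "data link", "physical", "human",
]

# Example mapping of deployed controls to layers; the assignments are made up.
deployed_controls = {
    "application": ["web application firewall", "web content filtering"],
    "network": ["network IDS/IPS"],
    "transport": ["encryption in transit"],
    "human": ["security awareness training"],
}

def coverage_gaps(controls):
    # Return the layers that have no control mapped to them yet.
    return [layer for layer in REQUIRED_LAYERS if not controls.get(layer)]

if __name__ == "__main__":
    for layer in coverage_gaps(deployed_controls):
        print(f"No control deployed at the {layer} layer")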

5. ACCOUNT FOR ADJUSTMENT
An evolving baseline, with daily, weekly and monthly adjustments, will be needed. Study the LDAP, SNMP, DNS, HTTP and other traffic occurring within your networks on a regular basis. Watch admin account behavior and know your actual access control practices, not just the policy on paper. Additionally, establish a request process and change control for business units requiring or requesting various types of software.

Ensure security testing, evaluation and analysis, and test and lock down the host images deployed on assets across the organization to prevent users from installing software they are not authorized to install. It is far easier to target behavior that is unusual for your specific organization than to take an ITIL trouble-ticket approach to every single IDS/IPS and SIEM alert that pops up on the dashboard.

In fact, putting your resources into learning the dynamic behavior of the organization itself is a far better approach to security than wasting them chasing alerts and generating trouble-ticket metrics.
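As a rough illustration of what knowing your baseline can look like in practice, the sketch below flags hosts whose daily DNS query volume deviates sharply from their own history. The log source, counts and threshold are assumptions, not a prescribed tool:

from statistics import mean, pstdev

def build_baseline(daily_counts):
    # Summarize a host's historical daily query volume.
    return mean(daily_counts), pstdev(daily_counts)

def is_unusual(today, baseline, sigmas=3.0):
    # Flag volumes far above what this host normally generates.
    avg, dev = baseline
    return today > avg + sigmas * max(dev, 1.0)

# Example history: per-host DNS query counts over two weeks (made-up numbers).
history = {
    "workstation-17": [410, 395, 430, 388, 402, 420, 415, 399, 407, 412, 418, 405, 401, 409],
    "admin-laptop-03": [120, 110, 133, 125, 118, 122, 130, 127, 119, 124, 121, 126, 123, 128],
}
today_counts = {"workstation-17": 425, "admin-laptop-03": 2400}  # sudden spike on the admin machine

for host, counts in history.items():
    if is_unusual(today_counts[host], build_baseline(counts)):
        print(f"{host}: DNS volume far outside its own baseline; investigate")

The point is not the statistics but the habit: comparing each host against its own history rather than opening a ticket for every generic alert.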

6. LEVERAGE WHITELISTING AND BLACKLISTING
This goes along with baselining but also requires active, global malware analysis. It requires studying indicators of compromise, threat intelligence and incident after-action reports from many organizations, not just your own, and then applying them to your organization's evolving daily, weekly, biweekly and monthly baseline.
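As a minimal sketch of how shared indicators of compromise might feed an allow/block check on files, consider the following. The hash values, staging directory and choice of SHA-256 digests are assumptions for illustration only:

import hashlib
from pathlib import Path

# Example hash sets; real lists would be built from approved software and IOC feeds.
ALLOWLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
BLOCKLIST = {"60303ae22b998861bce3b28f33eec1be758a213c86c93c076dbe9f558c11c752"}

def classify(path):
    # Return 'blocked', 'allowed' or 'unknown' for a file on disk.
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in BLOCKLIST:
        return "blocked"
    if digest in ALLOWLIST:
        return "allowed"
    return "unknown"   # unknowns feed back into the evolving baseline review

if __name__ == "__main__":
    staging = Path("/opt/staging")   # assumed directory of newly arrived software
    for candidate in staging.glob("*"):
        if candidate.is_file():
            print(candidate.name, classify(candidate))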

7. BUILD A VULNERABILITY MANAGEMENT AND PATCH MANAGEMENT PROGRAM
Break out all segments of the network – all hardware, software and user groups – into a daily, weekly, biweekly and/or monthly schedule, so that every segment is patched and scanned for the latest vulnerabilities at least once every 90 days.

Build a point-of-contact list for each segment and hold those contacts accountable for mitigating discovered vulnerabilities and out-of-date patches. This will at least create a collaborative culture in which testing and developing mitigations are the norm, instead of something done only for compliance exercises or audits.
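To show what such a rotation might look like, the sketch below spreads a handful of segments evenly across a 90-day cycle. The segment names, owners and cycle layout are assumptions chosen for illustration:

from datetime import date, timedelta

CYCLE_DAYS = 90   # every segment must be patched and scanned at least once per cycle

# Example segments and points of contact; names and addresses are made up.
segments = [
    {"name": "corporate workstations", "owner": "desktop-team@example.org"},
    {"name": "DMZ web servers",        "owner": "web-ops@example.org"},
    {"name": "database cluster",       "owner": "dba-team@example.org"},
    {"name": "engineering lab",        "owner": "lab-admin@example.org"},
]

def build_schedule(segs, start):
    # Spread segments evenly across the cycle so each gets one patch-and-scan window.
    interval = CYCLE_DAYS // len(segs)
    return [
        {"segment": s["name"], "owner": s["owner"],
         "window_start": start + timedelta(days=i * interval)}
        for i, s in enumerate(segs)
    ]

if __name__ == "__main__":
    for slot in build_schedule(segments, date.today()):
        print(f"{slot['window_start']}: patch and scan {slot['segment']} (POC: {slot['owner']})")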

8. CREATE A COLLABORATIVE WORKING ENVIRONMENT
Leverage online and virtual penetration testing, malware analysis and forensics tools, websites, labs, etc. in the office as the norm, not the exception. Create weekly opportunities for your teams to cross-train in different areas.

Create an organizational team that participates in global competitions, as well as internal attack-and-defend competitions. Create internal wikis and training sessions that allow peers to tutor each other on a weekly and monthly basis.

This enables your existing workforce to keep training even when the budget does not allow for flying off to conferences and formal training. The best teams are collaborative and continuously cross-trained as a matter of culture. This is especially important in large organizations with dispersed teams and duties split across various sections of the organization. Keep the culture collaborative as the norm, not just during an audit or an incident.

9. ALLOW OPPORTUNITIES FOR GROWTH AND SUCCESS
Leadership experiences, training, and rotations through primary and secondary duty positions are often great for the individual and will pay dividends for the organization long term. As with number eight above, do the same for non-technical cross-training needs. This will further allow your technical and non-technical people to cross-train in other primary and secondary duty areas and acquire new skill sets. It also creates a collaborative culture of respect, cross-pollination and regular communication.

10. DEVELOP A CULTURE OF VIGILANCE
Lastly, even if you think all is well, engage an outsider to assess, penetrate and audit your organization both kinetically and via cyber at least twice a year, so that your organization will continue to develop a prepared and proactive culture, from the janitor up to the heads of the organization and their staff.

This was cross-posted from Tripwire's The State of Security blog. 

 

Copyright 2010 Respective Author at Infosec Island