Infosec Island Latest Articles (https://www.infosecisland.com)

Digital Reputation: Can’t Buy it, Gotta Earn It
https://www.infosecisland.com/blogview/24274-Digital-Reputation-Cant-Buy-it-Gotta-Earn-It.html
Thu, 29 Jan 2015 11:49:14 -0600

Can’t buy it, gotta earn it, as the old saying goes. With a few short searches, it is easy to find tweets that have had a significant impact on the reputation of institutions, police departments, online ecommerce properties, and corporations in 2014.

Whether it is a political statement, cultural difference or negative public review exposing competitive weaknesses, I think we all agree that bad news travels fast.

Does this same digital reputational reality exist in the physical world? Yes it does. The mileage and distribution may vary, but, for the most part, cyber can be compared to physical world reputations.

Years ago, before the Internet, if fraud was detected on a credit card, the card was blocked by merchants and banks. Going back further, to the 1800s, the ultimate cost of a negative reputation could include death by hanging for stealing a horse.

So, does a tweet of today compare to a “Wanted Dead or Alive” poster from yesteryear?

Today, the success or failure of many corporations rests on their online reputation. The security breaches of 2014 will only intensify the scrutiny those reputations receive.

Reputation is assigned to IP addresses, online identities, physical identities and other elements of online presence throughout the Internet to help distinguish good, neutral and bad actors.

As an example of negative reputation, if the IPs of a corporate or personal website are compromised by hackers exploiting known vulnerabilities to distribute malware, the end result can be that the site is flagged by web reputation services such as Google Safe Browsing.
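To make this concrete, here is a minimal sketch of how a site owner might check whether a URL is currently flagged, using Google’s Safe Browsing Lookup API (the v4 endpoint shown here postdates this article, and the API key and client identifiers are placeholders you would supply):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; requires a Google API key
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def check_url(url: str) -> list:
    """Ask the Safe Browsing Lookup API whether `url` is currently flagged."""
    body = {
        "client": {"clientId": "example-reputation-check", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # An empty JSON object in the response means no threat list matched.
        return json.load(resp).get("matches", [])

if __name__ == "__main__":
    print(check_url("http://example.com/"))
```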

Cleaning up the situation and reinstating the online presence of a company can be difficult, time consuming and, in some rare cases, irreversible.

As mentioned, in some rare cases negative reputation is irreversible and the result is permanent. For example, in 2003, Microsoft recommended that TCP port 135 be blocked due to the Blaster worm, and in 2004 it went further, recommending that other ports be blocked as well.

At the time, Internet Service Providers (ISPs) as well as large enterprises took significant measures to protect their consumers and networks from certain disruption by the rapid spread of these network-borne viruses.

Today, a decade later, port 445 continues to lead targeted attack vectors according to Akamai, and security infrastructure from home networking equipment to enterprise firewalls has built-in protections against these persistent threats.
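If you want to verify that your ISP or perimeter gear really is filtering these legacy ports, a quick probe makes the point. Here is a minimal sketch; the target address is hypothetical, and you should only probe hosts you are authorized to test:

```python
import socket

# Ports historically abused by Blaster and related network-borne worms.
WORM_PORTS = [135, 139, 445]

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    """Classify a TCP port as open, closed, or filtered (likely blocked)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except socket.timeout:
        return "filtered"   # no answer at all usually means a firewall drop
    except ConnectionRefusedError:
        return "closed"     # host sent RST; reachable but nothing listening
    except OSError as e:
        return f"error ({e})"

if __name__ == "__main__":
    target = "192.0.2.10"  # hypothetical host you own
    for p in WORM_PORTS:
        print(f"tcp/{p}: {probe(target, p)}")
```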

The reputation of IPs, domains and the other supporting elements of an online presence translates directly to the bottom line. In fact, entire industries have been built in the past 15 years just to track reputation and potential sources of negative impact to the online presence of corporations.

Conversely, a consistently positive reputation can have a durable payoff, as brands like Facebook, Google and Apple have proven with the undeniable growth of their well-regarded online services.

Hackers are lurking in the dark corners of the Internet, the DarkNets and inside networks, where they can move easily, impersonate and attack, and degrade the reputation of a company at will.

So how does a company protect its online reputation?

  • Know what assets you’re trying to protect:
    • What intellectual property would be of value to your corporate adversaries?
    • Who could your adversaries be?
    • Where would you find your digital assets if they were to be stolen?
    • How would you know they had been stolen?
  • Monitor for E-Hacktivism against your brand by using various security services.
  • Implement defense-in-depth best practices to protect your online presence and internal networks from compromise.
  • Monitor the abuse@yourdomain.com mailbox as well as the technical contact listed for your domain, as other security experts will use these addresses by default to contact your company before, during or after a compromise.
  • Ensure your company has an incident handling procedure and test it often with ongoing security audits and penetration testing.

Remember, you can’t buy reputation, you gotta earn it.

This was cross-posted from the Norse blog. 

Two-Factor Authentication Transforms Even ‘123456’ Into a Secure Password
https://www.infosecisland.com/blogview/24273-Two-Factor-Authentication-Transforms-Even-123456-Into-a-Secure-Password-.html
Thu, 29 Jan 2015 11:06:52 -0600

Since 2011, the same two passwords have ranked as the most common (and worst) among users. Care to take a guess as to what they are?

You don’t have to be a savvy hacker to figure them out – “123456” and “password” have again topped the list this year. The good news is the prevalence of these two passwords in particular has fallen quite a bit, from 8.5 percent of all passwords in 2011 to less than 1 percent now.

As a password to an individual’s Facebook or Tumblr account, these are probably adequate. The accounts they’re “protecting” are low-profile, unlikely targets, and hackers wouldn’t really gain much from breaking into them anyway. It’s a different story when a user sets up a work-related email or credit card account – much more likely targets of attackers – using these easy-to-crack passwords.

With passwords like these, hackers don’t need brute force or repeated guessing – they barely have to break a sweat. They can simply type in “1-2-3-4-5-6” or “p-a-s-s-w-o-r-d” and they’ll be granted entry on their first try. A gold mine of information suddenly materializes right at their fingertips.

At first glance, network administrators appear to have a few different courses of action to prevent these types of weak passwords and shore up their network security. They could try employee education – teaching their workforce best practices when it comes to setting up their credentials. Or they could provide them with tools that both randomly generate secure passwords and then store them securely for easy recall.
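The generation half of that tooling is simple enough to sketch. A minimal example using Python’s standard library follows; real password managers layer encrypted storage and policy handling on top of this:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k%Tq9@f)Zr2#Lw}d' -- never '123456'
```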

The problem with each of these solutions is that they’re really just temporary bandages – they still don’t account completely for the human factor. An employee could still circumvent these processes, either deliberately, for convenience, or accidentally. Then the network administrator is back to square one – the network security vulnerability still exists.

A stronger solution for IT departments is two-factor authentication. By adding another step to the user verification process, beyond requiring just a password, the security of the account suddenly becomes much stronger. This is why nine in 10 global IT managers said they would plan to use one-time passwords (OTP) in 2014 as part of a two-factor authentication strategy to help improve their network security.
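Time-based one-time passwords (TOTP, RFC 6238) are one common way to implement that second factor. As a rough illustration of what both the server and the user’s token compute, here is a minimal sketch; a production deployment would use a vetted library and allow for clock drift:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step                  # 30-second time window
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server and the user's app share this secret out of band at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```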

So why isn’t every IT department rolling out this seemingly ironclad method of verification across the board? The answer is simple. As is often the case with any issue involving network security, the conflict lies in the balance between convenience, resources and security. Simply, it’s not practical or expedient for every server or file folder to be accessible only through two-factor authentication.

At the same time, selectively protecting only certain files through two-factor authentication could leave an entire network vulnerable. As PC World’s Tony Bradley points out, “It’s like locking every door and window in your house except for one, and hoping a burglar isn’t thorough enough to find the one unlocked entrance.”

Bradley is right. And to elaborate on his point, one of the most glaring “unlocked entrances” a network can have is in its remote access infrastructure. Fortunately, some VPNs come equipped with secure enterprise management capabilities that include support for two-factor authentication and a randomly generated, one-time password sent to a user via SMS.

When faced with this additional hurdle, any hacker hoping to exploit a remote access vulnerability would be even less likely to successfully break into an account, even if a user made the mistake of setting a password to a laughably common one like “123456” or “password.”

This was cross-posted from the VPN HAUS blog. 

Interview with Accuvant co-founder Dan Wilson
https://www.infosecisland.com/blogview/24263-Interview-with-Accuvant-co-founder-Dan-Wilson.html
Thu, 29 Jan 2015 09:47:43 -0600

In 2014 I began my quest to interview some of the most interesting folks in the Colorado security community. The goal of this series is to highlight different perspectives in the region, and have some fun doing it. In March, I interviewed information lawyer Dave Navetta. In April I sat down with Chris Petersen, co-founder of Boulder-based LogRhythm; in June I spoke with Johan Hybinette, CISO for Hosting.com; then in October I met with Jericho from Attrition.org.

For my next conversation I set out to talk with the biggest security shop in town: Accuvant. I reached out through some of my friends over there and set up a meeting with one of the co-founders, Dan Wilson. Originally we were scheduled to meet November 5th… the day of the announcement of the union between Accuvant and FishNet Security. Accuvant and FishNet Security have been the clear leaders in the security VAR and reseller space, and their union is sure to change the landscape of the information security industry. This made the opportunity to talk with Dan even more exciting for me. With all the surrounding activity and the holidays, we were finally able to get together for lunch on January 16th.

We sat at an upstairs table at Hapa Sushi downtown. Our conversation blew way past the scheduled hour, was fast-paced, and covered a lot of topics. Below I have summarized the key points of the conversation and tried to give you some insight into the founding and building of Accuvant – one of the biggest security companies in the world, and one of Colorado’s biggest success stories.

My questions are indicated in bold, with Dan’s responses paraphrased below.

Thanks for getting together Dan. As I mentioned, I am focused on shedding light on the Colorado security community. So to start, how did you end up in Colorado?


I am a Colorado guy. My parents moved here when I was two years old, and I’ve been here ever since. I graduated from CU Boulder, and continue to live on the north side of town.

What’s your background in security?

Dan Burns (Accuvant co-founder) and I both got into IT in about 1992 working for a local Sun Microsystems distributor called Access Graphics. We both worked there for 4-5 years. Sometime around the beginning of 1997 Dan Burns left and joined another company called Netrex. Netrex was one of the first managed security providers, and really one of the first VAR model plays.

Netrex had their start providing security to the automotive industry. They built a SOC (security operations center) to support their work for the automotive industry. It was successful, and they sold the same concept to US West here in Denver.

I remember sitting around a table at Siamese Plate in Boulder, with Dan Burns and another colleague who had moved to Netrex. They were telling me about this security thing, and how it really seemed to be a big opportunity. I remember replying with something like, “I don’t know… I am doing pretty well selling Sun Sparc chips. I don’t really see that market going away.” They twisted my arm, and in the end the three of us opened up the Denver office to support US West.

Over the next few years they really blew up. They grew from those first two offices to 12-15. In 1999 they were acquired by ISS (before IBM bought ISS). I stayed for about a year and a half after the acquisition. So, like a lot of other folks in the security industry, I have ISS somewhere in my history.

Why not stay around there at ISS?

I’ve found in my career that I prefer not to be required to sell a specific manufacturer’s products. Regardless of how good the product is, being told I need to sell a few specific solutions limits the options I can provide customers. When innovation is going so fast, and technology continues changing, having the ability to recommend multiple manufacturers is the best way to serve a customer.

So what came after Netrex?

I moved to Exault, another small security company, very similar to Netrex. I was only there a few months before that company was acquired by VeriSign.

Dan Burns, William Strub, Scott Walker and I had all worked together at Netrex. We believed that there was a big need for a company like Netrex or Exault in the security space. There was so much confusion in the security industry, where customers didn’t know who to trust about technology and solutions. We believed that where Netrex and Exault had gone wrong was in selling so quickly. We knew we could do a lot of similar things to what they had done, while avoiding some of the mistakes they had made, and make a successful organization. We would build a company with a strong services arm, wouldn’t be beholden to a specific product set, and try to make sure we anticipate customer needs.

Accuvant was the fruit of those conversations. It was born in 2002 at Willie G’s here in Denver. The business plan was literally born on the back of a cocktail napkin.

Just the four of you to start?

Yes, just the four initially. I led partnerships, Dan Burns was leading sales, Scott handled operations, and Bill was leading consulting. Shortly after, we brought over another Netrex alumnus, David Bonvillain, to start up our strategic services – what would eventually become Accuvant Labs. We brought him in from the hacker community, a group similar to Jericho’s crowd. David and Jericho would almost certainly have run into each other multiple times.

What about the name Accuvant? Was that the original name? Where did it come from?

Yes, that was the original name we came up with.

At the time we had a methodology we called AIM (assess, implement and monitor). So we looked for a name based on accuracy. We tried a bunch of different prefixes and suffixes, and Accuvant came from combining accuracy and savant. We came up with about 10 different options, took them to our friends and family, and picked the one that got the most votes.

So, it wasn’t exactly scientific method that got us the name. Scott Walker joked that the name is Latin for “This domain name is available.”

How did you fund the company? Did you take capital?

Not really. We took a little bit of friends and family money, but mostly we bootstrapped. For the first couple years we took no salary. And for the few years after that we took minimal salaries. I was fortunate that my wife was working to help our family pay the bills, but a couple of the other guys weren’t so lucky, and they really had to burn through their savings as the company got up and running.

We were able to bootstrap it because it is a low cost business model. Most of the expense comes in compensation, and most of that is variable comp since we’re such a sales heavy organization. We really had a credo, and still do, that we want to reinvest profit back into the business to grow it. A lot of the money that would have gone into our own pockets has gone back into the business.

Were you located downtown right off the bat?

Yeah, we were in the old post office building, at 17th and California. In class D office space. It was a humble beginning. I remember flickering lightbulbs and card tables. And we signed a long-term lease to reduce costs, so even after we started to have good success we were there for a couple years. I remember starting every meeting with a short apology about the conditions, and trying to convince customers that, despite our surroundings, we were doing pretty well.

The company kicked off in 2002 – tell me about the beginning.

When we had first imagined the company, we thought a big part of our model would be selling Check Point and OPSEC support around it. We had years of prior experience and relationships with them, and thought that would give us a leg up. What we experienced was that a ton of competitors came out of the woodwork to offer Check Point, and our history and relationships didn’t really buy us much. We found ourselves mired in a very competitive space, so deals quickly came down to price.

We were fortunate to partner with OneSecure early on. They were one of the very first IPS solutions. The fact that we were able to offer something to our clients that was a little off the beaten path was a differentiator for us. Rather than having to sell the same few products that all the other VARs were offering, we had something different.

NetScreen (later Juniper) was another big early play for us that allowed us some differentiation. SSL VPN technology was just emerging at the time, and we were able to use it to get an edge in the market.

Now, remember my position with the company was as the head of partner relationships. So I can’t help but talk about the technologies we brought on to help us. But we have always been about services, and our services really set us apart. Since the beginning we have focused on delivering a whole solution to our clients, rather than a technology. My focus on technology is mostly based on my role in the company.

What about expansion?

We had a goal from the beginning to open a new office every quarter for the first 3 years. We were able to accomplish that. So by the end of the third year we had 12 offices located around the country. Now, an office in a town might only mean 1 or 2 people working from home, but we at least had a presence in a dozen cities early on. We’ve slowed down since then, but with the pending union with FishNet Security we will have a presence in over 40 cities.

How did that transition from scrappy start-up to successful company go?

We were driving so hard for such a long time; it took a while for us to realize we were there. We had hired quite a few individual contributors in the first several years, but it wasn’t until about 5 years in that we started to bring in help at the management level. Specifically, in the finance area we brought in Ed Wittman. We needed someone with a CFO background to help us out. I think the first thing he told us was that we were badly underpaying ourselves. It was this outsider’s perspective that made the lightbulb appear over our heads… yeah, we are doing pretty well now. We don’t need to continue running the place check to check and hand to mouth.

Now, don’t misunderstand. It’s still extremely important to us that we reinvest everything we can back into the company. This is something that Dan Burns regularly talks about, and is a focus for us.

Did you guys ever get any pressure to move the headquarters out to Silicon Valley?

No, not really. It was (and still is) our goal to build a national presence. In order to work with large national customers we needed to have a presence where they did business, which was a big part of why it was an early objective of ours to grow out to those 12 locations in the first three years. And certainly the Bay Area was one of the first ones we opened. Because we were able to develop that footprint throughout the country, grew organically and never sought to take funding, we never were pressured to move the headquarters to California.

When we sold a majority stake in the company to a private equity firm in 2008, there was no conversation about it even then. We are Denver guys, and we plan to have a strong presence here long term.

One unique thing about Accuvant – I understand that you guys operated for the first decade or so without a CEO. How did that work?

Yes, it wasn’t until a few years ago that we named Dan Burns the CEO. For most of our history we really operated as a four-person committee. And sometimes the committee grew even bigger. It was a really powerful way for us to build the company, with different people focusing on different areas. No single “buck stops here” person. Bill was very technical, Scott had a great head for numbers (and was our de facto CFO before we had one), and Dan Burns and I were handling the sales… his team to customers and mine to partners.

In 2009, before we had a CEO, we were nominated by Ernst & Young for an entrepreneur of the year award. We were at about $400 million in revenue at the time. The judge from Ernst & Young just kept saying, “It’s impossible to build a company that big without a single throat to choke.”

Eventually that proved to be right. As effective and successful as it was, we eventually had some differences of opinion that couldn’t be worked through without a single chief executive. We all had budget control, and we all had a ‘highest priority’ which didn’t always align. Getting a single CEO allowed us to ensure an aligned strategy throughout the company.

Can you share any of Accuvant’s missteps?

Yeah, sure. We did a deal to acquire a company called Ciphent in 2010. The thinking was that it would ramp up our managed services capability, would give us a presence in the northeast that we lacked, and would help our federal practice. But the Ciphent deal itself was not a misstep. It was a 1+1=3 opportunity. The misstep was in our integration effort. We encountered some unexpected hurdles that I’m sure many companies face when going through their first integration. Our two cultures were drastically different, which we should have realized earlier in the process and that definitely had a short-term impact on the business.

For a long time afterward, the only lesson we took from it was to never do another deal. But I believe our experience with Ciphent has really informed how we are handling the integration with FishNet Security right now. The Ciphent acquisition was obviously a different scale... they were a 40-person organization, versus now with FishNet Security, where it’s two 650-person organizations coming together. But that experience taught us that it’s not reasonable to expect people to just come together and use their spare time to figure out the details. We are being very deliberate, bringing in outside help to assist with the integration, and putting some of our best people on it full time.

I’ve had the chance to talk to a number of folks in the industry about the union with FishNet Security, and what the resultant company will look like. There is some skepticism out there that the end result will be greater than the sum of their parts. Obviously you guys wouldn’t have made this move if you agreed. Why do you believe the two companies are stronger together?

Well, clearly the proof will be in the pudding. We strongly believe that’s true, and there are an awful lot of other smart people who agree and supported this combination.

There are some real good reasons to think this is a strong union. The overlap of key clients between our companies is extremely small, about 3%. To be clear about what that means, it doesn’t mean that we don’t have a presence in those companies; it’s that 80-90% of their business goes to one of the companies or the other. We see that in security, companies want to have a trusted advisor, and will generally stick with one partner for most of their business.

I have heard the Accuvant/FishNet Security union compared to Coke and Pepsi joining forces – where people simply have a preference for one business over the other. In such a case, the consumer who liked one company over the other might lose out on their choice. Do you believe that is a risk to customers?

That concern makes complete sense. The answer is that this is a relationship business. Of course there will be adoption of process improvements from both Accuvant and FishNet Security into the new company. Things like client invoicing and SOW creation will see improvements as we utilize the best from both companies. However, from an end-client perspective, it all comes down to the people you work with. Our clients will continue to work with the people who have made their experience a success.

The reason both FishNet Security and Accuvant have been successful in those accounts is the people who service the clients: the sales, pre-sales technical and delivery people who do a great job. Those people will still be the face, hands and feet of the company.

I don’t think it’s unreasonable to say that the standardization of the back-office processes will deliver a much smoother experience to the customers, with no loss in the high-touch experience from the professionals they’ve come to trust.

Both Accuvant and FishNet Security have invested heavily in building unique offerings. For example, Accuvant’s office of the CISO under Jason Clark. They are investing in research and putting together tools to offer strategic assistance to CISOs. Or what Ryan Smith’s team is doing in R&D. They are looking to solve challenging technical problems for the industry. Exploit development and research are among their focus areas, and it really makes us much stronger as a company.

On the other hand, FishNet Security has invested in developing a very strong Identity and Access Management (IAM) practice. We at Accuvant decided not to focus intensely in that area because FishNet Security was already doing it so well.

The larger organization will also allow us to offer these services to both sets of clients, and going forward, expand on the special resources we can offer.

What are the biggest challenges of the company?

It’s definitely finding the right talent and finding enough of it. The supply of skilled security professionals has not been able to keep up with the demand, and the shortage is continuing to get worse.

What are you guys doing to address the talent shortage?

David Brown is running a program we recently created, which is a sort of cybersecurity boot camp. We bring in recent graduates of security college programs, or former military, and give them a three-month training program that equips them to step into a role within Accuvant. The first portion of the training is general security training, where they learn about each of the disciplines. For the remainder, they are placed on a track based on their future job: sales engineering, pen testing, risk management, managed services, etc. We just graduated our first class of 10, and they are now starting their jobs in the field. We will probably look to increase the throughput of the program once we prove it out.

We’ve realized that we need 500 additional people by the end of next year. Clearly this model won’t scale up to provide that many immediately, but it helps fill the gap.

For those who are thinking of jumping into the security field, what do you recommend? What’s the right first step?

Get a really strong understanding of one of the key disciplines. Data science is one of the areas where there is a huge shortage right now. Knowing how to manipulate data to pull out the information you want is a big need in security. It’s not even necessarily a security skillset, but it’s one we lean on heavily.

Another area I’d recommend is developing a strong background in risk management. Figure out how to talk intelligently about how security relates to the company itself.

In terms of how to get started on that path… that’s simple; come work with us. Or, pick up a CISSP or other security book. Start learning and asking questions.

We are always looking for people to help with event management and correlation. Especially product-specific skill-sets. Go learn Splunk or QRadar, and learn how to analyze and act on the various log sources you’re pulling in.

We are also seeing a big push for all of our highly technical people to have the ability to code. Even for those in other disciplines, the ability to code has become an important requirement.

Last question for you; what are CISOs doing wrong out there? What can we do better?

There is a wide variety of maturity and career stages among CISOs. A lot of them are thinking programmatically. But there are also those who have a hard time getting out of the tactical, and we want to help them move their thinking up a level to the strategic. A tactical CISO might see that they have X amount of budget and shuffle resources around playing whack-a-mole with the issue of the day. They need to be enabled to go back to the board, tell the stories that will set a vision for where the program should be, and secure the resources to get there.

Thanks so much to Dan Wilson for taking the time to talk about Colorado security with me. I look forward to continuing this series and shining a light on more interesting members of the Colorado security community. If there’s someone you’d like to see spotlighted, drop me a note and I’ll see what I can do.

Suits and Spooks DC 2015: The Agenda - Last Chance to Register
https://www.infosecisland.com/blogview/24272-Suits-and-Spooks-DC-2015-The-Agenda-Last-Chance-to-Register.html
Thu, 29 Jan 2015 09:26:00 -0600

Suits and Spooks DC is just one week away! The event will be held at the Ritz-Carlton, Pentagon City on February 4-5. Due to multiple sell-out events, we have expanded our attendee capacity to 200, but the event is again close to being sold out.

Take a look at the agenda to see why this will be one of the hottest events of the year. Registration for the two full days is just $725 for industry and $325 for government employees. (Click here to launch the full interactive agenda site.)

Thought Experiment: Mandatory Online Banking Security Standards
https://www.infosecisland.com/blogview/24271-Thought-Experiment-Mandatory-Online-Banking-Security-Standards.html
Wed, 28 Jan 2015 12:12:14 -0600

Banks are required by law to follow government regulations; these subject the banks to specific requirements, restrictions and guidelines. The end goal is, among other things, transparency.

What about setting specific requirements for banking website security? Pew Research Center statistics reveal that 51% of U.S. adults bank online and 35% of cell phone owners bank using their mobile phones. That study was performed in August 2013; as of January 2015, those numbers are almost certainly higher.

I count myself as one of those who perform bank transactions online and from a mobile phone. As a security professional, this has me wondering: why aren’t there federal standards for online banking security?

A cursory check of the security of my bank’s online site shows me that it lacks strong encryption and cipher standards. That lack isn’t enough to make me stop using website banking, but as a customer I would feel safer knowing stronger crypto was in place. As an information security professional, I would feel safer knowing there was a set of common standards that all online banking sites were required to meet.
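For the curious, a cursory check like the one described above can be scripted. This minimal sketch prints the protocol version and cipher suite a server negotiates with a modern client; the bank domain is hypothetical, and dedicated scanners probe far more thoroughly:

```python
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> None:
    """Print the negotiated TLS version and cipher suite for a site."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            version = tls.version()            # e.g. 'TLSv1.2'
            name, _, bits = tls.cipher()       # e.g. 'ECDHE-RSA-AES128-GCM-SHA256'
            print(f"{host}: {version}, {name} ({bits}-bit)")

inspect_tls("www.example-bank.com")  # hypothetical bank domain
```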

This brings me back to my thought experiment: just as banks are regulated to meet specific financial criteria, so they should be required, by regulation, to meet strong encryption standards for online banking.

Once a set of security standards becomes Federal standard regulation, banks would be regularly checked to verify they adhere to those standards, and if they fail to meet the minimum standard, then heavy fines would be levied for non-compliance.

What would be the requirements for strong site cryptography and who would make the final call on those requirements? It seems logical it could fall under the non-partisan agency NIST.

What would you consider as strong standards? That would be a spirited debate and decision for all involved, but I would like to see all bank sites required to support 2FA.

What do you think about mandatory online banking security standards? Please leave a comment with your thoughts.

Happy and Safe Computing!

About the Author: Brian M. Thomas (@InfoSec_Brian) is a passionate professional with 17 years’ experience providing Tier-4 data solutions in all disciplines of IT including Network/Server administration and Information Security. Proven experience in HIPAA, ISO 27001 and PCI compliance.

This was cross-posted from Tripwire's The State of Security blog. 

Defeat The Casual Attacker First!!
https://www.infosecisland.com/blogview/24269-Defeat-The-Casual-Attacker-First.html
Wed, 28 Jan 2015 10:15:23 -0600

I have not done a philosophical security blog post for a long time – and now I was suddenly inspired to write one while installing – rather, replacing with an HD version – security cameras at my house.


Given the house we have, I can imagine a physical security setup where every possible entrance (including second floor windows) and every camera is in the view of at least one security camera. That will take between 12 and 16 cameras. Coupling this with tamper-proof camera enclosures and protected cables, as well as smartly placed indoor cameras and a couple of hidden devices, one can … waste a lot of money.

Am I doing this? No, I am not! I just want coverage of common ingress points [into the house] and a degree of assurance that a casual “attacker” (i.e. burglar) will be caught on camera at least once, with the images then available to the police.

My focus here is a commodity attack, not a targeted one. Making a regular house resistant to a dedicated burglar is an impossible affair, and the law of diminishing returns kicks in hard – and early. (I also have a dog — and not just any dog …)


In any case, why all this? I hear that many organizations have developed a sudden, vendor-marketing-infused interest in fighting advanced and targeted attacks. But guess what? More than a few of said organizations actually aren’t that good at fighting basic, commodity attacks – and they are NOT improving.

So, it is a free country and it is [in most industries] legal to really suck at infosec / “cyber.” However, I find it highly illogical and, in fact, wasteful, to attempt stopping or detecting an advanced attacker before you managed to succeed with a common one.

In the same vein, I worry about people who are “concerned about targeted attacks” but lack any ability to tell that “yes, this attack IS in fact targeted” and, moreover, lack moderately effective defenses against opportunistic attacks in the first place.

So, yes, advanced attacks ARE real. Persistent threats ARE real. 0-day-wielding state-sponsored superhackers ARE real. But, by god, why focus there if you can barely detect a more traditional intrusion, one that utilizes mid-1990s style tools, exploits and tactics!?

Focus on improving your security maturity – not on randomly picking high-maturity tools (like NFT) and practices (like hunting) and then declaring success! Before you buy another “anti-advanced-anything” box, THINK – are you handling the basics well already, and if YES, what is the best direction for improvement from your current position?

This was cross-posted from the Gartner blog. 

New Legislation on Threat Intelligence Sharing May Have a Chance
https://www.infosecisland.com/blogview/24268-New-Legislation-on-Threat-Intelligence-Sharing-May-Have-a-Chance.html
Wed, 28 Jan 2015 05:00:00 -0600

After years of political wrangling, apprehensions about corporate liability, and a host of data privacy objections, Congress finally moved on the passage of some key cybersecurity legislation in December.

But the four bills that were approved last month did not address all of the top concerns, namely the creation of an information-sharing platform that would enable better information exchange about cyber-based threats between the public and private sectors.

Similar legislation had died in the Senate last year, but President Obama opened the door for new proposals in his recent State of the Union address, and key Congressional committee members in both the House and Senate are planning to introduce new legislation soon.

“The president’s proposal is an important first step in developing that legislation,” said chairman of the Senate Homeland Security and Governmental Affairs Committee, Senator Ron Johnson, who scheduled a hearing for this Wednesday on the need for information-sharing.

“Cybersecurity is not a Republican or Democratic problem. It’s a serious problem that both parties have the same self-interest to solve before something really devastating happens, like an attack against our electric grid.”

The main obstacle to the passage of information-sharing legislation is the concern that businesses may share too much private information about their customers with the government, an issue that has some civil liberties groups lined up to oppose any such legislation.

“We don’t think any bill is necessary,” said Gabe Rottman of the American Civil Liberties Union. “The high-profile hacks we’re hearing about tend to be cases where the companies need to be more careful in defending their own systems. An information-sharing bill would not have stopped any of those hacks.”

Obama had threatened to veto previous iterations of information-sharing legislation based on similar concerns, but the administration’s willingness to discuss how these obstacles can be circumvented to come to a consensus gives supporters confidence that the White House and Congress can come to an agreement.

“We think it’s very important that the administration wants to get engaged and wants a seat at the table to discuss the bill with lawmakers and the private sector,” said Matt Eggers of the U.S. Chamber of Commerce.

“It’s good that the administration has made the cybersecurity information-sharing bill a priority. Once the Senate and House pass the bill and send it to the president’s desk, we would expect that he would sign it.”

The most significant piece of legislation passed in December was S. 2519, the National Cybersecurity Protection Act of 2014, which was designed to further enhance the Department of Homeland Security’s ability to collaborate with the private sector on security issues through information-sharing efforts via the National Cybersecurity and Communications Integration Center (NCCIC).

While this was a significant step in formalizing the processes for the sharing of cybersecurity intelligence, the bill did not address issues raised by the private sector regarding providing immunity against lawsuits for private companies that share security threat and data breach information with the federal government, one of the other big obstacles to the passage of similar legislation.

Congress also approved S. 2521, the Federal Information Security Modernization Act of 2014, which updates the 2002 Federal Information Security Management Act in order to better organize federal government cybersecurity management efforts under the authority of DHS.

The other two bills passed last month included S. 1691, the Border Patrol Agent Pay Reform Act of 2014, which in part gives DHS the ability to expand the department’s cybersecurity workforce, and H.R. 2952, the Cybersecurity Workforce Assessment Act, which requires DHS to carry out regular assessments of that cybersecurity workforce and provide updates to Congress on its status.

Cross-posted from Norse's DarkMatters Blog

Google Says It’s Not Practical to Fix Flaws in Pre-KitKat Android
https://www.infosecisland.com/blogview/24267-Google-Says-Its-Not-Practical-to-Fix-Flaws-in-Pre-KitKat-Android.html
Tue, 27 Jan 2015 14:23:39 -0600

Researchers reported earlier this month that Google was no longer patching vulnerabilities affecting the WebView component in Android Jelly Bean (4.3) and prior. The search giant has justified its decision by saying that it’s no longer practical to apply patches to old branches.

Over the past months, security experts identified several vulnerabilities in the WebView used by the Android Open Source Platform (AOSP) browser shipped by default with versions of Android older than KitKat (4.4). After reporting the issues to Google, researchers were informed that the company is no longer developing patches for older versions of WebView, but pointed out that those who report bugs can submit patches for consideration.

Some researchers believe the company should not neglect these versions of the operating system because, according to Google's own statistics, approximately 60% of devices still run Android Jelly Bean, Ice Cream Sandwich, Gingerbread, and Froyo.

“The news of Google not only abandoning security updates to its WebView in version 4.3 and below, but also the lack of transparency of doing so, is proof that device makers won’t be responsible for security indefinitely, letting the weight fall on corporate IT/Security departments in their stead,” Domingo Guerra, president and co-founder of Appthority, told SecurityWeek when the news broke. “With Android market share being #1 worldwide, it is hugely concerning, and surprising, that Google is leaving such a large install-base out in the wind.”

Read the rest of this story on SecurityWeek.com. 

Do You Want “Security Analytics” Or Do You Just Hate Your SIEM?
https://www.infosecisland.com/blogview/24266-Do-You-Want-Security-Analytics-Or-Do-You-Just-Hate-Your-SIEM.html
Tue, 27 Jan 2015 11:46:39 -0600

Now that I’ve taken a fair number of “security analytics” client inquiries (with wildly different meanings of the phrase), I can share one emerging pattern: a lot of this newly-found “analytics love” is really old “SIEM hatred” in disguise.

A 101% fictional and slightly over-dramatized conversation goes like this:

  • Analyst: you said you wanted security analytics, what specifically do you want?
  • Enterprise: I want to collect logs and some other data, correlate, analyze, report.
  • Analyst: wait a second … that is called “SIEM”, SIEM does that!
  • Enterprise, passive-aggressively: Well, ours doesn’t!!!
  • Analyst: have you tried to .. you know… actually use it?
  • Enterprise: as a matter of fact, we did – for 5 years! Got anything else to ask?!

Upon some analysis, what emerges is a real problem that consists of the following:

  • Lack of resources to write good correlation rules, tune them, refine them and adapt them to changing needs (a minimal example of such a rule appears after this list)
  • A degree of disappointment with out-of-the-box rules (whether traditional or baseline-based) and other SIEM content
  • Lack of ability to integrate some of the more useful types of context data (such as IdM/IAM roles and user entitlements, as well as deeper asset data)
  • Lack of trust that even well-written rules will let them detect attacker lateral moves, use of stolen/decrypted credentials, prep for data exfil, creating backdoors, etc
  • Occasionally, a lack of desire to understand a multitude of their own monitoring use cases, but instead to buy a box for each problem.
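To ground the first pain point, here is a minimal sketch of the kind of correlation rule teams struggle to write and tune: flag a successful login preceded by a burst of failures from the same source. The event field names are assumptions about a normalized log schema; the real work is tuning the window and threshold to your environment:

```python
from collections import defaultdict, deque

WINDOW_SECS = 60   # sliding window; tuning this is the unglamorous hard part
THRESHOLD = 5      # failures within the window that make a success suspicious

recent_failures = defaultdict(deque)  # source IP -> timestamps of failed logins

def process_event(event: dict) -> None:
    """Correlate auth events: N failures then a success from one source."""
    q = recent_failures[event["src_ip"]]
    if event["action"] == "login_failure":
        q.append(event["ts"])
        while q and event["ts"] - q[0] > WINDOW_SECS:
            q.popleft()                       # expire failures outside the window
    elif event["action"] == "login_success":
        if len(q) >= THRESHOLD:
            print(f"ALERT: possible brute-force success from {event['src_ip']}")
        q.clear()
```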

So, a few years of such SIEM unhappiness have borne a result … UBA (user behavior analytics). Some vendors’ UBAs are “SIEM add-ons” (since they rely on SIEM for collection, normalization and storage), while others are more like a “narrower but smarter SIEM” (since they collect a subset of SIEM logs and maybe other data).

A few can work with DLP and not just a SIEM (as we all know, tuning DLP is often – imagine that! – a bigger pain than tuning a SIEM) in order to create additional insight from SIEM and DLP outputs. As I hypothesize, UBA is where a broader-scope security analytics tooling may eventually emerge.

Now, do you need/want analytics or do you just hate your SIEM?

This was cross-posted from the Gartner blog. 

Looking Logically at Legislation
https://www.infosecisland.com/blogview/24265-Looking-Logically-at-Legislation.html
Tue, 27 Jan 2015 11:44:09 -0600

There's a lot of fuss around the recent White House proposal to amend the Computer Fraud and Abuse Act, and some level-headed analysis of it. There's also a lot of defensive and emotional reaction to it ("ZOMG we're going to be illegal!").

First of all, everyone take a deep breath. The reason why proposed changes are made public is to invite comment. This is a really good time to step up and give constructive feedback, not just say how much it sucks (although a large enough uproar will be taken into account anyway). Try assuming that nobody is "out to get you" -- assume that they're just trying to do the right thing, as you would want them to do for you. Put yourself in their shoes: if you had to figure out how to protect citizens and infrastructure against criminal "cyber" activity, and do it legally, how would you do it?

There's another really important point here, beyond the one that if you don't like it, suggest something more reasonable. Jen Ellis talks about the challenge of doing just that in her great post. And I agree with Jen that an intent-based approach may be the most likely avenue to pursue, although proving intent can be difficult. I'm looking forward to seeing concrete suggestions from others. As I've pointed out before, writing robust legislation or administrative rules is a lot like writing secure code: you have to check for all the use and abuse cases, plan for future additions, and make it all stand on top of legacy code that has been around for decades and isn't likely to change. We have plenty of security people who should be able to do this.

If they can't -- if there's no way to distinguish between security researchers and criminals in a way that allows us to prosecute the latter without hurting the former -- then maybe that's a sign that some people should rethink their vocations. (It also explains why society at large can't tell the difference, and doesn't like security researchers.) After a certain point, it's irrational to insist on your right to take actions just like a criminal, force other people to figure out the difference, and not suffer any consequences. If you want to continue to do what you're doing, step up and help solve the real problem.

This was cross-posted from the Idoneous Security blog. 

End of Life
https://www.infosecisland.com/blogview/24264-End-of-Life.html
Tue, 27 Jan 2015 08:40:07 -0600

This topic has started to come up again as we go through PA-DSS research on applications and find that the listings contain operating systems that are at or past end of life (EOL).

The example below is only one of many listings in the PA-DSS database maintained by the PCI SSC that has this condition.  Unfortunately, when you export the PA-DSS database, it does not include the platform and version number information fields, so we have limited ability to audit what the database actually contains in this regard unless we encounter it as we have with this application.

As this listing shows, this application is still acceptable for new deployments on Windows XP SP3 and IBM System i AS/400 6.1. Thanks to all of the media reports this past year, we should all be aware that the standard desktop version of Windows XP is past EOL. V6.1 of IBM i5/OS will reach EOL on September 30, 2015, so it has a very short remaining lifespan for this entry.
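This is exactly the kind of check the exported database makes hard to automate. If the platform fields were included in the export, flagging EOL entries would be a trivial script; a minimal sketch, with EOL dates taken from the vendors’ published schedules:

```python
from datetime import date

# Published end-of-life dates for the platforms named in this listing.
EOL_DATES = {
    "Windows XP SP3": date(2014, 4, 8),
    "IBM i5/OS 6.1": date(2015, 9, 30),
}

def flag_eol_platforms(listing_platforms, today=None):
    """Return the platforms in a PA-DSS listing that are at or past EOL."""
    today = today or date.today()
    return [p for p in listing_platforms
            if p in EOL_DATES and EOL_DATES[p] <= today]

print(flag_eol_platforms(["Windows XP SP3", "IBM i5/OS 6.1", "Windows 7"]))
```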

[Screenshot: PA-DSS listing entry for VeriFone PAYware Transact]

So what is going on? As near as I can tell, this is a breakdown in the PA-DSS validation process involving the Council and the vendors.

The way the process is supposed to work, the vendor must re-validate their application annually, or whenever any of the following occur:

  • All or three or more PA-DSS Requirements/sub-Requirements are affected, not including Requirements 13 (maintain an implementation guide) and 14 (have a training program for employees, customers, resellers and integrators);
  • The Payment Application’s functionality or half or more of its code-base is changed; or
  • Addition of a tested platform/operating system to include on the List of Validated Payment Applications.
In order to re-validate, the vendor will incur a cost both to have the application reassessed by their PA-QSA and then have the application listed or existing listing updated on the PCI SSC Web site in the PA-DSS database.

However, what this does point out is a flaw in the process at the Council’s end.  One would think that the Council should have removed Windows XP from this entry when the revalidation was posted since XP was long past EOL.

This also points to a flaw on the vendors’ part – PA-DSS validating applications on too few platforms when they initially validate or re-validate those applications. I get that PA-DSS validation is not inexpensive, both for the assessment process and for registering the applications with the PCI SSC. However, this is not the time or place to cut costs, particularly if your application runs on Windows: Microsoft introduces a new version of Windows, and the application vendor does not PA-DSS validate the application for that new version.

Continuing on with our example, VeriFone re-validated their PAYware Transact probably on or around November 11, 2014, based on the current Revalidation Date of November 11, 2015. That date is well after the XP EOL date back in April 2014, so why did the vendor not re-validate their solution for a newer version of Windows? My guess, having been involved with these re-validations, is that the vendor wanted to re-validate their listing for i5/OS v6.1, not Windows XP. I would additionally assume that VeriFone is validating a new version of PAYware Transact for Windows 7/8 and possibly i5/OS v7. As a result, for VeriFone there is no reason to validate v3.2.4 for the newer Windows versions.

Vendors seem to forget that if their application runs on Windows 7 or 8 64-bit, it will likely be run by some customers on the Windows Server versions as well and vice versa.  I have seen this happen most often with vendor sales people who want to close the sale and know that the application will run on any recent version of Windows even though it was not necessarily PA-DSS validated for those versions of Windows.

This leads to what we can face as QSAs when dealing with PA-DSS validated applications.  The first are the clients that insist because Windows XP is still listed for PAYware Transact on the PCI SSC PA-DSS database, that they are allowed to continue running Windows XP like they always have done with the application.  While the PCI DSS does not prohibit the running of EOL operating systems, anyone doing so must have compensating controls implemented to mitigate the risks of running that EOL OS.  It is those compensating controls that send most clients over the edge because they are typically very onerous to implement and maintain if such compensating controls can even be developed.

The more common condition is all of you running PAYware Transact on Windows 7/8, Windows Server 2008/2012 or even i5/OS v7.  Unfortunately, you are not running this application in the PA-DSS validated environment as listed in the PCI SSC PA-DSS validated applications database.  Since it was never tested on those platforms for validation, the rules state that your QSA cannot rely on the PA-DSS validation for the application.  As a result, a QSA will need to test the application to ensure it is secure just as they would any application that is not PA-DSS validated.  We encounter this most often with Windows, but are starting to encounter this more and more with Linux variants as well.

But where it really gets messy and nasty is when a QSA assesses a PA-DSS validated application running in such an environment and finds one or more issues with the application indicating it should never have been PA-DSS validated. When that happens, it is the QSA’s client’s responsibility to contact the PCI SSC with their concerns and evidence of the issues that call the PA-DSS validation into question.

So what would I like to see from this discussion?

  • The PCI SSC needs to do more with their PA-DSS validation database so that EOL OS environments get removed from listings, or at least flagged as EOL under “Acceptable for New Deployments”.
  • If a PA-DSS validated application comes under scrutiny for possibly not complying with the PA-DSS, the listing should be flagged as “Under Review” and treated similarly to how the Council treats QSACs that are “In Remediation”. Implementations could proceed, but the issues under review must be disclosed to customers and their QSAs/ISAs so that they can be assessed and compensating controls put into place.
  • Vendors need to validate their applications for all operating systems, not just the ones that were around when the application was originally validated, if it is to remain in the “Acceptable for New Deployments” category.
  • Vendors need to validate their applications for all operating systems that they support, not just one version of Windows and/or one version of Linux.
  • If the operating system does not influence the security of a properly configured application, then the PCI SSC should consider some sort of indication that any current version of an OS is acceptable, versus only the versions tested.

This was cross-posted from the PCI Guru blog. 

The State of Obama Cybercare
https://www.infosecisland.com/blogview/24262-The-State-of-Obama-Cybercare.html
Mon, 26 Jan 2015 09:43:00 -0600

By: Ken Westin

As expected, President Obama briefly mentioned his cybersecurity proposals to Congress last night. First, I think we should take a moment and appreciate the fact that cybersecurity made it into the State of the Union address to begin with.

Over the past few years, we have seen cybersecurity move from the realm of IT into the boardroom and now onto the political stage. The reason for this is clear—the resiliency, security and safety of the Internet is critical to our economy and the progress of our society as a whole. It is our future.

I believe the spirit and intentions behind the Cyber Intelligence Sharing and Protection Act (CISPA) are good. However, the devil is in the details, and the proposal itself may be premature without established plans for implementation.

BIG GOVERNMENT, BIG DATA, BIG PROBLEMS
Currently, there are various private companies already sharing threat intelligence data with each other, such as the financial services industry through FS-ISAC and the Soltra initiative. Meanwhile, many in the industry wonder what the government would bring to the table in terms of useful data. The U.S. government does not have a particularly stellar record when it comes to developing and maintaining large-scale systems for information sharing.

Beyond developing a system that can handle large amounts of data, there is the question of the government’s ability to maintain and secure it. If private industry shares information with the government, some of this information could lead to further compromise and embarrassment if it falls into the wrong hands.

Regardless of your stance on Snowden, the fact that information meant to be top secret was so easily exfiltrated by a contractor does not inspire confidence in our government’s ability to secure its own systems. Along the same lines, in many cases, information that is shared with the government regarding a data breach may also include personal information of customers.

What will the limits of government access to this data be? Before we begin the discussion about what information we can share with the government regarding incidents, we should clearly establish what the government can collect in the first place.

IS SHARING CARING?
Collecting information regarding breaches is one thing, but being able to make use of it is another. The three-letter agencies already lack resources on the cybersecurity front – the additional data and reporting can have a significant impact on workload.

In addition to the reporting and sharing of information, there should be help for businesses to secure their infrastructure in the first place. The current proposals are like trying to solve the problem of traffic fatalities with more ambulances and sharing photos of accident scenes, when what is needed is safer cars, safer roads and better drivers.

What is the motivation for businesses to implement better security practices in their environments to avoid breaches to begin with? The government should assist businesses with not only information sharing about attacks but also guidance on how to better secure their networks.

CAN’T SHARE WHAT YOU DON’T HAVE
A key piece of the proposal hinges on intelligence gathering from businesses that are compromised, which leads us to a “chicken and egg” scenario: organizations lacking even the most basic security controls fail to gather information from their systems at all. One of the first things the FBI or Secret Service requests when coming on site to assist companies is access to log data. I have heard horror stories from agents in the field who arrive on site to discover little to no log collection, or that the data they needed had already been dumped.

One promising proposal is the Personal Data Notification & Protection Act – a requirement that risk assessments include logging data for at least the prior six months for all systems containing sensitive personal data. Although we have similar requirements at the state level and in many regulatory compliance frameworks, this would help set a precedent at the national level for businesses to ensure basic logging for incident response.
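To make that concrete, here is a minimal sketch of how a Windows administrator could check how far back a system’s logs actually reach (the 180-day threshold and the Security log are example choices of mine, not anything the bill specifies):

$oldest = Get-WinEvent -LogName Security -Oldest -MaxEvents 1    # oldest surviving Security event
$daysRetained = (New-TimeSpan -Start $oldest.TimeCreated -End (Get-Date)).Days
if ($daysRetained -lt 180) {
    Write-Warning "Security log only reaches back $daysRetained days (target: 180)"
}

A scheduled check like this, run across all systems holding sensitive personal data, is the kind of basic hygiene the proposal would push businesses toward.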

Although I believe it is great that the President is taking an active interest in cybersecurity, my hope is that real change is made not just in how we respond to breaches, but also in how we prevent them and decrease our response time to mitigate the damage. The government has an opportunity to help businesses as they struggle to secure their infrastructure, but it needs to focus more effort on education and awareness.

This was cross-posted from Tripwire's The State of Security blog.

It was originally published January 23. 

Copyright 2010 Respective Author at Infosec Island]]>
Bringing Metasploit Exploits to Life with PowerShell https://www.infosecisland.com/blogview/24261-Bringing-Metasploit-Exploits-to-Life-with-PowerShell.html https://www.infosecisland.com/blogview/24261-Bringing-Metasploit-Exploits-to-Life-with-PowerShell.html Mon, 26 Jan 2015 09:39:14 -0600 You have a remote shell to a Windows box in Metasploit, very cool, but what can you do? Granted Metasploit is loaded with features, options and tons of post modules (which are all amazing by the way), but what if you want to do something a bit more custom? Say, like adding custom pop-ups and even voice, but you have no clue about programming in Ruby.

How about PowerShell?

Let me start this out by saying I am no programmer. Sure I have futzed around with various languages over the years, and even supervised programmers at a couple jobs – but trust me, I am not a programmer. Secondly, I never would have been able to do this without one of the Metasploit gods – Mubix over at Room362.com. Thanks Mubix!

Talking with a friend about exploit capabilities, we came up with a thought: wouldn’t it be cool if, when a machine was exploited during a red team pentest, it popped up a Windows error message on the screen saying, “Knock, Knock Neo”? You know, from the Matrix movie.

And wouldn’t it be cool if you could get the computer to speak to said victim in a woman’s voice saying the same thing? What if, as long as we are custom creating our Matrix-ish payload, we also wanted to pop up a picture on the target system of the green text filled Matrix screen? I mean wouldn’t that be cool too?

Well, with PowerShell, you can!

If you look at Mubix’s “Powershell Popups + Capture” article, you can see the step-by-step process that we will follow.

Create a text file containing the PowerShell commands; I used something like this:

# Minimize all open windows, then pause for dramatic effect
$shell = New-Object -ComObject "Shell.Application";
$shell.MinimizeAll();
Start-Sleep -s 2;
# Load Windows Forms and pop up an Abort/Retry/Ignore (style 2) message box
[System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms");
[System.Windows.Forms.MessageBox]::Show("Knock, knock, Neo." , "Status" , 2);
# Speak the message through the built-in Windows speech API
(New-Object -ComObject SAPI.SPVoice).Speak("Knock, Knock Knee Oh, the Matrix has you!");
# Open the previously uploaded image in the default viewer
Invoke-Item c:\test\matrix.jpg;

The first two commands clear the user’s screen by minimizing all open windows, and the script then pauses for a couple of seconds for dramatic effect. The next pair loads the Windows Forms assembly and pops up a Windows (Abort, Retry, Ignore) message box with the movie message, “Knock, knock, Neo.”

Once the user clicks one of the message box buttons, the script calls Windows’ built-in text-to-speech capability to speak the same message audibly through the speakers. Sometimes the words don’t come out exactly as they should, so you need to help the Windows voice API by using slightly different, but similar sounding, words (e.g., “Knee Oh” instead of “Neo”).

The final command opens a Matrix .jpg file that we would need to have already uploaded to the system via the Meterpreter upload command. (Pick a big one that fills the screen!)
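For reference, that upload step from an active Meterpreter session looks something like this (the local path is an assumption for the example; note the doubled backslashes in the destination path):

meterpreter > upload /root/matrix.jpg c:\\test\\matrix.jpg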

We need to take the text file and encode it as Mubix’s site shows:

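As a rough sketch of that encoding step (assuming the commands above were saved as script.txt), the string can be produced from PowerShell itself, since the -enc parameter expects Base64-encoded UTF-16LE text:

$raw = Get-Content -Raw .\script.txt               # read the script as a single string
$bytes = [Text.Encoding]::Unicode.GetBytes($raw)   # UTF-16LE, as -enc requires
[Convert]::ToBase64String($bytes)                  # paste this output after -enc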

Then run the following command in our remote shell, adding in the encoded text stream above:

powershell -ep bypass -enc <Paste in the Encoded Text>

And that is it!

One more step that would make this even more creepy (or visually convincing in a red team pentest) would be to use Meterpreter’s built-in webcam capability to first snap a picture of the remote user at their computer, upload that picture to the system in place of matrix.jpg, and then run the command for a more personalized message from “the Matrix”!

The best defense against these types of attacks is to never, ever open or run unexpected files or attachments in e-mails. Never use a USB drive that you find lying around your company. Avoid public Wi-Fi when possible. Finally, always use a script-blocking program in your Internet browser.

This was cross-posted from the Cyber Arms blog.

Copyright 2010 Respective Author at Infosec Island]]>
Stealth Mode: Lying in Wait Inside the White House’s Network https://www.infosecisland.com/blogview/24260-Stealth-Mode-Lying-in-Wait-Inside-the-White-Houses-Network.html https://www.infosecisland.com/blogview/24260-Stealth-Mode-Lying-in-Wait-Inside-the-White-Houses-Network.html Mon, 26 Jan 2015 08:04:54 -0600 Recent data breaches involved an unclassified computer network used by President Obama’s senior staff, prompting countermeasures by the administration and resulting in temporary system outages. Officials said the attack did not appear to be aimed at destroying data or hardware, or at assuming control of other systems at the White House, which raises the question: what were the hackers looking for?

Washington Post reports have disclosed cyber-espionage campaigns by hackers thought to be working for the Russian government. Targets have included NATO, the Ukrainian government and U.S. defense contractors. Russia is regarded by U.S. officials as being in the top tier of states with cyber-capabilities. The Washington Post also reported the nature of this breach is consistent with a state-sponsored attack.

Interestingly, FireEye developed a report supporting this assertion. According to the report, APT28: A Window into Russia’s Cyber Espionage Operations, FireEye believes the advanced persistent threat (APT) group’s malware, language and focused operations indicate a government sponsor, most likely Russian. While there have been no reports that definitively confirm the Russian government was responsible for this particular breach, the ways in which the actors behaved are similar to those described in the FireEye report.

The truth is, attacks such as this are becoming more prevalent and the actors are becoming more devious. The Department of Homeland Security reports that cyberattacks are growing more “sophisticated, frequent, and dynamic.” To decrease the likelihood of future breaches, government entities are encouraged to join the Continuous Diagnostics and Mitigation (CDM) program to implement tools that identify cybersecurity risks on a continuous basis, prioritize risks based upon potential impact, and enable cybersecurity personnel to mitigate the most significant problems first.

Federal agencies that experience breaches of increasing gravity move up in priority on the CDM task order list, getting closer to obtaining funds for CDM. Sadly, it seems a data breach needs to happen before an agency is elevated within the task order listing, which is a bit of circular logic. Agencies should take a more proactive stance by:

• Shifting their security mindsets from “incident response” to “continuous response,” wherein systems are assumed to be compromised and require continuous monitoring and remediation
• Adopting an adaptive security architecture for protection from advanced threats
• Spending less on prevention; investing in detection, response and predictive capabilities

Federal agencies need to become more proactive and aggressive in protecting their biggest assets – their data.

Wallace Sann is federal chief technology officer (CTO) with ForeScout Technologies. 

Copyright 2010 Respective Author at Infosec Island]]>
Three Compliance Trends to Watch in 2015 https://www.infosecisland.com/blogview/24259-Three-Compliance-Trends-to-Watch-in-2015.html https://www.infosecisland.com/blogview/24259-Three-Compliance-Trends-to-Watch-in-2015.html Mon, 26 Jan 2015 07:47:25 -0600 The champagne glasses are stocked away, the New Year is in full swing and now, of course, it’s time for compliance audits. Yay! (Not really.) For most, compliance is a difficult task in and of itself. Add it to the already daunting task of monitoring increasingly complex IT infrastructure and it becomes completely overwhelming. Unfortunately, all signs point toward it not getting any easier in 2015.

So, to help address compliance head-on in the New Year, here are three major compliance-related trends to be aware of, and advice on how to meet the challenges of an ever-stricter regulatory environment.

Compliance Doesn’t Equal Security

A major issue this year will be broader understanding and honest acceptance that being compliant is one thing, but being secure is something else entirely. Think of all the high-profile data breaches we have seen over the past two years. How many of those companies were “compliant”? Well, quite frankly, all of them had to meet regulations, and many did so successfully. Yet they still made data breach headlines.

Thus, it is important to not fall into the trap of thinking that if one adheres to compliance requirements, security is guaranteed. In fact, many regulatory bodies are now making a point to educate organizations that the compliance standards they oversee will not always ensure their company data is secure.

Less Breach Shaming, More Breach Sympathy

Due to the nationwide data breach disclosure laws now in place, the news seems filled with reports of new (and sometimes old) breaches. Rarely lost in the coverage is commentary on compliance: whether the affected companies were indeed compliant and what issues with compliance they’ve had in the past. Expect more of the same in 2015.

However, while these reports have traditionally questioned the competency of the affected organizations, thereby essentially breach shaming the companies, we have started to see more breach sympathy—“If it can happen to company XYZ, which was compliant, it could happen to us.” While this sense of sympathy for organizations that have suffered a breach is on the rise, it will need to be fostered in 2015. Doing so will help promote better collaboration across the industry. IT professionals have traditionally excelled at sharing information and expertise on a personal level, but in the near future, organizations will begin to share information with each other to develop collective strength against shared threats. Regulatory bodies will also hopefully participate in this free exchange, which will affect both what it takes to be compliant and what it means to be compliant.

Continuous Compliance, Increasing Complexity

To aid in closing the gap between being compliant and actually being secure, many are moving towards a continuous compliance model to help reduce and limit exposure to compliance and security risks. This will gain steam in 2015.

Continuous compliance involves constantly reviewing processes and quickly making any necessary updates as a result of deviations from their intended performance. However, despite the fact that continuous compliance is effective at eliminating the gaps between compliance and security, it also greatly increases the complexity of managing compliance. Tools, technologies and processes to help manage this complexity will be more important than ever.
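As a minimal sketch of what one such automated check might look like (the baseline values here are illustrative assumptions, not any compliance standard), a scheduled script could compare a system's state against its intended configuration and flag drift:

# Illustrative baseline values for this example
$expectedServiceStatus = 'Running'
$minLogSizeMB = 196

# Has the Windows Event Log service drifted from its intended state?
$svc = Get-Service -Name 'EventLog'
if ($svc.Status -ne $expectedServiceStatus) {
    Write-Warning "EventLog service is $($svc.Status); expected $expectedServiceStatus"
}

# Has the Security log's maximum size drifted below the baseline?
$log = Get-WinEvent -ListLog Security
if (($log.MaximumSizeInBytes / 1MB) -lt $minLogSizeMB) {
    Write-Warning "Security log max size is below the $minLogSizeMB MB baseline"
}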

Tackling Compliance Head-On

A fourth trend to be aware of, but one that will come as no surprise, is that 2015 will see regulatory compliance standards become stricter. The following best practices will help in meeting existing and new compliance challenges head-on in 2015 and beyond.

· Thoroughly document processes, policies and procedures. Documentation is a crucial component of compliance, but it is often the most neglected aspect. Creating comprehensive, in-depth documentation will be beneficial beyond an audit. If tasked with securing the network and preparing for audits, organizing and documenting policies and procedures is absolutely critical. Compliance is an ongoing process, so it’s important to always keep documents and information current by scheduling time to review and revise documentation throughout the year.

· Clearly understand compliance requirements for the industry. Every regulated industry is different. Regardless of which flavor of compliance an organization follows—PCI DSS, HIPAA or custom corporate policies—it’s imperative to understand what exactly is required. Remember, some compliance requirements are clearly defined while others provide only vague guidelines.

· Monitor devices and systems for compliance. Once proper documentation and a clear understanding of industry requirements are achieved, the next step is to identify which network devices, systems and applications must be monitored for compliance. This step is particularly important if deploying a security information and event management (SIEM) tool, since these often require configuring additional applications and systems to collect logs.
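As a small illustration of scoping that collection on a Windows host (a real deployment would inventory network devices and applications as well), PowerShell can list which event logs are enabled and actually contain records before a SIEM is pointed at them:

# Show the ten busiest enabled Windows event logs
Get-WinEvent -ListLog * -ErrorAction SilentlyContinue |
    Where-Object { $_.IsEnabled -and $_.RecordCount -gt 0 } |
    Sort-Object RecordCount -Descending |
    Select-Object LogName, RecordCount, LogMode -First 10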

· Continuously review policies and procedures. Reviewing policies and procedures on an ongoing basis, and then comparing them with the most updated requirements, helps overcome the fear and stress that often accompany audits. Meeting compliance regulations can be challenging when it comes to collecting the necessary audit trails, so continuously reviewing policies will help ease that process.

· Automate processes wherever possible. When dealing with an immense amount of data, reviewing audit trails can be a long and challenging task. By automating wherever possible, workloads will be decreased and processes simplified. SIEM tools and log solutions can play an important role in automating many compliance-related tasks and processes, along with providing important alerting functionality.
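As a minimal example of that kind of automation (the 24-hour window is an arbitrary choice; event ID 4625 is the standard Windows failed-logon event), a scheduled script could pull failed logons for daily review instead of paging through the log by hand:

# Failed logons (event ID 4625) from the last 24 hours; field 5 of the event holds the target account name
$since = (Get-Date).AddHours(-24)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625; StartTime = $since } |
    Select-Object TimeCreated, @{ n = 'Account'; e = { $_.Properties[5].Value } } |
    Format-Table -AutoSize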

While following these best practices will greatly aid in easing the burden of ensuring compliance, remember: compliant does not equal secure. Beyond these best practices and beyond being compliant, organizations of all sizes need to recognize the necessity of a proactive security plan to ensure that their infrastructure, and the potentially sensitive data therein, remain safe and secure.

About the Author: Mav Turner is Director of Security at SolarWinds.

Copyright 2010 Respective Author at Infosec Island]]>
Can Hackers Use FraudFox VM to Defeat Your Fraud Prevention? https://www.infosecisland.com/blogview/24258-Can-Hackers-Use-FraudFox-VM-to-Defeat-Your-Fraud-Prevention.html https://www.infosecisland.com/blogview/24258-Can-Hackers-Use-FraudFox-VM-to-Defeat-Your-Fraud-Prevention.html Thu, 22 Jan 2015 12:55:00 -0600 In the last few days, a number of tech magazines like Computerworld and PC Advisor have reported that FraudFox VM poses a threat to the security of online businesses—especially banks and payment services.

FraudFox VM is a special version of Windows with a heavily modified version of the Firefox browser that runs on VMware's Workstation for Windows or VMware Fusion on OSX. It's for sale on Evolution, the apparent successor to the Silk Road online contraband market, for 1.8 bitcoins, or about $390.

FraudFox VM was created to defeat device recognition, or fingerprinting, which is used in fraud prevention to assess the risk of a device connecting to a business. Web browsers are used to collect data like operating system version, time zone and IP address. Each of these characteristics can be used to assess risk and uncover possible fraud.
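As a toy illustration of the concept (a real device-recognition service weighs far more signals than this, and correlates them server-side), a handful of device attributes can be combined and hashed into a naive fingerprint:

# Gather a few attributes of the kind fingerprinting relies on
$attributes = @(
    [Environment]::OSVersion.VersionString,            # operating system version
    [TimeZoneInfo]::Local.Id,                          # configured time zone
    [Globalization.CultureInfo]::CurrentCulture.Name   # locale
) -join '|'

# Hash the combined attributes into a stable identifier
$sha = [Security.Cryptography.SHA256]::Create()
$hash = $sha.ComputeHash([Text.Encoding]::UTF8.GetBytes($attributes))
[BitConverter]::ToString($hash) -replace '-', ''

Change any one attribute and the identifier changes, which is exactly what tools like FraudFox exploit by spoofing these values to match a victim’s device.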

So how worried should your business—and customers—be about this new software? I sat down with Scott Waddell, Chief Technology Officer of iovation, the fraud prevention experts, to find out the reality behind the media headlines.

1. How reliant are banks and financial institutions on this kind of technology to stop fraudulent transactions these days? Is fingerprinting used more for mobile than on desktop?

Banks leverage device reputation solutions with great success in both fraud mitigation and risk-based authentication strategies. Of course, good security is all about layered defenses, so smart banks use these tools as part of a defense-in-depth strategy to avoid over-reliance on any one security technology.

Device recognition is used on all Internet connected devices these days, mobile and desktop alike. Mobile transactions are the fastest growing segment being protected with these tools, but the majority still originate from desktop operating systems.

2. Do you think this would be an effective method for cybercriminals to get around those defenses?

FraudFox VM may be interesting for its purpose-built virtual machine packaging, but there's really nothing new in the approach. Tools have been available to fraudsters for years to facilitate changing device parameters, manipulating JavaScript, blocking data collection, obscuring IP address and location, and so on. Many of these capabilities have even migrated into easy-to-use settings in the major web browsers to make testing easier for web developers.

Device reputation solutions have evolved along with such tools and continue to provide great uplift in fraud catch in spite of them.

Based on the reported attributes FraudFox can change, it would be unable to evade native recognition tools (those embedded in native desktop apps), and it would stumble over transactional similarity scoring on the web, which considers more device attributes along with tagged recognition. So the tendency at financial institutions would be to trigger step-up authentication to one-time passwords through out-of-band channels (SMS, mobile app, voice) that FraudFox could not intercept.

3. Is it possible to fake browser fingerprints manually or using other tools? Does this thing look like a good consolidation of other tools that people might use to defeat fingerprinting?

As previously mentioned, there are other tools and techniques fraudsters use to evade recognition or to try to mimic the devices of their victims. These often stand out from actual browsers in ways that defeat their intended purpose. A couple of years ago, the Gozi Prinimalka trojan attempted to duplicate device attributes of compromised systems much as FraudFox VM aims to do. However, its limitations made it ineffective against modern device reputation offerings that evaluate risk and reputation through multiple strategies, including link analysis, profiling techniques, velocity rules, proxy and Tor unmasking, device attribute anomalies, and more.

FraudFox VM seems to be relatively limited in its capabilities considering the variety of techniques sophisticated fraud mitigation tools bring to bear.

4. Any other thoughts?

It's certainly interesting to see tools like this for sale on Evolution, which appears to be catering to fraudsters and identity thieves. All the more reason for online businesses to take advantage of collaborative technologies that bring the power of community to the fight against the increasingly organized economy of cybercrime.

Fraudsters will always look for new ways to commit cybercrimes. However, a strategic, multi-layered approach to fraud prevention is the best defense.

Cross-posted from the iovation Stories Blog

Copyright 2010 Respective Author at Infosec Island]]>