SAP Cyber Threat Intelligence Report – October 2016 Thu, 20 Oct 2016 10:59:00 -0500 The SAP threat landscape is constantly growing, putting organizations of all sizes and industries at risk of cyberattacks. The idea behind the SAP Cyber Threat Intelligence report is to provide insight into the latest security threats and vulnerabilities.

Key takeaways

  1. SAP's critical patch update for October fixes 48 vulnerabilities, the highest monthly number since 2012.
  2. The majority of them implement Switchable authorization checks (i.e. they fix implementation flaws).
  3. One of the vulnerabilities (an authentication bypass in SAP P4) has potentially threatened SAP customers since 2013.

SAP Security Notes – October 2016

SAP has released the monthly critical patch update for October 2016. This patch update closes 48 vulnerabilities in SAP products (47 SAP Security Patch Day Notes and 1 Support Package Note), almost twice the average number for this year. According to the latest SAP Cyber Security in Figures report, the average number of monthly SAP Security Notes was roughly 61 in 2011. It decreased to 53 notes in 2012 and to 30 notes a month in 2013. The average remained almost the same in 2014 (32) and fell slightly in 2015 (25) and 2016 (22).


5 of the Notes were released after the second Tuesday of the previous month and before the second Tuesday of this month. Just one SAP Security Note is an update to a previously released Security Note.

3 of the released SAP Security Notes have a high priority rating. The highest CVSS score of the vulnerabilities is 7.5.


The most common vulnerability type is Implementation Flaw.


About Switchable authorization checks

The majority of the issues closed this month are implementation flaws, namely the Security Notes titled “Switchable authorization checks”. In the full-text versions of the Notes, SAP describes this functionality in detail.

These patches implement new switchable authorization checks. By default, the checks are inactive to ensure compatibility with existing processes. It is important to enable them using the Switchable Authorization Checks Framework (transaction SACF).

Issues that were patched with the help of ERPScan

This month, 2 critical vulnerabilities identified by ERPScan’s researcher Vahagn Vardanyan were closed.

Below are the details of the SAP vulnerabilities identified by the ERPScan researcher.

  • A Denial of Service vulnerability in SAP ASE (CVSS Base Score: 7.5). An update is available in SAP Security Note 2330422. An attacker can exploit a denial of service vulnerability to terminate a process of the vulnerable component. Nobody will be able to use the service, which in turn disrupts business processes, causes system downtime, and harms the company's business reputation.
  • A Missing Authentication check vulnerability in the SAP NetWeaver AS JAVA P4 Servercore component (CVSS Base Score: 7.3). An update is available in SAP Security Note 2331908. An attacker can exploit a missing authentication check vulnerability to access a service without passing authentication procedures and use functionality of the service that should be restricted. This may result in information disclosure, privilege escalation and other types of attacks.

About Missing Authentication check vulnerability in P4 Servercore component

The Missing Authentication check vulnerability affects the SAP NetWeaver AS JAVA P4 service. This service enables remote control of SAP's JAVA platform, for example, all SAP Portal systems.

P4 is usually exposed to the Internet, which makes exploiting the vulnerability easier. Scanning conducted by our researchers revealed at least 256 vulnerable services accessible online.
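As a rough illustration of how such exposure can be found, the sketch below probes a host for an open P4 port. It assumes the common SAP AS Java port convention of 5NN04 (NN being the instance number, e.g. 50004 for instance 00); it is not ERPScan's actual scanning methodology, just a minimal example of the idea.

```python
# Minimal sketch: checking whether a host exposes an SAP P4 port.
# Assumes the common SAP AS Java port convention 5NN04 (NN = instance number);
# this is an illustration, not ERPScan's actual scanning methodology.
import socket

def p4_port_open(host: str, instance: str = "00", timeout: float = 3.0) -> bool:
    """Return True if the P4 port for the given instance number accepts a TCP connection."""
    port = int(f"5{instance}04")  # e.g. instance "00" -> 50004
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Documentation-range IP used purely as a placeholder target.
    print(p4_port_open("192.0.2.10"))
```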


The issue was first reported and patched in 2012. However, during a penetration test, the ERPScan team found that the issue still affected almost all newer versions of the service. For example, service pack 0.9 for version 7.2, which is vulnerable, was released in 2013. This means that the mission-critical service potentially stayed unpatched for at least 3 years, i.e. 256 systems (possibly more over the last 3 years) could have been compromised.

The most critical issues closed by SAP Security Notes in October 2016 and identified by other researchers

The most dangerous vulnerabilities of this update can be patched by the following SAP Security Notes:

  • 2348055: The SAP ST-PI component has an SQL injection vulnerability (CVSS Base Score: 6.3). An attacker can exploit an SQL injection vulnerability with specially crafted SQL queries. They can read and modify sensitive information in a database, execute administrative operations on a database, and destroy data or make it unavailable. In some cases, an attacker can also access system data or execute OS commands. Install this SAP Security Note to mitigate the risks (a generic illustration of this attack class follows this list).
  • 2344441: The SAP MESSAGING SYSTEM SERVICE component has a Cross-Site Scripting vulnerability (CVSS Base Score: 6.3). An attacker can exploit a cross-site scripting vulnerability to inject a malicious script into a page. To exploit a reflected XSS vulnerability, an attacker must trick a user into following a specially crafted link. With stored XSS, a malicious script is injected into the page body and permanently stored there, so a user is attacked without performing any actions. A malicious script can access all cookies, session tokens and other critical information stored by a browser and used for interaction with a site. An attacker can gain access to a user's session and learn business-critical information; in some cases, it is even possible to gain control over this information. In addition, XSS can be used for unauthorized modification of displayed site content. Install this SAP Security Note to mitigate the risks.
  • 2335427: SAP BusinessObjects has a Cross-Site Request Forgery vulnerability (CVSS Base Score: 6.1). An attacker can use a cross-site request forgery vulnerability to exploit an authenticated user's session by making the user's browser send a request containing a certain URL and specific parameters. The function will be executed with the authenticated user's rights. An attacker may use a cross-site scripting vulnerability to do this, or present a specially crafted link to the targeted user. Install this SAP Security Note to mitigate the risks.
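As a generic illustration of the SQL injection class described in the first item above (not the SAP ST-PI code path itself), the following sketch shows how concatenating untrusted input into a query exposes data, and how a parameterized query avoids it.

```python
# Generic illustration of SQL injection using SQLite; this is not the SAP ST-PI
# code path, just the underlying pattern and its fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"

# Vulnerable: attacker-controlled input is concatenated into the statement,
# so the injected OR clause matches every row.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returns:", rows)

# Safer: a parameterized query treats the input as a literal value only.
rows = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returns:", rows)
```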

Advisories for these SAP vulnerabilities with technical details will be available in 3 months. Exploits for the most critical vulnerabilities are already available in the ERPScan Security Monitoring Suite.

SAP customers as well as companies providing SAP Security Audit, SAP Vulnerability Assessment, or SAP Penetration Testing services should be well-informed about the latest SAP Security news. Stay tuned for next month’s SAP Cyber Threat Intelligence report.

Securing the Remedy Before the Breach Thu, 20 Oct 2016 09:50:00 -0500 A security breach can damage a business's reputation, tarnish client relationships and result in collateral damage that could take years to remedy. Over the past few years, we’ve seen a variety of high profile companies suffer the aftermath of a data breach: from jobs lost and businesses ruined, to the destruction of customer loyalty. It is clear that when it comes to a security breach, prevention is the best remedy.

Mobile breaches in particular present a unique challenge. Any response to a cyber breach is complicated, but this complexity is often worsened due to the personal nature of a mobile device itself. With the adoption of BYOD and work emails being accessed on personal devices, a tangled web of data cross-pollination is being created that can increase productivity, but can also present significant security risks.

Wandera research has shown that companies globally spend twice as much cleaning up mobile security breaches as they do on investments in mobile security software. The study also revealed that more than 28 percent of U.S. companies reported having suffered a mobile breach over the course of a year – with the cost of remedying the breach at $250,000 to $400,000 in many cases.

There are three basic, but crucial, steps that organizations must take to ensure that they are sufficiently prepared to remedy a mobile data breach.

  1. Assess and notify

    This should be at the top of the to-do list when a breach occurs, as the news will need to be immediately shared within your organization. Too often businesses are silent when data breaches occur. The fear of discovery – from competitors, government regulators or customers – outweighs the importance of having a wider discussion throughout the organization.

    When it comes to a mobile data breach, businesses need to realize that the situation is different. The split personality of a mobile device means it serves an individual as well as the business. The faster the company notifies everyone involved and shares intelligence on what was breached, the less of a ripple effect the breach will have.

    Device users will need to change passwords – not just those used within the company, but any that were put at risk – and take defensive steps if sensitive data such as contact lists, credit cards, business or personal images and location information was leaked.

  2. Perform a forensics analysis

    To truly clean up a breach, the business must understand how it occurred and exactly what was put at risk. The only way to perform a post-breach forensics investigation is to have visibility across the mobile fleet in the first place.

    To get ahead before a breach occurs, companies should invest in a mobile threat defense solution that can provide data on how the breach occurred, which users were impacted and provide clues as to which data may have been compromised. By having complete visibility of the issue, businesses will be more aware of what the next move needs to be to minimize further damage.

  3. Improve defenses

    The visibility that is obtained during a forensics investigation can also pave the way for improved defenses via policy controls in the future. Many IT teams typically roll out an open mobility program to start, allowing users to install their own apps and placing no restrictions on the websites they can access.

    As breaches occur, compliance violations are observed and productivity concerns are raised, IT will often need to take a step back and implement mobile policies to ensure that these corporate resources are used effectively and securely. After a breach, companies need to take a close look at their access policies and ensure they are taking adequate steps to protect mobile data, while simultaneously ensuring that users can stay productive.

Unfortunately, today it’s less about ‘if’ there will be a breach, and more about ‘when’ a business will discover it has been hacked. With hacking attacks on US organizations costing an average of $15.4 million per year, it’s clear that businesses need to put strategic operations in place to ensure they can successfully move forward after a hack has taken place. This is a sad truth for organizations globally, but companies must be prepared to mitigate a pending hack and put coordinated plans in place for the unthinkable.

U.S. Election Drives Increase in Malware and Spam Thu, 20 Oct 2016 09:07:19 -0500 The activity of malware and spam groups has intensified in the run-up to the United States presidential election, Symantec warns.

Over the past month, the security company has blocked over 8 million spam emails related to the election, and has also observed a steady increase in the email volume as the November 8 polling day draws near. While most of the spam was represented by unwanted and unsolicited emails, some of the messages carried malicious attachments to install malware onto the victim’s computer.

Two of the spam emails analyzed by Symantec reference Republican nominee Donald Trump, featuring the “Donald Trump’s Secret Letter” and “Donald Trump Reavealed” (sic) subject lines. Both of these emails have malicious .zip files attached. Another spam email supposedly shows Democratic nominee Hillary Clinton with an ISIS leader, but has a malicious Java file attached, designed to infect computers with a remote access Trojan.

“The number of malware-bearing emails has spiked periodically over the past four weeks. However, the overall trend is moving upwards, indicating that attack groups are increasingly leveraging the election as we move closer to the polling date,” Symantec notes.

Malicious JavaScript files (JS.Downloader) represent the most commonly used type of attachment, and the security researchers explain that these files are normally used to spread ransomware and financial Trojans. According to Symantec, the Dridex financial Trojan accounted for 15% of the blocked malicious emails, while generic Trojans represented another 15%.

“Given the already-growing volume of malicious emails attempting to capitalize on the US presidential election, it’s reasonable to assume that attackers will up their efforts over the next three weeks as the election campaign goes into overdrive. Exercise caution with any emails you receive, particularly if they come from an unfamiliar source or contain sensationalist subject lines,” Symantec says.

As it turns out, email isn’t the only attack surface related to the upcoming US presidential election that threat actors might attempt to abuse. Symantec also demonstrated that the voting system itself is vulnerable to different types of attacks that could alter the election results and shatter the US public’s trust in the election process.

First and foremost, the security company reveals, electronic voting machines are susceptible to hacking because of the chip cards that voters are handed when entering polling stations. Because they function like credit cards (they have RAM, a CPU and an operating system), these cards can be exploited just like any other computing device.

According to Symantec, a simple $15 Raspberry Pi-like device, coupled with some knowledge of how to program a chip card, could allow an attacker to secretly reactivate their voter card while inside the privacy of a voting booth. Thus, one person could cast multiple votes, all with the help of a card reader that fits into the palm of the hand.

Another issue, the security researchers say, is that the internal hard drive of the voting machines isn’t encrypted and that an outdated operating system was used to display ballots and record votes. Because encryption is missing from these hard drives and from the external cartridges, a hacker could reprogram them and alter ballots.

“Potential hackers would also be unhindered by the voting machine’s lack of internet connectivity. Some types of malware, such as Stuxnet, can take advantage of air-gapped networks and vector through physical access to a machine. The lack of full-disk encryption on the DRE machine makes it easily exploitable, requiring only a simple device to reprogram the compact hard drive,” Symantec explains.

What’s more, the security company reveals, the behind-the-scenes data tabulation represents an even greater opportunity for an attacker. Votes are typically collected on simple storage cartridges (they function like USB drives) and physically transferred to a central database for tabulation, allowing an attacker to alter the information on them or upload malware onto them in order to alter the voter database once the cartridges reach the tabulation computers (presumed to be outdated as well).

Another manner in which attackers could compromise the election, Symantec says, is misinformation using social networks, broadcast media, or YouTube channels. “If voters were to follow the poll leader, they might not choose to go through the trouble of voting in an election if it looked like they were in for a landslide victory,” the security researchers say.

In the end, Symantec notes, it’s up to state governments, federal organizations, and voting machine manufacturers to improve the security of election equipment and to adopt stronger security measures to ensure the integrity of the voting process. The discovered vulnerabilities can be resolved with existing security technology: chip cards should have asymmetric encryption, storage cartridges should be “write once, read many,” voting machines’ hard drives should be properly secured and have SSL certifications and public and private key encryption.

“The recent Arizona and Illinois database attacks prove malicious actors are seeking opportunities to access the election system. Yet, few incentives exist to modernize voting security. States can take advantage of Department of Homeland Security guidance and services to inspect voting systems for bugs and vulnerabilities, on top of the security measures voting machine manufacturers should be implementing,” Symantec concludes.

Related: FBI Warns of Attacks on State Election Systems

Related: Evidence Links Russia to Second Democratic Party Hack

Related: Second Database Exposing Voter Records Found Online

Minimize “Dwell Time” to Cut the Cost of Data Center Breaches Thu, 20 Oct 2016 08:49:00 -0500 Not a day goes by, it seems, without a high-profile data breach in the news. The incessant, daily drip of leaked emails from the Democratic National Committee is only the latest sobering reminder: in spite of the millions that organizations spend on preventive security measures, no one is invulnerable to breaches.

Not only are breaches occurring with stunning frequency, but their costs are going up as well. The Ponemon Institute’s 2016 study of 383 organizations found that the average cost of a data breach rose from $3.79 million to $4 million over the previous year.

If enterprises are serious about curtailing those costs, it’s time to shift their focus to one of the chief culprits driving up the cost of breaches: dwell time.

What is “dwell time”?

Dwell time refers to the length of time a threat actor lingers in a victim’s environment before it is detected. While dwell time may be tricky to quantify, most cybersecurity researchers estimate that it averages around 150 days. In its seventh annual M-Trends report, Mandiant measured it at 146 days. A recent global Ponemon study put the average at 98 days for financial institutions and as much as 197 days for retailers. However varied these numbers may be, they all tell us the same thing: attackers are being allowed too much time to do their dirty work.

In the highly publicized Target breach of 2013, in which the data of over 100 million customers was exposed and which cost the retailer over $500 million, the actual theft of credit card data went undetected for around two weeks. But the real news was that the attackers lurked inside the company’s network for months before they started exfiltrating the actual credit card data.

Today’s data centers are particularly vulnerable to dwell time. The movement to software-defined data centers and cloud technologies has created a security gap that includes both a lack of visibility into and a lack of control over network data flows, enabling malware to move laterally within data centers undetected once it has breached perimeter defenses. Traditional security measures focused on prevention around north-south traffic are not designed for scaling and securing internal data center traffic. They have difficulty keeping up with the pace of change in these dynamic, virtualized environments, allowing attackers to lurk undetected for days, weeks or months.

Dwell time and cost

It stands to reason that the longer it takes to detect and contain a data breach, the more damage it can inflict and the costlier it becomes to resolve. As noted in the Ponemon Institute’s 2016 Cost of Data Breaches study, “Time to identify and contain a data breach affects the cost…(and) our study shows the relationship between how quickly an organization can identify and contain data breach incidents and financial consequences.” More specifically, the study found that when a breach was identified within 100 days, the average cost was $5.83 million per breach. However, when a breach went undetected for 100 days or more, the average cost went up to $8.01 million, or nearly 40% higher.

So how will minimizing dwell time help contain costs? Consider all the direct and indirect costs of a data breach – notifying customers, regulatory disclosures, setting up customer hotlines, offering credit monitoring for victims, professional fees for crisis management, legal costs, lawsuits and settlements. Breach investigation and remediation by outside experts is also a big expense. And that’s not to mention the value of the assets or intellectual property that has been stolen or compromised.

Virtually all of these costs are exacerbated by dwell time – which means that curtailing dwell time should help cut costs in a variety of ways. For example, detecting and stopping a breach before a lot of data has been exfiltrated will reduce the losses from IP theft. If relatively few customer records are compromised, it will cost less to notify, accommodate and settle with customers. If the internal security team detects a breach before much damage is done, the need for external experts to scope, investigate and repair the damage may be reduced if not eliminated. And a company that beats the media to the story about the breach, proactively explaining clearly the measures it has taken to minimize the impact, will likely see less damage to its reputation.

Minimizing dwell time needs to be a priority of security teams. One could argue this is the most important metric for incident response. As a recent SANS Institute survey (PDF) of security professionals pointed out, “IR teams should be evaluating themselves on metrics such as incident detection or dwell time to determine how quickly they can detect and respond to incidents in the environment. Through well-crafted assessments, teams should find weaknesses in responsiveness and focus on strengthening those areas.”

What will it take?

There’s no question that strong perimeter defenses are essential, but it’s clearly time to place equal if not greater emphasis on earlier breach detection and faster incident response within the data center.

Technology exists today that can dramatically reduce the time it takes to detect, confirm and contain a breach from months to minutes, thereby minimizing dwell time and the resulting costs. To prevent attackers from moving freely within the flow of east-west traffic, the ability to create security policies at the application level is essential. Security teams can then leverage automation to monitor all data center traffic and investigate anomalies that indicate a potential breach.

Distributed deception is a technique that employs a variety of lures throughout the environment, including decoy workstations, servers, infrastructure, devices, applications and other elements, to automatically engage any suspicious activity detected. It is a powerful tool for identifying threat actors without them realizing it, allowing teams to instantly distinguish actual attacks from false positives and prioritize incidents based on severity. Automation can also help quickly identify systems impacted by a breach without the need for outside investigators.

Give security teams the upper hand

In-house security teams often say they lack the resources or staff to monitor everything that goes on inside the data center. They don’t have time to chase down every incident, which more often than not turns out to be a false positive. New technologies that leverage automation can multiply the effectiveness of security personnel, enabling them to monitor the environment and detect more live breaches with fewer people and resources. 

Enterprises may not be able to prevent every breach, but they can minimize the impact of those that break through. A solution that minimizes dwell time and accelerates remediation will go a long way towards mitigating the ever-increasing cost of today’s inevitable data breaches. Because in incident response, time truly is money.

About the author: Dave Burton is vice president of marketing for GuardiCore, an innovator in internal data center security focused on delivering more accurate and effective ways to stop advanced threats through real-time breach detection and response.

Preview: SecurityWeek's 2016 ICS/SCADA Cyber Security Conference – Oct. 24-27 Tue, 18 Oct 2016 22:46:00 -0500 Security professionals from various industries will gather next week at the 2016 edition of SecurityWeek’s ICS Cyber Security Conference, the longest-running event of its kind. The conference takes place on October 24-27 at the Georgia Tech Hotel & Conference Center in Atlanta, Georgia.

This year’s keynote speaker is Admiral Michael S. Rogers, Director of the U.S. National Security Agency (NSA) and Commander of the U.S. Cyber Command.

The event kicks off on Monday with a series of open and advanced workshops focusing on operational technology (OT), critical infrastructure, SCADA systems, and management. Participants will have the opportunity to learn not only how an organization can be protected against attacks, but also how attackers think and operate when targeting control systems.

ICS Cyber Security Conference

On Tuesday, following his keynote, Admiral Rogers will take part in a conversation and question session. On the same day, Yokogawa’s Jeff Melrose will detail drone attacks on industrial sites, and ICS cybersecurity expert Mille Gandelsman will disclose new vulnerabilities in popular SCADA systems.

In addition to an attack demo targeting a Schweitzer SEL-751A feeder protection relay, the day will feature several focused breakout sessions and a panel discussion on risk management and insurance implications.

The third day of the event includes presentations on PLC vulnerabilities, attacks against air-gapped systems, cyberattack readiness exercises, and management issues.

ICS Cyber Security Conference

Also on Wednesday, ExxonMobil Chief Engineer Don Bartusiak will detail the company’s initiative to build a next-generation process control architecture. Breakout sessions will focus on risk management, incident response, safety and cybersecurity programs, emerging technologies, and the benefits of outside cybersecurity services in the automation industry.

On the last day of the ICS Cyber Security Conference, attendees will have the opportunity to learn about the implications of the Ukrainian energy hack on the U.S. grid, practical attacks on the oil and gas industries, and how technologies designed for video game development and engineering can be used to simulate cyberattacks and evaluate their impact.

Speakers will also detail the status of ICS in developing countries, the need for physical security, the implications associated with the use of cloud technologies in industrial environments, and the implementation of a publicly accessible database covering critical infrastructure incidents. 

Produced by SecurityWeek, the ICS Cyber Security Conference is the conference where ICS users, ICS vendors, system security providers and government representatives meet to discuss the latest cyber-incidents, analyze their causes and cooperate on solutions. Since its first edition in 2002, the conference has attracted a continually rising interest as both the stakes of critical infrastructure protection and the distinctiveness of securing ICSs become increasingly apparent.

Register Now

Nitol Botnet Uses New Evasion Techniques Tue, 18 Oct 2016 07:21:05 -0500 The Nitol botnet was recently observed employing new evasion techniques in distribution attacks that leverage malicious macro-based documents, Netskope security researchers warn.

Historically, malware authors have been using various methods to bypass sandbox analysis, yet those behind the Nitol botnet have found a novel, smart technique for that. They are using both the obfuscation of the macro code and a multi-stage attack methodology to ensure that endpoint machines are compromised.

According to Netskope Threat Research Labs, the malicious macro-enabled documents observed in said distribution attacks were password protected, meaning that they would bypass sandbox analysis entirely. Because the process of entering the password is complex and requires user interaction, automated analysis technology can’t easily emulate the event, the security researchers explain.

Additionally, the malicious code used delayed execution to evade detection, but not the usual sleep or stalling methods seen in other malware. Instead, these macro-based malware documents used the “ping” utility to delay execution: the malware would invoke the “ping -n 250” command and wait for the ping process to complete, which could take up to 5 minutes, enough to bypass sandboxes configured with a lower time threshold for executing samples.
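To make the timing concrete, the sketch below estimates the stall such a command introduces and shows a toy heuristic a sandbox might use to flag it. The roughly one-second interval between Windows echo requests and the sample command line are assumptions for illustration, not details taken from the Nitol sample.

```python
# Minimal sketch of why "ping -n 250" works as a delay primitive, plus a toy
# heuristic for flagging it. The ~1 s interval between Windows echo requests
# is an assumption used only for this estimate.
import re

ECHO_INTERVAL_SECONDS = 1.0  # assumed pause between consecutive echo requests

def estimated_delay(command_line: str) -> float:
    """Rough delay (seconds) a 'ping -n <count>' command adds before the next stage runs."""
    match = re.search(r"ping\s+-n\s+(\d+)", command_line, re.IGNORECASE)
    return int(match.group(1)) * ECHO_INTERVAL_SECONDS if match else 0.0

def looks_like_ping_stalling(command_line: str, threshold_seconds: float = 120.0) -> bool:
    """Flag command lines that appear to use ping purely to outlast a sandbox timeout."""
    return estimated_delay(command_line) >= threshold_seconds

if __name__ == "__main__":
    # Hypothetical command line, not the actual Nitol dropper invocation.
    sample = "cmd /c ping -n 250 127.0.0.1 && start payload.exe"
    print(estimated_delay(sample), looks_like_ping_stalling(sample))  # ~250 s, True
```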

While the use of the ping command isn’t new, it has mostly served as a means to ensure that Internet connectivity is available. However, using the ping command to delay the execution of a malware variant is a novel technique, researchers say.

The macro code used in this attack downloads and executes a VBScript (VBS) file, which in turn downloads and executes the second-stage payload. The VBScript file was obfuscated, but analysis revealed not only that it was responsible for launching the “ping” utility to delay execution, but also that it would connect to an “hxxp://” domain to download the second-stage payload and save it to disk with a “.qsb” extension.

The payload is XOR-encoded; the VBScript decodes it and writes the content to a file with a “.fyn” extension (a Windows PE executable), after which it executes it. The code checks whether the execution environment is VMware using process enumeration, and also checks for active debugging using GetTickCount.
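For illustration, here is a minimal sketch of decoding a single-byte-XOR-encoded payload of the kind described above. The actual key and encoding scheme used by the dropper are not given in the write-up, so the one-byte key below is purely hypothetical.

```python
# Minimal sketch of single-byte XOR decoding. The key value is hypothetical;
# the real Nitol dropper's key and scheme are not disclosed in this write-up.
def xor_decode(data: bytes, key: int) -> bytes:
    """XOR every byte of the payload with a one-byte key."""
    return bytes(b ^ key for b in data)

if __name__ == "__main__":
    key = 0x5A                                       # hypothetical key
    encoded = xor_decode(b"MZ\x90\x00example", key)  # simulate the on-disk ".qsb" content
    decoded = xor_decode(encoded, key)               # XOR is its own inverse
    print(decoded[:2] == b"MZ")                      # a PE file starts with the "MZ" magic bytes
```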

Next, the code searches for the default browser, creates a browser process in suspended mode, then unmaps the browser process memory and overwrites it with a UPX-compressed file. This UPX file, Netskope security researchers have discovered, is the Nitol botnet binary.

Nitol is an old botnet that had its command and control (C&C) servers sinkholed before, but which was seen active earlier this year, when it fueled a record-breaking 8.7 gigabits per second (Gbps) layer 7 distributed denial of service (DDoS) attack.

According to Netskope, the Nitol binary observed in the recent attack attempted to connect to a domain that is currently inactive. However, the same C&C server was observed earlier this year being used by the Hydracrypt ransomware. What remains to be seen, researchers say, is whether the Nitol binaries are used as a placeholder for future threats or whether cybercriminals are only testing a new attack methodology.

Related: Malware Increasingly Abusing WMI for Evasion

Related: Ursnif Banking Trojan Uses New Sandbox Evasion Techniques

33 Million Evony User Accounts Emerge Online Sun, 16 Oct 2016 19:21:52 -0500 Over 33 million accounts from online gaming platform Evony have emerged online after hackers reportedly gained access to the platform’s main database in June this year.

The data dump has already emerged on Leaked Source, which reveals that a total of 33,407,472 users might have been affected by the leak. Each record in the leak, they say, included a username, email address, password, and IP address, as well as various other internal data fields.

What’s worrying is that the gaming platform wasn’t using strong protection for user passwords. According to Leaked Source, “passwords were stored using unsalted MD5 hashing,” and were also stored “in unsalted SHA1 next to the MD5.”

Looking at the list of the most used passwords, “123456” emerges on top with 714,466 occurrences, showing once again how little thought many users give to their account’s security. “password” is also present on the list, in fifth position, along with “111111” in sixth and “qwerty” in ninth. “123123”, “abc123”, “000000”, and “evony1” are also among the easy-to-guess passwords used by gamers.
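The combination of unsalted MD5 and extremely common passwords is what makes such a dump trivial to reverse. The sketch below, using the most-used entries reported above, shows how a precomputed lookup recovers them instantly; it is a minimal illustration, not Leaked Source's actual cracking setup.

```python
# Minimal sketch of why unsalted MD5 password storage is weak: identical
# passwords always hash to identical values, so a precomputed lookup of
# common passwords recovers them instantly.
import hashlib
from typing import Optional

common_passwords = ["123456", "password", "111111", "qwerty", "evony1"]
lookup = {hashlib.md5(p.encode()).hexdigest(): p for p in common_passwords}

def crack_unsalted_md5(stolen_hash: str) -> Optional[str]:
    """Return the plaintext if the hash matches a precomputed common password."""
    return lookup.get(stolen_hash)

if __name__ == "__main__":
    leaked = hashlib.md5(b"123456").hexdigest()  # what a leaked record would contain
    print(crack_unsalted_md5(leaked))            # -> "123456"
```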

Because the passwords were so easy to retrieve, Leaked Source also revealed some other interesting stats, such as a list of the longest passwords used on the platform. The longest of them is 49 characters long, but uses only words written in lowercase.

With 7,464,078 occurrences, was the most used email domain, followed by with 6,493,345 occurrences and with 3,593,315. The list also shows over 1 million emails, which doesn’t come as a surprise, given that Evony was launched several years ago (the copyright on the main page still reads 2010-2012).

As it turns out, this isn’t the first data breach Evony has experienced this year. The platform’s forum was hacked in August, which reportedly led to the compromise of some 938,000 user accounts. In a forum post, Evony prompted users to reset their passwords “considering the nature of security on the internet,” and “even though all forum passwords are encrypted.”

We have contacted Evony for additional details on the 33 million accounts hack and we will update the story as soon as we receive a reply.

Earlier this year, numerous other large data breaches were brought to light, though none as recent as the Evony one. Dropbox (68 million), LinkedIn (167 million), Myspace (360 million), Tumblr (65 million), (43 million), and VK (170 million) were all breached several years ago, but information on the stolen data emerged only this year. A VerticalScope breach that impacted 45 million happened earlier this year.

Last month, Yahoo! confirmed that hackers managed to breach its network in 2014 and that no less than 500 million users might have been impacted by the incident, one of the largest data breaches in recent history. Earlier this month, the company refuted claims that it had secretly scanned millions of emails to help American intelligence. 

Closing the Cybersecurity Skills Gap — Calling All Women Wed, 12 Oct 2016 08:24:00 -0500 As cyberthreats have become more sophisticated, networks more complex and cybersecurity issues of greater concern at the board level, demand for skilled cybersecurity professionals has soared. Unfortunately, there just isn’t enough talent to fill all of the roles.

According to the 2015 “Global Information Security Workforce Study” (GISWS) from the (ISC)² Foundation, the information security workforce will reach a 1.5 million shortfall by 2020. Another study by (ISC)² on women in security revealed that in 2015, 90 percent of the information security positions worldwide were filled by men. This is despite women nearly closing the gap between men and women in terms of relevant undergraduate degrees and holding higher academic degrees than their male counterparts.

Somewhere between graduation and the career path, women are turning away (or being turned away) from roles in information security. Closing the cybersecurity skills gap will depend heavily on the inclusion and participation of an untapped pool of talented women. To get there, it will require a mix of practicality and inspiration.

1. Show Them the Money

Gartner predicts 2016 will see worldwide information security spending reach $81.6 billion. Cybersecurity Ventures also projects that $1 trillion will be spent globally on cybersecurity from next year to 2021, according to its Q3 2016 Market Report. This presents a great opportunity for women looking for stable employment in a field with continued growth and high wages.

Women are already taking advantage of high-growth divisions within cybersecurity. According to (ISC)2, 20 percent of women hold roles in governance, risk and compliance (GRC), which the foundation sees as a division with solid projected growth and importance.

“Women, therefore, have positioned themselves wisely in an InfoSec profession that should not be defined by sheer headcount, but in the roles of those that are shaping the future practice of InfoSec.”

2. We Need You

Solving cybersecurity issues requires a mix of talent. STEM-skills, technical knowledge, critical thinking, product management, understanding of organizational behavior, planning and communication are key. But in an age where networks and threat landscapes change constantly, organizations must seek out fresh perspectives and creative approaches to combat current challenges and lay the foundation to meet future ones. Recruiting women not just for diversity’s sake but for diversity of thought will create more agile, innovative security teams.

Job descriptions that include “softer” skills such as collaboration, objectives management and openness to new methods, in addition to technical knowledge, make clear that organizations are seeking well-rounded applicants. Hiring managers should avoid inflating the job requirements unnecessarily as women are less likely than men to apply for positions where they feel that they are not 100 percent qualified (men put that threshold at about 60 percent).

3. End the Token Speech

Increasing the visibility of women in cybersecurity can improve community within the industry and also influence younger women who perhaps have the much-needed skills but are undecided in their career path. Many cybersecurity conferences have speaking opportunities for women in the industry to give insight on establishing their career in a male-dominated field.

Unfortunately, too often these are some of the only presentations given by women at such conferences. What would be more useful to the women in the audience would be to see women presenting on their work rather than their career. This is not to suggest career talks given by women for women should be eliminated; rather, they should not be used as a sort of “affirmative action check box” for conferences.

Women need to see that though they are underrepresented in cybersecurity, they are not an anomaly. They, like women before them, have value to contribute to the industry and their efforts can shape it for years to come.

Differential Privacy vs. End-to-end Encryption – It’s Privacy vs. Privacy! Wed, 12 Oct 2016 07:12:00 -0500 Written in collaboration with Yunhui Long.

In this day and age, where companies brew money using user data, consumer privacy is at stake. Incessant identity thefts and phishing attacks, and revelations about mass government surveillance, have resulted in privacy paranoia among consumers. Consumers have thus come to prefer products and services with stronger privacy postures. To this end, two major privacy technologies have gained immense attention -- End-to-end Encryption and Differential Privacy. While both technologies strive to protect user privacy, interestingly, when put together, the whole is smaller than the sum of its parts.

Firstly, what exactly are end-to-end encryption and differential privacy?

What is End-to-end Encryption?

End-to-end encryption (E2EE) is a popular privacy technology for instant-messaging services. With this technology, only the communicating users can read the messages. Technically speaking, this works by encoding the sender’s message in such a way that only the receiver has the key to decode it.

Just in the past three years, various messaging apps have implemented end-to-end encryption. Notably, this shield not only protects users from external eavesdroppers, but also ensures that even the company offering the instant-messaging service cannot access the data.

What is Differential Privacy?

Intuitively, differential privacy is a technique that can reveal interesting patterns in a large dataset, while still protecting privacy of individual data entries.

To understand the technique, consider a database of the salaries of all software engineers in Silicon Valley. Let us say that an analyst is allowed to access the average of the salaries; denote this average by avg1. Now suppose a new item v is added to the database, and let the new average be avg2. The analyst can easily recover v just by knowing avg1 and avg2: v = avg2 * (N+1) – avg1 * N, where N is the total number of salaries in the original database.

Differential privacy avoids such scenarios. More specifically, differential privacy is a statistical learning tool that works by adding carefully computed mathematical noise to the statistical aggregate. In the above example, the noise term added to the average salary does not allow the analyst to learn information about the exact salary of any individual software engineer. The noise term is large enough to mask individual data items, but small enough to allow any patterns in the dataset to appear.
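A minimal numeric sketch of both the averaging attack described above and the Laplace-noise defence follows. The salary values, sensitivity bound, and epsilon are illustrative assumptions, not figures from any real deployment.

```python
# Minimal sketch of the averaging attack and the Laplace-noise defence.
# Salary values, the sensitivity bound, and epsilon are illustrative assumptions.
import random

salaries = [150_000, 170_000, 165_000, 180_000]            # original database, N = 4
avg1 = sum(salaries) / len(salaries)

new_salary = 300_000                                        # the new item v
avg2 = sum(salaries + [new_salary]) / (len(salaries) + 1)

# Without noise, the analyst recovers v exactly: v = avg2*(N+1) - avg1*N
n = len(salaries)
recovered = avg2 * (n + 1) - avg1 * n
print(recovered == new_salary)                              # True

def noisy_average(values, epsilon=0.5, max_salary=500_000):
    """Release the average with Laplace noise scaled to the query's sensitivity."""
    sensitivity = max_salary / len(values)                  # how much one record can move the average
    noise = random.expovariate(1) - random.expovariate(1)   # difference of exponentials ~ Laplace(0, 1)
    return sum(values) / len(values) + noise * sensitivity / epsilon

print(noisy_average(salaries + [new_salary]))               # the released, noised aggregate
```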

Differential Privacy to Protect User Privacy

Until recently, differential privacy had been a topic of theoretical research without much application in real-world scenarios. Clearly, differential privacy can bring significant value to the table: in today’s consumer-driven economy, it’s crucial for businesses to learn and adapt to consumer behavior. Thus, collecting and studying patterns in consumer data has become a key ingredient for success and survival. The ability to extract patterns from large datasets while still protecting the privacy of individual data points is therefore a boon.

An application of differential privacy: Consider a company C providing an end-to-end encrypted instant-messaging service. A desirable feature of an instant-messaging service is smart autocomplete. To provide this feature, all the data that C needs is just the English dictionary. Now, consider a smart autocomplete feature that also suggests trending slangs even before you have heard of those slangs. Note that the suggestions are specifically based on a population’s messaging behavior. So, clearly, this feature needs consumer data. In such a case, differential privacy might be used to collect and process consumer data, while still preserving individual privacy.

Methodologies for implementing differential privacy: Unfortunately, differential privacy has been confined mostly to theoretical research, and there isn’t much work on how to employ it in practice. Thus, the exact methodology for implementing this technology at large scale is unclear. A particularly interesting question is what methodology one should use to sample the noise terms. There are two major methodologies in the literature:

  1. A prevalent methodology is to first collect the exact data points, compute an aggregate (such as, total count or average) of the collected data points, and then add noise to the aggregate. This necessitates the users sending their exact information to the company C. Thus, while this methodology protects user privacy from the public, the company still gets access to the exact user information. This is undesirable.
  2. Thankfully, there is another methodology which, although less prevalent, seems to fit the bill: it involves adding noise to the data points at the user end before the data is collected and sent to C’s cloud storage. C would then aggregate the noised data points. This helps preserve some privacy from C too. Significant research in this area came from Google Research -- the RAPPOR methodology -- which involves the so-called tools of ‘hashing’ and ‘sub-sampling’. A minimal sketch of this local-noise idea follows the list.
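Here is a minimal sketch of randomized response, the local-noise primitive that the second methodology (and RAPPOR) builds on. Real RAPPOR adds Bloom-filter hashing and two levels of randomization, which this illustration omits.

```python
# Minimal sketch of randomized response: each user adds noise locally, and the
# collector inverts the known noise to estimate the population frequency.
import random

def randomize_bit(true_bit: int, p_keep: float = 0.75) -> int:
    """Each user reports the true bit with probability p_keep, otherwise a coin flip."""
    if random.random() < p_keep:
        return true_bit
    return random.randint(0, 1)

def estimate_frequency(reports: list[int], p_keep: float = 0.75) -> float:
    """Invert the known noise: observed = p_keep * true + (1 - p_keep) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_keep) * 0.5) / p_keep

if __name__ == "__main__":
    random.seed(0)
    true_bits = [1] * 300 + [0] * 700                 # 30% of users actually use the slang
    reports = [randomize_bit(b) for b in true_bits]   # what C's cloud storage receives
    print(round(estimate_frequency(reports), 3))      # close to 0.30
```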

However, the devil is in the details.

The Devil in the Details

Detail 1: RAPPOR-like techniques would require C to know a set of candidate strings for which C is computing the usage frequency. For concreteness, in the case of trending slangs, C would actually need to know the slangs that users are sending through the messaging service in order to determine their frequency.

Detail 2: Recall that the conversations are end-to-end encrypted. Also recall that the objective of end-to-end encryption is to have no door through which C can obtain user data (so that C may not be coerced to reveal user data even by government surveillance warrants).

The devil: Detail 1 implies that C needs to look into user data; Detail 2 recalls that the data is already encrypted. In other words, to learn the candidate strings used in differential privacy techniques, the company may need to see the unencrypted content of individuals' conversations, which is against the very intention of end-to-end encryption.

This shows that the two privacy technologies fundamentally tussle with each other. In fact, we have seen that one can seriously backpedal the other. Thus, any methodology that will make these privacy technologies work together will be incredibly non-trivial and ground-breaking.

In summary, although end-to-end encryption and differential privacy offer strong user privacy protection, these two technologies interact in interesting ways, one fundamentally backpedaling the effect of the other. In this light, while differential privacy is a promising tool, implementing and deploying it while retaining the privacy of end-to-end encryption is challenging.

Cyber Resilience Remains Vital to Sustaining Brand Reputation Fri, 07 Oct 2016 07:19:00 -0500 Each year, we spend more money and time combatting the evil forces of cyber space: state-sponsored operatives, organized crime rings, and super-hackers armed with black-ops tech. The attack methods are mutating constantly, growing more cancerous and damaging. Massive data breaches and their ripple effects compel organizations of all sizes to grapple with risk and security at a more fundamental level.

The harm done to brand reputation can be long-lasting and hard to control. Breached companies are liable for significant restitution to customers and suppliers, face closer scrutiny and higher fines from regulators, and often struggle with a sudden drop in sales or loss of business.

The appearance of negligence, repeat attacks or unpredictable fallout from a breach can significantly unravel public goodwill that took decades to build. The trust dynamic that exists amongst suppliers, customers and partners is a high profile target for cybercriminals and hacktivists. The Sony breach is an example of the myriad ways a security breach can damage even the most established, global brand.

Take it to the Board of Directors

Information risk must be elevated to a board-level issue and given the same attention afforded to other risk management practices. Organizations face a daunting array of challenges interconnected with cybersecurity: the insatiable appetite for speed and agility, the growing dependence on complex supply chains, and the rapid emergence of new technologies.

Cyber security chiefs must drive collaboration across the entire enterprise, bringing business and marketing needs into alignment with IT strategy. IT must transform the security conversation so it will resonate with leading decision-makers while also supporting the organization’s business objectives.

Cyber Resilience is Crucial

Every business, no matter the size, must assume they will eventually incur severe impacts from unpredictable cyber threats. Planning for resilient incident response in the aftermath of a breach is imperative.

Traditional risk management is insufficient.

It’s important to learn from the cautionary tales of past breaches, not only to build better defenses, but also better responses. Business, government, and personal security are now so interconnected, resilience is important to withstanding direct attacks as well as the ripple effects that pass through interdependent systems.

I strongly urge organizations to establish a crisis management plan that includes the formation of a Cyber Resilience Team. This team, made up of experienced security professionals, should be charged with thoroughly investigating each incident and ensuring that all relevant players communicate effectively. This is the only way a comprehensive and collaborative recovery plan can be implemented in a timely fashion.

Today’s most cyber-resilient organizations are appointing a coordinator (e.g., Director of Cyber Security or a Chief Digital Officer) to oversee security operations and to apprise the board of its related responsibilities.

The new legal aspects of doing business in cyberspace put more pressure on the board and C-suite. For example, an enterprise that cannot prove compliance with HIPAA regulations could incur significant damages even in the absence of a breach, or face more severe penalties after a successful attack.

Key Steps

We no longer hide behind impenetrable walls, but operate as part of an interconnected whole. The strength to absorb the blows and forge ahead is essential to competitive advantage and growth, in cyberspace and beyond.

Here is a quick recap of the next steps that businesses should implement to better prepare themselves:

  • Re-assess the risks to your organization and its information from the inside out. Operate on the assumption that your organization is a target and will be breached.
  • Revise cyber security arrangements: implement a cyber-resilience team and rehearse your recovery plan.
  • Focus on the basics: people and technology
  • Prepare for the future: to minimize risk and brand damage, be proactive about security in every business initiative.
Demonstration of Destructive Cyberattack Vector on “Air-gapped” Systems Fri, 30 Sep 2016 08:04:54 -0500 All too often, people claim their systems are air-gapped, and therefore have no cyber vulnerability. But Alternating Current (AC) power cords cross the ostensible “air gap”, and power supplies for laptops, servers, ICSs, etc. have rarely been addressed for cyber security vulnerabilities.

Alex McEachern from Power Standards Laboratory will provide a hands-on demonstration of two types of attack-to-failure of a real, air-gapped ICS at the October ICS Cyber Security Conference. McEachern’s demonstration will remotely cyber attack and permanently disable a fully air-gapped system – in this case, a server, a router, and a PLC connected only to each other. Well, that's not quite true: all three will be connected to a power outlet, which will be McEachern’s vector of attack.

Electrical systems, including ICSs, that claim to be fully air-gapped often aren't, says McEachern. In particular, the ICS takes electrical power from a local network or an Uninterruptible Power Supply (UPS). Power supply engineers who work on power disturbances, like McEachern, can demonstrate certain types of events -- as simple as turning the power off and on in a particular pattern -- that can permanently disable typical off-the-shelf power supplies. In this case, McEachern will use the Internet to initiate the attack, but that isn’t necessary. McEachern will explain the technical basis of both attacks-to-failure. He will initiate, from his PC, both types of attacks on the air-gapped table-top ICS. He will also briefly discuss how to detect and prevent these types of attacks.

Power supply issues can have real impacts. The attackers in the 2015 Ukrainian hack discovered a network connected to a UPS and reconfigured the UPS so that when they caused a power outage, it was followed by an event that would also impact the power in the energy company’s buildings and data centers/closets. The 2010 San Bruno, CA natural gas pipeline rupture was initiated by the replacement of the SCADA UPS, which directly led to the overpressure that burst the weak pipe. Given these actual cases, it should be evident that compromising power supplies can have very significant physical impacts.

This demonstration of a destructive attack on an air-gapped system and the protective relay hacking demonstration (see 9/15/16 blog) have several points in common. Both demonstrations involve physics issues that have been known by industry experts for years. Both demonstrations use cyber means (remote access) to exploit these physics issues. Neither attack vector can be detected by network monitoring as these are not traditional malware attacks. Both demonstrations can use the substation protective relays to initiate the cyber attacks.

  Register for the ICS Cyber Security Conference Here

What It Will Really Take to Build Trust in Security Companies Wed, 28 Sep 2016 08:00:00 -0500 Why haven't we figured it out yet?

Cyber attacks are now so frequent that they border on uninteresting. And as attacks have increased in the past few years, so has the number of security startups claiming to tackle the problem. But isn’t this direct relationship between the number of cybercrime incidents and the number of tools to stop them counterintuitive? Are these companies really providing a solution, or have they just been capitalizing on a booming market?

The sad fact is that security companies only have solutions that address pieces of the massive problem and many only provide value after a breach has taken place. They have yet to find a way to truly solve the full scope of the problem and earn and retain customers’—and ultimately consumers’—full trust. Even with industry jargon and all promises of unmatched security, security vendors are at a crossroads where they’re unable to put their money where their mouth is because they’re just not confident enough in their own solutions. As the security market turns to a Darwinian climate, we need to find the diamonds in the rough of a noisy market and find solutions that will actually solve the problems companies face every day. The most effective way for companies to prove their worth is by standing behind their product and insuring its results. It’s time to shake up the security industry and increase accountability in order to truly make the digital world a safer place.

Today’s Security Market

Security, like most other traditional infrastructure systems, is fast becoming outdated as computing becomes decentralized, even extending to remote and mobile users across the globe. Traditional defenses no longer work as hackers long ago outsmarted them, creating a constant cat and mouse chase for the industry to catch up with the criminals. Further, this lag in security is exacerbated by a convergence of new forces like the Internet of Things and the big data explosion. Now, human behavior, especially in the workplace, is immersed in technology, leaving CISOs to scramble to address existent problems as well as newly introduced risks.

Vendors have tapped into this notion and jumped on each passing trend that causes CISOs to wring their hands in frustration. And in their panic, CISOs are biting, buying into the promise of specialized cloud security or next-generation “X” without knowing how well the solutions will perform due to their own novelty as well as that of the threats. This perceived traction in the market also prompts investors to buy in. So continues the chain of adoption and investment, with no clear delineation of practical value—critical in a time where even giants armed with the best solutions continue to fall victim to attacks. Building trust in this environment is nearly impossible.

Product Over Trends

Despite the myriad of claims — “leader of X” or “unique solution Y”— that are muddying the waters of the security market, there are diamonds in the rough, security companies truly solving the problems businesses and individuals face every day. How can these solution providers stand out? Where can trust be instilled?

Avoiding the trend label should be the first step. Worrying too much about “keeping up” with the competition, or about not sounding the same or better is a waste of time. Instead, security companies should focus on their strengths and highlight the functionality and effectiveness of their platforms and solutions. Results are what matter, not claims that are simply meant to instigate interest.

Trust Amid the Noise

This brings us to the idea of insurance: it is not enough for a vendor to tell consumers they are the best; third-party validation is required to cut through the noise and prove which solutions can actually hold water. Security companies need to prove their efficacy and have some skin in the game themselves. Most large organizations have some form of cyber insurance from third-party insurers, but this insurance only has value after a breach. Companies need a solution strong enough to prevent losses in the first place. We need security solutions so powerful that a third-party insurer is confident enough to back the companies’ assertions of unmatched protection.

The topic may be taboo, as time and again we see that 100 percent security is rarely attainable. But security guarantees are the ultimate differentiator. If a company trusts its own product not to fail, the customer can too, with this level of trust ultimately reaching the average person. It’s a win-win-win.

Avoid the Breach: Live Webinar 9/27 - Register Now Mon, 26 Sep 2016 08:02:56 -0500
Live Webinar: Tuesday, Sept. 27th at 1PM ET

Please join Centrify and SecurityWeek for this live webinar on Tuesday, Sept. 27th at 1PM ET, when we will discuss guidance from the National Institute of Standards and Technology (NIST) along with best practices and regulation mandates.

The webinar will explore how new technologies such as multi-factor authentication (MFA) address requirements for higher levels of assurance for user authentication, while preventing identity theft and account misuse. 

Register Now


Join this webinar to learn why ensuring only authorized users can access your enterprise’s critical resources is a primary component in today’s security best practices, standards and regulations, and should be top priority to protect your business from data breaches. 

Sponsored by Centrify

Going Global: Three Key Strategies for Managing International Firewalls Fri, 23 Sep 2016 08:30:00 -0500
Globalization is the new normal for most organizations today, but it can present some significant challenges - not least when it comes to managing the firewall estate across these large-scale, distributed networks.

A typical multinational corporation headquartered in the US may have offices and datacenters in dozens of countries around the globe. Let’s assume the organization takes a proactive, structured and logical approach to cybersecurity, and therefore protects each datacenter with firewalls. Yet all of these firewalls also have to work together cohesively, allowing network traffic to move securely between the international networks and datacenters. How do you manage this? There are three vital issues to consider.

Issue one: a matter of time

A core element of firewall management – in any context – is configuration, and in particular the change control process: updating firewall rules when application network connectivity changes.

However, in global networks, with applications in different countries that need to communicate and share information, this gets a little more complicated.  Imagine one common scenario: an organization has deployed a new application across its global network, so needs to implement firewall policy changes in multiple countries.  While the policy change in itself is easy enough to make, the question becomes – when exactly should it be made?

For many large organizations, policy changes are limited to specific change control windows in order to mitigate the risk of operational downtime for core applications or configuration mistakes.  Firewall policy changes therefore usually take place overnight, or at the weekend – out of high risk hours.  But in a global organization, operating across multiple time zones, those high risk hours vary from country to country.  What’s more, high traffic periods in the calendar vary too – the run-up to the Christmas holidays will be critical to a retailer in Western Europe and the US, while Chinese New Year will impact retailers in Asia.

So businesses have a choice. They can set a single universal change control window according to when it’s convenient for the most important location in the network, and hope that the other locations will manage.  This is quicker but riskier.  Alternatively, they can set different change control windows in different countries, and somehow coordinate a staggered firewall change process.  This is unlikely to cause security problems part-way through the process, as legitimate traffic will most likely continue to be blocked somewhere along its path until the change has been fully implemented – but clearly this could be a significant operational issue, blocking different sites from communicating with each other.  This change management process requires careful coordination between an organization’s network operations and application delivery teams.
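
To make the staggered option concrete, here is a minimal sketch (in Python) of a change-window planner; the site names, time zones and local maintenance hours are illustrative assumptions, not a real product's configuration. It simply computes each site's next low-risk window in UTC and orders the rollout accordingly.

# Minimal sketch of a staggered change-window planner. Site names, time zones
# and window hours are illustrative assumptions, not a real configuration format.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Hypothetical sites with the local hour (24h clock) at which their low-risk
# maintenance window opens.
SITES = {
    "us-east-dc":   {"tz": "America/New_York", "window_start": 2},
    "frankfurt-dc": {"tz": "Europe/Berlin",    "window_start": 1},
    "singapore-dc": {"tz": "Asia/Singapore",   "window_start": 3},
}

def next_window_utc(site: dict, after_utc: datetime) -> datetime:
    """Return the next local maintenance window for a site, expressed in UTC."""
    tz = ZoneInfo(site["tz"])
    local_now = after_utc.astimezone(tz)
    candidate = local_now.replace(hour=site["window_start"], minute=0,
                                  second=0, microsecond=0)
    if candidate <= local_now:          # today's window has already passed locally
        candidate += timedelta(days=1)
    return candidate.astimezone(ZoneInfo("UTC"))

if __name__ == "__main__":
    now = datetime.now(ZoneInfo("UTC"))
    # Order the rollout so each firewall is changed inside its own low-risk window.
    plan = sorted((next_window_utc(cfg, now), name) for name, cfg in SITES.items())
    for when, name in plan:
        print(f"{name}: apply policy change at {when:%Y-%m-%d %H:%M} UTC")

In practice the window would come from each site's change-control calendar, and would also account for local peak-trading periods, rather than a hard-coded hour.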

Ultimately, there is no simple answer to this challenge. A business needs to weigh up the risks and benefits of the two approaches, and choose the most appropriate path for the organization.

Issue two: staying within the law

Another aspect of running multiple datacenters in multiple countries is the question of multiple jurisdictions. Different nations have different laws governing the location and movement of information; Switzerland, for example, requires Swiss banking information to remain inside Switzerland, while the Australian government does not allow government or federal information to leave the country.

These laws have significant technical implications for how international enterprises organize their datacenters, whether on premise or in the cloud. Information must be segmented, siloed and protected with firewalls according to local jurisdictions, and the IT team will normally be required to manage this. Technically all the necessary segmentation can be achieved remotely or even outsourced to a service provider, but it still carries a significant organizational burden – especially for organizations migrating to cloud infrastructures, as they may be nervous about the legislative compliance implications.
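
As a rough illustration of how such jurisdiction rules might be encoded, the sketch below checks whether a given class of data may be stored in a destination country. The data classes and rule table are simplified assumptions modelled on the Swiss and Australian examples above; this is a toy policy check, not legal guidance or a real compliance engine.

# Illustrative sketch of a data-residency check driven by per-jurisdiction rules.
# The data classes and rule table are simplified examples, not legal advice.
RESIDENCY_RULES = {
    "swiss_banking":    {"allowed_countries": {"CH"}},
    "au_government":    {"allowed_countries": {"AU"}},
    "generic_business": {"allowed_countries": None},   # None = no residency restriction
}

def transfer_allowed(data_class: str, destination_country: str) -> bool:
    """Return True if data of this class may be stored in the destination country."""
    rule = RESIDENCY_RULES.get(data_class)
    if rule is None:
        return False                     # unknown classes are blocked (fail closed)
    allowed = rule["allowed_countries"]
    return allowed is None or destination_country in allowed

# Example: replicating Swiss banking data to a German datacenter is refused.
print(transfer_allowed("swiss_banking", "DE"))     # False
print(transfer_allowed("generic_business", "DE"))  # True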

We may see this in action if the Bangladesh Bank decides to press charges following the recent $81m heist via the SWIFT wire transfer network. Which police force will they go to?  Can INTERPOL help?  Even if they manage to identify the criminals, who is going to arrest them, or request extradition? 

There are, as yet, no easy answers to these issues. Ultimately, organizations need to take responsibility for understanding all of the data protection laws and regulations that apply in every country where they store and transmit data – and they need to translate compliance with those regulations into proper technical, legal and compliance-related actions for their IT security strategy and business.

Issue three: who else is connected?

The picture gets more complex still when businesses grant external organizations access to their networks.  At this point, it is important to note that those external organizations become part of the business’s information security and regulatory compliance posture.  Minimizing the risk of such external connectivity depends on implementing careful network segmentation, as well as using additional controls such as web application firewalls, data leak prevention and intrusion detection.
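
The sketch below illustrates one small piece of that discipline: verifying that a partner's requested destination stays inside the network segments granted to that external connection. The partner names, segment addresses and rule format are hypothetical, a sketch rather than any particular firewall's policy language.

# Rough sketch: verifying that a partner's requested destination stays inside the
# network segments agreed for that external connection. Partner names, segment
# addresses and the rule format are hypothetical.
import ipaddress

# CIDR blocks each external partner is contractually allowed to reach.
PARTNER_SEGMENTS = {
    "logistics-partner": ["10.20.30.0/24"],   # e.g. the shipping API tier only
    "payroll-provider":  ["10.40.0.0/16"],
}

def request_in_scope(partner: str, destination_ip: str) -> bool:
    """Return True if the destination falls inside a segment granted to the partner."""
    ip = ipaddress.ip_address(destination_ip)
    return any(ip in ipaddress.ip_network(cidr)
               for cidr in PARTNER_SEGMENTS.get(partner, []))

print(request_in_scope("logistics-partner", "10.20.30.15"))  # True: inside its segment
print(request_in_scope("logistics-partner", "10.40.1.5"))    # False: out of scope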

Furthermore, at some point businesses will have to make changes to their external connections, either due to planned maintenance work by their own IT team or the peer’s IT team, or as a result of unplanned outages. Dealing with changes that affect external connections is more complicated than internal maintenance, as it will probably require coordinating with people outside the organization and tweaking existing workflows, while adhering to any contractual or SLA obligations. As part of this process, organizations need to ensure that their information systems allow their IT team to recognize external connections and provide access to the relevant technical information in the contract, while supporting the amended workflows.

Finally, organizations should also ensure that they have a contract in place with third-party organizations to cover all technical, business and legal aspects of the external connection.

When managing global network infrastructures, it is more important than ever to have full, real-time visibility and control of exactly how firewalls are controlling network traffic across the globe, both to maximize security and compliance, and minimize downtime. 

What Is ID and Verification and Why Is It Such an Integral Part of Digital Life? Fri, 23 Sep 2016 07:00:00 -0500
Identity and verification (ID&V) are two closely linked concepts that play an increasingly critical role in consumers’ day-to-day lives.

Identification systems use a trusted ledger, process or token to identify a person or entity. Verification is answering the question “is this person who they say they are?”  

These processes are familiar from everyday life: from showing a passport when entering a country or boarding a flight, to collecting a parcel from the Post Office, to providing proof of address and identity when applying for a financial product, identifying and verifying ourselves has been commonplace for generations.

All of these methods rely on the possession and production of hard copies of accepted forms of ID. Up until the digital commerce revolution, when the vast majority of transactions were carried out face to face, this was a tried and tested method that worked. In the digital economy, however, face-to-face interactions are increasingly less common.

The Digital Economy

The internet changed how we shop forever. With an estimated 1.61bn online shoppers [1] globally, and £52.25bn spent via e-commerce in the UK in 2015 [2], the last decade and a half has seen e-commerce grow into a well-established, even dominant, method for business and commerce.

Mobile (i.e. unsecured touchscreen devices such as mobile phones and tablets) is rapidly winning the race to become the dominant platform. The ability to shop and carry out transactions on the go is now something we almost take for granted.

Yet all this convenience has come at a cost, and that cost is the challenge of managing ID&V online.

Digital transactions all require ID&V to a greater or lesser extent. Online shopping often requires a password and email address, while financial products, bound by a need to comply with know-your-customer and anti-money laundering legislation, require much greater levels of ID&V.

The problem is that ID&V is more challenging for remote transactions due to a lack of face-to-face interaction.

ID&V in the Digital Age

Remote ID&V is nothing new. Consumers have carried out transactions by mail or telephone (MOTO) for decades. However, these all relied on forms of ID&V such as address and date of birth. Yet, as such information is now readily available online, they can no longer be considered sufficiently robust.

This has driven a need to develop and accept new methods of ID&V with both customers and businesses having to adapt to the new business realities.

The most obvious of these is the password, which comes with its own drawbacks. Having to come up with a secure, eight-character password that includes a capital letter, a symbol and a number can be a challenge, especially if you can’t reuse any of your last five variations.
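
As a toy illustration of that kind of policy, the sketch below checks a candidate password for length, character classes and reuse against the last five entries. The plaintext history list is purely for illustration; a real system would compare salted hashes rather than store passwords.

# Toy sketch of the password policy described above: at least eight characters,
# an upper-case letter, a digit and a symbol, and not one of the last five
# passwords. A real system would compare salted hashes, never stored plaintext.
import string

def meets_policy(candidate: str, previous_passwords: list) -> bool:
    long_enough = len(candidate) >= 8
    has_upper   = any(c.isupper() for c in candidate)
    has_digit   = any(c.isdigit() for c in candidate)
    has_symbol  = any(c in string.punctuation for c in candidate)
    not_reused  = candidate not in previous_passwords[-5:]
    return all([long_enough, has_upper, has_digit, has_symbol, not_reused])

history = ["Winter2015!", "Winter2016!"]
print(meets_policy("Spring2017$", history))  # True
print(meets_policy("password", history))     # False: too weak
print(meets_policy("Winter2016!", history))  # False: reused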

This can lead to a fundamental problem with digital ID&V: if it is time consuming and challenging, then it significantly detracts from the very convenience digital commerce is supposed to bring.

This is why the industry is continually looking for new ways to improve the ID&V experience without impacting negatively on the user experience. Currently, the hot talking point is biometrics, which have the advantage of convenience; but as they are seldom independently verified, they should not be relied upon solely. However, they can form part of strong, multi-factor authentication alongside something you have and something you know, as sketched after the list below.


Biometrics are, quite simply, the use of human characteristics for ID&V. A variety of different forms are currently being trialled.

  • Voice recognition – It can verify someone in around 15 seconds, quicker than passwords.[3] Yet questions remain about the accuracy of this method. What if someone is in a crowded room or restaurant? Could the technology cancel out the background noise?
  • Facial recognition – Also known as “selfie” authentication. For this to work, the lighting of the photograph needs to be of sufficient quality, which isn’t always guaranteed.
  • Fingerprint recognition – It’s widely used, it’s trusted, it’s easy, but it is not perfect. Fingerprints can be copied by fraudsters using easily obtained chemicals. If a fraudster has your phone and wants access to it, they can.
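
As a simple illustration of the multi-factor idea mentioned above, the sketch below treats a biometric match as necessary but not sufficient, requiring at least one additional factor (a PIN or a registered device) before authorising the attempt. The threshold and factor names are assumptions for illustration only, not any vendor's scheme.

# Sketch of the multi-factor idea above: a biometric match on its own is not
# decisive, but combined with a knowledge factor (a PIN) or a possession factor
# (a registered device) it can authorise the attempt. The threshold and factor
# names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AuthAttempt:
    biometric_score: float   # 0.0-1.0 similarity from, e.g., a fingerprint matcher
    pin_correct: bool        # something you know
    known_device: bool       # something you have

def verify(attempt: AuthAttempt, biometric_threshold: float = 0.9) -> bool:
    """Require the biometric plus at least one other factor to pass."""
    biometric_ok = attempt.biometric_score >= biometric_threshold
    other_factor = attempt.pin_correct or attempt.known_device
    return biometric_ok and other_factor

print(verify(AuthAttempt(0.95, pin_correct=True,  known_device=False)))  # True
print(verify(AuthAttempt(0.95, pin_correct=False, known_device=False)))  # False: biometric alone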

Where to now?

ID&V is part of our lives, and while there may be complaints about the inconvenience that obtrusive security adds to digital commerce, it is still an improvement on how things used to be.

The good news is that it is going to become even better suited to the dominant mobile platform. Despite some issues around biometrics, they will become an integral part of ID&V, although it is likely they will form part of a wider, multi-factor ID&V process, incorporating factors such as a PIN to give further security.

[1] Statista, 2016

[2] Retail Research, 2016

[3] China News

How to Choose the Right EDR Solution for Your Organization Fri, 23 Sep 2016 05:55:00 -0500
The rise of cyber-attacks has led to a major uptick in breaches in recent years, and these attacks have increased not only in volume but in sophistication as well. Although the motivation of hackers remains the same – money, information, and more money – their new methods are much more complex, invasive and harder to stop. Cyber-criminals are now attacking the endpoint, bypassing traditional defenses and hitting organizations where it hurts most. The need for an Endpoint Detection and Response (EDR) solution has never been higher for any organization that wants to be as protected as possible from the threat of attack.

EDR solutions have been available for several years, but are getting much more attention now, mostly due to the rise in ransomware, a targeted threat that can infect multiple systems via the endpoint. The rise of ransomware is forcing anyone who handles corporate security to reevaluate their security solutions and realize the importance of immediate EDR implementation.

Traditional antivirus solutions, although very important in their own right, aren’t enough to protect an organization from attacks on the endpoint. In addition, traditional antivirus solutions can only block what they know, and if a threat isn’t recognized, it still has the ability to pass through. A strong EDR, on the other hand, can evaluate software and label it as a threat or identify it as “goodware,” letting only this permissible category through. This is important as the sophistication of hacks improves.
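
As a toy contrast between the two models just described (with invented hash values, not any vendor's actual engine): a blocklist check lets anything unknown through, while a default-deny check only admits software that has been positively classified as goodware.

# Toy contrast between the two models described above. The hash values and lists
# are invented for illustration; they are not real signatures.
KNOWN_BAD  = {"e3b0c44298fc1c14"}                      # signatures of known malware
KNOWN_GOOD = {"a1b2c3d4e5f60718", "ffeeddccbbaa9988"}  # positively classified goodware

def blocklist_allows(file_hash: str) -> bool:
    # Traditional antivirus model: anything not recognised as bad gets through.
    return file_hash not in KNOWN_BAD

def default_deny_allows(file_hash: str) -> bool:
    # Classification model: only positively vetted software is allowed to run.
    return file_hash in KNOWN_GOOD

unknown = "0123456789abcdef"   # a brand-new, never-seen binary
print(blocklist_allows(unknown))     # True: the unknown threat slips through
print(default_deny_allows(unknown))  # False: blocked until classified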

It’s clear EDR solutions need to be an organizational asset now and into the future. Here is what organizations need to consider when choosing the right EDR solution:

  • Tradition – Organizations need to choose an EDR solution from a company with a tradition in the cybersecurity space. To meet the demand for EDR solutions and products, start-ups and new companies are popping up all over the cybersecurity space, yet they rely on third-party data, as opposed to established cybersecurity firms that have the knowledge, history and proprietary data to classify threats as either “goodware” or “badware.” Several upstart firms offer solutions that merely score a threat without formally classifying it as “good” or “bad”; this scoring approach still has the potential to let an unknown threat slip through the cracks. When it comes to something as important as an organization’s information, CTOs need confidence that they are relying on trusted data, not just estimates. Although risk tolerance varies from organization to organization, it is something that needs to be defined as part of a security strategy.
  • Visibility – An EDR solution should run as a managed service based on complex analysis; organizations need visibility into EDR operations and management while still having confidence in it as a managed service. This is a highly technical product, and it’s important that the vendor provides a full service, not just the product.
  • Implementation Cost – Organizations of all sizes need to consider what it will cost to get an EDR solution up and running within their environment. Everything needs to be considered, including technical resources, services, installation, updates and support; it is not simply the cost of licenses. The more complex the technology, the more factors (both hard and soft costs) need to be weighed when it comes to price and budget.

Information is the lifeblood of every organization, and when that information is threatened, the entire organization is threatened. We all know, based on major corporate hacks at Sony, Target, and JPMorgan Chase, that a breach can be devastating not only to the company, but to consumer and client confidence in that brand. As ransomware and other advanced attacks continue to become more commonplace, having the right EDR solution in place is now more important than ever.

About the author: Tom Wayne is Panda Security’s Sales Manager for the U.S. and Canada markets. Tom is based in Orlando, Fla.
