Infosec Island Latest Articles

Augmented Reality Will Compromise the Privacy and Safety of Attack Victims
Wed, 08 Jul 2020 00:38:48 -0500

In the coming years, new technologies will further invade every element of daily life with sensors, cameras and other devices embedded in homes, offices, factories and public spaces. A constant stream of data will flow between the digital and physical worlds, with attacks on the digital world directly impacting the physical and creating dire consequences for privacy, well-being and personal safety.

Augmented Reality (AR) technologies will provide new opportunities for attackers to compromise the privacy and safety of their victims. Organizations rushing to adopt AR to enhance products and services will become an attractive target for attackers.

Compromised AR technologies will have an impact on a range of industries as they move beyond the traditional entertainment and gaming markets into areas such as retail, manufacturing, engineering and healthcare. Attackers will perform man-in-the-middle attacks on AR-enabled devices and infrastructure, gaining access to intimate and sensitive information in real-time. Ransomware and denial of service attacks will affect the availability of AR systems used in critical processes such as surgical operations or engineering safety checks. Attacks on the integrity of data used in AR systems will threaten the health and safety of individuals and the reputations of organizations.

As AR begins to pervade many elements of life, organizations, governments and consumers will begin using it more frequently and with greater dependency. AR will bridge the digital and physical realms. But as a relatively immature technology it will present nation states, organized criminal groups, terrorists and hackers with new opportunities to distort reality.

What is the Justification for This Threat?

AR has been heralded as the future visual interface to digital information systems. With 5G networks reducing latency between devices, AR technologies will proliferate across the world, with significant investment in the UK, US and Chinese markets.

The estimated global market value for AR technologies is set to grow from $4 billion in 2017 to $60 billion by 2023, with use cases already being developed in the entertainment, retail, engineering, manufacturing and healthcare industries. There are increasing signs that AR will be promoted by major technology vendors such as Apple, which is said to be developing an AR headset for launch in 2020.

Vulnerabilities in devices, mobile apps and systems used by AR will give attackers the opportunity to compromise information, steal highly valuable and sensitive intellectual property, send false information to AR headsets and prevent access to AR systems.

The development of AR technologies across the manufacturing and engineering sectors is being driven by digital transformation and the desire for lower operational costs, increased productivity and streamlined processes. As AR systems and devices become the chosen medium for displaying schematics, blueprints and manuals to workers, attackers will be able to manipulate the information provided in real-time to compromise the quality and safety of products, as well as threatening the lives of users.

Many industries will become dependent on AR technologies for their products and services. For example, within air traffic control, AR displays are being evaluated as an aid to understanding aircraft movements in conditions of poor visibility. In the logistics and transport industries, AR will build upon systems such as GPS and voice assistants. With the help of Internet of Things (IoT) sensors, AI technologies, 5G and edge computing, AR systems will be able to overlay information to drivers in real-time. This will include demonstrating where live traffic accidents are happening, assisting during poor weather conditions, providing accurate journey times, and highlighting vehicle performance.

If the integrity or availability of data used in such systems is compromised, it will lead to significant operational disruption as well as risks to health and safety.

The healthcare industry is already a major target for cyber-attacks and the adoption of immature and vulnerable AR technologies in medical administration and surgical environments is likely to accelerate this trend. Medical professionals will be able to access sensitive records such as medical history, medication regimens and prescriptions through AR devices. This will create a greater attack surface as data is made available on more devices, resulting in a growing number of breaches and thefts of sensitive personal information.

AR promises much, but organizations will soon find themselves targeted by digital attacks that distort the physical world, disrupting operations and causing significant financial and reputational damage.

How Should Your Organization Prepare?

Organizations should be wary of the risks posed by AR. Many of the opportunities that AR ushers in will need to be risk assessed, with mitigating controls introduced to ensure that employees and consumers are safe and that privacy requirements are upheld.

In the short term, organizations should enhance vulnerability scanning and risk assessments of AR devices and software. They should also ensure that AR systems and devices that have records relating to personal data are secure. Additionally, create work-arounds, business continuity plans and redundancy processes in the event of failure of critical AR systems and devices.

In the long term, limit data propagation and sharing across AR environments. Organizations should also ensure that security requirements are included when procuring AR devices and purchase comprehensive insurance coverage for AR technology. Finally, establish and maintain skillsets required for individuals in roles that are reliant upon AR technology.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
Ending the Cloud Security Blame Game
Wed, 08 Jul 2020 00:34:00 -0500

Like many things in life, network security is a continuous cycle. Just when you’ve completed the security model for your organization’s current network environment, the network will evolve and change – which will in turn demand changes to the security model. And perhaps the biggest change that organizations’ security teams need to get to grips with is the cloud.

This was highlighted by a recent survey, in which over 75% of respondents said the cloud service provider is entirely responsible for cloud security. This rather worrying finding was offset by some respondents stating that security is also the responsibility of the customer to protect their applications and data in the cloud service, which shows at least some familiarity with the ‘shared responsibility’ cloud security model. 

What exactly does ‘shared responsibility’ mean? 

In reality, the responsibility for security in the cloud is only shared in the same way that an auto manufacturer installs locks and alarms in its cars. The security features are certainly there, but they offer no protection at all unless the vehicle owner actually activates and uses them.

In other words, responsibility for security in the public cloud isn’t really ‘shared’. Ensuring that applications and data are protected rests entirely on the customer of those services. Over recent years we’ve seen several high-profile companies unwittingly expose large volumes of data in AWS S3 buckets. These incidents were not caused by flaws in Amazon’s infrastructure: they were the result of users misconfiguring the S3 services they were using, and failing to apply proper controls when uploading sensitive data. The data was placed in buckets protected by only weak passwords – and in some cases, no password at all.
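The customer-side check this implies is straightforward. Below is a minimal sketch: the four setting names mirror S3’s “Block Public Access” options, but the configuration dict is a hypothetical stand-in – a real bucket would be queried through the AWS API (for example, boto3’s get_public_access_block).

```python
# Hedged sketch: the four names below are S3's "Block Public Access"
# settings, but the config dict is a hypothetical stand-in for what
# an API query against a real bucket would return.

PUBLIC_ACCESS_SETTINGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def bucket_is_exposed(config: dict) -> bool:
    """True if any public-access setting is missing or disabled."""
    return any(not config.get(setting, False)
               for setting in PUBLIC_ACCESS_SETTINGS)
```

A bucket left with nothing configured is flagged as exposed; only a configuration with all four settings enabled passes – which is exactly the discipline the breached companies lacked.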

Cloud exposure

It’s important to remember that cloud servers and resources are much more exposed than physical, on-premise servers. For example, if you make a mistake when configuring the security for an on-premise server that stores sensitive data, it is still likely to be protected by other security measures by default. It will probably sit behind the main corporate gateway, or other firewalls used to segment the network internally. Its databases will be accessible only from well-defined network segments. Users logging into it will have their accounts controlled by the centralized password management system. And so on.

In contrast, when you provision a server in the public cloud, it may easily be exposed to and accessible from any computer, anywhere in the world. Apart from a password, it might not have any other default protections in place. Therefore, it’s up to you to deploy the controls to protect the public cloud servers you use, and the applications and data they process. If you neglect this task and a breach occurs, the fault will be yours, not the cloud provider’s.

This means that it is the responsibility of your security team to establish perimeters, define security policies and implement controls to manage connectivity to those cloud servers. They need to set up controls to manage the connection between the organization’s public cloud and on-premise networks, for example using a VPN, and consider whether encryption is needed for data in the cloud. These measures will also require a logging infrastructure to record actions for management and audits, to get a record of what changes were made and who made them.

Of course, all these requirements across both on-premise and cloud environments add significant complexity to security management, demanding that IT and security teams use multiple different tools to make network changes and enforce security. However, using a network security policy management solution will greatly simplify these processes, enabling security teams to have visibility of their entire estate and enforce policies consistently across public clouds and the on-premise network from a single console.

The solution’s network simulation capabilities can be used to easily answer questions such as: ‘is my application server secure?’, or ‘is the traffic between these workloads protected by a security gateway?’ It can also quickly identify issues that could block an application’s connectivity (such as misconfigured or missing security rules, or incorrect routes) and then plan how to correct the connectivity issue across the relevant security controls. What’s more, the solution keeps an audit trail of every change for compliance reporting.
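The kind of query such a simulation answers can be sketched in miniature. The rule records and zone names below are invented for illustration, not any vendor’s schema: the point is that each flow can be resolved to both a verdict and whether a security gateway enforces it.

```python
# Toy model of a network-simulation query: is a given flow allowed,
# and is it protected by a security gateway? Rules and zones are
# illustrative inventions, not a real product's data model.

RULES = [
    # (rule name, source zone, destination zone, port, via gateway?)
    ("web-to-app",  "dmz",  "app", 8443, True),
    ("app-to-db",   "app",  "db",  5432, True),
    ("mgmt-legacy", "mgmt", "db",  5432, False),  # unprotected path
]

def check_flow(src, dst, port):
    """Return ('allowed' or 'blocked', protected_by_gateway)."""
    for _name, s, d, p, via_gw in RULES:
        if (s, d, p) == (src, dst, port):
            return "allowed", via_gw
    return "blocked", False
```

Querying the model immediately surfaces the legacy management path that is allowed but bypasses the gateway – the class of misconfiguration a simulation is meant to catch.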

Remember that in the public cloud, there’s almost no such thing as ‘shared responsibility.’ Security is primarily your responsibility – with help from the cloud provider. But with the right approach to security management, that responsibility and protection is easy to maintain, without having to play the blame game.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Edge Computing Set to Push Security to the Brink
Sat, 13 Jun 2020 07:29:00 -0500

In the coming years, the requirement for real-time data processing and analysis will drive organizations to adopt edge computing in order to reduce latency and increase connectivity between devices – but adopters will inadvertently bring about a renaissance of neglected security issues. Poorly secured edge computing environments will create multiple points of failure, and a lack of security oversight will enable attackers to significantly disrupt operations.

Organizations in industries such as manufacturing, utilities, or those using IoT and robotics will be dependent upon edge computing to connect their ever-expanding technical infrastructure. However, many will not have the visibility, security or analysis capabilities that have previously been associated with cloud service providers – information risks will be transferred firmly back within the purview of the organization. Attackers will exploit security blind spots, targeting devices on the periphery of the network environment. Operational capabilities will be crippled by sophisticated malware attacks, with organizations experiencing periods of significant downtime and financial damage.

Poor implementation of edge computing solutions will leave organizations open to attack. Nation states, hacking groups, hacktivists and terrorists aiming to disrupt operations will target edge computing devices, pushing security to the brink of failure and beyond.

What is the Justification for This Threat?

As the world moves into the fourth industrial revolution, the requirement for high-speed connectivity, real-time data processing and analytics will be increasingly important for business and society. With the combined IoT market size projected to reach $520 billion by 2021, the development of edge computing solutions alongside 5G networks will be required to provide near-instantaneous network speed and to underpin computational platforms close to where data is created.

The transition of processing from cloud platforms to edge computing will be a requirement for organizations demanding speed and significantly lower latency between devices. With potential use cases of edge computing ranging from real-time maintenance in vehicles, to drone surveillance in defense and mining, to health monitoring of livestock, securing this architecture will be a priority.

With edge computing solutions, security blind spots will provide attackers with an opportunity to access vital operational data and intellectual property. Moreover, organizations will be particularly susceptible to espionage and sabotage from nation states and other adversarial threats. Edge computing environments, by their nature, are decentralized and unlikely to benefit from initiatives such as security monitoring. Many devices sitting within this type of environment are also likely to have poor physical security while also operating in remote and hostile conditions. This creates challenges in terms of maintaining these devices and detecting any vulnerabilities or breaches.

Organizations that adopt edge computing will see an expansion of their threat landscape. With many organizations valuing speed and connectivity over security, the vast number of IoT devices, robotics and other technologies operating within edge computing environments will become unmanageable and hard to secure.

Edge computing will underpin critical national infrastructure (CNI) and many important services, reinforcing the necessity to secure them against a range of disruptive attacks and accidental errors. Failures in edge computing solutions will result in financial loss, regulatory fines and significant reputational damage. An inability to secure this infrastructure will be detrimental to the operational capabilities of the business as attackers compromise both physical and digital assets alike. Human lives may also be endangered, should systems in products such as drones, weaponry and vehicles be compromised.

How Should Your Organization Prepare?

Organizations that are planning to adopt edge computing should consider if this architectural approach is suitable for their requirements.

In the short term, organizations should review physical security and potential points of failure for edge computing environments in the context of operational resilience. Carry out penetration testing on edge computing environments, including hardware components. Finally, identify blind spots in security event and network management systems.

In the long term, generate a hybrid security approach that incorporates both cloud and edge computing. Create a secure architectural framework for edge computing and ensure security specialists are suitably trained to deal with edge computing-related threats.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Make It So: Accelerating the Enterprise with Intent-Based Network Security
Sat, 13 Jun 2020 05:24:00 -0500

Sometimes, it seems that IT and security teams can’t win. They are judged on how quickly they can deploy their organization’s latest application or digital transformation initiative, but they’re also expected to safeguard those critical applications and data in increasingly complex hybrid networks – and in an ever more sophisticated threat landscape. That’s not an easy balancing act.

When an enterprise rolls out a new application, or migrates a service to the cloud, it can take days, or even weeks, to ensure that all the servers and network segments can communicate with each other, while blocking access to hackers and unauthorized users. This is because the network fabric can include hundreds of servers and devices (such as firewalls and routers) as well as virtualized devices in public or private clouds.

When making changes to all these devices, teams need to ensure that they don’t disrupt the connectivity that supports the application, and don’t create any security gaps or compliance violations. But given the sheer complexity of today’s networks, it’s not too surprising that many organizations struggle to do this. Our 2019 survey on managing security in hybrid and multi-cloud environments found that over 42% of organizations had experienced application or network outages caused by simple human errors or misconfigurations.

What’s more, most organizations already have large network security policies in place with thousands, or even millions of policy rules deployed on their firewalls and routers. Removing any of these rules is often a very worrisome task, because the IT teams don’t have an answer to the big question of “why does this rule exist?”

The same question arises in many other scenarios, such as planning a maintenance window or handling an outage (“which applications are impacted when this device is powered off?”, “who should be notified?”), dealing with an insecure rule flagged by an audit, or limiting the blast radius of a malware attack (“what will be impacted if we remove this rule?”).

Intent-based networking (IBN) promises to solve these problems. Once security policies are properly annotated with the intent behind them, these operational tasks become much clearer and can be handled efficiently and with minimal damage. Instead of “move fast and break things” (which is unattractive in a security context, because “breaking” might mean “become vulnerable”) – wouldn’t it be better to “move fast and NOT break things”?
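A minimal sketch of what such annotation looks like, assuming one record per firewall rule (the rule IDs, owners and intents here are hypothetical), shows how the earlier operational questions collapse into simple lookups:

```python
# Sketch of intent-annotated policy rules. All identifiers are
# invented for illustration; the point is that once intent is
# recorded, "why does this rule exist?" and "what breaks if this
# device goes down?" are answerable queries rather than guesswork.

RULE_INTENTS = {
    "fw1-rule-204": {"intent": "payroll app -> HR database",
                     "owner": "hr-it", "device": "fw1"},
    "fw2-rule-017": {"intent": "CRM web tier -> CRM API",
                     "owner": "sales-it", "device": "fw2"},
}

def why_does_rule_exist(rule_id):
    rule = RULE_INTENTS.get(rule_id)
    return rule["intent"] if rule else "no recorded intent"

def impacted_by_outage(device):
    """Intents (applications) affected if a device is powered off."""
    return sorted(r["intent"] for r in RULE_INTENTS.values()
                  if r["device"] == device)
```

A rule with no recorded intent is exactly the “worrisome” case described above: nobody can say what removing it will break.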

Intentions versus reality

As such, it’s no surprise that IBN is appealing to larger enterprises: it has the potential to ensure that networks can quickly adapt to the changing needs of the business, boosting agility without creating additional risk. However, while there are several IBN options available today, the technology is not yet fully mature. Some solutions offer IBN capabilities only in single-vendor network environments, while others have limited automation features. 

This means many current solutions are of limited use to the majority of enterprises, which have hybrid network environments. To satisfy security and compliance demands, an enterprise’s network management and automation processes must cover its entire heterogeneous fabric, including all security devices and policies (whether in the data center, at its perimeter, across on-premise networks or in the cloud) to enable true agility without compromising protection.

So how can enterprises with these complex, hybrid environments align their network and security management processes closely to the needs of the business? Can they automate the management of business-driven application and network changes with straightforward, high level ‘make it so’ commands?

Also, where would the “intent” information come from? In an existing “brown-field” environment, how can we find out, in retrospect, what was the intent behind the existing policies?

The answer is that it is possible to do all this with network security policy management (NSPM) solutions. These can already deliver on IBN’s promise of enabling automated, error-free handling of business-driven changes, and faster application delivery across heterogeneous environments – without compromising the organization’s security or compliance posture.

Intent-based network security

The right solution starts with the ability to automatically discover and map all the business applications in an enterprise, by monitoring and analyzing the network connectivity flows that support them. Through clustering analysis of NetFlow traffic summaries, modern NSPM solutions can automatically identify correlated business applications, and label the security policies supporting them – thereby automatically identifying the intent.
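A deliberately simplified stand-in for that discovery step can be sketched as follows. Real NSPM products use much richer clustering features; here, flows are simply grouped by destination endpoint, so traffic converging on one server/port pair is labelled as a single candidate application (the addresses are illustrative).

```python
# Simplified illustration of flow-based application discovery:
# group flow summaries by (destination host, destination port) so
# that clients converging on the same endpoint form one candidate
# application. Addresses are invented for the example.

from collections import defaultdict

def discover_applications(flows):
    """Map each (dst host, dst port) endpoint to its set of clients."""
    apps = defaultdict(set)
    for src, dst, port in flows:
        apps[(dst, port)].add(src)
    return dict(apps)

flows = [
    ("10.0.1.5", "10.0.9.20", 443),   # two clients hitting one
    ("10.0.1.6", "10.0.9.20", 443),   # web endpoint -> one app
    ("10.0.2.9", "10.0.9.21", 1521),  # separate database endpoint
]
candidate_apps = discover_applications(flows)
```

Each discovered endpoint, together with its clients, becomes a candidate application whose supporting policy rules can then be labelled with that intent.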

NSPM solutions can also identify the security devices and policies that support those connectivity flows across heterogeneous on-premise, SDN and cloud environments. This gives a ‘single source of truth’ for the entire network, storing and correlating all the application’s attributes in a single pane of glass, including configurations, IP addresses and policies.

With this holistic application and network map, the solution enables business application owners to request changes to network connectivity for their business applications without having to understand anything about the underlying network and security devices that the connectivity flows pass through.

The application owner simply makes a network connectivity request in their own high-level language, and the solution automatically understands and defines the technical changes required directly on the relevant network security devices. 

As part of this process the solution assesses the change requests for risk and compliance with the organization’s own policies, as well as industry regulations. If the changes carry no significant security risk, the solution automatically implements them directly on the relevant devices, and then verifies the process has been completed – all with zero touch. 
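That zero-touch gate can be sketched as a simple triage function. The risk criteria below are invented for illustration; a real solution evaluates each request against the organization’s own risk policies and industry regulations.

```python
# Hypothetical sketch of the risk gate: low-risk change requests are
# applied automatically, anything else is routed for manual review.
# The scoring criteria are illustrative inventions, not a product's.

RISKY_PORTS = {23, 445, 3389}  # e.g. telnet, SMB, RDP

def triage(request):
    """Return 'auto-implement' for low-risk requests, else 'manual review'."""
    if request["dst_port"] in RISKY_PORTS or request["source"] == "any":
        return "manual review"
    return "auto-implement"
```

Only requests that fall through the gate untouched proceed to automatic implementation and verification; everything else lands in a human queue.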

This means normal change requests are processed automatically — from request to implementation — in minutes, with little or no involvement of the networking team. Manual intervention is only required if a problem arises during the process, or if a request is flagged by the solution as high risk, while enabling IT, security and application teams to continuously monitor the state of the network and the business applications it supports. 

Network security management solutions realize the potential of IBN, as they: 

  1. Offer an application discovery capability that automatically assigns the intent to existing policies.
  2. Translate and validate high-level business application requests into the relevant network configuration changes.
  3. Automate the implementation of those changes across existing heterogeneous network infrastructure, with the assurance that changes are processed compliantly.
  4. Maintain awareness of the state of the enterprise network to ensure uptime, security and compliance. 
  5. Automatically alert IT staff to changes in network and application behaviors, such as an outage or break in connectivity, and recommend corrective action to maintain security and compliance.

These intent-based network security capabilities allow business application owners to express their high-level business needs, and automatically receive a continuously maintained, secure and compliant end-to-end connectivity path for their applications. They also enable IT teams to provision, configure and manage networks far more easily, quickly and securely. This achieves the delicate balance of meeting business demands for speed and agility, while ensuring that risks are minimized.

About the author: Professor Avishai Wool is the CTO and Co-Founder of AlgoSec.

Threat Horizon 2022: Cyber Attacks Businesses Need to Prepare for Now
Fri, 01 May 2020 14:32:41 -0500

The digital and physical worlds are on an irreversible collision course. By 2022, organizations will be plunged into crisis as ruthless attackers exploit weaknesses in immature technologies and take advantage of an unprepared workforce. At the same time, natural forces will ravage infrastructure.

Over the coming years organizations will experience growing disruption as threats from the digital world have an impact on the physical. Invasive technologies will be adopted across both industrial and consumer markets, creating an increasingly turbulent and unpredictable security environment. The requirement for a flexible approach to security and resilience will be crucial as a hybrid threat environment emerges.

The impact of threats will be felt on an unprecedented scale as ageing and neglected infrastructure is attacked, with services substantially disrupted due to vulnerabilities in the underlying technology. Mismanagement of connected assets will provide attackers with opportunities to exploit organizations.

A failure to understand the next generation of workers, the concerns of consumers and the risk posed by deceptive technology will erode the trust between organizations, consumers and investors. As a result, the need for a digital code of ethics will arise in order to protect brand reputation and profitability.

Organizations will have to adapt quickly to survive when digital and physical worlds collide. Those that don’t will find themselves exposed to threats that will outpace and overwhelm them.

At the Information Security Forum, we recently released Threat Horizon 2022, the latest in an annual series of reports that provide businesses a forward-looking view of the increasing threats in today’s always-on, interconnected world. In Threat Horizon 2022, we highlighted the top threats to information security emerging over the next two years, grouped into three themes, as determined by our research.

Let’s take a quick look at these threats and what they mean for your organization:


Invasive Technology Disrupts the Everyday

New technologies will further invade every element of daily life with sensors, cameras and other devices embedded in homes, offices, factories and public spaces. A constant stream of data will flow between the digital and physical worlds, with attacks on the digital world directly impacting the physical and creating dire consequences for privacy, well-being and personal safety.

Augmented Attacks Distort Reality: The development and acceptance of AR technologies will usher in new immersive opportunities for businesses and consumers alike. However, organizations leveraging this immature and poorly secured technology will provide attackers with the chance to compromise the privacy and safety of individuals when systems and devices are exploited.

Behavioral Analytics Trigger A Consumer Backlash: Organizations that have invested in a highly connected nexus of sensors, cameras and mobile apps to develop behavioral analytics will find themselves under intensifying scrutiny from consumers and regulators alike as the practice is deemed invasive and unethical. The treasure trove of information harvested and sold will become a key target for attackers aiming to steal consumer secrets, with organizations facing severe financial penalties and reputational damage for failing to secure their information and systems.

Robo-Helpers Help Themselves to Data: A range of robotic devices, developed to perform a growing number of both mundane and complex human tasks, will be deployed in organizations and homes around the world. Friendly-faced, innocently-branded, and loaded with a selection of cameras and sensors, these constantly connected devices will roam freely. Poorly secured robo-helpers will be weaponized by attackers, committing acts of corporate espionage and stealing intellectual property. Attackers will exploit robo-helpers to target the most vulnerable members of society, such as the elderly or sick at home, in care homes or hospitals, resulting in reputational damage for both manufacturers and corporate users.


Neglected Infrastructure Cripples Operations

The technical infrastructure upon which organizations rely will face threats from a growing number of sources: man-made, natural, accidental and malicious. In a world where constant connectivity and real-time processing is vital to doing business, even brief periods of downtime will have severe consequences. It is not just the availability of information and services that will be compromised – opportunistic attackers will find new ways to exploit vulnerable infrastructure, steal or manipulate critical data and cripple operations.

Edge Computing Pushes Security to the Brink: In a bid to deal with ever-increasing volumes of data and process information in real time, organizations will adopt edge computing – an architectural approach that reduces latency between devices and increases speed – in addition to, or in place of, cloud services. Edge computing will be an attractive choice for organizations, but will also become a key target for attackers, creating numerous points of failure. Furthermore, security benefits provided by cloud service providers, such as oversight of particular IT assets, will also be lost.

Extreme Weather Wreaks Havoc on Infrastructure: Extreme weather events will increase in frequency and severity year-on-year, with organizations suffering damage to their digital and physical estates. Floodplains will expand; coastal areas will be impacted by rising sea levels and storms; extreme heat and droughts will become more damaging; and wildfires will sweep across even greater areas. Critical infrastructure and data centers will be particularly susceptible to extreme weather conditions, with business continuity and disaster recovery plans pushed to breaking point.

The Internet of Forgotten Things Bites Back: IoT infrastructure will continue to expand, with many organizations using connected devices to support core business functions. However, with new devices being produced more frequently than ever before, the risks posed by multiple forgotten or abandoned IoT devices will emerge across all areas of the business. Unsecured and unsupported devices will be increasingly vulnerable as manufacturers go out of business, discontinue support or fail to deliver the necessary patches to devices. Opportunistic attackers will discover poorly secured, network-connected devices, exploiting organizations in the process.


A Crisis of Trust Undermines Digital Business

Bonds of trust will break down as emerging technologies and the next generation of employees tarnish brand reputations, compromise the integrity of information and cause financial damage. Those that lack transparency, place trust in the wrong people and controls, and use technology in unethical ways will be publicly condemned. This crisis of trust between organizations, employees, investors and customers will undermine organizations’ ability to conduct digital business.

Deepfakes Tell True Lies: Digital content that has been manipulated by AI will be used to create hyper-realistic copies of individuals in real-time – deepfakes. These highly plausible digital clones will cause organizations and customers to lose trust in many forms of communication. Credible fake news and misinformation will spread, with unwary organizations experiencing defamation and reputational damage. Social engineering attacks will be amplified using deepfakes, as attackers manipulate individuals with frightening believability.

The Digital Generation Become the Scammer’s Dream: Generation Z will start to enter the workplace, introducing new information security concerns to organizations. Attitudes, behaviors, characteristics and values exhibited by the newest generation will carry over into their working lives. Reckless approaches to security, privacy and consumption of content will make them obvious targets for scammers, consequently threatening the information security of their employers.

Activists Expose Digital Ethics Abuse: Driven by huge investments in pervasive surveillance and tracking technologies, the ethical element of digital business will enter the spotlight. Activists will begin targeting organizations that they deem immoral, exposing unethical or exploitative practices surrounding the technologies they develop and who they are sold to. Employees motivated by ethical concerns will leak intellectual property, becoming whistle-blowers or withdrawing labor entirely. Brand reputations will suffer, as organizations that ignore their ethical responsibilities are placed under mounting pressure.

Preparation Must Begin Now

Information security professionals are facing increasingly complex threats—some new, others familiar but evolving. Their primary challenge remains unchanged: to help their organizations navigate mazes of uncertainty where, at any moment, they could turn a corner and encounter information security threats that inflict severe business impact.

In the face of mounting global threats, organizations must make methodical and extensive commitments to ensure that practical plans are in place to adapt to major changes in the near future. Employees at all levels of the organization will need to be involved, from board members to managers in non-technical roles.

The three themes listed above could impact businesses operating in cyberspace at break-neck speeds, particularly as the use of the Internet and connected devices spreads. Many organizations will struggle to cope as the pace of change intensifies. These threats should stay on the radar of every organization, both small and large, even if they seem distant. The future arrives suddenly, especially when you aren’t prepared.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
Why the Latest Marriott Breach Should Make Us "Stop and Think" About Security Behaviors Tue, 07 Apr 2020 02:07:01 -0500

Marriott International has experienced its second data breach after two franchise employee logins were used to access more than five million guest records beginning in January. Contact details, airline loyalty program account numbers, birth dates and more were collected -- but likely not Bonvoy loyalty account numbers, PINs or payment information.

As noted, this is the second breach Marriott has suffered in recent times, the first coming through its acquired Starwood brand of hotels back in 2018, when a large amount of personal information relating to its customers was lost. So here we go again. While this breach may not be as serious this time around, the big question is what it will do to customer trust in Marriott’s brand and reputation.

“Fool me once, shame on you, fool me twice, shame on me” comes to mind.

Most organizations that have gone through a breach review their security procedures and policies; no one wants it to happen to them again. Traditionally, extra funding is provided to deal with the necessary remediation, which can itself run into millions of dollars once funding a personal information monitoring service for victims, the inevitable fines, and the cost of rebuilding the brand are taken into account.

Therefore, the issue that Marriott will need to address is how this happened again within such a short period of the last breach. For some, particularly those accustomed to the European GDPR notification period, a second question arises: why did it take a month from discovery for Marriott to notify those affected?

Well, the answer to the second question is simple: the U.S. has no national data breach notification requirement, and the patchwork quilt of 48 state laws that do exist typically requires notification within 30 to 45 days, clearly quite a bit longer than the mandatory 72-hour GDPR breach notification period in Europe. As for the bigger question of how it happened again, only time will tell. For me, though, this highlights a key challenge for many organizations, not just in the hospitality sector: how do you secure your third-party suppliers?

The breach occurred at one of Marriott’s franchise properties through the login credentials of two employees at the property. From a security standpoint, this shines a light on two key challenges for security professionals today: the third-party supplier and awareness of the insider threat. Unfortunately, third parties are becoming more of a vulnerability than ever before.

Organizations of all sizes need to think about the consequences of a trusted third party, in this case a franchisee, providing accidental, but harmful, access to their corporate information. Information shared in the supply chain can include intellectual property, customer or employee data, commercial plans or negotiations, and logistics. To address information risk, breach or data leakage across third parties, organizations should adopt robust, scalable and repeatable processes – obtaining assurance proportionate to the risk faced. Whether or not this was the case with Marriott remains to be seen. 

Supply chain information risk management should be embedded within existing procurement and vendor management processes, so that it becomes part of regular business operations. Will this also help address the insider threat? It should certainly help raise awareness, but the reality is that the insider threat is unlikely to diminish in the coming years. Efforts to mitigate this threat, such as additional security controls and improved vetting of new employees, will remain at odds with efficiency measures.

Organizations need to shift from promoting awareness of the problem to creating solutions and embedding information security behaviors that affect risk positively. The risks are real because people remain a ‘wild card’ and businesses today depend on sharing critical information with third-party providers.

Many organizations recognize people as their biggest asset, yet many still fail to recognize the need to secure ‘the human element’ of information security. In essence, people should be an organization’s strongest control. Instead of simply making people aware of their information security responsibilities and how they should respond, the answer for businesses of all sizes is to embed positive information security behaviors that will result in “stop and think” behavior becoming a habit and part of an organization’s information security culture.

While many organizations have compliance activities which fall under the general heading of ‘security awareness’, the commercial driver should be risk, and how new behaviors can reduce that risk. For some, that message may come too late and it may take a breach or two to drive the message home.  

The real question is for how much longer will consumers accept that the loss of their data is a cost of doing business before voting with their feet and taking their business to more trusted providers?

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Examining Potential Election Vulnerabilities: Are They Avoidable? Tue, 07 Apr 2020 01:58:56 -0500

In the U.S. and global communities, election security is a major concern because so many aspects of it can be insecure and open to attacks that may shift public opinion or be used for personal gain. Not only does the complexity of the U.S. government raise security concerns, campaigns also have weak points that make them a target for attacks.

Limited IT Resources Put Campaigns and Voters at Risk

Given limited IT budgets, volunteers, who often work directly with voters, sometimes use their own personal devices and applications to communicate with other team members and supporters; they also have access to key private data belonging to candidates and team members. These personal devices are also used to access campaign systems such as the Voter Activation Network (NGP VAN), which include voter information to support operations such as phone banking and door-to-door canvassing. Without proper security controls, these personal devices can be exploited by adversaries to put both the campaign and voters at risk. Additionally, the threat of fake news has evolved with the advent of deepfake technology, which combines artificial intelligence (AI) with video and audio to create media that appears authentic but is not.

Although security controls such as two-factor authentication (2FA) are helpful, campaigns and voters may still be at risk. Abel Morales, a security engineer at Exabeam, recommends that campaigns use user and entity behavior analysis (UEBA) to detect anomalous authentications. “By monitoring staffers’ behaviors and detecting anomalies from their typical workflows, IT would be able to reduce the impact of threats introduced through social engineering, phishing and other malicious techniques.” This method also can be used to detect voter anomalies as well.
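
To make the UEBA idea concrete, here is a minimal, illustrative sketch of the approach Morales describes, not any vendor's actual product: learn each staffer's typical login hour from history, then flag authentications that deviate sharply from that baseline. The class name, threshold, and data are invented for the example.

```python
from collections import defaultdict
from statistics import mean, pstdev

class LoginBaseline:
    """Toy UEBA-style model: learn each user's typical login hour,
    then flag authentications that deviate sharply from the baseline."""

    def __init__(self, threshold=2.0):
        self.threshold = threshold          # z-score cutoff for "anomalous"
        self.history = defaultdict(list)    # user -> observed login hours

    def observe(self, user, hour):
        self.history[user].append(hour)

    def is_anomalous(self, user, hour):
        hours = self.history[user]
        if len(hours) < 5:                  # too little data to judge
            return False
        mu, sigma = mean(hours), pstdev(hours)
        if sigma == 0:
            return hour != mu
        return abs(hour - mu) / sigma > self.threshold

model = LoginBaseline()
for h in [9, 9, 10, 8, 9, 10, 9]:          # staffer normally logs in around 9am
    model.observe("staffer", h)

print(model.is_anomalous("staffer", 9))    # typical hour -> False
print(model.is_anomalous("staffer", 3))    # 3am login -> True
```

Real UEBA products model many more signals (location, device, resource access), but the principle is the same: score deviations from a learned per-user baseline rather than matching fixed signatures.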

The continuing threat of ransomware attacks and nation-state attacks 

Ransomware attacks on voter databases and systems can extort payments in exchange for voter information. Ransomware encrypts data until a ransom is paid, and could also be used to manipulate voting results or lock administrators out of critical data during an election, thereby compromising voter confidence. Additionally, the increase in nation-state attacks is another major concern. Some officials believe that foreign influence on our elections will more likely come through social media to shape public opinion in whatever direction serves their specific goals. In particular, the FBI is worried that Russia will use social media to cause further division between the political parties or hack campaign websites to spread misinformation.

Does the government’s structure make election security more difficult?

The intricacies of the U.S. voting system also affect the security of elections because state and local governments are not forced to use the federal government’s testing standards. State and local governments have the option to adopt these security standards, use their own, or a hybrid. Also, testing for state and local governments can be completed by private companies or local universities, as there is no single federal test certification program. This deviation from the federal standard is also seen in the lack of mandatory audits to verify the integrity of the machines and testing procedures, and the management of the voter registration database system which contains voter records. Many of these database systems are outdated and ill-equipped to handle today’s cybersecurity threats, making it easier for adversaries to delete or add voters. Although these differences can be detrimental to the security of elections, they make it difficult for attackers to launch a large-scale, coordinated attack. 

The makeup of the voting machine market is a huge risk

Three companies make up more than 90 percent of the voting machine market, suggesting that a compromise of just one of these three companies could have a significant impact on any election. Manipulation is not a formidable task given many of these machines are running outdated software with existing vulnerabilities. As transitioning to machines running newer Windows operating systems in time for the 2020 election may not be possible, Microsoft has committed to providing free updates for all certified voting machines in operation running on Windows 7.

Internet-connected devices increase risk

Our U.S. voting system comprises many different types of devices with varying functions, including tallying and reporting votes. Security experts note that web-based systems such as election-reporting websites, candidate websites and voter roll websites are easier to attack than a voting machine. Many of these systems are IoT devices that have their own unique security challenges. Often, they are shipped with factory-set, hardcoded passwords; they cannot be patched or updated; and they run outdated protocols and lack encryption. They are also susceptible to botnets that can exploit large numbers of devices in a short period. IoT attacks could also compromise a user’s browser to manipulate votes or cut power to polling stations.
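
The factory-default password weakness lends itself to a simple audit. The sketch below is a hypothetical helper (the credential list, host names, and inventory format are invented for illustration) that flags devices still using well-known default logins:

```python
# A few widely published factory-default credential pairs (illustrative subset).
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "user"),
}

def audit_devices(inventory):
    """inventory: list of dicts with 'host', 'username', 'password'.
    Returns the hosts whose credentials match a known factory default."""
    return [
        d["host"]
        for d in inventory
        if (d["username"], d["password"]) in KNOWN_DEFAULTS
    ]

devices = [
    {"host": "camera-01", "username": "admin", "password": "admin"},
    {"host": "tally-02",  "username": "ops",   "password": "x9!kQ2"},
]
print(audit_devices(devices))   # ['camera-01']
```

An election IT team with a device inventory could run a check like this routinely and rotate any flagged credentials, a cheap control against the botnet-recruitment problem described above.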

Proactive responses to help understaffed election IT teams

To prevent targeted attacks, campaign IT tech teams and staffers are performing training courses to learn how to detect and report suspicious emails. The DNC has created a security checklist for campaigns with recommendations, and the Center for Internet Security has also developed a library of resources to help campaigns including a Handbook for Elections Infrastructure Security. Machine learning-based systems enable limited teams to operate 50 percent more efficiently through automation – which is essential given the scale and number of elections. Security orchestration, automation, and response (SOAR) as part of a modern SIEM can also orchestrate remediation in response to an identified anomaly through playbooks. SOAR automatically identifies and prioritizes cybersecurity risks and responds to low-level security events, which is extremely useful for state and local government agencies that operate with small cybersecurity teams.
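
As a rough illustration of the playbook idea, here is a toy triage loop, with invented alert types and remediation actions rather than any real SOAR product's API: low-severity alerts with a matching playbook are remediated automatically, while everything else is escalated to the (small) human team.

```python
# Illustrative playbooks: alert type -> automated remediation action.
PLAYBOOKS = {
    "phishing_email":   lambda a: f"quarantined message {a['id']}",
    "default_password": lambda a: f"disabled account on {a['host']}",
}

def triage(alerts, severity_cutoff=5):
    """Split alerts into automated remediations and human escalations."""
    auto, escalated = [], []
    for alert in sorted(alerts, key=lambda a: -a["severity"]):  # worst first
        playbook = PLAYBOOKS.get(alert["type"])
        if alert["severity"] < severity_cutoff and playbook:
            auto.append(playbook(alert))        # low-level event: run playbook
        else:
            escalated.append(alert["id"])       # high severity or no playbook
    return auto, escalated

alerts = [
    {"id": "A1", "type": "phishing_email", "severity": 3, "host": "pc-7"},
    {"id": "A2", "type": "ransomware",     "severity": 9, "host": "db-1"},
]
auto, escalated = triage(alerts)
print(auto)        # ['quarantined message A1']
print(escalated)   # ['A2']
```

The value for understaffed teams is exactly this split: the long tail of routine events is cleared automatically, so analyst time is reserved for the alerts that genuinely need judgment.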

Republicans and Democrats unite to offer a helping hand

In late 2019, recognizing the seriousness of election attacks and the lack of security resources, former campaign managers for Hillary Clinton and Mitt Romney launched a non-profit organization, Defending Digital Campaigns (DDC), which offers free to low-cost security technology and services to federal election campaigns. Some experts predict that the 2020 election will be one of the most anticipated digital security events in U.S. history. Given the complexity of the election process and voting system, security automation, behavior analytics and security education can be a part of the solution for managing a secure voting process.


About the author: Tim Matthews brings over 20 years of experience building and running software marketing teams and a focus on the security market. Prior to Exabeam, he was Vice President of Marketing at Imperva, where he led a worldwide marketing team.

Google Skips Chrome 82, Resumes Stable Releases Sun, 29 Mar 2020 11:14:03 -0500

Google is on track to resume the roll-out of stable Chrome releases next week, but says it will skip one version of the browser.

Last week, the Internet search giant said it was pausing upcoming releases of the browser, following an adjusted work schedule due to the COVID-19 (coronavirus) pandemic, and that both Chrome and Chrome OS releases would be affected.

At the time, the company revealed it would focus on the stability and security of releases, and that it would prioritize security updates for Chrome 80.

Now, Google says it is ready to resume pushing releases to the Stable channel as soon as the next week, with security and critical fixes meant for version 80 of the browser.

Moving forward, the company is planning the release of Chrome 81 in early April, but says it will then jump directly to Chrome 83, which is set to arrive in mid-May, thus skipping Chrome 82.

“M83 will be released three weeks earlier than previously planned and will include all M82 work as we cancelled the M82 release (all channels),” Google said.

This week, the company will resume the Canary, Dev and Beta channels, with Chrome 83 moving to Dev.

“We continue to closely monitor that Chrome and Chrome OS are stable, secure, and work reliably. We’ll keep everyone informed of any changes on our schedule,” the Internet giant said.

The company hasn’t shared any details on when Chrome 84 releases would start arriving, but said it would provide the information in a future update.

Following Google’s announcement last week, Microsoft said it would pause stable Edge releases, to align with the Chromium Project. Today, the Redmond-based tech company announced that Edge build 83.0.461.1 was released to the Dev channel.

“As you can see, this is the first update from major version 83.  This is a slight deviation from our normal schedule due to current events,” Microsoft says, adding that version 81 is heading for the Stable channel soon.

Related: Google Patches High-Risk Chrome Flaws, Halts Upcoming Releases

Related: Chrome 80 Released With 56 Security Fixes

Related: Chrome Will Block Insecure Downloads on HTTPS Pages

Benchmarking the State of the CISO in 2020 Fri, 27 Mar 2020 11:14:31 -0500

Driving digital transformation initiatives while safeguarding the enterprise is a mammoth task. In some aspects, it might even sound counter-intuitive when it comes to opening up IT infrastructure, or converging IT and OT networks to allow external parties such as partners and customers to closely interact with the organization to embrace new business models and collaboration (think cloud applications, APIs, sensors, mobile devices, etc.).

Although new technology is being adopted quickly, especially web frontends, applications and APIs, much of the underlying IT infrastructure as well as the supporting processes and governance models are somewhat legacy, and struggle to keep up.

For its 2020 CISO Benchmark Report, Cisco surveyed some 2,800 CISOs and other IT decision-makers from 13 countries, how they cope with that, and they came up with a number of interesting findings.

Cyber-threats are a global business risk

The World Economic Forum says business leaders view cyber-attacks as the #2 global risk to business in advanced economies, taking a back seat only to financial crises. Not surprisingly, 89 percent of the respondents in the Cisco study say their executives still view security as a high priority, but this number is down by 7 percent from previous years.

Nine out of ten respondents felt their company executives had solid measures for gauging the effectiveness of their security programs. This is encouraging, as clear metrics are key to a security framework, and it’s often difficult to get diverse executives and security players to agree on how to measure operational improvement and security results.

Leadership matters

The share of companies that have clarified the security roles and responsibilities on the executive team has risen and fallen in recent years, but it settled at 89 percent in 2020. Given that cyber-security is being taken more seriously and there is a major need for security leaders at top levels, the need to continue clarifying roles and responsibilities will remain critical.

The frequency with which companies are building cyber-risk assessments into their overall risk assessment strategies has shrunk by five percent from last year. Still, 91 percent of the survey respondents reported that they’re doing it. Similarly, 90 percent of executive teams are setting clear metrics to assess the effectiveness of their security programs, although this figure too is down by six percent from last year.  

Cloud protection is not solid

It’s almost impossible for a company to go digital without turning to the cloud. The Cisco report found that in 2020, over 83 percent of organizations will be managing (internally or externally) more than 20 percent of their IT infrastructure in the cloud. But protecting off-premises assets remains a challenge.

A hefty 41 percent of the surveyed organizations say their data centers are very or extremely difficult to defend from attacks. Thirty-nine percent report that they struggle to keep applications secure. Similarly, private cloud infrastructure is a major security issue for organizations; half of the respondents said it was very or extremely difficult to defend.

The most problematic data of all is data stored in the public cloud. Just over half (52 percent) of the respondents find it very or extremely challenging to secure. Another 41 percent of organizations find network infrastructure very or extremely challenging to defend.

Time-to-remediate scores most important

The Cisco study enquired about the after-effects of breaches using measures such as downtime, records, and finances. How much and how often are companies suffering from downtime? It turns out that organizations across the board issued similar answers. Large enterprises (10,000 or more employees) are more likely to have less downtime (between zero and four hours) because they typically have more technology, money, and people available to help respond and recover from the threats. Small to mid-sized organizations made up most of the five- to 16-hour recovery timespans. Potentially business-killing downtimes of 17-48 hours were infrequent among companies of all sizes.

After a security incident, rapid recovery is critical to keeping disruption and damages to a minimum. As a result, of all the metrics, time-to-remediate (also known as “time-to-mitigate”) scores are the ones most important when reporting to the C-suite or the company’s board of directors, the study concludes.

Automating security is not optional – it’s mandatory

The total number of daily security alerts that organizations are faced with is constantly growing. Three years ago, half of organizations had 5,000 or fewer alerts per day. Today, that number is only 36 percent. The number of companies that receive 100,000 or more alerts per day has risen to 17 percent this year, from 11 percent in 2017. Due to the greater alert volumes and the considerable resources needed to process them, investigation of alerts is at a four-year low: just under 48 percent of companies say they can keep up. That number was 56 percent in 2017, and it’s been shrinking every year since. The rate of legitimate incidents (26 percent) has remained more or less constant, which suggests that a lot of investigations are coming up with false positives.

Perhaps the biggest side-effect of this never-ending alert activity is cyber-security fatigue. Of the companies that report that it exists among their ranks, 93 percent of them receive more than 5,000 security warnings every day.

A sizeable majority (77 percent) of Cisco’s survey respondents expect to implement more automated security solutions to simplify and accelerate their threat response times. No surprise here. These days, they basically have no choice but to automate.

Vigilance pays dividends

Organizations that had 100,000 or more records affected by their worst security incident increased to 19 percent this year, up four percent from 2019. The study also found that a major breach can impact nine critical areas of a company, including operations and brand reputation, finances, intellectual property, and customer retention.

Three years ago, 26 percent of the respondents said their brand reputation had taken a hit from a security incident; this year, 33 percent said the same. This is why, to help minimize damages and recover fast, it’s key to incorporate crisis communications planning into the company’s broader incidence response strategy.

Finally, the share of survey respondents that reported that they voluntarily disclosed a breach last year (61 percent) is the highest in four years. The upshot is that overall, companies are actively reporting breaches. This may be due to new privacy legislation (GDPR and others), or because they want to maintain the trust and confidence of their customers. In all likelihood, it’s both.

In conclusion, the CISO Benchmark report shows a balance of positives and negatives. Organizations are looking to automate security processes to accelerate response times, security leadership is strengthening and setting metrics to improve overall protection, and more breaches are being identified and reported.  But there’s still work to be done to embed security into everything organizations do as they evolve their business.

About the author: Marc Wilczek is Chief Operating Officer at Link11, an IT security provider specializing in DDoS protection, and has more than 20 years of experience within the information and communication technology (ICT) space.

Cyberattacks a Top Concern for Gov Workers Tue, 03 Mar 2020 08:30:41 -0600

More than half of city and state employees in the United States are more concerned about cyberattacks than about other threats, a new study discovered.

Conducted by The Harris Poll on behalf of IBM, the survey shows that over 50% of city and state employees are more concerned about cyberattacks than natural disasters and terrorist attacks. Moreover, three in four government employees (73% of the respondents) are concerned about impending ransomware threats.

With over 100 cities across the U.S. reported as being hit with ransomware in 2019, the concern is not surprising. However, the survey suggests that ransomware attacks might be even more widespread, as 1 in 6 respondents admitted that their department was impacted.

Alarmingly though, despite the increase in the frequency of these attacks, only 38% of the surveyed government employees said they received general ransomware prevention training, and 52% said that budgets for managing cyberattacks haven’t seen an increase.

“The emerging ransomware epidemic in our cities highlights the need for cities to better prepare for cyber-attacks just as frequently as they prepare for natural disasters,” said Wendi Whitmore, VP of threat intelligence at IBM Security.

While 30% of the respondents believe their employer is not investing enough in prevention, 29% believe their employer is not taking the threat of a cyberattack seriously enough. More than 70% agreed that responses and support for cyberattacks should be on-par with those for natural disasters.

On the other hand, when asked about their ability to overcome cyberattacks, 66% said their employer is prepared, while 74% said they were confident in their own ability to recognize and prevent an attack.

“The data in this new study suggests local and state employees recognize the threat but demonstrate over confidence in their ability to react to and manage it. Meanwhile, cities and states across the country remain a ripe target for cybercriminals,” Whitmore also said.

The respondents also expressed concerns regarding the impending 2020 election in the U.S., with 63% admitting concern that a cyberattack could disrupt the process.

While half of them say they expect attacks in their community to increase in the coming year, six in ten even expect their workplace to be hit. Administrative offices, utilities and the board of elections were considered the most vulnerable.

Employees in education emerged as those less prepared to face a cyberattack, with 44% saying they did not receive basic cyber-security training, and 70% admitting to not receiving adequate training on how to respond to cyberattacks.

The survey was conducted online, from January 16 through February 3, 2020, among 690 employees who work for state or local government organizations in the United States. All respondents were adults over 18, employed full time or part time.

Related: Christmas Ransomware Attack Hit New York Airport Servers

Related: Ransomware Attack Hits Louisiana State Servers

Related: Massachusetts Electric Utility Hit by Ransomware

Hackers Target Online Gambling Sites Wed, 19 Feb 2020 20:10:35 -0600

Threat Actor Targets Gambling and Betting in Southeast Asia

Gambling and betting operations in Southeast Asia have been targeted in a campaign active since May 2019, Trend Micro reports. 

Dubbed DRBControl, the adversary behind the attacks is using a broad range of tools for cyber-espionage purposes, including publicly available and custom utilities that allow it to elevate privileges, move laterally in the compromised environments, and exfiltrate data. 

The intrusion begins with spear-phishing Microsoft Word files, with three different document versions identified: they embed an executable, a BAT file, and PowerShell code, respectively. Two very similar variations of the employed phishing content were observed.

The first two document versions execute the same payload onto the target system, and the third one is believed to be leading to the same piece of malware too. 

DRBControl employed two previously unknown backdoors in this campaign, but also used known malware families, such as the PlugX RAT, the Trochilus RAT, and the HyperBro backdoor, along with various custom post-exploitation tools, Trend Micro explains in a detailed report (PDF).

Both of the backdoors use DLL side-loading through the Microsoft-signed MSMpEng.exe, with the malicious code then injected into svchost.exe.
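
A defender can hunt for this side-loading pattern with a simple path heuristic. The sketch below is illustrative only (the trusted-directory list and the example paths, including the mpsvc.dll stand-in module name, are assumptions for the example): it flags DLLs loaded from a signed executable's own directory when that directory is not a standard system location.

```python
import ntpath  # parse Windows paths regardless of the analyst's host OS

# Directories where a signed binary like MSMpEng.exe legitimately lives
# (illustrative subset; a real hunt would use a fuller allow-list).
SYSTEM_DIRS = {
    r"c:\windows\system32",
    r"c:\program files\windows defender",
}

def sideload_suspects(exe_path, module_paths):
    """Heuristic: flag DLLs loaded from the executable's own directory
    when that directory is not a trusted system location -- the classic
    DLL side-loading pattern."""
    exe_dir = ntpath.dirname(exe_path).lower()
    if exe_dir in SYSTEM_DIRS:
        return []
    return [
        m for m in module_paths
        if m.lower().endswith(".dll")
        and ntpath.dirname(m).lower() == exe_dir
    ]

# A signed binary copied to a temp folder, loading a DLL placed beside it:
suspects = sideload_suspects(
    r"C:\Users\victim\AppData\Temp\MSMpEng.exe",
    [r"C:\Users\victim\AppData\Temp\mpsvc.dll",
     r"C:\Windows\System32\kernel32.dll"],
)
print(suspects)   # ['C:\\Users\\victim\\AppData\\Temp\\mpsvc.dll']
```

Fed with process and loaded-module telemetry from an EDR agent, a check like this surfaces trusted binaries running from, and loading libraries out of, unexpected directories.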

Written in C++, the first of the threat actor’s backdoors can bypass user account control (UAC), achieve persistence via a registry key, and send out information such as hostname, computer name, user privileges, Windows version, current time, and a campaign identifier.

A recent version of the malware was observed using Dropbox for command and control (C&C), with multiple repositories employed to store the infected machine’s information, store commands and post-exploitation tools, and store files exfiltrated from the machine. 

The Dropbox-downloaded backdoor has keylogging functions and can receive commands to enumerate drives and files, execute files, move/copy/delete/rename files, upload to Dropbox, execute commands, and run binaries via process hollowing. 

Also written in C++, the second backdoor too has UAC bypass and keylogging capabilities. The security researchers discovered an old version of this backdoor being delivered by a Word document from July 2017, suggesting that DRBControl has been active for a long time. 

Post-exploitation tools employed by the threat actor include a clipboard stealer, the EarthWorm network traffic tunnel, a public IP address retriever, the NBTScan tool for enumerating NetBIOS shares, a brute-force tool, and an elevation-of-privilege tool exploiting CVE-2017-0213. Multiple password dumpers, tools for bypassing UAC, and code loaders were also identified.

The use of the same domain in one of the backdoors, a PlugX sample, and Cobalt Strike allowed the researchers to link DRBControl to all three malware families. Additionally, the researchers identified connections with Winnti (via mutexes, domain names, and issued commands) and Emissary Panda (the HyperBro backdoor appears to be exclusive to Emissary Panda). 

This cyber-espionage campaign was targeted at gambling and betting companies in Southeast Asia, with no attacks in other parts of the world being confirmed to date. 

“The threat actor described here shows solid and quick development capabilities regarding the custom malware used, which appears to be exclusive to them. The campaign exhibits that once an attacker gains a foothold in the targeted entity, the use of public tools can be enough to elevate privileges, perform lateral movements in the network, and exfiltrate data,” Trend Micro concludes. 

Related: New APT10 Activity Detected in Southeast Asia

Copyright 2010 Respective Author at Infosec Island
When Data Is Currency, Who’s Responsible for Its Security? Tue, 11 Feb 2020 13:13:38 -0600 In a year that was all about data and privacy, it seems only fitting that we closed out 2019 in the shadow of a jumbo data leak where more than a billion records were found exposed on a single server.

Despite this being one of the largest data exposures from a single source in history, it didn’t cause nearly the public uproar that one might expect from a leak involving personal information such as names, email addresses, phone numbers, LinkedIn and Facebook profiles. Instead, this quickly became yet another case of consumer information being mishandled, impacting many of the same consumers that have been burned several times already by companies they trusted.

What’s different about this leak – and what should have given consumers and businesses alike pause – is the way in which this case highlights a more complex problem with data that exists today.

There’s no question that data is a very valuable asset. Organizations have done a great job figuring out how to capture consumer data over the last decade and are now beginning to use and monetize it. The problem is, that data can also be used in many different ways to inflict serious pain on victims in their personal and business lives. So, when that data goes through someone’s hands (business or individual), how much responsibility do they – and those up the lifecycle chain – have for where it ends up?

Beginning at the consumer level, users can opt out of sharing data and should do so at any chance they get if they are concerned about having their information exposed. The good news is that new regulations like the GDPR and CCPA are making this easier to do retroactively than ever before. The challenge is that the system isn’t perfect. Aliases and other databases can still be difficult to opt out of because although they may have information captured, errors like misspellings can prevent consumers from getting to their own data.

With this particular incident, we also caught a glimpse of the role that data enrichment firms, aggregators and brokers play in security. Although it didn’t come directly from their own servers, the exposed data was likely tied to enrichment firms People Data Labs (PDL) and OxyData. While several data brokers today are taking more responsibility and offering security and privacy education to their customers, it was alarming to see that neither data broker in this case could rule out the possibility that their data was mishandled by a customer. In fact, rather than pushing for a solution, OxyData seemed to shirk responsibility entirely when speaking with WIRED.

Data brokers need to own up to this challenge and look at better screening of their customers to ensure their use of data has valid purposes. A case study by James Pavur, DPhil student at Oxford University, underscored these failings in the system when he used GDPR Subject Access Requests to obtain his data from about 20 companies, many of which didn't ask for sufficient ID before sharing the information. He went on to try to get as much data as possible about his fiancée, finding he could access a range of sensitive data, including everything from addresses and credit card numbers to travel itineraries. None of this should be possible with proper screening in place.

Ultimately, whoever owns the server where the leak originated is the one that will be held legally and fiscally responsible. But should data brokers be emulating the shared responsibility model in use by cloud services like AWS? Either way, by understanding the lifecycle of data and taking additional responsibility upstream, we can begin to cut down on the negative impact when exposures like this inevitably occur.

About the author: Jason Bevis is the vice president of Awake Security Labs at Awake Security. He has extensive experience in professional services, cybersecurity MDR solutions, incident response, risk management and automation products.

SEC Shares Cybersecurity and Resiliency Observations Thu, 30 Jan 2020 14:09:56 -0600 The U.S. Securities and Exchange Commission (SEC) this week published a report detailing cybersecurity and operational resiliency practices that market participants have adopted. 

The 10-page document (PDF) contains observations from the SEC's Office of Compliance Inspections and Examinations (OCIE) that are designed to help other organizations improve their cybersecurity stance.

OCIE examines SEC-registered organizations such as investment advisers, investment companies, broker-dealers, self-regulatory organizations, clearing agencies, transfer agents, and others.

Through its reviews, OCIE has observed approaches that some organizations have taken in areas such as governance and risk management, access rights and controls, data loss prevention, mobile security, incident response and resiliency, vendor management, and training and awareness. 

Observed risk management and governance measures include senior level engagement, risk assessment, testing and monitoring, continuous evaluation and adapting to changes, and communication. Practices observed in the area of vendor management include establishing a program, understanding vendor relationships, and monitoring and testing. 

Strategies related to access rights and controls that were observed include access management and access monitoring. Utilized data loss prevention measures include vulnerability scanning, perimeter security, patch management, encryption and network segmentation, and insider threat monitoring, among others. 

In terms of mobile security, organizations adopted mobile device management (MDM) applications or similar technology, implemented security measures, and trained employees. Strategies for incident response include inventorying core business operations and systems, and assessing risk and prioritizing business operations. 

By sharing these observations, the SEC hopes to encourage organizations to review their practices, policies and procedures and to assess their level of preparedness. 

The presented measures should help any organization become more secure, OCIE says, while admitting that “there is no such thing as a ‘one-size fits all’ approach.” In fact, it also points out that not all of these practices may be appropriate for all organizations. 

“Through risk-targeted examinations in all five examination program areas, OCIE has observed a number of practices used to manage and combat cyber risk and to build operational resiliency. We felt it was critical to share these observations in order to allow organizations the opportunity to reflect on their own cybersecurity practices,” Peter Driscoll, Director of OCIE, said. 

Related: Cyber Best Practices Requires a Security-First Approach

Related: Best Practices for Evaluating and Vetting Third Parties

Related: Perception vs. Reality in Federal Government Security Practices

What Does Being Data-Centric Actually Look Like? Fri, 17 Jan 2020 09:46:22 -0600 “Data-centric” can sometimes feel like a meaningless buzzword. While many companies are vocal about the benefits of this approach, in reality the term is not widely understood.

One source of confusion is that many companies have implemented an older approach – that of being “data-driven” – and just called this something else. Being data-centric is not the same as being data-driven. And, being data-centric brings new security challenges that must be taken into consideration. 

A good way of defining the difference is to talk about culture. In Creating a Data-Driven Organization, Carl Anderson starts off by saying, “Data-drivenness is about building tools, abilities, and, most crucially, a culture that acts on data.” In short, being data-driven is about acquiring and analyzing data to make better decisions.

Data-centric approaches build on this but change the managerial hierarchy that informs it. Instead of data teams collecting data, management teams making reports about it, and then CMOs taking decisions, data centrism aims to give everyone (or almost everyone) direct access to the data that drives your business. In short, creating a data-driven culture is no longer enough: instead, you should aim to make data the core of your business by ensuring that everyone is working with it directly.

This is a fairly high-level definition of the term, but it has practical implications. Implementing a data-centric approach includes the following processes.

1. Re-Think Your Organizational Structure

Perhaps the most fundamental aspect of data-centric approaches is that they rely on innovative (and sometimes radical) management structures. As Adam Chicktong put it a few years ago, these structures are built around an inversion of traditional hierarchies: instead of decisions flowing from executives through middle management to data staff, in data-centric approaches everyone’s “job is to empower their team to do their job and better their career”.

This has many advantages. In a recent CMO article, Maile Carnegie talked about the ‘frozen middle’, where middle management is inherently structured to resist change. By looking closely at your hierarchy and identifying departments and positions likely to resist change, you’ll be able to streamline the structure to allow transformation to filter through the business more easily. As she puts it, “Increasingly, most businesses are getting to a point where there are people in their organization who are no longer experts in a craft, and who have graduated from doing to managing and basically bossing other people around and shuffling PowerPoints.”

2. Empowering the Right People

Once these novel managerial structures are in place, the focus must necessarily shift toward empowering, rather than managing, staff. Effectively employing a data-centric approach means giving the right people access to the data that underpins your business, but also allowing them to affect the types of data you are collecting. 

Let’s take access first. At the moment, many businesses (and even many of those that claim to be data-driven) employ extremely long communicative chains to work with the data they collect. IT staff report their findings, ultimately, to the executive level, who then disseminate this to marketing, PR, risk and HR departments. One of the major advantages of new data infrastructures, and indeed one of the major advantages of cloud storage, is that you can grant these groups direct access to your cloud storage solution. 

This not only cuts down the time it takes for data to flow to the "correct" teams, making your business more efficient. If implemented skillfully, it can also be a powerful way of eliciting input from them on what kinds of data you should be collecting. Most businesses would agree, I think, that executives don't always have a granular appreciation for the kind of data that their teams need. Empowering these teams to drive novel forms of data collection short-circuits these problems by encouraging direct input into data structures.

3. Process Not Event

Third, transitioning to a data-centric approach entails more than a change in managerial structure, responsibility, and security. At the broadest level, it requires a change in the way that businesses think about development.

Nowadays, running an online business is not as simple as identifying a target audience, creating a website, and waiting to see if it is effective. Instead, the breakdown of the previously rigid divide between executive, marketing, and data teams means that every business decision should be seen as a process, not an event.

4. Security and Responsibility

Ultimately, it should also be noted that changing your managerial structure in this way, and empowering teams to take control of your data collection processes, also raises significant problems when it comes to security.

At a basic level, it’s clear that dramatically increasing the number of people with access to data systems simultaneously makes these systems less secure. For that reason, implementing a data-centric approach must also include the implementation of extra security measures and tools. 

These include managerial systems to ensure responsible data retention, but also training for staff who have not worked with data before, and who may not know how to take basic security steps like using secure browsers and connecting to the company network through a VPN when using public WiFi. On the other hand, data centrism can bring huge benefits to the overall security of organizations. 

Alongside the approach’s contribution to marketing and operational processes, data-centric security is also now a field of active research. In addition, the capability to share emerging threats with almost everyone in your organization greatly increases the efficacy of your cybersecurity team.

Data-centric approaches are a powerful way of increasing the adaptability and profitability of your business, but you should also note that becoming truly data-centric involves quite radical changes in the way that your business is organized. Done correctly, however, this transition can offer huge advantages for almost any business.

About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.

The Big 3: Top Domain-Based Attack Tactics Threatening Organizations Fri, 17 Jan 2020 09:37:38 -0600 Nowadays, businesses across all industries are turning to owned websites and domains to grow their brand awareness and sell products and services. With this dominance in the e-commerce space, securing owned domains and removing malicious or spoofed domains is vital to protecting consumers and businesses alike. This is especially important because domain impersonation is an increasingly popular tactic among cybercriminals. One example of this is ‘look-alike’ URLs that trick customers by mimicking brands through common misspellings, typosquatting and homoglyphs. With brand reputation and customer security on the line, investing in domain protection should be a top priority for all organizations.

Domain-based attacks are so popular, simply because of how lucrative they can be. As mentioned above, attackers often buy ‘look-alike’ domains in order to impersonate a specific brand online. To do this, bad actors can take three main approaches: copycatting, piggybacking and homoglyphs/typosquatting. From mirroring legitimate sites to relying on slight variations that trick an untrained eye, it’s important to understand these top tactics cybercriminals use so you can defend your brand and protect customers. Let’s explore each in more detail.

1. Copycatting Domains

One tactic used by bad actors is to create a site that directly mirrors the legitimate webpage. Cybercriminals do so by registering the brand’s name under a top-level domain (TLD) that the real domain isn’t using, or by appending multiple TLDs to a domain name. With these types of attacks, users are more likely to be tricked into believing they are interacting with the legitimate organization online. This simplifies the bad actor’s journey, as the website appears to be legitimate, and will be more successful than an attack using a generic, throwaway domain. To amplify these efforts, bad actors will also use text and visuals that customers would expect to see on a legitimate site, such as the logo, brand name, and products. This sense of familiarity and trust puts potential victims at ease and makes them less aware of the copycat’s red flags. 
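As a rough illustration of how defenders can get ahead of this tactic, the sketch below enumerates copycat-style candidates for a hypothetical brand name (the brand and the TLD lists are assumptions for illustration, not data from any real registry):

```python
# Enumerate copycat-style domain candidates for a hypothetical brand:
# alternate TLDs the brand may not have registered itself, plus
# appended-TLD variants such as examplebrand.com.co.
BRAND = "examplebrand"

ALT_TLDS = ["net", "org", "co", "io", "shop"]  # alternate TLDs
APPENDED = ["com.co", "com.io"]                # appended multi-part TLDs

candidates = [f"{BRAND}.{tld}" for tld in ALT_TLDS + APPENDED]

for domain in candidates:
    print(domain)
```

A monitoring workflow could then feed such candidates into WHOIS or passive-DNS lookups to see which of them have actually been registered by third parties.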

2. Piggybacking Name Recognition

Another approach attackers utilize is registering spoofed or look-alike domains that help them appear credible by piggybacking off the name recognition of established brands. These domains may be either parked or serving live content to potential victims. Parked domains are commonly leveraged to generate ad revenue, but can also be used to rapidly serve malicious content. They are also often used to distribute other brand-damaging content, like counterfeit goods.

3. Tricking Victims with Homoglyphs and Typosquatting

This last tactic has two main methods -- typosquatting and homoglyphs -- and tricks unsuspecting internet users in places where they are unlikely to look or to notice they are being spoofed. 

  • Typosquatting involves the use of common URL misspellings that a user is either likely to make of their own accord or unlikely to notice at all, e.g. adding a letter to the organization’s name. If an organization has not registered domains that are close to its legitimate domain name, attackers will often purchase them to take advantage of typos. Attackers may also infringe upon trademarks by using legitimate graphics or other intellectual property to make malicious websites appear legitimate.
  • With homoglyphs, the basic principles of domain spoofing remain the same, but an attacker may substitute a look-alike character from an alphabet other than the Latin alphabet -- e.g., the Cyrillic “а” for the Latin “a.” Although these letters look identical, their Unicode values are different and, as such, they will be processed differently by the browser. With over 100,000 Unicode characters in existence, bad actors have an enormous opportunity. Another benefit of this type of attack is that it can fool traditional string matching and anti-abuse algorithms. 
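To make the Cyrillic/Latin point concrete, here is a minimal Python sketch of a homoglyph check. The confusables table is a tiny illustrative subset (real tooling uses the full Unicode confusables data), and the domain names are hypothetical:

```python
import unicodedata

# The two characters render identically but are distinct code points.
print(unicodedata.name("a"))   # LATIN SMALL LETTER A (U+0061)
print(unicodedata.name("а"))   # CYRILLIC SMALL LETTER A (U+0430)

# Tiny illustrative confusables map -- far from complete.
CONFUSABLES = {"а": "a", "е": "e", "о": "o", "р": "p", "с": "c"}

def skeleton(domain: str) -> str:
    """Map known look-alike characters back to their Latin twins."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in domain)

def is_homoglyph_spoof(candidate: str, brand: str) -> bool:
    """True if candidate differs from brand only by confusable characters."""
    return candidate != brand and skeleton(candidate) == brand

print(is_homoglyph_spoof("exаmple.com", "example.com"))  # True (Cyrillic а)
print(is_homoglyph_spoof("example.com", "example.com"))  # False
```

This is also why internationalized domain names are encoded as Punycode (`xn--...`) on the wire: naive string matching on the rendered Unicode form misses the substitution unless it is mapped back to a common skeleton, as above.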

Why domain protection is necessary

Websites are a brand’s storefront in the digital age, as they are often the first source of engagement between a consumer, partner or prospective employee and your organization. Cyberattackers see this as an opportunity to capitalize on that interaction. If businesses don’t take this problem seriously, their brand image, customer loyalty and ultimately financial results will be at risk. 

While many organizations monitor domains related to their brand in order to ensure that their brand is represented in the way it is intended, this is challenging for larger organizations composed of many subsidiary brands. Since these types of attacks are so common and the attack surface is so large, organizations tend to feel inundated with alerts and incidents. As such, it is crucial that organizations proactively and constantly monitor for domains that may be pirating their brand, products, trademarks or other intellectual property.

About the author: Zack Allen is both a security researcher and the director of threat intelligence at ZeroFOX. Previously, he worked in threat research for the US Air Force and Fastly.

Security Compass Receives Funding for Product Development and Expansion Fri, 17 Jan 2020 08:39:00 -0600 Toronto, Canada-based Security Compass has received additional funding from growth equity investment firm FTV Capital. The amount has not been disclosed, indicating that it is likely to be on the smaller side.  

According to the security firm, the purpose of the cash injection is to allow it to enhance its product portfolio and accelerate a planned global expansion.  

The company was founded by Nish Bhalla in 2005. Former COO Rohit Sethi becomes the new CEO. Bhalla remains on the Board, and is joined by Liron Gitig and Richard Liu from FTV Capital.  

Long-serving Sethi was Security Compass' first hire, and was an integral part of the creation of the company's SD Elements platform -- now the focus of the firm's operations. SD Elements helps customers put the 'Sec' into DevOps without losing DevOps' development agility.   

"The strong trends towards agile development in DevOps," he says, "increased focus on application security and on improving risk management are on course for collision. Security Compass is uniquely positioned to help organizations address the inherent conflicts. With FTV's investment, we're poised to accelerate our growth while maintaining the culture of excellence we've worked so hard to build."  

The worldwide growth in security and privacy regulations, such as GLBA, FedRAMP, GDPR, CCPA and many others, requires that security is built into the whole product development lifecycle. "Security Compass' SD Elements solution," says FTV Capital partner Gitig, "is uniquely focused on the software stack, enabling DevOps at scale by helping enterprises develop secure, compliant code from the start."  

He continued, "SD Elements provides both engineering and non-engineering teams with a holistic solution for managing software security requirements in an efficient and reliable manner, alleviating meaningful friction in the software development life cycle, accelerating release cycles and improving business results. We are excited to work with the Security Compass management team in its next phase of global growth as a trusted information security partner."  

Security Compass claims more than 200 enterprise customers in banks, federal government and critical industries use its solutions to manage the risk of tens of thousands of applications.  

Related: Chef Launches New Version for DevSecOps Automated Compliance 

Related: ChatOps is Your Bridge to a True DevSecOps Environment 

Related: Shifting to DevSecOps Is as Much About Culture as Technology and Methodology   
