Put Your S3 Buckets to the Test to Ensure Cloud Fitness

Friday, October 13, 2017

Tim Prendergast


A striking aspect of many of the headline-grabbing data breaches is the relative ease with which attackers got to sensitive data. We picture hackers running wildly complex algorithms and plotting sophisticated schemes, but when you encounter a data repository named "Access Keys" that doesn't require a password, the job turns out to be pretty easy.

AWS S3 buckets are getting the lion's share of the blame for many of these breaches. But like any asset, S3 buckets simply operate according to how they're configured and managed. And therein lies a problem representative of so many of the vulnerabilities cloud users face. Misconfigurations, poorly constructed access policies, lack of controls: these are just some of the issues that can open a cloud environment to bad actors, and all of that configuration is directed by humans, with the S3 buckets just doing what they're told. In an environment as dynamic as a typical enterprise cloud, humans can't keep track of every aspect of every asset on their own. For these assets to function optimally and securely, organizations have to apply active management along with continuous scrutiny.

Within a cloud environment, many factors are at play repeatedly and simultaneously. IT teams have to be both proactive and reactive to deal effectively with vulnerabilities and attacks. To support those efforts, they rely on processes and tools that prevent, monitor, and remediate, all in a constant effort to thwart risk. While the potential for incoming issues is massive, the work required to mitigate that risk can be fairly simple; but like exercise and regular check-ups, it has to be done regularly and with purpose.

We know that AWS default settings tend to be fairly permissive, and part of the problem in so many breaches traces back to that permissive nature. But no customer should operate something so important to their environment without customizing it to their own needs. And whatever their infrastructure needs, the privacy of their data and that of their customers requires that they put their S3 buckets through fitness tests to ensure they are aware of, and in control of, how those buckets are functioning. Enterprises that want to effectively secure S3 buckets must recognize the liability involved if those buckets are breached. There are some key aspects to how S3 objects and buckets operate, and security teams should be familiar with AWS settings and functionality before implementing a security plan. These include access to buckets, user rights within buckets, and versioning and logging capabilities.

Access to your stored data is the logical place to start. Settings in AWS determine who can view lists of your S3 buckets, and who can see and edit your Access Control Lists (ACLs). If your buckets have those settings configured to give "All AWS Users" access, you are setting yourself up to be compromised. With global ACL permissions on, at worst you allow anyone to grant themselves wide permissions to your content; at best, you hand them a detailed treasure map of which buckets may contain interesting data.
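One way to exercise this check is to scan a bucket's ACL document for grants to S3's predefined "everyone" groups. The sketch below works on the ACL structure the S3 GetBucketAcl API returns (a dict of Grants); the sample ACL itself is hypothetical data, not a real bucket.

```python
# Flag public grants in an S3 bucket ACL. The group URIs below are the
# well-known identifiers S3 uses for "All Users" and "Any Authenticated
# AWS User"; a grant to either one is effectively world access.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return (grantee URI, permission) pairs that are open to the world."""
    found = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
            found.append((grantee["URI"], grant.get("Permission")))
    return found

# Hypothetical ACL: one private owner grant, one world-readable grant
sample_acl = {
    "Owner": {"DisplayName": "example-owner"},
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ],
}

for uri, perm in public_grants(sample_acl):
    print(f"WARNING: public grant of {perm} to {uri}")
```

In practice you would feed this the live ACL for each bucket (for example, the response of a `get_bucket_acl` call) and alert on any non-empty result.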

At the same time, while the breaches that make the news are all about hackers extracting data, hackers putting data into your S3 buckets can be equally dangerous to your organization. If the global PUT permission is enabled on any of your S3 buckets, anyone can place content in them. This may seem harmless, but someone with malicious intent could plant content that is harmful or embarrassing to your business. It is best to allow only authorized users and systems to PUT to your S3 buckets. With the right permissions, a bad actor can also apply a global delete to your repository and wipe all the data it contains. Requiring multi-factor authentication (MFA) for that capability helps ensure that CloudTrail logs and other sensitive data cannot be removed by an unauthorized user.
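As a sketch of the MFA idea, a bucket policy along these lines denies object deletion to any caller who did not authenticate with MFA. The bucket name is a placeholder; note that versioned buckets also offer a separate "MFA Delete" setting that achieves a similar end.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteWithoutMFA",
      "Effect": "Deny",
      "Principal": "*",
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

The `BoolIfExists` form matters: it makes the deny apply both when the MFA key is present and false, and when it is absent entirely (as it is for anonymous requests).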

AWS customers should also be aware that versioning of S3 objects is not enabled by default. Versioning is incredibly important: in the event an object is overwritten or deleted, versioning keeps a prior instance of the object available to "roll back" to as a method of recovery. Additionally, with audit logging enabled on your S3 buckets, you can get the details of all bucket activity. The logs are an important tool when troubleshooting issues or investigating an incident. Logging only captures activity from the moment it is turned on, so enable it as you set up your S3 buckets if you wish to keep tabs on bucket and object activity.
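As a sketch, both settings can be applied through the S3 API; the bucket names below are placeholders. Versioning is switched on with a configuration like:

```json
{ "Status": "Enabled" }
```

passed to the `put-bucket-versioning` call, and server access logging with a configuration like:

```json
{
  "LoggingEnabled": {
    "TargetBucket": "example-log-bucket",
    "TargetPrefix": "s3-access-logs/"
  }
}
```

passed to `put-bucket-logging`. The target should be a separate bucket set up to receive log deliveries, so that an attacker who compromises the source bucket cannot also tamper with its logs.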

Advice must be followed by action in order to become, and remain, fit in the cloud. These measures establish a baseline of security for your S3 buckets, but because buckets store sensitive data, they will always be a target. Continuous awareness through automated monitoring provides the control needed to identify and fix vulnerabilities, and the right layer of oversight to maintain safe and effective business operations.
