1) Enable CloudTrail and AWS Config on your buckets
Why:
- Have you ever wondered who accessed a specific file or who changed the permissions in your S3 bucket? Or have you ever received a higher-than-expected S3 bill? With CloudTrail you can find out. Note: you’ll need S3 data events turned on in CloudTrail ahead of time in order to get object-level logs. I’d also recommend sending these logs to an S3 bucket with a lifecycle policy enabled, so that logs older than a period you choose are deleted automatically, e.g. delete logs after 120 days (there’s a boto3 sketch below the How links).
- AWS Config is an amazing service. Once enabled, it will automatically back up every S3 bucket policy and configuration you have set up. It then builds a timeline of all changes made to your S3 buckets, easily telling you who made what change and when! Isn’t that cool?! And if you or someone else ever accidentally deletes a bucket policy from one of your buckets, you can restore it with one click from the AWS Config console!
How:
- Information on enabling AWS CloudTrail with data events can be found here: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html
- Information on enabling AWS Config can be found here: https://docs.aws.amazon.com/config/latest/developerguide/gs-console.html
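If you’d rather script this than click through the console, here is a minimal boto3 sketch of both pieces: turning on S3 data events for one bucket on an existing trail, and expiring the log files after 120 days. The trail and bucket names are placeholders, and it assumes the trail already exists and your credentials can modify it.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")
s3 = boto3.client("s3")

TRAIL_NAME = "my-trail"            # hypothetical existing trail
DATA_BUCKET = "my-data-bucket"     # bucket whose object-level activity you want logged
LOG_BUCKET = "my-cloudtrail-logs"  # bucket that receives the CloudTrail log files

# Turn on S3 data (object-level) events for one bucket on the existing trail.
cloudtrail.put_event_selectors(
    TrailName=TRAIL_NAME,
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": [f"arn:aws:s3:::{DATA_BUCKET}/"]}
            ],
        }
    ],
)

# Expire CloudTrail log files after 120 days so the log bucket doesn't grow forever.
s3.put_bucket_lifecycle_configuration(
    Bucket=LOG_BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-cloudtrail-logs-after-120-days",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Expiration": {"Days": 120},
            }
        ]
    },
)
```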
Additionally: If you want an even easier way to digest S3 and CloudFront access logs (access logs are different from CloudTrail logs), take a look at this wonderful service; it speaks for itself: https://www.s3stat.com/
2) Do not provide anonymous public access to your bucket. Keep your buckets private as much as possible
Why:
- When you allow anonymous access, you lose the ability to know who or what accessed your bucket.
- You can and will rack up large bandwidth costs on your S3 bill. This is because malicious actors can purposefully re-download your large objects over and over again at your expense.
What should I do:
- In the S3 console you’ll see an orange “Public” tag pointing out any public buckets. Audit these S3 buckets and figure out their use cases. Consider putting CloudFront or Cloudflare in front of your bucket to protect against unauthorized access. Using a CDN gives you advanced security controls such as referrer restrictions, single sign-on, and more. You’ll also save money, as requests will be cached by the CDN; serving S3 through CloudFront will actually lower your bandwidth bill and speed up your web requests.
- Once you have your bucket set up with a CDN and with better authentication, enable S3 Block Public Access. This is a great feature that comes enabled by default when new S3 buckets are created in the S3 web console. It prevents public ACLs and public bucket policies from being applied to your bucket.
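If you want to script the lockdown, here is a minimal boto3 sketch that turns on all four Block Public Access settings; the bucket name is a placeholder, and it assumes your credentials are allowed to change the bucket’s configuration.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder bucket name

# Turn on all four Block Public Access settings for the bucket.
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject requests that add public ACLs
        "IgnorePublicAcls": True,       # ignore any public ACLs that already exist
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict access when the policy is public
    },
)
```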
More information about securing your files in S3 can be found here: https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/
3) Use ACLs sparingly
Why:
- ACLs are extremely difficult to audit. Imagine you have a bucket with over 10,000 objects in it; any single one of those objects could have its own public-read ACL. There is no easy way to know which objects have a public ACL unless you look at the ACLs on every single object, and S3 does not have an API that will list all of the public objects in your bucket.
- They are confusing and unwieldy. You cannot use them with IAM user names, and you cannot reliably use them with email addresses.
- ACLs have been around since S3’s creation. They are an old and unscalable way of granting access to objects and your bucket.
What should I do?
- Update your code / workflows and audit your objects to ensure they aren’t being given public ACLs.
- Utilize S3 Block Public Access to override public ACLs on objects and to prevent new public ACLs from being set.
- Utilize S3 bucket policies for cross-account access as needed, and IAM policies and roles for everything else. If you need to share your S3 bucket with a different AWS account, set up an IAM role that the other account can assume. That way you do not need to modify S3 bucket policies, and you can audit IAM roles much more easily than ACLs or even bucket policies.
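Auditing every object ACL by hand isn’t practical, so here is a minimal boto3 sketch of the kind of audit described above. The bucket name is a placeholder, and note that it makes one GetObjectAcl call per object, so it can be slow (and cost a little in request charges) on very large buckets.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder bucket name

# Grantee URIs that mean "public" in an S3 ACL.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

# Walk every object and flag any grant made to a public group.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        acl = s3.get_object_acl(Bucket=BUCKET, Key=obj["Key"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GRANTEES:
                print(f"PUBLIC ACL: {obj['Key']} ({grant['Permission']})")
```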
4) Utilize S3 versioning
Why:
- Nothing is worse when using a computer than losing your work, especially the files, photos, and data that you spent quality time building. To make matters worse, work that you thought was backed up but wasn’t stings even more.
- S3 versioning will protect against typical accidental deletions made in the web console, as well as from many apps that interact with S3. When versioning is turned on, deleting an object places a delete marker on top of it. In basic terms, the delete marker makes a normal GET return a 404 as if the object no longer exists, while the previous version stays intact. You can still access the object by making a versioned GET, or you can simply remove the delete marker and GET the object as you normally would.
What / How:
- Important: Before enabling versioning on a bucket and leaving it alone, strongly consider implementing a lifecycle policy that removes previous versions. This will help prevent piling up too many old versions of an object; you are still billed for storing those older versions, so watch out!
- To learn how to enable versioning on a bucket, check out this great guide from AWS: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-versioning.html
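Here is a minimal boto3 sketch that ties the two recommendations together: enable versioning, then add a lifecycle rule that expires noncurrent versions. The bucket name and the 90-day window are placeholders you’d adjust to your own retention needs.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-bucket"  # placeholder bucket name

# Turn on versioning for the bucket.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Pair versioning with a lifecycle rule so old (noncurrent) versions
# are cleaned up automatically instead of billing you forever.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-noncurrent-versions-after-90-days",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
            }
        ]
    },
)
```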
5) S3 Object Lock
What / Why:
S3 Object Lock is a great feature, but it can be difficult to fully understand. I would simplify it into this: there are two main ways to use it. The first is compliance mode, which gives you immutable object locking, meaning nobody, not even AWS, can modify or permanently delete your object’s data. You can still create delete markers on top of the data, but you cannot permanently delete it until the retention period has been reached. The second is governance mode. This still locks your objects, BUT power users or roles with admin permissions that are granted the governance-bypass permission (s3:BypassGovernanceRetention) can override the lock and permanently delete the data.
Object lock is a fantastic way to have peace of mind about sensitive data not being lost or tampered with. For example, if you don’t have object lock or versioning enabled, then even if you never grant delete permissions to any of your IAM users, a clever user can still cause data loss by overwriting objects, i.e. uploading new objects under existing names.
Popular use cases for object lock include:
- Protecting tax and financial documents from being tampered with or lost
- Protecting evidence used in case law.
- E.g. you can set up an object lock rule that prevents the permanent deletion of an object or many objects for 10 years.
How:
IMPORTANT: I strongly recommend testing S3 Object Lock with a new bucket and some test files before using it on your production buckets. You definitely don’t want to hard-lock yourself out of deleting large objects for 5+ years, because you’ll be committing to paying S3 to store them for that entire period!
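In that spirit, here is a minimal boto3 sketch of a throwaway test setup. It assumes a brand-new, globally unique bucket name in us-east-1 (other regions also need a CreateBucketConfiguration), and it uses governance mode with a one-day retention so you can clean up after the experiment.

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")
BUCKET = "my-object-lock-test-bucket"  # hypothetical throwaway bucket

# Object Lock can only be enabled when the bucket is created;
# enabling it also turns on versioning for the bucket.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Upload a test file with a short governance-mode retention, so anyone
# with the bypass permission can still remove it while you experiment.
s3.put_object(
    Bucket=BUCKET,
    Key="test.txt",
    Body=b"object lock test",
    ObjectLockMode="GOVERNANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=1),
)
```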
More information about object lock can be found here: https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html
Should you have any questions about this please drop a comment below or send me a message using the contact form.
Happy trails.
Did you know: S3 was AWS’ first publicly available service?! It had a soft launch in the spring of 2006 and was shortly followed by the launch of SQS, which guarantees at-least-once delivery for standard queue messages.