I want to enable public access to objects in my MediaStore container, something like this in S3:
...
s3Client.createBucket(CreateBucketRequest
        .builder()
        .bucket(bucketName)
        .acl(BucketCannedACL.PUBLIC_READ)
...
To enable public access, you need to add a stanza to the container policy. The additional stanza will resemble the example below; replace the region, AWS account number, and container name in the Resource ARN with your own.
{
    "Sid": "PublicReadOverHttps",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
        "mediastore:GetObject",
        "mediastore:DescribeObject"
    ],
    "Resource": "arn:aws:mediastore:myregion:9999999999:container/MyContainer/*",
    "Condition": {
        "Bool": {
            "aws:SecureTransport": "true"
        }
    }
}
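If you prefer to apply the policy programmatically, here is a minimal boto3 sketch (the container name and ARN are the placeholders from above; note that put_container_policy replaces the entire policy, so include any existing statements as well):

import json
import boto3

mediastore = boto3.client("mediastore", region_name="myregion")

# Full container policy containing the public-read statement from above.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadOverHttps",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["mediastore:GetObject", "mediastore:DescribeObject"],
            "Resource": "arn:aws:mediastore:myregion:9999999999:container/MyContainer/*",
            "Condition": {"Bool": {"aws:SecureTransport": "true"}},
        }
    ],
}

mediastore.put_container_policy(
    ContainerName="MyContainer",
    Policy=json.dumps(policy),
)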
I have multiple IAM roles (up to 100) that are required to use this KMS key.
Instead of listing all the IAM roles in the KMS key policy, is there any way I can use a wildcard or a condition?
{
    "Sid": "Enable IAM Role",
    "Effect": "Allow",
    "Principal": {
        "AWS": [
            "arn:aws:iam::xxxxxxxxxx:role/a1",
            "arn:aws:iam::xxxxxxxxxx:role/a2",
            "arn:aws:iam::xxxxxxxxxx:role/a3",
            ............
            "arn:aws:iam::xxxxxxxxxx:role/a100"
        ]
    },
    "Action": "kms:*",
    "Resource": "*"
}
I tried using arn:aws:iam::xxxxxxxxxx:root, and also a condition with StringLike on the source ARN, "arn:aws:iam::xxxxxxxxxx:role/a*",
but none of them work.
Is there any alternative to listing every IAM role?
This will help you: keep the principal as a wildcard and restrict it with an aws:PrincipalArn condition instead of listing each role:
{
    "Sid": "Enable IAM Role",
    "Effect": "Allow",
    "Principal": {
        "AWS": "*"
    },
    "Action": "kms:*",
    "Resource": "*",
    "Condition": {
        "ArnLike": {
            "aws:PrincipalArn": "arn:aws:iam::xxxxxxxxxx:role/a*"
        }
    }
}
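A sketch of applying it with boto3, in case it helps (the key ID is a placeholder; "default" is the only valid policy name, and remember to keep your key-administrator statements in the policy too):

import json
import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        # NOTE: keep your existing admin statements here as well,
        # otherwise you can lock yourself out of the key.
        {
            "Sid": "Enable IAM Role",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "kms:*",
            "Resource": "*",
            "Condition": {
                "ArnLike": {"aws:PrincipalArn": "arn:aws:iam::xxxxxxxxxx:role/a*"}
            },
        },
    ],
}

kms.put_key_policy(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID
    PolicyName="default",
    Policy=json.dumps(key_policy),
)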
I am trying to set up cross-account Postgres RDS IAM authentication. My use case is Python code, containerized and executed by AWS Batch on top of the ECS engine, that connects to a Postgres RDS instance in another AWS account. I tried to follow the route (a single role in the account where the DB connection originates) that is described here, but the connection fails with:
2020-06-12 19:41:10,363 - root - ERROR - Error reading data from data DB: FATAL: PAM authentication failed for user "db_user"
I also found this one and tried to set up something similar (a role per respective account, but with no EC2 instance as the connection source). Unfortunately it failed with the same error. Does anyone know of any other AWS documentation that might match my use case?
I managed to sort it out with the help of AWS support folks. These are the steps I had to take:
Add the following policy to the IAM role applied to the AWS Batch job (AWS account A):
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::ACCOUNT_B_ID:role/ecsTaskExecutionRole"
    }
}
With the following trust policy:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
In the AWS account that hosts the RDS instance (AWS account B), add an IAM role with the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:<region>:ACCOUNT_B_ID:dbuser:{rds-resource-id}/{batch-user}"
            ]
        }
    ]
}
With the following trust policy:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_A_ID:root",
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
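As an aside, the {rds-resource-id} in the rds-db:connect ARN is the instance's DbiResourceId, not its name; a quick boto3 sketch to look it up (the instance identifier is a placeholder):

import boto3

rds = boto3.client("rds", region_name="<region>")

# DbiResourceId looks like "db-ABCDEFGHIJKLMNOPQRSTUVWXY"
response = rds.describe_db_instances(DBInstanceIdentifier="my-postgres-instance")
print(response["DBInstances"][0]["DbiResourceId"])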
Update the code that is executed within the AWS Batch container:
import boto3

# Assume the role in account B that is allowed to call rds-db:connect
sts_client = boto3.client('sts')
assumed_role_object = sts_client.assume_role(
    RoleArn="arn:aws:iam::ACCOUNT_B_ID:role/ROLE_TO_BE_ASSUMED",
    RoleSessionName="AssumeRoleSession1"
)
credentials = assumed_role_object['Credentials']

# Create an RDS client that uses the temporary credentials of the assumed role
client = boto3.client(
    'rds',
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken'],
    region_name=REGION
)

# Generate a short-lived IAM authentication token to use as the DB password
token = client.generate_db_auth_token(DBHostname=ENDPOINT, Port=PORT, DBUsername=USR, Region=REGION)
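And, to complete the picture, a sketch of opening the connection with the generated token (psycopg2 and the database name are assumptions on my part; note that RDS IAM authentication only works over SSL):

import psycopg2  # assuming psycopg2 as the Postgres driver

conn = psycopg2.connect(
    host=ENDPOINT,
    port=PORT,
    user=USR,
    password=token,       # the IAM auth token is used as the password
    dbname="mydatabase",  # placeholder database name
    sslmode="require",    # IAM auth tokens are only accepted over SSL
)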
I have a web application that gets videos from an S3 bucket. That bucket has a policy that only allows access from certain domains. I now need an Ionic app to access the same bucket; is there any way I can add this to the policy?
Here is the policy as I have it now:
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests originating from www.example.com.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*.mp4",
            "Condition": {
                "StringNotLike": {
                    "aws:Referer": [
                        "https://www.example.com/*"
                    ]
                }
            }
        }
    ]
}
I've tried adding file://* to the URL array, but it doesn't work.
You can use the User-Agent header to identify requests coming from your app.
Here is the bucket policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucket-name/*.mp4",
            "Condition": {
                "StringLike": {
                    "aws:UserAgent": "*Any name"
                }
            }
        }
    ]
}
Also, you have to add this to your config.xml:
<preference name="AppendUserAgent" value="Any name" />
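A quick way to sanity-check the policy from outside the app, sketched with Python requests (the object URL is a placeholder):

import requests

url = "https://bucket-name.s3.amazonaws.com/video.mp4"  # placeholder object URL

# Without the custom User-Agent the Allow statement does not match: expect 403.
print(requests.get(url, headers={"User-Agent": "SomethingElse"}).status_code)

# With a User-Agent matching "*Any name": expect 200.
print(requests.get(url, headers={"User-Agent": "Any name"}).status_code)

Keep in mind that any client can set an arbitrary User-Agent, so this identifies your app but does not strongly protect the bucket.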
I'm deploying some REST APIs using API Gateway and Lambda functions. Because of some architectural restrictions, the API must be available only via REST endpoints. On top of the APIs I need to implement a GraphQL interface to allow part of our users to query this data. To deploy the GraphQL endpoints I'm using AWS AppSync.

Based on those restrictions, I created the AppSync HTTP DataSource pointing to the API Gateway stage URL (https://api-gateway-api-id.execute-api.eu-central-1.amazonaws.com). It worked fine. Then I secured the API Gateway REST endpoint to use AWS_IAM, created a role for the datasource with permission to invoke the API on the selected API invocation ARN, and configured the HTTP DataSource using the AWS CLI.
For example, here is my role's trust policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "appsync.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
And here is the policy attached to this role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:eu-central-1:9999999999:api-gateway-api-id/*/*/*"
        }
    ]
}
And after all of that, I updated my data source from the AWS CLI with the following config:
{
    "dataSource": {
        "dataSourceArn": "arn:aws:appsync:eu-central-1:99999999999:apis/appsync-pi-id/datasources/Echo",
        "name": "Echo",
        "type": "HTTP",
        "serviceRoleArn": "arn:aws:iam::99999999999:role/roleName",
        "httpConfig": {
            "endpoint": "https://api-gateway-api-id.execute-api.eu-central-1.amazonaws.com",
            "authorizationConfig": {
                "authorizationType": "AWS_IAM",
                "awsIamConfig": {
                    "signingRegion": "eu-central-1",
                    "signingServiceName": "appsync"
                }
            }
        }
    }
}
Now when I try to make a query, I get the following error:
Credential should be scoped to correct service: 'execute-api'
As I understand it, the correct service to use when formulating the signature is execute-api. I have some experience creating AWSv4 signatures and know that in this case it would be that one.
Does anybody know where I'm making a mistake?
With help from Ionut Trestian I found the error. I changed the configuration to use a different signingServiceName, as follows:
{
    "dataSource": {
        "dataSourceArn": "arn:aws:appsync:eu-central-1:99999999999:apis/appsync-pi-id/datasources/Echo",
        "name": "Echo",
        "type": "HTTP",
        "serviceRoleArn": "arn:aws:iam::99999999999:role/roleName",
        "httpConfig": {
            "endpoint": "https://api-gateway-api-id.execute-api.eu-central-1.amazonaws.com",
            "authorizationConfig": {
                "authorizationType": "AWS_IAM",
                "awsIamConfig": {
                    "signingRegion": "eu-central-1",
                    "signingServiceName": "execute-api"
                }
            }
        }
    }
}
Apparently I didn't understand the configuration values correctly. In my defense, I couldn't find any documentation regarding these options, only a few examples scattered around the web. :-)
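For anyone applying the same fix programmatically, a boto3 sketch of the update (IDs, names, and ARNs are the placeholders from above):

import boto3

appsync = boto3.client("appsync", region_name="eu-central-1")

appsync.update_data_source(
    apiId="appsync-pi-id",
    name="Echo",
    type="HTTP",
    serviceRoleArn="arn:aws:iam::99999999999:role/roleName",
    httpConfig={
        "endpoint": "https://api-gateway-api-id.execute-api.eu-central-1.amazonaws.com",
        "authorizationConfig": {
            "authorizationType": "AWS_IAM",
            "awsIamConfig": {
                "signingRegion": "eu-central-1",
                # Sign for API Gateway, not for AppSync itself
                "signingServiceName": "execute-api",
            },
        },
    },
)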
In case anyone else ends up here as I did, wondering what else can be used as a signingServiceName (I was looking for s3 specifically), I found this helpful blog post: https://blog.iamjkahn.com/2019/12/invoking-even-more-aws-services-directly-from-aws-appsync.html
My team has a pipeline which runs under an execution IAM role. We want to deploy code to AWS through CloudFormation or the CDK.
In the past, we would upload some artifacts to S3 buckets before creating/updating our CloudFormation stack, using the execution IAM role.
We recently switched to the CDK and are trying to automate as much as possible using CDK Deploy, but we are running into a lot of permissions we need to add that we didn't need before (for instance, cloudformation:GetTemplate).
We don't want to just grant * (we want to follow least privilege), but I can't find any clearly documented list.
Is there a standard list of permissions that CDK Deploy relies on? Are there any "nice-to-haves" beyond a standard list?
CDK v2 now brings and assumes its own roles, so manual permission management is no longer required. You only need to grant permission to assume the cdk-* roles:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/cdk-*"
            ]
        }
    ]
}
These roles are created via cdk bootstrap, which of course requires permission to create the roles and policies. After bootstrapping, though, this is no longer required, so you can run the bootstrap manually with a privileged role, as sketched below.
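For example, a one-time bootstrap run with a privileged profile might look like this (account, region, and profile name are placeholders):

cdk bootstrap aws://111111111111/eu-central-1 --profile admin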
Apparently the CDK still proceeds if any of the cdk-* roles cannot be assumed, so it's still possible to manually manage a CDK policy as below, but it might now require additional permissions.
Be aware that the CFN role has the Administrator policy attached.
Previous answer for CDK v1:
I'm using the policy below to deploy CDK apps. Besides CloudFormation full access and S3 full access to the CDK staging bucket, it grants permission to do everything through CloudFormation.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudformation:*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:CalledVia": [
                        "cloudformation.amazonaws.com"
                    ]
                }
            },
            "Action": "*",
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::cdktoolkit-stagingbucket-*",
            "Effect": "Allow"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter"
            ],
            "Resource": "arn:aws:ssm:*:*:parameter/cdk-bootstrap/*"
        }
    ]
}
You might want to add some explicit denies for things you don't want to allow.
Also, be aware that the condition above does not mean the principal is limited to what is possible with CloudFormation. A potential attack vector would be to create a custom CFN resource backed by a Lambda function. When creating resources through that custom resource, you could then do anything in the Lambda, because it is triggered via CloudFormation.
When you use lookups (the .fromXxx(...) methods), the CDK makes read/list requests to the related service at runtime, while cdk synth is running, not during the CloudFormation deploy. Which permissions you need depends, of course, on the lookups in your code. For example, if you have a Vpc.fromLookup() you should allow the action ec2:DescribeVpcs. You could of course attach the ReadOnlyAccess policy, if you have no concerns about accessing sensitive content.
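For illustration, a minimal CDK app (Python here; the VPC ID, account, and region are placeholders) whose synth triggers exactly such a lookup:

from aws_cdk import App, Environment, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class LookupStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Resolved at synth time via ec2:DescribeVpcs, then cached in cdk.context.json
        vpc = ec2.Vpc.from_lookup(self, "Vpc", vpc_id="vpc-0123456789abcdef0")

app = App()
# Lookups require an explicit account/region environment
LookupStack(app, "LookupStack",
            env=Environment(account="111111111111", region="eu-central-1"))
app.synth()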
Since I couldn't find any documentation anywhere, I had to do some trial and error to get this to work.
Apart from the permissions you need to create the actual resources you define in your stack, you need to grant the following:
cloudformation:DescribeStacks
cloudformation:CreateChangeSet
cloudformation:DescribeChangeSet
cloudformation:ExecuteChangeSet
cloudformation:DescribeStackEvents
cloudformation:DeleteChangeSet
cloudformation:GetTemplate
to the ARN of the stack you are creating, as well as the bootstrap stack:
arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/CDKToolkit/*
You also need S3 permissions on the bucket that the bootstrap added (otherwise you get that dreaded Forbidden: null error):
s3:*Object
s3:ListBucket
s3:GetBucketLocation
to
arn:aws:s3:::cdktoolkit-stagingbucket-*
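Put together, a sketch of such a deploy policy (the stack name is a placeholder):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudformation:DescribeStacks",
                "cloudformation:CreateChangeSet",
                "cloudformation:DescribeChangeSet",
                "cloudformation:ExecuteChangeSet",
                "cloudformation:DescribeStackEvents",
                "cloudformation:DeleteChangeSet",
                "cloudformation:GetTemplate"
            ],
            "Resource": [
                "arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/MyStack/*",
                "arn:aws:cloudformation:${AWS::Region}:${AWS::AccountId}:stack/CDKToolkit/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::cdktoolkit-stagingbucket-*",
                "arn:aws:s3:::cdktoolkit-stagingbucket-*/*"
            ]
        }
    ]
}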
The CDK has two phases: bootstrap and synth/deploy.
For bootstrap, the IAM role or profile used must have the following policy permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "StsAccess",
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole",
                "iam:*Role*"
            ],
            "Resource": [
                "arn:aws:iam::${AWS_ACCOUNT_ID}:role/cdk-*"
            ]
        },
        {
            "Action": [
                "cloudformation:*"
            ],
            "Resource": [
                "arn:aws:cloudformation:${AWS_REGION}:${AWS_ACCOUNT_ID}:stack/CDKToolkit/*"
            ],
            "Effect": "Allow"
        },
        {
            "Sid": "S3Access",
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Sid": "ECRAccess",
            "Effect": "Allow",
            "Action": [
                "ecr:SetRepositoryPolicy",
                "ecr:GetLifecyclePolicy",
                "ecr:PutImageScanningConfiguration",
                "ecr:DescribeRepositories",
                "ecr:CreateRepository",
                "ecr:DeleteRepository"
            ],
            "Resource": [
                "arn:aws:ecr:${AWS_REGION}:${AWS_ACCOUNT_ID}:repository/cdk-*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameter*",
                "ssm:PutParameter*",
                "ssm:DeleteParameter*"
            ],
            "Resource": "arn:aws:ssm:${AWS_REGION}:${AWS_ACCOUNT_ID}:parameter/cdk-bootstrap/*"
        }
    ]
}
For deployment, the role or profile must have the following mandatory permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::*:role/cdk-*"
            ]
        }
    ]
}
Plus all the permissions needed for the infrastructure you're deploying.
What I can recommend is to use two different roles so that you have more security, and, if you're using a GitHub workflow, to take advantage of OpenID Connect.
The bootstrap policy could be improved by restricting permissions further, but the documentation is lacking, so I won't delve into specific aspects (for example, S3).
We also needed to add the permissions below:
ssm:PutParameter
ecr:SetRepositoryPolicy
ecr:GetLifecyclePolicy
ecr:PutImageScanningConfiguration
ssm:GetParameters
ecr:DescribeRepositories
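For reference, a sketch of how these could be scoped to the bootstrap resources rather than granted globally (assuming the default cdk-* naming):

{
    "Effect": "Allow",
    "Action": [
        "ssm:GetParameters",
        "ssm:PutParameter",
        "ecr:DescribeRepositories",
        "ecr:GetLifecyclePolicy",
        "ecr:PutImageScanningConfiguration",
        "ecr:SetRepositoryPolicy"
    ],
    "Resource": [
        "arn:aws:ssm:*:*:parameter/cdk-bootstrap/*",
        "arn:aws:ecr:*:*:repository/cdk-*"
    ]
}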