Not able to upload file to S3 bucket which has KMS policy enabled

I am trying to upload a file to a cross-account S3 bucket that has a KMS key enabled.
Below is the code:
InstanceProfileCredentialsProvider iamCredentials = new InstanceProfileCredentialsProvider();
AmazonS3 s3client = new AmazonS3Client(iamCredentials);
FileInputStream stream = new FileInputStream("/home/tomcat/Test.txt");
ObjectMetadata objectMetadata = new ObjectMetadata();
int bytesAvailable = stream.available();
byte[] fileBytes = new byte[bytesAvailable];
Long contentLength = Long.valueOf(fileBytes.length);
objectMetadata.setContentLength(contentLength);
objectMetadata.setSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm());
PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName,"/Test/Test.txt",stream, objectMetadata).withCannedAcl(CannedAccessControlList.BucketOwnerFullControl);
putObjectRequest.withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(kmsKeyId));
s3client.putObject(putObjectRequest);
Here I am using an IAM role to connect to the external S3 bucket.
IAM role:
arn:aws:iam::xxxxxxxxxxxxxx:role/role1 (xxxxxxxxxxxxxx is the local account ID)
----------------------------------------
role1_kms_access_policy
-----------------------
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:EnableKeyRotation",
"kms:EnableKey",
"kms:Decrypt",
"kms:ListKeyPolicies",
"kms:UntagResource",
"kms:ListRetirableGrants",
"kms:GetKeyPolicy",
"kms:GenerateDataKeyWithoutPlaintext",
"kms:ListResourceTags",
"kms:ReEncryptFrom",
"kms:ListGrants",
"kms:GetParametersForImport",
"kms:DescribeCustomKeyStores",
"kms:ListKeys",
"kms:TagResource",
"kms:GetKeyRotationStatus",
"kms:Encrypt",
"kms:ScheduleKeyDeletion",
"kms:ListAliases",
"kms:GenerateDataKey",
"kms:ReEncryptTo",
"kms:DescribeKey",
"kms:ConnectCustomKeyStore"
],
"Resource": "*"
}
]
}
External bucket KMS key policy:
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::xxxxxxxxxxxxxx:role/role1"
},
"Action": [
"kms:Decrypt",
"kms:DescribeKey",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*"
],
"Resource": "*"
}
When running the code, if I pass the KMS key ID of the external bucket's key to the variable kmsKeyId, the following exception occurs:
com.amazonaws.services.s3.model.AmazonS3Exception: Key 'arn:aws:kms:us-east-1:xxxxxxxxxxxxxx:key/{kmsKeyId}' (here {kmsKeyId} is the value of the passed KMS key ID) does not exist (Service: Amazon S3; Status Code: 400; Error Code: KMS.NotFoundException; Request ID: BQ3VWZK5VC88GXV1; S3 Extended Request ID: csDK/X8MjPHNuV4NrziYoUPBbqZG+Jp269IgBXFTnPQYxjJBgLailWtY7F0JjDLsNyHNO60xeVE=), S3 Extended Request ID: csDK/X8MjPHNuV4NrziYoUPBbqZG+Jp269IgBXFTnPQYxjJBgLailWtY7F0JjDLsNyHNO60xeVE=
And if I pass the whole key ARN to kmsKeyId, it throws an access denied exception.
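For reference, below is a minimal sketch of passing the full key ARN (the ARN value is a placeholder). SSEAwsKeyManagementParams accepts either a bare key ID or a full key ARN; for a cross-account key, the full ARN avoids S3 expanding the bare ID against the wrong account, which is what the KMS.NotFoundException above suggests is happening:
// Full ARN of the KMS key owned by the external (bucket owner's) account.
// Placeholder value for illustration only.
String externalKeyArn = "arn:aws:kms:us-east-1:999999999999:key/11111111-2222-3333-4444-555555555555";

PutObjectRequest request = new PutObjectRequest(bucketName, "/Test/Test.txt", stream, objectMetadata)
        .withCannedAcl(CannedAccessControlList.BucketOwnerFullControl)
        .withSSEAwsKeyManagementParams(new SSEAwsKeyManagementParams(externalKeyArn));
s3client.putObject(request);
With the full ARN in place, an access denied response usually points at the key policy in the external account or at the role's KMS permissions rather than at the request itself.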

Related

How can I connect to endpoint when attempting to export data from RDS to S3?

Objective
My objective is to export data from a Postgres RDS instance to an S3 bucket. I just want to prove that the concept works in my VPC, so I am using dummy data.
What I have tried so far
I followed the docs here, using the console and the CLI.
Created an S3 bucket (I chose to block all public access)
Created an RDS instance with the following settings:
Created on 2 public subnets
Public accessibility: No
Security group rules for outbound: CIDR/IP - 0.0.0.0/0
Security group rules for inbound: CIDR/IP - 0.0.0.0/0
Created a policy as shown in the example:
aws iam create-policy --policy-name rds-s3-export-policy --policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3export",
"Action": [
"S3:PutObject"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::your-s3-bucket/*"
]
}
]
}'
Created an IAM Role like:
aws iam create-role --role-name rds-s3-export-role --assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}'
Attached the policy to the role like:
aws iam attach-role-policy --policy-arn your-policy-arn --role-name rds-s3-export-role
Added the IAM Role to the DB like:
aws rds add-role-to-db-instance \
--db-instance-identifier my-db-instance \
--feature-name s3Export \
--role-arn your-role-arn \
--region your-region
Did all the prerequisites within psql, like:
CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
CREATE TABLE sample_table (bid bigint PRIMARY KEY, name varchar(80));
INSERT INTO sample_table (bid,name) VALUES (1, 'Monday'), (2,'Tuesday'), (3, 'Wednesday');
SELECT aws_commons.create_s3_uri(
'dummy-data-bucket-path',
'',
'us-west-2'
) AS s3_uri_1 \gset
What does not work
When I try to make the actual export by:
SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM sample_table', :'s3_uri_1');
I get the error:
ERROR: could not upload to Amazon S3
DETAIL: Amazon S3 client returned 'Unable to connect to endpoint'.
CONTEXT: SQL function "query_export_to_s3" statement 1
Other things I have tried:
I have tried using Access Analyzer for S3, but my bucket does not seem to appear in the list. I believe this is because the bucket itself does not have a policy attached to it.
How can I debug this issue? What am I doing wrong? I am happy to share further details if needed.
From what I see, the documentation you are following does not assume that you are running this whole setup inside a VPC.
So, to connect from within the VPC (as you have blocked all public access), you need to have an endpoint policy for Amazon S3 attached.
For example, a sample policy from the documentation:
The following is an example of an S3 bucket policy that allows access to a specific bucket, my_secure_bucket, from endpoint vpce-1a2b3c4d only.
{
"Version": "2012-10-17",
"Id": "Policy1415115909152",
"Statement": [
{
"Sid": "Access-to-specific-VPCE-only",
"Principal": "*",
"Action": "s3:*",
"Effect": "Deny",
"Resource": ["arn:aws:s3:::my_secure_bucket",
"arn:aws:s3:::my_secure_bucket/*"],
"Condition": {
"StringNotEquals": {
"aws:sourceVpce": "vpce-1a2b3c4d"
}
}
}
]
}
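If the VPC does not already have an S3 gateway endpoint, one way to create one is with the CLI; a hedged example (the VPC ID and route table ID are placeholders, and the region matches the one used above):
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.us-west-2.s3 \
  --route-table-ids rtb-0123456789abcdef0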

AWS - S3 to RDS (Postgres) import using aws_s3 extension (provided by RDS) is failing

I have successfully created a role with a policy attached that allows the required actions on the bucket. The policy document is:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "s3import",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
And then I attached this role to my RDS instance with the s3Import feature.
This is the command I ran:
SELECT aws_s3.table_import_from_s3(
'table name',
'',
'DELIMITER ''|''',
aws_commons.create_s3_uri(
'bucket-name',
'file.csv',
'region')
);
I am getting this error:
SQL Error [XX000]: ERROR: HTTP 404. Requested file does not exist.
Is anything missing here?
Based on the comments:
The error message indicates that the issue was not access being denied to S3, but rather a wrong file name being used in create_s3_uri.
The solution was to use the correct file name.
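In other words, the second argument to aws_commons.create_s3_uri has to be the full object key of the file inside the bucket. A hedged sketch with placeholder values, mirroring the command above:
SELECT aws_s3.table_import_from_s3(
  'table name',
  '',
  'DELIMITER ''|''',
  aws_commons.create_s3_uri(
    'bucket-name',
    'path/to/file.csv',
    'region')
);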

How to provision RDS postgres db users with AWS IAM auth using terraform?

By checking this AWS knowledge-center article: https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/ I noticed that I need to create a DB user after logging in with the master username and password:
CREATE USER {dbusername} IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS';
I can see Terraform has mysql_user to provision MySQL DB users: https://www.terraform.io/docs/providers/mysql/r/user.html
However, I couldn't find a postgres_user. Is there a way to provision a Postgres user with IAM auth?
In Postgres, a user is called a "role". The Postgres docs say:
a role can be considered a "user", a "group", or both depending on how it is used
So, the TF resource to create is a postgresql_role:
resource "postgresql_role" "my_replication_role" {
name = "replication_role"
replication = true
login = true
connection_limit = 5
password = "md5c98cbfeb6a347a47eb8e96cfb4c4b890"
}
To enable an IAM user to assume the role, follow the steps in the AWS docs.
From those instructions, you would end up with TF code looking something like:
module "db" {
source = "terraform-aws-modules/rds/aws"
// ...
}
# Needed for the account ID interpolation in the policy below.
data "aws_caller_identity" "current" {}
provider "postgresql" {
// ...
}
resource "postgresql_role" "pguser" {
login = true
name = var.pg_username
password = var.pg_password
roles = ["rds_iam"]
}
resource "aws_iam_user" "pguser" {
name = var.pg_username
}
resource "aws_iam_user_policy" "pguser" {
name = var.pg_username
user = aws_iam_user.pguser.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:${var.region}:${data.aws_caller_identity.current.account_id}:dbuser:${module.db.this_db_instance_resource_id}/${var.pg_username}"
]
}
]
}
EOF
}
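Once this is applied, the IAM user connects with a temporary authentication token instead of a password. A hedged example of fetching the token with the AWS CLI (hostname and username are placeholders):
aws rds generate-db-auth-token \
  --hostname mydb.example.us-east-1.rds.amazonaws.com \
  --port 5432 \
  --username app_user \
  --region us-east-1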

Mocking multiple AWS services with moto

I'm trying to mock the creation of a compute environment, which requires some other resources, namely an IAM instance profile and service role. However, when I create those IAM resources and then attempt to use them in the compute environment creation, things fail with:
<Message>Role arn:aws:iam::123456789012:instance-profile/InstanceProfile not found</Message>
The code is below:
@mock_batch
@mock_iam
def test_create_compute_environment(lims_objs):
client = boto3.client("batch")
iam = boto3.resource("iam")
service_role = iam.create_role(
RoleName="BatchServiceRole", AssumeRolePolicyDocument="AWSBatchServiceRole"
)
instance_profile = iam.create_instance_profile(
InstanceProfileName="InstanceProfile"
)
instance_profile.add_role(RoleName=service_role.name)
for elem in iam.instance_profiles.all():
print(elem, elem.arn)
for elem in iam.roles.all():
print(elem)
response = client.create_compute_environment(
computeEnvironmentName="compute_environment",
type="MANAGED",
state="ENABLED",
computeResources={
"type": "EC2",
"minvCpus": 0,
"maxvCpus": 256,
"desiredvCpus": 2,
"instanceTypes": ["optimal"],
"imageId": "test",
"subnets": [],
"securityGroupIds": [],
"ec2KeyPair": "",
"instanceRole": instance_profile.arn,
"tags": {},
},
serviceRole=service_role.arn,
)
In the test, I can see the prints for the IAM objects, so I know they are being created. Are these just not shared across moto mocks?
iam.InstanceProfile(name='InstanceProfile') arn:aws:iam::123456789012:instance-profile/InstanceProfile
iam.Role(name='BatchServiceRole')
I know this may not be a complete working example even once we get past the instance profile issue, but this is where it's stuck now.
Any insight is much appreciated. Thanks so much!
I was able to get past this, and I hope this can help others. Briefly, I created fixtures and passed services around where I needed them.
@pytest.fixture()
def vpc():
mock = mock_ec2()
mock.start()
ec2 = boto3.resource("ec2")
vpc = ec2.create_vpc(CidrBlock="172.16.0.0/16")
vpc.wait_until_available()
ec2.create_subnet(CidrBlock="172.16.0.1/24", VpcId=vpc.id)
ec2.create_security_group(
Description="Test security group", GroupName="sg1", VpcId=vpc.id
)
yield vpc
mock.stop()
@pytest.fixture()
def iam_resource():
mock = mock_iam()
mock.start()
yield boto3.resource("iam")
mock.stop()
@pytest.fixture()
def batch_client():
mock = mock_batch()
mock.start()
yield boto3.client("batch")
mock.stop()
@pytest.fixture()
def batch_roles(iam_resource) -> Tuple[Any, Any]:
iam = iam_resource
service_role = iam.create_role(
RoleName="BatchServiceRole",
AssumeRolePolicyDocument=json.dumps(
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"Service": "batch.amazonaws.com"},
"Action": "sts:AssumeRole",
}
],
}
),
)
instance_role = iam.create_role(
RoleName="InstanceRole",
AssumeRolePolicyDocument=json.dumps(
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole",
}
],
}
),
)
return service_role, instance_role
@pytest.fixture()
def batch_compute_environments(
lims_objs, batch_client, batch_roles, vpc
):
...
I was then able to mock out and test submitting jobs using these and other fixtures, created in the same way as the above.
def test_submit_batch(
lims_objs,
batch_client,
batch_compute_environments,
batch_job_definitions,
batch_queue,
capsys,
):
client = batch_client
for (env, assay), lims in lims_objs.items():
name = f"pytest_batch_job_{env.name}_{assay.name}"
res = client.submit_job(
jobName="pytest_" + name,
jobQueue=lims.get_aws_name("job_queue"),
jobDefinition=lims.get_aws_name("job_definition"),
parameters={
"assay": "...",
"runid": name,
"reqid": "pytest",
"env": env.name,
},
)
assert res["ResponseMetadata"]["HTTPStatusCode"] == requests.codes.ok
...

How to set aws cloudwatch retention via Terraform

I'm using Terraform to deploy API Gateway/Lambda and already have the appropriate logs in CloudWatch. However, I can't seem to find a way to set the retention on the logs via Terraform using my currently deployed resources (below). It looks like the log group resource is where I'd do it, but I'm not sure how to point the log stream from API Gateway at the new log group. I must be missing something obvious ... any advice is very much appreciated!
resource "aws_api_gateway_account" "name" {
cloudwatch_role_arn = "${aws_iam_role.cloudwatch.arn}"
}
resource "aws_iam_role" "cloudwatch" {
name = "#{name}_APIGatewayCloudWatchLogs"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "api_gateway_logs" {
name = "#{name}_api_gateway_logs_policy_attach"
roles = ["${aws_iam_role.cloudwatch.id}"]
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}
resource "aws_api_gateway_method_settings" "name" {
rest_api_id = "${aws_api_gateway_rest_api.name.id}"
stage_name = "${aws_api_gateway_stage.name.stage_name}"
method_path = "${aws_api_gateway_resource.name.path_part}/${aws_api_gateway_method.name.http_method}"
settings {
metrics_enabled = true
logging_level = "INFO"
data_trace_enabled = true
}
}
Yes, you can use the Lambda log group name to create the log group resource before you create the Lambda function, or you can import the existing log groups.
resource "aws_cloudwatch_log_group" "lambda" {
name = "/aws/lambda/${var.env}-${join("", split("_",title(var.lambda_name)))}-Lambda"
retention_in_days = 7
lifecycle {
create_before_destroy = true
prevent_destroy = false
}
}
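For the API Gateway logs in the question, the same pattern applies. Execution logs are written to a log group named API-Gateway-Execution-Logs_{rest-api-id}/{stage-name}, so a hedged sketch reusing the resource names from the question would be:
resource "aws_cloudwatch_log_group" "api_gateway_execution" {
  # API Gateway writes execution logs to this well-known log group name;
  # declaring it in Terraform lets you manage the retention.
  name              = "API-Gateway-Execution-Logs_${aws_api_gateway_rest_api.name.id}/${aws_api_gateway_stage.name.stage_name}"
  retention_in_days = 7
}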