Access denied to S3 when using COPY command with IAM role - amazon-redshift

I have the following copy command:
copy ink.contact_trace_records
from 's3://mybucket/mykey'
iam_role 'arn:aws:iam::accountnum:role/rolename'
json 'auto';
The role has full-access-to-S3 and full-access-to-Redshift policies attached (I know this is not a good idea, but I'm just losing it here :) ). The cluster has the role attached, and the cluster has VPC enhanced routing. I get the following:
[2019-05-28 14:07:34] [XX000][500310] [Amazon](500310) Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid 370F75618922AFC0,ExtRid dp2mcnlofFzt4dz00Lcm188/+ta3OEKVpuFjnZSYsC0pJPMiULk7I6spOpiZwXc04VVRdxizIj4=,CanRetry 1
[2019-05-28 14:07:34] Details:
[2019-05-28 14:07:34] -----------------------------------------------
[2019-05-28 14:07:34] error: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid 370F75618922AFC0,ExtRid dp2mcnlofFzt4dz00Lcm188/+ta3OEKVpuFjnZSYsC0pJPMiULk7I6spOpiZwXc04VVRdxizIj4=,CanRetry 1
[2019-05-28 14:07:34] code: 8001
[2019-05-28 14:07:34] context: S3 key being read : s3://redacted
[2019-05-28 14:07:34] query: 7794423
[2019-05-28 14:07:34] location: table_s3_scanner.cpp:372
[2019-05-28 14:07:34] process: query3_986_7794423 [pid=13695]
[2019-05-28 14:07:34] -----------------------------------------------;
What am I missing here? The cluster has full access to S3, and it is not even in a custom VPC; it is in the default one. Thoughts?

Check the object owner. The object is likely owned by another account; this is the cause of S3 403s in many situations where objects cannot be accessed even by a role or account with full permissions. The typical indicator is that you can list the object(s) but cannot GET or HEAD them.
In the following example I'm trying to access a Redshift audit log from a 2nd account. The account 012345600000 is granted access to the bucket owned by 999999999999 using the following policy:
{"Version": "2012-10-17",
"Statement": [
{"Action": [
"s3:Get*",
"s3:ListBucket*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-audit-logs",
"arn:aws:s3:::my-audit-logs/*"
],
"Principal": {
"AWS": [
"arn:aws:iam::012345600000:root"
]}}]
}
Now I try to list (s3 ls) then copy (s3 cp) a single log object:
aws --profile 012345600000 s3 ls s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/
# 2019-05-28 14:49:46 376 …connectionlog_2019-05-25T19:03.gz
aws --profile 012345600000 s3 cp s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz ~/Downloads/…connectionlog_2019-05-25T19:03.gz
# fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Then I check the object ownership from the account that owns the bucket.
aws --profile 999999999999 s3api get-object-acl --bucket my-audit-logs --key AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz
# "Owner": {
# "DisplayName": "aws-cm-prod-auditlogs-uswest2", ## Not my account!
# "ID": "b2b456ce30a967fb1877b3c9594773ae0275fee248e3ebdbff43d66907b89144"
# },
I then copy the object in the same bucket with --acl bucket-owner-full-control. This makes me the owner of the new object. Note the changed SharedLogs/ prefix.
aws --profile 999999999999 s3 cp \
s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz \
s3://my-audit-logs/SharedLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz \
--acl bucket-owner-full-control
Now I can download (or load to Redshift!) the new object that is shared from the same bucket.
aws --profile 012345600000 s3 cp s3://my-audit-logs/SharedLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz ~/Downloads/…connectionlog_2019-05-25T19:03.gz
# download: …
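For completeness, once the objects are readable (here via the re-owned copies under SharedLogs/), a COPY like the one in the original question works against that prefix. A sketch, assuming a connection_log table whose columns match the pipe-delimited, gzipped audit log format (table and role names are placeholders):
copy connection_log
from 's3://my-audit-logs/SharedLogs/999999999999/redshift/us-west-2/2019/05/25/'
iam_role 'arn:aws:iam::accountnum:role/rolename'
gzip delimiter '|';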

Related

Errors with BigQuery Sink Connector Configuration

I am trying to ingest data from MySQL to BigQuery. I am using Debezium components running on Docker for this purpose.
Any time I try to deploy the BigQuery sink connector to Kafka Connect, I get this error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 2 error(s):\nFailed to construct GCS client: Failed to access JSON key file\nAn unexpected error occurred while validating credentials for BigQuery: Failed to access JSON key file\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"}
The error points to an issue with locating the service account key file.
I granted the service account the BigQuery Admin and Editor roles, but the error persists.
This is my BigQuery connector configuration file:
{
  "name": "kcbq-connect1",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "tasks.max": "1",
    "topics": "kcbq-quickstart1",
    "sanitizeTopics": "true",
    "autoCreateTables": "true",
    "autoUpdateSchemas": "true",
    "schemaRetriever": "com.wepay.kafka.connect.bigquery.retrieve.IdentitySchemaRetriever",
    "schemaRegistryLocation": "http://localhost:8081",
    "bufferSize": "100000",
    "maxWriteSize": "10000",
    "tableWriteWait": "1000",
    "project": "dummy-production-overview",
    "defaultDataset": "debeziumtest",
    "keyfile": "/Users/Oladayo/Desktop/Debezium-Learning/key.json"
  }
}
Can anyone help?
Thank you.
I needed to mount the service account key from my local directory into the Kafka Connect container; that was how I solved the issue. Thank you :)
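For reference, a minimal sketch of such a mount; the container-side path and the image name below are illustrative, not taken from the original setup:
# Bind-mount the key file from the host into the Kafka Connect container
# (keep whatever image and other flags you already use).
docker run -d --name kafka-connect \
  -v /Users/Oladayo/Desktop/Debezium-Learning/key.json:/etc/kafka-connect/key.json \
  <your-kafka-connect-image>
Then point the connector configuration at the container-side path, e.g. "keyfile": "/etc/kafka-connect/key.json".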

Unable to import s3 data to RDS(Postgresql) using aws_commons extension

I am trying to import an S3 file (CSV) into a PostgreSQL RDS instance.
I am running the S3 import queries from the RDS instance as per the AWS docs, but I am getting the error below:
ERROR: syntax error at or near ":"
LINE 5: :'s3_uri',
^
SQL state: 42601
Character: 76
The query I used for importing the S3 file is below:
SELECT aws_s3.table_import_from_s3(
't1',
'',
'(format csv)',
:'s3_uri',
);
I am running the query as per the AWS doc: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Procedural.Importing.html#USER_PostgreSQL.S3Import.Overview
I had the same error. Solved it with these steps:
TL;DR: when you're importing data with aws_commons, you provide the credentials and the path to the S3 object you're working with rather than a variable that stores that information. So, assuming you've already created an IAM role and policy that allow your database to access S3, you run:
SELECT aws_s3.table_import_from_s3('table_name',
'"col1", "col2", "col3"',
'(format csv)',
's3_bucket_name',
'file_name.csv',
'region',
'access_key',
'secret_key');
1. Upgrade Postgres to version 14.4 (though 12.4 also works).
2. Install aws_commons:
CREATE EXTENSION aws_s3 CASCADE;
3. Create the IAM role:
aws iam create-role --role-name $ROLE_NAME --assume-role-policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Principal": {"Service": "rds.amazonaws.com"}, "Action": "sts:AssumeRole"}]}'
4. Create an IAM policy for the IAM role:
aws iam create-policy \
  --policy-name $POLICY_NAME \
  --policy-document '{"Version": "2012-10-17", "Statement": [{"Sid": "s3import", "Action": ["s3:GetObject", "s3:ListBucket"], "Effect": "Allow", "Resource": ["arn:aws:s3:::${BUCKET_NAME}", "arn:aws:s3:::${BUCKET_NAME}/*"]}]}'
5. Attach the policy to the role:
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/$POLICY_NAME \
  --role-name $ROLE_NAME
Note: steps 1-5 were enough to allow data exports from Postgres; importing from S3 kept failing until I added step 6.
6. Create the VPC endpoint for the S3 service:
aws ec2 create-vpc-endpoint \
  --vpc-id $VPC_ID \
  --service-name com.amazonaws.$REGION.s3 \
  --route-table-ids $ROUTE_TABLE_ID
The route table id for the VPC where the endpoint is created can be retrieved with:
aws ec2 describe-route-tables | jq -r '.RouteTables[] | "\(.VpcId) \(.RouteTableId)"'
7. Run the import:
SELECT aws_s3.table_import_from_s3('table_name', '"col1", "col2", "col3"', '(format csv)', 's3_bucket_name', 'file_name.csv', 'region', 'access_key', 'secret_key');
References:
Importing data from Amazon S3 into an RDS for PostgreSQL DB instance
Examples of importing data from Amazon S3 into an RDS for PostgreSQL DB instance
:'s3_uri' is not valid SQL by itself; it is a client-side variable placeholder. The :'s3_uri' token, including the leading colon, needs to be replaced with an actual S3 URI, and when you are done there should be no literal colon left outside the quotes.
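For illustration, a sketch of the call with the placeholder expanded via aws_commons.create_s3_uri (the bucket, key, and region below are made up, and the IAM-role variant of table_import_from_s3 is assumed):
SELECT aws_s3.table_import_from_s3(
  't1',
  '',
  '(format csv)',
  aws_commons.create_s3_uri('my-bucket', 'path/to/file.csv', 'us-east-1')
);
Note there is also no trailing comma before the closing parenthesis.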
Try adding a newline (\n) before the colon:
SELECT aws_s3.table_import_from_s3( 't1', '', '(format csv)',
:'s3_uri', );

Hashicorp Vault reading creds - failed to find entry for connection with name: db_name

I don't know if I did something wrong or not, but here is my configuration.
// payload.json
{
"plugin_name": "postgresql-database-plugin",
"allowed_roles": "*",
"connection_url": "postgresql://{{username}}:{{password}}#for-testing-vault.rds.amazonaws.com:5432/test-app",
"username": "test",
"password": "testtest"
}
then run this command:
curl --header "X-Vault-Token: ..." --request POST --data #payload.json http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/database/config/postgresql
roles configuration:
// readonlypayload.json
{
"db_name": "test-app",
"creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"],
"default_ttl": "1h",
"max_ttl": "24h"
}
then run this command:
curl --header "X-Vault-Token: ..." --request POST --data #readonlypayload.json http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/database/roles/readonly
Then I created a policy:
path "database/creds/readonly" {
capabilities = [ "read" ]
}
path "/sys/leases/renew" {
capabilities = [ "update" ]
}
and run this to get the token:
curl --header "X-Vault-Token: ..." --request POST --data '{"policies": ["db_creds"]}' http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/auth/token/create | jq
executed this command to get the values:
VAULT_TOKEN=... consul-template.exe -template="config.yml.tpl:config.yml" -vault-addr "http://ip_add.us-west-1.compute.amazonaws.com:8200" -log-level debug
Then I receive this error:
URL: GET http://ip_add.us-west-1.compute.amazonaws.com:8200/v1/database/creds/readonly
Code: 500. Errors:
* 1 error occurred:
* failed to find entry for connection with name: "test-app"
Any suggestions will be appreciated, thanks!
EDIT: I also tried this command on the server:
vault read database/creds/readonly
Still returning
* 1 error occurred:
* failed to find entry for connection with name: "test-app"
For those coming to this page via Googling for this error message, this might help:
Unfortunately the Vault database role's db_name parameter is a bit misleading. The value needs to match a database/config/ entry, not an actual database name per se. The GRANT statement itself is where the database name is relevant; db_name is just a reference to the config name, which may or may not match the database name. (In my case, the configs have other data such as environment prefixing the DB name.)
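In this thread the connection config was POSTed to database/config/postgresql, so a sketch of the fix is to make the role's db_name reference that config name, leaving everything else as in the original readonlypayload.json:
// readonlypayload.json
{
  "db_name": "postgresql",
  "creation_statements": ["CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"],
  "default_ttl": "1h",
  "max_ttl": "24h"
}
Then re-POST it to /v1/database/roles/readonly as before.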
In case this issue is not yet resolved:
Vault is not able to find the database 'test-app' in Postgres, or authentication to the database 'test-app' with the given credentials fails, so the connection fails.
Log in to Postgres and check whether the database 'test-app' exists by running \l.
For creating roles in Postgres you should use the default database 'postgres'. Try changing the name from 'test-app' to 'postgres' and check.
Change connection_url in payload.json:
"connection_url": "postgresql://{{username}}:{{password}}#for-testing-vault.rds.amazonaws.com:5432/postgres",

Accessing Google Cloud Storage from Grails Application

From a Grails application I would like to create a blob in a bucket.
I already created the bucket in Google Cloud, created a service account, and gave that service account owner access to the bucket. Later I created the service account key project-id-c4b144.json, which holds all the credentials.
StorageOptions storageOptions = StorageOptions.newBuilder()
.setCredentials(ServiceAccountCredentials
.fromStream(new FileInputStream("/home/etibar/Downloads/project-id-c4b144.json"))) // setting credentials
.setProjectId("project-id") //setting project id, in reality it is different
.build()
Storage storage=storageOptions.getService()
BlobId blobId = BlobId.of("dispatching-photos", "blob_name")
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build()
Blob blob = storage.create(blobInfo, "Hello, Cloud Storage!".getBytes(StandardCharsets.UTF_8))
When I run this code, I get a json error message back.
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Caller does not have storage.objects.create access to bucket dispatching-photos.",
"reason" : "forbidden"
} ],
"message" : "Caller does not have storage.objects.create access to bucket dispatching-photos."
}
| Grails Version: 3.2.10
| Groovy Version: 2.4.10
| JVM Version: 1.8.0_131
google-cloud-datastore:1.2.1
google-auth-library-oauth2-http:0.7.1
google-cloud-storage:1.2.2
Concerning the service account that the JSON file corresponds to, I'm betting either:
A) the bucket you're trying to access is owned by a different project than the one where you have that account set as a storage admin, or
B) you're setting permissions for a different service account than the one the JSON file corresponds to.
Both are quick to check from the CLI, as sketched below.
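A minimal sketch of those checks, assuming the Cloud SDK (gsutil) is installed; the bucket name and key path are the ones from the question:
# Which service account does the key file actually belong to?
grep client_email /home/etibar/Downloads/project-id-c4b144.json
# Does that service account appear in the bucket's IAM bindings with a role
# that includes storage.objects.create (e.g. roles/storage.objectAdmin)?
gsutil iam get gs://dispatching-photos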

S3ServiceException when using AWS RedshiftBasicEmitter

I am using the sample AWS Kinesis/Redshift connector code from GitHub. I ran the code on an EC2 instance and hit the following exception. Note that emitting from Kinesis to S3 actually succeeded, but emitting from S3 to Redshift failed. As both emitters in the same program used the same credentials, I am puzzled as to why only one of them failed.
I understand that most people getting the "The AWS Access Key Id you provided does not exist in our records" exception probably have an issue with setting up the S3 key pair properly, but that does not seem to be the case here, since emitting to S3 succeeded. If the credentials lacked read access, it should throw an authorization error instead.
Please comment if you have any insight.
Mar 16, 2014 4:32:49 AM com.amazonaws.services.kinesis.connectors.s3.S3Emitter emit
INFO: Successfully emitted 31 records to S3 in s3://mybucket/495362565978733426345566872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
Mar 16, 2014 4:32:50 AM com.amazonaws.services.kinesis.connectors.redshift.RedshiftBasicEmitter executeStatement
SEVERE: org.postgresql.util.PSQLException: ERROR: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
Detail:
-----------------------------------------------
error: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
code: 8001
context: Listing bucket=mfpredshift prefix=49536256597873342637951299872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
query: 3464108
location: s3_utility.cpp:536
process: padbmaster [pid=8116]
-----------------------------------------------
Mar 16, 2014 4:32:50 AM com.amazonaws.services.kinesis.connectors.redshift.RedshiftBasicEmitter emit
SEVERE: java.io.IOException: org.postgresql.util.PSQLException: ERROR: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
Detail:
-----------------------------------------------
error: S3ServiceException:The AWS Access Key Id you provided does not exist in our records.,Status 403,Error InvalidAccessKeyId,Rid 5TY6Y784TT67,ExtRid qKzklJflmmgnhtttthbce+8T0NIR/sdd4RgffTgfgfdfgdfgfffgghgdse56f,CanRetry 1
code: 8001
context: Listing bucket=mybucket prefix=495362565978733426345566872055061454326385819810529281-49536256597873342638068737503047822713441029589972287489
query: 3464108
location: s3_utility.cpp:536
process: padbmaster [pid=8116]
-----------------------------------------------
I encountered the same errors. I'm using an IAM role to get credentials. In my case, it was solved by modifying RedshiftBasicEmitter to append ;token=TOKEN to the CREDENTIALS parameter (in the end I created my own IEmitter).
See http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html
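The likely cause is that credentials obtained from an IAM role are temporary and come with a session token; without the token, Redshift reports the access key as unknown. A sketch of the credentials string the emitter needs to build (table, prefix, delimiter, and keys are placeholders):
copy my_table
from 's3://mybucket/prefix'
credentials 'aws_access_key_id=<temporary-key>;aws_secret_access_key=<temporary-secret>;token=<session-token>'
delimiter '|';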