I am running a small Go application inside an EC2 instance. It accesses Amazon SQS as a consumer. I have configured keys in the ~/.aws/credentials file. The EC2 instance has also been assigned an IAM role.
Can my Go application use the IAM role assigned to the EC2 instance?
If yes, how can that be done through configuration alone, without a code change?
If the role is configured, do I still need to provide keys anywhere?
If you use github.com/aws/aws-sdk-go-v2/config and its config.LoadDefaultConfig() function to load AWS credentials, then yes: your application will retrieve temporary credentials for the IAM role you assigned.
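For illustration, here is a minimal sketch of such a consumer; the queue name and region are assumptions of mine, and note that no credentials appear anywhere in the code:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	ctx := context.Background()

	// No keys are passed here: LoadDefaultConfig walks the default credential
	// chain (env vars, shared config/credentials files, then EC2 instance
	// metadata), so the instance role is picked up automatically.
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("eu-west-1")) // region is a placeholder
	if err != nil {
		log.Fatalf("unable to load AWS config: %v", err)
	}

	client := sqs.NewFromConfig(cfg)

	// "my-queue" is a hypothetical queue name.
	urlOut, err := client.GetQueueUrl(ctx, &sqs.GetQueueUrlInput{
		QueueName: aws.String("my-queue"),
	})
	if err != nil {
		log.Fatalf("get queue url: %v", err)
	}

	msgs, err := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
		QueueUrl:            urlOut.QueueUrl,
		MaxNumberOfMessages: 10,
	})
	if err != nil {
		log.Fatalf("receive: %v", err)
	}
	fmt.Printf("received %d messages\n", len(msgs.Messages))
}

The same binary runs unchanged on your laptop (using keys) and on the instance (using the role).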
aws-sdk-go-v2 will retrieve credentials from the instance metadata service. The retrieval process is described in the official AWS docs here; the "How do roles for EC2 instances work" section describes it as follows.
When the application runs, it obtains temporary security credentials from Amazon EC2 instance metadata, as described in Retrieving Security Credentials from Instance Metadata. These are temporary security credentials that represent the role and are valid for a limited period of time.
With some AWS SDKs, the developer can use a provider that manages the temporary security credentials transparently. (The documentation for individual AWS SDKs describes the features supported by that SDK for managing credentials.)
Alternatively, the application can get the temporary credentials directly from the instance metadata of the EC2 instance. Credentials and related values are available from the iam/security-credentials/role-name category (in this case, iam/security-credentials/Get-pics) of the metadata. If the application gets the credentials from the instance metadata, it can cache the credentials.
You can also refer here for aws-sdk-go-v2's credential retrieval order.
You don't have to provide keys; aws-sdk-go-v2 will retrieve them from the EC2 instance metadata. One caveat: the shared credentials file comes before instance metadata in the retrieval order, so delete the keys from ~/.aws/credentials if you want the instance role to be the one actually used.
I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous, at best, especially given that that resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are three types of account you can create with "gcloud sql users create" or the Terraform resource "google_sql_user":
"CLOUD_IAM_USER", "CLOUD_IAM_SERVICE_ACCOUNT" or "BUILT_IN"
The default is the BUILT_IN type if not specified.
CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT accounts get created with NO roles.
We are using these, as integration with IAM is useful in lots of ways (not managing passwords at the database level is a major plus, especially when used in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts that need a PostgreSQL username and password) are for some reason granted the "cloudsqlsuperuser" role.
Since you aren't allowed the real superuser role on GCP, this is about as privileged as you can get, so to me (and you) it seems a bizarre default.
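If you create users programmatically rather than through Terraform, the same type field is exposed by the Cloud SQL Admin API. Here is a hedged sketch using its Go client; the project, instance, and email are placeholders I made up, and I'm assuming the sqladmin/v1 client:

package main

import (
	"context"
	"log"

	sqladmin "google.golang.org/api/sqladmin/v1"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials.
	svc, err := sqladmin.NewService(ctx)
	if err != nil {
		log.Fatalf("sqladmin service: %v", err)
	}

	// Omitting Type (or using "BUILT_IN") yields a password user that is
	// granted cloudsqlsuperuser; CLOUD_IAM_USER is created with no roles.
	user := &sqladmin.User{
		Name: "dev@example.com",
		Type: "CLOUD_IAM_USER",
	}

	op, err := svc.Users.Insert("my-project", "my-instance", user).Context(ctx).Do()
	if err != nil {
		log.Fatalf("insert user: %v", err)
	}
	log.Printf("operation: %s", op.Name)
}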
I want to give my developer access to my MongoDB, which is hosted on an EC2 instance on AWS.
He should be able to run mongodump, upload the new backend, and make some changes in our control panel.
I created an IAM user with EC2FullAccess permissions, and I have seen that he was able to add his own IP to the security group so he could connect.
I don't feel comfortable with that. What should I do to ensure he has just enough access to do the necessary work:
Upload new code to server
Do MongoDB dump
I don't want him to be able to switch off or delete my instance, or to be able to delete my database at all.
Looking at your use case, you do not need to grant any EC2 permissions; your developer does not even need an IAM user. He can simply have the IP of the instance and the login credentials for it. That should suffice to log in to the instance and make the required changes. There is no need for an IAM user or AWS Console access.
IAM roles are for accessing a service on behalf of another. Say you want to access AWS DynamoDB or S3 from an EC2 instance; in that case, an IAM role with the required permissions attached to the EC2 instance will serve the purpose.
An IAM user is for people who need access to AWS services, either through the console or through the API (programmatically); AWS credentials are required to access those services.
In your case, MongoDB is installed on EC2, and your developer needs access to "the server on which MongoDB is installed", not to the AWS EC2 service itself.
As correctly pointed out in the answer by @X-Men, neither an IAM role nor an IAM user is required here. What is required is for your developer to have the IP of the server and credentials to log in to that server: a username and password, or a username and key.
The restrictions you want to place on the developer with respect to MongoDB have to be configured in MongoDB itself, not at the EC2 level; see the sketch below.
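For example, a restricted MongoDB user could be created like this with the Go driver (connection URI, database name, and passwords are all placeholders):

package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// Connect as an administrative user; the URI is a placeholder.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://admin:secret@localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	// Create a user limited to the application database: "read" is enough
	// for mongodump of that database; use "readWrite" if he must also modify data.
	res := client.Database("appdb").RunCommand(ctx, bson.D{
		{Key: "createUser", Value: "developer"},
		{Key: "pwd", Value: "a-strong-password"},
		{Key: "roles", Value: bson.A{
			bson.D{{Key: "role", Value: "read"}, {Key: "db", Value: "appdb"}},
		}},
	})
	if err := res.Err(); err != nil {
		log.Fatalf("createUser: %v", err)
	}
	log.Println("restricted user created")
}

With a user like this, the developer can dump the application database but cannot drop other databases or touch server configuration.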
I am using EC2 instance profile credentials to allow the AWS EC2 instance to access other AWS services.
Recently, I implemented MongoDB Client-Side Field Level Encryption (CSFLE), for which AWS KMS is used as the KMS provider. The MongoDB documentation for CSFLE says that the KMS provider should be given a secret key and access key that map to an IAM user.
This means I will have to create another IAM user and then maintain those credentials separately. A simpler (and more secure) way would have been to use the DefaultCredentialsProvider from software.amazon.awssdk:auth, which could have picked up the instance profile credentials that already grant access to KMS. But this does not work for me: the MongoClient fails because KMS rejects the security token used.
Is there any reason behind not allowing this way of accessing KMS?
As with all projects, the initial implementation of CSFLE had a defined scope. This scope did not include the ability to use instance roles for credential identification.
I suggest you submit your request to https://feedback.mongodb.com/ for consideration.
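In the meantime, the configuration CSFLE does support is static IAM-user credentials supplied as a KMS provider. As a sketch with the MongoDB Go driver (the env var names and key vault namespace are assumptions of mine; the question used the Java driver, but the shape is the same):

package main

import (
	"context"
	"log"
	"os"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// CSFLE's AWS KMS provider takes a static access key pair; instance-role
	// session credentials were rejected, hence the separate IAM user.
	kmsProviders := map[string]map[string]interface{}{
		"aws": {
			"accessKeyId":     os.Getenv("CSFLE_AWS_ACCESS_KEY_ID"),
			"secretAccessKey": os.Getenv("CSFLE_AWS_SECRET_ACCESS_KEY"),
		},
	}

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)

	ceOpts := options.ClientEncryption().
		SetKmsProviders(kmsProviders).
		SetKeyVaultNamespace("encryption.__keyVault")

	ce, err := mongo.NewClientEncryption(client, ceOpts)
	if err != nil {
		log.Fatalf("client encryption: %v", err)
	}
	defer ce.Close(ctx)
	log.Println("ClientEncryption ready")
}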
I recently created a VM, but mistakenly gave the default service account Storage: Read Only permissions instead of the intended Read Write under "Identity & API access", so GCS write operations from the VM are now failing.
I realized my mistake, so following the advice in this answer, I stopped the VM, changed the scope to Read Write and started the VM. However, when I SSH in, I'm still getting 403 errors when trying to create buckets.
$ gsutil mb gs://some-random-bucket
Creating gs://some-random-bucket/...
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
How can I fix this? I'm using the default service account, and don't have the IAM permissions to be able to create new ones.
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* (projectnum)-compute@developer.gserviceaccount.com
I suggest you try adding the "cloud-platform" scope to the instance by running the gcloud command below:
gcloud alpha compute instances set-scopes INSTANCE_NAME [--zone=ZONE]
    [--scopes=[SCOPE,…]] [--service-account=SERVICE_ACCOUNT]
For the scope, use "https://www.googleapis.com/auth/cloud-platform", since it gives full access to all Google Cloud Platform resources.
Here is the gcloud documentation.
Try creating the Google Cloud Storage bucket with your user account.
Type gcloud auth login and open the link you are given; once there, copy the code and paste it into the command line.
Then run gsutil mb gs://bucket-name.
The security model has two things at play: API scopes and IAM permissions. Access is determined by the AND of them, so you need both an acceptable scope and sufficient IAM privileges to perform any given action.
API scopes are bound to the credentials. They are represented by a URL such as https://www.googleapis.com/auth/cloud-platform.
IAM permissions are bound to the identity. These are set up in the Cloud Console's IAM & admin > IAM section.
This means you can have two VMs running as the same default service account that nonetheless have different levels of access.
For simplicity, you generally want to just set the IAM permissions and use the cloud-platform API auth scope.
To check whether you have this set up, go to the VM in the Cloud Console and you'll see something like:
Cloud API access scopes
Allow full access to all Cloud APIs
When you SSH into the VM, gcloud is by default logged in as the VM's service account. I'd discourage logging in as yourself there; otherwise you more or less break gcloud's configuration for reading the default service account.
Once you have this setup you should be able to use gsutil properly.
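Once both the scope and the IAM permissions are in place, the equivalent of that gsutil mb call should also succeed from code on the VM. A minimal Go sketch (bucket name and project ID are placeholders):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	// On a VM with the cloud-platform scope, the client picks up the default
	// service account automatically; IAM then decides whether the call succeeds.
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("storage client: %v", err)
	}
	defer client.Close()

	// Bucket name and project ID are placeholders.
	if err := client.Bucket("some-random-bucket").Create(ctx, "my-project-id", nil); err != nil {
		log.Fatalf("create bucket: %v", err)
	}
	log.Println("bucket created")
}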
Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent, or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will be looking at migrating an app to EC2 etc. to make use of scaling. But for now, whilst I'm setting up my deployment pipeline, I wondered if it is still possible to make use of KMS on my non-AWS-provisioned server.
The only possible way I can think of is installing the AWS CLI tools on the server in question. Does this sound like the right approach?
What @Viccari said in the comments is correct. In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
In the end, whether you use Parameter Store or KMS, you will need some sort of credentials stored somewhere in order to call the underlying AWS services. If you are working outside of AWS EC2, that means the AWS access key and secret key of an IAM user. If you are in EC2, the IAM instance role will magically provide the credentials and use that role to call those AWS services; the AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is to keep them in an untracked file (added to your .gitignore) that you can source as environment variables, which your program will then read. This allows you to do local testing and, later, to run in EC2 with zero code changes. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for which variables to set. Note that this doc talks about the CLI; the SDKs follow the same behavior.
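As a sketch of that zero-code-change setup with aws-sdk-go-v2 (the parameter name is hypothetical): the very same program reads keys from the environment off AWS and uses the instance role on EC2.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ssm"
)

func main() {
	ctx := context.Background()

	// Outside EC2, the default chain picks up AWS_ACCESS_KEY_ID,
	// AWS_SECRET_ACCESS_KEY and AWS_REGION from the environment; inside EC2
	// the instance role is used instead. Same code in both places.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatalf("load config: %v", err)
	}

	client := ssm.NewFromConfig(cfg)

	// "/myapp/db-password" is a hypothetical parameter name.
	out, err := client.GetParameter(ctx, &ssm.GetParameterInput{
		Name:           aws.String("/myapp/db-password"),
		WithDecryption: aws.Bool(true), // decrypt SecureString values via KMS
	})
	if err != nil {
		log.Fatalf("get parameter: %v", err)
	}
	fmt.Println(*out.Parameter.Value)
}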