Is there a way to know which IAM service role was used in the creation of a stack? I was able to create stacks before, but it fails with permissions issues now; I want to figure out what changed that prevents me from creating stacks.
By default, an AWS CloudFormation stack runs with the permissions of the user that creates the stack.
Alternatively, an IAM role can be specified during stack creation. If the user has permission to use that IAM role, then the stack will be created using the permissions of that role.
See: AWS CloudFormation Service Role - AWS CloudFormation
The describe_stacks() API call can be used to obtain the RoleARN that is used by a stack.
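For example, here is a minimal sketch in Go with aws-sdk-go-v2 (the Go equivalent of boto3's describe_stacks(); the stack name is a placeholder):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudformation"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := cloudformation.NewFromConfig(cfg)

	out, err := client.DescribeStacks(context.TODO(), &cloudformation.DescribeStacksInput{
		StackName: aws.String("my-stack"), // placeholder stack name
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, stack := range out.Stacks {
		// RoleARN is empty when the stack ran with the creating user's own permissions.
		fmt.Println(aws.ToString(stack.StackName), aws.ToString(stack.RoleARN))
	}
}

An empty RoleARN therefore tells you the stack relied on the caller's own permissions, which is the first thing to check when creation suddenly starts failing.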
I am running a small Go application inside an EC2 instance. It accesses Amazon SQS as a consumer. I have configured keys in the ~/.aws/credentials file. The EC2 instance has been assigned an IAM role.
Can my go application use the IAM role assigned to the EC2 instance?
If yes, how can that be done through configuration, without a code change?
If a role is configured, do I still need to provide keys somewhere?
If you use github.com/aws/aws-sdk-go-v2/config and its config.LoadDefaultConfig() method to retrieve AWS credentials, then yes: your application will retrieve temporary credentials for the IAM role you assigned.
aws-sdk-go-v2 will retrieve credentials from instance metadata. The detailed retrieval process is described in the official AWS docs; the "How do roles for EC2 instances work" section describes the process as below.
When the application runs, it obtains temporary security credentials from Amazon EC2 instance metadata, as described in Retrieving Security Credentials from Instance Metadata. These are temporary security credentials that represent the role and are valid for a limited period of time.
With some AWS SDKs, the developer can use a provider that manages the temporary security credentials transparently. (The documentation for individual AWS SDKs describes the features supported by that SDK for managing credentials.)
Alternatively, the application can get the temporary credentials directly from the instance metadata of the EC2 instance. Credentials and related values are available from the iam/security-credentials/role-name category (in this case, iam/security-credentials/Get-pics) of the metadata. If the application gets the credentials from the instance metadata, it can cache the credentials.
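If you do want to read the metadata yourself, the SDK ships an IMDS client; here is a minimal sketch (purely illustrative; you don't need this when the SDK manages credentials for you):

package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
)

func main() {
	client := imds.New(imds.Options{})

	// Lists the role name attached to this instance; appending the role
	// name to the path returns the temporary credentials themselves.
	out, err := client.GetMetadata(context.TODO(), &imds.GetMetadataInput{
		Path: "iam/security-credentials/",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Content.Close()

	body, err := io.ReadAll(out.Content)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}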
You can also refer to the aws-sdk-go-v2 documentation for its credential retrieval order.
You don't have to provide keys; aws-sdk-go-v2 will retrieve them from the EC2 instance metadata. One caveat: the shared credentials file (~/.aws/credentials) sits earlier in the default chain than instance metadata, so remove any static keys from it if you want the instance role to be used.
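For completeness, here is a minimal sketch of the Go side, assuming aws-sdk-go-v2 (note that no credentials appear in code or config):

package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	// LoadDefaultConfig walks the default chain: environment variables,
	// the shared config/credentials files, and finally the EC2 instance
	// metadata service, which serves the attached role's temporary
	// credentials and refreshes them automatically.
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	client := sqs.NewFromConfig(cfg)
	_ = client // consume the queue as usual, e.g. client.ReceiveMessage(...)
}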
I am testing the CloudSQL IAM automatic authentication by using IAM service account users. The goal is to deploy a backend service running in the cloud with a service account (SA), which can connect to a CloudSQL database without using password auth.
So this is what I did:
Create a cloud SQL database demo-db via gcloud console
Create a service account sa via gcloud console
Create a backend service and run it in k8s with sa as the account, with the help of cloud-sql-jdbc-socket-factory, and have the backend service run Liquibase schema migrations so it can create tables
Create an IAM service account user user-sa in demo-db via gcloud console
Create a normal built-in user user-db (with a password) in demo-db via gcloud console (for my local login/psql to the db)
Deploy the backend to production, and it connected (with user-sa) and created tables in demo-db (with liquibase)
And this is a problem I have now:
When I use cloud-sql-proxy to log in to demo-db locally via psql, with the user user-db and its password, I realise that I cannot view or select the tables created by the backend service (via user-sa).
Then how can I view the data in the database as a developer?
PS: For now I don't have access to user-sa or sa's secret/key files, as they are managed by our infra team. I only have ownership of demo-db, and I could grant access rights on my db to user-sa ...
I had the same issue on AWS's Postgres RDS.
You (as the backend/deployment service) basically have to create a group role, place your deployment user (user-sa) into that role, and also your app user (user-db).
Then your deployment script will have to run ALTER <OBJECT> OWNER TO <ROLE>. Now every role or user in that group will have access. You'll have to do this for functions, tables, etc.
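As an illustration, here is a minimal sketch of those statements run from a Go deployment step with database/sql and the lib/pq driver (the group role name app_owner, the table name, and the DSN are placeholders of mine, not from the question):

package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver; any driver works
)

func main() {
	// Connect as the deployment user that owns the schema objects.
	db, err := sql.Open("postgres", "host=127.0.0.1 dbname=demo-db user=user-sa sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	stmts := []string{
		// A group role that only holds permissions; it cannot log in.
		`CREATE ROLE app_owner NOLOGIN`,
		// Put both the deployment user and the developer user in the group.
		`GRANT app_owner TO "user-sa"`,
		`GRANT app_owner TO "user-db"`,
		// Hand ownership to the group so every member has access.
		`ALTER TABLE my_table OWNER TO app_owner`, // repeat per table, function, etc.
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			log.Fatal(err)
		}
	}
}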
Another alternative is to set default permissions via ALTER DEFAULT PRIVILEGES, but note that those only take effect on new objects. If you add a new db role afterward and want to grant it permissions to a table that already existed, you'd still have to add explicit permissions for the new role.
* Note that in Postgres, role and user are interchangeable in commands. In my usage, a role does not log in; it is used to hold a set of permissions. Instead of assigning permissions to individual users, assign them to a group/role, then grant the user membership in that group/role. Ensure that permission inheritance (INHERIT) is enabled on the users and roles for this to work.
I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous, at best, especially given that that resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are 3 types of account you can create with "gcloud sql users create" or the Terraform resource "google_sql_user":
"CLOUD_IAM_USER", "CLOUD_IAM_SERVICE_ACCOUNT" or "BUILT_IN"
The default is the BUILT_IN type if not specified.
CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT accounts are created with NO roles.
We are using these, as integration with IAM is useful in lots of ways (not managing passwords at the database level is a major plus, esp. when used in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts that need a Postgres username and password) for some reason are granted the "cloudsqlsuperuser" role.
Since GCP does not allow the actual superuser role, this is about as privileged as you can get, so to me (and you) it seems a bizarre default.
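If you are scripting this outside Terraform, here is a minimal sketch with the Cloud SQL Admin API's Go client (google.golang.org/api/sqladmin/v1beta4; the project and instance names are placeholders):

package main

import (
	"context"
	"log"

	sqladmin "google.golang.org/api/sqladmin/v1beta4"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials.
	svc, err := sqladmin.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// CLOUD_IAM_USER (and CLOUD_IAM_SERVICE_ACCOUNT) users are created
	// with no database roles, unlike BUILT_IN users, which get
	// cloudsqlsuperuser.
	user := &sqladmin.User{
		Name: "dev@example.com", // the IAM principal's email
		Type: "CLOUD_IAM_USER",
	}
	op, err := svc.Users.Insert("my-project", "my-instance", user).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("operation %s: %s", op.Name, op.Status)
}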
I would need to have a CloudSQL instance created with a particular service account. Trying the API call instances.insert:
POST https://www.googleapis.com/sql/v1beta4/projects/{project}/instances
{
"serviceAccountEmailAddress": "<my account>#managed-gcp.iam.gserviceaccount.com",
"name": "pvtest20200611-3",
"settings": {
"tier": "db-n1-standard-1"
},
"databaseVersion": "MYSQL_5_7"
}
The instance is created but it has a generated svc account (e.g. p754990076948-kf1bsf@gcp-sa-cloud-sql.iam.gserviceaccount.com) instead of mine.
For my SA, I have the storage admin/storage object admin roles assigned (this is what I would need newly created instances to always have). I also added the Cloud SQL admin role. I thought it was a role problem, so I even tried the Project Editor role, but this didn't work.
I have tried MySQL and Postgres db types.
Would you know why my account is not picked up, and why the CloudSQL engine always assigns its own?
What are the requirements/setup for a custom SA to work with a CloudSQL instance?
When you create an instance in Cloud SQL, it will use an automatically generated service account; you won't be able to set a custom one during creation (the serviceAccountEmailAddress field in the API is read-only).
It's possible, however, for you to give access and permissions to a Service Account after the creation. As explained in the official documentation, Granting roles to a service account for specific resources, you can provide specific permissions to your Service Account. You can try using the gcloud command as follows:
gcloud projects add-iam-policy-binding my-project-123 \
--member serviceAccount:my-sa-123@my-project-123.iam.gserviceaccount.com \
--role roles/editor
Besides that, you can also check all your available Service Accounts in the console, to verify if your custom one is there, and even add the permissions via the UI, if you think that's better.
Let me know if the information helped you!
I recently created a VM, but mistakenly gave the default service account Storage: Read Only permissions instead of the intended Read Write under "Identity & API access", so GCS write operations from the VM are now failing.
I realized my mistake, so following the advice in this answer, I stopped the VM, changed the scope to Read Write and started the VM. However, when I SSH in, I'm still getting 403 errors when trying to create buckets.
$ gsutil mb gs://some-random-bucket
Creating gs://some-random-bucket/...
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
How can I fix this? I'm using the default service account, and don't have the IAM permissions to be able to create new ones.
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* (projectnum)-compute@developer.gserviceaccount.com
I suggest you try adding the scope "cloud-platform" to the instance by running the gcloud command below:
gcloud alpha compute instances set-scopes INSTANCE_NAME [--zone=ZONE]
[--scopes=[SCOPE,...]] [--service-account=SERVICE_ACCOUNT]
As the scope, put "https://www.googleapis.com/auth/cloud-platform", since it gives full access to all Google Cloud Platform resources.
Here is the gcloud documentation.
Try creating the Google Cloud Storage bucket with your user account.
Type gcloud auth login and open the link you are provided; once there, copy the code and paste it into the command line.
Then run gsutil mb gs://bucket-name.
The security model has 2 things at play: API scopes and IAM permissions. Access is determined by the AND of them, so you need both an acceptable scope and sufficient IAM privileges to perform a given action.
API scopes are bound to the credentials. They are represented by a URL like https://www.googleapis.com/auth/cloud-platform.
IAM permissions are bound to the identity. These are set up in the Cloud Console's IAM & admin > IAM section.
This means you can have 2 VMs with the same default service account that nonetheless have different levels of access.
For simplicity you generally want to just set the IAM permissions and use the cloud-platform API auth scope.
To check if you have this setup go to the VM in cloud console and you'll see something like:
Cloud API access scopes
Allow full access to all Cloud APIs
When you SSH into the VM, gcloud will by default be logged in as the VM's service account. I'd discourage logging in as yourself; otherwise you more or less break gcloud's configuration of reading the default service account.
Once you have this setup you should be able to use gsutil properly.
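If you want to verify the scopes from inside the VM programmatically, here is a minimal sketch using the cloud.google.com/go/compute/metadata package (my choice of package, not something the answer above prescribes):

package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	// Works only on a GCE VM: it queries the local metadata server.
	if !metadata.OnGCE() {
		log.Fatal("not running on GCE")
	}

	// "default" refers to the VM's attached (default) service account.
	scopes, err := metadata.Scopes("default")
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range scopes {
		fmt.Println(s)
	}
	// After the fix you should see:
	// https://www.googleapis.com/auth/cloud-platform
}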