I have an application with a Postgres CloudSQL instance where the tables were set up by a developer's IAM account. Thus they are owned by that IAM account, and I'd now like to change them so they are owned by an IAM service account instead. However, I'm having a hard time getting REASSIGN OWNED to work; I just get permission errors instead.
I found this somewhat similar documentation entry on how to reassign ownership by doing GRANT "user@domain" TO "serviceaccount@project.iam". However, as soon as I do this, the service account is no longer able to log in, getting FATAL: Cloud SQL IAM user authentication failed for user serviceaccount@project.iam. Revoking the grant allows login again, but I'm then unable to reassign ownership...
When testing on a separate instance, I did get the procedure to work between user accounts, and also from a service account to a user account, but not user account to service account.
Is there either a way for a service account to be able to log in while GRANTed the user account's role, or some other way to do ownership reassignment?
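For reference, this is roughly the sequence I've been attempting; the role names below are placeholders for the real developer IAM user and IAM service account:

-- Run as a sufficiently privileged user (e.g. the default postgres user); names are placeholders.
GRANT "developer@example.com" TO "serviceaccount@project.iam";
-- Intended next step, run as the service account -- which at this point
-- can no longer log in at all:
REASSIGN OWNED BY "developer@example.com" TO "serviceaccount@project.iam";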
Related
I configured Okta Snowflake SSO and assigned users. I also configured SCIM, which has permission to create users, deactivate users, and sync passwords. After configuring SCIM, I am getting errors for existing users: "Automatic provisioning of user to app Snowflake failed. Error while creating user. Conflict. Error reported by remote server. User exists with given user name." The same thing happens when I assign the app to an existing user with the same user name. Is there any way to fix this, or is it best to remove SCIM?
In order for the merge to be successful, the login mapping needs to be exactly the same (the rest gets updated by Okta). So make sure users can log in via SSO first.
You also need to transfer ownership manually. The documentation provides this command:
use role accountadmin;
grant ownership on user <user_name> to role okta_provisioner;
Snowflake SCIM doc
I am testing the CloudSQL IAM automatic authentication by using IAM service account users. The goal is to deploy a backend service running in the cloud with a service account (SA) that can connect to a CloudSQL database without using password auth.
So this is what I did:
Create a cloud SQL database demo-db via gcloud console
Create a service account sa via gcloud console
Create a backend service and run it in k8s with sa as the account, with the help of cloud-sql-jdbc-socket-factory, and have the backend service run liquibase schema migrations so it can create tables
Create an IAM service account user user-sa in demo-db via gcloud console
Create a normal built-in user user-db (with a password) in demo-db via gcloud console (for my local login/psql to the db)
Deploy the backend to production, and it connected (with user-sa) and created tables in demo-db (with liquibase)
And this is a problem I have now:
When I use cloud-sql-proxy to log in to demo-db locally via psql, with user user-db and the password, I realise that I cannot view or select the tables created by the backend service (via user-sa).
Then how can I view the data in the database as a developer?
PS: For now I don't have access to user-sa or sa's secret/key files, as they are managed by our infra team. I only have ownership of demo-db, and I could grant access rights on my db to user-sa ...
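In case it helps, the ownership itself is easy to confirm from psql with a standard catalog query (nothing CloudSQL-specific, and it needs no privileges on the tables themselves):

-- List tables in the public schema together with their owners,
-- to confirm which ones belong to user-sa.
SELECT schemaname, tablename, tableowner
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY tablename;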
I had the same issue on AWS's Postgres RDS.
You (as the backend creation service) basically have to create a role, then place both your deployment role (user-sa) and your app user (user-db) into that role.
Then your deployment script will have to use ALTER <OBJECT> OWNER TO <ROLE>. Now every role or user in that role will have access. You'll have to do this for functions, tables, etc.
Another alternative is to set default permissions via ALTER DEFAULT PRIVILEGES, but note that those only take effect on new objects. If you add a new db role afterward and want to grant it permissions to a table that already existed, you'd still have to add explicit permissions for the new role.
* Note that in Postgres, a role and a user are interchangeable in commands. In my view, a role does not log in, but it can be used to hold a set of permissions. Instead of assigning permissions to individual users, assign them to a group/role, then grant the user membership of that group/role. Ensure that permission inheritance (INHERIT) is enabled on the users and roles for this to work.
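As a rough sketch of what that looks like in SQL (app_owner and my_table are placeholder names; user-sa and user-db are the users from the question):

-- Shared role that both the deployment user and the developer join.
CREATE ROLE app_owner;
GRANT app_owner TO "user-sa";
GRANT app_owner TO "user-db";
-- In the deployment script: hand existing objects over to the shared role
-- (repeat for each table, sequence, function, etc.).
ALTER TABLE my_table OWNER TO app_owner;
-- Optionally, run as user-sa so that future objects it creates are
-- accessible to the shared role as well.
ALTER DEFAULT PRIVILEGES FOR ROLE "user-sa" IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_owner;

After that, anything connecting as user-db can read the tables through its membership of app_owner.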
I'd like to limit the privileges afforded to any given user that I create via the Google Terraform provider. By default, any user created is placed in the cloudsqlsuperuser group, and any new database created has that role/group as owner. This gives any user created via the GCP console or google_sql_user Terraform resource total control over any database that is (or was) created in a similar fashion.
So far, the best we've been able to come up with is creating and altering a user via a single-run k8s job. This seems circuitous, at best, especially given that that resource must then be manually imported later if we want to manage it via Terraform.
Is there a better way to create a user that has privileges limited to a single, application-specific database?
I was puzzled by this behaviour too. It's probably not the answer you want, but if you can use GCP IAM accounts, the user gets created in the PostgreSQL instance with NO roles.
There are 3 types of account you can create with "gcloud sql users create" or the Terraform resource "google_sql_user":
"CLOUD_IAM_USER", "CLOUD_IAM_SERVICE_ACCOUNT" or "BUILT_IN"
The default is the BUILT_IN type if not specified.
CLOUD_IAM_USER and CLOUD_IAM_SERVICE_ACCOUNT accounts get created with NO roles.
We are using these, as integration with IAM is useful in lots of ways (not managing passwords at the database level is a major plus, especially when used in conjunction with the SQL Auth Proxy).
BUILT_IN accounts (i.e. old-school accounts needing a Postgres username and password) are for some reason granted the "cloudsqlsuperuser" role.
In the absence of being allowed the real superuser role on GCP, this is about as privileged as you can get, so to me (and you) it seems a bizarre default.
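As a rough illustration (database, schema, and user names below are placeholders), once a CLOUD_IAM_USER exists you connect to the one application database and grant it only what it needs:

-- Run while connected to the application database (app_db).
GRANT CONNECT ON DATABASE app_db TO "dev-user@example.com";
GRANT USAGE ON SCHEMA public TO "dev-user@example.com";
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO "dev-user@example.com";
-- Optionally stop everyone else connecting to it by default:
REVOKE CONNECT ON DATABASE app_db FROM PUBLIC;

This keeps the IAM user scoped to that single database rather than inheriting cloudsqlsuperuser.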
I am an admin for our AWS environment and wanted to use PowerShell, but I get these errors whenever I try to do anything:
Get-EC2Region : You are not authorized to perform this operation.
or
Get-CSDomain : User: arn:aws:iam::123456789012:user/Jane.Doe is not authorized to perform: cloudsearch:DescribeDomains on resource: arn:aws:cloudsearch:eu-west-1:123456789012:domain/*
In my personal AWS account, everything works fine. We had a look at our policies, and all four of us admins can do everything using the web console.
I have regenerated my access keys just in case that might be it, but there was no change.
So I guess my question is: do we need to implement some CLI-specific policies to allow access via PowerShell?
You need to make sure you are using the correct AWS user credentials and that the correct IAM policy is in place to allow the given user to perform the operation.
There are no CLI-specific policies for PowerShell. The user simply has not been granted those permissions.
A good test would be to grant the user ec2:* and cloudsearch:* and confirm access works. Then you can tighten down the permissions, having confirmed that the user can successfully be given a more permissive set.
I want to get details of my client's Azure subscription, but I do not want to ask the client for special permissions.
What I need is the bare minimum from my client so that I can log in from PowerShell or the REST API and read the status of runbook jobs.
If I log in with the subscription's admin account, I can easily get those details, but you understand it is not possible to have my client's admin account credentials.
Please suggest a workaround.
What you need to do is create a user in Azure Active Directory and grant that user specific rights using the Azure Portal, PowerShell, CLI, or SDK.
Say, read all, or read properties of the desired Automation account. If you want a true minimum, you would need to create a custom role first.
https://azure.microsoft.com/en-us/documentation/articles/role-based-access-control-custom-roles/
If your client placed specific resources within a Resource Group, they may grant you permissions on just that Resource Group (including read-only permissions). This would allow you to have access to needed resources, without having access to other areas of their subscription.