"Access Denied. Provided scope(s) are not authorized" error when trying to make objects public using the REST API - rest

I am attempting to set permissions on individual objects in a Google Cloud Storage bucket to make them publicly viewable, following the steps indicated in Google's documentation. When I try to make these requests using our application service account, it fails with HTTP status 403 and the following message:
Access denied. Provided scope(s) are not authorized.
Other requests work fine. When I try to do the same thing, but providing a token for my personal account, the PUT request to the object's ACL works... about 50% of the time (the rest of the time it is a 503 error, which may or may not be related).
Changing the IAM policy for the service account to match mine - it normally has Storage Admin and some other incidental roles - doesn't help, even if I give it the overall Owner IAM role, which is what I have.
Neither the XML API nor the JSON API makes a difference. The fact that the request sometimes works with my personal credentials suggests to me that it is not incorrectly formed, but there must be something else I've overlooked so far. Any ideas?

Check the access scopes of the service account in case you are using the default Compute Engine service account. By default the scopes are restricted, and for GCS the default is read-only. If old credentials are cached, use rm -r ~/.gsutil to clear the cache.

When trying to access GCS from a GCE instance and getting this error message ...
the default scope is devstorage.read_only, which prevents all write operations.
It is not clear whether the scope https://www.googleapis.com/auth/cloud-platform is required, given that the scope https://www.googleapis.com/auth/devstorage.read_only is granted by default (e.g. to read startup scripts). The scope should rather be https://www.googleapis.com/auth/devstorage.read_write.
And one can use gcloud beta compute instances set-scopes to edit the scopes of an instance:
gcloud beta compute instances set-scopes $INSTANCE_NAME \
--project=$PROJECT_ID \
--zone=$COMPUTE_ZONE \
--scopes=https://www.googleapis.com/auth/devstorage.read_write \
--service-account=$SERVICE_ACCOUNT
One can also pass the known alias names for scopes, e.g. --scopes=cloud-platform. The command must be run from outside the instance, because of permissions, and the instance must be shut down in order to change the service account.
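To verify which scopes an instance's default service account actually has, you can query the metadata server from inside the VM. A minimal Python sketch (the metadata path and header are standard GCE conventions; nothing else is assumed):
import urllib.request
# GCE metadata path listing the OAuth scopes granted to the instance's
# default service account (only reachable from inside the VM).
URL = ("http://metadata.google.internal/computeMetadata/v1/"
       "instance/service-accounts/default/scopes")
req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req) as resp:
    scopes = resp.read().decode().split()
print("\n".join(scopes))
# If only .../devstorage.read_only is listed, GCS writes will be rejected
# regardless of the service account's IAM roles.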

Follow the documentation you provided, taking into account these points:
The access control model for the bucket has to be fine-grained (not uniform).
In order to make objects publicly available, make sure the bucket does not have public access prevention enabled.
Grant the service account the appropriate permissions on the bucket. The Storage Legacy Object Owner role (roles/storage.legacyObjectOwner) is needed to edit object ACLs, as indicated here. This role can be granted on individual buckets, not on projects.
Create the JSON file as indicated in the documentation.
Use gcloud auth application-default print-access-token to get an authorization access token and use it in the API call. The API call should look like:
curl -X POST --data-binary @JSON_FILE_NAME.json \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json" \
"https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME/acl"

You need to add the cloud-platform OAuth scope when you create the instance. See: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create#--scopes
Either select "Allow full access to all Cloud APIs" or select the fine-grained approach

So, years later, it turns out the problem is that "scope" is used by the Google Cloud API to refer to two subtly different things. One is the set of access scopes available to the service account, which is what I (and most of the other people who answered) kept focusing on, but the problem turned out to be something else. The Python class google.auth.credentials.Credentials, used by the various Google Cloud client classes to authenticate, also has permission scopes used for OAuth. You can see where this is going: the client I was using was created with a default OAuth scope of https://www.googleapis.com/auth/devstorage.read_write, but making something public requires the scope https://www.googleapis.com/auth/devstorage.full_control. Adding this scope to the OAuth credential request makes setting public permissions on objects work.
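For reference, a minimal sketch of the fix, assuming the client is built from a service-account key file with google-auth (the key file path, project and object names are placeholders; the important part is the scopes argument):
from google.oauth2 import service_account
from google.cloud import storage
# Request full_control explicitly; the read_write default is not enough
# to change object ACLs (i.e. to make objects public).
credentials = service_account.Credentials.from_service_account_file(
    "service-account-key.json",   # placeholder path
    scopes=["https://www.googleapis.com/auth/devstorage.full_control"],
)
client = storage.Client(credentials=credentials, project="my-project")   # placeholder project
client.bucket("BUCKET_NAME").blob("OBJECT_NAME").make_public()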

Related

Keycloak Huge JWT Token of master admin

We have a realm-per-customer, multi-tenant architecture (expected to reach around 500 realms). We use a service account, i.e. a client in the master realm, that manages the customer realms. The problem is the huge JWT token of the master realm admin user/service account that manages all the realms: it grows as the number of realms increases, because each new realm adds 20+ client roles. What are the different options we have to keep the token size low?
EDIT:
PS: Reducing roles is not an option either. The service account is an admin of the Keycloak admin portal and needs to manage all the realms, so it needs the manage-realm etc. roles for every realm. The Keycloak admin portal will not, for example, allow deleting a realm if the account does not have the delete-realm role.
To reduce the size of the token you can use the following strategies:
restrict the roles that the client can access;
use client scopes to narrow down the claims that will be added to the token.
For the first option, you can go to the client in question, tab Scope, disable Full Scope Allowed, and then choose only the roles that the client is really interested in.
For the second option, you can go to Client Scopes, create a scope, save it, then go to its Scope tab and add the roles that will be part of that scope. Then every time you request a token for the client in question you can send the scope as a parameter of that request. This way only the roles that belong to that scope will be included in the token.
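For example, if you created an optional client scope named realm-a-admin (a hypothetical name) and assigned it to a confidential client with service accounts enabled, you can request it explicitly when fetching the token. A sketch using Python requests (realm, client and scope names are all placeholders for your setup):
import requests
token_url = "https://keycloak.example.com/auth/realms/master/protocol/openid-connect/token"
resp = requests.post(token_url, data={
    "grant_type": "client_credentials",
    "client_id": "realm-manager",        # hypothetical confidential client
    "client_secret": "<client-secret>",
    "scope": "realm-a-admin",            # only roles mapped to this client scope end up in the token
})
resp.raise_for_status()
access_token = resp.json()["access_token"]
print(len(access_token))  # compare the token size with and without the scope parameter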
I think you are running into this https://issues.redhat.com/browse/KEYCLOAK-1268.
If you look through the discussion there you will see
Tokens created for admin-cli or security-admin-console no longer have
any roles embedded within them. The Admin REST API now checks the
"aud" claim. If the audience is one of those clients, then it ignores
claims in token and just accesses the UserModel directly to determine
if admin has specific permissions.
Use admin-cli or security-admin-console for your client_id. Otherwise you will get every role for every realm in your token.
In order to use either of those clients with a token request, you have to make it confidential and enable Direct Access Grants.
Then something like this will work and give you a manageable token:
curl -i -X POST \
-H "Content-Type:application/x-www-form-urlencoded" \
-d "grant_type=password" \
-d "client_id=admin-cli" \
-d "client_secret=<admin-cli-secret>" \
-d "username=my_admin_user" \
-d "password=<my_admin_user_password>" \
'http://master.foo.com/auth/realms/master/protocol/openid-connect/token'
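The same request with Python requests, if you need it in a script (host and credentials are the placeholders from the curl above):
import requests
resp = requests.post(
    "http://master.foo.com/auth/realms/master/protocol/openid-connect/token",
    data={
        "grant_type": "password",
        "client_id": "admin-cli",
        "client_secret": "<admin-cli-secret>",
        "username": "my_admin_user",
        "password": "<my_admin_user_password>",
    },
)
resp.raise_for_status()
token = resp.json()["access_token"]
print(f"token length: {len(token)} characters")  # should now stay manageable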

Keycloak token exchange across realms

We use Keycloak 12.02 for this test.
The idea is that we have a lot of customers, that we all have in their own realms. We want to be able to impersonate a user in any non-master realm for an admin/support user in the master realm.
The flow would be to:
login using a super-user/password to login into the master realm
get a list of all available realms and their users
craft a request to exchange the current access token with a new access token for that specific user.
It is the last step I cannot get to work.
Example:
Login to master realm
token=$(curl -s -d 'client_id=security-admin-console' \
-d 'username=my-super-user' -d 'password=my-super-pass' \
-d 'grant_type=password' \
'https://login.example.net/auth/realms/master/protocol/openid-connect/token' | jq -r .access_token)
(we now have an access token for the super-user in the master realm)
The Keycloak server has enabled token exchange (-Dkeycloak.profile.feature.token_exchange=enabled) as described here https://www.keycloak.org/docs/latest/securing_apps/#_token-exchange.
Attempt to impersonate a user in another realm (not master):
curl -s -X POST "https://login.example.net/auth/realms/some_realm/protocol/openid-connect/token" \
-H "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
-d 'client_id=some_client' \
-d "requested_subject=some_user" \
-d "subject_token=$token"
However, this does not work. The result is: {"error":"invalid_token","error_description":"Invalid token"}
(Doing this inside a single realm works.)
What am I doing wrong here? This seems like a very normal feature to utilize in a real-life deployment, so any help is much appreciated!
UPDATE:
First of all, I found the very same use-case here: https://lists.jboss.org/pipermail/keycloak-user/2019-March/017483.html
Further, I can get it to work by jumping through some major hoops. As described above, one can use the broker client in the master realm as an identity provider:
Login as super-user adminA -> TokenA
Use TokenA to get a new external token, TokenExt, from the master identity provider.
Use TokenExt to do a token exchange for the user you want to impersonate
The caveat with the above is that the user adminA is created in each of the realms you log into with this method, so still not ideal.
As far as I know, what you are describing is not possible. I'm wondering where you are, more than a year later... did you solve your issue?
Before going further, note that I have found the Keycloak Discourse a good forum for Keycloak questions: https://keycloak.discourse.group/
Second, this is what I understand: for Keycloak, two realms or two different Keycloak instances are the same thing. There is nothing in common; they are two completely different identity providers. So any reasoning that assumes shared trust or shared users between realms will not work.
For logging in to the other realm, you need a token that is trusted. There is no reason for the other realm to trust the master realm. The way to establish that trust is to set up the master realm client as an identity provider in the other realm (I understand that this is what you do not want to do), so that tokens signed by the master realm will be trusted by the other realm.
And once you have that set up, I have not seen any other way of exchanging than having the token exchange create a federated "admin" user in the other realm (I configure it to be created each time from scratch, to avoid any sync). Also, two mappings are going to come into play for creating the resulting JWT: the identity provider mapping and the client mapping.
If this doesn't match with your findings, please correct me.
Ah yes: there is also the question of using token exchange as defined in OAuth, with the may_act claim, which would be perfect here. But it would come after the exchange between realms, in addition. See https://datatracker.ietf.org/doc/html/rfc8693#section-4.4
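For reference, the decoded token payload with that claim would look roughly like this (values are purely illustrative, following the example in RFC 8693 section 4.4):
# Illustrative decoded access-token payload with the RFC 8693 "may_act" claim:
# the party named in may_act is allowed to act on behalf of the subject.
payload = {
    "sub": "user-in-customer-realm",        # the user being impersonated
    "may_act": {"sub": "admin-in-master"},  # the admin allowed to act for them
}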
EDIT: to "create the user each time from scratch"
go to "identity providers" / / "settings"
select "sync mode" to "force"
This is the relevant extrant from the tooltip:
The sync mode determines when user data will be synced using the
mappers. Possible values are: 'legacy' to keep the behaviour before
this option was introduced, 'import' to only import the user once
during first login of the user with this identity provider, 'force' to
always update the user during every login with this identity
provider.
so when you choose "force", basically the user will be overwritten at each login.
Ok so it's not really a creation, but as close as you can get :-)
The idea here is to not care about it, which is fine for prototyping. But I guess that in production you may want to optimize this.

Recovering access after initially provisioning wrong scopes for an instance

I recently created a VM, but mistakenly gave the default service account Storage: Read Only permissions instead of the intended Read Write under "Identity & API access", so GCS write operations from the VM are now failing.
I realized my mistake, so following the advice in this answer, I stopped the VM, changed the scope to Read Write and started the VM. However, when I SSH in, I'm still getting 403 errors when trying to create buckets.
$ gsutil mb gs://some-random-bucket
Creating gs://some-random-bucket/...
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
How can I fix this? I'm using the default service account, and don't have the IAM permissions to be able to create new ones.
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* (projectnum)-compute@developer.gserviceaccount.com
I would suggest you try adding the "cloud-platform" scope to the instance by running the gcloud command below:
gcloud alpha compute instances set-scopes INSTANCE_NAME [--zone=ZONE] \
[--scopes=[SCOPE,…]] [--service-account=SERVICE_ACCOUNT]
As the scope, put "https://www.googleapis.com/auth/cloud-platform", since it gives full access to all Google Cloud Platform resources.
Here is the gcloud documentation.
Try creating the Google Cloud Storage bucket with your user account.
Type gcloud auth login and access the link you are provided, once there, copy the code and paste it into the command line.
Then do gsutil mb gs://bucket-name.
The security model has 2 things at play, API Scopes and IAM permissions. Access is determined by the AND of them. So you need an acceptable scope and enough IAM privileges in order to do whatever action.
API Scopes are bound to the credentials. They are represented by a URL like, https://www.googleapis.com/auth/cloud-platform.
IAM permissions are bound to the identity. These are set up in the Cloud Console's IAM & admin > IAM section.
This means you can have 2 VMs with the default service account but both have different levels of access.
For simplicity you generally want to just set the IAM permissions and use the cloud-platform API auth scope.
To check if you have this setup go to the VM in cloud console and you'll see something like:
Cloud API access scopes
Allow full access to all Cloud APIs
When you SSH into the VM, by default gcloud will be logged in as the service account on the VM. I'd discourage logging in as yourself; otherwise you more or less break gcloud's configuration for using the default service account.
Once you have this setup you should be able to use gsutil properly.
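To see both halves at once from inside the VM, you can fetch the instance's access token from the metadata server and ask the tokeninfo endpoint which scopes it carries. A sketch (the metadata path and tokeninfo endpoint are standard; the rest is illustrative):
import json
import urllib.parse
import urllib.request
# 1. Get an access token for the VM's default service account (GCE only).
tok_req = urllib.request.Request(
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token",
    headers={"Metadata-Flavor": "Google"},
)
token = json.load(urllib.request.urlopen(tok_req))["access_token"]
# 2. Ask tokeninfo which OAuth scopes the token actually has.
info_url = ("https://www.googleapis.com/oauth2/v3/tokeninfo?"
            + urllib.parse.urlencode({"access_token": token}))
print(json.load(urllib.request.urlopen(info_url)).get("scope"))
# IAM roles are checked separately (IAM & admin > IAM in the console);
# a request succeeds only if both the scope and the IAM role allow it.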

Adding roles to service accounts on Google Cloud Platform using REST API

I want to create a service account on GCP using a python script calling the REST API and then give it specific roles - ideally some of these, such as roles/logging.logWriter.
First, I make a request to create the account, which works fine, and I can see the account in the Console under IAM.
Second, I want to give it the role, and this seems like the right method. However, it is not accepting roles/logging.logWriter, saying: HttpError 400, "Role roles/logging.logWriter is not supported for this resource."
Conversely, if I set the desired policy in the console and then try the getIamPolicy method (using the gcloud tool), all I get back is the response etag: ACAB, with no mention of the actual role I set. Hence I think these roles refer to different things.
Any idea how to go about scripting a role/scope for a service account using the API?
You can grant permissions to a GCP service account in a GCP project without having to rewrite the entire project policy!
Use the gcloud projects add-iam-policy-binding ... command for that (docs).
For example, given the environment variables GCP_PROJECT_ID and GCP_SVC_ACC the following command grants all privileges in the container.admin role to the chosen service account:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${GCP_SVC_ACC} \
--role=roles/container.admin
To review what you've done:
$ gcloud projects get-iam-policy $GCP_PROJECT_ID \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members:${GCP_SVC_ACC}"
Output:
ROLE
roles/container.admin
(or more roles, if those were granted before)
Notes:
The environment variable GCP_SVC_ACC is expected to contain the email notation for the service account.
Kudos to this answer for the nicely formatted readout.
You appear to be trying to set a role on the service account (as a resource). That's for setting who can use the service account.
If you want to give the service account (as an identity) a particular role on the project and its resources, see this method: https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy
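If you want to stay with the REST API from Python rather than gcloud, the usual pattern against that method is a read-modify-write of the project policy. A sketch with the google-api-python-client discovery client (project ID, service-account email and role are placeholders):
from googleapiclient import discovery   # pip install google-api-python-client
PROJECT_ID = "my-project"                                           # placeholder
MEMBER = "serviceAccount:my-sa@my-project.iam.gserviceaccount.com"  # placeholder
ROLE = "roles/logging.logWriter"
crm = discovery.build("cloudresourcemanager", "v1")   # uses application-default credentials
# Read the current project policy, append a binding, write it back.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
policy.setdefault("bindings", []).append({"role": ROLE, "members": [MEMBER]})
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()
The returned policy includes an etag, so sending the whole policy back preserves the concurrency check; roles/logging.logWriter is accepted here because the resource is the project, not the service account.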

gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE

I am logged in to a GCE instance via SSH. From there I would like to access the Storage with the help of a Service Account:
GCE> gcloud auth list
Credentialed accounts:
- 1234567890-compute@developer.gserviceaccount.com (active)
I first made sure that this service account is flagged "Can edit" in the permissions of the project I am working in. I also made sure to give it the WRITE ACL on the bucket I would like it to copy a file to:
local> gsutil acl ch -u 1234567890-compute@developer.gserviceaccount.com:W gs://mybucket
But then the following command fails:
GCE> gsutil cp test.txt gs://mybucket/logs
(I also made sure that "logs" is created under "mybucket").
The error message I get is:
Copying file://test.txt [Content-Type=text/plain]...
AccessDeniedException: 403 Insufficient Permission 0 B
What am I missing?
One other thing to look for is to make sure you set up the appropriate scopes when creating the GCE VM. Even if a VM has a service account attached, it must be assigned devstorage scopes in order to access GCS.
For example, if you had created your VM with devstorage.read_only scope, trying to write to a bucket would fail, even if your service account has permission to write to the bucket. You would need devstorage.full_control or devstorage.read_write.
See the section on Preparing an instance to use service accounts for details.
Note: the default Compute Engine service account has very limited scopes (including read-only access to GCS). This is done because the default service account has Project Editor IAM permissions. If you use a user-created service account this is not typically a problem, since user-created service accounts get access to all scopes by default.
After adding the necessary scopes to the VM, gsutil may still be using cached credentials which don't have the new scopes. Delete ~/.gsutil before trying the gsutil commands again. (Thanks to @mndrix for pointing this out in the comments.)
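Once the scopes are fixed and the cache cleared, you can sanity-check write access from the VM without going through gsutil at all. A minimal sketch with the Python client (bucket and object names are placeholders):
from google.cloud import storage   # pip install google-cloud-storage
# On GCE this picks up the VM's default service account and its scopes.
client = storage.Client()
blob = client.bucket("mybucket").blob("logs/test.txt")   # placeholder names
blob.upload_from_string("hello from the VM\n")
print("uploaded", blob.name)
# A 403 here means the scopes or the IAM/ACL grants are still insufficient.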
You have to log in with an account that has the permissions you need for that project:
gcloud auth login
gsutil config -b
Then surf to the URL it provides,
[ CLICK Allow ]
Then copy the verification code and paste it into the terminal.
Stop the VM.
Go to VM instance details.
Under "Cloud API access scopes" select "Allow full access to all Cloud APIs", then click "Save".
Restart the VM and delete ~/.gsutil .
I have written an answer to this question since I cannot post comments:
This error can also occur if you're running the gsutil command with a sudo prefix in some cases.
After you have created the bucket, go to the Permissions tab, add your email, and set the Storage Admin permission.
Access VM instance via SSH >> run command: gcloud auth login and follow the steps.
Ref: https://groups.google.com/d/msg/gce-discussion/0L6sLRjX8kg/kP47FklzBgAJ
So I tried a bunch of things trying to copy from GCS bucket to my VM.
Hope this post helps someone.
Via the SSHed connection, running the following command:
sudo gsutil cp gs://[BUCKET_NAME]/[OBJECT_NAME] [OBJECT_DESTINATION_IN_LOCAL]
Got this error:
AccessDeniedException: 403 Access Not Configured. Please go to the Google Cloud Platform Console (https://cloud.google.com/console#/project) for your project, select APIs and Auth and enable the Google Cloud Storage JSON API.
What fixed this was following the "Activating the API" section mentioned in this link:
https://cloud.google.com/storage/docs/json_api/
Once I activated the API, I authenticated myself in the SSHed window via
gcloud auth login
Following the authentication procedure, I was finally able to download from the Google Storage bucket to my VM.
PS
I did make sure to:
1. Make sure that gsutil is installed on my VM instance.
2. Go to my bucket, go to the Permissions tab, add the desired service accounts, and set the Storage Admin permission/role.
3. Make sure my VM had the proper Cloud API access scopes:
From the docs:
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes
You need to first stop the instance -> go to the edit page -> go to "Cloud API access scopes" and choose "Storage full access" or "read/write", or whatever you need it for.
Changing the service account and access scopes for an instance
If you want to run the VM as a different identity, or you determine that the
instance needs a different set of scopes to call the required APIs,
you can change the service account and the access scopes of an
existing instance. For example, you can change access scopes to grant
access to a new API, or change an instance so that it runs as a
service account that you created, instead of the Compute Engine
Default Service Account.
To change an instance's service account and access scopes, the
instance must be temporarily stopped. To stop your instance, read the
documentation for Stopping an instance. After changing the service
account or access scopes, remember to restart the instance. Use one of
the following methods to change the service account or access scopes
of the stopped instance.
Change the permissions of the bucket.
Add a user for "allUsers" and give it "Storage Admin" access.