I tried to set the Cache-Control header for a Google Cloud Storage bucket. It gives me a 400: Invalid argument error without any explanation of what exactly is wrong. Any ideas?
Example:
gsutil setmeta -R -h 'Cache-Control:public, max-age=10000, no-transform' gs://example.com/stylesheets/
Setting metadata on gs://example.com/stylesheets/site-77ee6060.css
BadRequestException: 400 Invalid argument
Tried with different max-age, with or without no-transform, same result.
The bucket is configured as a website.
I had this issue, and discovered that I did not have OWNER permission on the object anymore (someone else on my team had edited the object).
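If you hit the same thing, one way to check and (if you still can) restore it is with gsutil's acl command; the email below is a placeholder for your own account:
gsutil acl get gs://example.com/stylesheets/site-77ee6060.css
gsutil acl ch -u your-account@example.com:O gs://example.com/stylesheets/site-77ee6060.css
Note that the second command itself needs sufficient rights on the object or bucket.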
I am attempting to set permissions on individual objects in a Google Cloud Storage bucket to make them publicly viewable, following the steps indicated in Google's documentation. When I try to make these requests using our application service account, it fails with HTTP status 403 and the following message:
Access denied. Provided scope(s) are not authorized.
Other requests work fine. When I try to do the same thing but by providing a token for my personal account, the PUT request to the object's ACL works... about 50% of the time (the rest of the time it is a 503 error, which may or may not be related).
Changing the IAM policy for the service account to match mine - it normally has Storage Admin and some other incidental roles - doesn't help, even if I give it the overall Owner IAM role, which is what I have.
Neither the XML API nor the JSON API makes a difference. The fact that the request sometimes works with my personal credentials suggests the request itself is not malformed, so there must be something else I've overlooked so far. Any ideas?
Check the scopes of the service account in case you are using the default Compute Engine service account. By default its scopes are restricted, and for GCS the access is read-only. If stale credentials seem to be involved, you can also clear gsutil's cached state with rm -r ~/.gsutil.
When trying to access GCS from a GCE instance and getting this error message ...
the default scope is devstorage.read_only, which prevents all write operations.
It is not clear whether the scope https://www.googleapis.com/auth/cloud-platform is required, given that https://www.googleapis.com/auth/devstorage.read_only is granted by default (e.g. to read startup scripts). The scope you need is rather https://www.googleapis.com/auth/devstorage.read_write.
And one can use gcloud beta compute instances set-scopes to edit the scopes of an instance:
gcloud beta compute instances set-scopes $INSTANCE_NAME \
--project=$PROJECT_ID \
--zone=$COMPUTE_ZONE \
--scopes=https://www.googleapis.com/auth/devstorage.read_write \
--service-account=$SERVICE_ACCOUNT
One can also pass the known alias names for scopes, e.g. --scopes=cloud-platform. The command must be run from outside the instance (because of permissions), and the instance must be shut down in order to change its service account or scopes.
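To verify what an instance ended up with, you can inspect its service accounts and scopes, e.g.:
gcloud compute instances describe $INSTANCE_NAME \
    --project=$PROJECT_ID \
    --zone=$COMPUTE_ZONE \
    --format="yaml(serviceAccounts)"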
Follow the documentation you provided, taking into account these points:
The Access Control system for the bucket has to be Fine-grained (not Uniform).
In order to make objects publicly available, make sure the bucket does not have the public access prevention enabled. Check this link for further information.
Grant the service account with the appropriate permissions in the bucket. The Storage Legacy Object Owner role (roles/storage.legacyObjectOwner) is needed to edit objects ACLs as indicated here. This role can be granted for individual buckets, not for projects.
Create the json file as indicated in the documentation.
Use gcloud auth application-default print-access-token to get an authorization access token and use it in the API call. The API call should look like:
curl -X POST --data-binary @JSON_FILE_NAME.json \
-H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
-H "Content-Type: application/json" \
"https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME/acl"
You need to add the OAuth scope cloud-platform when you create the instance. See: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create#--scopes
Either select "Allow full access to all Cloud APIs" or select the fine-grained approach
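From the command line, the equivalent at creation time would be something like this (instance name and zone are placeholders):
gcloud compute instances create my-instance \
    --zone=us-central1-a \
    --scopes=https://www.googleapis.com/auth/cloud-platform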
So, years later, it turns out the problem is that "scope" is used by the Google Cloud API to refer to two subtly different things. One is the set of access scopes available to the service account, which is what I (and most of the other people who answered) kept focusing on, but the problem turned out to be something else. The Python class google.auth.credentials.Credentials, used by various Google Cloud client classes to authenticate, also has permission scopes used for OAuth. You can see where this is going: the client I was using was being created with a default OAuth scope of 'https://www.googleapis.com/auth/devstorage.read_write', but making something public requires the scope 'https://www.googleapis.com/auth/devstorage.full_control'. Adding this scope to the OAuth credential request means that setting public permissions on objects now works.
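A quick way to see which OAuth scopes a particular access token actually carries (as opposed to which roles the account holds) is Google's tokeninfo endpoint; for example, for your local gcloud credentials:
curl "https://www.googleapis.com/oauth2/v3/tokeninfo?access_token=$(gcloud auth print-access-token)"
The scope field in the response is what the Storage calls are authorized against.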
I'm the owner of the account and project.
I log in using the Google Cloud SDK and try the following command:
gsutil -m setmeta -h "Cache-Control:public, max-age=3600" gs://bucket/**/*.*
I get the following error for some of the files:
AccessDeniedException: 403 <owner@email.com> does not have storage.objects.update access to <filePath>
Most of the files are updated, but some are not. Because I have a lot of files, if 10% are not updated, that means a few gigs of data is not updated.
Any idea why this happens with an owner account and how to fix this?
If the Access Control on your bucket is set to Uniform, you need to add permissions to it even if you are the project owner.
For example:
I had a test file in a bucket, and when I tried to access it I got an "access required" popup.
I granted my Owner account the "Storage Object Admin" role in the bucket's Permissions tab, and now I can access it freely.
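The command-line equivalent would be roughly the following; the email is a placeholder for your own account:
gsutil iam ch user:your-account@example.com:roles/storage.objectAdmin gs://bucket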
Here you have more info about Project Level Roles vs Bucket Level Roles.
Let me know.
I have been running a batch file to pull files from a Google Cloud Storage bucket that was created by someone else, and it had been working in the past. However, now I'm getting an error message stating:
"ACCESS DENIED EXCEPTION: 403 tim@gmail.com does not have storage.objects.list access to dcct_-dcm_account870"
What can I do to resolve it?
I just found out the solution to this issue.
I noticed that ****@gmail.com has left the company, and I had to reconfigure gsutil to give access to myself, following the previous answer linked below:
gsutil cors set command returns 403 AccessDeniedException
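For anyone hitting the same thing, reconfiguring gsutil to use your own account typically looks like this (the account email is a placeholder; the bucket name is the one from the error message):
gcloud auth login your-account@example.com
gcloud config set account your-account@example.com
and, if the account still lacks the permission, a project admin can grant it list/read access on the bucket, e.g.:
gsutil iam ch user:your-account@example.com:roles/storage.objectViewer gs://dcct_-dcm_account870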
I have been using the Google Speech API to transcribe audio to text from my PHP app (using the Google Cloud PHP Client) for several months without any problem. But my calls have now started to return 403 errors with status "PERMISSION_DENIED" and message "The caller does not have permission".
I'm using the Speech API together with Google Storage. I'm authenticating using a service account and sending my audio data to Storage. That's working; the file gets uploaded. So I understand - but I might be wrong - that "the caller" does not have permission to then read the audio data from Storage.
I've been playing with permissions through the Google Console without success. I've read the docs but am quite confused. The service account I am using (I guess this is "the caller"?) has owner permissions on the project. And everything used to work fine, I haven't changed a thing.
I'm not posting code because if I understand correctly my app code isn't the issue - it's rather my Google Cloud settings. I'd be grateful for any idea or clarifications of concepts!
Thanks.
Being an owner of the project doesn't necessarily imply that the service account has read permission on the object. It's possible that the object was uploaded by another account that specified a private ACL or similar.
Make sure that the service account has access to the object by giving it the right permissions on the entire bucket or on the specific object itself.
You can do so using gsutil acl. More information and additional methods may be found in the official documentation.
For instance, the following command gives READ permission on an object to your service account:
gsutil acl ch -u serviceAccount@domain.com:R gs://bucket/object
And this command gives READ permission recursively on all objects in a bucket to your service account:
gsutil acl ch -r -u serviceAccount@domain.com:R gs://bucket
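You can then confirm the resulting ACL with:
gsutil acl get gs://bucket/object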
In Google Cloud Vision, when you are creating credentials with a service account key, you have to create a role for it, set it to Owner, and grant it full access permissions.
gsutil -m acl -r set public-read gs://my_bucket/
command gives an AccessDeniedException: 403 Forbidden error even though I provide full access to my email ID as owner of my_bucket. I am using the Blobstore API to upload the file in my project. How can I solve this problem?
You probably need to set up Cloud API access for your virtual machine. Currently it needs to be set during the VM creation process by enabling:
Allow full access to all Cloud APIs
To provide access for a VM when you haven't chosen the above setting, you need to recreate the instance with full access, but there is a pending improvement:
Google Cloud Platform Ability to change API access scopes
Once it's done, we will be able to change this setting after shutting down the instance.
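For what it's worth, newer gcloud releases do allow this: with the instance stopped, something along these lines changes its scopes (instance name and zone are placeholders):
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances set-service-account my-instance \
    --zone=us-central1-a \
    --scopes=cloud-platform
# add --service-account=... if the instance should keep a non-default service account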