Overwriting an object in Google Cloud Storage failed - google-cloud-storage

I'm trying to upload a new version of a file to the bucket:
gsutil cp test.txt gs://mybucket/test.txt
and receive a 403 response:
AccessDeniedException: 403 xxx@yyy.iam.gserviceaccount.com does not have storage.objects.delete access to mybucket/test.txt.
The service account does have the Object Creator role.
Is it not enough?

According to the official documentation:
Storage Object Creator
Allows users to create objects. Does not give permission to view, delete, or overwrite objects.
resourcemanager.projects.get
resourcemanager.projects.list
storage.objects.create
Therefore, assign the Storage Object Admin role (roles/storage.objectAdmin) to your service account, because it lacks the storage.objects.delete permission that overwriting an object in a versioned bucket requires.
When you upload a new version of your file to your Cloud Storage bucket, Object Versioning moves the existing object into a noncurrent state.
I reproduced your use case with a service account that has the Object Creator role, on a bucket with uniform access control and versioning enabled, and got the same error message:
service-account.iam.gserviceaccount.com does not have storage.objects.delete access to your-bucket/file
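A minimal way to apply that fix, reusing the bucket and service account names from the question, is to grant the role at the bucket level with gsutil iam ch:
gsutil iam ch serviceAccount:xxx@yyy.iam.gserviceaccount.com:objectAdmin gs://mybucket
Here objectAdmin is the gsutil shorthand for roles/storage.objectAdmin, which includes storage.objects.delete.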

Related

Access specific folder in GCS bucket according to user, using Workload Identity Federation

I have an external identity provider that supports OpenID Connect (OIDC) and want to access Google Cloud Storage (GCS) directly, using a short-lived access token. So I'm using Workload Identity Federation to provide a credential from my external identity provider and get a federated token in exchange.
I have created the workload identity pool and provider and connected a service account to it, which has write access to a certain bucket in GCS.
How can I differentiate access to specific folders in the bucket according to the token provided by my external identity provider? For example, userA should have access only to folderA in the bucket. Can I do this using one service account?
Any help would be highly appreciated.
Folders don't exist in Cloud Storage; it is blob storage, and all objects are stored at the bucket level. For human readability and representation, the / character serves as a folder separator by convention.
Therefore, because directories don't exist, you can't grant any permission on them. The finest granularity is the bucket.
In your use case, you can't grant write access at the folder level, but you can create one bucket per user and grant the impersonated service account access on that user's bucket, as sketched below.
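A sketch of the bucket-per-user approach, using hypothetical bucket and service account names (one service account per user shown here):
gsutil mb gs://myapp-user-a
gsutil iam ch serviceAccount:sa-user-a@my-project.iam.gserviceaccount.com:objectCreator gs://myapp-user-a
Your Workload Identity Federation pool can then map each external identity to the service account that is bound only to that user's bucket.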

Why does gsutil cp require storage.objects.delete on a versioned bucket?

I'm using a service account to upload a file to a Google Cloud Storage bucket that has versioning enabled. I want to keep the service account's privileges minimal; it only ever needs to upload files, so I don't want to give it permission to delete files. But the upload fails (only after streaming everything!) saying it requires the delete permission.
Shouldn't it be creating a new version instead of deleting?
Here's the command:
cmd-that-streams | gsutil cp -v - gs://my-bucket/${FILE}
ResumableUploadAbortException: 403 service-account@project.iam.gserviceaccount.com does not have storage.objects.delete access to my-bucket/file
I've double-checked that versioning is enabled on the bucket:
> gsutil versioning get gs://my-bucket
gs://my-bucket: Enabled
The storage.objects.delete permission is required when executing the gsutil cp command, per the Cloud Storage gsutil commands documentation:
Command: cp
Required permissions:
storage.objects.list* (for the destination bucket)
storage.objects.get (for the source objects)
storage.objects.create (for the destination bucket)
storage.objects.delete** (for the destination bucket)
**This permission is only required if you don't use the -n flag and you insert an object that has the same name as an object that already
exists in the bucket.
The Google docs suggest using -n (do not overwrite an existing file) so that storage.objects.delete isn't required. But your use case relies on versioning and you need to overwrite, so you will have to add storage.objects.delete to the account's permissions. The no-clobber variant is shown below for comparison.
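Applied to the question's command, the -n variant would look like this; it skips the upload instead of overwriting when the object already exists:
cmd-that-streams | gsutil cp -n - gs://my-bucket/${FILE}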
I tested this with a bucket that has versioning enabled and holds one version, using a service account that has the Storage Object Creator and Storage Object Viewer roles.
If you're overwriting an object, regardless of whether or not its parent bucket has versioning enabled, you must have storage.objects.delete permission for that object.
Versioning works such that when you delete the "live" version of an object, that version is marked as a "noncurrent" version (and the timeDeleted field is populated). In order to create a new version of an object when a live version already exists (i.e. overwriting the object), the transaction that happens is:
1. Delete the current version.
2. Create a new version that becomes the "live" or "current" version.
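A quick way to observe this, assuming a versioning-enabled bucket gs://my-bucket and an account holding roles/storage.objectAdmin:
echo v1 | gsutil cp - gs://my-bucket/file    # creates the live version
echo v2 | gsutil cp - gs://my-bucket/file    # needs storage.objects.delete; v1 becomes noncurrent
gsutil ls -a gs://my-bucket/file             # -a lists all generations, current and noncurrent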

Unable to transfer GCS bucket from one account to another

I am trying to create a transfer job in Data Transfer, to copy all files in a bucket belonging to one account to an existing bucket belonging to another account.
I have access to both the source and destination buckets, and I get a "green light" in the wizard, but when I try to run the transfer job I get the following error message:
To complete this transfer, you need the 'storage.buckets.setIamPolicy'
permission for the source bucket. Ask the bucket's administrator to
grant you the required permission and try again.
I have tried applying various roles to the user running the transfer job, but I can't figure out how to overcome this problem.
Can anyone help me on this?
The storage.buckets.setIamPolicy permission can be granted with either the roles/storage.legacyBucketOwner or the roles/iam.securityAdmin role. It may be needed to preserve the permissions applied to the source objects.
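For example, the bucket's administrator could grant that role on the source bucket to the account running the transfer (names hypothetical):
gsutil iam ch user:transfer-admin@example.com:legacyBucketOwner gs://source-bucket
Here legacyBucketOwner is the gsutil shorthand for roles/storage.legacyBucketOwner.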
Permissions for copying an object:
storage.objects.create (for the destination bucket)
storage.objects.delete (for the destination bucket)
storage.objects.get (for the source object)
storage.objects.getIamPolicy (for the source object)
storage.objects.setIamPolicy (for the destination bucket)
Please see:
Cloud IAM > Documentation > Understanding roles
Cloud Storage > Documentation > Reference > Cloud IAM roles
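To illustrate where the object-level IAM permissions in that list come into play: a copy that preserves the source object's ACL uses the -p flag (bucket and object names hypothetical), which is what requires getIamPolicy on the source and setIamPolicy on the destination:
gsutil cp -p gs://source-bucket/object gs://destination-bucket/object
Without -p, the copied object simply receives the destination bucket's default object ACL.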

Can't remove OWNER access to a Google Cloud Storage object

I have a server that writes some data files to a Cloud Storage bucket, using a service account to which I have granted "Storage Object Creator" permissions for the bucket. I want that service account's permissions to be write-only.
The Storage Object Creator permission also allows read access, as far as I can tell, so I wanted to just remove the permission for the objects after they have been written. I thought I could use an ACL to do this, but it doesn't seem to work. If I use
gsutil acl get gs://bucket/object > acl.json
then edit acl.json to remove the OWNER permission for the service account, then use
gsutil acl set acl.json gs://bucket/object
to update the ACL, I find that nothing has changed; the OWNER permission is still there if I check the ACL again. The same thing happens if I try to remove the OWNER permission in the Cloud Console web interface.
Is there a way to remove that permission? Or another way to accomplish this?
You cannot remove the OWNER permissions for the service account that uploaded the object, from:
https://cloud.google.com/storage/docs/access-control/lists#bestpractices
The bucket or object owner always has OWNER permission of the bucket or object.
The owner of a bucket is the project owners group, and the owner of an object is either the user who uploaded the object, or the project owners group if the object was uploaded by an anonymous user.
When you apply a new ACL to a bucket or object, Cloud Storage adds OWNER permission for the bucket or object owner if you omit it.
I have not tried this, but you could upload the objects using one service account (call it SA1), then rewrite the objects using a separate service account (call it SA2), and then delete the originals. SA1 will no longer be the owner, and therefore won't have read permission. SA2 will continue to have both read and write permissions, though; there is no way to prevent the owner of an object from reading it.
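A sketch of that approach with hypothetical names, assuming gsutil picks up the gcloud credentials (as it does when installed through the Cloud SDK):
gcloud auth activate-service-account sa2@my-project.iam.gserviceaccount.com --key-file=sa2-key.json
gsutil mv gs://bucket/object gs://bucket/object.tmp    # mv is a copy plus delete, so SA2 becomes the new owner
gsutil mv gs://bucket/object.tmp gs://bucket/object
This is essentially the same mechanism as the renaming trick in the next answer.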
Renaming the object does the trick.
gsutil mv -p gs://bucket/object gs://bucket/object-renamed
gsutil mv -p gs://bucket/object-renamed gs://bucket/object
The renamer service account will become the object OWNER.
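To verify, re-read the ACL with the same command used earlier:
gsutil acl get gs://bucket/object
The uploading service account should no longer appear as OWNER.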

Google Speech API returns 403 PERMISSION_DENIED

I have been using the Google Speech API to transcribe audio to text from my PHP app (using the Google Cloud PHP Client) for several months without any problem. But my calls have now started to return 403 errors with status "PERMISSION_DENIED" and message "The caller does not have permission".
I'm using the Speech API together with Google Storage. I'm authenticating using a service account and sending my audio data to Storage. That's working; the file gets uploaded. So I understand - but I might be wrong? - that "the caller" does not have permission to then read the audio data from Storage.
I've been playing with permissions through the Google Console without success. I've read the docs but am quite confused. The service account I am using (I guess this is "the caller"?) has owner permissions on the project. And everything used to work fine; I haven't changed a thing.
I'm not posting code because if I understand correctly my app code isn't the issue - it's rather my Google Cloud settings. I'd be grateful for any idea or clarifications of concepts!
Thanks.
Being an owner of the project doesn't necessarily imply that the service account has read permission on the object. It's possible that the object was uploaded by another account that specified a private ACL or similar.
Make sure that the service account has access to the object by giving it the right permissions on the entire bucket or on the specific object itself.
You can do so using gsutil acl. More information and additional methods may be found in the official documentation.
For instance, the following command gives READ permission on an object to your service account:
gsutil acl ch -u serviceAccount@domain.com:R gs://bucket/object
And this command gives READ permission recursively on every object in a bucket:
gsutil acl ch -r -u serviceAccount@domain.com:R gs://bucket
In Google Cloud Vision, when you are creating credentials with a service account key, you have to create a role, set it to Owner, and grant it full access permissions.