Can't remove OWNER access to a Google Cloud Storage object - google-cloud-storage

I have a server that writes some data files to a Cloud Storage bucket, using a service account to which I have granted "Storage Object Creator" permissions for the bucket. I want that service account's permissions to be write-only.
The Storage Object Creator permission also allows read access, as far as I can tell, so I wanted to just remove the permission for the objects after they have been written. I thought I could use an ACL to do this, but it doesn't seem to work. If I use
gsutil acl get gs://bucket/object > acl.json
then edit acl.json to remove the OWNER permission for the service account, then use
gsutil acl set acl.json gs://bucket/object
to update the ACL, I find that nothing has changed; the OWNER permission is still there if I check the ACL again. The same thing happens if I try to remove the OWNER permission in the Cloud Console web interface.
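For reference, the OWNER entry I edit out of acl.json looks roughly like this (the account and project names here are only illustrative):

```json
{
  "entity": "user-my-writer@my-project.iam.gserviceaccount.com",
  "email": "my-writer@my-project.iam.gserviceaccount.com",
  "role": "OWNER"
}
```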
Is there a way to remove that permission? Or another way to accomplish this?

You cannot remove the OWNER permission for the service account that uploaded the object. From https://cloud.google.com/storage/docs/access-control/lists#bestpractices:
The bucket or object owner always has OWNER permission of the bucket or object.
The owner of a bucket is the project owners group, and the owner of an object is either the user who uploaded the object, or the project owners group if the object was uploaded by an anonymous user.
When you apply a new ACL to a bucket or object, Cloud Storage respectively adds OWNER permission to the bucket or object owner if you omit the grants.
I have not tried this, but you could upload the objects using one service account (call it SA1), then rewrite the objects using a separate service account (call it SA2), and delete the originals. SA1 will no longer be the owner, and therefore won't have read permission. SA2 will continue to have both read and write permission, though; there is no way to prevent the owner of an object from reading it.
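A sketch of that idea with gsutil, assuming SA2 already has roles/storage.objectAdmin on the bucket and that you can impersonate it (all bucket, object, and account names here are placeholders):

```shell
# As SA2, copy the object to a temporary name and back again.
# The rewritten object is owned by SA2, so SA1 loses its OWNER grant on it.
gsutil -i sa2@my-project.iam.gserviceaccount.com \
  cp gs://my-bucket/data.csv gs://my-bucket/data.csv.tmp
gsutil -i sa2@my-project.iam.gserviceaccount.com \
  mv gs://my-bucket/data.csv.tmp gs://my-bucket/data.csv
```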

Renaming the object does the trick.
gsutil mv -p gs://bucket/object gs://bucket/object-renamed
gsutil mv -p gs://bucket/object-renamed gs://bucket/object
The renamer service account will become the object OWNER.

Related

Overwriting object in google cloud storage failed

I'm trying to upload a new version of a file to the bucket.
gsutil cp test.txt gs://mybucket/test.txt
and receive a 403 response:
AccessDeniedException: 403 xxx@yyy.iam.gserviceaccount.com does not have storage.objects.delete access to mybucket/test.txt.
Actually, the service account has the Object Creator role. Is that not enough?
According to the official documentation
Storage Object Creator
Allows users to create objects. Does not give permission to view,
delete, or overwrite objects
resourcemanager.projects.get
resourcemanager.projects.list
storage.objects.create
Therefore, please assign the Storage Object Admin role (roles/storage.objectAdmin) to your service account, because it lacks the storage.objects.delete access that overwriting a versioned object requires.
When you upload a new version of your file to your Cloud Storage bucket, Object Versioning moves the existing object into a noncurrent state.
I reproduced your use case with a service account that has the Object Creator role on a bucket with uniform access control and versioning enabled, and got the same error message:
service-account.iam.gserviceaccount.com does not have storage.objects.delete access to your-bucket/file
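Granting the broader role could look like this (the service-account and bucket names are placeholders, and you need permission to change the bucket's IAM policy):

```shell
# Grant Storage Object Admin on the bucket so overwrites
# (create + delete of the noncurrent generation) are allowed.
gsutil iam ch \
  serviceAccount:xxx@yyy.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://mybucket
```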

Unable to transfer GCS bucket from one account to another

I am trying to create a transfer job in Data Transfer, to copy all files in a bucket belonging to one account to an existing bucket belonging to another account.
I have access to both the source and destination buckets, and I get a "green light" in the wizard, but when I try to run the transfer job I get the following error message:
To complete this transfer, you need the 'storage.buckets.setIamPolicy'
permission for the source bucket. Ask the bucket's administrator to
grant you the required permission and try again.
I have tried applying various roles to the user running the transfer job, but I can't figure out how to overcome this problem.
Can anyone help me on this?
The storage.buckets.setIamPolicy permission can be granted with either the roles/storage.legacyBucketOwner or the roles/iam.securityAdmin role. It may be needed to preserve the permissions applied to the source objects.
Permissions for copying an object:
storage.objects.create (for the destination bucket)
storage.objects.delete (for the destination bucket)
storage.objects.get (for the source object)
storage.objects.getIamPolicy (for the source object)
storage.objects.setIamPolicy (for the destination bucket)
Please see:
Cloud IAM > Documentation > Understanding roles
Cloud Storage > Documentation > Reference > Cloud IAM roles
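For example, the source bucket's administrator could grant the legacy role like this (the user and bucket names are placeholders):

```shell
# Give the account running the transfer job setIamPolicy on the source bucket.
gsutil iam ch \
  user:transfer-user@example.com:roles/storage.legacyBucketOwner \
  gs://source-bucket
```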

Objects do not inherit bucket permissions

In GCS storage, when adding permissions to a bucket (NOT the whole project; just a single bucket inside that project), you used to be able to set up the permissions of a bucket so that any NEW objects put in the bucket inherit the bucket's permissions.
In the newest version of the GCS however, we have not been able to figure out how to do this. We can set permissions to a root bucket:
{
  "email": "someuser@someaccount.iam.gserviceaccount.com",
  "entity": "someuser@someaccount.iam.gserviceaccount.com",
  "role": "READER"
}
But then when a new object is placed in that bucket, it does not inherit this role.
Is there a way to either (a) inherit the role, or (b) set an IAM role to the bucket (we have only been able to set an IAM role to the project, not a specific bucket)?
Thanks!
There are five different ways to configure access control options for Cloud Storage buckets. I suggest using Access Control Lists (ACLs) to inherit the role within a single bucket, since ACLs are used when “you need fine-grained control over individual objects”.
To change the permissions on a single bucket inside a project using the Console:
Go to Storage > Browser. Once there you will see the bucket list.
Select the bucket in which you want to change the permissions.
Click on the three vertical dots at the right side and select "Edit bucket permissions".
Type the account that you want to configure and select the desired role.
The described procedure is detailed here, as well as other ways to set ACLs, for example using Cloud Shell. The following command specifies individual grants:
gsutil acl ch -u [USER_EMAIL]:[PERMISSION] gs://[BUCKET_NAME]
Find a list of predefined roles here.
Update 2
Considering this error:
CommandException: user@account.iam.gserviceaccount.com:roles/storage.legacyBucketReader is not a valid ACL change. Allowed permissions are OWNER, WRITER, READER.
And the fact that there are two types of roles involved:
Identity and Access Management (IAM) roles: project-oriented roles for members. “Defines who (identity) has what access (role) for which resource”. Example: gsutil iam ch user:[USER_EMAIL]:objectCreator,objectViewer gs://[BUCKET_NAME]
Access Control Lists (ACLs): grant read or write access to users for individual buckets or objects. Example: gsutil acl ch -u [USER_EMAIL]:READER gs://[BUCKET_NAME]
The command is not working because the two kinds of commands are mixed. For gsutil acl, the only valid permissions are READER, WRITER, OWNER, and Default, as you can see here.
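So the failing command could be expressed in either system, but not in a mix of the two (account and bucket names are placeholders):

```shell
# IAM form: roles go through `gsutil iam ch`
gsutil iam ch \
  serviceAccount:user@account.iam.gserviceaccount.com:roles/storage.legacyBucketReader \
  gs://my-bucket

# ACL form: only READER/WRITER/OWNER go through `gsutil acl ch`
gsutil acl ch -u user@account.iam.gserviceaccount.com:READER gs://my-bucket
```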

how to grant read permission on google cloud storage to another service account

Our team creates some data on Google Cloud Storage so another team can copy/download/read it from there, but when they tried, they always got a 403 Forbidden message. I tried to edit the permissions on that bucket and added a new permission with 'Project', 'viewers-(other team's project id)', and 'Reader', but they still got the same error when they ran this command:
gsutil cp -R gs://our-bucket gs://their-bucket
I also tried with their client ID and email account; still the same.
I'm not sure one can grant another project's collection of users a given access right (readers, in this case) and apply it to an object in a different project.
An alternative would be to control bucket access via Google Groups: simply set up a group for readers, adding the users you wish to grant this right to. Then you can use that group to control access to the bucket and/or its contents. Further information, and a use case scenario, here: https://cloud.google.com/storage/docs/collaboration#group
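Once the group exists, granting it read access is a single ACL change per bucket (the group address and bucket name are placeholders):

```shell
# Grant the Google Group read access to the bucket, and recursively
# to the objects already in it.
gsutil acl ch -g gcs-readers@googlegroups.com:R gs://our-bucket
gsutil -m acl ch -R -g gcs-readers@googlegroups.com:R gs://our-bucket
```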
try:
gsutil acl ch -u serviceaccount@google.com:R gs://your-bucket
This ch changes the permission on 'your-bucket' for the u (user) serviceaccount@google.com to R (Reader).

Give Full access control to a user on a cloud storage bucket

I am a project owner and I have full control over the bucket.
I would like to give another user the FULL access control over this bucket, but I didn't manage to do it.
The email of this user is an_email_address@gmail.com and he is listed as an owner of the project but, as said before, he can't get full control over the bucket.
I also tried to give him access via gsutil: this is a snippet of the output of getacl.
<EmailAddress>an_email_address@gmail.com</EmailAddress>
<Name>User Name</Name>
</Scope>
<Permission>FULL_CONTROL</Permission>
If he logs in to the Cloud Storage console, he can't, for example, change the permissions of an object, and so on.
Could you please give some hints on how to proceed?
Changing the bucket ACL will grant full control access over the bucket, which will allow reading, writing, and changing bucket metadata.
However, if you want a user to have full control over all objects in the bucket, you need to change the default object ACL, which is what is applied to objects that are created in that bucket. To change the default object ACL, you should be able to use a command such as:
gsutil defacl ch -u <email_address>:FC <bucket name>
Since this will only apply to objects created after the default object ACL has been updated, you'll also need to set the object ACL for any existing objects that you want to grant access to. If you want to grant access to all objects in the bucket, you could use a command like:
gsutil acl ch -u <email_address>:FC <bucket name>/**
If you have many existing objects in this bucket, you can add the -m flag (gsutil -m acl ch ...) to use multiprocessing for speed.
For detailed information about how ACLs work, take a look at https://developers.google.com/storage/docs/accesscontrol#default
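To check the result, both ACLs can be inspected (bucket and object names are placeholders):

```shell
gsutil defacl get gs://my-bucket           # default object ACL applied to new objects
gsutil acl get gs://my-bucket/some-object  # ACL of one existing object
```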