GCS: change object-level permission control to bucket-level

I created a GCS bucket and made it public by granting READER to allUsers, so I expected all objects in the bucket to be publicly accessible, but it turns out only the bucket itself is readable; the objects are not.
I guess this is because I enabled object-level permission control for the bucket. My questions: 1) how can I verify whether object-level permission control is in effect? 2) how can I switch it to bucket-level permission control?
I need a gsutil-based solution. Thanks!

Check the Bucket Policy Only documentation: https://cloud.google.com/storage/docs/bucket-policy-only
1 - you're looking for gsutil bucketpolicyonly get
$ gsutil bucketpolicyonly get gs://my-test-bucket
Bucket Policy Only setting for gs://my-test-bucket:
Enabled: True
LockedTime: 2019-07-09 16:14:31.777000+00:00
2 - check gsutil bucketpolicyonly set
$ gsutil bucketpolicyonly set on gs://my-test-default-acl-bucket/
Enabling Bucket Policy Only for gs://my-test-default-acl-bucket...
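Note that once Bucket Policy Only (uniform bucket-level access) is enabled, ACLs no longer apply and public access is granted through IAM instead. As a minimal sketch, assuming the same illustrative bucket name, the following should make every object publicly readable:
$ gsutil iam ch allUsers:objectViewer gs://my-test-bucket
This grants the Storage Object Viewer role on the whole bucket, which is what the original question was after.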

Why does gsutil cp require storage.objects.delete on versioned bucket?

I'm using a service account to upload a file to a Google Cloud Storage bucket that has versioning enabled. I want to keep the service account's privileges minimal: it only ever needs to upload files, so I don't want to give it permission to delete files. But the upload fails (only after streaming everything!), saying it requires the delete permission.
Shouldn't it be creating a new version instead of deleting?
Here's the command:
cmd-that-streams | gsutil cp -v - gs://my-bucket/${FILE}
ResumableUploadAbortException: 403 service-account@project.iam.gserviceaccount.com does not have storage.objects.delete access to my-bucket/file
I've double-checked that versioning is enabled on the bucket:
> gsutil versioning get gs://my-bucket
gs://my-bucket: Enabled
The permission storage.objects.delete is required when executing the gsutil cp command, as documented in the Cloud Storage gsutil commands reference:
Command: cp
Required permissions:
storage.objects.list* (for the destination bucket)
storage.objects.get (for the source objects)
storage.objects.create (for the destination bucket)
storage.objects.delete** (for the destination bucket)
**This permission is only required if you don't use the -n flag and you insert an object that has the same name as an object that already exists in the bucket.
The Google docs suggest using -n (do not overwrite an existing file) so that storage.objects.delete won't be required. But your use case involves versioning and you will need to overwrite, so you will have to grant storage.objects.delete.
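For completeness, a minimal sketch of an upload with -n (the file and bucket names are illustrative); with it, an existing object is skipped rather than overwritten, so storage.objects.delete is never needed:
$ gsutil cp -n local-file gs://my-bucket/file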
I tested this with a bucket that has versioning enabled and only one version, using a service account with the roles Storage Object Creator and Storage Object Viewer.
If you're overwriting an object, regardless of whether or not its parent bucket has versioning enabled, you must have storage.objects.delete permission for that object.
Versioning works such that when you delete the "live" version of an object, that version is marked as a "noncurrent" version (and the timeDeleted field is populated). In order to create a new version of an object when a live version already exists (i.e. overwriting the object), the transaction that happens is:
1. Delete the current version.
2. Create a new version that becomes the "live" or "current" version.
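You can observe this by listing all generations of the object after an overwrite; a quick sketch, with made-up generation numbers:
$ gsutil ls -a gs://my-bucket/file
gs://my-bucket/file#1570000000000001
gs://my-bucket/file#1570000000000002
The older generation is the noncurrent version kept by versioning; the newest one is the live version.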

Can't remove OWNER access to a Google Cloud Storage object

I have a server that writes some data files to a Cloud Storage bucket, using a service account to which I have granted the "Storage Object Creator" role for the bucket. I want that service account's permissions to be write-only.
As far as I can tell, the Storage Object Creator role also allows read access, so I wanted to simply remove the permission on the objects after they have been written. I thought I could use an ACL to do this, but it doesn't seem to work. If I use
gsutil acl get gs://bucket/object > acl.json
then edit acl.json to remove the OWNER permission for the service account, then use
gsutil acl set acl.json gs://bucket/object
to update the ACL, I find that nothing has changed; the OWNER permission is still there if I check the ACL again. The same thing happens if I try to remove the OWNER permission in the Cloud Console web interface.
Is there a way to remove that permission? Or another way to accomplish this?
You cannot remove the OWNER permission for the service account that uploaded the object. From https://cloud.google.com/storage/docs/access-control/lists#bestpractices:
The bucket or object owner always has OWNER permission of the bucket or object.
The owner of a bucket is the project owners group, and the owner of an object is either the user who uploaded the object, or the project owners group if the object was uploaded by an anonymous user.
When you apply a new ACL to a bucket or object, Cloud Storage respectively adds OWNER permission to the bucket or object owner if you omit the grants.
I have not tried this, but you could upload the objects using one service account (call it SA1), then rewrite the objects using a separate service account (call it SA2), and then delete the originals. SA1 will no longer be the owner and therefore won't have read permission. SA2 will continue to have both read and write permissions, though; there is no way to prevent the owner of an object from reading it.
Renaming the object does the trick.
gsutil mv -p gs://bucket/object gs://bucket/object-renamed
gsutil mv -p gs://bucket/object-renamed gs://bucket/object
The renamer service account will become the object OWNER.
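If you want to verify, re-reading the ACL afterwards should show the renamer as the OWNER, with the original uploader gone from the OWNER entry:
$ gsutil acl get gs://bucket/object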

How to use service accounts with gsutil to download from a Google-owned DCM Cloud Storage bucket

A project and a Google Group have been set up for controlling data access, following the DCM guide: https://support.google.com/dcm/partner/answer/3370481?hl=en-GB&ref_topic=6107456
The project does not contain the bucket I want to access (under Storage -> Cloud Storage), since it's a Google-owned bucket to which I only have read-only access. I can see the bucket in my browser since I am allowed to with my Google account (I am a member of the ACL).
I used the gsutil tool to configure the service account of the project that was linked with the private bucket using
gsutil config -e
but when I try to access that private bucket with
gsutil ls gs://<bucket_name>
I always get 403 errors, and I don't know why. Has anyone tried this before? Any ideas are welcome.
Since the bucket is private and lives in project A, service accounts in your project (project B) will not have access by default. The service account for your project (project B) would need to be added to the ACL for that bucket.
Note that since you can read this bucket as a user, you can run gsutil config to load your user credentials into gsutil and use those to read the bucket.
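A minimal sketch of that approach (the bucket name is a placeholder, as in the question):
$ gsutil config
$ gsutil ls gs://<bucket_name>
gsutil config walks you through an OAuth flow for your user account, after which gsutil acts with your user's permissions rather than the service account's.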

How to grant read permission on Google Cloud Storage to another service account

Our team creates some data on Google Cloud Storage so another team can copy/download/read it from there, but when they tried, they always got a 403 Forbidden message. I tried to edit the permissions on that bucket and added a new permission as 'Project', 'viewers-(other team's project id)', and 'Reader', but they still got the same error when they ran this command:
gsutil cp -R gs://our-bucket gs://their-bucket
I also tried with their client ID and email account; still the same.
I'm not sure one can grant another project's collection of users a given access right (readers, in this case) on an object in a different project.
An alternative would be to control bucket access via Google Groups: simply set up a group for readers, adding the users you wish to grant this right to. Then you can use that group to control access to the bucket and/or its contents, as sketched below. Further information and a use-case scenario are available here: https://cloud.google.com/storage/docs/collaboration#group
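As a sketch, assuming a group named gcs-readers@googlegroups.com (the group address is illustrative):
$ gsutil acl ch -g gcs-readers@googlegroups.com:R gs://our-bucket
$ gsutil defacl ch -g gcs-readers@googlegroups.com:R gs://our-bucket
The first command grants the group read access on the bucket; the second sets the default object ACL so that newly created objects are readable by the group as well.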
try:
gsutil acl ch -u serviceaccount@google.com:R gs://your-bucket
Here ch changes the ACL on your-bucket, -u identifies the user serviceaccount@google.com, and :R grants Reader access.
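Note that this grants read on the bucket itself; to also cover objects that already exist, the same change can be applied recursively, e.g.:
$ gsutil -m acl ch -r -u serviceaccount@google.com:R gs://your-bucket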

Give full access control to a user on a Cloud Storage bucket

I am a project owner and I have full control over the bucket.
I would like to give another user FULL access control over this bucket, but I haven't managed to do it.
The email address of this user is an_email_address@gmail.com and he is listed as an owner of the project, but, as said before, he can't get full control over the bucket.
I also tried to give him access via gsutil: this is a snippet of the output of gsutil acl get.
<EmailAddress>an_email_address#gmail.com</EmailAddress>
<Name>User Name</Name>
</Scope>
<Permission>FULL_CONTROL</Permission>
If he logs in to the Cloud Storage console, he can't, for example, change the permissions of an object, and so on.
Could you please give some hints on how to proceed?
Changing the bucket ACL will grant full control access over the bucket, which will allow reading, writing, and changing bucket metadata.
However, if you want a user to have full control over all objects in the bucket, you need to change the default object ACL, which is what is applied to objects that are created in that bucket. To change the default object ACL, you should be able to use a command such as:
gsutil defacl ch -u <email_address>:FC gs://<bucket_name>
Since this will only apply to objects created after the default object ACL has been updated, you'll also need to set the object ACL for any existing objects that you want to grant access to. If you want to grant access to all objects in the bucket, you could use a command like:
gsutil acl ch -u <email_address>:FC gs://<bucket_name>/**
If you have many existing objects in this bucket, you can add the -m flag (gsutil -m acl ch ...) to use multiprocessing for speed.
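Putting the two steps together, a sketch with illustrative values (user colleague@example.com, bucket my-bucket):
$ gsutil defacl ch -u colleague@example.com:FC gs://my-bucket
$ gsutil -m acl ch -u colleague@example.com:FC gs://my-bucket/**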
For detailed information about how ACLs work, take a look at https://developers.google.com/storage/docs/accesscontrol#default