I'm the owner of the GCP project and I can manage the objects in Cloud Storage. However, there is no "Edit metadata" item in the Cloud Storage console, only "View metadata".
http://prntscr.com/ps54zo
Why is that?
In the other bucket of the same project I have this option: http://prntscr.com/ps574t.
UPDATE
Doing gsutil ls -L gives me the following output:
Creation time: Tue, 29 Oct 2019 10:29:50 GMT
Update time: Tue, 29 Oct 2019 10:29:50 GMT
Storage class: MULTI_REGIONAL
Content-Length: 22536821
Content-Type: audio/mp3
Hash (crc32c): Gn3MXQ==
Hash (md5): VnUZeK6CjUZ8uqN9dIlGew==
ETag: CJbNjcWhweUCEAE=
Generation: 1572344990754454
Metageneration: 1
ACL: ACCESS DENIED
Note: You need OWNER permission on the object to read its ACL
TOTAL: 1 objects, 22536821 bytes (21.49 MiB)
It's strange that it says I'm not the owner of the object.
http://prntscr.com/ps7o5m - Bucket permissions.
http://prntscr.com/ps7pdy - Project IAM
The permission required to view an object's ACL is storage.objects.getIamPolicy. This is, maybe surprisingly, not one of the permissions granted by the role roles/storage.legacyBucketOwner. Similarly, that role also does not grant permission to read the objects.
If you want to be able to download all of the objects in a bucket and see all of the ACLs, you'll need to grant yourself roles/storage.legacyObjectOwner for that bucket as well.
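If you prefer the command line, a bucket-level grant along these lines should do it (the account email and bucket name below are placeholders):
gsutil iam ch user:your-account@example.com:roles/storage.legacyObjectOwner gs://your-bucket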
I reproduced your issue: I granted a user the Storage Legacy Bucket Owner role, and with it the user can only view the metadata.
To solve this, I granted the Storage Admin role to the user. Try granting this role to your user.
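For reference, a bucket-level grant of that role from the command line could look like this (account email and bucket name are placeholders):
gsutil iam ch user:your-account@example.com:roles/storage.admin gs://your-bucket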
One more thing: you can be Owner of a project, but if the bucket was not created in that project, your Owner role is not inherited by the bucket.
Related
I'm using RBAC to perform a blob copy operation: the service principal which azcopy is logged in as has the Storage Blob Data Contributor role for my subscription (listed as a requirement here)... however, I get a permission denied exception as follows:
As you can see, the failing operation is listing the storage account containers (lines 68 and 74).
I appreciate this isn't easy to debug without further info... but I'm pretty stumped, so if anyone has had a similar issue, I'd be very grateful for any observations/past experiences :)
Edit: please note that azcopy reports successful authentication:
INFO: SPN Auth via secret succeeded.
INFO: Scanning...
INFO: Authenticating to destination using Azure AD
INFO: Authenticating to source using Azure AD
Found this in the API docs:
Now, what's interesting here is that my service principal already had Owner permission on the subscription (infra pipeline stands up resources and assigns permissions etc.) - so I initially discounted this from being the issue... then, on a hunch, I assigned the Storage Blob Data Owner role directly on the storage account... and Voila - it worked!!
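If you want to script that assignment, an Azure CLI call roughly like the following should work (all IDs and names below are placeholders):
az role assignment create \
  --assignee <service-principal-app-id> \
  --role "Storage Blob Data Owner" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"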
I'm the owner of the account and project.
I login using the google cloud sdk and try the following command:
gsutil -m setmeta -h "Cache-Control:public, max-age=3600" gs://bucket/**/*.*
I get the following error for some of the files:
AccessDeniedException: 403 <owner@email.com> does not have storage.objects.update access to <filePath>
Most of the files are updated, but some are not. Because I have a lot of files, if 10% fail to update, that means a few gigabytes of data are not updated.
Any idea why this happens with an owner account and how to fix this?
If the access control on your bucket is set to Uniform, you need to add permissions to it even if you are the project owner.
For example:
I have a test file in a bucket, and when I try to access it I get an "access required" popup.
I granted "Storage Object Admin" to my Owner account in the Permissions tab of the bucket, and now I can access it freely.
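If you'd rather do the same from the command line, you can check the bucket's access-control mode and then grant the role; something like this should work (account email and bucket name are placeholders):
gsutil bucketpolicyonly get gs://your-bucket
gsutil iam ch user:owner-account@example.com:roles/storage.objectAdmin gs://your-bucket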
Here you have more info about Project Level Roles vs Bucket Level Roles.
Let me know.
I created a GCS bucket and made it public by granting READER to allUsers, so I expected all objects in the bucket to be publicly accessible, but it turns out only the bucket is readable; the objects are not.
I guess this is because I enabled object-level permission control for the bucket. My questions: 1) how can I verify whether the bucket uses object-level permission control or not? 2) how can I switch it to bucket-level permission control?
I need gsutil based solution. Thanks!
Check Bucket Policy Only documentation: https://cloud.google.com/storage/docs/bucket-policy-only
1 - you're looking for gsutil bucketpolicyonly get
$ gsutil bucketpolicyonly get gs://my-test-bucket
Bucket Policy Only setting for gs://my-test-bucket:
Enabled: True
LockedTime: 2019-07-09 16:14:31.777000+00:00
2 - check gsutil bucketpolicyonly set
$ gsutil bucketpolicyonly set on gs://my-test-default-acl-bucket/
Enabling Bucket Policy Only for gs://my-test-default-acl-bucket...
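Note that once Bucket Policy Only (now called uniform bucket-level access) is enabled, object ACLs no longer apply, so public read access has to be granted at the bucket level instead, for example:
$ gsutil iam ch allUsers:objectViewer gs://my-test-bucket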
I have a server that writes some data files to a Cloud Storage bucket, using a service account to which I have granted "Storage Object Creator" permissions for the bucket. I want that service account's permissions to be write-only.
The Storage Object Creator permission also allows read access, as far as I can tell, so I wanted to just remove the permission for the objects after they have been written. I thought I could use an ACL to do this, but it doesn't seem to work. If I use
gsutil acl get gs://bucket/object > acl.json
then edit acl.json to remove the OWNER permission for the service account, then use
gsutil acl set acl.json gs://bucket/object
to update the ACL, I find that nothing has changed; the OWNER permission is still there if I check the ACL again. The same thing happens if I try to remove the OWNER permission in the Cloud Console web interface.
Is there a way to remove that permission? Or another way to accomplish this?
You cannot remove the OWNER permissions for the service account that uploaded the object, from:
https://cloud.google.com/storage/docs/access-control/lists#bestpractices
The bucket or object owner always has OWNER permission of the bucket or object.
The owner of a bucket is the project owners group, and the owner of an object is either the user who uploaded the object, or the project owners group if the object was uploaded by an anonymous user.
When you apply a new ACL to a bucket or object, Cloud Storage respectively adds OWNER permission to the bucket or object owner if you omit the grants.
I have not tried this, but you could upload the objects using one service account (call it SA1), then rewrite the objects using a separate service account (call it SA2), and then delete the originals. SA1 will no longer be the owner, and therefore won't have read permissions. SA2 will continue to have both read and write permissions, though; there is no way to prevent the owner of an object from reading it.
Renaming the object does the trick.
gsutil mv -p gs://bucket/object gs://bucket/object-renamed
gsutil mv -p gs://bucket/object-renamed gs://bucket/object
The renamer service account will become the object OWNER.
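If the rename needs to run as a specific service account, you could activate that account first (the key file path is a placeholder) and then run the two mv commands above:
gcloud auth activate-service-account --key-file=/path/to/renamer-key.json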
Anyone successfully using gcsfuse?
I've tried removing all of the default permissions from the bucket and setting up a service account:
gcloud auth activate-service-account to activate serviceaccname
And then running:
gcsfuse --debug_gcs --foreground cloudbuckethere /backup
gcs: Req 0x0: -> ListObjects() (307.880239ms): googleapi: Error 403: xxxxxx-compute@developer.gserviceaccount.com does not have storage.objects.list access
It's weird that it's complaining about the user xxxxx-compute, which is not my activated service account:
gcloud auth list
does show that my current service account is active...
I've also granted admin owner, admin object owner, write object, and read object on the bucket to my serviceaccname.
If I grant xxxxx-compute all of the permissions on my bucket, including the legacy permissions, listing seems to work, but writing any file to the directory fails with:
googleapi: Error 403: Insufficient Permission, insufficientPermissions
Anyone have any luck?
I found a solution; I'm not sure if it's a good one, but it works.
Set up a service account and download the JSON key file.
Grant the above service account admin access to the bucket.
Then set the environment variable pointing to the path of the service account JSON key file:
GOOGLE_APPLICATION_CREDENTIALS=/path-to-json/gcloud.json gcsfuse --debug_gcs --foreground bucketname /path-to-mount
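For reference, the setup steps above might look roughly like this (project, service account, key path, and bucket name are placeholders; I used roles/storage.admin for the bucket-level grant):
# create the service account (placeholder names)
gcloud iam service-accounts create serviceaccname --project=my-project
# download a JSON key for it
gcloud iam service-accounts keys create /path-to-json/gcloud.json --iam-account=serviceaccname@my-project.iam.gserviceaccount.com
# grant it admin access on the bucket
gsutil iam ch serviceAccount:serviceaccname@my-project.iam.gserviceaccount.com:roles/storage.admin gs://bucketname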
Also note that gcsfuse may use a large amount of space in the /tmp directory by default. Adding the flag:
... --temp-dir=/someotherpath
will really help if you have limited space in /tmp.
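Putting it all together, the full invocation could look like this (paths and names are placeholders):
GOOGLE_APPLICATION_CREDENTIALS=/path-to-json/gcloud.json gcsfuse --debug_gcs --foreground --temp-dir=/someotherpath bucketname /path-to-mount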