There is a method on GCP's Cloud Storage API that enables the caller to retrieve object metadata. It is documented at https://cloud.google.com/storage/docs/json_api/v1/objects/get
Is there a gsutil equivalent to this method? I've tried gsutil ls -L gs://object, but it returns more information than calling the API method does.
Background to my question: I am implementing a custom role to apply permissions on GCS buckets/objects. To test that custom role, I am writing a script that carries out all the operations that a member of that role will need to be able to perform. One of the permissions the role members will require is storage.objects.get, and I basically want to know which gsutil command is enabled by granting someone storage.objects.get. According to https://cloud.google.com/storage/docs/access-control/iam-json, https://cloud.google.com/storage/docs/json_api/v1/objects/get does require storage.objects.get, hence my attempt to find the equivalent gsutil command.
If you want to view the metadata associated with an object, run:
gsutil stat gs://[BUCKET_NAME]/[OBJECT_NAME]
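The stat command maps to a single objects.get request on the object, so it should be exactly the operation that the storage.objects.get permission gates. Its output looks roughly like this; the bucket, object, and field values below are purely illustrative:
gs://example-bucket/example-object:
    Creation time:    Tue, 10 Mar 2020 14:00:00 GMT
    Storage class:    STANDARD
    Content-Length:   1024
    Content-Type:     text/plain
    Generation:       1583849600000000
    Metageneration:   1
    ETag:             CJXfm7filOgCEAE=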
If you want to retrieve the object itself from the cloud and store it at a local path, run:
gsutil cp gs://[BUCKET_NAME]/[OBJECT_NAME] [SAVE_TO_LOCATION]
Related
I'm using a service account to upload a file to a Google Cloud Storage bucket that has versioning. I want to keep the service account's privileges minimal: it only ever needs to upload files, so I don't want to give it permission to delete files. But the upload fails (only after streaming everything!) saying it requires delete permission.
Shouldn't it be creating a new version instead of deleting?
Here's the command:
cmd-that-streams | gsutil cp -v - gs://my-bucket/${FILE}
ResumableUploadAbortException: 403 service-account@project.iam.gserviceaccount.com does not have storage.objects.delete access to my-bucket/file
I've double-checked that versioning is enabled on the bucket:
> gsutil versioning get gs://my-bucket
gs://my-bucket: Enabled
The permission storage.objects.delete is required when you execute the gsutil cp command, as listed in the Cloud Storage gsutil commands documentation:
Command: cp
Required permissions:
storage.objects.list* (for the destination bucket)
storage.objects.get (for the source objects)
storage.objects.create (for the destination bucket)
storage.objects.delete** (for the destination bucket)
**This permission is only required if you don't use the -n flag and you insert an object that has the same name as an object that already exists in the bucket.
The Google docs suggest using -n (do not overwrite an existing file) so that storage.objects.delete won't be required. But your use case involves versioning and you will need to overwrite, so you will need to add storage.objects.delete to your permissions.
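For illustration, an upload that skips already-existing objects (the local file, bucket, and variable names here are placeholders) should never need the delete permission:
gsutil cp -n local-file gs://my-bucket/${FILE}
The trade-off is that with -n the new data is simply not written when an object of that name already exists, which defeats the purpose of versioned overwrites.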
I tested this with a bucket that has versioning enabled and only 1 version, using a service account with the Storage Object Creator and Storage Object Viewer roles.
If you're overwriting an object, regardless of whether or not its parent bucket has versioning enabled, you must have storage.objects.delete permission for that object.
Versioning works such that when you delete the "live" version of an object, that version is marked as a "noncurrent" version (and the timeDeleted field is populated). In order to create a new version of an object when a live version already exists (i.e. overwriting the object), the transaction that happens is:
Delete the current version
Create a new version that becomes the "live" or "current" version
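You can watch this happen by listing all generations after an overwrite. Assuming a versioned bucket named my-bucket:
gsutil cp file.txt gs://my-bucket/file.txt
gsutil cp file.txt gs://my-bucket/file.txt
gsutil ls -a gs://my-bucket/file.txt
The -a flag includes noncurrent versions, so the listing should show two entries for the same name, distinguished by the generation number after the # suffix; the older generation is the one the overwrite "deleted", and that delete is what required storage.objects.delete.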
I have a server that writes some data files to a Cloud Storage bucket, using a service account to which I have granted "Storage Object Creator" permissions for the bucket. I want that service account's permissions to be write-only.
The Storage Object Creator role also allows read access, as far as I can tell, so I wanted to just remove that permission from the objects after they have been written. I thought I could use an ACL to do this, but it doesn't seem to work. If I use
gsutil acl get gs://bucket/object > acl.json
then edit acl.json to remove the OWNER permission for the service account, then use
gsutil acl set acl.json gs://bucket/object
to update the ACL, I find that nothing has changed; the OWNER permission is still there if I check the ACL again. The same thing happens if I try to remove the OWNER permission in the Cloud Console web interface.
Is there a way to remove that permission? Or another way to accomplish this?
You cannot remove the OWNER permission for the service account that uploaded the object. From:
https://cloud.google.com/storage/docs/access-control/lists#bestpractices
The bucket or object owner always has OWNER permission of the bucket or object.
The owner of a bucket is the project owners group, and the owner of an object is either the user who uploaded the object, or the project owners group if the object was uploaded by an anonymous user.
When you apply a new ACL to a bucket or object, Cloud Storage respectively adds OWNER permission to the bucket or object owner if you omit the grants.
I have not tried this, but you could upload the objects using one service account (call it SA1), then rewrite the objects using a separate service account (call it SA2), and then delete the originals. SA1 will no longer be the owner and therefore won't have read permission. SA2 will continue to have both read and write permissions, though; there is no way to prevent the owner of an object from reading it.
Renaming the object does the trick.
gsutil mv -p gs://bucket/object gs://bucket/object-renamed
gsutil mv -p gs://bucket/object-renamed gs://bucket/object
The renamer service account will become the object OWNER.
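To verify the ownership change, you can re-check the object ACL after the round trip (bucket and object names are placeholders):
gsutil acl get gs://bucket/object
The OWNER entry should now name the service account that performed the mv rather than the original uploader.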
I've got a bucket in Google Cloud Storage, and a website. People can currently upload to the bucket through the website (using Google authentication).
However, I need to set it so that anyone can view the files that are uploaded (and can't modify them).
This can't be something that requires Google authentication, as some of our clients' IT departments have blocked Google (for whatever reason) and refuse to budge. Something where the request is allowed because it is made from my website would work (as I'll record the URL in the website's database).
Preferably, if this could be done without using gsutil, that would be great.
You can set a default object ACL on the bucket that makes all objects uploaded to that bucket publicly readable. For example, you could do it using gsutil:
gsutil defacl ch -u AllUsers:R gs://your-bucket
Note that the above command only affects newly written objects. If you already have objects in your bucket that need to be made public, you can accomplish that with gsutil as well:
gsutil acl ch -u AllUsers:R gs://your-bucket/**
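You can confirm both settings afterwards; the object name below is a placeholder:
gsutil defacl get gs://your-bucket
gsutil acl get gs://your-bucket/some-object
Each output should now include a reader grant for allUsers.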
Regarding your point about making sure anyone can view the files but not modify them: You can accomplish this by making sure the bucket ACL only allows you (or your service account) to write objects, not all users.
Our team created some data in a Google Cloud Storage bucket so that another team could copy/download/read it from there, but when they tried, they always got a 403 Forbidden message. I tried to edit the permissions on that bucket and added a new permission as 'Project', 'viewers-(other team's project id)', and 'Reader', but they still got the same error when they ran this command:
gsutil cp -R gs://our-bucket gs://their-bucket
I also tried with their client ID and email account; still the same.
I'm not sure one can define another group's collection of users with a given access right (readers, in this case) and apply it to an object in a different project.
An alternative would be to control bucket access via Google Groups: simply set up a group for readers, adding the users you wish to grant this right to. Then you can use said group to control access to the bucket and/or its contents. Further information and a use-case scenario are available at https://cloud.google.com/storage/docs/collaboration#group
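As a sketch, assuming a group named gcs-readers@example.com (a placeholder), you would grant it read access with the -g (group) flag:
gsutil acl ch -g gcs-readers@example.com:R gs://our-bucket
gsutil defacl ch -g gcs-readers@example.com:R gs://our-bucket
From then on, access is managed purely by adding or removing members of the group, without touching the bucket's ACL again.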
Try:
gsutil acl ch -u serviceaccount@google.com:R gs://your-bucket
This ch (change) command updates the permission on your-bucket for the -u (user) serviceaccount@google.com to R (Reader).
I am a project owner and I have full control over the bucket.
I would like to give another user FULL access control over this bucket, but I haven't managed to do it.
This user's email is an_email_address@gmail.com and he is listed as an owner of the project, but, as said before, he can't get full control over the bucket.
I also tried to give him access via gsutil: this is a snippet of the output of getacl.
<EmailAddress>an_email_address@gmail.com</EmailAddress>
<Name>User Name</Name>
</Scope>
<Permission>FULL_CONTROL</Permission>
If he logs in to the Cloud Storage console, he can't, for example, change the permissions of an object, and so on.
Could you please give some hints on how to proceed?
Changing the bucket ACL will grant full control access over the bucket, which will allow reading, writing, and changing bucket metadata.
However, if you want a user to have full control over all objects in the bucket, you need to change the default object ACL, which is what is applied to objects that are created in that bucket. To change the default object ACL, you should be able to use a command such as:
gsutil defacl ch -u <email_address>:FC gs://<bucket_name>
Since this will only apply to objects created after the default object ACL has been updated, you'll also need to set the object ACL for any existing objects that you want to grant access to. If you want to grant access to all objects in the bucket, you could use a command like:
gsutil acl ch -u <email_address>:FC gs://<bucket_name>/**
If you have many existing objects in this bucket, you can add the -m flag (gsutil -m acl ch ...) to use multiprocessing for speed.
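Putting this together, assuming the user's address is user@example.com and the bucket is example-bucket (both placeholders):
gsutil defacl ch -u user@example.com:FC gs://example-bucket
gsutil -m acl ch -u user@example.com:FC gs://example-bucket/**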
For detailed information about how ACLs work, take a look at https://developers.google.com/storage/docs/accesscontrol#default