Is there a limit on the number of users in the ACL of a Google Storage object?
Instead of adding an endpoint database between the user and storage, I would like to add all the users who have access directly to the ACL.
Let's say I have 1000 or more users defined in that ACL; will that still work?
Currently, there is a hard limit of 100 ACL entries. Instead, I suggest doing one of the following:
Give permission to a group and manage access through that group (see the sketch after this list), or
Manage access in some other data store, like a SQL database, and decide whether to grant access to a resource as needed. Grant access by issuing signed URLs, which can be used for a brief window of time to access the object.
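For the first option, here's a minimal sketch using the Python client library; the bucket, object, and group names are placeholders:

```python
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("photos/img001.jpg")  # placeholder names

# One group entry replaces thousands of per-user entries; membership is
# then managed in the Google Group rather than in the object ACL.
blob.acl.group("readers@googlegroups.com").grant_read()
blob.acl.save()
```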
I tried sharing a bucket with a colleague.
Initially, I added the "Storage Object Viewer" role and sent the link https://console.cloud.google.com/storage/browser/bucket_name/
However, on opening the link, the following error was received:
You need the storage.objects.list permission to list objects in this
bucket. Ask a project or bucket owner to give you this permission and
try again.
I added more roles, and finally gave admin rights, but kept getting the same error.
How can I share a bucket with all its files? Specifically, I would like to share it with read-only permissions.
Although a solution has been found for this issue, I'll summarise some relevant information that may be useful to anyone who stumbles upon a similar problem.
Project information isn't required in requests to storage buckets, because bucket names must be globally unique across Google Cloud Platform. This means that if you specify a bucket name in any request, the request points to the correct bucket no matter which project it resides in; the user making the request must, however, still have been granted permission to access that bucket.
To allow users to list objects in a bucket, they must be assigned a role with the storage.objects.list permission. Minimal roles that allow listing objects in a bucket include:
Storage Object Viewer
This role allows users to view objects and their metadata (except ACLs) and to list the objects in a bucket.
Project Viewer
This role also gives users permission to view other resources in the project. In terms of Cloud Storage, users can list buckets and view bucket metadata (excluding ACLs) when listing. This role can only be applied at the project level.
There are other storage-specific roles that allow users to list objects in buckets while also granting further permissions, for example to create, edit, or delete objects. They include:
Storage Object Admin
Users have full control over objects, including listing, creating, viewing, and deleting objects.
Storage Admin
Users have full control of buckets and objects.
For more information on Cloud Storage IAM roles, please see here.
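As a concrete illustration, granting the Storage Object Viewer role on a single bucket can be done with the Python client library. This is a minimal sketch; the bucket name and email address are placeholders:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("bucket_name")  # placeholder bucket name

# Fetch the bucket's IAM policy, append a binding for the colleague,
# and write the updated policy back.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"user:colleague@example.com"},  # placeholder email
})
bucket.set_iam_policy(policy)
```

Granting the role on the bucket rather than the whole project keeps access as narrow as possible while still allowing the colleague to list and read its objects.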
Assuming the Google account your colleague used to access the URL is the one you granted permissions to, you also need to grant the "Viewer" role at the project level; otherwise they won't be able to log in to the GCP console and access the bucket.
My objective is to grant read-write permissions on a Google Storage Bucket to a Compute Instance Template in a way that grants only the permissions that are necessary, but I'm confused about what's considered idiomatic in GCP given the many access control options for Google Storage Buckets.
Currently, I am creating a Managed Instance Group and a Compute Instance Template and assigning the following scopes:
https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/compute.readonly
https://www.googleapis.com/auth/devstorage.read_write
to the default Service Account on the Compute Instance. This seems to work fine, but given the link above, I'm wondering whether I should explicitly set the Access Control List (ACL) on the Storage Bucket to private as well. But that same page also says "Use ACLs only when you need fine-grained control over individual objects," whereas in this case I need a coarse-grained policy. That makes me wonder if I should use an IAM permission instead, but where would I assign it?
What's the idiomatic way to configure this?
It turns out the key documentation here is the Identity and Access Management overview for Google Cloud Storage. From there, I learned the following:
GCS Bucket ACLs specify zero or more "entries", where each entry grants a permission to some scope such as a Google Cloud User or project. ACLs are now considered a legacy method of assigning permissions to a bucket because they only allow the coarse-grained permissions READER, WRITER, and OWNER.
The preferred way to assign permissions to all GCP resources is to use an IAM Policy (overview). An IAM Policy is attached to either an entire Organization, a Folder of Projects, a specific Project, or a specific Resource and also specifies one or more "entries" where each entry grants a role to one or more members.
With IAM Policies, you don't grant permissions directly to members. Instead, you declare which permissions a role has, and grant members a role.
Ultimately, the hope is that you assign IAM Policies at the appropriate level of the hierarchy, knowing that lower levels of the hierarchy (like individual resources) inherit the permissions declared by the IAM Policies at higher levels (like at the Project level).
Based on this, I conclude that:
You should try to assign permissions to a GCS Bucket by assigning IAM Policies at the right level of the hierarchy.
However, to limit permissions on a per-object basis, you must use ACLs (see the sketch after this list).
When a Bucket is newly created, unless you specify otherwise, it is assigned the default canned ACL of projectPrivate.
As of this answer, Terraform does not yet have mature support for IAM Policies, and the google_storage_bucket_acl resource represents an interface to a legacy approach to securing a Bucket.
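For the per-object case, here's a minimal sketch using the Python client library, assuming placeholder bucket, object, and email names:

```python
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("reports/summary.csv")  # placeholder names

# Per-object grant: give one user READ on this specific object only,
# without touching bucket- or project-level IAM policies.
blob.acl.user("analyst@example.com").grant_read()  # placeholder email
blob.acl.save()
```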
Caveat: I'm only summarizing the docs here and have very limited practical experience with Google Cloud so far! Any corrections to above are welcome.
I am storing images of one user (the owner) in a Google Cloud Storage bucket. I want to grant read permission for these images to a group of users (the owner's contacts). I am planning to use an Access Control List for this purpose; e.g., the owner will have full permission on the bucket and the contacts will have read permission on the images. The owner may have a very large number of contacts, say 1 million.
So,
will there be any performance issues if the ACL contains a huge number of users?
Is this the right approach for access control, or should I consider signed URLs?
Regards, Remya
This approach is not going to work for you. There are some significant limitations and downsides to trying to serve content like this. First and foremost, there is a limit of 100 ACL entries on a given object. You could get around this by granting permission to a group for which every user was a member, but even so, it still means that viewing the images will require that every user be logged in to their Google account in addition to however they authenticate for your site.
The canonical way to accomplish this would be to keep all images private and owned by your site's own account. When a user loads a page, verify however you like that they have appropriate authorization to view the images, and if so, generate signed URLs for the images. This allows you to use any authorization scheme without limitation while serving images directly from GCS.
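A minimal sketch of that flow with the Python client library; user_can_view stands in for whatever authorization check your site performs, and the bucket name is a placeholder:

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()  # runs as the site's own service account

def image_url_for(user, image_name: str) -> str:
    # Hypothetical authorization check supplied by your site.
    if not user_can_view(user, image_name):
        raise PermissionError("not authorized")
    blob = client.bucket("my-site-images").blob(image_name)  # placeholder bucket
    # Short-lived signed URL: the browser fetches the object directly
    # from GCS without needing a Google account.
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="GET",
    )
```

Note that generating v4 signed URLs requires credentials that can sign (for example, a service account key), which fits the "owned by your site's own account" setup described above.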
I am working on a project where I would like a developer to have read/write access to GCS, but not necessarily access to upload code to App Engine. I don't see options in the web console for specifying access rights. How can I set up the specific privileges I'd like a user to have? Thanks.
Basically, if you want a team member not to be allowed to deploy the application or modify and configure its resources, they must have only the "Can View" access level on the project.
Then, in your case, you have to set the respective permission (WRITE) on the bucket, scoped to a "Google account email address" (the email address of your developer).
As the GCS documentation says, there are three ways to specify ACLs for buckets and objects, using:
The acl query string parameter to specify ACLs for certain scopes (here)
The x-goog-acl request header to specify predefined ACLs (here)
The defaultObjectACL query string parameter to change the default object ACL for all objects in a certain bucket (here)
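As a sketch of the first approach using the Python client library (which wraps these API mechanisms); the bucket name and email address are placeholders:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# Scope a WRITE grant on the bucket to a single Google account email.
bucket.acl.user("developer@example.com").grant_write()  # placeholder email
bucket.acl.save()

# Optionally, let the same account read newly created objects by default.
bucket.default_object_acl.user("developer@example.com").grant_read()
bucket.default_object_acl.save()
```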
I'd like to use Amazon SimpleDB to store data for my iPhone app. Different users will own items within the same domain. I'd like for users to be able to delete their own items but not each others', and for this restriction to be enforced server-side.
I am hoping to use an anonymous TVM (Token Vending Machine).
What is the best way to do this?
Using IAM user management, you can create a custom policy for each user or group to allow or deny access to delete items in SimpleDB. If each user has their own domain, you can restrict access to the domain by using the ARN format arn:aws:sdb:<region>:<account_ID>:domain/<domain_name>.
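For example, such a per-user policy could be attached with boto3; the user name, region, account ID, and domain name below are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

# Allow this user to read, query, and delete items only in their own domain.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["sdb:GetAttributes", "sdb:Select", "sdb:DeleteAttributes"],
        "Resource": "arn:aws:sdb:us-east-1:123456789012:domain/jimsmith",  # placeholders
    }],
}

iam.put_user_policy(
    UserName="jimsmith",  # placeholder IAM user
    PolicyName="sdb-own-domain-only",
    PolicyDocument=json.dumps(policy),
)
```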
I think you can't use IAM here, since you seem to say that you have one domain where all user data is stored.
One way to achieve what you want is to use item name prefixes that are user-based, e.g. user jimsmith would have all items stored under an item name that begins with 'jimsmith', or some random string unique to jimsmith (which could be stored somewhere).
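A minimal sketch of that prefix scheme, with hypothetical helper names:

```python
def item_name_for(user_prefix: str, item_id: str) -> str:
    # All of a user's items share one prefix, e.g. "jimsmith/photo42".
    return f"{user_prefix}/{item_id}"

def assert_owned_by(user_prefix: str, item_name: str) -> None:
    # The intermediary server calls this before any delete, so users
    # can only touch items under their own prefix.
    if not item_name.startswith(f"{user_prefix}/"):
        raise PermissionError("item does not belong to this user")
```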
Then you are in charge of security, so you would not be able to have the phones query AWS directly; they would need to talk to your intermediary server, which would handle security. You have to assume that people could run the app on a jailbroken phone, decompile it, etc.
You can, however, use IAM to restrict a single user to a small portion of an S3 bucket. You could then index the bucket using a server app of your design, and the DB could be used for searching with your own code, so that iPhones only deal with S3.
From what I have researched, the SimpleDB user rights policies aren't designed to be used in the way you are proposing (meaning an undisclosed number of app users), and the way to handle this might be to use a server application in the middle, as was suggested here: Mobile app and SimpleDB direct 'Access Policy'