Google Cloud Platform IAM setting to allow project-level read-only access to buckets

I want to give a service account read-only access to every bucket in my project. What is the best practice for doing this?
The answers here suggest one of:
creating a custom IAM policy
assigning the Legacy Bucket Viewer role on each bucket
using ACLs to allow bucket.get access
None of these seem ideal to me because:
Giving read-only access seems too common a need to require a custom policy
Putting "Legacy" in the name makes it seem like this permission will be retired relatively soon and any new buckets will require modification
Google recommends IAM over ACL and any new buckets will require modification
Is there some way to avoid the bucket.get requirement and still access objects in the bucket? Or is there another method for providing access that I don't know about?

The closest pre-built role is Object Viewer. This allows listing and reading objects. It doesn't include the storage.buckets.get permission, but that is not commonly needed: inspecting or modifying bucket metadata is really an administrative function. It also doesn't include storage.buckets.list, which is needed a bit more often but is still not part of normal usage patterns for GCS; generally, when designing an app, you have a fixed set of buckets for specific purposes, so listing them is not useful.
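If Object Viewer fits, granting it at the project level is a one-liner. A minimal sketch, with placeholder project ID and service account address:
# Grant the predefined Object Viewer role across the whole project (names below are placeholders).
gcloud projects add-iam-policy-binding my-project --member=serviceAccount:reader@my-project.iam.gserviceaccount.com --role=roles/storage.objectViewer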
If you really do want to give a service account bucket list and get permission, you will have to create a custom role on the project. This is pretty easy; you can do it with:
gcloud iam roles create StorageViewerLister --project=$YOUR_PROJECT --permissions=storage.objects.get,storage.objects.list,storage.buckets.get,storage.buckets.list
gcloud projects add-iam-policy-binding $YOUR_PROJECT --member=serviceAccount:$YOUR_SERVICE_ACCOUNT --role=projects/$YOUR_PROJECT/roles/StorageViewerLister
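To verify the binding afterwards, you can dump the project's IAM policy and look for the new role:
gcloud projects get-iam-policy $YOUR_PROJECT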

Related

Emulating tenants using roles

We are developing a Keycloak (5.0.0) based solution where our clients can create their account with us and manage their own users - and only their users.
Initially we thought that we could use realms for this: every client gets their own realm. After initial testing we deemed it might not be a good solution, as after creating ~500 realms the application becomes unresponsive (https://issues.jboss.org/browse/KEYCLOAK-4593).
We decided to try using Groups to emulate a tenant. Our objective is to create, via an external process (the Keycloak REST API), a group with an admin user.
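(Creating the group itself is not the problem; for reference, a hedged sketch against the Keycloak 5 admin REST API, with host, realm, and token as placeholders:)
# Create a tenant group via the admin REST API (host, realm, and token are placeholders).
curl -X POST "https://keycloak.example.com/auth/admin/realms/myrealm/groups" -H "Authorization: Bearer $ADMIN_TOKEN" -H "Content-Type: application/json" -d '{"name": "tenant-a"}'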
Currently we can't find a way to restrict this administrator to managing only their own group (creating subgroups, managing users, and giving them roles).
I've noticed several emails mentioning these features, but I fail to find actual examples of how to make this work.
http://lists.jboss.org/pipermail/keycloak-user/2017-June/010882.html
http://lists.jboss.org/pipermail/keycloak-dev/2017-June/009496.html
The second link shows exactly what we would like to achieve.
The current alternative I can see is to implement a facade (a client or separate web app) that would restrict visibility of and access to other groups.
Are there other alternatives?

Is it possible to specify different ACLs for different folders in the same storage bucket?

I would like to specify different default ACL levels for two different folders in the same Google Cloud Storage bucket. Is this even possible, or is there perhaps some workaround available?
It seems to be possible to retroactively specify different ACLs for different folders using the gsutil acl command, whereas the gsutil defacl command can only be applied to the entire bucket. Correct?
Some potential workarounds I've been brainstorming, but not found any support for yet:
Set each file's ACL from the client side (iOS/Android) when it is first uploaded.
Have a Cloud Function listen for new file uploads and then modify its ACL accordingly.
Again, I have not found any clues in the official documentation towards implementing any of the above solutions, so any pointers would be greatly appreciated. Thanks in advance!
Right now, ACLs can only be applied to buckets and individual objects, and IAM policies can only be applied to projects and whole buckets; neither mechanism can be scoped to a folder.
Your two proposed solutions are both reasonable, but both have drawbacks. The first would be best, but only if you trust the client to set good permissions, since it could potentially set different ones. That's not necessarily a problem - presumably the client already has the data and, if malicious, could already share it anywhere it wanted - but it's something to be aware of.
The second solution is also fine, but there's a very small window of time wherein the object's ACLs would be incorrect. That may also not be a problem for you.
One nice variation might be to have users upload objects to a "staging" bucket, and then have a Cloud Function respond to each upload by copying the object to its production location with the correct ACL and deleting the staged version.
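For the retroactive case mentioned in the question, a minimal sketch using gsutil wildcards (bucket and folder names are placeholders):
# Apply a different canned ACL to the objects under each "folder" prefix; -m parallelizes the per-object updates.
gsutil -m acl set public-read gs://my-bucket/shared/**
gsutil -m acl set private gs://my-bucket/internal/**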

SCM: Storing Credentials

It is generally recommended not to store credentials in a repository. The question is, where should they be stored then, so all developers have access to the same configuration?
The question is subjective - different practices may be applied. For me, the approach that worked best is to use some form of single sign-on where possible and to give developers personal logins to every system. This also has the advantage of letting you find out who was responsible for a destructive action (which sometimes happens).
You can also take the approach described here: store the credentials in the SCM, but in encrypted form. This lets you keep versioning while not allowing access for everyone. I'd say the best option is to combine these two approaches (and store only developer-environment "service" credentials - encrypted - in the SCM).
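As one illustration of the encrypted-in-SCM approach, a minimal sketch using openssl (file names are placeholders; tools such as git-crypt or Ansible Vault automate the same idea):
# Encrypt the credentials file before committing it (prompts for a passphrase shared out-of-band).
openssl enc -aes-256-cbc -salt -pbkdf2 -in credentials.yml -out credentials.yml.enc
# Decrypt it locally after checkout.
openssl enc -d -aes-256-cbc -pbkdf2 -in credentials.yml.enc -out credentials.yml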
I store the config files in a private S3 bucket and manage access via IAM. Configuration updates and revisions are handled by a small script using the AWS gem. That way anybody with sufficient privileges can access them, and we can also issue access credentials for each developer separately.
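The same pull could be done with the AWS CLI instead of the Ruby gem; a sketch with placeholder bucket and paths:
# Fetch the shared config; the bucket's IAM policy controls who may read it.
aws s3 cp s3://team-config-bucket/app/database.yml config/database.yml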

Change ACL on all objects in a bucket with the API

Using the Google Cloud Storage API (JSON or XML, preferably JSON), is there a way to set the ACLs for all objects in a bucket? I know it is possible to get a list of objects and then iterate and set the ACL for each object individually, but surely there is an easier way. I know that with gsutil you can use setacl -R to set the ACL on an entire bucket. How about with the API? I'm working in Java on App Engine, but can also use the RESTful API, of course. Any help would be great!
Existing object ACLs are orthogonal to bucket ACLs. In order to change the ACLs for all the objects in a bucket, you need to do one of the following:
List all the objects in the bucket and update each object's ACL
Take a look at batch requests - https://developers.google.com/api-client-library/python/guide/batch
Use a GroupByEmail or GroupByDomain grant - https://developers.google.com/storage/docs/accesscontrol
Add/remove people from your project team - https://developers.google.com/storage/docs/projects
With the last two options, you can change the membership of your group or team without having to go back and update all your objects.
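For the first option, the per-object update against the JSON API looks roughly like this (bucket, object, and group address are placeholders; repeat or batch the call per object):
# Grant READER on a single object to a Google group via the JSON API.
curl -X POST -H "Authorization: Bearer $(gcloud auth print-access-token)" -H "Content-Type: application/json" -d '{"entity": "group-team@example.com", "role": "READER"}' "https://storage.googleapis.com/storage/v1/b/my-bucket/o/my-object/acl"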

Using Google Cloud Storage from a PC application

Hopefully I don't sound too stupid asking this. My wife and I run a small business out of our home. We want to share the accounting data, but I'm at another location often. We use a PC version of Sage Peachtree Premium Accounting that has networking capabilities, so the data files can be stored in a common place. Is it possible to share this file using something like Google Cloud Storage?
Google Drive is certainly the cheaper option as it is optimized for consumer usage patterns. Google Cloud Storage is optimized for applications that demand highly available and replicated storage with strong global consistency.
Here are a few ways that Google Cloud Storage attempts to improve team collaboration:
Resources are owned by a project team composed of multiple people.
It is possible to share files with a group.
It is possible to change the default ACL applied to new objects.
Collaborate with a team
Each bucket is owned by a project, and by default everyone on your team can read new objects uploaded to those buckets.
You manage the people on your team in the following manner:
Go to https://code.google.com/apis/console
Click on Teams in the sidebar.
Add the email addresses of other people you want to collaborate with.
Use the drop-down list to give them more permissions.
Use the x to remove team members.
Permissions are concentric:
Everyone with "can view" access will be able to read files that do not specify an ACL.
Everyone with "can edit" access will also be able to create and delete buckets, as well as upload new objects.
Everyone with "is owner" access will also be able to add other viewers, editors, and owners.
Share to a Google Group
Google Cloud Storage allows you to share files with a Google group. Users gain access to these files when you add them to the group and lose access when you remove them from the group.
First, download the object's ACL:
gsutil getacl gs://bucket/obj > acl.xml
vim acl.xml
Now add the following ACL entry inside the <Entries> element:
<Entry>
<Scope type="GroupByEmail">
<!-- Give everyone in the gs-discussion group READ access. -->
<EmailAddress>gs-discussion@googlegroups.com</EmailAddress>
</Scope>
<Permission>READ</Permission>
</Entry>
Now update the ACL:
gsutil setacl acl.xml gs://bucket/obj
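Newer gsutil versions can make the same grant in one step, without editing XML:
# Give the group READ access directly (group address as above).
gsutil acl ch -g gs-discussion@googlegroups.com:R gs://bucket/obj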
See the online documentation for further information about access control: https://developers.google.com/storage/docs/accesscontrol#applyacls
You can create a Google group at google.com/groups
Change the default object acl
By default, everyone on the team can read objects you upload. However, you can configure this to be more or less permissive: you could make objects publicly readable by default, or viewable only by the owner and a Google group.
Changing the default object ACL is similar to changing object ACLs; just use the getdefacl and setdefacl commands.
Some predefined configurations do not require editing an XML file:
# Team members can view new objects.
gsutil setdefacl project-private gs://bucket
# Anonymous internet users can view new objects.
gsutil setdefacl public-read gs://bucket
Otherwise, you can edit the ACL XML:
gsutil getdefacl gs://bucket > def_acl.xml
vim def_acl.xml
# Add whichever UserByEmail, GroupByEmail, AllUsers, etc grants you want.
gsutil setdefacl def_acl.xml gs://bucket
New objects receive the default object ACL:
gsutil cp foo gs://bucket # This object will receive the def_acl.xml ACLs.
It is easy to override the default object ACL with a predefined ACL for a particular object:
# Ignore the default acl. Use public-read.
gsutil cp -a public-read foo gs://bucket
The full list of predefined ACLs is available at developers.google.com/storage/docs/accesscontrol#extension
Google Cloud Storage is probably overkill for what you're talking about. Cloud Storage is something web developers use to deliver assets like images, videos, and documents to a large number of users around the world.
However something like Google Drive or Dropbox would probably work well for this. If you both have Gmail accounts then Google Drive is a natural choice. Both of these solutions have a service which runs on each PC and automatically syncs changed files in a specified folder to all other computers using that folder.
So if one of you makes changes to the file, it will show up in the other location automatically. However the real question is how your software will handle this. I'm not familiar with Peachtree Accounting but it probably isn't possible for you to both be making changes at the same time, unless the software is specifically designed for that use case.
If you can post a link to or a description of the "networking capabilities" (that is a rather vague term on its own), it may be possible to tell for sure.