My objective is to grant read-write permissions on a Google Storage Bucket to a Compute Instance Template in a way that grants only the permissions that are necessary, but I'm confused about what's considered idiomatic in GCP given the many access control options for Google Storage Buckets.
Currently, I am creating a Managed Instance Group and a Compute Instance Template and assigning the following scopes:
https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/compute.readonly
https://www.googleapis.com/auth/devstorage.read_write
to the default Service Account on the Compute Instance. This seems to work fine, but given the link above, I'm wondering whether I should also explicitly set the Access Control List (ACL) on the Storage Bucket to private. But that same page also says "Use ACLs only when you need fine-grained control over individual objects," whereas in this case I need a coarse-grained policy. That makes me wonder whether I should use an IAM Permission instead, but where would I assign it?
What's the idiomatic way to configure this?
It turns out the key documentation here is the Identity and Access Management overview for Google Cloud Storage. From there, I learned the following:
GCS Bucket ACLs specify zero or more "entries", where each entry grants a permission to some scope such as a Google Cloud User or project. ACLs are now considered a legacy method of assigning permissions to a bucket because they only allow the coarse-grained permissions READER, WRITER, and OWNER.
The preferred way to assign permissions to all GCP resources is to use an IAM Policy (overview). An IAM Policy is attached to either an entire Organization, a Folder of Projects, a specific Project, or a specific Resource and also specifies one or more "entries" where each entry grants a role to one or more members.
With IAM Policies, you don't grant permissions directly to members. Instead, you declare which permissions a role has, and grant members a role.
Ultimately, the idea is that you assign IAM Policies at the appropriate level of the hierarchy, knowing that lower levels of the hierarchy (like individual resources) inherit the permissions declared by the IAM Policies at higher levels (like the Project level).
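For concreteness, here is a minimal sketch of what adding such an entry (a role binding) to a specific Bucket's IAM Policy can look like using the Python client library; the bucket name and service account are hypothetical:

```python
# pip install google-cloud-storage
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-example-bucket")  # hypothetical bucket name

# Fetch the Bucket's current IAM Policy.
policy = bucket.get_iam_policy(requested_policy_version=3)

# Add a binding: grant a role (here objectAdmin, i.e. read-write on objects)
# to one member (a hypothetical service account).
policy.bindings.append({
    "role": "roles/storage.objectAdmin",
    "members": {"serviceAccount:my-instances@my-project.iam.gserviceaccount.com"},
})

bucket.set_iam_policy(policy)
```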
Based on this, I conclude that:
You should try to assign permissions to a GCS Bucket by assigning IAM Policies at the right level of the hierarchy.
However, to limit permissions on a per-object basis, you must use ACLs (see the sketch after this list).
When a Bucket is newly created, unless you specify otherwise, it is assigned the default canned ACL of projectPrivate.
As of this answer, Terraform does not yet have mature support for IAM Policies and the google_storage_bucket_acl resource represents an interface to a legacy approach to securing a Bucket.
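As a sketch of the per-object case (again with the Python client library and hypothetical names), granting a single user read access to just one object goes through the legacy ACL interface:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-example-bucket")   # hypothetical bucket
blob = bucket.blob("reports/summary.csv")     # hypothetical object

# Object ACLs are the fine-grained (legacy) mechanism: grant READER on
# this one object to a single user, leaving the rest of the Bucket alone.
blob.acl.user("colleague@example.com").grant_read()
blob.acl.save()
```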
Caveat: I'm only summarizing the docs here and have very limited practical experience with Google Cloud so far! Any corrections to above are welcome.
Related
I have 2 users, one with fewer than 30 roles and one with 400 roles. When I log in with the 30-role user, I can reach the redirect URL without issues, but when I log in with the 400-role user, the request to the redirect URL doesn't complete. If I reduce the number of roles on the 400-role user, it works. So, is there a way to stop passing roles in the Java access token, or to increase whatever limit is causing the failure?
I would suggest focusing on reducing and optimizing the roles rather than forcing the transmission of (by raising limits on) a huge number of roles inside tokens (or anywhere else).
Some interesting questions (among others) to start with:
Which protected resources am I going to serve?
What am I trying to protect, and what are the associated risks? (Build a threat model.)
How are resources served by each application, and how are they distributed among my applications?
What kinds of resources are they? How can I group them? Which sets are identifiable, and what are the relationships between them? What actions are possible against these sets of resources?
Who are the users of each application? How will they interact with my resources? Which flows are sensitive?
What roles can I define for all my resources?
Which role can apply to each application, resource type or set?
What kind of user groups can I create?
Do I need additional attributes or claims for each set of roles or users/groups?
I firmly believe that if you answer all these questions, you will end up with a handful of roles instead of hundreds. Think security by design and follow the principle of least privilege.
Focus on your use case
Now, as far as I understand, your blocking point is that you are assuming each resource is unique, sensitive and requires its own permissions, and consequently its own role definition. While that may be true in some cases, in most cases it does not mean you have to use token roles/scopes/claims to secure your assets down at the resource level. I'll try to illustrate this with an example.
RBAC and authorizations example for your use case
Let's assume that:
you have millions of sensitive resources to serve
each registered user of your application has access to a (different) set of these resources.
your resources are split into, say, 3 categories (e-books, videos, music).
each resource can be downloaded, uploaded, deleted.
your application will serve unregistered users, registered users, contributors and administrators
registered users will always have read access to resources (not a single action will ever allow a modification)
contributors are particular registered users who can perform special actions including modification ('upload', 'edit')
contributors and administrators may have access to various administrative parts of the application
your application will evolve by serving additional categories of resources in the future and new actions will be available to users later (such as 'flag', 'edit' or 'share link').
Then first things first:
organize your resources accordingly by serving them behind categorized paths such as: .../myapp/res/ebooks, .../myapp/res/videos, .../myapp/res/musics
identify your resources via UUID such that a resource may look like: .../myapp/res/ebooks/duz7327abdhgsd95a
Now imagine that your business risks, or at least the greatest risks you wish to avoid, are:
unregistered users gaining access or rights to any part of the application or its resources
uncontrolled registration process (robots, spam, no mail verification, fake users, ...)
registered users gaining illegal privileges (unauthorized actions, access to other categories, illegal administrative rights)
discovery of available resources by any means
You will note that I deliberately did not list:
registered users gaining illegitimate access to certain resources, for example resources maliciously pointed to or shared by an existing user.
This is because it is not a high risk: you may hold contact information about registered users, log their activity and actions, apply quotas or request throttling, and you may be able to ban them or start legal action against them. Your registration process is also assumed to be robust and secure. Nonetheless, if it is considered a critical risk, you can address it with extra mechanisms (cf. the suggestions at the end). But it should never result in adding extra roles, such as one per resource, as that does not fit any security model.
That being said, finally, here is the roles and authorization scheme you might come up with:
SCOPE / AUDIENCE
MY_APP
ROLES
USER
CONTRIBUTOR
ADMINISTRATOR
CLAIMS / ATTRIBUTES
CATEGORIES
ACTIONS
--> POSSIBLE USER GROUPS
USERS
Roles: USER
Claims: CATEGORIES(variable), ACTIONS('download')
CONTRIBUTORS
Roles: USER, CONTRIBUTOR
Claims: CATEGORIES(variable), ACTIONS('download', 'upload', 'edit')
ADMINISTRATORS
Roles: USER, CONTRIBUTOR, ADMINISTRATOR
Claims: CATEGORIES(*), ACTIONS(*)
Following this model, assigning the correct group to each registered user will provide high-grade security by mitigating/controlling the main risks. As the claims/attributes are defined in the token(s) (managed and signed by Keycloak), you can trust this information in your application and serve your resources accordingly and safely. There is also little risk of illegal access or discovery of resources: because you are using UUIDs, only registered users who have already accessed a resource will know of it, and another user would need to register with the appropriate category access to reach it (and then basically only to read it). Of course, you may also store in a database the list of resources each user has access to, raising the overall security to a very high level.
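To make that concrete, here is a rough sketch of how an application could enforce those claims before serving a resource. This is my own illustration using Python with Flask and PyJWT; the claim names CATEGORIES/ACTIONS, the audience MY_APP and the helper lookup_and_stream are assumptions following the scheme above:

```python
# pip install flask pyjwt cryptography
import jwt                       # PyJWT
from flask import Flask, abort, request

app = Flask(__name__)
REALM_PUBLIC_KEY = "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----"  # Keycloak realm key

def token_claims():
    # Expect "Authorization: Bearer <access_token>" issued by Keycloak.
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        abort(401)
    # Verifying the signature is what lets us trust the claims.
    return jwt.decode(auth[len("Bearer "):], REALM_PUBLIC_KEY,
                      algorithms=["RS256"], audience="MY_APP")

@app.route("/myapp/res/<category>/<res_id>")
def serve_resource(category, res_id):
    claims = token_claims()
    # Custom claims from the scheme above: CATEGORIES and ACTIONS.
    if category not in claims.get("CATEGORIES", []):
        abort(403)
    if "download" not in claims.get("ACTIONS", []):
        abort(403)
    return lookup_and_stream(category, res_id)   # hypothetical helper
```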
However, if that is not enough, you may also implement rolling UUIDs or temporary links for your resources when serving them to users. To go further, you may also define groups and masks for your categories, resources and actions.
In the end, in this example I made use exclusively of token claims to define roles (a common claim) plus categories and actions (custom claims). In terms of security, authentication and identity are the first line of defence, followed by roles, then categories, actions and the stored list of resources per user (db).
Other alternatives are obviously possible; this is just an example. Still, I hope it helps!
To fix this problem you should start by defining client scope mappings for each of your applications (i.e. OIDC clients). The main idea of this facility is that even if your user is a super-duper admin with all existing roles, not all of those roles are actually required by any particular application. For example, a client foo defines the following roles:
foo_user
foo_viewer
To perform its security logic it only needs to know whether the currently logged-in user has foo_user or foo_viewer; it doesn't care whether this user has the roles bar_user or bar_admin from application bar. So our goal is to make Keycloak return, for any client, an access token with only the set of roles valuable to that client, and role scope mappings are your friend here. You can set the scope for client foo to:
foo.foo_user
foo.foo_viewer
Now, even if the logged-in user has the role bar.bar_admin, it will not go into the access token, since client foo does not take that role into account. After applying the scope settings you can test them at Clients -> $CLIENT_OIDC_ID -> Client Scopes tab -> Evaluate sub-tab.
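On the application side, client foo should then only ever look at its own roles. A small sketch of that (Python with PyJWT; the variables access_token and realm_public_key are assumed to exist, and resource_access is the standard Keycloak claim holding per-client roles):

```python
import jwt  # PyJWT

# Decode the token issued by Keycloak (audience checking omitted for brevity).
claims = jwt.decode(access_token, realm_public_key,
                    algorithms=["RS256"], options={"verify_aud": False})

# Client roles live under resource_access.<client_id>.roles;
# client foo reads only its own entry and ignores bar's entirely.
foo_roles = claims.get("resource_access", {}).get("foo", {}).get("roles", [])
if "foo_viewer" not in foo_roles:
    raise PermissionError("user lacks foo_viewer")
```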
As for your case with 400 roles, I'm quite confident that none of your applications requires all 400 of them, so precise scope configuration for your applications can dramatically reduce the access token size.
But if I'm mistaken and you really have an application that relies on a large number of roles, you should look into your runtime settings.
For example, if you run Keycloak behind a reverse proxy like nginx, large tokens may not fit into the default HTTP header buffer size (AFAIK about 2-4 KB), so you have to increase it via the appropriate nginx configuration option. Another example is Tomcat, which has a default HTTP header buffer of about 16 KB, so if you send a request with a very large access token in the Authorization header, Tomcat may not handle it properly.
Tried sharing a bucket with a colleague
Initially I added the "Storage.Object.Viewer" role, and sent the link https://console.cloud.google.com/storage/browser/bucket_name/
However on opening the link the following error was received:
You need the storage.objects.list permission to list objects in this
bucket. Ask a project or bucket owner to give you this permission and
try again.
I added more roles, and finally gave admin rights, but kept getting the same error.
How can I share a bucket with all its files? Specifically, I would like to share it with read-only permissions.
Although a solution has been discovered for this issue, I'm going to summarise some relevant information which may be useful to someone who stumbles on a similar issue.
Project information isn't required in requests to storage buckets, because bucket names are required to be globally unique on Google Cloud Platform. This means that if you specify a bucket name in any request, the request will point to the correct bucket no matter which project it resides in, so permissions for a given user to access that bucket must have been set up in some capacity.
To allow users to list objects in a bucket, they must be assigned a role with the storage.objects.list permission. Minimal roles that allow listing the objects in a bucket include:
Storage Object Viewer
This allows users to view objects and their metadata (except ACLs), and to list the objects in a bucket.
Project Viewer
This role also gives users permission to view other resources in the project. In terms of Cloud Storage, users can list buckets and view bucket metadata (excluding ACLs) when listing. This role can only be applied to a project.
There are other storage-specific roles which allow users to list objects in buckets but which also grant additional permissions, for example to edit/create/delete objects. They include:
Storage Object Admin
Users have full control over objects, including listing, creating, viewing, and deleting objects.
Storage Admin
Users have full control of buckets and objects.
For more information on Cloud Storage IAM Roles please see here.
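If you want to check which of these permissions the account you are using actually holds on the bucket, the client libraries expose the testIamPermissions call. A small sketch with the Python library (it reports the permissions of whoever the client is authenticated as):

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("bucket_name")  # the bucket from the shared URL

# Returns the subset of the requested permissions that the caller actually has.
granted = bucket.test_iam_permissions(
    ["storage.objects.list", "storage.objects.get", "storage.buckets.get"]
)
if "storage.objects.list" not in granted:
    print("Missing storage.objects.list - listing this bucket will fail.")
```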
Assuming the Google account your colleague used to access the URL is the one you gave permissions to, you also need to grant the "Viewer" role at the project level; otherwise they won't be able to log in to the GCP console and access the bucket.
In our project, we have a group of people who should have full access to ONLY one bucket; they should not see the other buckets or the objects in them.
So I changed the permissions of the bucket and added the users as Storage Admin for that specific bucket (not for the whole project).
In this case, when they use the Console's Storage browser they see the following message:
But when they open Cloud Shell and use gsutil, they can access the bucket's objects (and have no access to other buckets).
Is this a bug in the Console's Storage interface?
This is not a bug, but it is a subtlety of the Console. In order to access a bucket from the Console, you typically navigate to it using the Browser, which is what it appears you attempted in the screenshot. This fails, though, because to do that you need permission to list buckets for a project, even if you otherwise have free rein to work within the bucket.
There are three ways to deal with this:
1) Give your users the Viewer permission for the project that contains the bucket. There are pros and cons to this, and I'd say it's probably not worth going this route (not so much because your users will see other buckets, since the bucket namespace is publicly viewable anyway, but because doing so brings up some additional permission nuances you probably don't want to deal with).
2) Link directly to the desired bucket, thus avoiding the "listing buckets" portion of the Console. The URL for a bucket has the form: console.cloud.google.com/storage/browser/[BUCKET_NAME]. I believe this will work without any additional modifications to your permissions.
3) Create a custom role that only contains the storage.buckets.list permission, and use that role on the project for affected users.
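For option 3, a rough sketch of creating such a custom role through the IAM API's Python discovery client (project ID and role ID are hypothetical; creating it via the Console or gcloud works just as well):

```python
# pip install google-api-python-client
from googleapiclient import discovery

iam = discovery.build("iam", "v1")

# Custom role containing only the permission needed to browse to the bucket.
iam.projects().roles().create(
    parent="projects/my-project",    # hypothetical project ID
    body={
        "roleId": "bucketLister",    # hypothetical role ID
        "role": {
            "title": "Bucket Lister",
            "includedPermissions": ["storage.buckets.list"],
            "stage": "GA",
        },
    },
).execute()
```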
I am looking for a way to set a custom ACL policy on one of my Cloud Object Storage (S3) buckets, but all the examples I see at https://ibm-public-cos.github.io/crs-docs/crs-api-reference only show how to restrict by username. Essentially, I would like to make my bucket private unless the request is coming from a specific IP address.
Unfortunately, access control is pretty coarse at the moment and is only capable of granting and restricting access to other object storage instances. IP whitelisting is a priority for us and is on the roadmap, but it is not currently supported. Granular access control via policies will be available later this year.
I am working on a project where I would like a developer to have access to read/write to GCS, but not necessarily have access to upload code to App Engine. I don't see options in the web console for specifying access rights. How can I set up the specific privileges that I'd like a user to have? Thanks.
Basically, if you want a team member not to be allowed to deploy the application or modify/configure its resources, he/she must have only the "Can View" access level on the project.
Then, in your case, you have to set the respective permission (WRITE) on the bucket, scoped to a "Google account email address" (the email address of your developer); a sketch follows the list below.
As the GCS documentation says, there are three ways to specify ACLs for buckets and objects, using:
The acl query string parameter to specify ACLs for certain scopes (here)
The x-goog-acl request header to specify predefined ACLs (here)
The defaultObjectACL query string parameter to change the default object ACL for all objects in a certain bucket (here)
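As for the sketch promised above: with the Python client library (a convenience wrapper over those same ACL APIs; bucket name and email are placeholders), granting WRITE on the bucket to the developer's Google account looks roughly like this:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-app-assets")  # hypothetical bucket name

# Add a WRITER entry to the bucket ACL, scoped to the developer's
# Google account email address, leaving the other entries untouched.
bucket.acl.reload()
bucket.acl.user("developer@example.com").grant_write()
bucket.acl.save()
```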