Can I restrict users/service accounts with the roles/storage.admin role from writing to a specific GCS bucket?
I have a sensitive bucket that should be writable only by a specific service account, with write access restricted for all other accounts, even storage admins.
I've tried switching the bucket's access control to uniform (bucket-level access) instead of ACLs, with the following IAM policy set on the bucket:
{
  "bindings": [
    {
      "members": [
        "serviceAccount:my-sa@my-account.iam.gserviceaccount.com"
      ],
      "role": "roles/storage.objectAdmin"
    },
    {
      "members": [
        "group:my-dev-team@my-company.com"
      ],
      "role": "roles/storage.objectViewer"
    }
  ],
  "etag": "abcd"
}
Some of my team members have the roles/storage.admin role, and they can also write to the bucket, which I need to restrict.
At the bucket level, access is controlled either by uniform bucket-level access (Identity and Access Management, IAM, only) or by fine-grained access, which combines IAM with Access Control Lists (ACLs). If you want to avoid creating GCP accounts for the users, or want to regulate the access each user has to individual objects within the bucket, try an Access Control List (ACL).
An access control list (ACL) is a mechanism you can use to define who has access to your buckets and objects, and what level of access they have. In Cloud Storage, you apply ACLs to individual buckets and objects. Each ACL consists of one or more entries. An entry gives a specific user (or group) the ability to perform specific actions. Each entry consists of two pieces of information:
A permission, which defines what actions can be performed (for example, read or write).
A scope (sometimes referred to as a grantee), which defines who can perform the specified actions (for example, a specific user or group of users).
Here is the list of permissions that an ACL entry can grant: READER, WRITER (buckets only), and OWNER.
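As a sketch, assuming the bucket is named my-sensitive-bucket and uses fine-grained (non-uniform) access, the service account from the question could be given write access and the dev team read-only access with gsutil acl commands along these lines:
# Let the service account create and overwrite objects in the bucket
gsutil acl ch -u my-sa@my-account.iam.gserviceaccount.com:W gs://my-sensitive-bucket
# Give the dev team read-only access to the bucket and its existing objects
gsutil acl ch -g my-dev-team@my-company.com:R gs://my-sensitive-bucket
gsutil acl ch -r -g my-dev-team@my-company.com:R gs://my-sensitive-bucket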
Snowflake follows the role-based access control (RBAC) paradigm. Best practice for RBAC is to have functional roles and access roles, managing either users and clients or access privileges. In the worst case this creates a large variety of roles that inherit permissions from and to each other, and by nature one can easily lose sight of who is responsible for what.
In Snowflake, grants to roles and users are recorded in ACCOUNT_USAGE.GRANTS_TO_ROLES and ACCOUNT_USAGE.GRANTS_TO_USERS. What is a proper approach to automatically identify the data steward/owner of a role (if it is not labeled explicitly in third-party tooling)?
Options I thought of:
a recursive lookup of the OWNERSHIP privilege across the role hierarchy (will generate a lot of false positives)
a recursive discovery of a service account that has elevated permissions on a role, then looking up the owner of that service account
looking at the usage patterns of executed queries (which might actually surface more consumers than producers)
A couple of options:
Populate the role’s comment field with the relevant Data Steward information
Use Tags (in public preview)
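A rough sketch of both options from the command line with SnowSQL; the role name, tag name, and steward value are made up, and the tag object is assumed to already exist:
# Option 1: record the steward in the role's comment, visible later via SHOW ROLES
snowsql -q "ALTER ROLE analyst_role SET COMMENT = 'Data Steward: jane.doe@my-company.com'"
snowsql -q "SHOW ROLES LIKE 'analyst_role'"
# Option 2: attach a tag to the role
snowsql -q "ALTER ROLE analyst_role SET TAG governance.tags.data_steward = 'jane.doe@my-company.com'"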
I would like to use the Node SDK to implement a backup and restore mechanism between 2 instances of Cloud Object Storage. I have added a service ID to the instances and added permissions for the service ID to access the buckets present in the instance I want to write to. The buckets will be in different regions. I have tried a variety of endpoints, both legacy and non-legacy, private and public, but I usually get Access Denied.
Is what I am trying to do possible with the SDK? If so, can someone point me in the right direction?
var config = {
  "apiKeyId": "xxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxx",
  "endpoint": "s3.eu-gb.objectstorage.softlayer.net",
  "iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxx:xxxxxxxxxxx::",
  "iam_apikey_name": "auto-generated-apikey-xxxxxxxxxxxxxxxxxxxxxx",
  "iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
  "iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/0xxxxxxxxxxxxxxxxxxxx::serviceid:ServiceIdxxxxxxxxxxxxxxxxxxxxxx",
  "serviceInstanceId": "crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxx::",
  "ibmAuthEndpoint": "iam.cloud.ibm.com/oidc/token"
};
This should work, as long as the requesting identity has been properly granted access to read the source of the put-copy, and as long as you are not using Key Protect based keys.
So the breakdown here is a bit confusing due to some unintuitive terminology.
A service instance is a collection of buckets. The primary reason for having multiple instances of COS is to have more granularity in your billing, as you'll get a separate line item for each instance. The term is a bit misleading, however, because COS is a true multi-tenant system - you aren't actually provisioning an instance of COS, you're provisioning a sort of sub-account within the existing system.
A bucket is used to segment your data into different storage locations or storage classes. Other behavior, like CORS, archiving, or retention, acts on the bucket level as well. You don't want to segment something that you expect to scale (like customer data) across separate buckets, as there's a limit of ~1k buckets in an instance. IBM Cloud IAM treats buckets as 'resources', and they are subject to IAM policies.
Instead, data that doesn't need to be segregated by location or class, and that you expect to be subject to the same CORS, lifecycle, retention, or IAM policies can be separated by prefix. This means a bunch of similar objects share a path, like foo/bar and foo/bas have the same prefix foo/. This helps with listing and organization but doesn't provide granular access control or any other sort of policy-esque functionality.
Now, to your question, the answer is both yes and no. If the buckets are in the same instance then no problem. Bucket names are unique, so as long as there isn't any secondary managed encryption (eg Key Protect) there's no problem copying across buckets, even if they span regions. Keep in mind, however, that large objects will take time to copy, and COS's strong consistency might lead to situations where the operation may not return a response until it's completed. Copying across instances is not currently supported.
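As a rough illustration of a same-instance, cross-bucket copy using the S3-compatible REST API with an IAM bearer token (the endpoint, bucket names, object key, and API key variable below are placeholders; jq is used to pull the token out of the response):
# Exchange the service ID's API key for an IAM bearer token
TOKEN=$(curl -s -X POST "https://iam.cloud.ibm.com/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=$COS_API_KEY" | jq -r .access_token)
# Server-side copy of source-bucket/backup.dat into dest-bucket within the same instance
curl -X PUT "https://s3.eu-gb.cloud-object-storage.appdomain.cloud/dest-bucket/backup.dat" \
  -H "Authorization: bearer $TOKEN" \
  -H "x-amz-copy-source: /source-bucket/backup.dat"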
I'm creating a Google Cloud Storage bucket via the JSON API.
I can create it fine, but when I query its metadata I can't see any ACL specified, despite asking for "full" access. For another bucket created via the UI, I can see all of its ACLs.
I need to see the ACL because updating a bucket requires the bucket's ACL as a field (I don't quite understand why it is needed), but since a bucket I create doesn't return its ACL data, I can't update the buckets I create.
I assume that I have full write access to the bucket once I create it, and have tried creating it with and without a predefinedAcl.
Is there anything I am missing on why I can't see the ACL on new buckets?
The creator of the bucket is always an owner of that bucket, and owners of a bucket can always see the bucket's ACL. That said, the ACLs are not part of the default response to a storage.buckets.get call. Try passing in the URL query parameter "projection=full", which will cause the ACLs to be included.
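For example, with curl and a gcloud access token (bucket name assumed):
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://www.googleapis.com/storage/v1/b/my-bucket?projection=full"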
The "update" call always sets the absolute, full state of the bucket's metadata, including ACLs and everything else. If you're looking to simply modify some property of the bucket, the "patch" call is probably what you want to use.
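For instance, a patch that only changes versioning leaves everything else, including the ACLs, untouched (bucket name assumed):
curl -s -X PATCH "https://www.googleapis.com/storage/v1/b/my-bucket" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"versioning": {"enabled": true}}'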
The ACL isn't returned because it's just not present for the bucket. But to do updates, you can simply pass an empty list to the PATCH method and it will work, even though the acl field is still required:
PUT https://www.googleapis.com/storage/v1/b/blahblahblxxxxlifecycle?key={YOUR_API_KEY}
{
  "acl": [],
  "kind": "storage#bucket",
  "id": "blahblahblxxxxlifecycle",
  "selfLink": "https://www.googleapis.com/storage/v1/b/blahblahblahffflifecycle",
  "projectNumber": "1080525199262",
  "name": "blahblahblxxxxlifecycle",
  "timeCreated": "2016-09-09T21:20:56.490Z",
  "updated": "2016-09-09T21:20:56.490Z",
  "metageneration": "1",
  "location": "US",
  "versioning": {
    "enabled": true
  },
  "storageClass": "STANDARD",
  "etag": "CAE="
}
How do I set access permissions for an entire folder in a storage bucket? For example: I have 2 folders (containing many subfolders/objects) in a single bucket (let's call them folder 'A' and folder 'B') and 4 members in the project team. All 4 members can have read/edit access for folder 'A', but only 2 of the members are allowed to have access to folder 'B'. Is there a simple way to set these permissions for each folder? There are hundreds/thousands of files within each folder and it would be very time consuming to set permissions for each individual file. Thanks for any help.
It's very poorly documented, but search for "folder" in the gsutil acl ch manpage:
Grant the user with the specified canonical ID READ access to all objects in example-bucket that begin with folder/:
gsutil acl ch -r \
  -u 84fac329bceSAMPLE777d5d22b8SAMPLE785ac2SAMPLE2dfcf7c4adf34da46:R \
  gs://example-bucket/folder/
Leaving this here so someone else doesn't waste an afternoon beating their head against this wall. It turns out that 'list' permissions are handled at the bucket level in GCS and you can't restrict them using a Condition based on object name prefix. If you do, you won't be able to access any resources in the bucket, so you have to set up the member with an unrestricted 'Storage Object Viewer' role and use Conditions with the specified object prefix for 'Storage Object Admin' or 'Storage Object Creator' to restrict (over)write access. Not ideal if you are trying to keep the contents of your bucket private.
https://cloud.google.com/storage/docs/access-control/iam
"Since the storage.objects.list permission is granted at the bucket level, you cannot use the resource.name condition attribute to restrict object listing access to a subset of objects in the bucket. Users without storage.objects.list permission at the bucket level can experience degraded functionality for the Console and gsutil."
It looks like this has become possible through IAM Conditions.
You need to set an IAM Condition like:
resource.name.startsWith('projects/_/buckets/[BUCKET_NAME]/objects/[OBJECT_PREFIX]')
This condition can't be used for the permission storage.objects.list though. Add two roles to a group/user: the first grants list access to the whole bucket, and the second, containing the condition above, allows read/write access to all objects in your "folder". This way the group/user can list all objects in the bucket, but can only read/download/write the allowed ones.
There are some limitations here, such as no longer being able to use the gsutil acl ch commands referenced in other answers.
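A sketch of that two-binding setup with gcloud; the bucket, group, and prefix below are placeholders:
# Unconditional read/list access on the whole bucket
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member="group:my-dev-team@my-company.com" \
  --role="roles/storage.objectViewer"
# Conditional objectAdmin restricted to objects under team-a/
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member="group:my-dev-team@my-company.com" \
  --role="roles/storage.objectAdmin" \
  --condition='expression=resource.name.startsWith("projects/_/buckets/my-bucket/objects/team-a/"),title=team-a-prefix'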
You cannot do this in GCS. GCS provides permissions to buckets and permissions to objects. A "folder" is not a GCS concept and does not have any properties or permissions.
Make sure you have configured your bucket to use fine-grained permissions, then grant access recursively, for example:
gsutil -m acl ch -r -g All:R gs://test/public/another/*
If that doesn't work, add yourself as a GCS admin with legacy reader/writer permissions (which should be irrelevant, but it worked for me).
I tried all the suggestions here, including granting access with CEL (IAM Conditions). Then I came across the reason why nobody succeeds in resolving this issue: GCP does not treat folders as existing.
From https://cloud.google.com/storage/docs/folders:
Cloud Storage operates with a flat namespace, which means that folders don't actually exist within Cloud Storage. If you create an object named folder1/file.txt in the bucket your-bucket, the path to the object is your-bucket/folder1/file.txt, but there is no folder named folder1; instead, the string folder1 is part of the object's name.
It's just a visual representation that gives us a hierarchical feel for the bucket and the objects within it.
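A quick way to see this with the names from the quote above (the file name is made up): copying an object to a path creates the "folder" implicitly, and listing by that prefix finds it:
gsutil cp file.txt gs://your-bucket/folder1/file.txt
gsutil ls gs://your-bucket/folder1/
# There is no separate folder object; "folder1/" is just part of the object name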
In the Google documentation, it says:
"The maximum number of ACL entries you can create for a bucket or object is 100"
Does that mean I can create just 100 in total, regardless of objects or buckets? Or can I create 100 for each object and bucket?
Any help? Thanks.
All objects and all buckets have an ACL list. Any ACL list may have up to 100 entries, but no more. So a bucket can have 100 entries in its ACL, and an object in that bucket may also have 100 entries in that object's ACL.
Note: it is generally not recommended to place large numbers of ACL entries in an object or bucket's ACL list. Instead, consider one of these alternatives, which both have the advantage of not needing to modify the bucket or object when adding or removing users and groups:
Add the user or groups you need to your project's OWNER, EDITOR, and VIEWER roles, and use those roles in your bucket and object ACLs.
Add the user or groups you need to a google group and then add that google group to your bucket and object ACLs.
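For example, to implement the second alternative with an assumed Google group and bucket:
# Grant the group read access on the bucket and make it the default for new objects
gsutil acl ch -g my-team@googlegroups.com:R gs://example-bucket
gsutil defacl ch -g my-team@googlegroups.com:R gs://example-bucket
# Membership changes are then managed in the group, not by editing ACLs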