Creating custom ACL by IP for Cloud Object Storage (S3) - ibm-cloud

I am looking for a way to set a custom ACL policy on one of my Cloud Object Storage (S3) buckets, but all the examples I see at https://ibm-public-cos.github.io/crs-docs/crs-api-reference only show how to restrict by username. Essentially I would like to make my bucket private unless the request comes from a specific IP address.

Unfortunately, access control is pretty coarse at the moment and is only capable of granting and restricting access to other object storage instances. IP whitelisting is a priority for us and is on the roadmap, but it is not currently supported. Granular access control via policies will be available later this year.

Related

Can I restrict which IP address can access objects in a bucket?

My organisation intends to provide data to 3rd parties by putting that data into files in a GCS bucket and granting the 3rd party's GCP service account access to the data. We want to lock down that access as much as possible; one thing we'd especially like to do is limit the IP addresses that are allowed to issue requests to get the data.
I have been poring over the IAM Conditions documentation:
Overview of IAM Conditions
Attribute reference for IAM Conditions
however I'm not able to understand the docs in sufficient detail to know if what I want to do is possible.
I read this which sounded promising:
The access levels attribute is derived from attributes of the request, such as the origin IP address, device attributes, the time of day, and more.
https://cloud.google.com/iam/docs/conditions-attribute-reference#access-levels
but it seems as though that only applies when IAP is being used:
The access levels attribute is available only when you use Identity-Aware Proxy
Is there a way to restrict which IP addresses can be used to access data in a GCS bucket?
I think I've just found something that will work: VPC service perimeters. I've tried it, and it seems to be just what I need.
This blog post covers it very well: https://medium.com/google-cloud/limit-bucket-access-in-google-cloud-storage-by-ip-address-d59029cab9c6
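A rough sketch of what that can look like with gcloud (the policy ID, project number, level name, and IP range below are placeholders, and flag availability may vary by gcloud version):

# Hypothetical access level that matches only requests coming from a given IP range
cat > corp-ips.yaml <<EOF
- ipSubnetworks:
  - 203.0.113.0/24
EOF
gcloud access-context-manager levels create corp_ips \
  --title="Corp IPs" \
  --basic-level-spec=corp-ips.yaml \
  --policy=POLICY_ID

# Hypothetical service perimeter that restricts storage.googleapis.com for the project
# to callers matching the access level above
gcloud access-context-manager perimeters create storage_perimeter \
  --title="Storage perimeter" \
  --resources=projects/PROJECT_NUMBER \
  --restricted-services=storage.googleapis.com \
  --access-levels=corp_ips \
  --policy=POLICY_ID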
I just tested something that I had never tried before, and it works!
So, to achieve this, you need 3 things
Your bucket secured with IAM
HTTPS Load Balancer with
Your Bucket as backend bucket
An HTTPS frontend
A domain name, with your DNS A record configured on the Load Balancer IP
Cloud Armor policy
Create an edge policy
Default rule: deny all
Additional rule: allow your IP ranges (use a /32 range for a specific IP)
Add the policy to your backend bucket
Let that cook for a while (about 10 minutes for the load balancer to propagate its configuration, plus certificate provisioning if you use a managed certificate, ...)
Then the requester can perform a GET request on the bucket object, with the access token in the Authorization header, something like this:
curl -H "Authorization: Bearer <ACCESS TOKEN>" https://<DomainName>/path/to/object.file
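If it helps, the Cloud Armor part of the steps above might look roughly like this with gcloud (the policy and backend-bucket names are made up, and the exact flags may depend on your gcloud version):

# Hypothetical edge security policy: deny everything by default, allow one IP
gcloud compute security-policies create allow-my-ip --type=CLOUD_ARMOR_EDGE
gcloud compute security-policies rules update 2147483647 \
  --security-policy=allow-my-ip --action=deny-403
gcloud compute security-policies rules create 1000 \
  --security-policy=allow-my-ip \
  --src-ip-ranges=203.0.113.7/32 \
  --action=allow

# Attach the edge policy to the backend bucket that sits behind the HTTPS load balancer
gcloud compute backend-buckets update my-backend-bucket \
  --edge-security-policy=allow-my-ip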

Can I use Vault like an Amazon KMS service?

I am looking for a system that allows creating and storing symmetric master keys in a safe manner. One such system is Amazon KMS, where I can create a master key per user and use it to encrypt some data (e.g. a user's private keys).
But I need to support several platforms, so I have a question about the Vault project (https://www.vaultproject.io). Is it an appropriate tool for this task?
I have found that Vault supports authentication functionality (https://www.vaultproject.io/docs/auth/userpass.html), and I am wondering whether it is okay to use this API intensively and store 50k users or so.
That said, it looks like these services solve different problems, and Vault is not supposed to be used like the Amazon KMS service. But I need to discuss this idea with someone in order to be completely sure.
Many thanks!
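For context, enabling the userpass auth method and creating a user goes something like this with the Vault CLI (the username, password, and policy name here are just placeholders):

# Enable username/password authentication and create a user bound to a policy
vault auth enable userpass
vault write auth/userpass/users/alice password="s3cr3t" policies="user-keys"

# The user can then log in and obtain a token
vault login -method=userpass username=alice password="s3cr3t"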
You may look into the Cubbyhole backend for Vault. This backend works like a unique space for each token. Destroying the access token deletes all the data stored in its cubbyhole space.
From Cubbyhole authentication principles:
The cubbyhole backend is a simple filesystem abstraction similar to the generic backend (which is mounted by default at secret/) with one important twist: the entire filesystem is scoped to a single token and is completely inaccessible to any other token.
In other words, it does not matter what policies are attached to the token; what matters is the token itself. Only that single token can be used to set or retrieve values in its cubbyhole.
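As a small illustration (the path and value here are arbitrary), writing and reading a cubbyhole secret with the Vault CLI looks like this; any other token reading the same path would see nothing:

# Each token gets its own private cubbyhole/ tree
vault write cubbyhole/master-key value="base64-encoded-key-material"
vault read cubbyhole/master-key

# Revoking the token destroys everything stored under its cubbyhole
vault token revoke <token>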

Idiomatic Way to Secure a Google Storage Bucket

My objective is to grant read-write permissions on a Google Storage Bucket to a Compute Instance Template in a way that grants only the permissions that are necessary, but I'm confused about what's considered idiomatic in GCP given the many access control options for Google Storage Buckets.
Currently, I am creating a Managed Instance Group and a Compute Instance Template and assigning the following scopes:
https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/compute.readonly
https://www.googleapis.com/auth/devstorage.read_write
to the default Service Account on the Compute Instance. This seems to work fine, but given the link above, I'm wondering if I should explicitly set the Access Control List (ACL) on the Storage Bucket to private as well? But that same page also says "Use ACLs only when you need fine-grained control over individual objects," whereas in this case I need a coarse-grained policy. That makes me wonder if I should use an IAM Permission (?) but where would I assign that?
What's the idiomatic way to configure this?
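For reference, the template is currently created along these lines (names are placeholders; the scope aliases correspond to the URLs listed above):

# Instance template using the default service account with the three scopes above
gcloud compute instance-templates create my-template \
  --machine-type=n1-standard-1 \
  --scopes=userinfo-email,compute-ro,storage-rw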
It turns out the key documentation here is the Identity and Access Management overview for Google Cloud Storage. From there, I learned the following:
GCS Bucket ACLs specify zero or more "entries", where each entry grants a permission to some scope such as a Google Cloud User or project. ACLs are now considered a legacy method of assigning permissions to a bucket because they only allow the coarse-grained permissions READER, WRITER, and OWNER.
The preferred way to assign permissions to all GCP resources is to use an IAM Policy (overview). An IAM Policy is attached to either an entire Organization, a Folder of Projects, a specific Project, or a specific Resource and also specifies one or more "entries" where each entry grants a role to one or more members.
With IAM Policies, you don't grant permissions directly to members. Instead, you declare which permissions a role has, and grant members a role.
Ultimately, the hope is that you assign IAM Policies at the appropriate level of the hierarchy, knowing that lower levels of the hierarchy (like individual resources) inherit the permissions declared by the IAM Policies at higher levels (like at the Project level).
Based on this, I conclude that:
You should try to assign permissions to a GCS Bucket by assigning IAM Policies at the right level of the hierarchy.
However, to limit permissions on a per-object basis, you must use ACLs.
When a Bucket is newly created, unless you specify otherwise, it is assigned the default canned ACL of projectPrivate.
As of this answer, Terraform does not yet have mature support for IAM Policies and the google_storage_bucket_acl resource represents an interface to a legacy approach to securing a Bucket.
Caveat: I'm only summarizing the docs here and have very limited practical experience with Google Cloud so far! Any corrections to above are welcome.
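To make the IAM route concrete, granting a role on a single Bucket (rather than project-wide) can be done with gsutil; the service account and bucket names below are placeholders:

# Grant the instance's service account object read/write on one bucket
gsutil iam ch \
  serviceAccount:my-app@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
  gs://my-bucket

# Inspect the resulting bucket-level IAM policy
gsutil iam get gs://my-bucket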

How to setup granular privileges on GCS?

I am working on a project where I would like a developer to have access to read/write to GCS, but not necessarily have access to uploading code in App Engine. I don't see options in the web console for specifying rights access. How can I setup specific privileges that I'd like a user to have? Thanks.
Basically, if you want a team member not to be allowed to deploy the application or modify and configure its resources, he or she must have only the "Can View" access level for the project.
Then you have to grant the respective permission (WRITE) on the bucket, scoped to a "Google account email address" (in your case, the email address of your developer).
As the GCS documentation says, there are three ways to specify ACLs for buckets and objects, using:
The acl query string parameter to specify ACLs for certain scopes (here)
The x-goog-acl request header to specify predefined ACLs (here)
The defaultObjectACL query string parameter to change the default object ACL for all objects in a certain bucket (here)
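If it helps, the same kind of grant can also be made with gsutil rather than raw API calls; the developer address and bucket name below are placeholders:

# Grant WRITE on the bucket to the developer's Google account
gsutil acl ch -u developer@example.com:W gs://my-bucket

# Verify the resulting ACL
gsutil acl get gs://my-bucket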

Google Cloud Storage Budget Limit

Does Google Cloud Storage allow setting a monthly budget limit, similar to the one available for Google App Engine?
Google Cloud Storage does not implement usage limits on the XML API or for HTTP GETs of public objects.
It is possible to enable access logs: https://developers.google.com/storage/docs/accesslogs
This would give you detailed logs of all access to your objects. You could monitor the logs, and if the usage is higher than you want to allow, change the ACL on your objects to disable further access.
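A rough sketch of that approach with gsutil (bucket names and the log prefix are placeholders): enable access logging into a separate log bucket, and if usage grows beyond what you want to pay for, remove public access again:

# Write access logs for my-bucket into gs://my-log-bucket
gsutil logging set on -b gs://my-log-bucket -o access-log gs://my-bucket

# If traffic exceeds your budget, drop the public AllUsers grant from all objects
gsutil acl ch -d AllUsers gs://my-bucket/**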