Google Storage access based on IP Address - google-cloud-storage

Is there a way to give access to a Google Cloud Storage bucket based on the IP address the request is coming from?
On Amazon S3, you can just set this in the access policy, like this:
"Condition" : {
"IpAddress" : {
"aws:SourceIp" : ["192.168.176.0/24","192.168.143.0/24"]
}
}
I do not want to use a signed url.

The updated answers on this page are only partially correct and should not be recommended for the use case of controlling access to Cloud Storage objects.
Access Context Manager (ACM) defines rules to allow access (e.g. from a given IP address).
VPC Service Controls create an "island" around a project, and ACM rules can be attached to it. These rules are "ingress" rules, not "egress" rules, meaning "anyone at that IP can get into all resources in the project with the correct IAM permissions".
An ACM rule specifying an IP address will allow that IP address to access all Cloud Storage objects and all other protected resources owned by that project. This is usually not the intended result. You cannot apply an IP address rule to an object, only to all objects in a project. VPC Service Controls are designed to prevent data from getting out of a project and are NOT designed to allow untrusted anonymous users access to a project's resources.

UPDATE: This is now possible using VPC Service Controls.
Original answer: No, this is not currently possible.
There is currently an open feature request to restrict access to a Google Cloud Storage bucket by IP address.

The VPC Service Controls [1] allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets (and some others) to constrain data within a VPC and help mitigate data exfiltration risks.
[1] https://cloud.google.com/vpc-service-controls/

I used VPC Service Controls on behalf of a client recently to attempt to accomplish this. You cannot use VPC Service Controls to whitelist an IP address on a single bucket; Jterrace is right, there is no such solution for that today.
However, using VPC Service Controls you can draw a service perimeter around the Google Cloud Storage (GCS) service as a whole within a given project, then apply an access level to that perimeter to allow an IP address or IP address range access to the service (and the resources within it). The implication is that any new bucket created within the project will be created inside the service perimeter and thus be regulated by the access levels applied to the perimeter, so you'll likely want this to be the sole bucket in the project.
Note that the service perimeter only affects services you specify. It does not protect the project as a whole.
Developer permissions needed:
Access Context Manager
VPC Service Controls
Steps to accomplish this (a rough command-line sketch follows the list):
1. Use VPC Service Controls to create a service perimeter around the entire Google Cloud Storage service in the project of your choosing.
2. Use Access Context Manager to create access levels for the IP address you want to whitelist and the users/groups who will have access to the service.
3. Apply these access levels to the service perimeter created in the previous step (it will take 30 minutes for this change to take effect).
Note: best practice would be to provide access to the bucket using a service account or a users/groups ACL, if that is possible. I know it isn't always so.
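For reference, here is a rough gcloud sketch of those steps. All names, the IP range, and the spec-file layout are placeholders of mine rather than part of the original answer, so verify them against the current Access Context Manager and VPC Service Controls documentation:
# Find the access policy for your organization (you need its numeric ID).
gcloud access-context-manager policies list --organization=ORGANIZATION_ID

# Create an access level that matches the IP range you want to whitelist.
# CONDITIONS.yaml (illustrative content):
#   - ipSubnetworks:
#     - 203.0.113.0/24
gcloud access-context-manager levels create allow_office_ip \
    --title="Allow office IP" \
    --basic-level-spec=CONDITIONS.yaml \
    --policy=POLICY_ID

# Create a perimeter around the Cloud Storage service in the project
# and attach the access level to it.
gcloud access-context-manager perimeters create storage_perimeter \
    --title="Storage perimeter" \
    --resources=projects/PROJECT_NUMBER \
    --restricted-services=storage.googleapis.com \
    --access-levels=allow_office_ip \
    --policy=POLICY_ID
Keep in mind that the access level gates every restricted service inside the perimeter, not a single bucket, as noted above.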

Related

How to Manage IBM Cloud Key-Protect Instance from CLI when Private Network Only Policy is Applied?

While doing some testing of the IBM Cloud Security and Compliance items, specifically the CIS Benchmarks for Best Practices, one item I was non-compliant on was in Cloud Key Protect, for the goal "Check whether Key Protect is accessible only by using private endpoints".
My Key Protect instance was indeed set to "Public and Private", so I changed it to Private. This change now requires me to manage my Key Protect instance from the CLI.
When I try to even look at my Key Protect instance policy from the CLI, I receive the following error:
ibmcloud kp instance -i my_instance_id policies
Retrieving policy details for instance: my_instance_id...
Error while getting instance policy: kp.Error: correlation_id='cc54f61d-4424-4c72-91aa-d2f6bc20be68', msg='Unauthorized: The user does not have access to the specified resource'
FAILED
Unauthorized: The user does not have access to the specified resource
Correlation-ID:cc54f61d-4424-4c72-91aa-d2f6bc20be68
I'm confused - I am running the CLI logged in as the tenant admin with an access policy of "All resources in account (including future IAM enabled services)".
What am I doing wrong here?
Private endpoints are only accessible from within IBM Cloud. If you connect from the public internet, access should be blocked.
There are multiple ways to work with such a policy in place. One is to deploy (a VPC with) a virtual machine on a private network, then connect to it with a VPN or Direct Link. That way your resources are not accessible from the public internet, but only through private connectivity. You can continue to use the IBM Cloud CLI, but set it to use private endpoints, roughly as sketched below.
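As a rough sketch (the private endpoint hostnames are assumptions drawn from IBM Cloud's endpoint naming, so double-check them in the docs), the CLI part could look like this from a host that has private connectivity:
# Log in against the private platform API endpoint instead of cloud.ibm.com.
ibmcloud login -a https://private.cloud.ibm.com --apikey @/path/to/apikey.json

# Re-run the Key Protect command from the question over the private network.
# The kp plugin may also need to be pointed at the service's private endpoint
# (pattern: private.<region>.kms.cloud.ibm.com) - check the plugin docs for the exact option.
ibmcloud kp instance -i my_instance_id policies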

Service Fabric Explorer: Limit Access to Single Applications

Is there the possibility to limit the access to Service Fabric Explorer to certain services or specific users?
We have a scenario where we host multiple services on the same cluster. The log information in the Explorer should only be visible to the 'owner' of each service.
No.
You can use access control to limit access to certain cluster operations for different groups of users. This helps make the cluster more secure. Two access control types are supported for clients that connect to a cluster: Administrator role and User role.
Users who are assigned the Administrator role have full access to management capabilities, including read and write capabilities. Users who are assigned the User role, by default, have only read access to management capabilities (for example, query capabilities). They also can resolve applications and services.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security#role-based-access-control-rbac
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-security-roles
You can assign different roles to groups, but you cannot scope a role to a service, so basically it's all or nothing; you cannot get granular control. A rough sketch of the cluster-level role assignment follows below.
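For completeness, client roles are assigned at cluster scope, for example via client certificate thumbprints. A rough Azure CLI sketch; the exact flag names are assumptions on my part, so verify them with az sf cluster client-certificate add --help:
# Grant one client certificate the Administrator role and another the read-only User role.
# Thumbprints, resource group, and cluster names are placeholders.
az sf cluster client-certificate add \
    --resource-group my-rg --cluster-name my-cluster \
    --thumbprint 5F3660C715EBBDA31DB1FFDCF508302348DE8E7A --is-admin

az sf cluster client-certificate add \
    --resource-group my-rg --cluster-name my-cluster \
    --thumbprint 7B0366C715EBBDA31DB1FFDCF508302348DE8A12
Either way, the role applies to the whole cluster, not to an individual application or service.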

Use only a domain and disable https://storage.googleapis.com url access

I am a newbie at cloud servers and I've opened a Google Cloud Storage bucket to host image files. I've verified my domain and configured the bucket to serve images via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any solution to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link, and currently this public link can't be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine. The VM will need a static external IP so that you can map your domain to it. The proxy program will be in charge of returning the requested file from a predefined Cloud Storage bucket while the bucket itself remains inaccessible to the public.
You may find these documents helpful if you are interested in this workaround:
1. Quick start to set up a Linux VM (1).
2. Python API for accessing Cloud Storage files (2).
3. How to download service account keys to grant a program access to a set of services (3).
4. Pricing calculator for getting a picture on how much a VM may cost (4).
(1) https://cloud.google.com/compute/docs/quickstart-linux
(2) https://pypi.org/project/google-cloud-storage/
(3) https://cloud.google.com/iam/docs/creating-managing-service-account-keys
(4) https://cloud.google.com/products/calculator/
My Version:
It seems the solution to this question is really simple: just mount the Google Cloud Storage bucket on a VM instance with Cloud Storage FUSE (gcsfuse).
After mounting, private files from GCS can be accessed through the VM's IP address; the mount makes the Cloud Storage bucket act like a local directory. The detailed documentation about how to set up FUSE in Google Cloud is here.
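A minimal sketch of that setup, assuming the bucket is named example.com as in the question; the mount point and paths are placeholders:
# On the VM (which authenticates via its service account / application default credentials):
sudo mkdir -p /mnt/images
gcsfuse example.com /mnt/images

# Objects in the bucket now appear as local files, e.g.:
ls /mnt/images/images/tiny.png
A web server on the VM (nginx, Apache, or your own app) can then serve files from /mnt/images while the bucket itself stays private.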
There is, but it requires you to do more work.
Your current solution works because you've made the GCS bucket (example.com) public and you're then DNS-aliasing it from your domain.
An alternative approach would be for you to limit access to the GCS bucket to one (possibly several) accounts and then run a web-server that uses one of the accounts to access your image files. You could then also either permit access to your web-server to anyone or also limit access to it.
More work for you (and possibly cost) but more control.

In CloudFormation How do I reference the VPC Id from a chosen subnet id?

I am creating an EC2 instance with a CF template, and will choose the subnet as one of my parameters. Once I have chosen the subnet, is there a way for CloudFormation to find the VPC Id of that subnet?
I'm afraid there isn't an easy way to just get the VPC Id automatically from a Subnet Id; it's a CloudFormation limitation at the moment. You have two main options for getting around this:
Easiest option: pass in a VPC Id parameter that matches your Subnet Id.
Harder option: create a custom resource (a Lambda function is usually the easiest way) that gets the VPC Id from the Subnet Id. Amazon has a sample on creating a custom resource to look values up.
Looking around, I found that someone has built a library for executing CLI commands from a Lambda custom resource. This could be a good option if building your own Lambda function is a bit much. Either way, the underlying lookup is shown in the sketch below.
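Whichever route you take for the custom resource, the lookup it performs is just a describe-subnets call. As an illustration (the subnet ID is a placeholder), this is what the Lambda would effectively do, shown here with the AWS CLI:
# Return the VpcId that owns a given subnet.
aws ec2 describe-subnets \
    --subnet-ids subnet-0123456789abcdef0 \
    --query 'Subnets[0].VpcId' \
    --output text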

Authorizing GCE to Access GCS

I have a Django app running on my Google Compute Engine instance, and it needs to upload video files to my bucket in Google Cloud Storage. When searching for authentication methods, I found this doc. Under the "Setting the scope of service account access for instances" section, it says I need to enable Cloud Platform access in the settings when creating the VM. I wonder if that is a must, and whether there's any other way I can access my Cloud Storage bucket from my apps on Compute Engine, because creating a new VM and setting up the environment is very time-consuming. Any input would be greatly appreciated. Thanks in advance.
As documented on the page you linked to, to authenticate from Google Compute Engine to Google Cloud Storage, you have several options:
Use VM scopes: this must be set before creating the VM, because scopes are immutable once the VM is created. If you want read-only access, you need to add the scope devstorage.read_only (short form) or https://www.googleapis.com/auth/devstorage.read_only (full path). If you want read-write access, you should use the scope devstorage.read_write (short form) or https://www.googleapis.com/auth/devstorage.read_write (full path).
Note: there's also a gcloud beta compute instances set-scopes command to update a GCE VM's scopes after the VM has been created.
An alternative to using scopes is to use JSON authentication tokens, such as service account keys, which can be used by Google API client libraries to connect to Google Cloud Storage. A rough sketch of both options follows.
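As a sketch of both options (instance name, zone, and key path are placeholders of mine; check the current gcloud docs, since the scope-changing command has been in beta and typically requires the instance to be stopped first):
# Option 1: change the VM's access scopes.
gcloud compute instances stop my-django-vm --zone=us-central1-a
gcloud beta compute instances set-scopes my-django-vm \
    --zone=us-central1-a \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write
gcloud compute instances start my-django-vm --zone=us-central1-a

# Option 2: use a service account key with the Cloud Storage client library
# instead of relying on VM scopes; the app picks it up via this variable.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json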