My organisation intends to provide data to 3rd parties by putting that data into files in a GCS bucket and granting the 3rd party's GCP service account access to the data. We want to lock down that access as much as possible; in particular, we'd like to limit the IP addresses that are allowed to issue requests to get the data.
I have been poring over the IAM Conditions documentation:
Overview of IAM Conditions
Attribute reference for IAM Conditions
however I'm not able to understand the docs in sufficient detail to know if what I want to do is possible.
I read this which sounded promising:
The access levels attribute is derived from attributes of the request, such as the origin IP address, device attributes, the time of day, and more.
https://cloud.google.com/iam/docs/conditions-attribute-reference#access-levels
but it seems as though that only applies when IAP is being used:
The access levels attribute is available only when you use Identity-Aware Proxy
Is there a way to restrict which IP addresses can be used to access data in a GCS bucket?
I think I've just found something that will work: VPC Service Controls service perimeters. I've tried it and it does seem to be just what I need.
This blog post covers it very well: https://medium.com/google-cloud/limit-bucket-access-in-google-cloud-storage-by-ip-address-d59029cab9c6
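In case it's useful to anyone else, the setup boils down to an access level that matches the third party's IP range plus a service perimeter that restricts storage.googleapis.com. A rough sketch with the gcloud CLI; the policy ID, level/perimeter names, project number and CIDR below are all placeholders, so check the flags against the current docs before relying on this:

# conditions.yaml - the IP range(s) the third party will call from (placeholder CIDR)
- ipSubnetworks:
  - 203.0.113.0/24

# Create the access level from that spec (policy ID, name and title are placeholders)
gcloud access-context-manager levels create third_party_ips \
    --policy=1234567890 \
    --title="Third party IPs" \
    --basic-level-spec=conditions.yaml

# Put the project hosting the bucket inside a perimeter that restricts Cloud Storage
# and only admits requests matching the access level above
gcloud access-context-manager perimeters create third_party_perimeter \
    --policy=1234567890 \
    --title="Third party perimeter" \
    --resources=projects/111111111111 \
    --restricted-services=storage.googleapis.com \
    --access-levels=third_party_ips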
I just tested something that I had never tried before, and it works!
So, to achieve this, you need 3 things:
Your bucket secured with IAM
An HTTPS Load Balancer, with:
  Your bucket as a backend bucket
  An HTTPS frontend
  A domain name, with your DNS A record pointing at the Load Balancer IP
A Cloud Armor policy:
  Create an edge policy
  Default rule: deny all
  Additional rule: allow your IP ranges (use a /32 range for a specific IP)
  Attach the policy to your backend bucket
Let that cook for a while (about 10 minutes for the load balancer to propagate its configuration, plus certificate provisioning if you use a managed certificate, ...).
Then the requester can perform a GET request for the bucket object, with an access token in the Authorization header, something like this:
curl -H "Authorization: Bearer <ACCESS TOKEN>" https://<DomainName>/path/to/object.file
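For completeness, here is roughly what the Cloud Armor / backend bucket part of that setup can look like with the gcloud CLI. This is only a sketch: every name and the IP range are placeholders, the frontend pieces are omitted, and the flags should be double-checked against the current docs.

# Backend bucket pointing at the existing GCS bucket
gcloud compute backend-buckets create third-party-backend \
    --gcs-bucket-name=my-data-bucket

# Edge security policy: deny by default, allow only the third party's IP
gcloud compute security-policies create allow-third-party --type=CLOUD_ARMOR_EDGE
gcloud compute security-policies rules update 2147483647 \
    --security-policy=allow-third-party --action=deny-403
gcloud compute security-policies rules create 1000 \
    --security-policy=allow-third-party \
    --src-ip-ranges=203.0.113.10/32 --action=allow

# Attach the policy to the backend bucket
gcloud compute backend-buckets update third-party-backend \
    --edge-security-policy=allow-third-party

# (The URL map, managed certificate, HTTPS proxy and forwarding rule from the
#  list above still need to be created and pointed at this backend bucket.)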
Is it possible to obtain a static IP address for a Google Cloud Storage bucket for use with DNS? I wish to host it at mydomain.com, and because I also have e-mail at mydomain.com, I cannot use a DNS CNAME record -- I need to use an IP address and a DNS A record.
You can, but doing so requires using Google Cloud Load Balancer: https://cloud.google.com/compute/docs/load-balancing/http/adding-a-backend-bucket-to-content-based-load-balancing. The upside of this approach is that it comes with a number of useful features, like mapping a collection of buckets and compute resources to a single domain, as well as the static IP address you want. The downside is that there's additional cost and complexity.
I recommend just using a subdomain and a CNAME record, if you don't need any of the other features.
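If you do end up needing the static IP, a minimal sketch of the load balancer route with the gcloud CLI looks roughly like this (all names are placeholders, and an HTTPS frontend with a certificate would use a target-https-proxy instead):

# Reserve a global static IP and put the bucket behind an HTTP load balancer
gcloud compute addresses create site-ip --global
gcloud compute backend-buckets create site-backend --gcs-bucket-name=my-site-bucket
gcloud compute url-maps create site-map --default-backend-bucket=site-backend
gcloud compute target-http-proxies create site-proxy --url-map=site-map
gcloud compute forwarding-rules create site-frontend \
    --global --address=site-ip --target-http-proxy=site-proxy --ports=80
# Then point an A record for mydomain.com at the reserved IP.

The CNAME alternative is just a DNS record on the subdomain, e.g. www.mydomain.com CNAME c.storage.googleapis.com., with the bucket named after the subdomain.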
Check the Google documentation.
You can manage it on the instance page, in the Networking section.
I am looking for a way to set a custom ACL policy on one of my Cloud Object Storage (S3) buckets, but all the examples I see at https://ibm-public-cos.github.io/crs-docs/crs-api-reference only show how to restrict by username. Essentially I would like to make my bucket private unless the request is coming from a specific IP address.
Unfortunately, access control is pretty coarse at the moment and is only capable of granting and restricting access to other object storage instances. IP whitelisting is a priority for us and is on the roadmap, but it is not currently supported. Granular access control via policies will be available later this year.
Why can't I reuse names from endpoints that have been previously deleted? For example, if I create an endpoint named "acme-cdn1", delete it, and try to create a new endpoint with the same name, I get the following message: "Error, that endpoint name already exists." Is it necessary to delete the entire CDN profile in order to reuse old endpoint names?
No, you cannot.
A CDN endpoint name stays reserved for some time once it has been created, even after you delete it. This is to prevent other people from creating a CDN endpoint right after you delete yours and receiving your traffic, because CDN setup takes 3+ hours to propagate.
For example, let's say I created a CDN endpoint called myendpoint.azureedge.net and was using it to stream my pictures, and then I deleted it. If you then created an endpoint called myendpoint.azureedge.net, visitors to that URL could still see my pictures, even though you had already set a different origin.
Such an operation would not complete for at least two hours. In that window your CDN endpoint would not be usable, and you would be billed for traffic that isn't yours, which is not acceptable.
I created a bucket for my root folder in the US Standard region but when I created my www. subdomain that could be redirected to the root folder I placed it in the Oregon region.
The redirect from the address bar is failing (I set it up using buckets>properties>redirect). AWS doesn't seem to allow this swapping between regions, so I deleted and tried to recreate the www. subdomain again, this time in the US Standard region, but it now gives the error, "A conflicting conditional operation is currently in progress against this resource. Please try again."
In short, is there a way to change the region? AWS apparently doesn't allow multiple buckets with the same name (even in separate regions). I am planning to redirect from the domain name I registered using Route 53 anyway, so does this issue even matter? (I won't use 'http://example.com.s3-website-us-east-1.amazonaws.com' or 'http://www.example.com.s3-website-us-east-1.amazonaws.com', because I will hopefully be using 'example.com' or 'www.example.com'.)
Thank you all for the help; I hope this post is specific enough. Cheers from a first post.
“AWS doesn't seem to allow this swapping between regions,”
That's not correct. A bucket configured for redirection does not care where it's redirecting to -- it can redirect to any web site, and the destination doesn't have to be another bucket...so this is a misdiagnosis of the problem you were/are experiencing.
“AWS is apparently not allowing multiple buckets with the same name (even in separate regions)?”
Well... no:
“The bucket namespace is global - just like domain names”
— http://aws.amazon.com/articles/1109#02
Only one bucket of a given name can exist within S3 at any point in time. Because S3 is a massive distributed global system, it can take time (though it should typically only take a few minutes) before you can create the bucket again. That's your conflict -- the deletion hasn't globally propagated.
“After a bucket is deleted, the name becomes available to reuse, but the name might not be available for you to reuse for various reasons. For example, some other account could create a bucket with that name. Note, too, that it might take some time before the name can be reused.”
— http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html
Once you get it created, focus on fixing the redirect. If you haven't yet configured the DNS in Route 53, then that would be the reason the redirect didn't work -- it can't redirect to something that isn't working. S3 accomplishes this magic by sending a browser redirect -- which is why you can redirect anywhere -- it doesn't resolve the new bucket destination internally.
You should be able to redirect using "Redirect all requests to another host name" as long as you have static website hosting enabled on the bucket you are redirecting to.
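For reference, the same thing can be set from the CLI; a hedged sketch, assuming the www bucket should redirect everything to the bare domain (bucket and host names are placeholders):

aws s3api put-bucket-website --bucket www.example.com \
    --website-configuration '{"RedirectAllRequestsTo":{"HostName":"example.com","Protocol":"http"}}'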
No, there's no way to change the region other than deleting the bucket in one region and recreating it in another. Bucket names are unique across all of S3.
You can use Route 53 to create an alias for any bucket by adding a CNAME record; that way, www.yoursite.com maps to something like http://www.example.com.s3-website-us-east-1.amazonaws.com
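A sketch of that record with the AWS CLI, using a placeholder hosted zone ID and domain (for the bare apex domain you would use a Route 53 alias record instead, since a CNAME can't sit at the zone apex):

aws route53 change-resource-record-sets --hosted-zone-id Z1234567890ABC \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "www.example.com.s3-website-us-east-1.amazonaws.com"}]
        }
      }]
    }'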
Hope this helps.
It seems like this would be really, really easy - but I can't get it to work. All I need to do is to be able to serve files from Google Cloud Storage while restricting access to my Google Apps domain. I easily did this before using Google App Engine, simply by choosing that I wanted to limit access to my domain and setting the app.yaml appropriately. I can't find anything that tells me what I might be missing: I've tried using gsutil to set the ACL to restrict to my domain, which processes successfully through the command line, but then when I try to look at the bucket or object permissions through the Cloud web console, I get "unexpected ACL entity type: domain".
I'm trying to access the objects using storage.googleapis.com/bucket/object (with my actual bucket and object names, of course), and I always get a 403 error even though I'm definitely logged in to Gmail. As the administrator of the domain, it seems like it should work for me at least, even if the ACLs were otherwise wrong (and I've tried it both with and without the domain restriction). The only way I can serve content using the above URL is if I make it public - which obviously is NOT what I want to do.
I'm sure I'm missing something completely stupid, or some fundamental principles about how this should work - can anyone give me any ideas?
I'm not 100% sure what your use case is, but I'm guessing that your users are attempting to access the objects directly from a web browser. storage.cloud.google.com accepts Google authorization cookies, which means that if a user is logged in to an appropriate Google account, they can access resources restricted to certain users, groups, or domains. However, the other endpoints do not accept cookies as authorization, and so this use case won't work.
These users have permission to access objects using storage.googleapis.com, but doing so requires explicitly authorizing requests.
In other words, a simple <img src="http://storage.cloud.google.com/bucket/object" /> link will work fine for signed-in users, but using storage.googleapis.com requires explicitly authorizing requests via OAuth 2.0.
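For example, an explicitly authorized request from the command line can look something like this (the bucket and object names are placeholders, and the signed-in account still needs read permission on the object):

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    "https://storage.googleapis.com/my-bucket/my-object"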