Cloud CDN bucket: how to set my landing page without a domain - google-cloud-storage

I have a Cloud Storage bucket with static files in it.
I have set up a load balancer with Cloud CDN enabled on the bucket above.
When I go to the public IP assigned to the load balancer, I get an XML "access denied" error message, since this is just an IP, not a landing page.
When I go to public_ip/index.html, the website loads.
EDIT (removing): The content of the bucket will only be served from a sub-domain of an external domain name, which is why I can't name my bucket after the domain name.
It is possible to name a bucket after a subdomain, and then the landing page definition works, but the base question remains.
Is there any way to set the landing page for the IP address?

Yes, it's possible to configure a landing page for any Cloud Storage bucket using the gsutil command line tool. For example, the following command configures the landing page for the bucket named elving:
gsutil web set -m index.html gs://elving
Unfortunately, it's not currently possible to configure this using the Google Cloud Console. You must use the API directly or use a tool such as gsutil. You can find more information about gsutil at https://cloud.google.com/storage/docs/gsutil.
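If you also want a custom error page (so visitors hitting the bare IP or a missing object see something nicer than the XML error), the same subcommand accepts an error-page flag. A small sketch, assuming the bucket also contains a 404.html:
gsutil web set -m index.html -e 404.html gs://elving
gsutil web get gs://elving
gsutil web get prints the bucket's current website configuration, which is a quick way to confirm the change took effect.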

Related

Use only a domain and disable https://storage.googleapis.com url access

I am a newbie at cloud servers and I've opened a Google Cloud Storage bucket to host image files. I've verified my domain and configured it to serve images via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any solution to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link. And currently this public link can’t be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine. The VM will need a static external IP so that you can map your domain to it. The proxy program will be in charge of returning the requested file from a predefined Cloud Storage bucket, while the bucket itself remains inaccessible to the public (a rough sketch follows the reference list below).
You may find these documents helpful if you are interested in this workaround:
1. Quick start to set up a Linux VM (1).
2. Python API for accessing Cloud Storage files (2).
3. How to download service account keys to grant a program access to a set of services (3).
4. Pricing calculator for getting a picture on how much a VM may cost (4).
(1) https://cloud.google.com/compute/docs/quickstart-linux
(2) https://pypi.org/project/google-cloud-storage/
(3) https://cloud.google.com/iam/docs/creating-managing-service-account-keys
(4) https://cloud.google.com/products/calculator/
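As a minimal sketch of such a proxy, using the Python client from reference (2) together with Flask: the bucket name, route layout, and port below are illustrative assumptions, and it presumes the VM's service account (or a downloaded key) has read access to the bucket.
# Hypothetical proxy: serves objects from a private GCS bucket over HTTP.
# Requires: pip install flask google-cloud-storage
from flask import Flask, Response, abort
from google.cloud import storage

BUCKET_NAME = "example.com"  # placeholder; use your own private bucket

app = Flask(__name__)
client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

@app.route("/")
def index():
    return serve("index.html")

@app.route("/<path:object_path>")
def serve(object_path):
    # get_blob returns None if the object doesn't exist, and loads its metadata
    blob = bucket.get_blob(object_path)
    if blob is None:
        abort(404)
    return Response(blob.download_as_bytes(),
                    content_type=blob.content_type or "application/octet-stream")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
You would then map your domain to the VM's static IP and put the proxy (or a real web server in front of it) on the port you expose.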
My Version:
It seems the solution to this question is really simple: just mount the Google Cloud Storage bucket on a VM instance with Cloud Storage FUSE (gcsfuse).
After mounting, private files from GCS can be accessed through the VM's IP address; the bucket behaves like a local directory.
The detailed documentation about how to set up FUSE in Google Cloud is here.
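A rough sketch of the mount, assuming gcsfuse is already installed on the VM and the bucket is named example.com (both names are placeholders):
sudo mkdir -p /mnt/gcs
gcsfuse example.com /mnt/gcs
A web server running on the VM (nginx, Apache, and so on) can then serve files straight out of /mnt/gcs while the bucket itself stays private.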
There is, but it requires you to do more work.
Your current solution works because you've made access to the GCS bucket (example.com) public and you're then DNS-aliasing to it from your domain.
An alternative approach would be to limit access to the GCS bucket to one (or possibly several) accounts and then run a web server that uses one of those accounts to access your image files. You could then either permit access to your web server to anyone or limit that access as well.
More work for you (and possibly more cost), but more control.

Cannot give Google CDN service account access to bucket

I am trying to give the Google CDN service account access to my bucket as described here: https://cloud.google.com/cdn/docs/using-signed-urls
gsutil iam ch serviceAccount:service-{PROJECT_NUMBER}@cloud-cdn-fill.iam.gserviceaccount.com:objectViewer gs://{BUCKET}
But the response is:
BadRequestException: 400 Invalid argument
Adding it via the cloud console is also impossible, it says "Email addresses and domains must be associated with an active Google Account or Google Apps account."
Am I missing something or is this a bug?
The Cloud CDN cache fill service account is created when you enable signed URLs. The error message suggests there's a problem with the project number or you haven't yet enabled signed URLs for that project. You can enable signed URLs by following the instructions at https://cloud.google.com/cdn/docs/using-signed-urls#creatingkeys. Make sure you enable signed URLs for a backend service or backend bucket in the same project you specify in the gsutil command.
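As a sketch of that setup, assuming a backend bucket named my-backend-bucket (the backend bucket and key names are placeholders):
head -c 16 /dev/urandom | base64 | tr +/ -_ > key.txt
gcloud compute backend-buckets add-signed-url-key my-backend-bucket --key-name my-key --key-file key.txt
gsutil iam ch serviceAccount:service-{PROJECT_NUMBER}@cloud-cdn-fill.iam.gserviceaccount.com:objectViewer gs://{BUCKET}
Once a signed URL key has been added, the service-{PROJECT_NUMBER}@cloud-cdn-fill.iam.gserviceaccount.com account should exist in the project and the gsutil iam ch command should succeed.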

Domain name registration in Route 53

I have purchased a domain from the AWS console (e.g. inetglobal.net).
I then created a bucket sky.inetglobal.net, put an index.html file inside it, and enabled website hosting for the bucket. Now I'm trying to map the bucket endpoint in Route 53 by creating an A record:
sky.inetglobal.net. A ALIAS s3-website-us-east-1.amazonaws.com
So if I type sky.inetglobal.net in a browser, the content of the HTML file should be visible, but it does not resolve to the subdomain sky.inetglobal.net.
Can anyone help with this?
Try adding another A record for the sub-domain with the same S3 website endpoint as the alias target, and it should work.
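As a sketch with the AWS CLI (ZONE_ID is your own hosted zone's ID; Z3AQBSTGFYJSTF is the fixed alias zone ID AWS documents for the us-east-1 S3 website endpoint, so double-check it against the current endpoint table):
aws route53 change-resource-record-sets --hosted-zone-id ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "sky.inetglobal.net",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z3AQBSTGFYJSTF",
        "DNSName": "s3-website-us-east-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'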

How to host a static website on google cloud storage?

So, I've spent about 5 days searching for an answer here and on Google Docs, including having one of their support people help me. My domain still doesn't resolve to the website.
For the record, the website works if I use the ugly url (http://storage.googleapis.com/7thgradeplay.org/index.html).
I have transferred the domain to google domains, days ago.
I have verified the domain with Google Search Console. Billing is enabled and accruing. Public_html is set on all files and folders.
I am using Google Domains name servers. I am not using Google Cloud DNS.
Per Google support:
Synthetic Records: 302 redirect #.7thegradeplay.org to www.7thgradeplay.org
Custom Resource Records: www CNAME 7thegradeplay.org
Does this matter? storage bucket name is 7thegradeplay.org.
I think that's about all the config I've done.
All of these changes were done on Friday (3 days ago), and I still get a 404 error when I try to go to the website. I have followed the instructions and tried to troubleshoot with these pages:
https://cloud.google.com/dns/troubleshooting
https://cloud.google.com/storage/docs/hosting-static-website
The only thing I varied was the name of the bucket in storage. I used a bucket name without the leading 'www.' Please don't tell me this is all it takes to break it.
All help is appreciated.
P.S. I added a bucket called www.7thegradeplay.org with all the same files and waited 15 minutes. Still a 404 error.
P.P.S. I found an answer, but it didn't work: Connect Google domain to Google Cloud Bucket.
I will retry step #5 in the PPS above tomorrow, after the PS change has had time to 'stew'.
Again, any help is appreciated.
Your bucket name needs to match the URL exactly, so if you're visiting www.7thgradeplay.org, the bucket also needs to be named www.7thgradeplay.org.
Similarly, the DNS record for "www.7thgradeplay.org" must be a CNAME to "c.storage.googleapis.com.".
Checking DNS, I see a CNAME from "www.7thgradeplay.org" to "7thgradeplay.org". It needs to point to "c.storage.googleapis.com." instead. If you've already changed that, you may need to wait a while for it to propagate; DNS can be slow to update.
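One quick way to check what the record currently returns (assuming you have dig available):
dig +short www.7thgradeplay.org CNAME
Once the change has propagated, that should print c.storage.googleapis.com. rather than 7thgradeplay.org.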
Follow the steps below to host a static website on Google Cloud Storage.
Creating a CNAME record in DNS:
Go to your domain service provider's account, find the DNS settings, and create a CNAME record that points to c.storage.googleapis.com.
NAME               TYPE     DATA
www.example.com    CNAME    c.storage.googleapis.com
After adding the CNAME record it will take some time for it to propagate.
Creating a Cloud Storage bucket:
Go to the Google Cloud Console, select Storage from the side menu, and click Create bucket. The bucket's name must match the CNAME record that you created for your domain in the DNS settings. For example, if you added a CNAME record pointing www.example.com to c.storage.googleapis.com, then create a bucket named www.example.com.
Uploading files to the Cloud Storage bucket:
In the list of buckets, click the name of the bucket you created. Create an index.html file on your local system for your website's home page, then click the Upload files button and select that index.html file.
Browsing the static website:
Browse your website using your domain name in your web browser. For example, if your domain name is www.example.com, go to http://www.example.com.
You have now successfully hosted your website on Google Cloud Storage.
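If you prefer the command line, the same steps can be sketched with gsutil (the bucket name mirrors the example above, and the allUsers grant assumes you want the whole site publicly readable):
gsutil mb gs://www.example.com
gsutil cp index.html gs://www.example.com
gsutil iam ch allUsers:objectViewer gs://www.example.com
gsutil web set -m index.html gs://www.example.com
Note that creating a bucket named after a domain requires the domain to be verified for your account first.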

Is it possible to use something like proxy forwarding on an S3 website?

I'm planning to host an S3 website with the following DNS.
S3 bucket name: example.com
S3 endpoint: example.com.s3-website.amazonaws.com
I also want a separate manual page for my service:
S3 bucket name: manual
S3 endpoint: manual.s3-website.amazonaws.com
When I enter example.com/manual, it should forward all requests to my manual S3 bucket, but the URL should not change.
For example, when I access http://example.com/manual/en/index.html,
it should show manual.s3-website.amazonaws.com/en/index.html,
but the URL should not change.
I tried the redirection rules under the bucket's 'Static website hosting' properties, but they just redirect to my manual page (the URL changes).
I'm also using Jekyll, but it doesn't support proxy forwarding, unlike nginx.
Is there any solution, guide, or example I can refer to?
It would be possible if you used CloudFront. You don't have to change your S3 setup.
create an origin for each bucket
create a second behavior for the /manual path
And you're done.
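As a rough sketch of the distribution settings (the bucket endpoints are the ones from the question; this is one possible layout, not the only one):
Origin 1: example.com.s3-website.amazonaws.com (default cache behavior, path pattern *)
Origin 2: manual.s3-website.amazonaws.com (second cache behavior, path pattern /manual/*)
Point the example.com DNS record at the CloudFront distribution rather than at the S3 website endpoint. One caveat: CloudFront forwards the matched path to the origin unchanged, so a request for /manual/en/index.html reaches the manual bucket as /manual/en/index.html; either keep that bucket's content under a manual/ prefix or strip the prefix with a CloudFront Function or Lambda@Edge.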