Google Cloud load balancer: create a backend service for Firebase Hosting to serve microservices & a frontend SPA from the same domain? - firebase-hosting

Our application is currently focused on a single domain; let's call it mydomain.com.
Currently, we have separate Docker containers on Cloud Run serving microservices from nested paths, such as:
mydomain.com/auth -> auth microservice cloud run container
mydomain.com/files -> storage bucket
mydomain.com/public -> public microservice cloud run container.
The issue I'm having is trying to serve a static SPA published on Firebase Hosting from the root of the domain:
mydomain.com -> Firebase Hosting
However, there doesn't seem to be any option for this in the spec, and load balancing is listed as N/A in the documentation, which makes sense since these are static files served from a CDN.
Is there a way to achieve this via Firebase?

The Google load balancer's rewrite engine is limited to stripping fixed prefixes. For an SPA we need to strip variable suffixes instead. Fortunately, the bucket's HTTP hosting engine allows rewriting unknown URLs to an error page:
gsutil web set -m index.html -e 404.html gs://web-stage/
If we replace 404.html with index.html, we obtain the classic SPA URL mapping:
gsutil web set -m index.html -e index.html gs://web-stage/
The only downside: non-root virtual URLs are returned with HTTP status 404. This is not an actual error and can be ignored.
Thanks to @SebastianG! I had seen the 404 trick earlier but hesitated to implement it: Deploy SPA application on Google Cloud Storage using Load Balancer and CDN
The overall steps are:
# create the bucket and make it publicly readable
gcloud alpha storage buckets create --location=europe-west1 gs://web-stage
gsutil iam ch allUsers:objectViewer gs://web-stage/
# map both the main page and the 404 handler to index.html (the SPA trick above)
gsutil web set -m index.html -e index.html gs://web-stage/
# build the SPA and sync the build output into the bucket
npm install
npm run build
gsutil -m rsync -r build/ gs://web-stage/
Finally, register the bucket as the default route in the load balancer so that unknown URLs trigger the 404 handler in the bucket's HTTP web server; a sketch follows.
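For reference, registering the bucket as the default backend might look like this with gcloud; the backend-bucket and URL-map names (web-stage-backend, web-lb) are hypothetical placeholders, and the route rules for the Cloud Run services are omitted:
# Wrap the bucket in a backend bucket (optionally fronted by Cloud CDN).
gcloud compute backend-buckets create web-stage-backend \
    --gcs-bucket-name=web-stage --enable-cdn
# Make it the default service of the URL map, so any path no other route
# matches falls through to the bucket (and thus to index.html via the 404 trick).
gcloud compute url-maps create web-lb \
    --default-backend-bucket=web-stage-backend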

Related

How to host multiple sites on Azure web app for Containers using docker compose

I would like to use a docker compose file to deploy multiple public endpoints for our Linux-hosted site.
We already have a deployed site whose images are stored in a private ACR and which is hosted on an Azure App Service (using Web App for Containers). It is deployed via Azure DevOps and works well.
We would, however, like to use the same site to host an additional component, an API, so that we end up with these endpoints:
https://www.example.com - the main site
https://www.example.com/api - the api
We would like to avoid a second App Service or a subdomain if possible. We would prefer to serve the API over the same HTTPS certificate and port (443). The website and API share a similar code base.
In the standard App Service world, we could easily have deployed a virtual directory to the main app, which is simple enough.
This model, though, seems more complicated when using containers.
How can we go about this? I've already had a look at this documentation: https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-multi-container-app. However, in this example, the second container is a private one - which doesn't get exposed.
Should we use a docker compose file (example please)? Or alternatively, is there a way we can use the Azure DevOps task to deploy to a virtual directory in the way that I would like? This is the task we are using for the single-container deployment:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-containers?view=azure-devops
Web App for Containers is a type of App Service and, as you have seen, it can only expose one container to the outside (the Internet); the others remain private. So it is not possible to use a multi-container Web App to expose multiple public endpoints, such as the main site and the API, from separate containers.
Given that the Web App exposes only one container, what you can do instead is expose a single container that routes to the endpoints itself, either in your code or through a reverse proxy such as Nginx, and deploy that to Web App for Containers. That way you can serve multiple endpoints from a single App Service.
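To illustrate, here is a minimal sketch of that approach; the service names (proxy, web, api), image names, and ports are assumptions, not taken from your pipeline:
# nginx.conf for the routing container; App Service terminates TLS,
# so nginx only listens on plain HTTP inside the compose network.
cat > nginx.conf <<'EOF'
events {}
http {
  server {
    listen 80;
    location /api/ { proxy_pass http://api:8080/; }  # the API container
    location /     { proxy_pass http://web:8080/; }  # the main site container
  }
}
EOF
# docker-compose.yml: only "proxy" publishes a port, so it is the single
# container App Service exposes; "web" and "api" stay internal.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  proxy:
    image: myacr.azurecr.io/proxy:latest
    ports:
      - "80:80"
  web:
    image: myacr.azurecr.io/web:latest
  api:
    image: myacr.azurecr.io/api:latest
EOF
The proxy image would be built from the stock nginx image with this nginx.conf copied in.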

S3 Hosting + API Gateway

I'm trying to host a static site in S3 with the ability to handle some dynamic content using Lambda/API Gateway, but I can't seem to manage that.
I want URLs to look like this:
example.com/index.html
example.com/images/*
example.com/css/*
example.com/api/* -> API Gateway
Also, when redirecting I'd like to keep example.com as the root domain. I tried RoutingRules in S3, but that redirects from the client. I need this to be transparent to the user, like proxying requests.
Bob's answer is simple and works well for public websites, but if you are looking for alternatives that work for internal sites, or you don't want to use a CDN, you can try the following options.
Option 1 -
This is the most common option. You configure two different DNS hosts, one for static content and one for the API (assuming you enable proper CORS for *.example.com):
example.com (S3) --> S3 static content
api.example.com (API Gateway) --> Lambda
Option 2 -
example.com (API Gateway) --> /apigLambda --> Lambda
example.com (API Gateway) --> /* --> S3 bucket/S3 file
(Screenshots of the API Gateway configuration and of the S3 backend proxy setup omitted.)
Example API URLs -
https://xxx.execute-api.us-east-1.amazonaws.com/dev/apigLambda
https://xxx.execute-api.us-east-1.amazonaws.com/dev/myfilename.css
Reference -
https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-s3.html
Note: in the reference URL above, the bucket name is accepted in the URL path, but my example hides the bucket name, so users have no idea of the S3 bucket name when they see the API Gateway URL.
Option 3 -
As per your comment, use {proxy+} as the resource to proxy S3 with support for sub-folder calls; see the sketch below. As you noted, a plain pass-through proxy doesn't give many options for transforming the HTTP response body, which I believe is still fine since you know your website's content files.
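As a rough illustration, wiring a {proxy+} resource to an S3 path override might look like this with the AWS CLI; the API ID, resource IDs, region, bucket name, and IAM role below are all hypothetical placeholders:
# Create the greedy {proxy+} resource under the API root.
aws apigateway create-resource --rest-api-id abc123 \
    --parent-id root456 --path-part '{proxy+}'
# Expose GET on it and declare the "proxy" path parameter.
aws apigateway put-method --rest-api-id abc123 --resource-id res789 \
    --http-method GET --authorization-type NONE \
    --request-parameters 'method.request.path.proxy=true'
# Integrate with S3, mapping {proxy} into the object key; the role must
# allow s3:GetObject on the bucket.
aws apigateway put-integration --rest-api-id abc123 --resource-id res789 \
    --http-method GET --type AWS --integration-http-method GET \
    --uri 'arn:aws:apigateway:us-east-1:s3:path/my-bucket/{proxy}' \
    --credentials 'arn:aws:iam::123456789012:role/apigw-s3-read' \
    --request-parameters 'integration.request.path.proxy=method.request.path.proxy'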
You can configure this by putting a CloudFront distribution in front of both the API Gateway API and the S3 bucket for static content. This would also allow you to take advantage of CloudFront's edge caching.
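For illustration, a minimal sketch of such a distribution via the AWS CLI; the bucket, API ID, stage name, and caller reference are hypothetical, and the legacy ForwardedValues syntax is used for brevity:
# Two origins (S3 for static files, API Gateway for /api/*); index.html
# is served at the root via DefaultRootObject.
cat > distribution-config.json <<'EOF'
{
  "CallerReference": "spa-plus-api-example",
  "Comment": "Static S3 site with /api/* routed to API Gateway",
  "Enabled": true,
  "DefaultRootObject": "index.html",
  "Origins": {
    "Quantity": 2,
    "Items": [
      { "Id": "s3-static",
        "DomainName": "example-bucket.s3.amazonaws.com",
        "S3OriginConfig": { "OriginAccessIdentity": "" } },
      { "Id": "apigw",
        "DomainName": "xxx.execute-api.us-east-1.amazonaws.com",
        "OriginPath": "/dev",
        "CustomOriginConfig": { "HTTPPort": 80, "HTTPSPort": 443,
                                "OriginProtocolPolicy": "https-only" } }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-static",
    "ViewerProtocolPolicy": "redirect-to-https",
    "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" } },
    "MinTTL": 0
  },
  "CacheBehaviors": {
    "Quantity": 1,
    "Items": [
      { "PathPattern": "/api/*",
        "TargetOriginId": "apigw",
        "ViewerProtocolPolicy": "https-only",
        "ForwardedValues": { "QueryString": true, "Cookies": { "Forward": "all" } },
        "MinTTL": 0 }
    ]
  }
}
EOF
aws cloudfront create-distribution --distribution-config file://distribution-config.json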

Routing requests to index on Google Storage Bucket

In an attempt to host a static SPA on Google Cloud Storage, I am wondering if it is possible at all, considering a typical SPA has dynamic routes.
For example, for a request to the SPA such as:
www.myapp.com/user/jon
you would configure the server to route the request to the index.html file, or else it will return a 404.
How can I configure the bucket to redirect all requests (even better if I can specify which) to the index.html in the bucket?
Have you looked at setting the website metadata attribute on your bucket?
https://cloud.google.com/storage/docs/static-website
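For example (the bucket name is a placeholder), the same index.html-as-404 trick shown in the first answer above applies here:
# Serve index.html as the main page and as the 404 handler, so unknown
# SPA routes like /user/jon fall through to the app.
gsutil web set -m index.html -e index.html gs://your-bucket
Note that, per the static-website docs linked above, these settings take effect when the bucket is served through a CNAME or an HTTP(S) load balancer, not through the storage.googleapis.com API endpoint.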

AccessDeniedException: 403 when creating bucket using gsutil

I am trying to create a bucket using gsutil, as provided by Kubernetes.
Below is the response:
$ gsutil mb -c nearline -p kubetest gs://some-bucket
Creating gs://some-bucket/...
AccessDeniedException: 403 hello.user#gmail.com does not have storage.buckets.create access to bucket some-bucket.
I tried the above because trying to run Kubernetes on bare metal failed with the exception below:
$ cluster/kube-up.sh
... Starting cluster in us-central1-b using provider gce
... calling verify-prereqs
... calling verify-kube-binaries
... calling kube-up
Project: kubetest
Network Project: kubetest
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-9e9580a659 bucket does not exist.
Creating gs://kubernetes-staging-9e9580a659
Creating gs://kubernetes-staging-9e9580a659/...
AccessDeniedException: 403 hello.user#gmail.com does not have storage.buckets.create access to bucket kubernetes-staging-9e9580a659.
How can I resolve this error and give access to the user?
Go to Cloud Shell and use the command gsutil config -b.
The gsutil config command obtains access credentials for Google Cloud Storage and writes a boto/gsutil configuration file containing the obtained credentials along with a number of other configurable values; the -b flag causes gsutil config to launch a browser to obtain OAuth2 approval.
It prints a URL; open it and hit Allow. In your browser you should see a page asking you to authorize access to Google Cloud Platform APIs and services on your behalf. After you approve, an authorization code is displayed.
Copy the authorization code, paste it into the terminal, and hit Enter.
This should resolve the 403.
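As a hedged recap, plus one extra step in case the account genuinely lacks the permission (the project ID and account are the ones from the question; roles/storage.admin is one broad option, narrower roles also work):
# Re-obtain OAuth2 credentials for gsutil; -b launches the browser flow.
gsutil config -b
# If the 403 persists, a project owner can grant bucket-creation rights:
gcloud projects add-iam-policy-binding kubetest \
    --member=user:hello.user@gmail.com \
    --role=roles/storage.admin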

Cloud CDN bucket: how to set my landing page without a domain

I have a Cloud Storage bucket with static files in it.
I have set up a load balancer with Cloud CDN enabled on the bucket above.
When I go to the public IP assigned to the load balancer, I get an XML "access denied" error message, as this is just an IP, not a landing page.
When I go to public_ip/index.html, the website loads.
EDIT (retracted): The content of the bucket will only be served from a subdomain of an external domain name, which is why I couldn't name my bucket after the domain. It turns out it is possible to name a bucket after the subdomain, and the landing-page definition then works, but the base question remains.
Is there a possibility to set the landing page for the IP address anyhow?
Yes, it's possible to configure a landing page for any Cloud Storage bucket using the gsutil command-line tool. For example, the following command configures the landing page for the bucket named elving:
gsutil web set -m index.html gs://elving
Unfortunately, it's not currently possible to configure this from the Google Cloud Console. You must use the API directly or a tool such as gsutil. You can find more information about gsutil at https://cloud.google.com/storage/docs/gsutil.
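If you prefer the newer gcloud CLI, recent releases expose the same setting; a hedged equivalent for the example bucket:
# Same effect as the gsutil command above, via gcloud.
gcloud storage buckets update gs://elving --web-main-page-suffix=index.html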