Host a static single-page app with routes in Google Cloud Storage

There are guides and questions all over the place on how to do this, but never a truly concrete, satisfactory answer. Basically, I'm wondering whether it's possible to host a static SPA (HTML/CSS/JS) in GCP Cloud Storage.
The main caveat is that the SPA has its own routing system (React Router), so I want all paths to be served by index.html.
Most guides will tell you to set the ErrorDocument to index.html instead of 404.html. While this is a clever hack, it causes the site's HTTP response code to be 404, which is a disaster for SEO and monitoring tools. So that approach will only work if I can change the response code.
Is there any way to make this work? I have Cloudflare up and running too, but from what I can tell there is no way to trim the path or change the response status from there.

A good approach here is to use Google App Engine to host a static SPA: https://cloud.google.com/appengine/docs/standard/python/getting-started/hosting-a-static-website
You can use the app.yaml file to map URLs to static files. Here's an example:
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html

- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
Documentation for app.yaml https://cloud.google.com/appengine/docs/standard/python/config/appref
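Note that with the handlers above, a deep link like /some/route will only resolve if a matching file exists under www/. To get true SPA behaviour, you can route every non-file path back to index.html, which App Engine serves with a 200 status. A hedged sketch (the www/ folder and the extension-matching regex are assumptions, not part of the original answer):

runtime: python27
api_version: 1
threadsafe: true

handlers:
# Serve anything that looks like a real file (has an extension) directly
- url: /(.*\..+)$
  static_files: www/\1
  upload: www/(.*\..+)$

# Everything else falls through to the SPA shell with a 200 status,
# letting the client-side router take over
- url: /.*
  static_files: www/index.html
  upload: www/index.html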

One way to circumvent the problem is to use server-side rendering. With SSR, all client requests are handled by a backend app, so there's no need for a Cloud Storage-hosted index.html.
This of course comes with its own set of complications, but it avoids both the 404 hack mentioned above and further dependencies like App Engine.
Alternatively, you could go with hash-based routing, i.e. paths like https://example.com/#some-path.
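With hash routing, everything after the # never reaches the server, so Cloud Storage only ever serves /index.html with a 200 status. A minimal sketch, assuming React Router v6 (the Home and About components are placeholders invented for illustration):

import React from "react";
import { HashRouter, Routes, Route } from "react-router-dom";

// Placeholder components, for illustration only
const Home = () => <h1>Home</h1>;
const About = () => <h1>About</h1>;

// HashRouter keeps route state in the URL fragment (e.g. /#/about),
// so the bucket only ever receives a request for /index.html
export default function App() {
  return (
    <HashRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/about" element={<About />} />
      </Routes>
    </HashRouter>
  );
}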

A very simple solution is to add the index.html file as the 404 fallback. This will route everything to your single-page app.
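If you go this route, the bucket's website configuration can be set with gsutil. A sketch, assuming a bucket named your-bucket (the caveat from the question still applies: deep links will be served with a 404 status code):

# -m sets the main page, -e sets the page served for missing objects
gsutil web set -m index.html -e index.html gs://your-bucket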

If you use Cloudflare, you can use a Cloudflare Worker to override the 404 status code that comes from the Google Cloud Storage error page.
The code for the Worker should look like this:
addEventListener('fetch', event => {
  event.respondWith(fetchAndLog(event.request))
})

async function fetchAndLog(req) {
  // Pass the request through to the origin (the GCS bucket)
  const res = await fetch(req)
  console.log('req', res.status, req.url)
  if (res.status === 404 && req.method === 'GET') {
    // Rewrap the 404 error page (index.html) with a 200 status,
    // keeping the original body and headers
    console.log('overwrite status', req.url)
    return new Response(res.body, {
      headers: res.headers,
      status: 200
    })
  }
  return res
}
I found this snippet in the Cloudflare community forum.

Related

Netlify deploy can't connect to Heroku backend

I've built a wee program that works fine when I run it locally. I've deployed the backend to Heroku, and I can access it either by going straight to the URL (http://gymbud-tracker.herokuapp.com/users) or by running the frontend locally. So far so good.
However, when I run npm run-script build and deploy it to Netlify, something goes wrong, and any attempt to access the server gives me the following error in the console:
auth.js:37 Error: Network Error
at e.exports (createError.js:16)
at XMLHttpRequest.p.onerror (xhr.js:99)
The action that is producing that error is the following, if it's relevant:
export const signin = (formData, history) => async (dispatch) => {
  try {
    const { data } = await api.signIn(formData);
    dispatch({ type: AUTH, data });
    history.push("../signedin");
  } catch (error) {
    console.log(error);
  }
};
I've been tearing my hair out trying to work out what is changing when I build and deploy, but cannot work it out.
As I say, if I run the front end locally then it accesses the Heroku backend no problem - no errors, and working exactly as I'd expect. The API call is correct, I believe: const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com/' });
I wondered if it was an issue with network access to the MongoDB database that Heroku is linked to, but it's open to "0.0.0.0/0" (I've not taken any security precautions yet, don't kill me!). The MongoDB database is actually in the same cluster as other projects I've used, which haven't had this problem at all.
Any ideas what I need to do?
The front end is live here: https://gym-bud.netlify.app/
And the whole thing is deployed to GitHub here: https://github.com/gordonmaloney/gymbud
Your issue is CORS (Cross-Origin Resource Sharing). When I visit your site and inspect the page, I see a CORS error in the JavaScript console, which is how I know this.
This error essentially means that your public-facing application (running live on Netlify) is making an HTTP request from your JavaScript front-end to your Heroku backend, which is deployed on a different domain.
CORS dictates which frontend origins are allowed to make requests to your backend API.
What you need to do to fix this is to modify your Heroku application and have it return the appropriate Access-Control-Allow-Origin header. This article on MDN explains the header and how you can use it.
Here's a simple example of the header you could set on your Heroku backend to allow this to work:
Access-Control-Allow-Origin: *
Please be sure to read the MDN documentation, however, as this example will allow any front-end application to make requests to your Heroku backend, when in reality you'll likely want to restrict it to just the front-end domains you build.
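For instance, if the Heroku backend is an Express app (an assumption; the question doesn't say what framework it uses), a minimal sketch using the cors middleware package:

const express = require("express");
const cors = require("cors");

const app = express();

// Allow only the deployed Netlify front-end to call this API
app.use(cors({ origin: "https://gym-bud.netlify.app" }));

// Hypothetical route, standing in for the real /users endpoint
app.get("/users", (req, res) => res.json([]));

app.listen(process.env.PORT || 3000);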
God I feel so daft, but at least I've worked it out.
I looked at the console in a different browser (Edge), and it said it was blocking the request as mixed content. I realised I had just missed out the s in https on my API call, so it wasn't actually a CORS issue (I don't think?), just a typo on my part!
So I changed:
const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com' });
To this:
const API = axios.create({baseURL: 'https://gymbud-tracker.herokuapp.com' });
And now it is working perfectly ☺️
Thanks for your help! Even if it wasn't the issue here, I've definitely learned a lot more about CORS along the way, so that's good.

Axios BaseURL not working on certain hosts

Below is how I configured Axios based on the example given on Nuxt.js' website:
.env:
BASE_URL=https://path.to.endpoint
nuxt.config.js:
publicRuntimeConfig: {
  axios: {
    baseURL: process.env.BASE_URL
  }
},
On page load I make this call:
this.$axios.get(`/endpoint`)
Once I deploy my app as a static site, it works both on my personal host and on GitHub Pages. But on my employer's host, the path to the endpoint specified in .env becomes https://localhost:3000, so the API call fails.
What is the most likely cause of this behaviour?
Alright, from the comments, it looks like your configuration is totally fine from what you've provided, and that the team on the other side has an incorrect setup of the environment variables.
You need to ask them where they host your code and what the actual values of their env variables are. Actually, you will probably need to give the values to them, since they (usually) cannot guess them by themselves.
Human communication is the next step. ^^
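One defensive option (a sketch of an extra safeguard, not something from the original answer): make the build fail loudly when BASE_URL is missing, so a misconfigured host can't silently fall back to localhost:

// nuxt.config.js
// Abort the build if BASE_URL isn't set, rather than letting
// the axios baseURL default to localhost at runtime
if (!process.env.BASE_URL) {
  throw new Error("BASE_URL must be set at build time");
}

export default {
  publicRuntimeConfig: {
    axios: {
      baseURL: process.env.BASE_URL
    }
  }
};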

Hosting a static Sapper site in a subfolder in Google Cloud Storage

I have a page that uses fetch in onMount, which I export using sapper export and then upload to Google Cloud Storage to be hosted as a static site.
When a page is requested with a trailing /, everything works great. But when it is requested without a trailing /, GCS redirects to /index.html, and when this happens the fetch doesn't run; it looks like a second index.[hash].js file that contains the fetch isn't being requested. All the styles load and routing works fine.
I'm not worried about GCS redirecting to /index.html, that's expected. What I'm wondering is whether svelte/sapper is able to run normally when index.html is in the URL.
[EDIT]
Important info: I'm hosting the site under a subfolder! Hosting it at the root works perfectly, like joshnuss mentions below.
Before exporting, I updated src/server.js to this:
import sirv from "sirv";
import polka from "polka";
import compression from "compression";
import * as sapper from "@sapper/server";

const { PORT, NODE_ENV } = process.env;
const dev = NODE_ENV === "development";

polka()
  .use(
    "/test1", // <----- THIS IS THE SUBFOLDER
    compression({ threshold: 0 }),
    sirv("static", { dev }),
    sapper.middleware()
  )
  .listen(PORT, err => {
    if (err) console.log("error", err);
  });
I export the site with the following command:
yarn run sapper export --legacy --basepath="/test1"

Zuul routing doesn't work, gives 404 : Spring Boot+ Cloud+ Zuul

I am working on a flow where I have an Angular 4 + Spring Boot app running on https://host_a:8080 and a backend service at https://host_b:8080 with some APIs.
I have a RestController/Path on both hosts, i.e. I need some URLs to hit localhost (host_a) and others to hit host_b.
In application.yml, I have tried almost all possible combinations of Zuul routes but still get 404 for all host_b REST APIs. host_a APIs work well.
Note: we had this working when there was no REST API on host_a and no custom filter on host_a.
Is there something wrong with the filter? I don't see any log from the Zuul filter now, after I added this controller to host_a.
I am aware that I can use the forward property to route to localhost, which works well. But somehow all host_b REST calls give a 404 error.
My implementation requirements:
1. http://host_a:8080/api/abc/user should hit localhost, i.e. host_a
2. http://host_a:8080/api/xyz/getall should hit host_b
Important: I need a custom Zuul filter which adds certain headers to the request before it's routed to host_b, as explained in point 2. This is already in place, but I cannot see the logs inside it now.
What I tried already:
zuul:
  routes:
    xyz:
      path: /api/xyz/**
      url: http://host_b:8080/api/xyz
I tried almost everything: using prefix, strip-prefix, only the host in url, using forward for local routing, etc. Nothing works.
Kindly help me with the possible causes I may be ignoring or anything I might be missing.
Thanks in advance.
Finally, I was able to resolve the issues:
1. I had to change the Jersey @Path to a Spring @RestController.
2. I changed the Zuul filter order from 1 to 999.
Works well now.
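For reference, a hedged sketch of the prefix-stripping behaviour that often causes these 404s (host names taken from the question; this is illustrative, not the poster's final config): by default Zuul strips the matched route prefix before forwarding, so the target url must account for it, or stripping can be disabled explicitly:

zuul:
  routes:
    xyz:
      path: /api/xyz/**
      # strip-prefix defaults to true, which would remove /api/xyz
      # before forwarding; with it disabled, the full original path
      # /api/xyz/getall reaches http://host_b:8080/api/xyz/getall
      strip-prefix: false
      url: http://host_b:8080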

Gitlab change redirect for nonexistent paths away from login page

Using Omnibus GitLab 9.2.
Action: As a non-logged-in user, attempt a request for a public project that doesn't exist (at least not publicly).
Result: Receive a 302 redirect to /users/sign_in from nginx.
What I'd like to see: Receive a 302 redirect to /public (or wherever, for that matter)
What I've tried without success: Adding this to gitlab.rb:
nginx['custom_gitlab_server_config'] = "try_files $uri $uri/ /public;\n\nfastcgi_intercept_errors on;\n\n"
I couldn't find the explicit redirect in any nginx conf, so I guess it's in Rails. I'll peruse that code.
This is actually a custom HA configuration with the gitlab nodes behind haproxy fronts. I thought about possibly doing something on the fronts, but couldn't come up with anything.
Thanks!
Edit:
I see now that replacing the unmatched_route line in routes.rb with:
get '*unmatched_route', to: redirect('/public'), via: :all
does what I need, but I'd of course want to make that change persistent. Is that possible?