Axios baseURL not working on certain hosts

Below is how I configured Axios based on the example given on Nuxt.js' website:
.env:
BASE_URL=https://path.to.endpoint
nuxt.config.js:
publicRuntimeConfig: {
  axios: {
    baseURL: process.env.BASE_URL
  }
},
On page load I make this call:
this.$axios.get(`/endpoint`)
Once I deploy my app as a static site, it works both on my personal host and on GitHub Pages. But on my employer's host, the endpoint specified in .env becomes https://localhost:3000, so the API call fails.
What is the most likely cause of this behaviour?

Alright, from the comments, it looks like your configuration is totally fine as provided, and that the team on the other side has an incorrect setup of their environment variables.
You need to ask where they host your code and what the actual values of their env variables are. Actually, you will probably need to give the values to them, since they (usually) cannot guess them by themselves.
Human communication is the next step. ^^
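In the meantime, if the endpoint really is fixed, a fallback value can stop a missing BASE_URL from silently becoming the localhost default the axios module falls back to. A minimal sketch (the hard-coded fallback URL is your own endpoint, used only as a last resort):

// nuxt.config.js — sketch: use BASE_URL when the host's build
// environment defines it, otherwise fall back to the known endpoint
publicRuntimeConfig: {
  axios: {
    baseURL: process.env.BASE_URL || 'https://path.to.endpoint'
  }
},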

Related

Netlify deploy can't connect to Heroku backend

I've built a wee program that works fine when I run it locally. I've deployed the backend to Heroku, and I can access that either by going straight to the URL (http://gymbud-tracker.herokuapp.com/users) or when running the frontend locally. So far so good.
However, when I run npm run-script build and deploy it to Netlify, something goes wrong, and any attempt to access the server gives me the following error in the console:
auth.js:37 Error: Network Error
at e.exports (createError.js:16)
at XMLHttpRequest.p.onerror (xhr.js:99)
The action that is pushing that error is the following, if it is relevant:
export const signin = (formData, history) => async (dispatch) => {
  try {
    const { data } = await api.signIn(formData);
    dispatch({ type: AUTH, data });
    history.push("../signedin");
  } catch (error) {
    console.log(error);
  }
};
I've been tearing my hair out trying to work out what is changing when I build and deploy, but cannot work it out.
As I say, if I run the front end locally then it accesses the Heroku backend no problem: no errors, and working exactly as I'd expect. The API call is correct, I believe: const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com/' });
I wondered if it was an issue with network access to the MongoDB database that Heroku is linked to, but it's open to "0.0.0.0/0" (I've not taken any security precautions yet, don't kill me!). The MongoDB database is actually in the same cluster as other projects I've used, which haven't had this problem at all.
Any ideas what I need to do?
The front end is live here: https://gym-bud.netlify.app/
And the whole thing is deployed to GitHub here: https://github.com/gordonmaloney/gymbud
Your issue is CORS (Cross-Origin Resource Sharing). When I visit your site and inspect the page, I can see a CORS error in the JavaScript console, which is how I know this.
This error essentially means that your public-facing application (running live on Netlify) is trying to make an HTTP request from your JavaScript front-end to your Heroku backend deployed on a different domain.
CORS dictates which frontend origins are allowed to make requests to your backend API.
What you need to do to fix this is to modify your Heroku application and have it return the appropriate Access-Control-Allow-Origin header. This article on MDN explains the header and how you can use it.
Here's a simple example of the header you could set on your Heroku backend to allow this to work:
Access-Control-Allow-Origin: *
Please be sure to read the MDN documentation, however, as this example will allow any front-end application to make requests to your Heroku backend when in reality, you'll likely want to restrict it to just the front-end domains you build.
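If your Heroku backend happens to be an Express app (an assumption here, not something stated above), a minimal sketch using the cors middleware would look like this:

// Sketch of an Express backend using the `cors` package, which sets
// the Access-Control-Allow-Origin header on responses for you
const express = require('express');
const cors = require('cors');

const app = express();

// Allow only the deployed frontend, rather than '*'
app.use(cors({ origin: 'https://gym-bud.netlify.app' }));

// Hypothetical route standing in for the real /users handler
app.get('/users', (req, res) => {
  res.json([]);
});

app.listen(process.env.PORT || 5000);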
God I feel so daft, but at least I've worked it out.
I looked at the console in a different browser (Edge), and it said it was blocking the request as mixed content. I realised I had just missed out the s in the https on my API call, so it wasn't actually a CORS issue (I don't think?), but just a typo on my part!
So I changed:
const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com' });
To this:
const API = axios.create({baseURL: 'https://gymbud-tracker.herokuapp.com' });
And now it is working perfectly ☺️
Thanks for your help! Even if it wasn't the issue here, I've definitely learned a lot more about CORS along the way, so that's good.

Caching external resources with sw-precache

I'm trying to get sw-precache to pre-cache external CDN resources, but the generated service-worker.js doesn't contain the CDN URLs in the precacheConfig array.
This is what I have in my gulpfile:
staticFileGlobs: [
  'http://netdna.bootstrapcdn.com/font-awesome/4.0.3/css/font-awesome.min.css',
  'client/assets/**/*.{js,html,css,png,jpg,gif,svg,eot,ttf,woff,ico}'
]
The files inside my local client/assets folder are added to the precacheConfig array, but the external font-awesome css isn't. Is there a way to achieve this?
sw-precache can only precache and keep up to date assets that are local, like those that match the client/assets/**/*... pattern you're using. It's not meant to work with remote assets that are accessed via a CDN.
You have a couple of approaches:
Use npm (or the package manager of your choice) to download a local copy of the resource (i.e. font-awesome) and then deploy that third-party resource alongside your first-party assets. If the third-party code is picked up by a pattern you pass to staticFileGlobs, then it can be precached and versioned just like anything else local (see the sketch at the end of this answer).
Use runtime caching to handle the resource on the CDN. Since the URL for your specific asset includes a 4.0.3 versioning string, it's safe to assume that the underlying contents will never change, and a cacheFirst strategy is probably safe.
You can modify your sw-precache configuration to look like the following:
{
  staticFileGlobs: [
    'client/assets/**/*.{js,html,css,png,jpg,gif,svg,eot,ttf,woff,ico}'
  ],
  runtimeCaching: [{
    urlPattern: /^https:\/\/netdna\.bootstrapcdn\.com\//,
    handler: 'cacheFirst'
  }],
  // ...any other config options...
}
That configuration is broad enough to pick up anything served off that CDN, cache it, and then serve it cache-first on subsequent visits.
Please note that your example used the http: protocol for your CDN's URL; you'll need to use https: to obtain a response that plays nicely with service worker caching.
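If you prefer the first approach instead, the npm-installed copy of font-awesome can be picked up by the same glob mechanism. A sketch, assuming the standard layout of the font-awesome npm package:

staticFileGlobs: [
  'client/assets/**/*.{js,html,css,png,jpg,gif,svg,eot,ttf,woff,ico}',
  // local copy installed with `npm install font-awesome`
  'node_modules/font-awesome/css/font-awesome.min.css',
  'node_modules/font-awesome/fonts/*'
]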

Setting :deploy_to from server config in Capistrano3

In my Capistrano 3 deployment, I would like to use set :deploy_to, -> { "/srv/www/#{fetch(:application)}" } so that :deploy_to is different for each server it deploys to.
In my staging.rb file I have:
server 'dev.myserver.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/path'
server 'dev.myserver2.com', user: 'deploy', roles: %w{web app db}, install_path: 'mycustom/other/path'
My question is: would it be possible to use the install_path I defined in my :deploy_to? If that's possible, how would you do it?
Finally, after looking around, I came across an issue response from one of the developers of Capistrano stating specifically that it can't be done.
Quote from the Github issue:
Not possible, sorry. fetch() (as is documented widely) reads values set by set(), the only reason to use set() and fetch() over regular ruby variables is to provide a consistent API between plugins and extensions, and because set() can take a Proc to be resolved later.
The variables you are setting in the host object via the server() command belong to an individual host, some of them, user, roles, etc have special meanings. For more information see https://github.com/capistrano/sshkit/blob/master/EXAMPLES.md#do-something-different-on-one-host-or-another-depending-on-a-host-property.
If you specifically need to deploy to a different directory on each machine you probably should not be using the built-in tasks (they don't fit your needs), and rather copy the deploy.rake from the Gem into your own project, and modify it as you need. Which in this case might be to not take fetch(:deploy_to), but to read that from a host property.
You could try to do something where before doing anything that relies on calling fetch(:deploy_to), you set() it using the value from host.someproperty but I'm pretty sure that'll break in exciting and interesting ways.

What port does "ember-cli serve" use to serve mocks?

I am porting an ember application to ember-cli, and wanted to use the mock server facility.
What url are the mocks served at (by default, at least)?
I thought I'd look at the generated objects, but their location doesn't seem obvious. localhost:4200 seems to be serving only the client itself. Could it be under a prefix? Also, where is the code that sets this up? "In the wild" I use OAuth tokens, and may want to put auth and CORS handling into the mocks to test this.
OK... the answer is: even if you are using ember-cli-coffeescript, you can't write server/ (or config/) code in CoffeeScript. Inside the generated server/server.js we have:
var mocks = globSync('./mocks/**/*.js', { cwd: __dirname }).map(require);
var proxies = globSync('./proxies/**/*.js', { cwd: __dirname }).map(require);
I had converted my mocks to .litcoffee, which works in app/ and tests/, but not here.
I guess if I want to use CoffeeScript here, I need to compile it in the Brocfile (or add it to the addon...).
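For reference, a mock that the globSync() call above picks up is a plain-JS module that registers routes on the dev server's own Express app, so it's served on the same port as the client (4200 by default). A minimal sketch (the route and payload are illustrative):

// server/mocks/users.js — must be plain .js for globSync() to find it
module.exports = function (app) {
  var express = require('express');
  var usersRouter = express.Router();

  // Served by the dev server itself, e.g. http://localhost:4200/api/users
  usersRouter.get('/', function (req, res) {
    res.send({ users: [] });
  });

  app.use('/api/users', usersRouter);
};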

Proxy URL 'incache....com:8080' does not contain a valid hostname

Recently I was forced to switch from SVN to TFS.
I'm trying to get this working with TEE (Team Explorer Everywhere) on our RedHat box.
Any action seems to end with something like this:
user#rh: tf -map $/XX/XX . -workspace:app-job -server:http://tfs.domain.com:8080/tfs/TFS2008/ -profile:TFS1_PRF_C
Password:
An error occurred: Proxy URL 'incache.domain.com:8080' does not contain a valid hostname.
Could someone help with that?
Your question is a little vague about what you expect to happen here: are you supposed to be using an HTTP proxy to access your TFS server, or is the problem that it's picking up your HTTP proxy settings when it shouldn't?
I'm going to assume that you do not need to use an HTTP proxy to access your internal TFS server, since in most corporate environments a proxy is used to get outside the network, not inside. By default, the Team Explorer Everywhere CLC does try to use your system HTTP proxy; however, this is configurable in your connection profile.
In order to override your default system HTTP proxy for that profile, you can set the profile property httpProxyIgnoreGlobal to true:
tf profile -edit -boolean:httpProxyIgnoreGlobal=true TFS1_PRF_C