This is more of a service worker question, although it could be more specific to Sapper. I really don't know, as I am the first to admit I struggle with Service Workers, have barely used them, and feel like they are a pain in the butt very often.
Basically, no matter what I do, I cannot get localhost:3000 to stop loading the old copy of the app. I've unregistered the service worker in various ways, including trying to unregister it programmatically. I've cleared caches, and even cleared all browsing data from the browser. The server from my Sapper dev environment is NOT running.
This is happening in Brave, but it behaves the same in Opera, so it seems like a general Chromium scenario. I don't use Firefox or Safari, but may test in one soon to see what happens there.
Here is a clip showing how I try to unregister the Service Worker.
https://youtu.be/Cb24I_fEklE
I used this little trick that works like a charm. In your rollup.config.js, there's a serviceworker entry in the exported config object.
serviceworker: {
  input: config.serviceworker.input(),
  output: config.serviceworker.output(),
  plugins: [
    resolve(),
    replace({
      "process.browser": true,
      "process.env.NODE_ENV": JSON.stringify(mode),
    }),
    commonjs(),
    !dev && terser(),
  ],
  preserveEntrySignatures: false,
  onwarn,
},
Define a dev variable at the top if it's not already declared:
const dev = process.env.NODE_ENV === "development";
Now change your service worker config like this:
serviceworker: !dev && {
  input: config.serviceworker.input(),
  output: config.serviceworker.output(),
  plugins: [
    resolve(),
    replace({
      "process.browser": true,
      "process.env.NODE_ENV": JSON.stringify(mode),
    }),
    commonjs(),
    !dev && terser(),
  ],
  preserveEntrySignatures: false,
  onwarn,
},
Clearing the cache is the only thing that works for me to bypass it.
The issue is that the service worker is serving from the cache, and serving from the cache again probably marks that entry as even more valid to send, so you get caught in something of a cycle.
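If you want to retry the programmatic route, here's a minimal sketch to run in the DevTools console on localhost:3000 (navigator.serviceWorker and caches are standard browser APIs, nothing Sapper-specific):

// unregister every service worker registered for this origin
navigator.serviceWorker.getRegistrations().then((registrations) => {
  for (const registration of registrations) {
    registration.unregister();
  }
});

// wipe the Cache Storage entries the old worker was serving from
caches.keys().then((keys) => {
  keys.forEach((key) => caches.delete(key));
});

Do a hard reload afterwards so the page isn't served from the now-deleted cache.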
--
I found this question because I was considering completely removing the service worker to try to get the performance of my site a little higher...
Is it critically necessary? What are the benefits of it?
Make sure to close any browser tabs that have the app running in them; you can't replace a service worker while it's servicing an existing version of the application. Also try reloading the page once the new service worker is installed if there are any caching issues.
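If closing every tab isn't practical, the new worker can be told to take over immediately. A minimal sketch, placed inside the service worker file itself (self.skipWaiting() and clients.claim() are standard service worker APIs, not Sapper-specific):

self.addEventListener('install', (event) => {
  // activate the new worker right away instead of waiting
  // for all tabs running the old version to close
  self.skipWaiting();
});

self.addEventListener('activate', (event) => {
  // take control of pages that are already open
  event.waitUntil(self.clients.claim());
});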
Related
I've built a wee program that works fine when I run it locally. I've deployed the backend to Heroku, and I can access that either by going straight to the URL (http://gymbud-tracker.herokuapp.com/users) or when running the frontend locally. So far so good.
However, when I run npm run-script build and deploy it to Netlify, something goes wrong, and any attempt to access the server gives me the following error in the console:
auth.js:37 Error: Network Error
at e.exports (createError.js:16)
at XMLHttpRequest.p.onerror (xhr.js:99)
The action that is throwing that error is the following, if it's relevant:
export const signin = (formData, history) => async (dispatch) => {
  try {
    const { data } = await api.signIn(formData);
    dispatch({ type: AUTH, data });
    history.push("../signedin");
  } catch (error) {
    console.log(error);
  }
};
I've been tearing my hair out trying to work out what is changing when I build and deploy, but cannot work it out.
As I say, if I run the front end locally then it accesses the Heroku backend no problem - no errors, and working exactly as I'd expect. The API call is correct, I believe: const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com/' });
I wondered if it was an issue with network access to the MongoDB database that Heroku is linked to, but it's open to "0.0.0.0/0" (I've not taken any security precautions yet, don't kill me!). The MongoDB database is actually in the same cluster as other projects of mine, which haven't had this problem at all.
Any ideas what I need to do?
The front end is live here: https://gym-bud.netlify.app/
And the whole thing is deployed to GitHub here: https://github.com/gordonmaloney/gymbud
Your issue is CORS (Cross-Origin Resource Sharing). When I visit your site and inspect the page, I see a CORS error in the JavaScript console, which is how I know this.
This error essentially means that your public-facing application (running live on Netlify) is trying to make an HTTP request from your JavaScript front-end to your Heroku backend deployed on a different domain.
CORS dictates which frontend origins are ALLOWED to make requests to your backend API.
What you need to do to fix this is to modify your Heroku application and have it return the appropriate Access-Control-Allow-Origin header. This article on MDN explains the header and how you can use it.
Here's a simple example of the header you could set on your Heroku backend to allow this to work:
Access-Control-Allow-Origin: *
Please be sure to read the MDN documentation, however, as this example will allow any front-end application to make requests to your Heroku backend when in reality, you'll likely want to restrict it to just the front-end domains you build.
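For illustration, here's a minimal sketch assuming the Heroku backend is an Express app (adjust for whatever framework you actually run; the Netlify origin below is your live frontend URL):

const express = require('express');
const app = express();

// send CORS headers on every response;
// restrict the origin to the deployed frontend instead of '*'
app.use((req, res, next) => {
  res.setHeader('Access-Control-Allow-Origin', 'https://gym-bud.netlify.app');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS');
  next();
});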
God I feel so daft, but at least I've worked it out.
I looked at the console in a different browser (Edge), and it said it was blocking the request as mixed content. I realised I had just missed out the s in the https in my API call, so it wasn't actually a CORS issue (I don't think?), just a typo on my part!
So I changed:
const API = axios.create({baseURL: 'http://gymbud-tracker.herokuapp.com' });
To this:
const API = axios.create({baseURL: 'https://gymbud-tracker.herokuapp.com' });
And now it is working perfectly ☺️
Thanks for your help! Even if it wasn't the issue here, I've definitely learned a lot more about CORS along the way, so that's good.
Below is how I configured Axios based on the example given on Nuxt.js' website:
.env:
BASE_URL=https://path.to.endpoint
nuxt.config.js:
publicRuntimeConfig: {
  axios: {
    baseURL: process.env.BASE_URL
  }
},
On page load I make this call:
this.$axios.get(`/endpoint`)
Once I deploy my app as a static site, it works both on my personal host and on GitHub Pages. But on my employer's host, the path to the endpoint specified in .env becomes https://localhost:3000, so the API call fails.
What is the most likely cause of this behaviour?
Alright, from the comments, it looks like your configuration is totally fine as provided, and that the team on the other side has an incorrect setup of their environment variables.
You need to ask them where they host your code and what the actual values of their env variables are. Actually, you will probably need to give the values to them, since they (usually) cannot guess them by themselves.
Human communication is the next step. ^^
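In the meantime, one defensive tweak: give the runtime config an explicit fallback so a missing build-time variable doesn't silently turn into localhost:3000. A sketch against the nuxt.config.js above (the fallback URL is a placeholder for your real endpoint):

publicRuntimeConfig: {
  axios: {
    // BASE_URL must exist in the environment that runs the static build;
    // fall back to a known endpoint so a missing variable is harmless
    baseURL: process.env.BASE_URL || 'https://path.to.endpoint'
  }
},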
Whilst I've tried several solutions to related problems on SO, nothing appears to fix my problem when deploying a Meteor project to a VM on Google Compute Engine.
I set up mupx to handle the deployment and don't have any apparent issues when running
sudo mupx deploy
My mup.json is as follows:
{
  // Server authentication info
  "servers": [
    {
      "host": "104.199.141.232",
      "username": "simonlayfield",
      "password": "xxxxxxxx"
      // or pem file (ssh based authentication)
      // "pem": "~/.ssh/id_rsa"
    }
  ],

  // Install MongoDB on the server; does not destroy a local MongoDB on future setups
  "setupMongo": true,

  // WARNING: Node.js is required! Only skip if you already have Node.js installed on the server.
  "setupNode": true,

  // WARNING: If nodeVersion is omitted, 0.10.36 is set up by default. Do not use "v", only the version number.
  "nodeVersion": "0.10.36",

  // Install PhantomJS on the server
  "setupPhantom": true,

  // Show a progress bar during the upload of the bundle to the server.
  // Might cause an error in some rare cases if set to true, for instance on Shippable CI
  "enableUploadProgressBar": true,

  // Application name (no spaces)
  "appName": "simonlayfield",

  // Location of app (local directory)
  "app": ".",

  // Configure environment
  "env": {
    "ROOT_URL": "http://simonlayfield.com"
  },

  // Meteor Up checks if the app comes online just after the deployment;
  // before mup checks that, it will wait for the number of seconds configured below
  "deployCheckWaitTime": 30
}
When navigating to my external IP in the browser I can see the Meteor site template, but the MongoDB data isn't showing up.
http://simonlayfield.com
I have set up a firewall rule on the VM to allow traffic through port 27017:
Name: mongodb
Description: Allow port 27017 access to http-server
Network: default
Source filter: Allow from any source (0.0.0.0/0)
Allowed protocols and ports: tcp:27017
Target tags: http-server
I've also tried passing the env variable MONGO_URL, but after several failed attempts I found this post on the Meteor forums suggesting that it is not required when using a local MongoDB database.
I'm currently connecting to the VM using ssh rather than the gcloud SDK but if it will help toward a solution I'm happy to set that up.
I'd really appreciate it if someone could provide some guidance on how I can know specifically what is going wrong. Is the firewall rule I've set up sufficient? Are there other factors that need to be considered when using a Google Compute Engine VM specifically? Is there a way for me to check logs on the server via ssh to gain extra clarity around a connection/firewall/configuration problem?
My knowledge in this area is limited and so apologies if there's an easy fix that has evaded me.
Thanks in advance.
There were some recent meteord updates; please rerun your deployment.
Also as a side note: I always specify a port for mup / mupx files
"env": {
"PORT": 5050,
"ROOT_URL": "http://youripaddress"
},
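As for checking logs (one of the questions above): mupx can tail the app container's logs, which is usually the quickest way to spot a MongoDB connection error. Assuming the stock mupx CLI, where extra flags are passed through to docker logs:

sudo mupx logs -f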
According to the Ruby on Rails Guide: Caching, caching is disabled by default in the development and testing environments. If I make a small CSS change, run rails server and access my site at localhost:3000, I can see my change. However, if I access my Rails server on my iPhone at 10.0.1.2:3000, the CSS doesn't update, even in Chrome's Incognito Mode. When I try a different iPhone that has an empty cache, the change is there.
I found a Stack Overflow post that described the same problem. Here were the suggested solutions:
Remove the public/assets directory. I don't have one.
Add config.serve_static_assets = false to environments/development.rb. It's already there.
Delete /tmp/cache/assets, add config.serve_static_assets = false to environments/development.rb and restart the server. I tried this and it didn't work.
Here's my relevant environments/development.rb config:
# In the development environment your application's code is reloaded on
# every request. This slows down response time but is perfect for development
# since you don't have to restart the web server when you make code changes.
config.cache_classes = false
# Show full error reports and disable caching
config.consider_all_requests_local = true
config.action_controller.perform_caching = false
I'm pretty sure this is happening because Rails only does fingerprinting in production: http://guides.rubyonrails.org/asset_pipeline.html#in-production
This means that in development, browsers that are more aggressive about caching can run into this issue.
Try adding this to your development.rb:
config.assets.digest = true
Or, preferably, something conditional for when you're doing mobile development:
# One of the few exceptions I'd make to a no ENV variables rule
# for my rails environment config files
config.assets.digest = true if ENV["MOBILE_DEBUG"]
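Then start the server with that variable set when you're testing from the phone (MOBILE_DEBUG is just the hypothetical variable name from the snippet above; -b 0.0.0.0 makes the server listen on all interfaces so a device on your network can reach it):

MOBILE_DEBUG=1 rails server -b 0.0.0.0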
How are you accessing your local machine via your iPhone?
Have you configured any network settings, or do you push the code to a different server and access it from there? Because if you are pushing it to a different server, that server might be running in production mode.
HTH
I don't have an iPhone to test, but it sounds like a normal browser caching issue. Try these instructions for clearing the browser cache. If that works, you'll need to do it each time you update your CSS (or JavaScript).
I had a similar problem. It happened because my config/environments/development.rb contained config.asset_host = 'http://localhost:3000'.
I removed it and all works fine.
I have a Perl Catalyst application which is normally launched using the -r parameter.
I have noticed 2 types of behaviour:
1) The application restarts normally on every "dummy change" to the code (by "dummy change" I mean adding or deleting a space, something like that).
2) The application doesn't restart on the same kind of "dummy change": the "Attempting to restart the server" text is displayed and the app remains blocked in that state (I have to kill it manually).
The behaviour depends on the actual code: something in the code seems to determine which of the two behaviours occurs. The behaviour is constant, i.e. the same code always exhibits the same one of the two.
The application itself seems to work fine, without any errors or warnings.
How could the code influence this behaviour? (I mean generally)
What factors are related to restart mechanism?
That is because signal handling has changed in newer versions of the Oracle client. Use the ora_connect_with_default_signals option to restore the default signal handlers.
Here is how you can do it in the DBIx::Class model (MyApp::Model::DB):
connect_info => [
  'dbi:Oracle:mydb',
  'username',
  'password',
  {
    ora_connect_with_default_signals => [ 'INT' ],
  },
],
or in the config file:
<Model DBIC>
  connect_info dbi:Oracle:mydb
  connect_info username
  connect_info password
  <connect_info>
    ora_connect_with_default_signals [ INT ]
  </connect_info>
</Model>
I have seen similar behaviour when using a standalone server via PSGI (i.e. plackup -r), where the server restarts once and subsequent code changes produce the message but no restart.
However, I have never seen the built-in server myapp_server.pl -r behave in this manner. Any change to a perl module, YAML file etc triggers the restart successfully.
In the brief research I did at the time, I turned up this discussion of Plack and restart.
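For what it's worth, plackup's restarter can also be pointed at specific directories with the -R flag (-r by default only watches lib and the directory containing the .psgi file), which is worth trying when -r misbehaves. The paths here are illustrative:

plackup -R lib,root myapp.psgi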