How do I generate a Service Worker with Workbox 6.5.4?

I am trying to build a simple PWA (no frameworks) with the help of Workbox. For that I have been following tutorials on how to use Workbox, as well as looking at the official documentation. I tried generating a Service Worker with Workbox, but the files it generated seem very different from what I have seen in those tutorials.
I have noticed that all these tutorials were done with fairly old versions of Workbox (e.g., 3.0), so I am wondering whether I am missing some requirement, since I am working with Workbox 6.5.4.
The first steps I took after creating the App shell:
npm install workbox-cli -g
workbox wizard (to generate workbox-config.js)
workbox generateSW / workbox generateSW workbox-config.js
Register Service Worker in my index.html
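For reference, step 4 is just the standard registration pattern (a minimal sketch, assuming sw.js ends up at the root of the directory being served):

// in index.html (inline) or in js/app.js: register the generated service worker
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js') // resolved against the server root, not the project root
      .then((reg) => console.log('SW registered, scope:', reg.scope))
      .catch((err) => console.error('SW registration failed:', err));
  });
}

One thing worth double-checking with this setup: http-server has to be started from the directory that sw.js was generated into (the globDirectory from workbox-config.js); otherwise the request for /sw.js will 404 and nothing appears under Application > Service Workers in DevTools.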
In all the tutorials I have seen, a single js file named "sw.js" is created when running workbox generateSW. However, when I do so, it creates 4 files ("sw.js", "sw.js.map", "workbox-d249b2c8.js", "workbox-d249b2c8.js.map"), and I don't understand their contents at all.
Contents of sw.js, for example:
if(!self.define){let e,s={};const i=(i,r)=>(i=new URL(i+".js",r).href,s[i]||new Promise((s=>{if("document"in self){const e=document.createElement("script");e.src=i,e.onload=s,document.head.appendChild(e)}else e=i,importScripts(i),s()})).then((()=>{let e=s[i];if(!e)throw new Error(`Module ${i} didn’t register its module`);return e})));self.define=(r,t)=>{const d=e||("document"in self?document.currentScript.src:"")||location.href;if(s[d])return;let n={};const o=e=>i(e,d),c={module:{uri:d},exports:n,require:o};s[d]=Promise.all(r.map((e=>c[e]||o(e)))).then((e=>(t(...e),n)))}}define(["./workbox-d249b2c8"],(function(e){"use strict";self.addEventListener("message",(e=>{e.data&&"SKIP_WAITING"===e.data.type&&self.skipWaiting()})),e.precacheAndRoute([{url:"css/main.css",revision:"d3072ab3693c185313018e404e07d914"},{url:"index.html",revision:"4fc118da921f59395f9c5d3e2ddd85b0"},{url:"js/app.js",revision:"aa3d97a8701edd5dae0d6a9bb89d65bd"}],{ignoreURLParametersMatching:[/^utm_/,/^fbclid$/]})}));
//# sourceMappingURL=sw.js.map
It looks completely different from the template generated in those tutorials... The official documentation doesn't go into detail about the file(s) created after running workbox generateSW. I still tried to see whether the Service Worker would work, since the command reported: The service worker will precache 3 URLs, totaling 1.65 kB. Sadly, when I served the app with http-server and opened DevTools, there was no Service Worker to be found.
Thanks a lot in advance (feel free to tell me if you need more information in order to solve this issue)!!

Related

Setup Corda network using network bootstrapper + Postgres

I am following this doc.
There is a showstopper in it.
It explains that you point the network bootstrapper jar at the _node.conf files in a directory to generate the network-related files, but it never mentions what a _node.conf file must look like, especially with configuration for different aspects like the H2 DB, PostgresDB, etc.
Please share a sample link (not a cordapp, I just need network setup examples) or the steps to set up a network with H2 & Postgres.
Not many articles are available on the internet; everything takes me to the official document, and it just increases confusion or convinces me to stop working.
If you run gradle deployNodes, it will generate all the node configuration files that are needed by the Network Bootstrapper inside a /build/nodes folder, which will be created by that gradle task.
You can also find all the files in this repository, used in an older (but still valid) bootcamp about how to deploy the nodes using Docker. The configuration files are the same.
You can also find an example of a node configuration file in the documentation here.
There is also a very good tutorial video on YouTube, made by R3, specifically for the network bootstrapper.
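To give a rough idea of what the asker is after, a minimal _node.conf for a node backed by Postgres looks something like this (a sketch following the documented dataSourceProperties block; all names, ports, and credentials are placeholders):

myLegalName="O=PartyA,L=London,C=GB"
p2pAddress="localhost:10005"
devMode=true
rpcSettings {
    address="localhost:10006"
    adminAddress="localhost:10046"
}
rpcUsers=[
    {
        username=user1
        password=test
        permissions=[ALL]
    }
]
dataSourceProperties {
    dataSourceClassName="org.postgresql.ds.PGSimpleDataSource"
    dataSource.url="jdbc:postgresql://localhost:5432/corda_partya"
    dataSource.user=corda
    dataSource.password=corda_password
}
database {
    transactionIsolationLevel=READ_COMMITTED
    schema=corda_schema
}

For H2 you can drop the dataSourceProperties and database blocks entirely and the node falls back to its embedded H2 database; for Postgres, remember to also put the JDBC driver jar into the node's drivers directory.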

How to host single-spa root-config and modules on a central server during development

I've been experimenting with single-spa for a while and understand the basics of the developer experience: create a parcel, yarn start on a unique port, add the reference to the import map declaration, and so on. The challenge is that as my root-config accrues more and more modules, managing ports and import maps starts to get tedious. What I want is to publish these modules to a central repository and load them from there (e.g., http://someserver.com/repository/moduleA/myorg-modulea.js, etc.).
I was recently introduced to localstack and started thinking a local S3 bucket might serve for this. I have a configuration where builds (yarn build) are automatically published to an S3 bucket running on localstack, but when I try to load the root-config's index.html from the bucket I get the following JS error:
Unable to resolve bare specifier '#myorg/root-config'
I can access the JS files for each parcel and the root-config just fine via curl, so I suppose this would be a problem with any HTTP server used the same way. I can flip the root-config to use the standard webpack-dev-server instead (on port 9000) and it works okay. So I'm guessing there's a difference between how a production build resolves these modules vs. the local build.
Has anyone tried something like this and got it working?
I had a similar issue that I got working with http-server by adding each child .js file to a sub-folder in the root-config directory and launching the web server at the root-config directory level.
"imports": {
"#myorg/root-config": "http://someserver.com/root-config.js",
"#myorg/moduleA": "http://someserver.com/modules/moduleA/myorg-modulea.js",
"#myorg/moduleB": "http://someserver.com/modules/moduleB/myorg-moduleb.js",
"#myorg/moduleC": "http://someserver.com/modules/moduleC/myorg-modulec.js",
}
Note: By default, the single-spa root-config template has an "isLocal" check before the current import mappings. You'll need to remove this when using a production build, or it won't load the correct mappings.
<% if (isLocal) { %>
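For context, the default root-config template embeds the map in index.html as a SystemJS import map, so the production version of the mapping above ends up looking roughly like this (a sketch reusing the placeholder URLs from above):

<script type="systemjs-importmap">
  {
    "imports": {
      "#myorg/root-config": "http://someserver.com/root-config.js",
      "#myorg/moduleA": "http://someserver.com/modules/moduleA/myorg-modulea.js"
    }
  }
</script>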
In my case I was using localhost instead of someserver, so I could navigate into the 'repository' folder and run npx http-server to get everything running correctly.
I was hung up on this for a little while so hopefully this leads you in the right direction.
For the record, after trying several approaches to hosting a repository as described above, I found the unresolved bare specifier problem went away. So I have to chalk that up to just not having the right URL in the import map. Measure twice, cut once.

Has anyone tried the HLF 2.0 feature "External Builders and Launchers" and wants to get in touch?

I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through?
Cheers from Germany
I'm also trying to use the ExternalBuilder. I set up core.yaml and rebuilt the containers to use it, and on peer lifecycle chaincode install .tgz... I get an error that the path to the scripts in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and in docker-compose-cli.yaml, and I'm using the first-network setup. I dropped the part of byfn.sh that would connect to the cli container so that I can do that part manually; the create, join, and update-anchors steps succeed, and then the install fails. On the install, I'm failing on /bin/detect, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external builder configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally", which I think means in the cli container. Otherwise, I've tried to walk the code to see how the docker containers are created dynamically, and from what image, but I haven't been able to nail that down yet.
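For anyone comparing notes, the external builder is declared in core.yaml roughly like this (a sketch; the path and name are placeholders, and the directory must actually exist inside the peer's filesystem/container, which is why a missing volume mount shows up as the peer failing to exec bin/detect):

chaincode:
  externalBuilders:
    - path: /opt/external-builder   # must contain executable bin/detect, bin/build, bin/release
      name: my-builder
      environmentWhitelist:         # later 2.x releases rename this to propagateEnvironment
        - GOPROXY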

Deploying a plain bundle.js on a web server with no Node support

I have a landing page (some.html) pointing to bundle.js. Under the Chrome debugger I do see all my files getting loaded properly, but I still get errors in the code, like 'this' is undefined, etc.
Please note:
I don't have node / npm / webpack installed
I am not running webpack-dev-server on this server
What is the proper way to deploy / refer to a bundle.js file?
Do I need to install webpack globally on this server?
I need to know whether it is possible to get the full functionality of my SPA through a bundle.js pointed to from index.html, provided that bundle.js was generated with webpack on my dev machine.
Seems like the answer is yes: through an index.html we can point to a bundled js, and if all the functionality is client-side we get the desired behavior. My React app was broken because the underlying page (an old application) was loading prototype.js, and therefore React's internal functions had issues. Once I removed the reference to prototype, my application worked in that old container.
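In other words, nothing server-side is required; a plain static page referencing the build output is enough (a minimal sketch, assuming bundle.js sits next to index.html):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
  </head>
  <body>
    <div id="root"></div>
    <!-- plain static script tag; no node, npm, or webpack needed on the server -->
    <script src="bundle.js"></script>
  </body>
</html>

webpack and webpack-dev-server are build-time tools: once bundle.js has been generated on the dev machine, any static file server can serve it.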

Downloading entire sites for offline viewing

I'd like to download a website so I can host it on a private network.
I know it's been done a lot with Stack Overflow itself, I just don't know how to do it myself.
The specific site I want to download is CPP-QUIZ, with all the questions and explanations.
I've tried doing it with HTTrack, but it seems to download just a couple of questions and then it fails.
What is usually done to do something like this?
You can use a tool like wget or curl. If the site had an index, you could use wget's recursive option (-r). It seems it does not (the homepage just seems to choose a question at random for this specific site).
In that case you can just start generating commands:
wget http://cppquiz.org/quiz/question/1
wget http://cppquiz.org/quiz/question/2
wget http://cppquiz.org/quiz/question/3
And so on. After running those commands you would have the files downloaded to the directory you ran the commands from.
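Rather than typing each command out, you can script the range (a sketch; adjust the upper bound to however many questions the site has):

for i in $(seq 1 150); do
  wget "http://cppquiz.org/quiz/question/$i"
done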
I was also wondering the same thing and stumbled upon a GitHub project this week called Diskernet.
It saves resources to your file system as you're browsing, using your web browser (when launched with the diskernet binary or CLI tool).
It's still an early project, but I started making use of it this week and it worked great!
https://github.com/i5ik/Diskernet
Highlights:
it can download single-page web applications (it doesn't use wget)
once you download a site, it works offline
There are alternative tools like SiteSucker, but I found Diskernet to work even for websites that require authentication