I am following this doc.
There is a showstopper in it.
It explains that you point the Network Bootstrapper jar at the _node.conf files in a directory to generate the network-related files, but it never mentions what a _node.conf file should look like, especially the configuration for different aspects such as the H2 database, PostgreSQL, etc.
Could you share a sample link (not a CorDapp, I just need network setup examples) or the steps to set up a network with H2 and PostgreSQL?
There are not many articles available on the internet; everything takes me back to the official documentation, which just increases the confusion or convinces me to stop working.
If you run gradle deployNodes, it will generate all the node configuration files needed by the Network Bootstrapper inside a /build/nodes folder, which is created by that Gradle task.
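As a rough sketch (the bootstrapper jar name depends on the Corda version you download, so treat it as a placeholder), the flow looks something like this:

# generate build/nodes with one folder and node.conf per node
./gradlew deployNodes

# point the Network Bootstrapper at that directory
java -jar corda-tools-network-bootstrapper-4.x.jar --dir build/nodes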
You can also find all the files in this repository, which was used in an older (but still valid) bootcamp about how to deploy the nodes using Docker. The configuration files are the same.
You can also find an example of a node configuration file in the documentation here.
There is also a very good tutorial video on YouTube, made by R3, specifically about the Network Bootstrapper.
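To give you an idea, a minimal node.conf sketch could look roughly like this; names, ports and credentials are placeholders. Without a dataSourceProperties block the node uses its embedded H2 database, and the block at the bottom (plus the PostgreSQL JDBC driver jar in the node's drivers directory) is what you would add to switch to Postgres:

myLegalName = "O=PartyA, L=London, C=GB"
p2pAddress = "localhost:10005"
rpcSettings {
    address = "localhost:10006"
    adminAddress = "localhost:10046"
}
rpcUsers = [
    {
        username = "user1"
        password = "test"
        permissions = [ "ALL" ]
    }
]
devMode = true

# Optional: switch from the embedded H2 database to PostgreSQL.
# The PostgreSQL JDBC driver jar must be placed in the node's drivers directory.
dataSourceProperties {
    dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
    dataSource.url = "jdbc:postgresql://localhost:5432/corda"
    dataSource.user = "corda_user"
    dataSource.password = "corda_password"
}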
I am trying to build a simple PWA (no frameworks) with the help of Workbox. For that I have been following tutorials on how to use Workbox, as well as looking at the official documentation. I tried generating a Service Worker with Workbox, but the files generated look very different from what I have seen in those tutorials.
I have noticed that all these tutorials were done with pretty old versions of Workbox (e.g. 3.0), so I am curious whether I am missing some requirements, since I am working with Workbox 6.5.4.
The first steps I took after creating the App shell:
1. npm install workbox-cli -g
2. workbox wizard (to generate workbox-config.js)
3. workbox generateSW / workbox generateSW workbox-config.js
4. Register the Service Worker in my index.html (see the snippet below)
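For step 4, the registration in my index.html is essentially the standard snippet (assuming sw.js ends up next to index.html at the site root):

<script>
  // Register the generated service worker once the page has loaded.
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', () => {
      navigator.serviceWorker.register('/sw.js')
        .then(reg => console.log('Service worker registered with scope:', reg.scope))
        .catch(err => console.error('Service worker registration failed:', err));
    });
  }
</script>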
In all the tutorials I have seen, only one js file named "sw.js" is created when running workbox generateSW. However, when I do so it creates four js files ("sw.js", "sw.js.map", "workbox-d249b2c8.js", "workbox-d249b2c8.js.map"), and I don't understand their content at all.
Contents of sw.js, for example:
if(!self.define){let e,s={};const i=(i,r)=>(i=new URL(i+".js",r).href,s[i]||new Promise((s=>{if("document"in self){const e=document.createElement("script");e.src=i,e.onload=s,document.head.appendChild(e)}else e=i,importScripts(i),s()})).then((()=>{let e=s[i];if(!e)throw new Error(`Module ${i} didn’t register its module`);return e})));self.define=(r,t)=>{const d=e||("document"in self?document.currentScript.src:"")||location.href;if(s[d])return;let n={};const o=e=>i(e,d),c={module:{uri:d},exports:n,require:o};s[d]=Promise.all(r.map((e=>c[e]||o(e)))).then((e=>(t(...e),n)))}}define(["./workbox-d249b2c8"],(function(e){"use strict";self.addEventListener("message",(e=>{e.data&&"SKIP_WAITING"===e.data.type&&self.skipWaiting()})),e.precacheAndRoute([{url:"css/main.css",revision:"d3072ab3693c185313018e404e07d914"},{url:"index.html",revision:"4fc118da921f59395f9c5d3e2ddd85b0"},{url:"js/app.js",revision:"aa3d97a8701edd5dae0d6a9bb89d65bd"}],{ignoreURLParametersMatching:[/^utm_/,/^fbclid$/]})}));
//# sourceMappingURL=sw.js.map
It looks completely different from the template generated in other tutorials... In the official documentation I don't see them go into detail about the file(s) created after running workbox generateSW. I still tried to see if the Service Worker would work, since the command reported The service worker will precache 3 URLs, totaling 1.65 kB. Sadly, when I ran my http-server and opened DevTools, there was no Service Worker to be found.
Thanks a lot in advance (feel free to tell me if you need more information in order to solve this issue)!!
I've been experimenting with single-spa for a while and understand the basics of the developer experience: create a parcel, yarn start on a unique port, add the reference to the import map declaration, and so on. The challenge with this is that as my root-config accrues more and more modules, managing ports and import maps starts to get tedious. What I want is to publish these modules to a central repository and load them from there (e.g., http://someserver.com/repository/moduleA/myorg-modulea.js, etc.).
I was recently introduced to localstack and started thinking a local S3 bucket might serve for this. I have a configuration where builds (yarn build) are automatically published to an S3 bucket running on localstack. But when I try to load the root-config index.html from the bucket I get the following JS error:
Unable to resolve bare specifier '#myorg/root-config'
I can access the JS files for each parcel and the root-config just fine via curl, so I suppose this would be a problem with any HTTP server used in the same way. I can flip the root-config to use the standard webpack-dev-server instead (on port 9000) and it works okay. So I'm guessing there's a difference between how a production build resolves these modules and how the local build does.
Has anyone tried something like this and got it working?
I had a similar issue, which I got to work with http-server by adding each child .js file to a sub-folder inside the root-config directory and launching the web server at the root-config directory level.
"imports": {
"#myorg/root-config": "http://someserver.com/root-config.js",
"#myorg/moduleA": "http://someserver.com/modules/moduleA/myorg-modulea.js",
"#myorg/moduleB": "http://someserver.com/modules/moduleB/myorg-moduleb.js",
"#myorg/moduleC": "http://someserver.com/modules/moduleC/myorg-modulec.js",
}
Note: By default, Single-SPA has an "isLocal" check before the current import mappings (the block is shown in full below). You'll need to remove this if using a production build, or it won't load the correct mappings.
<% if (isLocal) { %>
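That line is the start of a block in the root-config's index.ejs which, in the default template, looks roughly like this (the module name and port will differ in your setup):

<% if (isLocal) { %>
<script type="systemjs-importmap">
  {
    "imports": {
      "#myorg/root-config": "//localhost:9000/root-config.js"
    }
  }
</script>
<% } %>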
In my case, I was using localhost instead of someserver so I could navigate into the 'repository' folder and run npx http-server to get everything to run correctly.
I was hung up on this for a little while so hopefully this leads you in the right direction.
For the record, after trying several approaches to hosting a repository as described above, I found the unresolved bare specifier problem went away. So I have to chalk that up to simply not having had the right URL in the import map. Measure twice, cut once.
I'm working my way through the HLF 2.0 docs and would love to discuss and try out the new features "External Builders and Launchers" and "Chaincode as an external service".
My goal is to run HLF 2.0 on a K8s cluster (OpenShift). Does anyone want to get in touch, or has anyone already figured their way through this?
Cheers from Germany
I'm also trying to use the external builder. I set up core.yaml and rebuilt the containers to use it. On "peer lifecycle chaincode install .tgz...", I get an error that the path to the scripts configured in core.yaml cannot be found.
I've added volume bind commands in peer-base.yaml and docker-compose-cli.yaml, and I'm using the first-network setup. I dropped the part of byfn.sh that would connect to the cli container so that I can do that part manually; the create, join and update-anchors steps succeed, and then the install fails. On the install I'm failing on /bin/detect, because the peer can't find that file to fork/exec it. To get that far, the peer was able to read my external builder configuration and the core.yaml file. At the moment I'm trying "mode: dev" in core.yaml, which seems to indicate that the scripts and the chaincode will be run "locally", which I think means they should run in the cli container. Otherwise, I've tried to walk the code to see how the Docker containers are created dynamically, and from what image, but I haven't been able to nail that down yet.
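For reference, the section of core.yaml I'm talking about is shaped roughly like this in my setup (the builder name and path are just what I chose; the path has to be visible inside the peer container and contain the bin/detect, bin/build and bin/release scripts):

chaincode:
  externalBuilders:
    - name: my-external-builder
      path: /opt/external-builder
      # Fabric 2.0 calls this key environmentWhitelist; later 2.x releases rename it propagateEnvironment
      environmentWhitelist:
        - GOPROXY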
We currently have an application that is a plugin host and thus has the "Pipeline" folder in its application directory. All of the plugins managed through this host relate to a Windows service that is running, and that Windows service is basically for managing one county, for the purposes of this example.
What we want to achieve is to be able to install multiple instances of this Windows service and to manage each of them through the host application. Our original thought was to have several "Pipeline" folders, one for each county, each managing its instance of the Windows service. But I don't see how we are going to do this, since it seems like the "Pipeline" folder naming convention is set in stone and there is no way to dynamically point your application to a specific "Pipeline" folder.
Any thoughts?
Seems like I always dig up the answer after posting...
There is a parameter on the FindAddIns method used to pass the pipeline root. This should work just fine.
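A rough sketch of what that looks like (ICountyPlugin and the folder layout are placeholders for your own host view contract and per-county directories):

using System;
using System.AddIn.Hosting;

// Stand-in for your real host-side view contract.
public interface ICountyPlugin { }

public static class PluginLoader
{
    public static void LoadFor(string county)
    {
        // One pipeline root per county instead of the default "Pipeline" folder next to the host.
        string pipelineRoot = @"C:\PluginHosts\" + county + @"\Pipeline";

        // Rebuild the add-in store for that specific root...
        string[] warnings = AddInStore.Rebuild(pipelineRoot);
        foreach (string warning in warnings)
            Console.WriteLine(warning);

        // ...then discover and activate add-ins against the same root.
        foreach (AddInToken token in AddInStore.FindAddIns(typeof(ICountyPlugin), pipelineRoot))
        {
            ICountyPlugin plugin = token.Activate<ICountyPlugin>(AddInSecurityLevel.FullTrust);
            // Hand the plugin to the matching Windows service instance here.
        }
    }
}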
We currently use Ant to automate our deployment process. One of the tasks that needs carrying out when setting up a new service is to implement monitoring for it.
This involves adding the service to one of the hosts in the Nagios configuration directory.
Has anyone attempted to implement such a thing where it is all automated? It seems that the Nagios configuration is laid out with the files split up so that they are host-based, as opposed to application-based.
For example:
localhost.cfg
This may cause an issue for an automated solution, since I'm setting up the monitoring as I'm deploying the application to the environment (i.e. the host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
OK, you could say that you really only need to set up the monitoring once, but I want the developers to have the power to update the checking script when the testing criteria change, without too much involvement from Operations.
Anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional; you can have it all in one file if you want to, or split it up into several files as you see fit. The cfg_dir configuration statement can be used to have Nagios pick up any .cfg files found in a given directory.
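For example (paths and command arguments are just illustrative, and check_http assumes the stock command definition), nagios.cfg can point at a drop-in directory and your Ant deployment can write one application-specific .cfg file into it:

# in nagios.cfg: pick up every .cfg file under this directory
cfg_dir=/usr/local/nagios/etc/services.d

# a file the deployment drops in, e.g. services.d/myapp.cfg
define service {
    use                  generic-service
    host_name            localhost
    service_description  MyApp health check
    check_command        check_http!-p 8080 -u /health
}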
When configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external commands pipe.
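For example, assuming the default command file location (check the command_file setting in your nagios.cfg), the deployment script can write a RESTART_PROGRAM command into the pipe:

# timestamped external command written to the Nagios command pipe
now=$(date +%s)
printf "[%s] RESTART_PROGRAM\n" "$now" > /usr/local/nagios/var/rw/nagios.cmd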
Nagios provides a configuration validation tool, so that you can verify that your new configuration is ok before loading it into the live environment.
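For example, assuming a default install layout:

# verify the new configuration before reloading the running daemon
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg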