How to add multiple service workers on Flutter Web - flutter

I want to add a service worker (in addition to the existing "flutter_service_worker").
The problem I have is that I can't get both to be active at the same time. I don't quite understand the scope issue.
When I register my service worker, I do the following in the index.html: navigator.serviceWorker.register("./sw.js", { scope: "/" });
Then I install it, activate it and cache my files, but it overrides the original flutter_service_worker.
Which scope should be used to keep both service workers active?
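As an illustration only, here is a minimal sketch of registering the second worker on a narrower, non-overlapping scope so that Flutter's flutter_service_worker.js can keep the default "/" scope (the "/push/" sub-scope is made up for this example; note the lowercase scope key):

// Leave Flutter's own registration of flutter_service_worker.js untouched (default scope "/").
// Register the extra worker on a narrower, non-overlapping scope so both registrations stay active.
// Each page/request is then controlled by whichever worker has the most specific matching scope.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('./sw.js', { scope: '/push/' })
    .then((reg) => console.log('sw.js registered with scope:', reg.scope))
    .catch((err) => console.error('sw.js registration failed:', err));
}

Keep in mind that a page is only ever controlled by one service worker at a time, so if both workers are registered for the same "/" scope, the later registration replaces the earlier one for that scope, which is the overlap described above.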

Related

Can a ReplicaSet be configured to allow in-progress updates to complete?

I currently have a Kubernetes setup where we are running a decoupled Drupal/Gatsby app. Drupal acts as a content repository that Gatsby pulls from when building. Drupal is also configured, through a custom module, to connect to the k8s API and patch the deployment that Gatsby runs under. Gatsby doesn't run persistently; instead, this deployment uses Gatsby as an init container to build the site so that it can then be served by an nginx container. By patching the deployment (modifying a label), a new ReplicaSet is created, which forces a new Gatsby build, ultimately replacing the old build.
This seems to work well and I'm reasonably happy with it, except for one aspect. There is currently an issue with the default scaling behaviour of ReplicaSets when it comes to multiple subsequent content edits. When you make a subsequent content edit within Drupal, it will still contact the k8s API and patch the deployment. This results in a new ReplicaSet being created, the original ReplicaSet being left as is, the previous ReplicaSet being scaled down, and any pods that are currently being created (Gatsby building) being killed. I can see why this is probably desirable in most situations, but for me this increases the amount of time it takes before you can see these changes on the site. If multiple people are using Drupal at the same time making edits, this will be compounded and could become problematic.
Ideally I would like the containers that are currently building to be able to complete and for those ReplicaSets to finish scaling up, queueing another ReplicaSet to be created once this is completed. This would allow any updates in the first build to be deployed ASAP, whilst queueing up another build immediately after to include any subsequent content, and this could continue for as long as the load is there to require it and no longer. Is there any way to accomplish this?
This is the regular behavior of Kubernetes. When you update a Deployment, it creates a new ReplicaSet and, accordingly, new Pods with the new settings. Kubernetes keeps some of the old ReplicaSets in case you need to roll back.
If I understand your question correctly, you cannot change this behavior, so you need to do something with the architecture of your application.

Service Fabric Application - changing instance count on application update fails

I am building a CI/CD pipeline to release SF Stateless Application packages into clusters using parameters for everything. This is to ensure environments (DEV/UAT/PROD) can be scoped with different settings.
For example, in a DEV cluster an application package may have an instance count of 3 (in a 10-node cluster).
I have noticed that if an application is in the cluster and running with an instance count of, for example, 3, and I change the deployment parameter to anything else (e.g. 5), the application package will upload and register the type, but will fail when attempting to do a rolling upgrade of the running application.
The same happens the other way around, e.g. if the running app has an instance count of -1 and you want to reduce the count on the next rolling deployment.
Have I missed a setting or config somewhere, or is this how it is supposed to be? At present it's not lending itself to being something that can easily be scaled without downtime.
In its simplest form, we just want to be able to change instance counts on application updates, as we have an infrastructure-as-code approach to changes, builds and deployments for full tracking ability.
Thanks in advance
This is a common error when using default services.
This has already been answered multiple times in these places:
Default service descriptions can not be modified as part of upgrade: set EnableDefaultServicesUpgrade to true
https://blogs.msdn.microsoft.com/maheshk/2017/05/24/azure-service-fabric-error-to-allow-it-set-enabledefaultservicesupgrade-to-true/
https://github.com/Microsoft/service-fabric/issues/253#issuecomment-442074878

How can we route a request to every pod under a Kubernetes service on OpenShift?

We are building a JBoss BRMS application with two Spring Boot microservices, one for rule generation (SRV1) and one for rule execution (SRV2).
The idea is to generate the rules using the generation microservice (SRV1) and persist them in the database with versioning. The next part of the process is having the execution microservice load these persisted rules into each pod's memory by querying the information from the shared database.
There are two scenarios when this should happen:
When the rule execution service pod/pods start up, they query the DB for the latest version, and every pod running the execution application loads those rules from the shared DB.
The second scenario is that we want to manually trigger the loading of a specific version of the rules on every pod running the execution application, preferably via a REST call.
Which is where the problem lies!
Whenever we try to issue a REST request to the API, since it is load-balanced under a Kubernetes service, the request hits only one of the pods and the rest of them do not load the specific rules.
Is there a programmatic or design change that may help us achieve this, or is there any other way to structure our application so that we can load a certain version of the rules on all pods serving the execution microservice?
The second scenario is that we want to manually trigger the loading of a specific version of the rules on every pod running the execution application, preferably via a REST call.
What about using Rolling Updates? When you want to change the version of rules to be fetched within all execution pods, tell OpenShift to do a rolling update, which kills/starts all your pods one by one until all pods are on the new version; thus, they fetch the specific version of rules at startup. The trigger of rolling updates and the way you define the version resolution is up to you. For instance: have an ENV var within the pod that defines the version of rules to be fetched from the DB, then change the ENV var to a new value and perform a rolling update. At the end, you should end up with a new set of pods, all of them fetching the rules version based on the new value of the ENV var you set.
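The execution service in the question is Spring Boot, but the idea is language-agnostic; as a rough sketch only (RULES_VERSION and loadRulesFromDb are names invented for this example), the startup logic inside each pod would look something like:

// Each pod reads the pinned rules version from its environment at startup.
// Changing the env var in the deployment config and rolling the pods makes
// every new pod load that version on boot.
const RULES_VERSION = process.env.RULES_VERSION || 'latest';

async function loadRulesFromDb(version) {
  // Placeholder: query the shared database for the rule set tagged with `version`
  // and load it into this pod's memory.
  console.log(`Loading rule set ${version} from the shared DB`);
}

loadRulesFromDb(RULES_VERSION);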

Load and use a Service Worker in a Karma test

We want to write a Service Worker that performs source code transformation on the loaded files. In order to test this functionality, we use Karma.
Our tests import source files, on which the source code transformation is performed. The tests only succeed if the Service Worker performs the transformation and fail when the Service Worker is not active.
Locally, we can start Karma with singleRun: false and watch for changed files to restart the tests. However, a Service Worker is not active for the page that originally registered it. Therefore, every test run succeeds but the first one.
For continuous integration, however, we need a single-run mode. So our Service Worker is not active during the test run, and the tests fail accordingly.
Two consecutive runs do not solve this issue either, as Karma restarts the browser in between (so we lose the Service Worker).
So the question is: how can we make the Service Worker available in the test run?
E.g., by preserving the browser instance used by Karma.
Calling self.clients.claim() within your service worker's activate handler signals to the browser that you'd like your service worker to take control on the initial page load in which the service worker is first registered. You can see an example of this in action in Service Worker Sample: Immediate Control.
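A minimal sketch of such a worker, assuming it is also fine to call skipWaiting() during install (a common companion to clients.claim(), not something the answer requires):

// sw.js -- the service worker under test
self.addEventListener('install', (event) => {
  // Let the new worker activate as soon as installation finishes.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', (event) => {
  // Take control of pages that are already open (the page Karma just loaded)
  // without requiring a reload.
  event.waitUntil(self.clients.claim());
});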
I would recommend that in the JavaScript of your controlled page, you wait for the navigator.serviceWorker.ready promise to resolve before running your test code. Once that promise does resolve, you'll know that there's an active service worker controlling your page. The test for the <platinum-sw-register> Polymer element uses this technique.
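On the page side, the test setup could then look roughly like this (a Mocha-style before hook and the '/transform-sw.js' path are assumptions made for this sketch):

// Register the worker and wait for it before running the tests that depend
// on the source transformation.
before(async () => {
  // '/transform-sw.js' is a placeholder; the worker must be served from a path
  // whose scope covers the page Karma loads the tests into.
  await navigator.serviceWorker.register('/transform-sw.js');
  // Resolves once a service worker for this scope is active; combined with
  // clients.claim() in the worker, it will also be controlling this page.
  await navigator.serviceWorker.ready;
});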

JBoss 6 Cluster Node register & deregister listener in deployed application

I have a cluster of JBoss 6 AS in domain mode, with an application deployed in it. My application needs to have a listener (callback) for when a new node becomes a member of the cluster and also when one is removed. Is there a way to get the list of member nodes and to add such a listener?
The simplest way is to define a clustered cache in the configuration and get access to it from your code (see example). With the cache available, you can call cache.getCacheManager().addListener(Object) with a listener object that listens for org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged events. See the listener documentation for more info.