Load and use a Service Worker in Karma test - karma-runner

We want to write a Service Worker that performs source code transformation on the loaded files. In order to test this functionality, we use Karma.
Our tests import source files, on which the source code transformation is performed. The tests only succeed if the Service Worker performs the transformation and fail when the Service Worker is not active.
Locally, we can start Karma with singleRun: false and watch for changed files to restart the tests. However, a Service Worker is not active for the page that originally registered it, so every test case but the first one succeeds.
For continuous integration, however, we need a single-run mode. In that mode, our Service Worker is never active during the test run, so the tests fail accordingly.
Two consecutive runs do not solve this issue either, as Karma restarts the browser between runs (so we lose the Service Worker).
So, the question is: how can we make the Service Worker active during the test run?
E.g., by preserving the browser instance used by Karma.

Calling self.clients.claim() within your service worker's activate handler signals to the browser that you'd like your service worker to take control on the initial page load in which the service worker is first registered. You can see an example of this in action in Service Worker Sample: Immediate Control.
I would recommend that in the JavaScript of your controlled page, you wait for the navigator.serviceWorker.ready promise to resolve before running your test code. Once that promise does resolve, you'll know that there's an active service worker controlling your page. The test for the <platinum-sw-register> Polymer element uses this technique.
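Putting those two pieces together, a sketch could look like the following: the worker claims its clients as soon as it activates, and the test side waits on navigator.serviceWorker.ready before running any specs. The wiring is wrapped in a function here purely so the sketch stays self-contained; the file name and the /base/ path (Karma's default serving prefix) are assumptions.

```javascript
// sw.js - sketch of a worker that takes control immediately.
// Wrapping the listeners in a function is only for testability of this
// sketch; in a real sw.js you would call self.addEventListener directly.
function setupImmediateControl(swGlobal) {
  swGlobal.addEventListener('install', (event) => {
    // Activate the new worker without waiting for old ones to go away.
    event.waitUntil(swGlobal.skipWaiting());
  });
  swGlobal.addEventListener('activate', (event) => {
    // Take control of pages loaded before the worker became active,
    // including the very first test page.
    event.waitUntil(swGlobal.clients.claim());
  });
}
// Only wire up when actually running inside a service worker context.
if (typeof self !== 'undefined' && self.clients) setupImmediateControl(self);

// Test-side helper: resolve only once an active worker controls the page.
async function waitForServiceWorker(container, scriptUrl) {
  await container.register(scriptUrl); // e.g. '/base/sw.js' under Karma
  return container.ready;              // resolves with the active registration
}
```

In a Karma spec you would then `await waitForServiceWorker(navigator.serviceWorker, '/base/sw.js')` before importing the files the worker is supposed to transform.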

Related

NestJS schedulers are not working in production

I have a backend service in NestJS that is deployed on Vercel.
I need several schedulers, so I have used the @nestjs/schedule lib, which is super easy to use.
Locally, everything works perfectly.
For some reason, the only thing that is not working in my production environment is those schedulers. Everything else is working - endpoints, database access, and so on.
Does anyone have an idea why? Is it something with my deployment? Maybe Vercel has some issue with that, or maybe this schedule library requires something that Vercel doesn't provide?
I am clueless.
Cold boot is the process of starting a computer from shutdown or a powerless state and setting it to normal working condition.
This means that code deployed in a serverless manner only runs when the endpoint is called. The platform you are using spins up a virtual machine to execute your code and keeps that machine running for a certain period of time in case you get another API hit; it's cheaper and easier for them to keep the machine running for, say, 60 seconds or 5 minutes than to redeploy it on every call after shutting the machine down when function execution ends.
So in your case, what is most likely happening is that the machine you are setting the cron on is killed after a period of time. Cron jobs are tied to the specific machine they were scheduled on, so if the machine is shut down, the cron dies with it. The only case where the cron would run is if it was triggered at a point in time before the machine was shut down.
Certain cloud providers give you the option to keep the machines alive. I remember Google Cloud used to take the approach that if a serverless function is called frequently, it shifts from cold boot to hot start, which doesn't kill the machine entirely, so if you have traffic the machines stay alive.
From quick research, Vercel isn't the best fit for crons due to the nature of its infrastructure, and that is what you are looking for. In general, crons aren't for serverless functions. You can run the scheduled work through queues, for example, or another third-party service; check out this link by Vercel.
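A serverless-friendly pattern is to expose the scheduled work as a plain HTTP handler and let an external scheduler (Vercel's cron feature, or any third-party cron service) call it, so nothing needs to stay alive between invocations. The sketch below illustrates the shape of such a handler; the names (makeCronHandler, the Bearer secret) are illustrative assumptions, not NestJS or Vercel APIs.

```javascript
// Sketch: wrap the job logic in an HTTP-style handler that an external
// scheduler can hit. A shared secret keeps random visitors from
// triggering the job. All names here are illustrative assumptions.
function makeCronHandler(runTask, secret) {
  return async function handler(req) {
    // Only the scheduler knows the secret, so reject everyone else.
    if (req.headers.authorization !== `Bearer ${secret}`) {
      return { status: 401, body: 'unauthorized' };
    }
    const result = await runTask();
    return { status: 200, body: JSON.stringify(result) };
  };
}
```

The scheduler then just needs to send the secret in the Authorization header at the configured times; the job code itself no longer depends on the machine surviving between calls.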

Is there a Kubernetes pattern or concept equivalent to init containers that runs after the application has started?

I have an application that starts and is then configured via the application's own API calls (i.e. it cannot be configured until after it is running). The application also takes a bit of time to start up before it is ready to accept the configuration. This container is being run inside a Kubernetes cluster.
I want to create a helper container inside the pod that:
Uses a different language than the application container (it adds Python, so it should be a separate container)
Loops/waits/sleeps until the application is ready (checks for a URL to become available)
Runs a script to configure the application, then starts the configuration
Runs to completion and isn't always running
Is tied to the main application (automated; runs once whenever the main application is restarted - I can already do this if I just run it manually after a restart)
(Bonus) Runs the script to completion or marks the main application as failed
Basically I am looking for a lot of the same characteristics of an init container, except that it runs post application startup instead of pre application startup.
Anyone have an example or pattern to research that does this?
Thanks!
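For illustration only, the pattern described above could be sketched as a helper ("sidecar") container in the same pod that polls the main container and then applies the configuration. The image names, port, readiness URL, and script path below are all assumptions, and with the pod-level restartPolicy: Always the helper sleeps instead of exiting so Kubernetes doesn't restart it.

```yaml
# Sketch only - image names, port, and script paths are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-configurator
spec:
  containers:
  - name: app
    image: my-app:latest          # the main application
    ports:
    - containerPort: 8080
  - name: configurator            # separate image, so it can ship Python
    image: my-configurator:latest
    command: ["sh", "-c"]
    args:
    - |
      # wait until the app answers, then configure it once
      until wget -q -O /dev/null http://localhost:8080/ready; do
        sleep 2
      done
      python /scripts/configure.py
      # pod restartPolicy is Always, so sleep instead of exiting
      sleep infinity
```

Because both containers share the pod's network namespace, the helper can reach the application on localhost; when the pod is restarted, the helper runs again automatically.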

How to create a cron job in a Kubernetes deployed app without duplicates?

I am trying to find a solution to run a cron job in a Kubernetes-deployed app without unwanted duplicates. Let me describe my scenario, to give you a little bit of context.
I want to schedule jobs that execute once at a specified date. More precisely: creating such a job can happen anytime and its execution date will be known only at that time. The job that needs to be done is always the same, but it needs parametrization.
My application is running inside a Kubernetes cluster, and I cannot assume that there will always be only one instance of it running at any given moment. Therefore, creating the said job will lead to multiple executions of it, because all of my application instances will spawn it. However, I want to guarantee that a job runs exactly once in the whole cluster.
I tried to find solutions for this problem and came up with the following ideas.
Create a local file and check if it is already there when starting a new job. If it is there, cancel the job.
Not possible in my case, since the duplicate jobs might run on other machines!
Utilize the Kubernetes CronJob API.
I cannot use this feature because I have to create cron jobs dynamically from inside my application. I cannot change the cluster configuration from a pod running inside that cluster. Maybe there is a way, but it seems to me there has to be a better solution than giving the application access to the cluster it is running in.
Would you please be as kind as to give me any directions at which I might find a solution?
I am using a managed Kubernetes Cluster on Digital Ocean (Client Version: v1.22.4, Server Version: v1.21.5).
After thinking about a solution for a rather long time I found it.
The solution is to take the scheduling of the jobs to a central place. It is as easy as building a job web service that exposes endpoints to create jobs. An instance of a backend creating a job at this service will also provide a callback endpoint in the request which the job web service will call at the execution date and time.
The endpoint in my case links back to the calling backend server, which carries the logic to be executed. It would be rather tedious to make the job service execute the logic directly, since there are a lot of dependencies involved in the job. I keep a separate database in my job service just to store information about whom to call and how. Addressing the startup-after-crash problem becomes trivial, since there is only one instance of the job web service: it can simply re-create the jobs after retrieving them from its database in case the service crashed.
Do not forget to take care of failing jobs. If your backends are not reachable for some reason to take the callback, there must be some reconciliation mechanism in place that will prevent this failure from staying unnoticed.
A little note I want to add: In case you also want to scale the job service horizontally you run into very similar problems again. However, if you think about what is the actual work to be done in that service, you realize that it is very lightweight. I am not sure if horizontal scaling is ever a requirement, since it is only doing requests at specified times and is not executing heavy work.
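Under the assumption of a single job-service instance, the scheduling core can be sketched in a few lines. Persistence and the HTTP endpoints are omitted; the callback here stands in for the HTTP call back to the requesting backend.

```javascript
// Sketch of the scheduling core inside the central job web service.
// In the real service, `callback` would be an HTTP request to the
// callback endpoint supplied by the backend, and jobs would also be
// written to a database so they survive a crash.
class JobService {
  constructor() {
    this.jobs = new Map();
    this.nextId = 1;
  }
  // Register a job to fire once at `runAt` (a millisecond timestamp).
  schedule(runAt, callback) {
    const id = this.nextId++;
    const delay = Math.max(0, runAt - Date.now());
    const timer = setTimeout(() => {
      this.jobs.delete(id);
      callback(); // e.g. POST to the backend's callback URL
    }, delay);
    this.jobs.set(id, { runAt, timer });
    return id;
  }
  // Cancel a pending job; returns whether it existed.
  cancel(id) {
    const job = this.jobs.get(id);
    if (!job) return false;
    clearTimeout(job.timer);
    this.jobs.delete(id);
    return true;
  }
}
```

Because every backend instance talks to this one service, duplicate scheduling collapses naturally; the reconciliation for unreachable callbacks still has to be layered on top.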

Stateless Worker service in Service Fabric restarted in the same process

I have a stateless service that pulls messages from an Azure queue and processes them. The service also starts some threads in charge of cleanup operations. We recently ran into an issue where these threads, which ideally should have been killed when the service shuts down, continued to remain active (definitely a bug in our service shutdown process).
Looking further at the logs, it seemed that the RunAsync method's cancellation token received a cancellation request, and later, within the same process, a new instance of the stateless service that was registered in ServiceRuntime.RegisterServiceAsync was created.
Is it expected behavior that Service Fabric can reuse the same process to start a new instance of the stateless service after shutting down the current instance?
The Service Fabric documentation (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-hosting-model) does seem to suggest that different services can be hosted in the same process, but I could not find the above behavior mentioned there.
In the shared process model, there's one process per ServicePackage instance on every node. Adding or replacing services will reuse the existing process.
This is done to save some (process-level) overhead on the node that runs the service, so hosting is more cost-efficient. Also, it enables port sharing for services.
You can change this (default) behavior by configuring the 'exclusive process' mode in the application manifest, so every replica/instance will run in a separate process:
<Service Name="VotingWeb" ServicePackageActivationMode="ExclusiveProcess">
As you mentioned, you can monitor the CancellationToken to abort the separate worker threads, so they can stop when the service stops.
More info about the app model here and configuring process activation mode.

Artifact dependency direction in deployment diagram

I have some trouble defining dependencies for artifacts in a deployment diagram for the following cases:
A service (MyService) launched by a process supervisor (Supervisord, init, a cron job, ...)
Some HTML files served by a static HTTP file server
There is a kind of double dependency since a service (or HTML files) needs a process supervisor (or an HTTP file server); and obviously the process supervisor (or the HTTP file server) has a configuration pointing to the supervised process (or the files to serve).
I see the following modeling possibilities:
1) The process supervisor has a dependency to the service, since it controls it
2) The service has a dependency to the process supervisor, since it cannot run without it
3) Double dependency
4) We consider the process supervisor to be a UML node, and the service runs on this node
For me, the most logical would be 1), since the process supervisor must have knowledge of the service it supervises. And while 4) seems to be a good answer, I feel I would lose the ability to explicitly ask for the deployment of a specific process-supervisor artifact (Supervisord, or cron, or ...).
If we want to emphasize the needs of the two artifacts, is there a standard approach, or is the answer debatable?
A service (MyService) launched by a process supervisor (Supervisord, init, a cron job, ...)
The service has a dependency to the process supervisor since it cannot run without it
Based on the first statement, I don't think that the second one is true.
The service doesn't communicate with the launcher (supervisor) in any way (how would you communicate with cron?) -- the supervisor just launches and observes the service, so I don't see a dependency. If cron were to die, the service would happily carry on (barring cron killing its subprocesses).