Workbox is not connecting. What is the reason? - progressive-web-apps

I'm wiring up Workbox based on the official documentation, and I ended up with the following code.
  
self.addEventListener('install', (event) => {
    importScripts('https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js');
    if (workbox) {
        console.log(`Yay! Workbox loaded 🎉`);
    } else {
        console.log(`Boo! Workbox did not boot 😬`);
    }
    workbox.routing.registerRoute(
        /\.js$/,
        new workbox.strategies.NetworkFirst()
    );
    workbox.routing.registerRoute(
        // Cache image files.
        /\.(?:png|jpg|jpeg|svg|gif)$/,
        // Use the cache if it's available.
        new workbox.strategies.CacheFirst({
            // Use a custom cache name.
            cacheName: 'image-cache',
            plugins: [
                new workbox.expiration.Plugin({
                    // Cache only 20 images.
                    maxEntries: 20,
                    // Cache for a maximum of a week.
                    maxAgeSeconds: 7 * 24 * 60 * 60,
                })
            ],
        })
    );
    self.skipWaiting();
    console.log('Service Worker has been installed');
});
Workbox itself loads correctly, but nothing beyond that happens. NetworkFirst produces the following warnings.
Event handler of 'fetch' event must be added on the initial evaluation of worker script.
Event handler of 'message' event must be added on the initial evaluation of worker script.
Nothing appears in the cache. I don't use Node.js, but as far as I understood from the documentation and existing answers, it isn't required. So what is the problem after all? Is it the missing Node, or am I wiring something up incorrectly?

The error message is telling you what's wrong.
Event handler of 'fetch' event must be added on the initial evaluation of worker script.
In your case, when the script file is first evaluated, no fetch or message listeners are defined; they are only added later, once the install event fires and Workbox registers your routes. Those listeners need to be defined during the initial evaluation of the script.
The easy solution is that you don't need the install listener at all. Remove it, and import and use Workbox directly at the top level of the service worker, as in the sketch below.
Also note that you should only use importScripts at the top level of your script.
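A corrected version of the worker above could look like this (same routes; the only change is that importScripts and the route registrations run during the initial evaluation, which is what registers the fetch/message listeners in time):
importScripts('https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js');
if (workbox) {
    console.log(`Yay! Workbox loaded 🎉`);
    // Registered at the top level, so Workbox adds its fetch/message
    // listeners during the initial evaluation of the script.
    workbox.routing.registerRoute(
        /\.js$/,
        new workbox.strategies.NetworkFirst()
    );
    workbox.routing.registerRoute(
        /\.(?:png|jpg|jpeg|svg|gif)$/,
        new workbox.strategies.CacheFirst({
            cacheName: 'image-cache',
            plugins: [
                new workbox.expiration.Plugin({
                    maxEntries: 20,
                    maxAgeSeconds: 7 * 24 * 60 * 60,
                })
            ],
        })
    );
} else {
    console.log(`Boo! Workbox did not boot 😬`);
}
// Activating the new worker immediately still works from a plain install listener.
self.addEventListener('install', () => self.skipWaiting());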


How to run bpy callback on workspace tools change

How do I add a pre-draw hook that fires when the current context's workspace.tools changes?
I attempted to get there with bpy.types.SpaceView3D.draw_handler_add(...): the handler runs on every draw, checks whether workspace.tools changed, and if so runs my callback. But my callback wants to add its own SpaceView3D.draw_handler_add, and doing it this way adds it a frame too late, leaving the viewport undrawn until a user event repaints the screen (a sketch of this polling approach follows below).
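For reference, a minimal sketch of that polling approach (the reaction here is just a print; the real callback would install its own draw handler):
import bpy

_last_tool = None

def poll_tool_change():
    # Pre-draw hook: runs on every viewport redraw and compares the active
    # tool against the last one seen - inherently one frame late.
    global _last_tool
    tool = bpy.context.workspace.tools[-1].idname
    if tool != _last_tool:
        _last_tool = tool
        print("tool changed to", tool)

_handle = bpy.types.SpaceView3D.draw_handler_add(
    poll_tool_change, (), 'WINDOW', 'PRE_VIEW')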
I found this post online: https://devtalk.blender.org/t/update-property-when-active-tool-changes/11467/12
Summary: there may be a new mainline callback coming (https://developer.blender.org/D10635), and the thread contains this exchange:
AFWS (Jan '20, replying to kaio): This seems like a better solution. It's kind of mystery code, since I couldn't figure out where you got that code info, but then I started looking at the space_toolsystem_common.py file.
kaio (Jan '20, replying to AFWS):
Just realized there might be a cleaner way of getting callbacks for
active tools using the msgbus. Since workspace tools aren’t rna
properties themselves, figured it’s possible to monitor the
bpy_prop_collection which changes with the tool.
The handle is the workspace itself, so shouldn’t have to worry about
keeping a reference. The subscription lasts until a new file is
loaded, so add a load_post callback which reapplies it.
Note this doesn’t proactively subscribe to workspaces added
afterwards. Might need a separate callback for that :joy:
import bpy

def rna_callback(workspace):
    # Print the idname of the now-active tool.
    idname = workspace.tools[-1].idname
    print(idname)

def subscribe(workspace):
    bpy.msgbus.subscribe_rna(
        # Watch this workspace's "tools" collection, which changes with the tool.
        key=workspace.path_resolve("tools", False),
        owner=workspace,
        args=(workspace,),
        notify=rna_callback)

if __name__ == "__main__":
    subscribe(bpy.context.workspace)

    # Subscribe to all workspaces:
    if 0:
        for ws in bpy.data.workspaces:
            subscribe(ws)

    # Clear all workspace subscriptions:
    if 0:
        for ws in bpy.data.workspaces:
            bpy.msgbus.clear_by_owner(ws)
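As noted above, the subscription only lasts until a new file is loaded, so a load_post handler can reapply it. A sketch, reusing the subscribe() function from the snippet above:
import bpy
from bpy.app.handlers import persistent

@persistent  # keep the handler alive across file loads
def resubscribe(dummy):
    # msgbus subscriptions are cleared on file load; re-apply them.
    for ws in bpy.data.workspaces:
        subscribe(ws)

bpy.app.handlers.load_post.append(resubscribe)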

Anylogic: Queue TimeOut blocks flow

I have a pretty simple AnyLogic discrete-event model where POs are launched regularly, and a certain amount of material arrives at the incoming Queue in one shot (see sample picture below). The Manufacturing process then starts using that material at a regular rate, but I want to check whether the material in the queue gets outdated, so I'm using the TimeOut option of that queue to scrap material older than 40 weeks.
The problem is that every time some material gets scrapped through this TimeOut exit, the downstream Manufacturing process "stops" pulling more material instead of continuing, and it does not restart until a new batch of material is received into the Queue.
What am I doing wrong here? Thanks a lot in advance!!
Kindest regards
Your situation is interesting, because there doesn't seem to be anything wrong with what you're doing. So even though your setup looks correct, I will provide you with a workaround: instead of the Queue block, use a Wait block. You can assign a timeout and link the timeout port just like you did for the queue (see image at the end of the answer).
In the On Enter field of the Wait block (which I will assume is named Fridge), write the following code:
if (MFG.size() < MFG.capacity) {
    self.free(agent);
}
In the On Enter of MFG block write the following:
if (self.size() < self.capacity && Fridge.size() > 0) {
    Fridge.free(Fridge.get(0));
}
And finally, in the On Exit of your MFG block write the following:
if (Fridge.size() > 0) {
    Fridge.free(Fridge.get(0));
}
What we are doing above is pushing the agents manually: each time an agent is processed, the model checks whether there is capacity to send more, and if so, a new agent is sent.
I know this is an unpleasant workaround, but it provides you with a solution until AnyLogic support can figure it out.

Avoiding repetitive calls when creating reactfire hooks

When initializing a component using reactfire, each time I add a reactfire hook (e.g. useFirestoreDocData), it triggers a re-render and therefore repeats all previous initialization. For example:
const MyComponent = props => {
    console.log(1);
    const firestore = useFirestore();
    console.log(2);
    const ref = firestore.doc('count/counter');
    console.log(3);
    const { value } = useFirestoreDocDataOnce(ref);
    console.log(4);
    ...
    return <span>{value}</span>;
};
will output:
1
1
2
3
1
2
3
4
This seems wasteful; is there a way to avoid it?
It is particularly problematic when I need the result of one reactfire hook to create another (e.g. retrieving data from one document to determine which other document to read), and it duplicates the server calls.
See React's documentation on Suspense, particularly the section "Approach 3: Render-as-You-Fetch (using Suspense)".
Reactfire uses this mechanism. It is not supposed to fetch more than once per call, even if the line is executed several times; the machinery behind it "understands" that a fetch is already done and will start the next one.
In your case, React tries to render your component, sees that it needs to fetch, stops rendering, and shows Suspense's fallback while fetching. When the fetch is done, it retries rendering your component, and since the fetch is now complete, the render runs to completion.
You can confirm in your network tab that each call is made only once.
I hope I'm clear; please don't hesitate to ask for more details if I'm not.
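For that to work, the component has to be rendered inside a Suspense boundary, so the render retries described above have a fallback to show. A minimal sketch (MyComponent is the component from the question; the usual reactfire providers are omitted):
import React, { Suspense } from 'react';

const App = () => (
    // While MyComponent is suspended on its fetches, the fallback renders;
    // once the data is cached, MyComponent re-renders to completion.
    <Suspense fallback={<span>loading…</span>}>
        <MyComponent />
    </Suspense>
);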

(Laravel 5) Monitor and optionally cancel an ALREADY RUNNING job on queue

I need to achieve the ability to monitor and be able to cancel an ALREADY RUNNING job on queue.
There's a lot of answers about deleting QUEUED jobs, but not on an already running one.
This is the situation: I have a "job" which consists of HUNDREDS OF THOUSANDS of rows in a database that need to be queried ONE BY ONE against a web service.
Every row needs to be picked up, queried against a web service, stored the response and its status updated.
I had that already working as a Command (launching from / outputting to console), but now I need to implement queues in order to allow piling up more jobs from more users.
So far I've seen Horizon (which doesn't run on Windows due to missing process control libs). However, from the demos I've seen around, it lacks (I believe) a couple of things I need:
Dynamically configurable timeout (the whole job may take more than 12 hours, depending on the number of rows to process on the selected job)
Ability to CANCEL an ALREADY RUNNING job.
I also considered generating EACH REQUEST as a new job, instead of treating a "job" as the whole collection of rows (this would overcome the timeout issue), but that would give me a Horizon "pending jobs" list of hundreds of thousands of records per job, and that would kill the browser (I know Redis itself can handle this without breaking a sweat). Further, I guess it's not possible to cancel "all jobs belonging to X tag".
I've been thinking about hitting an API route, fire the job and decouple it from the app, but I'm seeing that this requires forking processes.
For the ability to cancel, I would implement a database table keyed by job_id, and when the user hits an API endpoint to cancel a job, I'd mark it as "halted". On every loop the job would check its status, and if it finds "halted", it would kill itself.
If I've missed any aspect just holler and I'll add it or clarify about it.
So I'm asking for an advice here since I'm new to Laravel: how could I achieve this?
So I finally came up with this (a bit clunky) solution:
In Controller:
public function cancelJob()
{
    $jobs = DB::table('jobs')->get();
    # I could use a specific ID, a user/owner filter, etc.
    foreach ($jobs as $job) {
        DB::table('jobs')->delete($job->id);
    }
    # This is a file that... well, it's self-explaining
    touch(base_path(config('files.halt_process_signal')));
    return "Job cancelled - It will stop soon";
}
In job class (inside model::chunk() function)
# CHECK FOR HALT SIGNAL AND [OPTIONALLY] STOP THE PROCESS
if ($this->service->shouldHaltProcess()) {
    # build stats, do some cleanup, log, etc...
    $this->halted = true;
    $this->service->stopProcess();
    # This FALSE is what makes the chunk() method stop looping
    return false;
}
In service class:
/**
 * Checks the existence of the 'Halt Process Signal' file
 *
 * @return bool
 */
public function shouldHaltProcess(): bool
{
    return file_exists($this->config['files.halt_process_signal']);
}

/**
 * Stops the batch process
 *
 * @return void
 */
public function stopProcess(): void
{
    logger()->info("=== HALT PROCESS SIGNAL FOUND - STOPPING THE PROCESS ===");
    $this->deleteHaltProcessSignalFile();
    return;
}
It doesn't look very elegant, but it works.
I've searched the whole web, and many answers go for Horizon or other tools that don't fit my case.
If anyone has a better way to achieve this, you're welcome to share it.
Laravel queues have three important config options:
1. retry_after
2. timeout
3. tries
See more: https://laravel.com/docs/5.8/queues
Dynamically configurable timeout (the whole job may take more than 12
hours, depending on the number of rows to process on the selected job)
I think you can configure timeout + retry_after to be around 24 hours, for example:
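A sketch of what that could look like with the Redis driver (the values are illustrative; the rule of thumb is that retry_after must exceed the worker's --timeout, so a job that is still running never gets retried):
// config/queue.php
'connections' => [
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        // longer than the longest expected job (25 h, in seconds)
        'retry_after' => 90000,
    ],
],
Then start the worker with a matching timeout and a single attempt:
php artisan queue:work --timeout=86400 --tries=1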
Ability to CANCEL an ALREADY RUNNING job.
Delete the job from the jobs table, then kill the worker process by its process ID on your server.
Hope it helps :)

Is there any way to inter communicate with modules in boilerplatejs?

Regarding the BoilerplateJS example, how should we adjust those modules so that they intercommunicate, such that once the user makes a change in one module, the other related modules are updated with that change?
For example, if there is a module that retrieves inputs from the user, such as name and sales, and another module that shows the retrieved data in a table or a graph, can you explain with an example how that interconnection works in terms of event handling?
Thanks!!
In BoilerplateJS, each of your modules will have its own moduleContext object. This module context object contains two methods, 'listen' and 'notify'. Have a look at the context class at '/src/core/context.js' for more details.
The component that needs to listen to an event should register for it by specifying the name of the event and a callback handler. The component that raises the event should use the 'notify' method to let others know that something interesting happened (optionally passing a parameter).
Get an update of the latest BoilerplateJS code from GitHub. I just committed changes making clickCounter a composite component, where the 'clickme' component raises an event and the 'lottery' component listens to it and responds.
Code for notifying the Event:
moduleContext.notify('LOTTERY_ACTIVITY', this.numberOfClicks());
Code for listening to the Event:
moduleContext.listen("LOTTERY_ACTIVITY", function(activityNumber) {
    var randomNum = Math.floor(Math.random() * 3) + 1;
    self.hasWon(randomNum === activityNumber);
});
I would look at using a publish-subscribe library, such as Amplify. With this technique it is easy for one module to act as a publisher of events and for others to register as subscribers, listening and responding to those events in a highly decoupled manner; a sketch follows below.
As you are already using Knockout, you might be interested in first trying Ryan Niemeyer's knockout-postbox plugin. More background on this library is available here, including a demo fiddle. You can always switch to Amplify later if you require.
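For illustration, the name/sales example from the question could be wired up with Amplify's pub/sub roughly like this (the topic name and the updateTable handler are made up):
// Subscriber module: re-render the table/graph whenever the data changes.
amplify.subscribe('salesData.updated', function (data) {
    updateTable(data); // hypothetical rendering function
});

// Publisher module: raise the event when the user submits new input.
amplify.publish('salesData.updated', { name: name, sales: sales });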