Which service worker event indicates that it has taken control of the page and will intercept web traffic? - progressive-web-apps

In my web app, some web requests must be intercepted and modified by the service worker, otherwise the requests will fail. This is especially important on the very first visit for a new user. I use clientsClaim() to ensure that.
Since I need to make sure the service worker is ready before I make the request, I tried to wait for navigator.serviceWorker.ready:
await navigator.serviceWorker.ready;
fetch(myRequest);
However, I found it doesn't work as intended. The very first request on the first visit is not intercepted. So I tried to add some wait time:
await navigator.serviceWorker.ready;
await twoSeconds();
fetch(myRequest);
This works, but it hurts the user experience because it delays the first meaningful UI. On the other hand, I can't be sure 2 seconds is long enough on every machine.
What's the event that can tell me as soon as the service worker is ready to intercept traffic? It's only a problem on the first visit, but ideally the event would fire on every reload, because the code is easier to write if I simply await the same thing on every visit.

I think you're looking for a promise that you can use to signal when the current page is under control of a service worker.
This can be achieved via
await new Promise(r => {
  if (navigator.serviceWorker.controller) return r();
  navigator.serviceWorker.addEventListener('controllerchange', e => r());
});
// At this point, the page will be controlled by a service worker.
This code is adapted from this GitHub discussion, and there's more context there.
Generally speaking, it's not great to design a page that will only work if it's controlled by a service worker, since service workers are intended to be a progressive enhancement rather than a core requirement. But if you have that use case, the code above will help.
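For reference, the page-side promise above only resolves once the worker actually claims the page. On the worker side, that comes from claiming clients during activation; here is a minimal sketch using the plain service worker API (Workbox's clientsClaim() wraps the same clients.claim() call), with the fetch handler body left as an assumption:
// sw.js
self.addEventListener('install', (event) => {
  // For updates: activate the new worker without waiting for old tabs to close.
  self.skipWaiting();
});

self.addEventListener('activate', (event) => {
  // Take control of already-open, uncontrolled pages immediately.
  // This is what makes 'controllerchange' fire in the page on the very first visit.
  event.waitUntil(self.clients.claim());
});

self.addEventListener('fetch', (event) => {
  // Intercept and modify requests from controlled pages here.
  event.respondWith(fetch(event.request));
});
Once clients.claim() has run, navigator.serviceWorker.controller is set and the promise in the snippet above resolves, so the first fetch can safely go out.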

Related

Precaching network responses using `Isolates` with Dio and Hive

I just want to precache some endpoint calls at the start of the app so that certain requests load faster, since the backend service is really slow.
I've tried using Isolates for this, but it seems Hive's cache doesn't support that: even if I make the request on another Isolate, the same request still takes the same 20-30 seconds the second time I pull it, when it should already be cached.
Since I read it isn't supported yet, and I personally think this is critical, I moved to simply calling the endpoints while the app is loading a bunch of other stuff on the main thread. I don't want to delay the app any further, so I just want to preload 5 endpoints so that the next time I request them they come back faster.
1. First Approach, with Isolates
I'm calling this inside a function
final requestsToPrecache = events
    .map((entityId) => _dataRepository.getEventTable(entityId: entityId))
    .toList();

compute(precacheFutureOperations, requestsToPrecache);
Then this is the function I'm passing to the Isolate:
void precacheFutureOperations(List<Future> functions) {
  Future.wait(functions);
}
2. Second Approach: not awaiting the network requests at all, just triggering them so they get cached. I don't need to wait for the responses, since I only want to execute them and have the results in cache before they're used, so I just call:
precacheFutureOperations(requestsToPrecache);
All of this triggers the endpoints I want to precache successfully; I'm monitoring that with Proxyman. The weird part is that it doesn't cache them this way. Only after I call the request normally from the Detail Screen does it actually cache as expected when I re-enter the screen.
What can I do to precache multiple requests at the beginning of the app?

Progressive Web App: skipWaiting() with multiple service worker versions

[CONTEXT]
I worked through Jake Archibald's fantastic Udacity course found here: Offline Web Applications. His work provides a toast dialog alerting the user that an update is available and inviting them to update:
Refresh / Dismiss Dialog
While this dialog is available to the user, there's a corner case I can't seem to resolve:
The service worker can be updated any number of times before the client updates the local instance, pushing the numbered version of the service worker past 'just one more'. For example, the current and active service worker is #821, while the service worker that is waiting is now #824.
active and waiting service workers
[PROBLEM]
I cannot find the right way to tell the browser that the next service worker to install needs to be #824 instead of #822. The dialog box + PWA tell me that the current worker is 'redundant', and that I can't get to service worker #824 without refreshing and then clicking the update button.
I can recreate this with any version of Jake's code once the service-worker is set, and skipWaiting() is introduced.
I literally just want to be able to cover the corner case where the service-worker is updated 2 or more times before the user decides to update their local PWA.
You can find Jake's code on github: Jakearchibald/wittr
[ASK]: Has anyone found a solution for this corner case? If so, how do you solve it? What I'm seeing doesn't make sense, as the service worker lifecycle seems to be respected per Google's documentation: service-workers/lifecycle
I did quite a bit of additional reading/research and found the following discussion threads on GitHub:
- Provide an easier way to listen for waiting/activated/redundant Service Workers
- Immediate Service Worker
- Recommended Approach for Refreshing Page on new SW
- Provide a one-line way to listen for a waiting Service Worker
It looks like this idea was brought up in 2017 and has for the most part gone stale. The one-liner proposed in those threads looks like this (note that navigator.serviceWorker.waiting never actually shipped):
navigator.serviceWorker.waiting.then(reg => {
  if (confirm('refresh now?')) reg.waiting.postMessage('skipWaiting');
});
That would give you the ability to listen for a new service worker: after activating service worker #1, service worker #2 is set to redundant and service worker #3 moves into the waiting state. It's obtuse and indirect, but at least you can move the #3 worker up and into the right slot.
A real shout-out to dfabulich, Matt Gaunt, and Beatrix Perez.
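Since navigator.serviceWorker.waiting never shipped, here's a rough sketch of the same idea with the APIs that do exist today (registration.waiting plus 'updatefound'/'statechange'); the confirm() is just a stand-in for the toast dialog:
navigator.serviceWorker.register('/sw.js').then((reg) => {
  const promptToSkipWaiting = (worker) => {
    if (confirm('refresh now?')) worker.postMessage('skipWaiting');
  };

  // A worker may already be waiting (e.g. #824 installed while #821 is active).
  if (reg.waiting) promptToSkipWaiting(reg.waiting);

  // Or it may arrive later; the waiting slot always holds the newest installed worker.
  reg.addEventListener('updatefound', () => {
    const newWorker = reg.installing;
    newWorker.addEventListener('statechange', () => {
      if (newWorker.state === 'installed' && navigator.serviceWorker.controller) {
        promptToSkipWaiting(newWorker);
      }
    });
  });
});

// Reload once the new worker has taken control, so the page uses the fresh assets.
let refreshed = false;
navigator.serviceWorker.addEventListener('controllerchange', () => {
  if (refreshed) return;
  refreshed = true;
  window.location.reload();
});

// In sw.js, the counterpart message handler:
self.addEventListener('message', (event) => {
  if (event.data === 'skipWaiting') self.skipWaiting();
});
Because the waiting slot always holds the latest installed version, this covers the corner case where several updates (#822, #823, #824) land before the user clicks refresh: intermediate workers go redundant and the prompt always points at the newest one.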

What should be returned from the API for CQRS commands?

As far as I understand, in a CQRS-oriented API exposed through a RESTful HTTP API, commands and queries are expressed through the HTTP verbs, with commands being asynchronous and usually returning 202 Accepted, while queries get the information you need. Someone asked me the following: supposing they want to change some information, they would have to send a command and then a query to get the resulting state. Why force the client to make two HTTP requests when you can simply return what they want in the HTTP response of the command, in a single request?
We had a long conversation on the DDD/CQRS mailing list a couple of months ago (link). One part of the discussion was about "one-way commands", and this is what I think you are assuming. You will find that Greg Young is opposed to this pattern. A command changes the state and is therefore prone to failure, meaning it can fail and you should support this. A REST API with POST/PUT requests provides perfect support for this, but you should not just return 202 Accepted; really give some meaningful result back. Some people return 200 success along with an object that contains a URL to retrieve the newly created or updated object. If the command handler fails, it should return 500 and an error message.
Having fire-and-forget commands is dangerous, since it can give a consumer the wrong idea about the system state.
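To illustrate that last point, a command endpoint can acknowledge the command and hand back a pointer to the resulting resource. A minimal Node/Express-style sketch, where handleCreateWidget is a hypothetical stand-in for the real command handler:
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

// Hypothetical command handler; in a real system this would dispatch to the write model.
async function handleCreateWidget(payload) {
  const id = crypto.randomUUID();
  // ... validate and persist the command here ...
  return { id };
}

app.post('/widgets', async (req, res) => {
  try {
    const { id } = await handleCreateWidget(req.body);
    // Return something meaningful: where the new state can be queried,
    // rather than a bare 202 with an empty body.
    res.status(201).location(`/widgets/${id}`).json({ id, href: `/widgets/${id}` });
  } catch (err) {
    // Commands can fail; report it instead of pretending success.
    res.status(500).json({ error: err.message });
  }
});

app.listen(3000);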
My team also recently had a very heated discussion about this very thing. Thanks for posting the question. I have usually been the defender of the "fire and forget" style of commands. My position has always been that, if you want to be able to move to an async command dispatcher some day, then you cannot allow commands to return anything. Doing so would kill your chances, since an async command doesn't have much of a way to return a value to the original HTTP call. Some of my teammates really challenged this thinking, so I had to start asking whether my position was really worth defending.
Then I realized that async or not async is JUST an implementation detail. This led me to realize that, using our frameworks, we can build in middleware to accomplish the same thing our async dispatchers are doing. So we can build our command handlers the way we want to, returning whatever makes sense, and then let the framework around the handlers deal with the "when".
Example: my team is currently building an HTTP API in node.js. Instead of requiring a POST command to only return a blank 202, we are returning details of the newly created resource. This helps the front-end move on. The front-end POSTs a widget and opens a channel to the server's web socket, using the same command as the channel name. The request comes to the server and is intercepted by middleware, which passes it to the service bus. When the command is eventually processed synchronously by the handler, it "returns" via the web socket and the front-end is happy. The middleware can be disabled easily, making the API synchronous again.
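As a rough sketch of the client side of that flow (the channel convention, endpoints and renderWidget are made up; it assumes a socket.io-style client):
async function createWidget(widgetData) {
  // Correlate the command with the eventual result pushed over the socket.
  const correlationId = crypto.randomUUID();

  const socket = io('/commands');                // hypothetical namespace
  socket.on(correlationId, (createdWidget) => {
    renderWidget(createdWidget);                 // the handler finished processing
    socket.disconnect();
  });

  // Fire the command; the HTTP response only acknowledges that it was accepted.
  await fetch('/widgets', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...widgetData, correlationId }),
  });
}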
There is nothing stopping you from doing that. If you execute your commands synchronously and create your projections synchronously, then it is easy to just run a query directly after executing the command and return that result. If you do this asynchronously via the REST API, then you have no query result to send back. If you do it asynchronously within your system, then you can wait for the projection to be created and then send the response to the client.
The important thing is that you separate your write and read models in classic CQRS style. That does not mean you cannot do a read in the same request as the command. Sure, you can send a command to the server and then, with SignalR (or something similar), wait for a notification that your projection has been created/updated. I do not see a problem with waiting for the projection to be created on the server side instead of on the client.
How you do this will affect your infrastructure and error handling. Also, you will hold the HTTP request open for longer if you return the result at once.

GWT: Is Timer the only way to keep my app up-to-date with the server?

I just got asked to reduce the traffic made by my GWT app. There is one method that checks for status.
This method is an asynchronous call wrapped in a Timer. I know web apps are stateless and all that, but I do wonder if there is some other way to do this, or if everyone has a Timer wrapped around a call when they need this kind of behaviour.
You can check out gwteventservice. It claims to have a way to push server events and notify the client.
I have a feeling they might be implemented as long-running (hanging) client-to-server RPC calls which time out after an interval (say 20 seconds) and are then re-made. The server completes the callback if an event happens in the meanwhile.
I haven't used it personally, but I know of people using it to push events to the client. Have a look at the docs. If my assumption is correct, the idea is to send an RPC call to the server which does not return (it hangs). If an event happens on the server, the server responds and that RPC call returns with the event. If there is no event, the call times out after 20 seconds. Then a new call is made to the server, which hangs in the same way until there is an event.
What this achieves is that it reduces the number of calls to the server to either one per event (if there is one) or one call every 20 seconds (if there isn't). It looks like the 20-second interval can be configured.
I imagine that if there is no event the amount of data sent back will be minimal - it might be possible to cancel the callback entirely, or have it fail without data transfer, but I don't really know.
Here is another resource on ServerPush - which is likely what's implemented by gwteventservice.
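To make the long-polling idea concrete, here is a rough browser-side version of that loop in plain JavaScript (not GWT; the /events endpoint and the handler are made up):
const handleEvent = (event) => console.log('server event:', event); // stand-in handler

async function pollForEvents() {
  while (true) {
    try {
      // The server holds this request open until an event occurs or ~20 seconds pass.
      const response = await fetch('/events?timeout=20');
      if (response.status === 200) {
        handleEvent(await response.json()); // an event arrived
      }
      // A 204 (timeout) response means nothing happened; just re-issue the poll.
    } catch (err) {
      // Network error: back off briefly before reconnecting.
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
  }
}

pollForEvents();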
Running on Google App Engine, you could use their Channel technology:
http://code.google.com/intl/en-US/appengine/docs/java/channel/overview.html
If you need the client to get the status from the server, then you pretty much have to make a call to the server to get its status.
- You could look at reducing the size of some of your messages.
- You could wind back the timer so the status call goes out less often.
- You could "optimise" so that the timer gets reset when some other client/server IO happens (i.e. if the server replies, you know it's OK, so you don't need to send the next status request).

Can I use async controllers in the following scenario?

I have an application in ASP.NET MVC where at some point I would like to display a modal dialog to the user with a progress indicator for a running process.
The process behind the scenes does a lot of database processing (based on existing data it generates lots of resulting records that get written back to the database as well). It may take anything from a brief moment to a very long time, depending on the existing data.
The application will initiate this process asynchronously (via an Ajax request) and display progress in the same manner.
The problem
I've read a bit about async controllers, where one can start a process asynchronously and be informed when it ends, but there's no progress indication, and I'm not really sure how browser timeouts are handled. As far as the client is concerned, an async request is the same as a synchronous one: the client will wait for the response (as I understand it). The main difference is that the server executes the work asynchronously, so it won't block other incoming requests. What I should actually do is:
- make a request that starts the process and responds to the client that the process has started;
- have the client periodically poll the server for the process's progress, getting an immediate response back with a percentage value (probably as JSON);
- when progress reaches 100%, the process has ended, so the client knows to make a request for the results.
I'm not convinced that async controllers work this way...
The thing is, I'm not really sure I understand async controllers, hence I'm not sure which approach I should use for the problem just described. I see two possibilities myself:
- ASP.NET MVC async controllers, if they can work this way;
- a Windows Service app that processes data on request and reports its progress. This service would be started by writing a particular record to the DB using a normal controller action; the service would then write its progress status to the DB, so my ASP.NET MVC app would be able to read it on the client's polling requests.
I haven't used async controllers myself in a project. However, here's a link to someone who has:
asynchronous-processing-in-asp-net-mvc-with-ajax-progress-bar
I have personally used Number 2 in a large production project.
Number 2 was a service app running on a separate server, using OpenSSH to communicate between the two servers. We'd poll for progress periodically to update the progress bar in the client's UI via AJAX.
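For illustration, the browser-side polling loop in that kind of setup can be as simple as the following sketch (the /progress and /result endpoints and the UI hook are made up):
const updateProgressBar = (percent) => console.log(`${percent}%`); // stand-in UI hook

async function pollProgress(jobId) {
  while (true) {
    const { percent } = await (await fetch(`/progress/${jobId}`)).json();
    updateProgressBar(percent);
    if (percent >= 100) {
      // The job has finished; fetch the final result.
      return (await fetch(`/result/${jobId}`)).json();
    }
    await new Promise((resolve) => setTimeout(resolve, 2000)); // poll every 2 seconds
  }
}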
Additionally, by separating your web server from your long-running process you are separating your concerns. Your web server is not interested in writing files to disk, handling IO, etc., and so shouldn't be burdened with such.
If your long-running process has to be killed, or fails, this won't affect your web server's ability to handle requests and process transactions.
Another suggestion, for an extremely long-running process, is not to burden the client with waiting at all: give them an option to come back later to see the progress, e.g. send them an e-mail when it's done.
Or actually show them something interesting; in our case we had a signed Java applet show exactly what their process was doing at that exact moment.