Service worker goes to the redundant phase because precached files are blocked by an ad blocker (a Chrome extension) - progressive-web-apps

All the logic fails because a Chrome extension blocks one of my JS files.
Is there a way to make pre-caching more robust? Even if I get an error when caching some files, I can still cache most of them correctly, and I have runtime caching.

If you're using workbox-precaching, then the answer is no—it's designed so that service worker installation will only proceed if all of the items in the precache manifest were successfully added to the cache. That way, you're guaranteed to have a functional set of resources available offline. (There's a longstanding feature request for adding support for "optional" precaching, but it's not clear how that would work in practice.)
I'd recommend using runtime caching for URLs that are optional, and might be blocked by browser extensions. If you want to "warm" the cache in this scenario, and you don't care if the cache population fails, you can add your own logic along the lines of:
import {CacheFirst} from 'workbox-strategies';
import {registerRoute} from 'workbox-routing';
import {precacheAndRoute} from 'workbox-precaching';

const OPTIONAL_CACHE_NAME = 'optional-resources';
const OPTIONAL_URLS = [
  // Add URLs here that might be blocked.
];

self.addEventListener('install', (event) => {
  event.waitUntil((async () => {
    const cache = await caches.open(OPTIONAL_CACHE_NAME);
    for (const url of OPTIONAL_URLS) {
      try {
        await cache.add(url);
      } catch (e) {
        // Ignore failures due to, e.g., a content blocker.
      }
    }
  })());
});

// Precache everything in the manifest, which you need to
// configure to exclude your "optional" URLs.
precacheAndRoute(self.__WB_MANIFEST);

// Use a cache-first runtime strategy.
registerRoute(
  // Check url.pathname, or url.href, if OPTIONAL_URLS
  // contains full URLs.
  ({url}) => OPTIONAL_URLS.includes(url.pathname),
  new CacheFirst({cacheName: OPTIONAL_CACHE_NAME}),
);
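One way to keep those optional URLs out of the generated precache manifest, if you happen to build the service worker with workbox-build's injectManifest (a sketch under that assumption; the glob patterns below are placeholders, and other build setups such as the webpack plugin have equivalent exclude options):
// build-sw.js
const { injectManifest } = require('workbox-build');

injectManifest({
  swSrc: 'src/sw.js',
  swDest: 'dist/sw.js',
  globDirectory: 'dist',
  globPatterns: ['**/*.{js,css,html}'],
  // Keep the "optional" URLs out of the precache manifest so that a failure
  // to cache them can never block service worker installation.
  globIgnores: ['**/analytics*.js'],
}).then(({ count, size }) => {
  console.log(`Precache manifest contains ${count} files, ${size} bytes.`);
});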

Related

How to batch requests to the same URL without causing memory leaks

I have a system that processes images. Essentially, I provide an ID to it, and it fetches a source image, and then it begins performing transformations on it to resize and reformat it.
This system gets quite a bit of usage, and one of the things that I've noticed is that I tend to get many requests for the same ID simultaneously, but in different requests to the webserver.
What I'd like to do is "batch" these requests. For example, if there's 5 simultaneous requests for the image "user-upload.png", I'd like there to be only one HTTP request to fetch the source image.
I'm using NestJS with default scopes for my service, so the service is shared between requests. Requests to fetch the image are done with the HttpModule, which is using axios internally.
I only care about simultaneous requests. Once the request finishes, it will be cached, and that prevents new requests from hitting the HTTP URL.
I've thought about doing something like this (Pseudocode):
@Injectable()
class ImageFetcher {
  // Store in-flight requests as a map between id:promise
  inFlightRequests = {}

  fetchImage(id: string) {
    if (this.inFlightRequests[id]) {
      return this.inFlightRequests[id]
    }
    this.inFlightRequests[id] = new Promise(async (resolve, reject) => {
      const { data } = await this.httpService.get('/images/' + id)
      // error handling omitted here
      resolve(data)
      delete this.inFlightRequests[id]
    })
    return this.inFlightRequests[id]
  }
}
The most obvious issue I see is the potential for a memory leak. This is solvable with more custom code, but I thought I'd see if anyone has any suggestions for doing this without writing more code.
In particular, I've also thought about using an axios interceptor, but I'm not entirely sure how to handle that properly. Any pointers here would be really appreciated.
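One way to avoid the leak with very little extra code is to keep the in-flight promise in a Map and remove it in a finally() handler, so the entry is cleaned up whether the request succeeds or fails. A minimal sketch (HttpService from @nestjs/axios, firstValueFrom, and the Buffer return type are assumptions about the setup):
import { Injectable } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { firstValueFrom } from 'rxjs';

@Injectable()
export class ImageFetcher {
  // One entry per image ID that currently has a request in flight.
  private readonly inFlightRequests = new Map<string, Promise<Buffer>>();

  constructor(private readonly httpService: HttpService) {}

  fetchImage(id: string): Promise<Buffer> {
    const existing = this.inFlightRequests.get(id);
    if (existing) {
      // A request for this ID is already in flight; share its promise.
      return existing;
    }

    const request = firstValueFrom(
      this.httpService.get(`/images/${id}`, { responseType: 'arraybuffer' }),
    )
      .then((response) => Buffer.from(response.data))
      // finally() runs on success and on failure, so the map entry is
      // always removed and cannot leak.
      .finally(() => {
        this.inFlightRequests.delete(id);
      });

    this.inFlightRequests.set(id, request);
    return request;
  }
}
Because the map only ever holds promises for requests that are still running, its size is bounded by the number of distinct IDs being fetched at any one moment.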

Workbox Precaching / RuntimeCaching hybrid

I have converted my app into a PWA with Workbox, and am using the precaching strategy.
Right now I reload the page when the Workbox worker has finished refetching the cache.
// Register service worker extract
import { register } from 'register-service-worker';

if (process.env.NODE_ENV === 'production') {
  register(`${process.env.BASE_URL}service-worker.js`, {
    updatefound() {
      // New content is downloading.
    },
    updated() {
      // New content is available; refresh.
      setTimeout(() => {
        window.location.reload(true);
      }, 500);
    },
  });
}

// Service worker extract
import { precacheAndRoute } from 'workbox-precaching/precacheAndRoute';

precacheAndRoute(self.__WB_MANIFEST);
self.skipWaiting();
But I find it really bothersome to have a stale version for 2-5 seconds and then have the page reloaded with the new version.
What I would like to achieve is runtime caching: when an update is found, the new files are used directly instead of refetching the cache in the background.
Is there a way to configure Workbox for that, so that I can reload the page straight away?
// Register service worker extract
updatefound() {
  window.location.reload(true);
},
updated() {
},
And the Workbox worker would not serve the cache on the reloaded page, and would instead make the network requests on the fly, basically a precaching and runtime-caching hybrid to get the best of both worlds?
I couldn't find anything that achieves this anywhere.
What you're describing is using a NetworkFirst strategy, along with optionally "warming" the runtime cache with the content that you want to make sure is available offline.
Precaching, with its cache-first approach to serving content, doesn't sound like an appropriate solution for your use case.
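As a rough sketch of what that could look like in the service worker (the cache name and URL list below are placeholders, not something from the question):
import { NetworkFirst } from 'workbox-strategies';
import { registerRoute } from 'workbox-routing';

const PAGE_CACHE_NAME = 'pages';
// Placeholder list of URLs that should be available offline.
const URLS_TO_WARM = ['/', '/offline.html'];

// Warm the runtime cache during install with the content you want offline.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(PAGE_CACHE_NAME).then((cache) => cache.addAll(URLS_TO_WARM))
  );
});

// Serve navigations network-first, so a reload after an update picks up the
// fresh files immediately, while the cached copy is used when offline.
registerRoute(
  ({ request }) => request.mode === 'navigate',
  new NetworkFirst({ cacheName: PAGE_CACHE_NAME })
);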

For PWA, what is the easiest way to get per-device settings (as in reading a .ini file or environment variables)?

I'm making a very simple in-company React PWA for Android-based tablets (only). I just want to store a couple of settings (the room number where the device is being used, and a device ID) and read those in on startup.
My recent experience is in Windows, so I'm imagining a text file that I could place on each tablet with the settings. Does that make sense for our PWA?
Or is there a better/easier way to do app settings?
Thank you.
The answer depends on how that data is initially provisioned and what kind of guarantees you need about it being "tamper-proof."
Assuming you can provision the information during the web app's initial launch, and you're fine using storage that's exposed via a browser's developer tools (i.e. your threat model doesn't include a motivated user using DevTools to erase or modify the data), a simple approach would be to a) use the Cache Storage API to read/write that data as JSON, using a synthetic URL as the key, and b) request persistent storage as an added guarantee that the data won't be purged if the device ends up running low on storage.
This could look like:
// Just use any URL that doesn't exist on your server.
const SETTINGS_KEY = '/_settings';
const SETTINGS_CACHE_NAME = 'settings';

async function getSettings() {
  const cache = await caches.open(SETTINGS_CACHE_NAME);
  const cachedSettingsResponse = await cache.match(SETTINGS_KEY);
  if (cachedSettingsResponse) {
    return await cachedSettingsResponse.json();
  }

  // This assumes a generateInitialSettings function that does provisioning
  // and returns an object that can be stringified.
  const newSettings = await generateInitialSettings();
  // cache.put() expects a Response, so wrap the stringified settings in one.
  await cache.put(SETTINGS_KEY, new Response(JSON.stringify(newSettings), {
    headers: {
      'content-type': 'application/json',
    },
  }));

  // Optional: request persistent storage.
  // This call may trigger a permissions dialog in the local browser, so it is
  // a good idea to explain to the user what's being stored first.
  // See https://web.dev/persistent-storage/#when-should-i-ask-for-persistent-storage
  if (navigator.storage && navigator.storage.persist) {
    // This returns a promise, but we don't have to delay the
    // rest of the program execution by await-ing it.
    navigator.storage.persist();
  }

  return newSettings;
}
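Calling this from the app's startup code could then look like the following sketch (the roomNumber and deviceId field names are just placeholders taken from the question):
async function init() {
  // Reads the cached settings, or provisions them on first launch.
  const settings = await getSettings();
  console.log(`Room ${settings.roomNumber}, device ${settings.deviceId}`);
}

init();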

PWA multiple virtual paths with same backend code does not create separate installs

I have a generic common NodeJS app that multiple users access. The users are identified via the path. For example: https://someapp.web.app/abc can be one path while https://someapp.web.app/def can be another path.
On the NodeJS server, I serve the same server code for every path by passing the path parameters to the program. The route looks something like this:
app.get('/*', async (req, res) => {
  ...
  locals.path = req.path;
  ...
  res.render('index', locals);
});
In the above, index is a template that uses the locals data for customisation.
What I would like is for each path to have a separate manifest and its associated icons, and for multiple installations to be possible on a single device (phone or desktop). Thus, https://someapp.web.app/abc would be one icon and https://someapp.web.app/def another.
I am having difficulty with the placement and the scoping of the manifest and service worker. It always adds only one icon (the first path installed) to the home screen or desktop. My settings are:
In the public (root) folder I have each manifest, viz. abc-manifest.json and def-manifest.json, and a common sw.js.
The abc-manifest.json is:
"scope": "/abc",
"start_url": "/abc",
...
The access to the service-worker from the index.js is:
if (navigator.serviceWorker) {
  navigator.serviceWorker.register('sw.js')
    .then(function (registration) {
      console.log('ServiceWorker registration succeeded');
    }).catch(function (error) {
      console.log('ServiceWorker registration failed:', error);
    });
}
I have tried changing the paths of scope and start_url to / but it did not work. Since all requests to the public path are common and not within the virtual /abc path, I am unable to figure out how to get this working.
Thanks
Could it be an option to have a dedicated route that redirects the user to /abc or /def?
In the manifest:
{
  "start_url": "https://example.com/login",
  "scope": "https://example.com/"
}
/login would make sure to redirect to /abc or /def.
This way you could keep one service worker and one manifest.
And in the service worker, you could try to return the specific icon based on the file name:
// Return a cached response for the given URL, falling back to the network.
const respondWith = (e, url) =>
  e.respondWith(
    caches.match(url).then((response) => response || fetch(e.request))
  );

self.addEventListener('fetch', (e) => {
  // Serve the correct icon.
  const url = new URL(e.request.url);
  if (url.pathname.includes('/android-icon-512.png')) {
    return respondWith(e, '/android-icon-512-abc.png');
  }
  // other ifs…

  // Return from cache or fall back to the network.
  respondWith(e, e.request);
});
Maybe you'll need a specific header to do this, or a URL parameter (icon.png?user=abc) to help query the right icon. I'm just throwing out ideas, because it probably depends a lot on your app's back-end and/or front-end architecture.
I once did this: the back-end (PHP / Laravel) handled returning the correct icon and manifest (I had one for each use case) based on other criteria.
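For illustration only, a back-end route that returns a per-path manifest from the Node/Express app in the question might look roughly like this sketch (the /:tenant parameter and the icon naming convention are assumptions):
// Sketch: serve a manifest whose scope, start_url and icon depend on the
// first path segment, e.g. /abc/manifest.json and /def/manifest.json.
app.get('/:tenant/manifest.json', (req, res) => {
  const tenant = req.params.tenant; // e.g. 'abc' or 'def'
  res.json({
    name: `Some App (${tenant})`,
    scope: `/${tenant}/`,
    start_url: `/${tenant}/`,
    display: 'standalone',
    icons: [
      {
        // Assumed per-tenant icon naming convention.
        src: `/android-icon-512-${tenant}.png`,
        sizes: '512x512',
        type: 'image/png',
      },
    ],
  });
});
Each rendered index page would then point its manifest link at the manifest for its own path, which is what lets the browser treat /abc and /def as separately installable apps.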

How RestBase wiki handles caching

Following the installation of RestBase using the standard config, I have a working version of the summary API.
The problem is that the caching mechanism seems strange to me.
The piece of code below decides whether to look at a table cache for a fast response. But I cannot make the server cache depend on some time constraint (for example, the max-age set when the cache is written). It means that the decision to use the cache or not depends entirely on clients.
Can someone explain the workflow of RestBase's caching mechanism?
// Inside key.value.js
getRevision(hyper, req) {
  // This gets the header from the client request and decides whether to use
  // the cache based on its value. Does it mean server caching is non-existent?
  if (mwUtil.isNoCacheRequest(req)) {
    throw new HTTPError({ status: 404 });
  }

  // If the cache should be used, the code below runs.
  const rp = req.params;
  const storeReq = {
    uri: new URI([rp.domain, 'sys', 'table', rp.bucket, '']),
    body: {
      table: rp.bucket,
      attributes: {
        key: rp.key
      },
      limit: 1
    }
  };
  return hyper.get(storeReq).then(returnRevision(req));
}
Cache invalidation is done by the change propagation service, which is triggered on page edits and similar events. Cache-control headers are probably set in the Varnish VCL logic. See here for a full Wikimedia infrastructure diagram - it is outdated but gives you a general idea of how things are wired together.