For PWA, what is the easiest way to get per-device settings (as in reading a .ini file or environment variables)? - progressive-web-apps

I'm making a very simple in-company React PWA for Android-based tablets (only). I just want to store a couple of settings (the room number where the device is being used, and a device ID) and read those in upon startup.
My recent experience is in Windows, so I'm imagining a text file that I could place on each tablet with the settings. Does that make sense for our PWA?
Or is there a better/easier way to do app settings?
Thank you.

The answer depends on how that data is initially provisioned and what kind of guarantees you need about it being "tamper-proof."
Assuming you can provision the information during the web app's initial launch, and you're fine using storage that's exposed via a browser's Developer Tools (i.e. your threat model doesn't include a motivated user using DevTools to erase/modify the data), a simple approach would be to a) use the Cache Storage API to read/write that data as JSON, using a synthetic URL as the key, and b) request persistent storage for an added guarantee that it won't be purged if the device ends up running low on storage.
This could look like:
// Just use any URL that doesn't exist on your server.
const SETTINGS_KEY = '/_settings';
const SETTINGS_CACHE_NAME = 'settings';

async function getSettings() {
  const cache = await caches.open(SETTINGS_CACHE_NAME);
  const cachedSettingsResponse = await cache.match(SETTINGS_KEY);
  if (cachedSettingsResponse) {
    return await cachedSettingsResponse.json();
  }

  // This assumes a generateInitialSettings function that does provisioning
  // and returns an object that can be stringified.
  const newSettings = await generateInitialSettings();
  // cache.put() expects a Response, so wrap the JSON string in one.
  await cache.put(SETTINGS_KEY, new Response(JSON.stringify(newSettings), {
    headers: {
      'content-type': 'application/json',
    },
  }));

  // Optional: request persistent storage.
  // This call may trigger a permissions dialog in the local browser, so it is
  // a good idea to explain to the user what's being stored first.
  // See https://web.dev/persistent-storage/#when-should-i-ask-for-persistent-storage
  if (navigator.storage && navigator.storage.persist) {
    // This returns a promise, but we don't have to delay the
    // rest of the program execution by await-ing it.
    navigator.storage.persist();
  }

  return newSettings;
}
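For completeness, here's a rough sketch of how getSettings() might be wired into a React component on startup; the settings.roomNumber and settings.deviceId field names are just placeholders for whatever your generateInitialSettings() returns:
import {useEffect, useState} from 'react';

function App() {
  const [settings, setSettings] = useState(null);

  useEffect(() => {
    // Read (or provision) the per-device settings once on startup.
    getSettings().then(setSettings);
  }, []);

  if (!settings) {
    return <div>Loading settings…</div>;
  }

  // Placeholder markup; roomNumber/deviceId are assumed field names.
  return <div>Room {settings.roomNumber}, device {settings.deviceId}</div>;
}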

Related

Service worker goes to redundant phase because pre-cached files are blocked by an ad blocker (a Chrome extension)

All the logic fails because a Chrome extension blocks one of my JS files.
Is there a way to make pre-caching more robust? Even if I get an error when caching some files, I can still cache most of them correctly, and I have runtime caching.
If you're using workbox-precaching, then the answer is no—it's designed so that service worker installation will only proceed if all of the items in the precache manifest were successfully added to the cache. That way, you're guaranteed to have a functional set of resources available offline. (There's a longstanding feature request for adding support for "optional" precaching, but it's not clear how that would work in practice.)
I'd recommend using runtime caching for URLs that are optional, and might be blocked by browser extensions. If you want to "warm" the cache in this scenario, and you don't care if the cache population fails, you can add your own logic along the lines of:
import {CacheFirst} from 'workbox-strategies';
import {registerRoute} from 'workbox-routing';
import {precacheAndRoute} from 'workbox-precaching';

const OPTIONAL_CACHE_NAME = 'optional-resources';
const OPTIONAL_URLS = [
  // Add URLs here that might be blocked.
];

self.addEventListener('install', (event) => {
  event.waitUntil((async () => {
    const cache = await caches.open(OPTIONAL_CACHE_NAME);
    for (const url of OPTIONAL_URLS) {
      try {
        await cache.add(url);
      } catch (e) {
        // Ignore failures due to, e.g., a content blocker.
      }
    }
  })());
});

// Precache everything in the manifest, which you need to
// configure to exclude your "optional" URLs.
precacheAndRoute(self.__WB_MANIFEST);

// Use a cache-first runtime strategy for the optional URLs.
registerRoute(
  // Check url.pathname, or url.href, if OPTIONAL_URLS
  // contains full URLs.
  ({url}) => OPTIONAL_URLS.includes(url.pathname),
  new CacheFirst({cacheName: OPTIONAL_CACHE_NAME}),
);
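As for keeping those optional URLs out of the precache manifest, the details depend on how the manifest is generated. As one example (assuming a webpack build, which the question doesn't specify), workbox-webpack-plugin's exclude option can filter matching assets out of the generated manifest:
// webpack.config.js (sketch; the regex is a placeholder for your optional assets)
const {InjectManifest} = require('workbox-webpack-plugin');

module.exports = {
  // ...the rest of your webpack config...
  plugins: [
    new InjectManifest({
      swSrc: './src/sw.js',
      // Keep assets that extensions tend to block out of the precache
      // manifest, so a failed cache.add() can't break installation.
      exclude: [/analytics.*\.js$/],
    }),
  ],
};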

How to do actions when MongoDB Realm Web SDK change stream closes or times out?

I want to delete all of a user's inserts in a collection when they stop watching a change stream from a React client. I'm using the Realm Web SDK for this.
Here's a summary of my code with what I want to do at the end of it:
import * as Realm from "realm-web";

const realmApp: Realm.App = new Realm.App({ id: realmAppId });
const credentials = Realm.Credentials.anonymous();
const user: Realm.User = await realmApp.logIn(credentials);
const mongodb = realmApp?.currentUser?.mongoClient("mongodb-atlas");
const users = mongodb?.db("users").collection("users");
const changeStream = users.watch();

for await (const change of changeStream) {
  switch (change.operationType) {
    case "insert": {
      ...
      break;
    }
    case ...
  }
}

// This pseudo-code shows what I want to do
changeStream.on("close", () => /* delete all user's inserts */)
changeStream.on("timeout", () => /* delete all user's inserts */)
changeStream.on("user closes app thus also closing stream", () => ... )
Realm Web SDK patterns seem rather different from the NodeJS ones and do not seem to include a method for closing a stream or for running a callback when it closes. In any case, they don't fit my use case.
These MongoDB Realm Web docs lead to more docs about Realm. Unless I'm missing it, neither set talks about how to monitor for the closing or timing out of a change stream watcher instantiated from the Realm Web SDK, or how to do something when that happens.
I thought another way to do this would be in Realm's Triggers. But it doesn't seem likely from their docs.
Can this even be done from a front end client? Is there a way to do this on MongoDB itself in a "serverless" way?
If you want to delete the inserts specifically when a (client-side) listener of a change stream stops listening, you have to implement some logic on the client side. There is currently no way to get notified of such an event within MongoDB Realm.
Since a watcher could be closed because the app / browser is closed, I would recommend against running the deletion logic on your client. Instead, notify a server (or call a MongoDB Realm function / HTTP endpoint) to make the deletions.
You can use the Beacon API to reliably send a request to trigger the delete, even when the window unloads.
Client side
const inserts = [];

for await (const change of changeStream) {
  switch (change.operationType) {
    case 'insert':
      inserts.push(change);
      break;
  }
}

// This point is only reached if the generator returns / the stream closes.
function sendInserts() {
  navigator.sendBeacon('url/to/endpoint', JSON.stringify(inserts));
}
sendInserts();

// Might also add a handler to catch users closing the app.
window.addEventListener('unload', sendInserts);
Note that the unload event is not reliable (see MDN), but there are some alternatives which may be good enough for your use case.
Inside a realm function you could delete the documents.
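For example, a minimal sketch of such a function; the ownerId field and the users/users namespace are assumptions about your schema:
// Atlas App Services (Realm) function, e.g. called from an HTTPS endpoint.
// Assumes each inserted document carries an ownerId identifying the user.
exports = async function deleteUserInserts(ownerId) {
  const collection = context.services
    .get("mongodb-atlas")
    .db("users")
    .collection("users");

  // Remove every document this user inserted.
  return collection.deleteMany({ ownerId: ownerId });
};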
That being said, maybe there is a better way to do what you want to achieve. Is it really the timeout of the change stream listener that has to trigger the delete, or some other user event?

Store the path to uploaded file on client-side or the file outside the browser for offline

Is there a way to store the path to a file which the user wants to upload but doesn't have an internet connection for (it's a PWA), and re-upload it when the connection is back? Or maybe not store the path, but save the file outside browser storage, somewhere on the user's machine (even if that requires some acceptance from the user to allow the browser to read/write files), though I'm not sure if that's even allowed.
Currently, I'm storing the whole file as base64 in IndexedDB, but it's crashing/slowing down the browser when it comes to reading big files (around 100 MB). Also, I don't want to overload browser storage.
There's a couple of things to consider.
Storing the data you need to upload in IndexedDB and then reading that in later will be the most widely supported approach. As you say, though, it means taking up extra browser storage. One thing that might help is to skip the step of encoding the file in Base64 first, as in all modern browsers, IndexedDB will gladly store bytes directly for you as a Blob.
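As a rough sketch of that, using the same idb-keyval helper as the snippet further down; the 'pending-upload' key and the /upload endpoint are placeholders:
import { get, set, del } from 'https://unpkg.com/idb-keyval@5.0.2/dist/esm/index.js';

// A File is a Blob, so it can go into IndexedDB as-is; no Base64 step needed.
async function queueForUpload(file) {
  await set('pending-upload', file);
}

// Later, once the network is back:
async function uploadPending() {
  const file = await get('pending-upload');
  if (file) {
    // fetch() accepts a Blob directly as the request body.
    await fetch('/upload', { method: 'POST', body: file });
    await del('pending-upload');
  }
}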
A more modern approach, but one that's not currently supported by non-Chromium browsers, would be to use the File System Access API. As described in this article, once you get the user's permission, you can save a handle to a file in IndexedDB, and then read the file later on (assuming the underlying file hasn't changed in the interim). This has the advantage of not duplicating the file's contents in IndexedDB, saving on storage space. Here's a code snippet, borrowed from the article:
import { get, set } from 'https://unpkg.com/idb-keyval@5.0.2/dist/esm/index.js';

const pre = document.querySelector('pre');
const button = document.querySelector('button');

button.addEventListener('click', async () => {
  try {
    // Try retrieving the file handle.
    const fileHandleOrUndefined = await get('file');
    if (fileHandleOrUndefined) {
      pre.textContent =
        `Retrieved file handle "${fileHandleOrUndefined.name}" from IndexedDB.`;
      return;
    }
    // This always returns an array, but we just need the first entry.
    const [fileHandle] = await window.showOpenFilePicker();
    // Store the file handle.
    await set('file', fileHandle);
    pre.textContent =
      `Stored file handle for "${fileHandle.name}" in IndexedDB.`;
  } catch (error) {
    alert(`${error.name}: ${error.message}`);
  }
});
Regardless of how you store the file, it would be helpful to use the Background Sync API when available (again, currently limited to Chromium browsers) to handle automating the upload once the network is available again.
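Here's a rough sketch of that last piece; the 'upload-pending' tag and the uploadPending() helper are placeholders, not part of the Background Sync API itself:
// In the page: ask the browser to retry once connectivity returns.
async function requestUploadSync() {
  const registration = await navigator.serviceWorker.ready;
  if ('sync' in registration) {
    await registration.sync.register('upload-pending');
  } else {
    // Fallback for browsers without Background Sync: just try now.
    await uploadPending();
  }
}

// In the service worker: the sync event fires when the browser
// decides the network is available again.
self.addEventListener('sync', (event) => {
  if (event.tag === 'upload-pending') {
    event.waitUntil(uploadPending());
  }
});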

What are the options for offline registration and forms?

I have a project that caters for individuals with poor internet connections in predominantly rural areas. I need to allow users to fill out details offline (via a download or any other applicable means) and then, when they are ready and an internet connection is available, have the data filled out offline sync with the online database and produce a report.
The offline form also needs the same validation as online, to ensure no time wastage.
What are the options? I know that HTML5 has an offline application ability. I would prefer an open source option which will allow people with intermittent internet issues to continue filling out a form or series of forms even though the internet has dropped, with the data syncing when the internet reconnects.
So what are the best options? Requiring the user to download a large application is not ideal; I would prefer a browser-based or small-download solution. Maybe even a way of downloading a validatable form in some format for re-upload.
This is something I've been muddling through myself as some of the users of the site I am currently tasked with building have poor connections or would like to fill in forms away from a network for various reasons. Depending on your precise needs and your customer's browser compatibility, the solution I've decided to go with is to use the HTML5 cache capability you mention in your post.
The amount of data stored is not that great, and it will mean that the webpage you want them to fill in is available offline.
If you couple this with the localStorage interface you can keep all form submissions until they regain connection.
As an example of my current solution:
The cache.php file, to write the manifest
<?php
header("Content-Type: text/cache-manifest");
echo "CACHE MANIFEST\n";

$pages = array(
    //an array of the pages you want cached for later
);
foreach ($pages as $page) {
    echo $page."\n";
}

$time = new DateTime("now");
//this makes sure that the cache is different when the browser checks it
//otherwise the cache will not be rebuilt even if you change a cached page
echo "#Last Build Time: ".$time->format("d m Y H:i:s T");
You can then have a simple ajax script checking for connection
setInterval( function() {
    $.ajax({
        url: 'testconnection.php',
        type: 'post',
        data: { 'test' : 'true' },
        //jQuery only reports textStatus === 'timeout' if a timeout is set
        timeout: 5000,
        error: function(XHR, textStatus, errorThrown) {
            if(textStatus === 'timeout') {
                //update a global var saying connection is down
                noCon = true;
            }
        },
        success: function() {
            //connection is back (or still there)
            noCon = false;
        }
    });
    if(hasUnsavedData) {
        //using the key/value pairs in localstorage, put together a data object and ajax it into the database
        //once complete, return hasUnsavedData to false to prevent refiring this until we have new data
        //also using localStorage.removeItem(key) to clear out all localstorage info
        //(see the flushSavedData() sketch below for one way to do this)
    }
}, 20000 /*medium gap between calls, do whatever works best for you here*/);
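As one way to fill in that commented-out section, a rough sketch (the save.php endpoint is a placeholder):
function flushSavedData() {
    var data = {};
    //collect everything that was stashed in localStorage
    for (var i = 0; i < localStorage.length; i++) {
        var key = localStorage.key(i);
        data[key] = localStorage.getItem(key);
    }
    $.ajax({
        url: 'save.php',
        type: 'post',
        data: data,
        success: function() {
            //clear out the stored values and stop refiring
            for (var key in data) {
                localStorage.removeItem(key);
            }
            hasUnsavedData = false;
        }
    });
}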
Then, for your form submission script, use localStorage if that noCon variable is set to true:
$(/*submit button*/).on("click", function(event) {
    event.preventDefault();
    if(noCon) {
        //go through all inputs in some way and put to localstorage, your method is up to you
        $("input").each( function() {
            var key = $(this).attr("name"), val = $(this).val();
            localStorage[key] = val;
        });
        hasUnsavedData = true;
        //update a global variable to let the script above know to save information
    } else {
        //or if there's connection
        $("form").submit();
        //submit the form in some manner
    }
});
I've not tested every script on this page, but they're written based on the skeleton of what my current solution is doing, minus a lot of error checking etc., so hopefully they will give you some ideas on how to approach this.
Suggestions for improvements are welcome.

Multiple s3 buckets in Filepicker.io

I need to upload to multiple s3 buckets with filepicker.io. I found a tweet that indicated that there was a hacky, but possible, way to do this. Support hasn't gotten back to me yet, so I'm hoping that someone here already knows the answer!
Have you tried generating a second application/API key? It looks like they lock your S3/AWS credentials to an application/API key rather than directly to the account.
Support just got back to me. There's no way to do this besides creating multiple applications, which is okay if you are just switching between prod/staging/dev, but not a good solution if you have to upload to arbitrary buckets.
My solution is to execute a PUT request with the x-amz-copy-source header after the file has been uploaded, which copies it to the correct bucket.
This is pretty hacky as it requires two extra requests per file: one filepicker.stat call and one more call to S3 (or your server).
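For reference, here's a rough sketch of that copy step done server-side with the AWS SDK for JavaScript (v3) instead of a hand-signed PUT; the region, bucket, and key names are placeholders:
const { S3Client, CopyObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

// Copy the object Filepicker wrote into the bucket you actually want.
// Under the hood this is the same PUT with an x-amz-copy-source header.
async function copyToFinalBucket(sourceBucket, key, destinationBucket) {
  await s3.send(new CopyObjectCommand({
    CopySource: `${sourceBucket}/${key}`,
    Bucket: destinationBucket,
    Key: key,
  }));
}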
@Ben
I am developing code with the same issue of files needing to go into many buckets. I'm working in ASP.NET.
What I have done is have one Filepicker 'application' with its own S3 bucket.
I already had a callback to the server in the JavaScript onSuccess() function (which is passed as a parameter to filepicker.store()). This callback needed to be there to do some book-keeping anyway.
So I have just added an extra bit to the server-side callback code which uses the AWS SDK to copy the object from the bucket Filepicker uploads it to, to its final destination bucket.
This is my C# code for moving, or rather copying, an object between buckets:
public bool MoveObject(string bucket1, string key1, string bucket2, string key2 = null)
{
    bool success = false;
    if (key2 == null) key2 = key1;
    Logger logger = new Logger(); // my logging system
    try
    {
        RegionEndpoint region = RegionEndpoint.EUWest1; // use your region here
        using (AmazonS3Client s3Client = new AmazonS3Client(region))
        {
            // TODO: CheckForBucketFunction
            CopyObjectRequest request = new CopyObjectRequest();
            request.SourceBucket = bucket1;
            request.SourceKey = key1;
            request.DestinationBucket = bucket2;
            request.DestinationKey = key2;
            S3Response response = s3Client.CopyObject(request);
            logger.Info2Log("response xml = \n{0}\n", response.ResponseXml);
            response.Dispose();
            success = true;
        }
    }
    catch (AmazonS3Exception ex)
    {
        logger.Info2Log("Error copying file between buckets: {0} - {1}",
            ex.ErrorCode, ex.Message);
        success = false;
    }
    return success;
}
There are AWS SDKs for other server languages and the good news is Amazon doesn't charge for copying objects between buckets in the same region.
Now I just have to decide how to delete the object from the filepicker application bucket. I could do it on the server using more AWS SDK code but that will be messy as it leaves links to the object in the filepicker console. Or I could do it from the browser using filepicker code.
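If the server-side route wins out, the delete itself is small; a sketch with the AWS SDK for JavaScript (v3), reusing the placeholder names from the copy sketch above:
const { S3Client, DeleteObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });

// Remove the original from the Filepicker application bucket once the
// copy to the final destination bucket has succeeded.
async function deleteFromFilepickerBucket(sourceBucket, key) {
  await s3.send(new DeleteObjectCommand({ Bucket: sourceBucket, Key: key }));
}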