I have a project that caters for individuals with poor internet connections, predominantly in rural areas. I need users to be able to download a form (or use any other applicable means) and fill out details offline; when they are ready and an internet connection is available, the data filled out offline should sync with the online database and produce a report.
The offline form also needs the same validation as the online one, so no time is wasted.
What are the options? I know that HTML5 has an offline application capability. I would prefer an open-source option that allows people with intermittent internet issues to continue filling out a form (or series of forms) even when the connection has dropped, with the data syncing once the internet reconnects.
So what are the best options? Requiring the user to download a large application is not ideal either; I would prefer a browser-based or small-download solution. Maybe even a way of downloading a validatable form in some format for re-upload.
This is something I've been muddling through myself, as some of the users of the site I am currently tasked with building have poor connections or would like to fill in forms away from a network for various reasons. Depending on your precise needs and your customers' browser compatibility, the solution I've decided to go with is the HTML5 cache capability you mention in your post.
The amount of data stored is not that great, and it means that the webpage you want them to fill in is available offline.
If you couple this with the localStorage interface, you can keep all form submissions until they regain a connection.
As an example of my current solution:
The cache.php file, which writes the manifest:
<?php
header("Content-Type: text/cache-manifest");
echo "CACHE MANIFEST\n";
$pages = array(
    //an array of the pages you want cached for later
);
foreach($pages as $page) {
    echo $page."\n";
}
$time = new DateTime("now");
//this makes sure that the manifest is different when the browser checks it,
//otherwise the cache will not be rebuilt even if you change a cached page
echo "#Last Build Time: ".$time->format("d m Y H:i:s T");
You can then have a simple Ajax script checking for a connection:
setInterval( function() {
    $.ajax({
        url: 'testconnection.php',
        type: 'post',
        data: { 'test' : 'true' },
        timeout: 5000, //without an explicit timeout, jQuery never reports the 'timeout' textStatus
        success: function() {
            //we got a response, so the connection is back up
            noCon = false;
        },
        error: function(XHR, textStatus, errorThrown) {
            if(textStatus === 'timeout') {
                //update a global var saying connection is down
                noCon = true;
            }
        }
    });
    if(!noCon && hasUnsavedData) {
        //using the key/value pairs in localStorage, put together a data object and ajax it into the database
        //once complete, set hasUnsavedData back to false to prevent refiring this until we have new data
        //also use localStorage.removeItem(key) to clear out the saved values
    }
}, 20000 /*medium gap between calls, do whatever works best for you here*/);
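To make that sync step concrete, here is a rough sketch of what the commented-out block above could look like; the endpoint name (saveform.php) is an assumption, and it naively assumes localStorage only holds your form values:
function syncSavedData() {
    var data = {};
    //rebuild the form data object from everything stashed in localStorage
    for (var i = 0; i < localStorage.length; i++) {
        var key = localStorage.key(i);
        data[key] = localStorage.getItem(key);
    }
    $.post('saveform.php', data, function() {
        //only clear the flag and the stored values once the server has accepted them
        hasUnsavedData = false;
        localStorage.clear();
    });
}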
Then, for your form submission script, use localStorage if that noCon variable is set to true:
$(/*submit button*/).on("click", function(event) {
    event.preventDefault();
    if(noCon) {
        //no connection: go through all inputs in some way and put them to localStorage, your method is up to you
        $("input").each( function() {
            var key = $(this).attr("name"), val = $(this).val();
            localStorage[key] = val;
        });
        //update a global variable to let the script above know to save the information
        hasUnsavedData = true;
    } else {
        //there is a connection, so submit the form in some manner
        $("form").submit();
    }
});
I've not tested every script on this page, but they're written based on the skeleton of what my current solution is doing, minus a lot of error checking etc., so hopefully they will give you some ideas on how to approach this.
Suggestions for improvements are welcome.
I prevent a page reload in my web application with the following function:
window.onbeforeunload = (event) => {
    const e = event || window.event;
    // Cancel the event
    e.preventDefault();
    save_user_data_to_indexed_db();
    if (e) {
        e.returnValue = ''; // Legacy method for cross browser support
    }
    return ''; // Legacy method for cross browser support
};
However, the save_user_data_to_indexed_db() function is not executed while the "Reload site?" message is displayed. I thought that if I could execute my function during that message, I could maybe answer the dialog programmatically and let the browser continue reloading the page.
Is there a way to make the browser wait for this kind of operation?
Generally, there is no way to make the browser wait. What I often do in this case is write the data synchronously to an intermediate place, such as localStorage, and then asynchronously copy that data over to IndexedDB later, when there is time, such as when the page is next loaded or from within a service worker.
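A minimal sketch of that idea, assuming your save_user_data_to_indexed_db() can accept the parsed data and return a promise; the storage key and the collect_user_data() helper are placeholders, not part of the original code:
window.onbeforeunload = (event) => {
    // synchronous, so it finishes before the page is torn down
    localStorage.setItem('pending_user_data', JSON.stringify(collect_user_data()));
    event.preventDefault();
    event.returnValue = '';
    return '';
};

// on the next page load, move the stashed data into IndexedDB asynchronously
window.addEventListener('load', () => {
    const pending = localStorage.getItem('pending_user_data');
    if (pending) {
        save_user_data_to_indexed_db(JSON.parse(pending))
            .then(() => localStorage.removeItem('pending_user_data'));
    }
});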
Following the installation of RestBase using the standard config, I have a working version of the summary API.
The problem is that the caching mechanism seems strange to me.
The piece of code below decides whether to look at a table cache for a fast response, but I cannot make the server-side cache depend on some time constraint (for example a max-age relative to when the cache entry was written). It means that the decision to use the cache or not depends entirely on clients.
Can someone explain the workflow of the RestBase caching mechanism?
// Inside key.value.js
getRevision(hyper, req) {
    //this reads the header from the client request and decides whether to use the cache
    //depending on its value. Does that mean server-side caching is non-existent?
    if (mwUtil.isNoCacheRequest(req)) {
        throw new HTTPError({ status: 404 });
    }
    //if the cache should be used, the code below runs
    const rp = req.params;
    const storeReq = {
        uri: new URI([rp.domain, 'sys', 'table', rp.bucket, '']),
        body: {
            table: rp.bucket,
            attributes: {
                key: rp.key
            },
            limit: 1
        }
    };
    return hyper.get(storeReq).then(returnRevision(req));
}
Cache invalidation is done by the change propagation service, which is triggered on page edits and similar events. Cache-control headers are probably set in the Varnish VCL logic. See here for a full Wikimedia infrastructure diagram; it is outdated, but it gives you a general idea of how things are wired together.
Basically, I'm using the accounts-base package on Meteor, and on Meteor startup I set up which templates the server should use for the password recovery mail, the email confirmation mail, etc.
For example, in my server/startup.js on Meteor startup I do many things like:
Accounts.urls.verifyEmail = function (token) {
    return Meteor.absoluteUrl(`verify-email/${token}`);
};
Accounts.emailTemplates.verifyEmail.html = function (user, url) {
    return EmailService.render.email_verification(user, url);
};
The problem is that my app is hosted on multiple host names like company1.domain.com, company2.domain.com, company3.domain.com, and if a client wants to reset his password from company1.domain.com, the recovery URL provided should be company1.domain.com/recovery.
If another client tries to connect on company2.domain.com, then the recovery URL should be company2.domain.com.
From my understanding, this is not really achievable because the method used by the Accounts package is Meteor.absoluteUrl(), which returns the server's ROOT_URL variable (a single one for the server).
On the client side, I do many things based on window.location.href, but I cannot seem to send this URL to the server when trying to reset a password or confirm an email address.
I'm trying to find a way to dynamically generate the URL depending on the host the client is making the request from, but since the URL is generated server-side, I cannot find an elegant way to do so. I'm thinking I could probably call a Meteor server method right before trying to reset a password or create an account and dynamically set the ROOT_URL variable there, but that seems unsafe and risky, because two people could easily try to reset in the same timeframe and potentially screw things up, or people could abuse it.
Isn't there any way to tell the server, from the client side, that the URL generated for the current email has to be the client's current location? I would love to be able to override some functions from the accounts-base Meteor package and achieve something like:
Accounts.urls.verifyEmail = function (token, clientHost) {
    return `${clientHost}/verify-email/${token}`;
};
Accounts.emailTemplates.verifyEmail.html = function (user, url) {
    return EmailService.render.email_verification(user, url);
};
But I'm not sure if that's possible. I don't have any real experience when it comes to overriding "behind the scenes" functionality from base packages. I like everything about what is happening EXCEPT that the URL generated is always the same.
Okay, so I managed to find a way to achieve what I was looking for. It's a bit hack-ish, but hey..
Basically, useraccounts has a feature where any hidden input in the register at-form will be added to the user profile, so I add a hidden field to store the user's current location.
AccountsTemplates.addField({
    _id: 'signup_location',
    type: 'hidden',
});
When the template is rendered, I fill in this hidden input with jQuery.
Template.Register.onRendered(function () {
    // a classic function is needed here so that `this` is the template instance
    this.$('#at-field-signup_location').val(window.location.href);
});
And then, when I'm actually sending the emailVerification email, I can look up this value if it is available.
Accounts.urls.verifyEmail = function (token) {
    return Meteor.absoluteUrl(`verify-email/${token}`);
};
Accounts.emailTemplates.verifyEmail.html = function (user, url) {
    const signupLocation = user.profile.signup_location;
    if (signupLocation) {
        let newUrl = url.substring(url.indexOf('verify-email'));
        newUrl = `${signupLocation}/${newUrl}`;
        return EmailService.render.email_verification(user, newUrl);
    }
    return EmailService.render.email_verification(user, url);
};
So this fixes it for the signUp flow; I may use a similar concept for resetPassword and resendVerificationUrl, since the signupLocation is now in the user profile.
You should probably keep an array of all your subdomains in your settings and store the id of the corresponding one in the user profile, so that if a domain changes in the future the reference will still be valid and consistent.
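A rough sketch of that settings-based lookup, assuming a Meteor.settings entry and a signup_host_id field on the profile (both names are made up for illustration):
// e.g. in settings.json: { "public": { "hosts": ["company1.domain.com", "company2.domain.com"] } }
const KNOWN_HOSTS = Meteor.settings.public.hosts;

function hostForUser(user) {
    const hostId = user.profile && user.profile.signup_host_id;
    // fall back to the server's ROOT_URL if the stored id no longer matches a configured host
    return KNOWN_HOSTS[hostId] ? `https://${KNOWN_HOSTS[hostId]}/` : Meteor.absoluteUrl();
}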
I want to speak some text; I can get the audio file (mp3) from the Google Translate TTS service if I enter a properly formatted URL in the browser.
But if I try to createSound it, I only see a 404 error in Firebug.
I use this, but it fails:
soundManager.createSound({
    id: 'testsound',
    autoLoad: true,
    url: 'http://translate.google.com/translate_tts?ie=UTF-8&tl=da&q=testing'
});
I have pre-fetched the fixed voice prompts with wget, so they exist as local mp3 files on the same webserver as the page. But I would like to say a dynamic prompt.
I see this was asked a long time ago, but I have run into a similar issue, and I was able to make it work for Chrome and Firefox, but with the Audio tag.
Here is the demo page I have made:
http://jsfiddle.net/royriojas/SE6ET/
Here is the code that did the trick for me...
var sayIt;
function createSayIt() {
    // Tiny trick to make the request to Google actually work: they deny the request
    // when it comes from a page, but somehow it works when the function lives inside this iframe!
    //create an iframe without setting the src attribute
    var iframe = document.createElement('iframe');
    // not sure whether this attribute is required, but it was set on the codepen page where this code worked, so it stays here; it might not be needed
    iframe.setAttribute('sandbox', 'allow-scripts allow-same-origin allow-pointer-lock');
    // hide the iframe... cause you know, it is ugly letting iframes be visible around...
    iframe.setAttribute('class', 'hidden-iframe');
    // append it to the body
    document.body.appendChild(iframe);
    // obtain a reference to the contentWindow
    var v = iframe.contentWindow;
    // parse the sayIt function in this contentWindow's scope
    // Yeah, I know eval is evil, but its evilness fixed this issue...
    v.eval("function sayIt(query, language, cb) { var audio = new Audio(); audio.src = 'http://translate.google.com/translate_tts?ie=utf-8&tl=' + language + '&q=' + encodeURIComponent(query); cb && audio.addEventListener('ended', cb); audio.play();}");
    // export it under the sayIt variable
    sayIt = v.sayIt;
}
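For completeness, usage would look roughly like this (the phrase and language code are just placeholders):
createSayIt(); // build the hidden iframe and expose sayIt
sayIt('hello world', 'en', function () {
    // called when playback has ended
    console.log('finished speaking');
});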
I guess I was able to bypass that restriction. They could potentially close this hack in the future, I don't know; I actually hope they don't...
You can also try to use the HTML5 text-to-speech API, but it is still very young...
IE 11 does not work with this hack; some time in the future I might try to fix that.
Even though you see this as a 404 error, you're actually running into a cross-domain restriction.
Some of the response headers from that 404 also give you a clue about what's going on:
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
So you won't be able to do this client-side, as Google does not (and probably never will) allow you to do so.
In order to do this dynamic loading of audio, you need to work around the cross-domain restriction by setting up a proxy on your own server, which downloads whatever file the end-user requests from Google's servers (via wget or whatever) and spits back whatever data comes from Google.
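A minimal sketch of such a proxy, written in Node.js purely for illustration (the audioproxy.php mentioned below could do the same with cURL); the port and query handling here are assumptions:
var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
    var query = url.parse(req.url, true).query;
    var ttsUrl = 'http://translate.google.com/translate_tts?ie=UTF-8' +
                 '&tl=' + encodeURIComponent(query.tl || 'en') +
                 '&q=' + encodeURIComponent(query.q || '');
    // fetch the mp3 from Google server-side and stream it back from our own domain
    http.get(ttsUrl, function (ttsRes) {
        res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
        ttsRes.pipe(res);
    }).on('error', function () {
        res.writeHead(502);
        res.end();
    });
}).listen(8080);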
Code I used to reproduce the issue:
soundManager.setup({
    url: 'swf',
    onready: function() {
        soundManager.createSound({
            id: 'testsound',
            autoLoad: true,
            url: 'http://translate.google.com/translate_tts?ie=UTF-8&tl=da&q=testing'
        });
    }
});
Your code should look like this:
soundManager.createSound({
    id: 'testsound',
    autoLoad: true,
    url: '/audioproxy.php?ie=UTF-8&tl=da&q=testing' // Same domain!
});
Regards and good luck!
I've got a node.js application that 'streams' tweets to users. At the moment, it just searches Twitter for a hard-coded string, but I'd like to allow users to configure this in the URL (eg. by visiting /?q=stackoverflow).
At the moment, my code looks a bit like this:
app.get('/', function (req, res) {
    // page rendering skipped
    io.sockets.on('connection', function (socket) {
        twit.stream('user', {track: 'stackoverflow'}, function(stream) {
            stream.on('data', function (data) {
                socket.volatile.emit('tweet', data);
            });
        });
    });
});
The question is: how do I make it so that each user can see a different stream of tweets simultaneously? At the moment it works fine in a single browser tab, but it falls over as soon as a second one is opened, and the error comes from fairly deep inside socket.io. Am I misusing it?
I haven't fully got my head around socket.io yet, so that could be the issue.
Thanks in advance!
Every time a new request comes in, you are redefining the connection callback with io.sockets.on. You should move that block of code outside of app.get, after the initialization statement of the io object.
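A rough sketch of that restructuring; reading the search term from the handshake query (socket.handshake.query.q) and calling stream.destroy() on disconnect are assumptions about your client and Twitter library, not part of the original answer:
// set up the connection handler once, outside any route handler
io.sockets.on('connection', function (socket) {
    // each connected client tracks its own term, defaulting to the old hard-coded one
    var term = (socket.handshake.query && socket.handshake.query.q) || 'stackoverflow';
    twit.stream('user', { track: term }, function (stream) {
        stream.on('data', function (data) {
            socket.volatile.emit('tweet', data);
        });
        // stop streaming for this client when it disconnects
        socket.on('disconnect', function () {
            stream.destroy();
        });
    });
});

app.get('/', function (req, res) {
    // page rendering only; no socket.io setup in here
});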