I would like to add functionality to a ruleset that fires a distinct rule depending on whether or not the browser is mobile (one rule fires for a standard browser, a different rule fires for a mobile browser). I know that the browser detection can be done any number of ways, but my first inclination would be to do it with JavaScript.
Any thoughts on how to start with this?
You can use the useragent object, like this:
rule detect_agent {
  select when pageview ".*"
  pre {
    browser_name = useragent:browser_name();
    browser_version = useragent:browser_version();
    os = useragent:os();
    os_type = useragent:os_type();
    os_version = useragent:os_version();
    full_useragent = useragent:string();
    message = <<
      <p><strong>Information about your browser:</strong><br />
      <em>Browser name:</em> #{browser_name}<br />
      <em>Browser version:</em> #{browser_version}<br />
      <em>Operating system:</em> #{os}<br />
      <em>OS type:</em> #{os_type}<br />
      <em>OS version:</em> #{os_version}</p>
      <p>#{full_useragent}</p>
    >>;
  }
  append("body", message);
}
You might have to do some parsing of your own, though, since the browser_name and os may or may not be correct. Here's what it looks like in Chrome on a Mac (you can test it using this URL in any browser):
Here's what it looks like in Safari on an iPad:
Do some research into what the UserAgent strings look like for the browsers you care about. Then you can use the useragent:string() function together with match() to determine what to do with it. (If you want an example of how to do that, let me know.)
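Since the question mentions a JavaScript inclination, here is a minimal client-side JavaScript sketch of that matching idea; the regular expression is illustrative only and deliberately not exhaustive:
// Rough mobile check based on navigator.userAgent.
// The pattern is an assumption, not a complete list of mobile browsers.
var isMobile = /Mobi|Android|iPhone|iPad|iPod/i.test(navigator.userAgent);

if (isMobile) {
  // take the path meant for mobile browsers
  console.log('mobile browser');
} else {
  // take the path meant for standard desktop browsers
  console.log('standard browser');
}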
Since the iOS 16 update, my vocabulary app (PWA) has problems speaking the text provided to the SpeechSynthesisUtterance object. It doesn't apply to all languages; e.g. Russian sounds the same as before the update to iOS 16. With German or English, though, the quality is very low: muffled, and the voice sounds nasal. In Safari on macOS everything works as expected, but not on iOS 16.
const fullPhrase = toFullPhrase(props.phrase);
const utterance = new SpeechSynthesisUtterance();
onMounted(() => { // Vue lifecycle method
  utterance.text = fullPhrase;
  utterance.lang = voice.value.lang;
  utterance.voice = voice.value;
  utterance.addEventListener(ON_SPEAK_END, toggleSpeakStatus);
});
I tried to modify the pitch and rate properties, but without success. Did they change the API for SpeechSynthesis / SpeechSynthesisUtterance in Safari on iOS 16, maybe?
It looks like iOS 16 introduced a lot of new (sometimes very weird) voices for en-GB and en-US. In my case I was looking up a voice only by its lang and taking the first match. As a result I was getting a strange voice.
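For what it's worth, a minimal sketch of picking a voice more deliberately instead of taking the first match; the preferred voice name below is an illustrative assumption, and getVoices() may return an empty list until the voiceschanged event has fired:
// Filter by language, then prefer a named voice or the platform default.
function pickVoice(lang, preferredNames = []) {
  const voices = window.speechSynthesis.getVoices().filter(v => v.lang === lang);
  return (
    voices.find(v => preferredNames.includes(v.name)) ||
    voices.find(v => v.default) ||
    voices[0] ||
    null
  );
}

const voice = pickVoice('en-US', ['Samantha']); // 'Samantha' is just an example name
if (voice) {
  const utterance = new SpeechSynthesisUtterance('Hello there');
  utterance.voice = voice;
  utterance.lang = voice.lang;
  window.speechSynthesis.speak(utterance);
}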
Given this screenshot of a Firefox DOM rendering, I'm interested in reading that highlighted element down a ways there and writing to the "hidden" attribute three lines above it. I don't know the JavaScript nomenclature for traversing into that index "0" subwindow shown in the first line under the window indexed "3", which is the root context of my code's hierarchy. That innerText element I'm after does not appear anywhere else in the DOM, at least not that I can find, and I've looked and looked for it elsewhere.
Just looking at this DOM, I would say I could address that info as follows: Window[3].Window[0].contentDocument.children[0].innerText (no body, interestingly enough).
How this DOM came about is a little strange, in that Window[0] is generated by the following code snippet, located inside an onload event. It makes a soft EMBED element, so Window[0] and everything inside it is transient. FWIW, the EMBED element is simply a way for the script to offload the task of asynchronously pulling in the next .mp4 file name from the server while the previous .mp4 is playing, so it will be ready instantly onended; no blocking is necessary to get it.
// Create the EMBED element once, if it doesn't exist yet.
if (elmnt.contentDocument.body.children[1] == null)  // covers both undefined and null
{
  var mbed = document.createElement("EMBED");
  var attsrc = document.createAttribute("src");
  mbed.setAttributeNode(attsrc);
  var atttyp = document.createAttribute("type");
  mbed.setAttributeNode(atttyp);
  var attwid = document.createAttribute("width");
  mbed.setAttributeNode(attwid);
  var atthei = document.createAttribute("height");
  mbed.setAttributeNode(atthei);
  elmnt.contentDocument.body.appendChild(mbed);
}
// Ask the server for the next .mp4 file name while the current one plays.
elmnt.contentDocument.body.children[1].src = elmnt.contentDocument.body.children[0].currentSrc + '?nextbymodifiedtime';
elmnt.contentDocument.body.children[1].type = 'text/plain';
I know better than to think Window[3].Window[0]...... is valid. Can anyone throw me a clue how to address the DOM steps into the contentDocument of that Window[0]? Several more of those soft Windows from soft EMBED elements will eventually exist as I develop the code, so keep that in mind. Thank you!
elmnt.contentWindow[0].document.children[0].innerText does the trick
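For completeness, a small sketch of using that path to read the text and write a nearby hidden attribute; exactly which element carries the hidden attribute depends on the screenshot, so the indices below are assumptions:
// elmnt is the element whose contentDocument hosts the soft EMBED, as in the question.
var innerDoc = elmnt.contentWindow[0].document;  // document of the nested Window[0]
var nextFile = innerDoc.children[0].innerText;   // read the highlighted innerText
innerDoc.children[0].hidden = true;              // illustrative write to the "hidden" attribute
console.log(nextFile);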
I offer a service which needs the correct location and heading of the mobile device the user is accessing my service with. It was relatively easy to get that information, but now I am stuck on finding out how accurate the heading I received is.
Since I'm not using any framework for this and I don't develop in Android/iOS, where there are solutions for that problem, I need a solution that depends only on JavaScript (+ third-party libraries).
I've found the Generic Sensor API from the W3C, but couldn't find any sensor which holds that information. Neither the Accelerometer nor the AbsoluteOrientationSensor held the needed information.
My current code looks like this:
let accelerometer = null;
try {
  accelerometer = new Accelerometer({ frequency: 60 });
  accelerometer.addEventListener('error', event => {
    // Handle runtime errors.
    if (event.error.name === 'NotAllowedError') {
      console.log('Permission to access sensor was denied.');
    } else if (event.error.name === 'NotReadableError') {
      console.log('Cannot connect to the sensor.');
    }
  });
  accelerometer.addEventListener('reading', (event) => {
    console.log(accelerometer);
    console.log(event);
  });
  accelerometer.start();
} catch (error) {
  // Construction can throw, e.g. when the API is unavailable or blocked by permissions policy.
  console.log(error);
}
...but since the problem is more of a 'finding the right tool for the job' type, the code won't help much in depicting my problem. Furthermore, Google uses roughly the same functionality I'm aiming for in Google Maps, so theoretically there must be a way to get the heading accuracy.
So, in conclusion, is there any way to retrieve the heading accuracy in a plain JavaScript environment?
Thanks!
I was wondering if it is possible to not include any text in my SSML, since my audio file already says 'Are you ready to play?'; I don't need any speech from the Google Assistant itself.
app.intent('Default Welcome Intent', (conv) => {
  const reply = `<speak>
    <audio src="intro.mp3"></audio>
  </speak>`;
  conv.ask(reply);
});
The code above produces an error since I do not have any text input.
The error you probably got was something like
expected_inputs[0].input_prompt.rich_initial_prompt.items[0].simple_response: 'display_text' must be set or 'ssml' must have a valid display rendering.
As it notes, there are conditions where the Assistant runs on a device with a display (such as your phone), and it should show a message that is substantively the same as what the audio plays.
You have a couple of options that are appropriate for these cases.
First, you can provide optional text inside the <audio> tag that will be shown, but not read out (unless the audio file couldn't be loaded for some reason).
<speak>
<audio src="intro.mp3">Are you ready to play?</audio>
</speak>
Alternately, you can provide separate strings that represent the SSML version and the plain text version of what you're saying.
// SimpleResponse is exported by the actions-on-google client library.
const { SimpleResponse } = require('actions-on-google');

const ssml = `<speak><audio src="intro.mp3"></audio></speak>`;
const text = "Are you ready to play?";

conv.ask(new SimpleResponse({
  speech: ssml,
  text: text
}));
Found a hacky workaround for this, by adding a very short string and then putting it in a prosody tag with silent volume:
app.intent('Default Welcome Intent', (conv) => {
  const reply = `<speak>
    <audio src="intro.mp3"></audio>
    <prosody volume="silent">a</prosody>
  </speak>`;
  conv.ask(reply);
});
This plays the audio and does not speak the 'a' text.
Another way to trick it is to use a blank space, so you don't get the No Response error ("... is not responding now"):
conv.ask(new SimpleResponse(" "));
const reply = `<speak>
  <audio src="intro.mp3"></audio>
</speak>`;
conv.ask(reply);
I have a Perl script which listens on a port and filters messages, and, based on them, proposes to take action or to ignore the event.
I'd like to make it show a notification window (not a dialogue window) with buttons 'take action' and 'ignore', which would go away after a certain timeout.
So far I have something like this:
my @react = ("somecommand", "someoptions");  # based on what regex a message matched
my $cmd = "xmessage";
my $cmd_args = "-print -timeout 7 -buttons React,Dismiss $message";  # raw message from port
open XMSG, "$cmd $cmd_args |";
while (<XMSG>) {
  if ($_ eq "React\n") {
    # do something...
  }
}
But it would handle only one notification at a time, and the next message would not appear until the previous one is dismissed, reacted to, or timed out, so it's quite a bad design. I cannot do anything until I get the return code from xmessage, and I can't get xmessage to run a command. Well, I probably could if I introduced event IDs and listened on a socket where xmessage prints, but that would make things too complicated, I guess.
So I wonder: is there a library or a utility for Linux that draws notify-like windows with buttons, each of which would trigger a command?
I'm sorry I didn't see this one when it was first posted. There are several GUI toolkits which could do something along these lines. Prima is a toolkit built especially for Perl and has no external library dependencies.
For when you just need a popup dialog, there is the Ask module, which delegates the task of popping up windows to any available library.
In case anyone's interested, I've ended up writing a small Tcl/Tk program for that, the full code (all 48 lines) can be found here: http://cloudcabin.org/read/twobutton_notify, and you can ignore the text in Russian around it.