MDN says "The suspend event is fired when media data loading has been suspended." But what does this actually mean? What are some examples of this event? I'm especially interested in the context of an audio stream.
docs: https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/suspend_event
The suspend event occurs when the browser intentionally stops fetching media data. Loading counts as suspended either because the download has completed (the whole resource is buffered) or because the browser has paused it for some other reason. For an audio stream, this typically happens when the browser has buffered enough data ahead of the playback position and temporarily stops downloading, or when preload="none" defers loading until the user presses play.
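For example, a minimal listener sketch (the selector and log text are just placeholders):

const audio = document.querySelector('audio'); // placeholder selector

audio.addEventListener('suspend', () => {
  // The browser has intentionally stopped fetching media data: either the
  // whole resource is already buffered, or fetching was deferred/paused
  // (e.g. preload="none", or enough data is buffered ahead of playback).
  console.log('Media data loading suspended');
});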
Related
I am new to Flutter and the famous flutter_bloc library. I have created a chat system based on the BLoC pattern:
A widget fires an event to chatBloc
The chatBloc catches the event, processes it, and calls chatRepository
ChatRepository adds the message as a draft to its messageList variable and then sends the message to the API
ChatRepository receives the message back from the API when it is done and replaces the draft in its messageList variable
ChatBloc yields its final state and the page rebuilds
Everything works fine as long as no more than one message is processed at a time.
If my button widget fires two quick events, they overlap: the second event fires while the first is still in progress. When the second event starts, message 1 is still a draft, so when message 2 completes and chatRepository updates its messageList, message 1 gets written back as a draft even though it has already been validated.
At this point I'm asking myself if I made a mistake or if I'm trying to do it the wrong way.
Would it be better to create a queue and process messages in the background one by one (roughly as in the sketch below)?
Thank you for your help!
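A queue along the lines suggested above could look roughly like this, sketched in JavaScript (the language used elsewhere on this page) since the pattern translates directly to Dart; sendMessage is a hypothetical stand-in for the repository call:

// A minimal sequential task queue: each job starts only after the previous
// one settles, so message 2 never sees message 1 in its mid-flight state.
class MessageQueue {
  constructor() {
    this.tail = Promise.resolve();
  }
  enqueue(job) {
    // Chain the job onto the tail; catching errors keeps one failed send
    // from stalling the rest of the queue.
    this.tail = this.tail.then(job).catch((err) => console.error(err));
    return this.tail;
  }
}

const queue = new MessageQueue();
queue.enqueue(() => sendMessage('first'));  // sendMessage is hypothetical
queue.enqueue(() => sendMessage('second')); // runs only after 'first' settles

Depending on the flutter_bloc version, the bloc_concurrency package's sequential() event transformer may give the same one-at-a-time behavior without a hand-rolled queue.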
I'm using the Media Response to play audio in my Actions SDK driven Action (the Actions SDK is being used as the fulfillment tool in my Actions Console). At the end of each audio clip, I use the MEDIA_STATUS callback to advance to another mp3 file in a predefined playlist. As a result, users should be able to navigate forwards/backwards.
When testing on my Google Home Mini, Google Assistant on Android, and a Smart Display, I can intercept "next" and advance to the next audio clip (it sends a request with an intent of type MEDIA_STATUS). However, I can't properly intercept "previous": whenever I try, the audio restarts. Real devices seem to handle this intent on their own and don't produce any console output (my webhook is not accessed at all).
Dialogflow seems to handle "next" and "previous" as follow-up intents, but I need to do the same without using Dialogflow as the fulfillment tool.
Can anyone please help with this particular problem?
Currently only a FINISHED event is supported for the Media Response, so you are not able to distinguish between next and previous.
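For reference, handling that FINISHED callback with the actions-on-google v2 client library looks roughly like this (nextTrack is the helper from the sample further down this page):

app.intent('actions.intent.MEDIA_STATUS', (conv) => {
  const mediaStatus = conv.arguments.get('MEDIA_STATUS');
  // FINISHED is the only status the platform currently reports, so this
  // handler can only react to playback completing, not to next/previous.
  if (mediaStatus && mediaStatus.status === 'FINISHED') {
    nextTrack(conv, false);
  }
});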
Referring to Leon Nicholls's answer above, I am really confused. I followed his Google Audioplayer tutorial/guide, which links to an audioplayer template action.
You can still download it, and if you unzip it, you will see code that is put there specifically to handle previous and next tracks.
Now it appears Leon is saying that it can't handle previous and next by voice (but buttons appear to work).
Here is the sample code below. There are links to tutorial actions in it, but all of that content has now been removed. I simply need blind people to be able to select and play various bits of audio, too long for SSML, which must be navigable (at least "next", plus pause/resume later) and may be part of a playlist. How do I achieve this? Suggesting "Media Partner" or "Podcast" is not a solution, as neither is available in the UK.
// Handle the More/Yes/Next intents
app.intent(['More', 'Yes', 'Next'], (conv) => {
  console.log(`More: fallbackCount=${conv.data.fallbackCount}`);
  nextTrack(conv, false);
});

// Handle the Repeat/Previous intent
app.intent(['Repeat', 'Previous'], (conv) => {
  console.log(`Repeat: ${conv.user.storage.track}`);
  nextTrack(conv, false, true);
});

// Select the next track to play from the data
const nextTrack = (conv, intro, backwards) => {
  console.log(`nextTrack: ${conv.user.storage.track}`);
  let track = data[0];
  // Persist the selected track in user storage
  // https://developers.google.com/actions/assistant/save-data#save_data_across_conversations
  if (conv.user.storage.track) {
    conv.user.storage.track = parseInt(conv.user.storage.track, 10);
    if (backwards) {
      // The original snippet was truncated here; a plausible completion
      // steps the persisted playlist index back (or forward):
      conv.user.storage.track = Math.max(conv.user.storage.track - 1, 0);
    } else {
      conv.user.storage.track = (conv.user.storage.track + 1) % data.length;
    }
    track = data[conv.user.storage.track];
  }
  // ... (the rest of the sample builds the MediaResponse for the track)
};
I would like to catch the "actions.intent.MEDIA_STATUS" hook with the timecode at which the user stopped the streaming playback.
The status returned by this callback only indicates failure or completion (https://developers.google.com/actions/reference/rest/Shared.Types/Status).
https://developers.google.com/actions/assistant/responses#handling_callback_after_playback_completion
Do you think it is possible to get the media timecode?
There is no reliable way to do this. As you note, the event sent does not contain the timecode where playback stopped, or even why it stopped (did the user stop it, or did it end?). Other Intents may also be triggered by the user requesting an action, in which case you won't get the MEDIA_STATUS event at all.
You can get a very rough idea of where the user stopped by including the media start time in a context or in session data when you first send the reply. When you get the next message, you can compare the time that you previously saved with the current time. This is very imperfect, however, since it doesn't account for network latency or the user pausing and resuming the audio during playback.
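A sketch of that workaround, assuming the actions-on-google v2 library (conv.data is its per-conversation session storage; 'Play Track' is a placeholder intent name):

// When sending the media response, remember when playback started.
app.intent('Play Track', (conv) => {
  conv.data.mediaStartedAt = Date.now();
  // ... respond with the MediaObject here ...
});

// On the next request, estimate how far the user got. Very rough: this
// ignores network latency and any pause/resume during playback.
app.intent('actions.intent.MEDIA_STATUS', (conv) => {
  const elapsedSeconds = (Date.now() - conv.data.mediaStartedAt) / 1000;
  console.log(`Approximate playback position: ${elapsedSeconds}s`);
});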
Is there any way I can add a custom message when the user says "Cancel" or "Exit" to end the conversation?
I have added an intent for these utterances and connected it to my webhook, but the message I send back in app.ask() is not displayed or read.
This is currently a known issue and you cannot override these words. You can handle other words like "quit", "finish", "exit", etc.
Done using API.AI (see the picture): http://i.imgur.com/WDQWmwb.png
As per this specification:
Conversation Exit google assistant
you will probably only be able to do an app.tell() in your webhook, because this must be a final Intent (you can tell the user something, but not ask and wait for a response); see point five of the suggested configuration:
Enable "Set this intent as end of conversation"
In any case, if you use a webhook, take care: the maximum execution time allowed for conversation exit requests is 2 seconds.
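For illustration, a minimal final handler sketched against the v2 Node.js client library, where conv.close() plays the role app.tell() has in v1 ('Goodbye' is a placeholder intent name):

// An end-of-conversation intent may send a final response but cannot
// prompt for further input, and must return within the 2-second limit.
app.intent('Goodbye', (conv) => {
  conv.close('Thanks for listening. Goodbye!');
});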
Is there a way to subscribe to an event and handle it asynchronously using the event += EventHandler syntax, or is there any workaround to achieve this?
The event/delegate system calls each of the subscribed event handlers synchronously on the thread that fires the event. To make the event handler's processing asynchronous, the asynchrony has to be part of the function that you subscribe to the event.
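Sketched in Node-style JavaScript (the language used elsewhere on this page), since the pattern is the same in any event system: the emitter invokes each listener synchronously, so the listener itself must hand the real work off:

const { EventEmitter } = require('events');

const emitter = new EventEmitter();

emitter.on('message', (payload) => {
  // emit() calls this listener synchronously; scheduling the real work
  // with setImmediate() returns control to the emitter right away.
  setImmediate(() => {
    console.log('processing', payload); // stand-in for the real work
  });
});

emitter.emit('message', 'hello'); // returns before 'processing' is logged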