Send prompt after a default timer - actions-on-google

I'm working on Google Actions Console.
I want my Google agent to verbally warn that time is up (instead of setting a timer, for instance).
I now have two main scenes:
1. the user says "I am ready", and the agent responds "OK. Ready, set, go!";
2. (the user says nothing and) the agent says "please stop now".
I would like the prompt in 2 to proactively run 5 minutes after the end of the prompt in 1, without the user having to say anything.
Is it possible to create a timer/delay of 5 minutes before the transition from 1 to 2, or to have the prompt in 2 delayed by 5 minutes during scene 2? How can I create this delay? Is there any workaround otherwise?
NB: I'm not a developer so be patient :D

This is difficult to do without code, but not impossible.
First - in general, Actions on Google is poorly suited for this. It is much better for conversational systems rather than timed events.
You have two options for how to do this:
As part of an Interactive Canvas game.
Using a Media response.
As part of an Interactive Canvas game
This scenario has you controlling the timer using JavaScript code that is part of an Interactive Canvas page that you have loaded on a Smart Display or Smart Phone device. As part of the "Ready Set Go" response, you send data back to indicate that your local code should start the timer.
You'll capture this data as part of the onUpdate() callback, and in your callback function you set the timer. This is done using JavaScript's setTimeout() function. In the function that setTimeout() triggers when the timer fires, you can call the sendTextQuery() function to continue the conversation.
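The Canvas-side timer described above can be sketched roughly like this. This is a minimal sketch: `startStopTimer`, the "time is up" query, and the `startTimer` data field are illustrative names I've made up, while `interactiveCanvas` (with its `ready()` and `sendTextQuery()` methods) is the global object the Interactive Canvas client library provides on the page.

```javascript
// Schedules a callback that "speaks" on the user's behalf after delayMs.
// The send function is injected so this logic can be exercised without
// the Interactive Canvas runtime; the phrase itself is a placeholder.
function startStopTimer(sendTextQuery, delayMs) {
  return setTimeout(() => {
    // Continues the conversation as if the user had said this aloud.
    sendTextQuery('time is up');
  }, delayMs);
}

// Wiring it to the Canvas callbacks (assumed data field `startTimer`):
// interactiveCanvas.ready({
//   onUpdate(data) {
//     if (data[0] && data[0].startTimer) {
//       startStopTimer(q => interactiveCanvas.sendTextQuery(q), 5 * 60 * 1000);
//     }
//   },
// });
```

Your scene or webhook then needs an intent that matches whatever phrase you send, so the "please stop now" prompt plays in response to it.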
Using a Media response
This will work on devices that can play long-form audio, but do not have a screen (so they can't use the Interactive Canvas).
In this scenario, when you send the "Ready Set Go" response, you also include a Media prompt which plays a 5-minute long audio.
When the audio finishes playing, it will send a MEDIA_STATUS_FINISHED System Intent which you can handle and then reply to continue the conversation.
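In the Actions Builder webhook response, the "Ready Set Go" reply with an attached Media prompt would look roughly like this. This is a sketch: the URL is a placeholder for your own 5-minute audio file, and the field names follow my understanding of the HandlerResponse prompt format.

```json
{
  "prompt": {
    "firstSimple": { "speech": "OK. Ready, set, go!" },
    "content": {
      "media": {
        "mediaType": "AUDIO",
        "mediaObjects": [{
          "name": "Five minute timer",
          "url": "https://example.com/five-minutes-of-audio.mp3"
        }]
      }
    }
  }
}
```

A scene that handles the MEDIA_STATUS_FINISHED system intent can then send the "please stop now" prompt.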
Which should you use?
Well... maybe both. Media works better on Smart Speakers, while the Interactive Canvas works better on Smart Displays and Smart Phones (assuming your Action is a Game).

Related

Google Home Action is being throttled

I have created a Google Home action that plays short audio clips in a sequence. Each clip is a couple of seconds long.
Flow is as below.
Play audio1.
MEDIA_STATUS intent is triggered at the end of audio1. Then play audio2
MEDIA_STATUS intent is triggered at the end of audio2. Then play audio3
Then play audio4, and so on.
Issue is that the execution pauses after every 4 or 5 audio clips are played. The pause varies between half a minute to even two minutes long.
I see this problem on a Google Home speaker, but not in the Simulator (in the browser).
Based on the logs, there is no delay in the action between request received and response sent. The pause seems to be because the agent is being called after a pause.
Probably the speaker takes a pause and does not call the backend for those durations?
What else could be causing this, and is there a workaround?
I had a similar problem with the old assistant library. Which one are you using? The new one is called assistant-conversation-nodejs. If you are using the old one, the only thing you can do is update, as the old one is no longer maintained.
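Whichever library version you use, the clip-chaining itself is simple: on each MEDIA_STATUS event you build the next media prompt, or end the conversation when the playlist is done. Here is a library-free sketch of that logic; the URLs are placeholders and the response shape is illustrative rather than tied to a specific client library.

```javascript
// Hypothetical playlist of short clips, played back to back.
const clips = [
  'https://example.com/audio1.mp3',
  'https://example.com/audio2.mp3',
  'https://example.com/audio3.mp3',
];

// Given the index of the clip that just finished, build the media prompt
// for the next one, or return null to signal "end the conversation".
function nextMediaPrompt(clips, finishedIndex) {
  const next = finishedIndex + 1;
  if (next >= clips.length) return null; // playlist finished
  return {
    media: {
      mediaType: 'AUDIO',
      mediaObjects: [{ name: `clip ${next + 1}`, url: clips[next] }],
    },
  };
}
```

Your MEDIA_STATUS handler would call something like this and attach the result to the response, which also makes the chaining easy to test in isolation.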

Ending a conversation playing a media response doesn't work as expected. It pauses the audio output but doesn't quit the action

Media handling with media responses and exit handling with exit intents somehow conflict with users' expectations. Links to the docs supplied below. Expected/actual behaviour also described.
Is there any chance to get that handled (at least in the near future) by defining custom utterances for media handling? As far as I know there is no possibility to define custom utterances/intents for "play" / "pause" / "stop" / "start over".
I set up an AoG Action to play streams through the media response.
When I want to completely end the conversation with an exit intent from within a media response, it doesn't really stop the conversation; it just pauses the player with no voice output, and on any visual device it shows a play button instead of the pause button. On a second "stop" utterance (or whatever calls the exit intent), the action finally finishes and plays the desired exit audio / shows the desired text, making clear that the action really ended.
Even if this is somehow the expected behaviour, it is still pretty annoying with regard to user expectations. When a user says "STOP", they probably want to end the conversation and not pause a stream, or am I wrong here? User case studies in our company showed at least that.
A solution would be the ability to add custom voice output when stopping media playback.
This is a known bug in the Media response.
Ending a conversation with a MediaResponse will open up player features like pause, forward, backwards and raising / lowering the volume. This is however currently broken on a Google Home device as the audio will never play. We had it working for several months until it broke and we had to switch to keeping the conversation open in order to make the audio play.
We have noticed that there are several undocumented commands which do strange things... For instance, when playing audio with the conversation open and saying "Lämna", which is the Swedish equivalent of "Leave", the conversation will end and the audio will keep playing. The command "Stopp" will stop the audio but keep the conversation open. "Avsluta" = "End" will stop the audio and close the conversation.
None of these commands will call back to our backend, and things seem to change on a weekly basis.

Capture Silence on Google Home

Can we get Google to send something when the person stays silent in a conversation? A "no.response" intent before closing the microphone. I'm thinking of an "are you still there" question scenario, or timed question/response games.
The point is not to close the session and give a chance to continue.
This would happen only once (or configurable times) so the microphone would not stay open. 
When using the client library's assistant.ask method, remember to specify values for the no-input parameters. You can use these to prompt the user to respond when the microphone doesn't hear anything.
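In the raw Actions SDK conversation response, these reprompts go into the noInputPrompts array alongside the initial prompt. A rough sketch, with example prompt texts; the field layout follows my understanding of the v2 conversation webhook format:

```json
{
  "expectUserResponse": true,
  "expectedInputs": [{
    "inputPrompt": {
      "richInitialPrompt": {
        "items": [{
          "simpleResponse": { "textToSpeech": "Are you still there?" }
        }]
      },
      "noInputPrompts": [
        { "textToSpeech": "If you're still playing, say continue." }
      ]
    },
    "possibleIntents": [{ "intent": "actions.intent.TEXT" }]
  }]
}
```

After the no-input prompts are exhausted, the microphone closes and the conversation ends, so this gives the user a limited number of chances to continue.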

Certification failed for background audio streaming app

I submitted a background audio app for certification, and it failed for two reasons that I could not figure out.
Reason 1:
This app failed to correctly respond to at least one of the play,
pause, or play/pause events.
I understand that the MediaControl events for Play, Pause, Stop and PlayPause need to be catered for, and I have done so in the code (and tested on both tablets and local devices that they are working). However, because stopping a media stream and restarting it takes longer than expected, I used MediaElement.Pause() for both "Pause" and "Stop".
I read another post from someone who had a similar problem at the certification phase. Somebody recommended using MediaElement.PlaybackRate = 0; instead. However, this is not ideal for long pauses, as the stream will not move on.
What I wish to know is am I doing this the right way? For all my MediaControl events I have made sure that the MediaControl.IsPlaying property is correctly set as well.
Also, another reason it failed was this:
App failed the Perf test in the Windows ACK. See the following links
for more information: Test cases ran:
http://msdn.microsoft.com/en-us/library/windows/apps/hh920274.aspx
I have run my app against the ACK and it all passed. The only thing I can think of is that the app does not enter suspend mode when the hardware (or on-screen) media control pause button is pressed. I have placed a debugger in the App_Suspending event, but it never hits there.
As the description is too vague, I am not sure if this is the problem. But if it is, how do I force the app to enter suspended mode? I tried looking in the Window.Current and Application.Current classes, but to no avail.
Thanks!
For your first issue, be sure that your media element is ready to play:

    while (CurrentTrack.CurrentState == MediaElementState.Opening ||
           CurrentTrack.CurrentState == MediaElementState.Buffering)
    {
        await Task.Delay(100);
    }
    CurrentTrack.Play();

Also, you have to stop your media element when the view is unloaded.
Regards.
After nearly 10 attempts in releasing the app, I finally got to the root of the problem, thanks to some guessing work by the folks at Microsoft too.
My app will automatically start the MediaElement streaming after the app is started. The background-capable audio will prevent the app from passing WACK because it will never enter suspended mode!
So, in order to get past the store's WACK, I had to remove the auto-starting feature, and now the app is in the store! (Phew.)

iPhone data transmission issue

I have an iPhone app that uses ASIHTTPRequest to transmit data input by the user to a remote MySQL database via a PHP webservice layer. This works perfectly.
If the user presses the submit button, the data should be sent regardless. The problem arises when there is insufficient bandwidth: rather than displaying a UIAlert to inform the user, I would like to implement some kind of function that constantly 'sniffs' for an internet connection, even when the app isn't running in the main view, so that the user only has to press 'submit' once.
How is this possible? Has anyone come across any tutorials/examples of anything similar?
Check out this example application from Apple: Reachability
It'll help you with some code to detect when the connection has changed.
Here's a link about backgrounding tasks. As you'll read, you can request additional time to complete a task, but it won't wait an infinite amount of time until it's complete. Background Tasks
Use the Reachability API in conjunction with a flag of some sort that will perform the desired action once it detects that a connection is available.