How to close a conversation from the webhook in @assistant/conversation - actions-on-google

I want to close the conversation after the media has started playing in @assistant/conversation, as I am doing here:
app.intent("media", conv => {
conv.ask(`Playing your Radio`);
conv.ask(
new MediaObject({
url: ""
})
);
return conv.close(new Suggestions(`exit`));
});

As Jordi mentioned, suggestion chips cannot be used to close a conversation. Additionally, the syntax of @assistant/conversation is different from actions-on-google. Since you're using the tag dialogflow-es-fulfillment but also actions-builder, I don't know which answer you want, so I'm going to give two answers depending on which you're using.
Dialogflow
If you are using Dialogflow, you are pretty much set. Switch to the actions-on-google library and instantiate the app with the dialogflow service:
const {dialogflow} = require('actions-on-google')
const app = dialogflow()
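With that in place, your original handler is nearly there; here is a minimal sketch (the stream name and URL are placeholders, and the final response drops the suggestion chip since chips can't be used when closing):
const {dialogflow, MediaObject} = require('actions-on-google')
const app = dialogflow()

app.intent('media', conv => {
  conv.ask(`Playing your Radio`)
  conv.ask(new MediaObject({
    name: 'Radio stream',                 // placeholder name
    url: 'https://example.com/stream.mp3' // placeholder URL
  }))
  // Close without suggestion chips; chips aren't allowed on a final response.
  conv.close(`Goodbye!`)
})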
Actions Builder
The syntax of the @assistant/conversation library is different, and some method names have changed. Additionally, you will need to go through Actions Builder to canonically close the conversation.
In your scene, you will need to transition the scene to End Conversation to close, rather than specifying it as part of your response (a webhook-driven version of that transition is sketched after the refactored handler below). Still, your end transition should not have suggestion chips.
You will need to refactor your webhook:
const {conversation} = require('@assistant/conversation')
const app = conversation()
app.handle("media", conv => {
conv.add(`Playing your Radio`);
conv.add(
new MediaObject({
url: ""
})
);
conv.add(new Suggestions(`exit`));
});
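If you want the webhook itself to trigger that End Conversation transition (instead of a static transition configured in the console), a sketch could look like this (the handler name "exit" is hypothetical; actions.scene.END_CONVERSATION is the system end scene):
app.handle('exit', conv => {
  conv.add(`Goodbye!`);
  // Hand off to the system End Conversation scene, which closes the mic.
  conv.scene.next.name = 'actions.scene.END_CONVERSATION';
});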

As it seems you want media controls and to end the conversation afterwards, you should refer to the documentation (https://developers.google.com/assistant/conversational/prompts-media) to check the available media status events, since you can handle each one during media playback.
For example:
// Media status
app.handle('media_status', (conv) => {
  const mediaStatus = conv.intent.params.MEDIA_STATUS.resolved;
  switch (mediaStatus) {
    case 'FINISHED':
      conv.add('Media has finished playing.');
      break;
    case 'FAILED':
      conv.add('Media has failed.');
      break;
    case 'PAUSED':
    case 'STOPPED':
      if (conv.request.context) {
        // Persist the media progress value
        const progress = conv.request.context.media.progress;
      }
      // Acknowledge pause/stop
      conv.add(new Media({
        mediaType: 'MEDIA_STATUS_ACK'
      }));
      break;
    default:
      conv.add('Unknown media status received.');
  }
});
Once you get the FINISHED status, you can offer the suggestion chip to exit the conversation.
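A sketch of that FINISHED case (the Suggestion prompt class name and the "exit" intent it matches are assumptions; check the prompt types in your version of the library):
const {conversation, Suggestion} = require('@assistant/conversation'); // Suggestion class assumed
const app = conversation();

app.handle('media_status', (conv) => {
  const mediaStatus = conv.intent.params.MEDIA_STATUS.resolved;
  if (mediaStatus === 'FINISHED') {
    conv.add('Media has finished playing.');
    // The chip should match an "exit" intent whose scene transition ends the conversation.
    conv.add(new Suggestion({title: 'exit'}));
  }
});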

Is there a way to check if the PWA was launched through a file or not?

I'm using the File Handling API to give my web app the optional capability of launching by double-clicking files in the file explorer.
Writing the code below, I expected if (!("files" in LaunchParams.prototype)) to check if a file was used to launch the app, but apparently, it checks if the feature is supported. OK, that makes sense.
After that, I thought the setConsumer callback would be called in any launch scenario, and files.length would be zero if the app was launched in other ways (like by typing the URL in the browser). But in those cases, the callback was not called at all, and my init logic was never executed.
if (!("launchQueue" in window)) return textRecord.open('a welcome text');
if (!("files" in LaunchParams.prototype)) return textRecord.open('a welcome text');
launchQueue.setConsumer((launchParams) => {
if (launchParams.files.length <= 0) return textRecord.open('a welcome text');
const fileHandle = launchParams.files[0];
textRecord.open(fileHandle);
});
I've also followed the Launch Handler API article instructions and enabled the experimental API.
The new code confirms that "targetURL" in LaunchParams.prototype is true, but the setConsumer callback is not executed if the user accesses the web app through a standard browser tab.
function updateIfLaunchedByFile(textRecord) {
  if (!("launchQueue" in window)) return;
  if (!("files" in LaunchParams.prototype)) return;
  console.log({
    '"targetURL" in LaunchParams': "targetURL" in LaunchParams.prototype,
  });
  // this is always undefined
  console.log({ "LaunchParams.targetURL": LaunchParams.targetURL });
  // setConsumer does not trigger if the app is not launched by a file, so it is not a good place to branch what to do in every launch situation
  launchQueue.setConsumer((launchParams) => {
    // this never runs in a normal tab
    console.log({ setConsumer: launchParams });
    if (launchParams.files.length <= 0) return;
    const fileHandle = launchParams.files[0];
    textRecord.open(fileHandle);
  });
}
This is the result...
Is there a universal way to check if the web app was launched through a file?
Check out the Launch Handler origin trial. It lets you determine the launch behavior exactly and lets your app detect how the launch happened. This API works well together with the File Handling API that you already use. You could, for example, check the LaunchParams.targetURL to see how the app was launched. Your feedback is very welcome.
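For example, once the Launch Handler API is available, it could look roughly like this (a sketch that assumes the consumer also fires for ordinary navigations, and reuses the textRecord object from the question):
// Sketch: requires the Launch Handler origin trial / experimental flag to be enabled.
if ('launchQueue' in window && 'targetURL' in LaunchParams.prototype) {
  launchQueue.setConsumer((launchParams) => {
    if (launchParams.files && launchParams.files.length > 0) {
      // Launched by opening a handled file.
      textRecord.open(launchParams.files[0]);
    } else {
      // Launched some other way (typed URL, app icon, link, ...).
      console.log('Launched via', launchParams.targetURL);
      textRecord.open('a welcome text');
    }
  });
}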
Since I was not able to guarantee that the setConsumer callback was called in every situation (especially when the app is launched in a regular browser tab), I hacked around it with setTimeout:
function wasFileLaunched() {
  if (!("launchQueue" in window)) return;
  if (!("files" in LaunchParams.prototype)) return;
  return new Promise((resolve) => {
    let invoked = false;
    // setConsumer does not trigger if the app is not launched by a file, so it is not a good place to branch what to do in every launch situation
    launchQueue.setConsumer((launchParams) => {
      invoked = true;
      if (launchParams.files.length <= 0) return resolve();
      const fileHandle = launchParams.files[0];
      resolve(fileHandle);
    });
    setTimeout(() => {
      console.log({ "setTimeout invoked =": invoked });
      if (!invoked) resolve();
    }, 10);
  });
}
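Calling it then looks something like this (reusing the textRecord object from the earlier snippets):
// Decide what to open once, at startup.
(async () => {
  const fileHandle = await wasFileLaunched();
  if (fileHandle) {
    textRecord.open(fileHandle);        // launched by opening a file
  } else {
    textRecord.open('a welcome text');  // normal tab or icon launch
  }
})();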

Google Assistant media player goes away on pause

BRIEF:
I have created a Google Assistant application that plays music using Actions Builder. On a specific command, it triggers a webhook. The webhook returns a media response (Media from the '@assistant/conversation' library), and the code is the following:
conv.add(new Media({
  mediaType: 'AUDIO',
  start_offset: `3.000000001s`,
  mediaObjects: [{
    name: music,
    description: 'This is example of code ',
    url: `https://example.com`,
    image: {
      large: {
        url: 'https://example.com'
      },
    }
  }]
}));
It is running well on Android and on the emulator.
ISSUE:
When I pause the music (using the pause button), the media player goes away.
What should I do to keep the media player so that I can resume the music?
Any information regarding this would be appreciated. Thanks in advance.
EDIT: It shows the media player and plays the music well, but if you click the pause button it goes away on both of the above surfaces (Android/test emulator).
Just adding the media status acknowledgment to the webhook fixed the issue:
app.handle('media_status', (conv) => {
  const mediaStatus = conv.intent.params.MEDIA_STATUS.resolved;
  switch (mediaStatus) {
    case 'FINISHED':
      conv.add('Media has finished playing.');
      break;
    case 'FAILED':
      conv.add('Media has failed.');
      break;
    case 'PAUSED':
    case 'STOPPED':
      if (conv.request.context) {
        // Persist the media progress value
        const progress = conv.request.context.media.progress;
      }
      // Acknowledge pause/stop so the player stays on screen
      conv.add(new Media({
        mediaType: 'MEDIA_STATUS_ACK'
      }));
      break;
    default:
      conv.add('Unknown media status received.');
  }
});

In my Google Action, how do I efficiently conclude play of one MP3 and "skip" to another MP3 using a single user utterance?

I have a one-scene Action that calls my webhook 'randomSpeech' (mentioned below) upon invocation, which plays an MP3. I added a "skip" intent to skip to the next MP3. When I say "skip", the Action should transition (loop) back into the webhook 'randomSpeech', and since there is a counter, x, the Action should begin playing the 2nd MP3 in the switch statement.
However, I have to say the word "skip" twice in order for it to work.
The 1st time I say "skip", the system intent MEDIA_STATUS_FINISHED automatically calls the 'media_status' handler and the text 'Media has finished.' is added to the conversation. Even though I've configured the "skip" intent to call the handler 'randomSpeech', that doesn't appear to happen, as no new Media is added to the conversation. It's almost like 'randomSpeech' is completely ignored!
The 2nd time I say "skip", the second MP3 finally begins playing.
My main question is, how can I make it so the user only has to say "skip" one time?
let x = 1;
app.handle('randomSpeech', (conv) => {
  switch (x) {
    case 1:
      conv.add(new Media({
        mediaObjects: [
          {
            name: 'NEVER GIVE UP',
            description: 'some athlete',
            url: 'http://zetapad.com/speeches/nevergiveup.mp3',
            image: {
              large: {
                url: 'https://www.keepinspiring.me/wp-content/uploads/2020/02/motivation-gets-you-started-jim-ryun-quote-min.jpg'
              }
            }
          }
        ],
        mediaType: 'AUDIO',
        optionalMediaControls: ['PAUSED', 'STOPPED'],
        startOffset: '5s'
      }));
      x++;
      break;
    case 2:
      conv.add(new Media({
        mediaObjects: [
          {
            name: 'SPEECHLESS',
            description: 'Denzel Washington (feat Will Smith)',
            url: 'http://zetapad.com/speeches/denzel.mp3',
            image: {
              large: {
                url: 'https://www.keepinspiring.me/wp-content/uploads/2020/02/motivational-quotes-2-min.jpg'
              }
            }
          }
        ],
        mediaType: 'AUDIO',
        optionalMediaControls: ['PAUSED', 'STOPPED']
      }));
      break;
  }
});
app.handle('media_status', (conv) => {
  const mediaStatus = conv.intent.params.MEDIA_STATUS.resolved;
  switch (mediaStatus) {
    case 'FINISHED':
      conv.add('Media has finished.');
      break;
    case 'FAILED':
      conv.add('Media has failed.');
      break;
    case 'PAUSED':
    case 'STOPPED':
      if (conv.request.context) {
        // Persist the media progress value
        const progress = conv.request.context.media.progress;
      }
      conv.add(new Media({
        mediaType: 'MEDIA_STATUS_ACK'
      }));
      break;
    default:
      conv.add('Unknown media status received.');
  }
});
Images from the only scene, "Motivation" (screenshots of the scene configuration, its On enter handler, and its intent handling are omitted here).
Further notes:
MEDIA_STATUS_PAUSED / MEDIA_STATUS_FINISHED / MEDIA_STATUS_STOPPED all only call the 'media_status' webhook handler.
The issue at the heart of your question is that "skip" is a built-in Media Player command (although this is not clearly documented), so when the user says "skip", the player treats this as the audio being completed and sends the MEDIA_STATUS_FINISHED intent, just as if the user had listened to it all the way through.
The good news is that you actually want to handle both of these cases the same way: whether the user skips to the next audio, or finishes the first and it should advance, you want to play the next audio.
In your code, "playing the next audio" is all done as part of your switch statement, so you should probably pull that out into a regular JavaScript function by itself. You can then call that function from the different handlers that you have set up.
It might look something like this (without some of the code details):
function nextAudio(conv) {
  // Code goes here to figure out the next audio to play and send it back
}

app.handle('randomSpeech', (conv) => {
  nextAudio(conv);
});

app.handle('media_status', (conv) => {
  const mediaStatus = conv.intent.params.MEDIA_STATUS.resolved;
  switch (mediaStatus) {
    case 'FINISHED':
      nextAudio(conv);
      break;
    // Other media status cases can go here
  }
});
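Filled in with the media objects from your randomSpeech handler, nextAudio might look roughly like this (the wrap-around at the end of the track list is just one choice; images omitted for brevity):
const {Media} = require('@assistant/conversation');

let x = 1;
const tracks = [
  {
    name: 'NEVER GIVE UP',
    description: 'some athlete',
    url: 'http://zetapad.com/speeches/nevergiveup.mp3'
  },
  {
    name: 'SPEECHLESS',
    description: 'Denzel Washington (feat Will Smith)',
    url: 'http://zetapad.com/speeches/denzel.mp3'
  }
];

function nextAudio(conv) {
  // Pick the current track, then advance (and wrap) the counter.
  const track = tracks[(x - 1) % tracks.length];
  x++;
  conv.add(new Media({
    mediaType: 'AUDIO',
    optionalMediaControls: ['PAUSED', 'STOPPED'],
    mediaObjects: [track]
  }));
}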

Actions-On-Google NodeJS v2 alpha: More than one conv.close()

Will we be able to use conv.close() multiple times within an intent to provide more than just a single element on exit?
Similar to how you can provide multiple conv.ask() in the one intent.
Or can you include more than one 'new element' in a single conv.close() call?
Yes, of course, to both! At least that's how it works currently during the alpha (which can change depending on feedback).
conv.ask and conv.close are implemented almost identically, except that conv.close sets expectUserResponse to false, which means you don't expect more responses from the user and the mic will be closed.
This means you can use conv.close just like conv.ask and call it multiple times.
For example, this code:
const { dialogflow, Image } = require('actions-on-google')
const app = dialogflow()

app.intent('Default Welcome Intent', conv => {
  conv.close(`Here's a cat image`)
  conv.close(new Image({
    url: 'https://developers.google.com/web/fundamentals/accessibility/' +
      'semantics-builtin/imgs/160204193356-01-cat-500.jpg',
    alt: 'A Cat',
  }))
})
When the intent handler function is done executing (or, if it returns a Promise, when the Promise is resolved), the library constructs a RichResponse based on the response fragments you provided and sends it back to Dialogflow or the Google Assistant.
It closes the mic and shows the result in the simulator.
Alternatively, conv.ask and conv.close also allow you to pass an arbitrary number of response arguments in one call, so this code works identically to the example before:
app.intent('Default Welcome Intent', conv => {
  conv.close(`Here's a cat image`, new Image({
    url: 'https://developers.google.com/web/fundamentals/accessibility/' +
      'semantics-builtin/imgs/160204193356-01-cat-500.jpg',
    alt: 'A Cat',
  }))
})

Can I create follow-up actions on Actions on Google?

I know that I can deep link into my Google Home application by adding to my actions.json.
I also know that I can parse raw string values from the app.StandardIntents.TEXT intent that's provided by default, which I am currently doing like so:
if (app.getRawInput() === 'make payment') {
  app.ask('Enter payment information: ');
} else if (app.getRawInput() === 'quit') {
  app.tell('Goodbye!');
}
But does Actions on Google provide direct support for creating follow-up intents, possibly after certain user voice inputs?
An example of a conversation flow is:
OK Google, talk to my app.
Welcome to my app, I can order your most recent purchase or your saved favorite. Which would you prefer?
Recent purchase.
Should I use your preferred address and method of payment?
Yes.
OK, I've placed your order.
My previous answer didn't work after testing. Here is a tested version:
exports.conversationComponent = functions.https.onRequest((req, res) => {
  const app = new ApiAiApp({request: req, response: res});
  console.log('Request headers: ' + JSON.stringify(req.headers));
  console.log('Request body: ' + JSON.stringify(req.body));

  const registerCallback = (app, funcName) => {
    if (!app.callbackMap.get(funcName)) {
      console.error(`Function ${funcName} required to be in app.callbackMap before calling registerCallback`);
      return;
    }
    app.setContext("callback_followup", 1, {funcName});
  };

  const deRegisterCallback = (app) => {
    const context = app.getContext("callback_followup");
    const funcName = app.getContextArgument("callback_followup", "funcName").value;
    const func = app.callbackMap.get(funcName);
    app.setContext("callback_followup", 0);
    return func;
  };

  app.callbackMap = new Map();
  app.callbackMap.set('endSurvey', (app) => {
    if (app.getUserConfirmation()) {
      app.tell('Stopped, bye!');
    } else {
      app.tell('Lets continue.');
    }
  });
  app.callbackMap.set('confirmationStartSurvey', (app) => {
    const context = app.getContext("callback_followup");
    if (app.getUserConfirmation()) {
      registerCallback(app, 'endSurvey');
      app.askForConfirmation('Great! I\'m glad you want to do it! Do you want to stop?');
    } else {
      app.tell('That\'s okay. Let\'s not do it now.');
    }
  });

  // Welcome
  function welcome (app) {
    registerCallback(app, 'confirmationStartSurvey');
    const prompt = "You have one survey in your task list, do you want to proceed now?";
    app.askForConfirmation(prompt);
  }

  function confirmationCallbackFollowup (app) {
    const context = app.getContext("callback_followup");
    if (!context) {
      console.error("ERROR: confirmationCallbackFollowup should always have a context named callback_followup.");
      return;
    }
    const callback = deRegisterCallback(app);
    return callback(app);
  }

  const actionMap = new Map();
  actionMap.set(WELCOME, welcome); // WELCOME is the welcome action name constant defined elsewhere
  actionMap.set('confirmation.callback.followup', confirmationCallbackFollowup);
  app.handleRequest(actionMap);
});
The previous solution won't work because app is generated every time the action function is called. I tried to save a callback function into app.data, but it no longer exists when the next intent arrives. So I took another approach: register the callback function on app.callbackMap inside the function, so it will be there either way.
To make it work, one important thing is that API.AI needs to have the context defined in the intent. See the API.AI intent here.
Make sure you have the event, context, and action defined, of course; otherwise, this intent won't be triggered.
Please let me know if you can use this solution. Sorry for my previous wrong solution.
Thanks.
Can you give an example of a conversation flow that has what you are trying to do?
If you can use API.AI, they have Follow Up intents in the docs.
I do not think your code
if (app.getRawInput() === 'make payment') {
  app.ask('Enter payment information: ');
} else if (app.getRawInput() === 'quit') {
  app.tell('Goodbye!');
}
is a good idea. I would suggest you have two different intents to handle "Payment information" and "Quit".
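For example, with two API.AI intents whose actions are (say) make.payment and quit, the webhook stays a simple map (the action names and the export name here are placeholders):
const {ApiAiApp} = require('actions-on-google');

exports.paymentWebhook = (req, res) => {
  const app = new ApiAiApp({request: req, response: res});

  const makePayment = (app) => {
    app.ask('Enter payment information: ');
  };
  const quit = (app) => {
    app.tell('Goodbye!');
  };

  const actionMap = new Map();
  actionMap.set('make.payment', makePayment); // action name set on the API.AI intent
  actionMap.set('quit', quit);
  app.handleRequest(actionMap);
};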