I am trying to figure out how I can embed Google Actions responses, such as the carousel, in a webhook response for DialogFlow.
I am using V2 of the REST protocol, so I set the source field to ACTIONS_ON_GOOGLE and put the Google Actions fields in the payload field as specified (as per How can I integrate the Google Actions responses in a webhook response in Dialogflow?). I am sending the following response:
{
  "fulfillmentText": "This is a carousel.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "conversationToken": "",
    "expectUserResponse": true,
    "expectedInputs": [
      {
        "inputPrompt": {
          "initialPrompts": [
            {
              "textToSpeech": "This is a carousel."
            }
          ],
          "noInputPrompts": []
        },
        "possibleIntents": [
          {
            "intent": "actions.intent.OPTION",
            "inputValueData": {
              "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
              "carouselSelect": {
                "items": [
                  {
                    "optionInfo": {
                      "key": "key1",
                      "synonyms": [
                        "Option 1"
                      ]
                    },
                    "title": "Option 1",
                    "description": "Option 2"
                  },
                  {
                    "optionInfo": {
                      "key": "key2",
                      "synonyms": [
                        "Option 2"
                      ]
                    },
                    "title": "Option 2",
                    "description": "Option 2"
                  }
                ]
              }
            }
          }
        ]
      }
    ]
  }
}
When trying this out in the console, no carousel is shown; only the text "This is a carousel." is displayed. For your information, I did not include the image field, as it is optional according to the specification, but even with images no carousel is displayed.
It's hard to debug this, as my actions.intent.OPTION intent is not displayed in the possibleIntents[] array in the response tab. I expected this actions.intent.OPTION intent to be merged with the other intents (such as assistant.intent.action.TEXT) generated in the Dialogflow response.
What am I doing wrong here? Am I maybe shooting myself in the foot by using V2 instead of V1 of the Dialogflow REST protocol?
Update after initial feedback by Prisoner
I tried with the following response, but still not getting any carousel:
{
  "fulfillmentText": "Here you go.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "expectUserResponse": true,
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "Here are your results."
          }
        }
      ]
    },
    "systemIntent": {
      "intent": "actions.intent.OPTION",
      "data": {
        "carouselSelect": {
          "items": [
            {
              "optionInfo": {
                "key": "Option1",
                "synonyms": [
                  "Option2"
                ]
              },
              "title": "Option3",
              "description": "Option4"
            },
            {
              "optionInfo": {
                "key": "Option5",
                "synonyms": [
                  "Option6"
                ]
              },
              "title": "Option7",
              "description": "Option8"
            }
          ]
        },
        "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec"
      }
    }
  }
}
I also tried manually creating an intent in Dialogflow which returns a 'hardcoded' carousel (that is, without a fulfillment callback), and this carousel is shown perfectly. So I am sure that the console is correctly configured.
I am also comparing my response with the ones in Google Assistant flow with multiple actions_intent_OPTION handlers, but without success so far.
You haven't shot yourself in the foot - but you have made something that was already a bit complex even more complex. There are two things to look out for:
CORRECTION: Make sure the contents of payload are the same as what used to be in data; other fields have changed format.
You need to make sure the simulator is in Phone mode and not Speaker mode.
Details follow.
Documented Dialogflow Differences
The Actions on Google documentation for the Dialogflow response is a little confusing. Instead of providing the full example, it just says that the response that would be under expectedInputs.possibleIntents should, instead, be under data.google.systemIntent. For V2, that would be payload.google.systemIntent.
The inputPrompt object is also restructured somewhat so you should be sending a richResponse that contains a simpleResponse object.
UPDATE: I've tested this. This is the entire JSON that should be returned. Note that the contents of payload are exactly what the contents of data used to be. The source field is ignored and has nothing to do with the payload, apparently.
See also https://github.com/dialogflow/fulfillment-webhook-json which contains some examples.
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "This is a carousel"
            }
          }
        ]
      },
      "systemIntent": {
        "intent": "actions.intent.OPTION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "carouselSelect": {
            "items": [
              {
                "optionInfo": {
                  "key": "key1",
                  "synonyms": [
                    "Option 1"
                  ]
                },
                "title": "Option 1",
                "description": "Option 2"
              },
              {
                "optionInfo": {
                  "key": "key2",
                  "synonyms": [
                    "Option 2"
                  ]
                },
                "title": "Option 2",
                "description": "Option 2"
              }
            ]
          }
        }
      }
    }
  }
}
Simulator Surface Setting
Make sure that your simulator is set for the "Phone" surface rather than the "Speaker" surface. Options won't display on a Speaker.
[Screenshots: the simulator surface selector set to "Phone", not to "Speaker".]
Related
The IBM Watson Assistant API docs for v2 describe output.user_defined as:
"An object containing any custom properties included in the response. This object includes any arbitrary properties defined in the dialog JSON editor as part of the dialog node output."
But they do not say where in the JSON editor to set it up. Is it under output?
{
  "output": {
    "text": {
      "values": [],
      "selection_policy": "sequential"
    },
    "xxx": "aaa"
  },
  "context": {}
}
I can't set it at the root level in the JSON editor, as the editor complains that only output, output.generic, actions, and context are allowed.
Where should I put it in the JSON editor so that it appears in output.user_defined in the response to the /message REST call?
You can move that into the output section as user_defined. Here is what I tried:
"output": {
"text": {
"values": [],
"selection_policy": "sequential"
},
"user_defined": {
"test": "henrik"
}
}
I then used the V2 API of my test tool to verify. Here is the relevant part of how it was reported:
"output": {
"generic": [
{
"text": "Ok, checking the event information.",
"response_type": "text"
},
{
"text": "ok.",
"response_type": "text"
}
],
"debug": {...
},
"intents": [...
],
"user_defined": {
"test": "henrik"
},
"entities": [
{...
See also this section in the IBM Watson Assistant documentation with some more information on the response JSON.
As @data_henrik stated above, extra JSON elements added via the JSON editor to the output section of the response are moved into the user_defined section of the V2 output response.
These "extra" JSON elements do not have to be labelled user_defined. In my own case I have output.extra elements within my dialog responses. In V1 they remain output.extra, but in V2 they become output.user_defined.extra.
As you are just starting, it would be best to stay consistent and use output.user_defined as your starting point.
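To make that mapping concrete, here is a minimal sketch (the extra field name and value are made up for illustration). A dialog node authored in the JSON editor as
{
  "output": {
    "text": {
      "values": ["Hello"],
      "selection_policy": "sequential"
    },
    "extra": {
      "flag": true
    }
  }
}
comes back from a V1 /message call with output.extra unchanged, but a V2 /message call nests it under user_defined:
"output": {
  "generic": [...],
  "user_defined": {
    "extra": {
      "flag": true
    }
  }
}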
Sometimes Google Assistant does not answer me even though I receive a correct response from the fulfillment. This happens only when I use voice commands; with the keyboard it always works fine.
What I receive instead of the response: it's just 'thinking'.
After using conv.close("You've punched-in into demo as Jack"); I can see the following response in the Dialogflow history:
{
  "queryText": "Jack",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "[{\"type\":0,\"speech\":\"\"}]"
        ]
      }
    }
  ],
  "webhookPayload": {
    "google": {
      "userStorage": "{\"data\":{}}",
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "You've punched-in into demo as Jack"
            }
          }
        ]
      },
      "expectUserResponse": false
    }
  },
  "outputContexts": [
    ...
  ],
  "intent": {
    "id": "96f93154-0ae4-4bb4-91c3-c1b796d7cda3",
    "displayName": "punch-in"
  },
  "intentDetectionConfidence": 1,
  "languageCode": "en"
}
Has anyone experienced such an issue?
Noticed on Galaxy S7, Android 6.0.1.
actions-on-google v.2.2.0
That mostly happens to me when the internet connection is not good. With voice, there is an extra layer of voice-to-text conversion. The same latency might be causing the issue in your case.
The Google Assistant team resolved the issues I reported to them, and after that the issue is not reproduced.
So I have been experimenting with different response types for Dialogflow through Actions: Actions Responses and Webhook/Fulfillment.
So far, I have been able to generate proper responses for types like List, Basic Card, and Suggestion Chips. What I need now is a list-based response that lets the user open a link in a browser when touched and does not generate a chat bubble. The Browsing Carousel fits these criteria: Browsing Carousel.
I have successfully created and simulated the output with 2 sample items. The issue is when the user wants to continue the conversation. As per the Guidance section in the help above, the browsing carousel:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "Ok Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note, I have simulated the flow using the simulator on the Actions Console page.
As soon as I invoke the intent with the Browsing Carousel, it is shown to me with the sample Items. But when I enter/say the next command to continue the conversation, the agent simply returns with:
We're sorry, but something went wrong. Please try again.
And the REQUEST/RESPONSE windows as well as ERRORS/DEBUG are empty. I have logged calls to the webhook and there is no call received.
The question: Is there a way to give the user the ability to browse an informative link from a response "list" (not Basic Card) and return to the conversation without ending it.
Here is the response for the Browsing Carousel from the RESPONSE window in Actions > Simulator (note: I've removed non-relevant parts):
{
  "conversationToken": "[token info]",
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "You have the following 2 options:",
                "displayText": "You have the following 2 options:"
              }
            },
            {
              "carouselBrowse": {
                "items": [
                  {
                    "title": "Test 1",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  },
                  {
                    "title": "Test 2",
                    "description": "Desc 2",
                    "image": {
                      "url": "[some url]"
                    },
                    "openUrlAction": {
                      "url": "[some url]"
                    }
                  }
                ]
              }
            }
          ],
          "suggestions": [
            {
              "title": "Continue"
            },
            {
              "title": "End"
            }
          ]
        }
      }
    }
  ],
  "responseMetadata": {
    "status": {
      "message": "Success (200)"
    },
    "queryMatchInfo": {
      "queryMatched": true
    }
  }
}
For all those who are using the Simulator to test the Browsing Carousel and seeing that it stops responding after the output, please use a device instead to test it. When you use a device to render the output and use the Carousel response, it doesn't kill the conversation but turns off the mic. This is the intended behavior. One can introduce Suggestion chips to assist the user to continue the conversation.
Updated: Also make sure each webhook intent call has a proper response with a Simple Response. As long as the response has the required textual and audio information correctly set up, the Simulator will not fail.
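For reference, a minimal webhook payload along the lines discussed above should render a browsing carousel together with suggestion chips. This is a sketch in the payload.google format used elsewhere in this thread; the titles and the example.com URLs are placeholders:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "You have the following 2 options:"
            }
          },
          {
            "carouselBrowse": {
              "items": [
                {
                  "title": "Test 1",
                  "openUrlAction": {
                    "url": "https://example.com/1"
                  }
                },
                {
                  "title": "Test 2",
                  "openUrlAction": {
                    "url": "https://example.com/2"
                  }
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "Continue"
          },
          {
            "title": "End"
          }
        ]
      }
    }
  }
}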
I decided to upgrade my Google Assistant action to use the Dialogflow V2 API, and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the mediaResponse object, it works just fine and the Assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I would have to delete and re-create the action, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action dialogFlow console, this is a screenshot of that setting
Here is an screenshoot of my action's integration -> Google Assistant
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestions object you recommended but, unfortunately, nothing changed; I am still getting this error. [Screenshot: simulator debug tab details.]
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestions chips unless you have set expectUserResponse to false. Somewhere in the simulator debug is probably a block that says
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
I tried to connect Dialogflow and Actions on Google, so I created some intents, connected the services, added explicit and implicit invocations, etc., but when I try the bot in the simulator https://console.actions.google.com/project/[projectId]/simulator/ it always gives me the error:
"Failed to parse Dialogflow response into AppResponse, exception
thrown with message: Empty speech response"
even though the inputType was "KEYBOARD".
What I tried so far:
I did set "Response from this tab will be sent to the Google Assistant integration" in Dialogflow (do you have to set it for every single intent?), but I don't see any extra setting for speech.
I disabled the second language; at first I also had intents in German.
I also turned off the fulfillment webhook (implemented in API v1 and then also v2) with no change.
I only found this user with the same problem https://productforums.google.com/forum/#!topic/dialogflow/xYjKlz31yW0;context-place=topicsearchin/dialogflow/Empty$20speech$20response but no resolution.
The fulfillment checkbox is checked on the intents.
The bot works fine when I use it through "Try it now" on the very right in Dialogflow or in the Web Demo https://bot.dialogflow.com/994dda8b-4849-4a8a-ab24-c0cd03b5f420
Unfortunately the docs don't say anything about this error. Any ideas?
[Screenshot: the error on the Actions integration.]
This is the full debug output:
{
  "agentToAssistantDebug": {
    "agentToAssistantJson": {
      "message": "Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response",
      "apiResponse": {
        "id": "c12e1389-e887-49d4-b399-a332188ca946",
        "timestamp": "2018-01-27T03:55:30.931Z",
        "lang": "en-us",
        "result": {},
        "status": {
          "code": 200,
          "errorType": "success"
        },
        "sessionId": "1517025330705"
      }
    }
  },
  "assistantToAgentDebug": {
    "assistantToAgentJson": {
      "user": {
        "userId": "USER_ID",
        "locale": "en-US",
        "lastSeen": "2018-01-27T03:55:03Z"
      },
      "conversation": {
        "conversationId": "1517025330705",
        "type": "NEW"
      },
      "inputs": [
        {
          "intent": "actions.intent.MAIN",
          "rawInputs": [
            {
              "inputType": "KEYBOARD",
              "query": "Talk to Mica, the Hipster Cat Bot"
            }
          ]
        }
      ],
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.MEDIA_RESPONSE_AUDIO"
          },
          {
            "name": "actions.capability.WEB_BROWSER"
          },
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          },
          {
            "name": "actions.capability.SCREEN_OUTPUT"
          }
        ]
      },
      "isInSandbox": true,
      "availableSurfaces": [
        {
          "capabilities": [
            {
              "name": "actions.capability.AUDIO_OUTPUT"
            },
            {
              "name": "actions.capability.SCREEN_OUTPUT"
            }
          ]
        }
      ]
    },
    "curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=TOKEN' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: AUTH_TOKEN' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"USER_ID\",\"locale\":\"en-US\",\"lastSeen\":\"2018-01-27T03:55:03Z\"},\"conversation\":{\"conversationId\":\"1517025330705\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to Mica, the Hipster Cat Bot\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]}]}'"
  },
  "sharedDebugInfo": [
    {
      "name": "ResponseValidation",
      "subDebugEntry": [
        {
          "debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\".",
          "name": "UnparseableJsonResponse"
        }
      ]
    }
  ]
}
Also "debugInfo" sounds like an internal problem:
"API Version 2: Failed to parse JSON response string with
'INVALID_ARGUMENT' error: \": Cannot find field.\"."
[Screenshot: the welcome intent.]
PS: It took me AGES to figure out what
"Query pattern is missing for custom intent"
means, so I'll just document it here: in Dialogflow, under Intent > "User says", you have to DOUBLE CLICK on a word in the text input field when you want to set it as a query parameter - which seems to be required for Actions on Google.
This happened to me. If this happens for an Intent you just added in the Dialogflow console and you are using Webhook fulfillment for the action, check the intent's fulfillment settings and ensure that the Webhook fulfillment slider is on. Evidently new intents don't automatically get webhook fulfillment: you have to opt each one in piecemeal (or at least, that was my experience).
I experienced this situation too.
My problem was that I used a SimpleResponse in my fulfillment index.js without importing it. So the solution for me was to import SimpleResponse like this in index.js:
const {dialogflow, SimpleResponse} = require('actions-on-google');
So, always check that you aren't using any dependencies without including them in your JS file.
Probably not the most common cause of the problem, but it can be.
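For context, here is a minimal sketch of a fulfillment that uses the import above with the actions-on-google v2 client. The intent name and strings are placeholders, and the Firebase wrapper is just one common deployment choice:
const {dialogflow, SimpleResponse} = require('actions-on-google');
const functions = require('firebase-functions');

const app = dialogflow();

// Placeholder intent name; use the intent names defined in your agent.
app.intent('Default Welcome Intent', (conv) => {
  // If SimpleResponse were used here without being imported above, the
  // handler would throw a ReferenceError and return no speech, which
  // can surface as the "Empty speech response" error described above.
  conv.ask(new SimpleResponse({
    speech: 'Hello there!',
    text: 'Hello there!',
  }));
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);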
I got this when running through the codelabs tutorial (https://codelabs.developers.google.com/codelabs/actions-1/index.html#4) and didn't give my intent the same name that the webhook script references.
I came across this error when trying to develop my own webhook. I first verified that my code was called by looking at the Nginx log, after which I knew there was a problem in my JSON output, because I had based my output on outdated examples.
The (up-to-date) documentation for both V1 and V2 of the API can be found here:
https://dialogflow.com/docs/fulfillment/how-it-works
This example response for v2 of the Dialogflow webhook API helped me to resolve this error:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "this is a simple response"
            }
          }
        ]
      }
    }
  }
}
Source: https://github.com/dialogflow/fulfillment-webhook-json/blob/master/responses/v2/ActionsOnGoogle/RichResponses/SimpleResponse.json
You can find more examples in the official GitHub repository linked above.
Another possibility is that you have a text response (even an empty one) configured on the intent. [Screenshot: an empty text response on the intent.]
Then you need to click the trash can next to the response to clear it out so the webhook response is used.
The Actions on Google support team helped me fix this problem:
I needed to add a text Default Response to the intent used for explicit invocation.