Google Home dialogFlow V2 API mediaResponse not working - actions-on-google

I decided to upgrade my Google Assistant action to use the "Dialogflow V2 API", and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object, everything works just fine and the Assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete the action and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's integration -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestions object you recommended but, unfortunately, nothing changed; I am still getting this error.
Simulator debug tab details

First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You don't need the fulfillmentText and fulfillmentMessages components either, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says:
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            },
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
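If you are building the webhook with the actions-on-google Node.js client (v2) rather than writing the JSON by hand, the library produces this same structure for you. A minimal sketch, assuming a Dialogflow intent named play.sample (hypothetical) and placeholder media URLs, since the real ones are truncated in the question:
// Sketch using the actions-on-google v2 client for Dialogflow.
// The intent name and URLs are placeholders, not from the question.
const { dialogflow, SimpleResponse, MediaObject, Image, Suggestions } = require('actions-on-google');

const app = dialogflow();

app.intent('play.sample', (conv) => {
  // The SimpleResponse must come first in the rich response.
  conv.ask(new SimpleResponse({
    speech: 'simpleResponse: testing',
    text: 'simpleResponse displayText',
  }));
  // The media item; `url` maps to contentUrl in the raw JSON.
  conv.ask(new MediaObject({
    name: 'mediaResponse name',
    description: 'mediaResponse description',
    url: 'https://example.com/sample.mp3',
    image: new Image({ url: 'https://example.com/640x480.jpg', alt: 'cover art' }),
  }));
  // Suggestion chips are required with media unless expectUserResponse is false.
  conv.ask(new Suggestions('This', 'That'));
});

module.exports = app;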

Related

Response from dialogflow does not come to Google Assistant

Sometimes Google Assistant does not answer me even though I receive a correct response from the fulfillment. That happens only when I use a voice command; with the keyboard it always works fine.
What I receive instead of the response: it's just 'thinking'.
After using conv.close("You've punched-in into demo as Jack"); in the Dialogflow history I can see the following response:
{
  "queryText": "Jack",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "[{\"type\":0,\"speech\":\"\"}]"
        ]
      }
    }
  ],
  "webhookPayload": {
    "google": {
      "userStorage": "{\"data\":{}}",
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "You've punched-in into demo as Jack"
            }
          }
        ]
      },
      "expectUserResponse": false
    }
  },
  "outputContexts": [
    ...
  ],
  "intent": {
    "id": "96f93154-0ae4-4bb4-91c3-c1b796d7cda3",
    "displayName": "punch-in"
  },
  "intentDetectionConfidence": 1,
  "languageCode": "en"
}
Has anyone experienced such an issue?
Noticed on a Galaxy S7, Android 6.0.1.
actions-on-google v2.2.0
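For reference, here is a minimal fulfillment sketch that produces the webhookPayload above with actions-on-google v2; the intent name punch-in comes from the question, while the Express wiring is an assumption:
// Sketch only: reproduces the conv.close() call from the question.
// The Express setup and port are assumptions, not from the original code.
const { dialogflow } = require('actions-on-google');
const express = require('express');
const bodyParser = require('body-parser');

const app = dialogflow();

app.intent('punch-in', (conv) => {
  // Double quotes avoid the unescaped apostrophe in "You've".
  conv.close("You've punched-in into demo as Jack");
});

express().use(bodyParser.json(), app).listen(8080);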
That mostly happens to me when the internet connection is not good. With voice, there is an extra layer of voice-to-text conversion. The same latency might be causing the issue in your case.
The Google Assistant team resolved the issues I reported to them, and after that the issue is no longer reproducible.

Why is the carousel not showing in the console simulator?

I am trying to figure out how I can embed Google Actions responses, such as the carousel, in a webhook response for Dialogflow.
I am using V2 of the REST protocol, so I am filling in ACTIONS_ON_GOOGLE in the source field, and the payload field contains the Google Actions fields as specified (as per How can I integrate the Google Actions responses in a webhook response in Dialogflow?). I am sending the following response:
{
  "fulfillmentText": "This is a carousel.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "conversationToken": "",
    "expectUserResponse": true,
    "expectedInputs": [
      {
        "inputPrompt": {
          "initialPrompts": [
            {
              "textToSpeech": "This is a carousel."
            }
          ],
          "noInputPrompts": []
        },
        "possibleIntents": [
          {
            "intent": "actions.intent.OPTION",
            "inputValueData": {
              "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
              "carouselSelect": {
                "items": [
                  {
                    "optionInfo": {
                      "key": "key1",
                      "synonyms": [
                        "Option 1"
                      ]
                    },
                    "title": "Option 1",
                    "description": "Option 2"
                  },
                  {
                    "optionInfo": {
                      "key": "key2",
                      "synonyms": [
                        "Option 2"
                      ]
                    },
                    "title": "Option 2",
                    "description": "Option 2"
                  }
                ]
              }
            }
          }
        ]
      }
    ]
  }
}
When trying this out in the console, no carousel is shown; only the text This is a carousel. is displayed. For your information, I did not include the image field, as it is optional according to the specification, but even with images there is no carousel displayed.
It's hard to debug this, as my actions.intent.OPTION intent is not displayed in the possibleIntents[] array in the response tab. I expected this actions.intent.OPTION intent to be merged with the other intents (such as assistant.intent.action.TEXT) generated by the Dialogflow response.
What am I doing wrong here? Am I maybe shooting myself in the foot by using V2 instead of V1 of the Dialogflow REST protocol?
Update after initial feedback by Prisoner
I tried with the following response, but I am still not getting any carousel:
{
  "fulfillmentText": "Here you go.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "expectUserResponse": true,
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "Here are your results."
          }
        }
      ]
    },
    "systemIntent": {
      "intent": "actions.intent.OPTION",
      "data": {
        "carouselSelect": {
          "items": [
            {
              "optionInfo": {
                "key": "Option1",
                "synonyms": [
                  "Option2"
                ]
              },
              "title": "Option3",
              "description": "Option4"
            },
            {
              "optionInfo": {
                "key": "Option5",
                "synonyms": [
                  "Option6"
                ]
              },
              "title": "Option7",
              "description": "Option8"
            }
          ]
        },
        "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec"
      }
    }
  }
}
I also tried to manually create an intent in Dialogflow which returns a 'hardcoded' carousel (that is, without a fulfillment callback), and this carousel is shown perfectly. So I am sure that the console is correctly configured.
I am also comparing my response with the ones in Google Assistant flow with multiple actions_intent_OPTION handlers, but without success so far.
You haven't shot yourself in the foot - but you have made something that was already a bit complex even more complex. There are two things to look out for:
CORRECTION: Make sure the contents of payload are the same as what used to be in data; other fields, however, have changed format.
You need to make sure the simulator is in Phone mode and not Speaker mode.
Details follow.
Documented Dialogflow Differences
The Actions on Google documentation for the Dialogflow response is a little confusing. Instead of providing a full example, it just says that what would be under expectedInputs.possibleIntents should instead be under data.google.systemIntent. For V2, that would be payload.google.systemIntent.
The inputPrompt object is also restructured somewhat, so you should be sending a richResponse that contains a simpleResponse object.
UPDATE: I've tested this, and this is the entire JSON that should be returned. Note that the contents of payload are exactly what the contents of data used to be. The source field is ignored and has nothing to do with the payload, apparently.
See also https://github.com/dialogflow/fulfillment-webhook-json, which contains some examples.
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "This is a carousel"
            }
          }
        ]
      },
      "systemIntent": {
        "intent": "actions.intent.OPTION",
        "data": {
          "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "carouselSelect": {
            "items": [
              {
                "optionInfo": {
                  "key": "key1",
                  "synonyms": [
                    "Option 1"
                  ]
                },
                "title": "Option 1",
                "description": "Option 2"
              },
              {
                "optionInfo": {
                  "key": "key2",
                  "synonyms": [
                    "Option 2"
                  ]
                },
                "title": "Option 2",
                "description": "Option 2"
              }
            ]
          }
        }
      }
    }
  }
}
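For comparison, the actions-on-google Node.js client (v2) builds the systemIntent wrapper for you, so you rarely need to write this JSON by hand. A minimal sketch, assuming a hypothetical Dialogflow intent named show.carousel:
// Sketch: the Carousel helper emits the systemIntent/OptionValueSpec JSON shown above.
const { dialogflow, Carousel } = require('actions-on-google');

const app = dialogflow();

app.intent('show.carousel', (conv) => {
  // A simple response is still required before the carousel.
  conv.ask('This is a carousel');
  conv.ask(new Carousel({
    items: {
      key1: { title: 'Option 1', synonyms: ['Option 1'], description: 'Option 2' },
      key2: { title: 'Option 2', synonyms: ['Option 2'], description: 'Option 2' },
    },
  }));
});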
Simulator Surface Setting
Make sure that your simulator is set for the "Phone" surface rather than the "Speaker" surface. Options won't display on a Speaker.
The setting should look like this:
Not like this:
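If you want to avoid the problem at runtime rather than just in the simulator, you can check the surface capabilities before sending the carousel. A sketch extending the hypothetical intent above (same imports assumed):
// Sketch: fall back to plain speech on surfaces without a screen.
app.intent('show.carousel', (conv) => {
  if (!conv.surface.capabilities.has('actions.capability.SCREEN_OUTPUT')) {
    conv.ask('Your options are Option 1 and Option 2.');
    return;
  }
  conv.ask('This is a carousel');
  conv.ask(new Carousel({
    items: {
      key1: { title: 'Option 1', synonyms: ['Option 1'], description: 'Option 2' },
      key2: { title: 'Option 2', synonyms: ['Option 2'], description: 'Option 2' },
    },
  }));
});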

Google action console simulator request and response tab not working when using custom action.json

I have the problem that the Google Home/Assistant Actions console simulator's response and request tabs are not working, at least when I am using a custom action.json.
I am not sure whether everyone using the custom Actions SDK has this problem or only some, or whether it is a problem only because something in my action.json may not be configured 100% correctly.
Here is the action.json:
{
  "actions": [
    {
      "description": "Default Welcome Intent",
      "name": "MAIN",
      "fulfillment": {
        "conversationName": "testApp"
      },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": {
          "queryPatterns": [
            "open special manager",
            "open s p m"
          ]
        }
      }
    }
  ],
  "types": [],
  "conversations": {
    "testApp": {
      "name": "testApp",
      "url": "https://572e66a2.ngrok.io/",
      "fulfillmentApiVersion": 2,
      "in_dialog_intents": [
        {
          "name": "actions.intent.NO_INPUT"
        },
        {
          "name": "actions.intent.SIGN_IN"
        }
      ]
    }
  }
}
Here is a picture of the request:
As you can see, there is only dummy content in the request tab, even though the chat is working.
The response tab is completely empty, but the messages and voice are working correctly, also on my Google Home.
Does anybody have an idea? Of course I will add more debug information if necessary. Could it be trouble with the response or request messages from my server?
But the messages themselves are actually working...
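For context, this action.json points its fulfillment at a conversation webhook named testApp on the ngrok URL. A minimal sketch of the kind of Actions SDK fulfillment that URL would serve; the greeting text, Express wiring, and port are assumptions:
// Sketch of the conversation webhook the action.json's "url" should serve.
const { actionssdk } = require('actions-on-google');
const express = require('express');
const bodyParser = require('body-parser');

const app = actionssdk();

app.intent('actions.intent.MAIN', (conv) => {
  conv.ask('Welcome to special manager. What would you like to do?');
});

express().use(bodyParser.json(), app).listen(8080);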

Persistent menu not showing in Facebook Messenger chat bot

Using Postman, as was suggested (I don't know why).
Per the docs, I have successfully POSTed the configuration to the Facebook APIs:
...which is not supposed to be locale-specific anyway. I don't even see it here:
Localization: Developers can now provide text in multiple languages (or entirely different menus) for each locale your bot's users may come from.
Like my brother, I have tried almost everything so far.
This looks like some crazy bug. Is there some workaround to add even the simplest persistent menu?
Wasted 2 hours on this issue. Until I realised you have to delete the conversation then refresh facebook with ignore cache (ctrl+shift+r in chrome) and then it will show.
The FB API documentation states that the API link to hit for applying a persistent menu to the page-specific bot is:
https://graph.facebook.com/v2.6/me/messenger_profile?access_token=<PAGE_ACCESS_TOKEN>
Notice the me after the version number (v2.6 in this specific case). However, this did not work for a lot of people.
There is a small change to make in the API link:
https://graph.facebook.com/v2.6/<PAGE_ID>/messenger_profile?access_token=<PAGE_ACCESS_TOKEN>
Notice that me is replaced with the FB Page ID. A sketch of POSTing the payload this way follows after the sample payload below.
And the sample payload can still be the same:
{
  "get_started": {
    "payload": "Get started"
  },
  "persistent_menu": [
    {
      "locale": "default",
      "composer_input_disabled": false,
      "call_to_actions": [
        {
          "title": "Stop notifications",
          "type": "nested",
          "call_to_actions": [
            {
              "title": "For 1 week",
              "type": "postback",
              "payload": "For_1_week"
            },
            {
              "title": "For 1 month",
              "type": "postback",
              "payload": "For_1_month"
            },
            {
              "title": "For 1 year",
              "type": "postback",
              "payload": "For_1_year"
            }
          ]
        },
        {
          "title": "fresh jobs",
          "type": "postback",
          "payload": "fresh jobs"
        },
        {
          "title": "More",
          "type": "nested",
          "call_to_actions": [
            {
              "title": "like us",
              "type": "web_url",
              "url": "https://www.facebook.com/nordible/"
            },
            {
              "title": "blog",
              "type": "web_url",
              "url": "http://xameeramir.github.io/"
            }
          ]
        }
      ]
    }
  ]
}
Notice that it is mandatory to configure the get_started button before setting up the persistent_menu.
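As referenced above, a minimal sketch of POSTing this configuration from Node 18+ (which ships a built-in fetch); PAGE_ID and PAGE_ACCESS_TOKEN are placeholders you must supply:
// Sketch: POST the persistent-menu payload to the Messenger Profile API.
const PAGE_ID = '<PAGE_ID>'; // placeholder
const PAGE_ACCESS_TOKEN = '<PAGE_ACCESS_TOKEN>'; // placeholder

const payload = {
  get_started: { payload: 'Get started' },
  persistent_menu: [ /* the persistent_menu array shown above */ ],
};

fetch(`https://graph.facebook.com/v2.6/${PAGE_ID}/messenger_profile?access_token=${PAGE_ACCESS_TOKEN}`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload),
})
  .then((res) => res.json())
  .then((json) => console.log(json));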

Actions on Google Smart Home Skill: Room Hint / Home Graph

The documentation at https://developers.google.com/actions/smarthome/create-app#actiondevicessync mentions that the roomHint field of the JSON response to the SYNC request can be used to have Google automatically assign devices to the correct rooms.
However, no matter what I return in that field, the user still has to manually assign every device to a room; I cannot get Google to automatically recognize the correct room using this roomHint field.
Here's an example response:
{
  "requestId": "500166151965294748",
  "payload": {
    "devices": [
      {
        "id": "9",
        "type": "action.devices.types.LIGHT",
        "traits": [
          "action.devices.traits.OnOff"
        ],
        "name": {
          "name": "Light"
        },
        "willReportState": false,
        "roomHint": "Attic"
      }
    ]
  }
}
Right now, the value supplied for roomHint is not used by the Home Graph to determine which room the device is in.