I created a chatbot which contains only one additional intent (depr_intent) along with the Default Welcome Intent and the Default Fallback Intent. This intent includes only the following training phrase: "What causes a person to get depressed and how can this be treated?". I set the ML CLASSIFICATION THRESHOLD to 0.2.
When I enter "I am a depressed person" in Dialogflow, depr_intent is triggered and the JSON response from Dialogflow is the following:
{
"id": "*****************************",
"timestamp": "2018-04-17T12:41:07.662Z",
"lang": "en",
"result": {
"source": "agent",
"resolvedQuery": "I am a depressed person",
"action": "5",
"actionIncomplete": false,
"parameters": {},
"contexts": [],
"metadata": {
"intentId": "*****************************",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "false",
"webhookResponseTime": 253,
"intentName": "depr_intent"
},
"fulfillment": {
"speech": "...",
"source": "agent",
"displayText": "...",
"messages": [
{
"type": 0,
"speech": "..."
}
]
},
"score": 0.25
},
"status": {
"code": 200,
"errorType": "success",
"webhookTimedOut": false
},
"sessionId": "****************************"
}
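The JSON above is in the Dialogflow (API.AI) V1 format; if it helps, here is a minimal Node.js sketch of how such a query can be reproduced programmatically against the legacy V1 /query endpoint. The client access token, session id and version parameter are placeholders/assumptions.
// Minimal sketch of a Dialogflow (API.AI) V1 /query call that returns a response
// like the one above. CLIENT_ACCESS_TOKEN and sessionId are placeholders.
const https = require('https');

const body = JSON.stringify({
  query: 'I am a depressed person',
  lang: 'en',
  sessionId: 'test-session-123'
});

const req = https.request({
  hostname: 'api.api.ai',
  path: '/v1/query?v=20150910',
  method: 'POST',
  headers: {
    'Authorization': 'Bearer CLIENT_ACCESS_TOKEN',
    'Content-Type': 'application/json'
  }
}, (res) => {
  let data = '';
  res.on('data', (chunk) => (data += chunk));
  // result.score and result.metadata.intentName show which intent matched and how confidently
  res.on('end', () => console.log(JSON.parse(data).result));
});

req.write(body);
req.end();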
Notice in the JSON response above that the score for this question is 0.25. However, when I enter the exact same phrase ("I am a depressed person") on Google Assistant (after entering "Talk to my test app" and triggering the Default Welcome Intent), the Default Fallback Intent is triggered instead of depr_intent, and it responds "Sorry I did not find anything relevant to your question". The DEBUG section on Google Assistant contains the following:
{
"response": "Sorry I did not find anything relevant to your question.",
"expectUserResponse": 1,
"conversationToken": "CiZDIzVhZD...",
"audioResponse": "//NExAARsA...",
"debugInfo": {
"assistantToAgentDebug": {
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=*************************' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: [token]' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"*************************\",\"locale\":\"en-US\",\"lastSeen\":\"2018-04-17T12:46:56Z\"},\"conversation\":{\"conversationId\":\"1523969239924\",\"type\":\"ACTIVE\",\"conversationToken\":\"[]\"},\"inputs\":[{\"intent\":\"actions.intent.TEXT\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"I am a depressed person\"}],\"arguments\":[{\"name\":\"text\",\"rawText\":\"I am a depressed person\",\"textValue\":\"I am a depressed person\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}'",
"assistantToAgentJson": "{\"user\":{\"userId\":\"A****************************\",\"locale\":\"en-US\",\"lastSeen\":\"2018-04-17T12:46:56Z\"},\"conversation\":{\"conversationId\":\"1523969239924\",\"type\":\"ACTIVE\",\"conversationToken\":\"[]\"},\"inputs\":[{\"intent\":\"actions.intent.TEXT\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"I am a depressed person\"}],\"arguments\":[{\"name\":\"text\",\"rawText\":\"I am a depressed person\",\"textValue\":\"I am a depressed person\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.SCREEN_OUTPUT\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"}]}]}"
},
"agentToAssistantDebug": {
"agentToAssistantJson": "{\"conversationToken\":\"[]\",\"expectUserResponse\":true,\"expectedInputs\":[{\"inputPrompt\":{\"richInitialPrompt\":{\"items\":[{\"simpleResponse\":{\"textToSpeech\":\"Sorry I did not find anything relevant to your question.\"}}]}},\"possibleIntents\":[{\"intent\":\"assistant.intent.action.TEXT\"}],\"speechBiasingHints\":[\"$Employee_names_MP\",\"$Employee_names_JT\",\"$Greeting_type\",\"$Objective_whoswho\",\"$Locations\"]}],\"responseMetadata\":{\"status\":{\"message\":\"Success (200)\"},\"queryMatchInfo\":{}}}"
},
"sharedDebugInfoList": []
},
"visualResponse": {
"visualElementsList": [
{
"displayText": {
"content": "Sorry I did not find anything relevant to your question."
}
}
],
"suggestionsList": [],
"agentLogoUrl": "https://www.gstatic.com/voice/opa/partner_icons/generic_3p_avatar.png"
},
"clientError": 0,
"is3pResponse": 1
}
Why is this happening?
Actually, even if I set the ML CLASSIFICATION THRESHOLD to 0.05, the same thing happens on Google Assistant. Moreover, keep in mind that if I enter "what causes a person to get depressed" on Google Assistant, then depr_intent is triggered (and the same applies on Dialogflow). Finally, note that the fact that I am using a webhook for this basic bot does not seem to make a difference, since the same intents are triggered even without the webhook.
**UPDATE**
When I enter "Why am I a person depressed" on Google Assistant, which has a score of 0.2800000011920929 (remember, the ML CLASSIFICATION THRESHOLD is 0.2), depr_intent is triggered. Personally, I am inclined to believe that Google Assistant has a minimum ML CLASSIFICATION THRESHOLD of 0.25. However, I entered two different phrases which each had a score of 0.25, and Google Assistant triggered depr_intent for one and the Default Fallback Intent for the other (while both triggered depr_intent on Dialogflow). Hence, I do not really know what is happening. But if we accept that there is probably a 0.25 minimum ML CLASSIFICATION THRESHOLD on Google Assistant, it may be that the former phrase scored slightly above 0.25 and the latter slightly below it.
Related
I have just started with Google Actions. The first trait we want to support is the temperature reading from my device. It's not a thermostat, so we'll need to use the TemperatureControl trait (read-only sensor, no control).
The issue is that after implementing TemperatureControl, I can't get Google Assistant to read out the temperature. Has anyone encountered a similar issue? I have searched for this topic on TemperatureControl and it seems no similar issue has been reported. Thanks in advance.
The more detailed flow is:
ask "What's the temperature in Bedroom?"
I receive a QUERY intent in my backend server and responded with Ambient Temperature
but, the answer is "Sorry I can't reach the bedroom right now. Please try again"
After that, I tried to add HumiditySetting Trait to verify my SYNC/ QUERY validity.
This is working.
ask "What's the Humidity level in Bedroom?"
I receive a QUERY intent in my backend server and responded with Ambient Humidity. Actually it's the same response as in the case of Temperature.
but, the answer is "the bedroom has a current humidity reading of xx%"
My SYNC Response is validated with https://developers.google.com/assistant/smarthome/tools/validator
Below is a sample:
{
"requestId": "10692316150281033205",
"payload": {
"agentUserId": "1-5671",
"devices": [
{
"id": "3466",
"type": "action.devices.types.CAMERA",
"traits"[
"action.devices.traits.HumiditySetting",
"action.devices.traits.TemperatureControl"
],
"name": {
"defaultNames": [
"bedroom"
],
"name": "bedroom",
"nicknames": [
"bedroom"
]
},
"willReportState": true,
"roomHint": null,
"deviceInfo": null,
"otherDeviceIds": null,
"customData": {
"deviceId": "CQAMNGF"
},
"attributes": {
"temperatureUnitForUX": "C",
"queryOnlyHumiditySetting": true,
"queryOnlyTemperatureControl": true
}
}
],
"errorCode": null
}
}
I found the issue.
I was using thermostatTemperatureSetpoint and thermostatTemperatureAmbient in the QUERY response.
After changing them back to temperatureSetpointCelsius and temperatureAmbientCelsius, it works.
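For reference, a minimal sketch of what the corrected QUERY response payload can look like; the requestId echoes the incoming request, and the device id and readings below are just placeholders based on the example above.
// Sketch of a QUERY response using the correct state names for query-only
// TemperatureControl and HumiditySetting. Device id and readings are placeholders.
const queryResponse = {
  requestId: '10692316150281033205',     // echo the requestId from the QUERY request
  payload: {
    devices: {
      '3466': {
        online: true,
        status: 'SUCCESS',
        temperatureAmbientCelsius: 22.5,   // instead of thermostatTemperatureAmbient
        temperatureSetpointCelsius: 22.5,  // instead of thermostatTemperatureSetpoint
        humidityAmbientPercent: 45         // HumiditySetting ambient reading
      }
    }
  }
};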
I've checked the other questions about this error, and the only solutions offered are to subscribe to messaging_postbacks at Messenger -> Settings -> Webhooks -> Edit events.
You can see here that I have done this since the original setup of my bot, and I have even re-subscribed to it since:
But I am still getting this error:
I am logging any request that comes into my webhook and there is no activity, even though clicking the button shows the payload value in blue as if I had typed and sent it as a message. Then the popup "Action Unsuccessful" shows, and my bot doesn't receive anything.
Here is the response to FB with the button attachment elements:
{
"recipient": {
"id": "xxxxxxxxxxx"
},
"message": {
"attachment": {
"type": "template",
"payload": {
"template_type": "list",
"top_element_style": "compact",
"elements": [{
"title": "transfer",
"subtitle": null,
"image_url": "http://xxxxxxx",
"buttons": [{
"type": "postback",
"title": "transfer",
"payload": "transfer"
}]
}, {
"title": "hourly",
"subtitle": null,
"image_url": "http://xxxxxxxx",
"buttons": [{
"type": "postback",
"title": "hourly",
"payload": "hourly"
}]
}]
}
}
}
}
The Page Access Token just needed to be updated in the webhook, probably to apply the latest permissions, which include messaging_postbacks.
Go back to the developer app dashboard and select Messenger >> Settings.
Scroll down to the "Token Generation" section:
Select your page from the dropdown, and copy the new access token for use in your webhook.
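As a quick sanity check that the refreshed token works, here is a rough Node.js sketch of a Send API call; PAGE_ACCESS_TOKEN and the recipient id are placeholders, and the simple text message can then be swapped for the list template above.
// Rough sketch: verify the refreshed Page Access Token with a plain text message
// via the Send API. PAGE_ACCESS_TOKEN and the recipient id are placeholders.
const https = require('https');

const body = JSON.stringify({
  recipient: { id: 'xxxxxxxxxxx' },
  message: { text: 'token test' }
});

const req = https.request({
  hostname: 'graph.facebook.com',
  path: '/v2.6/me/messages?access_token=PAGE_ACCESS_TOKEN',
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }
}, (res) => {
  let data = '';
  res.on('data', (chunk) => (data += chunk));
  res.on('end', () => console.log(data)); // recipient_id and message_id on success
});

req.write(body);
req.end();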
Found a lot of similar questions and no clear answers. So I hope this saves someone the days of headache it caused me.
I decided to upgrade my Google Assistant action to use the "Dialogflow V2 API", and my webhook returns an object like this:
{
"fulfillmentText": "Testing",
"fulfillmentMessages": [
{
"text": {
"text": [
"fulfillmentMessages text attribute"
]
}
}
],
"payload": {
"google": {
"richResponse": {
"items": [
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "mediaResponse name",
"description": "mediaResponse description",
"largeImage": {
"url": "https://.../640x480.jpg"
},
"contentUrl": "https://.../20183832714.mp3"
}
]
},
"simpleResponse": {
"textToSpeech": "simpleResponse: testing",
"ssml": "simpleResponse: ssml",
"displayText": "simpleResponse displayText"
}
}
]
}
}
},
"source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object, it works just fine and the Assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's integration -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestions object you recommended but unfortunately nothing changed; I am still getting this error.
Simulator debug tab details
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug there is probably a block that says
{
"name": "MalformedResponse",
"debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
"subDebugEntryList": []
}
So something more like this should work:
{
"payload": {
"google": {
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "simpleResponse: testing",
"displayText": "simpleResponse displayText"
}
},
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "mediaResponse name",
"description": "mediaResponse description",
"largeImage": {
"url": "https://.../640x480.jpg"
},
"contentUrl": "https://.../20183832714.mp3"
}
]
}
}
],
"suggestions": [
{
"title": "This"
},
{
"title": "That"
}
]
}
}
},
"source": "webhook-play-sample"
}
I tried to connect Dialogflow and Actions on Google, so I created some intents, connected the services, added explicit and implicit invocations, etc., but when I try the bot in the simulator https://console.actions.google.com/project/[projectId]/simulator/ it always gives me the error:
"Failed to parse Dialogflow response into AppResponse, exception
thrown with message: Empty speech response"
even though inputType was "KEYBOARD".
What I tried so far:
I did set "Response from this tab will be sent to the Google Assistant integration" in Dialog Flow (do you have to set it for every single intent?), but I don't see any extra setting for speech.
I disabled the second language, first I had also intents in German
I also turned off the Fullfillment Webhook (implemented in API v1 and then also v2) with no change
I only found this user with the same problem https://productforums.google.com/forum/#!topic/dialogflow/xYjKlz31yW0;context-place=topicsearchin/dialogflow/Empty$20speech$20response but no resolution.
the fulfillment checkbox is checked for the intents
The bot works fine when I use it through "Try it now" on the very right in Dialogflow, or in the Web Demo https://bot.dialogflow.com/994dda8b-4849-4a8a-ab24-c0cd03b5f420
Unfortunately the docs don't say anything about this error. Any ideas?
Here is a screenshot of the error on the Actions integration:
This is the full debug output:
{
"agentToAssistantDebug": {
"agentToAssistantJson": {
"message": "Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response",
"apiResponse": {
"id": "c12e1389-e887-49d4-b399-a332188ca946",
"timestamp": "2018-01-27T03:55:30.931Z",
"lang": "en-us",
"result": {},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "1517025330705"
}
}
},
"assistantToAgentDebug": {
"assistantToAgentJson": {
"user": {
"userId": "USER_ID",
"locale": "en-US",
"lastSeen": "2018-01-27T03:55:03Z"
},
"conversation": {
"conversationId": "1517025330705",
"type": "NEW"
},
"inputs": [
{
"intent": "actions.intent.MAIN",
"rawInputs": [
{
"inputType": "KEYBOARD",
"query": "Talk to Mica, the Hipster Cat Bot"
}
]
}
],
"surface": {
"capabilities": [
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.WEB_BROWSER"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
}
]
},
"isInSandbox": true,
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
}
]
}
]
},
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=TOKEN' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: AUTH_TOKEN' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"USER_ID\",\"locale\":\"en-US\",\"lastSeen\":\"2018-01-27T03:55:03Z\"},\"conversation\":{\"conversationId\":\"1517025330705\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to Mica, the Hipster Cat Bot\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]}]}'"
},
"sharedDebugInfo": [
{
"name": "ResponseValidation",
"subDebugEntry": [
{
"debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\".",
"name": "UnparseableJsonResponse"
}
]
}
]
}
Also "debugInfo" sounds like an internal problem:
"API Version 2: Failed to parse JSON response string with
'INVALID_ARGUMENT' error: \": Cannot find field.\"."
Here is a screenshot of the welcome intent:
PS:
It took me AGES to figure out what
"Query pattern is missing for custom intent"
means, so I'll just document it here: in Dialogflow -> Intent -> "User says", you have to DOUBLE CLICK on a word in the text input field when you want to set it as a query parameter - which seems to be required for Actions on Google.
This happened to me. If this happens for an Intent you just added in the Dialogflow console and you are using Webhook fulfillment for the action, check the intent's fulfillment settings and ensure that the Webhook fulfillment slider is on. Evidently new intents don't automatically get webhook fulfillment: you have to opt each one in piecemeal (or at least, that was my experience).
I experienced this situation too.
My problem was that I used a SimpleResponse in my fulfillment index.js without importing it. So the solution for me was to add SimpleResponse like this in index.js:
const {dialogflow, SimpleResponse} = require('actions-on-google');
So, always check that you aren't using any dependencies without including them in your js file.
Probably not the most common cause of the problem, but it can be.
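For illustration, a minimal fulfillment sketch that actually uses the imported SimpleResponse; the intent name is made up, and it assumes deployment as a Firebase Cloud Function.
// Minimal sketch: 'my_intent' is a made-up intent name, deployment target assumed
// to be a Cloud Function. Without the SimpleResponse import, the `new SimpleResponse`
// line throws a ReferenceError and the webhook returns no speech.
const functions = require('firebase-functions');
const {dialogflow, SimpleResponse} = require('actions-on-google');

const app = dialogflow();

app.intent('my_intent', (conv) => {
  conv.ask(new SimpleResponse({
    speech: 'This is the spoken answer.',
    text: 'This is the displayed answer.'
  }));
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);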
I got this when running through the codelabs tutorial (https://codelabs.developers.google.com/codelabs/actions-1/index.html#4) and didn't give my intent the same name that is referenced in the webhook script.
I came across this error when trying to develop my own WebHook. I first verified that my code was called by looking into the Nginx log, after which I knew there was a problem in my JSON output because I based my output on outdated examples.
The (up-to-date) documentation for both V1 and V2 of the API can be found here:
https://dialogflow.com/docs/fulfillment/how-it-works
This example response for v2 of the dialogflow webhook API helped me to resolve this error:
{
"payload": {
"google": {
"expectUserResponse": true,
"richResponse": {
"items": [
{
"simpleResponse": {
"textToSpeech": "this is a simple response"
}
}
]
}
}
}
}
Source: https://github.com/dialogflow/fulfillment-webhook-json/blob/master/responses/v2/ActionsOnGoogle/RichResponses/SimpleResponse.json
You can find more examples in the official github repository linked above.
Another possibility is if you have a text response (even an empty one) like so:
Then you need to click the trash can next to the response to clear it out to use the webhook.
The Actions on Google support helped me fix this problem:
I needed to add a text as Default Response to the intent used for Explicit Invocation.
Using Postman, as was suggested (though I don't know why), I have, per the docs, successfully POSTed the configuration to the Facebook APIs:
which is not supposed to be locale specific anyway. I also don't see anything about that here:
Localization: Developers can now provide text in multiple languages (or entirely different menus) for each locale your bot's users may come from.
Like my brother, I have tried almost everything so far
This looks like some crazy bug. Is there some workaround to add even the simplest persistent menu?
I wasted 2 hours on this issue, until I realised you have to delete the conversation, then refresh Facebook ignoring the cache (Ctrl+Shift+R in Chrome), and then it will show.
The FB API documentation states that the API link to hit for applying a persistent menu to the page-specific bot is:
https://graph.facebook.com/v2.6/me/messenger_profile?access_token=<PAGE_ACCESS_TOKEN>
Notice the me after the version number (v2.6 in this specific case). However, this did not work for a lot of people.
There is a small change in the API link to hit:
https://graph.facebook.com/v2.6/<PAGE_ID>/messenger_profile?access_token=<PAGE_ACCESS_TOKEN>
Notice that me is replaced with the FB Page ID.
And the sample payload can still be the same:
{
"get_started": {
"payload": "Get started"
},
"persistent_menu": [
{
"locale": "default",
"composer_input_disabled": false,
"call_to_actions": [
{
"title": "Stop notifications",
"type": "nested",
"call_to_actions": [
{
"title": "For 1 week",
"type": "postback",
"payload": "For_1_week"
},
{
"title": "For 1 month",
"type": "postback",
"payload": "For_1_month"
},
{
"title": "For 1 year",
"type": "postback",
"payload": "For_1_year"
}
]
},
{
"title": "fresh jobs",
"type": "postback",
"payload": "fresh jobs"
},
{
"title": "More",
"type": "nested",
"call_to_actions": [
{
"title": "like us",
"type": "web_url",
"url": "https://www.facebook.com/nordible/"
},
{
"title": "blog",
"type": "web_url",
"url": "http://xameeramir.github.io/"
}
]
}
]
}
]
}
Notice that it is mandatory to configure the get_started button before setting up the persistent_menu.
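And in case you prefer code over Postman, here is a rough Node.js sketch of POSTing a minimal profile (get_started plus a trimmed persistent_menu) to the Page-ID variant of the endpoint; PAGE_ID and PAGE_ACCESS_TOKEN are placeholders, and the full payload above can be sent the same way.
// Rough sketch: POST a minimal messenger_profile to the Page-ID endpoint variant.
// PAGE_ID and PAGE_ACCESS_TOKEN are placeholders.
const https = require('https');

const body = JSON.stringify({
  get_started: { payload: 'Get started' },
  persistent_menu: [
    {
      locale: 'default',
      composer_input_disabled: false,
      call_to_actions: [
        { title: 'fresh jobs', type: 'postback', payload: 'fresh jobs' }
      ]
    }
  ]
});

const req = https.request({
  hostname: 'graph.facebook.com',
  path: '/v2.6/PAGE_ID/messenger_profile?access_token=PAGE_ACCESS_TOKEN',
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }
}, (res) => {
  let data = '';
  res.on('data', (chunk) => (data += chunk));
  res.on('end', () => console.log(data)); // expect {"result":"success"} when applied
});

req.write(body);
req.end();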