Error "Empty speech response" - actions-on-google

I tried to connect Dialogflow and Actions on Google: I created some intents, connected the services, added explicit and implicit invocations, etc., but when I try the bot in the simulator (https://console.actions.google.com/project/[projectId]/simulator/) it always gives me the error:
"Failed to parse Dialogflow response into AppResponse, exception
thrown with message: Empty speech response"
even though inputType was "KEYBOARD".
What I tried so far:
I did set "Response from this tab will be sent to the Google Assistant integration" in Dialog Flow (do you have to set it for every single intent?), but I don't see any extra setting for speech.
I disabled the second language, first I had also intents in German
I also turned off the Fullfillment Webhook (implemented in API v1 and then also v2) with no change
I only found this user with the same problem https://productforums.google.com/forum/#!topic/dialogflow/xYjKlz31yW0;context-place=topicsearchin/dialogflow/Empty$20speech$20response but no resolution.
the fulfillment checkbox is checked at the intents
The bot works fine when I use it through "Try it now" on the very right in Dialog Flow or in the Web Demo https://bot.dialogflow.com/994dda8b-4849-4a8a-ab24-c0cd03b5f420
Unfortunately the docs don't say anything about this error. Any ideas?
Here is a screenshot of the error on the Actions integration:
This is the full debug output:
{
"agentToAssistantDebug": {
"agentToAssistantJson": {
"message": "Failed to parse Dialogflow response into AppResponse, exception thrown with message: Empty speech response",
"apiResponse": {
"id": "c12e1389-e887-49d4-b399-a332188ca946",
"timestamp": "2018-01-27T03:55:30.931Z",
"lang": "en-us",
"result": {},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "1517025330705"
}
}
},
"assistantToAgentDebug": {
"assistantToAgentJson": {
"user": {
"userId": "USER_ID",
"locale": "en-US",
"lastSeen": "2018-01-27T03:55:03Z"
},
"conversation": {
"conversationId": "1517025330705",
"type": "NEW"
},
"inputs": [
{
"intent": "actions.intent.MAIN",
"rawInputs": [
{
"inputType": "KEYBOARD",
"query": "Talk to Mica, the Hipster Cat Bot"
}
]
}
],
"surface": {
"capabilities": [
{
"name": "actions.capability.MEDIA_RESPONSE_AUDIO"
},
{
"name": "actions.capability.WEB_BROWSER"
},
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
}
]
},
"isInSandbox": true,
"availableSurfaces": [
{
"capabilities": [
{
"name": "actions.capability.AUDIO_OUTPUT"
},
{
"name": "actions.capability.SCREEN_OUTPUT"
}
]
}
]
},
"curlCommand": "curl -v 'https://api.api.ai/api/integrations/google?token=TOKEN' -H 'Content-Type: application/json;charset=UTF-8' -H 'Google-Actions-API-Version: 2' -H 'Authorization: AUTH_TOKEN' -A 'Mozilla/5.0 (compatible; Google-Cloud-Functions/2.1; +http://www.google.com/bot.html)' -X POST -d '{\"user\":{\"userId\":\"USER_ID\",\"locale\":\"en-US\",\"lastSeen\":\"2018-01-27T03:55:03Z\"},\"conversation\":{\"conversationId\":\"1517025330705\",\"type\":\"NEW\"},\"inputs\":[{\"intent\":\"actions.intent.MAIN\",\"rawInputs\":[{\"inputType\":\"KEYBOARD\",\"query\":\"Talk to Mica, the Hipster Cat Bot\"}]}],\"surface\":{\"capabilities\":[{\"name\":\"actions.capability.MEDIA_RESPONSE_AUDIO\"},{\"name\":\"actions.capability.WEB_BROWSER\"},{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]},\"isInSandbox\":true,\"availableSurfaces\":[{\"capabilities\":[{\"name\":\"actions.capability.AUDIO_OUTPUT\"},{\"name\":\"actions.capability.SCREEN_OUTPUT\"}]}]}'"
},
"sharedDebugInfo": [
{
"name": "ResponseValidation",
"subDebugEntry": [
{
"debugInfo": "API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: \": Cannot find field.\".",
"name": "UnparseableJsonResponse"
}
]
}
]
}
Also "debugInfo" sounds like an internal problem:
"API Version 2: Failed to parse JSON response string with
'INVALID_ARGUMENT' error: \": Cannot find field.\"."
Here is a screenshot of the welcome intent:
PS: It took me ages to figure out what
"Query pattern is missing for custom intent"
means, so I'll just document it here: in Dialogflow > Intent > "User says" you have to double-click a word in the text input field when you want to set it as a query parameter, which seems to be required for Actions on Google.

This happened to me. If it happens for an intent you just added in the Dialogflow console and you are using webhook fulfillment for the action, check the intent's fulfillment settings and ensure that the webhook fulfillment slider is on. Evidently new intents don't automatically get webhook fulfillment: you have to opt each one in individually (or at least, that was my experience).

I experienced this situation too.
My problem was that I used SimpleResponse in my fulfillment index.js without importing it. The solution for me was to add SimpleResponse to the require in index.js:
const {dialogflow, SimpleResponse} = require('actions-on-google');
So always check that you aren't using any dependency without including it in your JS file.
Probably not the most common cause of the problem, but it can be.
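For illustration, here is a minimal sketch of a fulfillment that actually uses SimpleResponse, assuming the v2 actions-on-google client library on Cloud Functions for Firebase (the intent name 'welcome' and the reply text are just examples):

const functions = require('firebase-functions');
// Forgetting SimpleResponse in this destructuring causes a ReferenceError at runtime,
// so the webhook returns nothing and the simulator reports "Empty speech response".
const { dialogflow, SimpleResponse } = require('actions-on-google');

const app = dialogflow();

// 'welcome' is a placeholder intent name - use the intent names from your agent.
app.intent('welcome', (conv) => {
  conv.ask(new SimpleResponse({
    speech: 'Hello! What can I do for you?',
    text: 'Hello! What can I do for you?',
  }));
});

exports.dialogflowFirebaseFulfillment = functions.https.onRequest(app);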

I got this error when running through the codelabs tutorial (https://codelabs.developers.google.com/codelabs/actions-1/index.html#4) because I hadn't given my intent the same name that the webhook script references.

I came across this error when trying to develop my own webhook. I first verified that my code was being called by looking at the Nginx log, after which I knew the problem was in my JSON output, because I had based it on outdated examples.
The (up-to-date) documentation for both V1 and V2 of the API can be found here:
https://dialogflow.com/docs/fulfillment/how-it-works
This example response for v2 of the Dialogflow webhook API helped me resolve the error:
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "this is a simple response"
            }
          }
        ]
      }
    }
  }
}
Source: https://github.com/dialogflow/fulfillment-webhook-json/blob/master/responses/v2/ActionsOnGoogle/RichResponses/SimpleResponse.json
You can find more examples in the official github repository linked above.
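If you are rolling your own webhook rather than using the client library, a minimal sketch of a server that returns that shape could look like this (the choice of Express and the /webhook path are assumptions for illustration only):

const express = require('express');

const app = express();
app.use(express.json());

// Answer every Dialogflow v2 webhook call with a single simple response.
app.post('/webhook', (req, res) => {
  res.json({
    payload: {
      google: {
        expectUserResponse: true,
        richResponse: {
          items: [
            { simpleResponse: { textToSpeech: 'this is a simple response' } },
          ],
        },
      },
    },
  });
});

app.listen(3000);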

Another possibility is if you have a text response (even an empty one) defined on the intent. Then you need to click the trash can next to the response to clear it out in order to use the webhook.

Actions on Google support helped me fix this problem:
I needed to add a text Default Response to the intent used for explicit invocation.

Related

Watson assistant error "Http response code is [401]"

I created a very simple cloud function in IBM Cloud that I try to call via Watson Assistant. When I call it, the error "Direct CloudFunctions call was not successful. Http response code is [401]" appears.
Below is the code I am using. "prodname" is taken from the user and sent to the function to get an answer. The function works fine when I invoke it directly. I got the ID and password from https://cloud.ibm.com/openwhisk/learn/api-key.
{
"context": {
"credentials": {
"user": "userID",
"password": "password"
}
},
"output": {
"text": {
"values": [
"$answer"
]
}
},
"actions": [
{
"name": "arllambi%2540gmail.com_Only/Watson/MovieBot",
"type": "cloud_function",
"parameters": {
"prodname": "$prodname"
},
"result_variable": "answer",
"credentials": "$credentials"
}
]
}
Is it possible that there is some problem with the credentials?
Thanks in advance for any help.
****** EDIT ******
As suggested by data_henrik, I am providing further info. The function I am calling is the following, a very simple echo function:
function main(msg) {
    return { answer: "You said " + msg.prodname };
}
I changed the cloud_function to a web_action; the web_action works fine via Postman:
{
"output": {
"text": {
"values": [
"$answer"
]
}
},
"actions": [
{
"name": "arllambi#gmail.com_Only/Watson/MovieBot.json",
"type": "web_action",
"parameters": {
"prodname": "<?input.text?>"
},
"result_variable": "context.answer"
}
]
}
The message I get now is "Direct CloudFunctions call was not successful. Http response code is [404]". Also, the assistant answers with {"cloud_functions_call_error":"The requested resource does not exist."}
Adrià
Hi @data_henrik, and thanks for the help. I did see the # and corrected it in the edited code; it was giving the same problem. BUT I figured it out: the assistant was deployed in Washington... I moved it to London and now it works. Thanks again for the help and sorry for my newbie mistake...
My guess is that the org part in your action name is wrong. Try replacing any "#" with "%40"; otherwise it will be interpreted by Watson as something else. Also, after you update the dialog node, wait a few seconds for the changes to take effect.
I just tried a few things with my deployed web actions and was able to provoke both the 401 and the 404.
"name": "arllambi%40gmail.com_Only/Watson/MovieBot.json"

Continue a conversation after Browsing carousel in DialogFlow/Actions

So I have been experimenting with different response types for Dialogflow through Actions: Actions responses and webhook/fulfillment.
So far, I have been able to generate proper responses for types like List, Basic Card, and Suggestion Chips. What I need now is a list-based response that lets the user open a link in a browser when touched and does not generate a chat bubble. The Browsing Carousel fits the criteria.
I have successfully created and simulated the output with 2 sample items. The issue is when the user wants to continue the conversation. As per the Guidance section in the help above, the browsing carousel:
By default, the mic remains closed after a browse carousel is sent. If you want to continue the conversation afterwards, we strongly recommend adding suggestion chips below the carousel.
From this, what I understood is that the user has to invoke the app again by saying "OK Google, talk to [app]". This doesn't seem very user-friendly, as the user expects to return to the conversation she was having with the agent after she has looked through the links from the carousel. Please note that I have simulated the flow using the Actions Simulator on the Actions console.
As soon as I invoke the intent with the Browsing Carousel, it is shown to me with the sample items. But when I enter/say the next command to continue the conversation, the agent simply returns with:
We're sorry, but something went wrong. Please try again.
And the REQUEST/RESPONSE window as well as the ERRORS/DEBUG tabs are empty. I have logged calls to the webhook and no call is received.
The question: is there a way to give the user the ability to browse an informative link from a response "list" (not a Basic Card) and return to the conversation without ending it?
Here is the response for the Browsing Carousel from the RESPONSE window in Actions > Simulator (note: I've removed non-relevant parts):
{
"conversationToken": "[token info]",
"expectUserResponse": true,
"expectedInputs": [
{
"inputPrompt": {
"richInitialPrompt": {
"items": [
{
"simpleResponse": {
"textToSpeech": "You have the following 2 options:",
"displayText": "You have the following 2 options:"
}
},
{
"carouselBrowse": {
"items": [
{
"title": "Test 1",
"description": "Desc 2",
"image": {
"url": "[some url]"
},
"openUrlAction": {
"url": "[some url]"
}
},
{
"title": "Test 2",
"description": "Desc 2",
"image": {
"url": "[some url]"
},
"openUrlAction": {
"url": "[some url]"
}
}
]
}
}
],
"suggestions": [
{
"title": "Continue"
},
{
"title": "End"
}
]
}
}
}
],
"responseMetadata": {
"status": {
"message": "Success (200)"
},
"queryMatchInfo": {
"queryMatched": true
}
}
}
For all those using the Simulator to test the Browsing Carousel and seeing that it stops responding after the output: please use a device instead to test it. When you use a device to render the output with the carousel response, it doesn't kill the conversation but turns off the mic. This is the intended behavior. One can add suggestion chips to help the user continue the conversation.
Update: also make sure each webhook intent call has a proper response with a Simple Response. As long as the response has the required textual and audio information correctly set up, the Simulator will not fail.
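If you are building the carousel in fulfillment rather than in the Dialogflow console, a sketch of a Browse Carousel followed by suggestion chips might look like this, assuming the v2 actions-on-google client library (the intent name and URLs are placeholders, not taken from the question):

const {
  dialogflow,
  BrowseCarousel,
  BrowseCarouselItem,
  Image,
  Suggestions,
} = require('actions-on-google');

const app = dialogflow();

// 'show options' is a placeholder intent name.
app.intent('show options', (conv) => {
  conv.ask('You have the following 2 options:');
  conv.ask(new BrowseCarousel({
    items: [
      new BrowseCarouselItem({
        title: 'Test 1',
        description: 'Desc 1',
        url: 'https://example.com/one',
        image: new Image({ url: 'https://example.com/one.png', alt: 'Test 1' }),
      }),
      new BrowseCarouselItem({
        title: 'Test 2',
        description: 'Desc 2',
        url: 'https://example.com/two',
        image: new Image({ url: 'https://example.com/two.png', alt: 'Test 2' }),
      }),
    ],
  }));
  // The chips keep the conversation going even though the mic closes after the carousel.
  conv.ask(new Suggestions(['Continue', 'End']));
});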

Google Home dialogFlow V2 API mediaResponse not working

I decided to upgrade my Google Assistant action to use the Dialogflow V2 API, and my webhook returns an object like this:
{
"fulfillmentText": "Testing",
"fulfillmentMessages": [
{
"text": {
"text": [
"fulfillmentMessages text attribute"
]
}
}
],
"payload": {
"google": {
"richResponse": {
"items": [
{
"mediaResponse": {
"mediaType": "AUDIO",
"mediaObjects": [
{
"name": "mediaResponse name",
"description": "mediaResponse description",
"largeImage": {
"url": "https://.../640x480.jpg"
},
"contentUrl": "https://.../20183832714.mp3"
}
]
},
"simpleResponse": {
"textToSpeech": "simpleResponse: testing",
"ssml": "simpleResponse: ssml",
"displayText": "simpleResponse displayText"
}
}
]
}
}
},
"source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object, it works just fine and the assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action's Dialogflow console; this is a screenshot of that setting.
Here is a screenshot of my action's integration -> Google Assistant.
Thanks Allen. Yes, I do have "expectUserResponse": false. I added the suggestions object you recommended, but unfortunately nothing changed; I am still getting this error.
Simulator debug tab details
First of all - this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2 - they are two completely different creatures. If your project were using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says:
{
"name": "MalformedResponse",
"debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
"subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
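For reference, the equivalent response built with the v2 actions-on-google client library might look like the sketch below (the intent name and URLs are placeholders, not the asker's real assets):

const { dialogflow, MediaObject, Image, Suggestions } = require('actions-on-google');

const app = dialogflow();

// 'play audio' is a placeholder intent name.
app.intent('play audio', (conv) => {
  // A simple response has to come before the media item in the rich response.
  conv.ask('simpleResponse: testing');
  conv.ask(new MediaObject({
    name: 'mediaResponse name',
    description: 'mediaResponse description',
    url: 'https://example.com/audio.mp3',
    image: new Image({ url: 'https://example.com/640x480.jpg', alt: 'cover art' }),
  }));
  // Suggestion chips are required when the media response expects a user reply.
  conv.ask(new Suggestions(['This', 'That']));
});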

Simulator keeps replying that endpoint could not be called, while I see a POST request in the logs

I am currently writing an integration with Actions on Google using PHP. I have generated an action.json file with my test endpoint as the fulfillment. I use ngrok to expose my local development machine publicly.
Unfortunately, the simulator keeps insisting that the app isn't responding. In the access logs and the ngrok inspector I do see that a request came in, and that it was neatly answered with a JSON reply.
In an act of pure desperation I even uploaded a JSON response, taken directly from the Fulfillment documentation page, to a server and set that as the fulfillment URL. The result is the same error.
I do not see a way to get a more detailed error message from Actions on Google explaining why it does not work.
My action.json:
{
"actions": [
{
"name": "MAIN",
"intent": {
"name": "actions.intent.MAIN"
},
"fulfillment": {
"conversationName": "development4"
}
},
{
"name": "TEXT",
"intent": {
"name": "actions.intent.TEXT"
},
"fulfillment": {
"conversationName": "development4"
}
}
],
"conversations": {
"development4": {
"name": "development4",
"url": "https:\/\/02c085c0.ngrok.io\/actionsongoogle\/process\/development4"
}
}
}
The JSON I respond with:
{
"expectUserResponse": false,
"expectedInputs": [{
"inputPrompt": {
"richInitialPrompt": {
"items": [{
"simpleResponse": {
"textToSpeech": "hello"
}
}]
}
},
"possibleIntents": [{
"intent": ["actions.intent.TEXT"]
}]
}]
}
The result output in the Simulator then displays:
{
"response": "my test app isn’t responding right now. Try again soon.\n",
"audioResponse": "//NExAARq...",
"debugInfo": {
"sharedDebugInfo": [
{
"name": "ExecutionResponse",
"debugInfo": "Failed to call your endpoint."
}
]
},
"visualResponse": {}
}
Two things to investigate:
Check how long it takes for the reply to be sent. Agents need to send a response back within about 5 seconds, or the Assistant will time out and give the "not responding" message.
Check what reply code is being issued. If you think you're sending something back, but your server actually crashes before an HTTP 200 response code is sent back with the body, the Assistant never gets the response. I've also seen cases where people think they're sending a response, but no response is actually sent at all.
Testing with curl or wget against your ngrok URL (so it makes a full round trip) might help diagnose both of these.
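For example, a quick round-trip check of the status code and latency could look like the sketch below, here using Node 18+'s global fetch instead of curl (the request body is a stripped-down, illustrative stand-in, not the exact payload the Assistant sends):

// Hypothetical smoke test against the ngrok URL from action.json above.
const url = 'https://02c085c0.ngrok.io/actionsongoogle/process/development4';

const body = {
  conversation: { conversationId: 'test-123', type: 'NEW' },
  inputs: [{
    intent: 'actions.intent.MAIN',
    rawInputs: [{ inputType: 'KEYBOARD', query: 'talk to development4' }],
  }],
};

const started = Date.now();
fetch(url, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
})
  .then(async (res) => {
    // The Assistant expects an HTTP 200 with a body within roughly 5 seconds.
    console.log('status:', res.status, '- took', Date.now() - started, 'ms');
    console.log(await res.text());
  })
  .catch(console.error);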
The issue was in the returned JSON. I set up the Node.js SDK to reply in the same way the PHP code will (a simple hello). The response generated by the Node.js SDK is very different. Apparently the example that I used from the docs earlier is also invalid.
I advise anyone building their own implementation using webhooks to first generate the desired response via the Node.js SDK; the reference in the manual is not always clear (at least for me).
Response generated by the Node.js SDK:
{
"expectUserResponse": false,
"finalResponse": {
"speechResponse": {
"textToSpeech": "hello"
}
}
}
Implementation in SDK:
var express = require('express');
var bodyParser = require('body-parser');
// ActionsSdkApp comes from the actions-on-google client library.
var ActionsSdkApp = require('actions-on-google').ActionsSdkApp;

var app = express();

// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }));
// parse application/json
app.use(bodyParser.json());

app.use(function (req, res) {
  const actionsApp = new ActionsSdkApp({ request: req, response: res });
  // const inputPrompt = actionsApp.buildInputPrompt(false, 'hello');
  // tell() ends the conversation, producing the finalResponse shown above.
  actionsApp.tell('hello');
});

app.listen(80);

Wit.ai node package buggy

I was working with Wit.ai today, using the node-wit module, but the responses I was getting were very weird.
When I used the node-wit module, I got this response:
{
"msg_id": "0f4rOWRXQMIhVuf5i",
"_text": "what is your name",
"entities": {
"intent": [{
"confidence": 0.9425254893432,
"value": "get_name"
}]
}
}
Whereas when I used the cURL command to get the response, the response was very different:
{
"msg_id": "0KJdIPedYbYwWOgOL",
"_text": "what do you do",
"entities": {
"intent": [{
"confidence": 0.97713342030998,
"value": "get_job"
}]
}
}
Can anyone tell me why this is happening, or whether I am implementing the function wrong?

Are you using the same service? There are two services: one for conversations/stories and another for messages.
If that is not the reason, try checking the context and take a look at the log section inside the Wit.ai web app.
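For reference, a minimal sketch of calling the /message endpoint with node-wit looks roughly like this (the access token is a placeholder); if your cURL test hits a different endpoint, for example /converse for stories, the two results can legitimately differ:

const { Wit } = require('node-wit');

// Placeholder token - use the server access token from your app's settings page on wit.ai.
const client = new Wit({ accessToken: 'YOUR_WIT_ACCESS_TOKEN' });

client.message('what is your name', {})
  .then((data) => {
    // Should print the same shape as the cURL response: msg_id, _text, entities.
    console.log(JSON.stringify(data, null, 2));
  })
  .catch(console.error);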