IBM Watson. How to pass context from node to node? - rest

I"m trying to string together multiple IBM Watson requests:
Request #1: Play music.
Watson responds with the following:
{
  "intents": [
    {
      "intent": "turn_on",
      "confidence": 0.9498470783233643
    }
  ],
  "entities": [
    {
      "entity": "appliance",
      "location": [5, 10],
      "value": "radio",
      "confidence": 1
    }
  ],
  "input": {
    "text": "play music"
  },
  "output": {
    "text": [
      "What kind of music would you like to hear?"
    ],
    "nodes_visited": [
      "node_1_1510258504338",
      "node_2_1510258615227"
    ],
    "log_messages": []
  },
  "context": {
    "conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
    "system": {
      "dialog_stack": [
        {
          "dialog_node": "node_2_1510258615227"
        }
      ],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1,
      "_node_output_map": {
        "node_2_1510258615227": [0]
      }
    }
  }
}
Request #2: The patron would type rock.
My problem is that the response to the second request contains a warning in its "log_messages":
No dialog node matched for the input at a root level. (and there is 1 more warning in the log)
I'm pretty sure I have to pass a context into the second request, but I'm not sure what I need to include. Right now I'm only passing in the conversation_id. Is there something specific from the above response that I need to pass in? For example, I'm passing this:
{
  "input": {
    "text": "rock"
  },
  "context": {
    "conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4"
  }
}

You send back your whole context object. In this case it would be:
{
  "input": {
    "text": "rock"
  },
  "context": {
    "conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
    "system": {
      "dialog_stack": [
        {
          "dialog_node": "node_2_1510258615227"
        }
      ],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1,
      "_node_output_map": {
        "node_2_1510258615227": [0]
      }
    }
  }
}
But there are SDKs that will make this easier for you.
https://github.com/watson-developer-cloud
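For example, here is a minimal sketch with the Node.js watson-developer-cloud SDK (Conversation V1); the credentials, workspace ID, and version date are placeholders, and the point is simply that the whole context from each response is sent back on the next call:

// Minimal sketch (assumed credentials/workspace_id) showing the context being
// carried from one turn to the next with the watson-developer-cloud Node.js SDK.
const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: 'YOUR_USERNAME',           // placeholder
  password: 'YOUR_PASSWORD',           // placeholder
  version_date: '2017-05-26'
});

let context = {};                      // whole context object from the last response

function sendMessage(text, callback) {
  conversation.message({
    workspace_id: 'YOUR_WORKSPACE_ID', // placeholder
    input: { text: text },
    context: context                   // echo back everything, not just conversation_id
  }, (err, response) => {
    if (err) { return callback(err); }
    context = response.context;        // save the full context for the next turn
    callback(null, response.output.text);
  });
}

// sendMessage('play music', ...) followed by sendMessage('rock', ...) stays in the dialog branch.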

Is the node that handles the type of music people select a child of your 'turn_on' node [node_2_1510258615227]?
If so, as Simon demonstrates above, you also need to pass back the complete context object as part of the API call. This informs Watson Conversation where in the dialog flow you were last. Because the Conversation service is stateless, i.e. it does not store any state information about individual conversations, it will not know by default where it is within a conversation. That is why you need to return the context element of the previous response, so that Watson knows where you were in the conversation flow.
Your error above states that Watson looked down the list of dialog nodes you have defined at the root level and could not find a matching condition, because your matching condition was within a child node.
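To make that concrete over plain REST (the gateway host and version parameter below are assumptions based on the Conversation V1 API of that era), the second call simply echoes the entire context object from the first response, including system.dialog_stack:

// Plain REST sketch: POST the new input together with the complete context
// copied from the previous response. Host, path and credentials are placeholders.
const https = require('https');

const previousContext = {
  conversation_id: '79e93cac-12bb-40fa-ab69-88f56d0845e4',
  system: {
    dialog_stack: [{ dialog_node: 'node_2_1510258615227' }],
    dialog_turn_counter: 1,
    dialog_request_counter: 1,
    _node_output_map: { node_2_1510258615227: [0] }
  }
};

const body = JSON.stringify({ input: { text: 'rock' }, context: previousContext });

const req = https.request({
  method: 'POST',
  hostname: 'gateway.watsonplatform.net',                                    // assumed host
  path: '/conversation/api/v1/workspaces/YOUR_WORKSPACE_ID/message?version=2017-05-26',
  auth: 'YOUR_USERNAME:YOUR_PASSWORD',                                       // placeholders
  headers: { 'Content-Type': 'application/json' }
}, res => {
  let data = '';
  res.on('data', chunk => (data += chunk));
  res.on('end', () => console.log(JSON.parse(data).output.text));
});

req.write(body);
req.end();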

Related

Handle multiple answers from IBM Cloud Function in Watson Assistant

I need to show an unknown number of buttons in a Watson Assistant dialog node. The data for the buttons comes from an IBM Cloud Function.
If I manually set up a response_type "option" answer in my node, the JSON object looks like this:
{
  "output": {
    "generic": [
      {
        "title": "Välj mötestyp",
        "options": [
          {
            "label": "Rådgivning familjerätt 30 min",
            "value": {
              "input": {
                "text": "447472"
              }
            }
          },
          {
            "label": "Rådgivning familjerätt 60 min",
            "value": {
              "input": {
                "text": "448032"
              }
            }
          }
        ],
        "description": "Välj typ av möte du vill boka",
        "response_type": "option",
        "preference": "dropdown"
      }
    ]
  }
}
My Cloud Function can create this JSON with any number of options, but how can I use this data in Assistant?
The easiest approach would be to let the Cloud Function generate the complete JSON and then just output the returned JSON like this:
{
  "$context.output"
}
...but that's not allowed.
Generated output object from my function:
[{"serviceId":447472,"serviceName":"Rådgivning Familjerätt 30 min"},{"serviceId":448032,"serviceName":"Rådgivning Familjerätt 60 min"}]
Any advice on how to do this?
I don't see a simple way of generating the entire output and options. What you could do is this:
1. Generate the option labels and values.
2. Pass them into a generic output node that has predefined structures for 1, 2, 3, etc. options.
3. Check, based on the array size of your context variable, which predefined response structure to fill.
I tested the following:
{
  "context": {
    "my": [
      {
        "label": "First option",
        "value": "one"
      },
      {
        "label": "Second",
        "value": "two"
      }
    ]
  },
  "output": {
    "generic": [
      {
        "title": "This is a test",
        "options": [
          {
            "label": "<? $my[0].label ?>",
            "value": {
              "input": {
                "text": "<? $my[0].value ?>"
              }
            }
          },
          {
            "label": "<? $my[1].label ?>",
            "value": {
              "input": {
                "text": "<? $my[1].value ?>"
              }
            }
          }
        ],
        "response_type": "option"
      }
    ]
  }
}
It defines a context variable with the options, analogous to the options structure. The output then accesses the labels and values through expressions, which proves that they are used and can be modified.
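As a rough sketch of the Cloud Function side (the field names and the idea that its result is stored in a context variable called my are assumptions, not the asker's actual code), the action could return the services already mapped into the label/value pairs the dialog reads:

// Hypothetical IBM Cloud Functions (Node.js) action: maps the booking services
// into the {label, value} pairs the dialog node above reads from $my.
function main(params) {
  const services = [
    { serviceId: 447472, serviceName: 'Rådgivning Familjerätt 30 min' },
    { serviceId: 448032, serviceName: 'Rådgivning Familjerätt 60 min' }
  ];
  return {
    my: services.map(s => ({
      label: s.serviceName,
      value: String(s.serviceId)
    }))
  };
}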

Google Home dialogFlow V2 API mediaResponse not working

I decided to upgrade my Google Assistant action to use the Dialogflow V2 API, and my webhook returns an object like this:
{
  "fulfillmentText": "Testing",
  "fulfillmentMessages": [
    {
      "text": {
        "text": [
          "fulfillmentMessages text attribute"
        ]
      }
    }
  ],
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            },
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "ssml": "simpleResponse: ssml",
              "displayText": "simpleResponse displayText"
            }
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
But I get an error message saying my action is not available. Is mediaResponse supported by V2? Should I format my object differently? Also, when I remove the "mediaResponse" object, it works just fine and the Assistant speaks the simpleResponse part.
This action was re-created in mid-March 2018, and I read about the May deadline, which is why I decided to upgrade to V2. Do you think I should go back to V1? I know I will have to delete it and re-create it, but that is fine. This is a link to the JSON object I see in the debug tab. Thanks once again.
I set "API V2" in my action dialogFlow console, this is a screenshot of that setting
Here is an screenshoot of my action's integration -> Google Assistant
Thanks Allen, Yes I do have "expectUserResponse": false, I added the suggestion object you recommended but, unfortunately nothing changed, I am still getting this error
Simulator debug tag details
First of all, this is not a problem with Dialogflow V2. You also seem to be confusing the sunset of Actions on Google V1 with the release of Dialogflow V2; they are two different creatures completely. If your project was using AoG V1, there would be a setting on the Actions integration screen, and there isn't.
It is fine if you want to move to Dialogflow V2, but it isn't required. Media definitely works under Dialogflow V2.
The array of items must include a simpleResponse item first, before any of the other items in the RichResponse. (You also shouldn't include both ssml and textToSpeech - just one of them.) You also don't need the fulfillmentText and fulfillmentMessages components, since those are provided by the richResponse.
You also need to include suggestion chips unless you have set expectUserResponse to false. Somewhere in the simulator debug output there is probably a block that says:
{
  "name": "MalformedResponse",
  "debugInfo": "expected_inputs[0].input_prompt.rich_initial_prompt: Suggestions must be provided if media_response is used..",
  "subDebugEntryList": []
}
So something more like this should work:
{
  "payload": {
    "google": {
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "simpleResponse: testing",
              "displayText": "simpleResponse displayText"
            }
          },
          {
            "mediaResponse": {
              "mediaType": "AUDIO",
              "mediaObjects": [
                {
                  "name": "mediaResponse name",
                  "description": "mediaResponse description",
                  "largeImage": {
                    "url": "https://.../640x480.jpg"
                  },
                  "contentUrl": "https://.../20183832714.mp3"
                }
              ]
            }
          }
        ],
        "suggestions": [
          {
            "title": "This"
          },
          {
            "title": "That"
          }
        ]
      }
    }
  },
  "source": "webhook-play-sample"
}
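As a sketch of how a webhook might assemble that payload (an assumed minimal Express setup with placeholder URLs, not the asker's actual code):

// Minimal Express webhook sketch for Dialogflow V2: simpleResponse first,
// then mediaResponse, plus the required suggestion chips.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  res.json({
    payload: {
      google: {
        expectUserResponse: true,
        richResponse: {
          items: [
            { simpleResponse: { textToSpeech: 'simpleResponse: testing' } },
            {
              mediaResponse: {
                mediaType: 'AUDIO',
                mediaObjects: [{
                  name: 'mediaResponse name',
                  description: 'mediaResponse description',
                  largeImage: { url: 'https://example.com/640x480.jpg' }, // placeholder URL
                  contentUrl: 'https://example.com/audio.mp3'             // placeholder URL
                }]
              }
            }
          ],
          suggestions: [{ title: 'This' }, { title: 'That' }]
        }
      }
    }
  });
});

app.listen(3000);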

How to identify health check request?

My app is getting about 10 health check requests per hour, and that makes my conversation log messy.
Because the health check does not have screen capability, our backend server responds to the request as if a Google Home were requesting.
Is there any way to detect if the request is health check request or not?
For starters, you should be responding as if it was a Google Home. You have to respond with valid output, or it will reject you. So don't try to be too fancy in your response - just use this to avoid cluttering your analytics and logs.
The health check will look like a normal welcome request. The ping will contain an argument named is_health_check with a boolValue of true and a textValue of 1. If you're using Dialogflow, this will be one of the arguments at originalRequest.data.inputs[0]. For the Actions SDK, it will be at data.inputs[0].
Here is a partial sample from Dialogflow:
{
  "originalRequest": {
    "source": "google",
    "version": "2",
    "data": {
      "surface": {
        "capabilities": [
          {
            "name": "actions.capability.AUDIO_OUTPUT"
          }
        ]
      },
      "inputs": [
        {
          "rawInputs": [
            {
              "query": "Sample",
              "inputType": "VOICE"
            }
          ],
          "arguments": [
            {
              "textValue": "1",
              "name": "is_health_check",
              "boolValue": true
            }
          ],
          "intent": "actions.intent.MAIN"
        }
      ],
      ...
}
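A small sketch of how a webhook could use that flag (assuming the request body has the originalRequest.data.inputs[0].arguments shape shown above):

// Detect the health-check ping in a Dialogflow webhook request body.
const isHealthCheck = (body) => {
  const inputs = (body.originalRequest && body.originalRequest.data && body.originalRequest.data.inputs) || [];
  const args = (inputs[0] && inputs[0].arguments) || [];
  return args.some(a => a.name === 'is_health_check' && a.boolValue === true);
};

// Example: respond normally either way, but skip logging/analytics for health checks.
// if (isHealthCheck(req.body)) { /* don't write this turn to the conversation log */ }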

Google Actions SDK not passing arguments

I have been unable to get google-actions sdk for node.js to pass arguments. I installed the https://github.com/actions-on-google/actionssdk-eliza-nodejs sample project and noticed arguments are not working for that project either. Any insight?
In the web simulator I entered "i am feeling sad"
Here is the request I get
{
  "query": "i am feeling sad",
  "accessToken": "**masked**",
  "expectUserResponse": true,
  "conversationToken": "CiZDIzU4O...",
  "debugInfo": {
    "assistantToAgentDebug": {
      "assistantToAgentJson": {
        "user": {
          "user_id": "**masked**"
        },
        "conversation": {
          "conversation_id": "1484523995718",
          "type": 2,
          "conversation_token": "{\"elizaInstance\":{\"noRandom\":false,\"capitalizeFirstLetter\":true,\"debug\":false,\"memSize\":20,\"version\":\"1.1 (original)\",\"quit\":false,\"mem\":[],\"lastchoice\":[[-1],[-1],[-1],[-1],[-1,-1,-1],[-1,-1],[-1],[-1],[-1],[-1,-1,-1],[-1,-1,-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1,0,-1],[-1,-1,-1],[-1],[-1,0,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1],[-1,-1,-1,-1],[-1],[-1,-1],[-1,-1],[-1],[-1],[-1],[-1],[-1],[-1],[-1,-1,-1],[-1]],\"sentence\":\"i am feeling sad\"}}"
        },
        "inputs": [
          {
            "intent": "assistant.intent.action.TEXT",
            "raw_inputs": [
              {
                "input_type": 2,
                "query": "i am feeling sad"
              }
            ],
            "arguments": [
              {
                "name": "text",
                "raw_text": "i am feeling sad",
                "text_value": "i am feeling sad"
              }
            ]
          }
        ]
      }
    }
  }
}
text_value should be "sad", not "i am feeling sad", based on eliza.json, which has this:
{
  "versionLabel": "Eliza v1",
  "agentInfo": {
    "languageCode": "en-US",
    "projectId": "**masked**",
    "invocationNames": [
      "eliza"
    ],
    "voiceName": "female_1"
  },
  "actions": [
    {
      "description": "Start an Eliza consultation",
      "initialTrigger": {
        "intent": "assistant.intent.action.MAIN"
      },
      "httpExecution": {
        "url": "https://**masked**"
      }
    },
    {
      "description": "Deep link to Eliza consultation",
      "initialTrigger": {
        "intent": "raw.input",
        "queryPatterns": [
          {
            "queryPattern": "my emotional state is $SchemaOrg_Text:text"
          },
          {
            "queryPattern": "I am concerned about $SchemaOrg_Text:text"
          },
          {
            "queryPattern": "I am feeling $SchemaOrg_Text:text"
          },
          {
            "queryPattern": "I need to talk about my feelings"
          }
        ]
      },
      "httpExecution": {
        "url": "**masked**"
      }
    }
  ],
  "deploymentStatus": {
    "state": "NEW"
  },
  "versionId": "1"
}
The Google Actions SDK will parse patterns and correctly bind arguments for intents that activate your action. For example, if you say:
At my action I am feeling sad
you will receive arguments that look like this:
"arguments": [
{
"name": "text",
"raw_text": "sad",
"text_value": "sad"
},
{
"name": "trigger_query",
"raw_text": "i am feeling sad",
"text_value": "i am feeling sad"
}
]
However, if you request input from the action by setting expect_user_response to true, the arguments are always returned in raw form using the intent "assistant.intent.action.TEXT". It is up to you to parse those raw arguments as indicated in the other answer to this question.
Also note that as far as I can tell, the Schema_Org types are not returned per the documentation. More information here.
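As a sketch of that do-it-yourself parsing (the helper name and regular expression are hypothetical, assuming the webhook receives inputs[0].arguments in the shape shown in assistantToAgentJson above):

// With expectUserResponse=true, follow-up turns arrive as raw text under
// assistant.intent.action.TEXT, so the pattern binding in eliza.json does not
// apply and the webhook has to extract the value itself.
const extractFeeling = (body) => {
  const args = (body.inputs && body.inputs[0] && body.inputs[0].arguments) || [];
  const textArg = args.find(a => a.name === 'text');
  const raw = textArg ? textArg.text_value : '';
  const match = raw.match(/i am feeling (.+)/i); // mirrors the queryPattern above
  return match ? match[1] : raw;
};

// extractFeeling({ inputs: [{ arguments: [{ name: 'text', text_value: 'i am feeling sad' }] }] })
// returns 'sad'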
If you are using the Actions SDK, then you have to use your own NLP for understanding the raw user query and for extracting arguments. If you don't have your own NLP, then we recommend that you use API.AI.

ACRCloud external meta data and IDs not returning

When making valid requests to http://ap-southeast-1.api.acrcloud.com/v1/identify I get successful responses; however, both external_ids and external_metadata always come back as empty objects.
Example response:
{
  "external_ids": {},
  "play_offset_ms": 97480,
  "external_metadata": {},
  "label": "Universal Music Ltd.",
  "release_date": "2012-01-01",
  "album": {
    "name": "The Love Club EP"
  },
  "title": "Royals",
  "duration_ms": "190185",
  "genres": [
    {
      "name": "Pop"
    }
  ],
  "acrid": "b748d828aba29c699f732bd660123bae",
  "result_from": 3,
  "artists": [
    {
      "name": "Lorde"
    }
  ]
}
Does anyone know why my identifications don't contain this data?
Please select the 3rd-party ID integration when creating the project.