Multi-Part answers for a node in IBM Watson Conversation service - ibm-cloud

I would like to be able to split a response into multiple messages that are sent in a row, e.g. "Hi, I am a chatbot", "What can I do for you?" as two separate messages (that a client would render individually).
Is there a way to do this without separate child-nodes?
e.g.
{
  "output": {
    "text": {
      "append": true,
      "values": [
        "Hi, I am a chatbot",
        "What can I do for you"
      ],
      "selection_policy": "sequential"
    }
  }
}

You can simply use
{
  "output": {
    "text": [
      "Hi, I am a chatbot",
      "What can I do for you"
    ],
    "selection_policy": "sequential"
  }
}
In other words, remove the values wrapper from the JSON and make text the array itself.
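With that, output.text in the /message response is an array, and a client can render each entry as its own chat message. Here is a minimal client-side sketch, assuming the Python ibm-watson SDK (V1 workspace API) with placeholder credentials and workspace ID:

# Minimal sketch: call the Watson Conversation/Assistant V1 /message API and
# render each entry of output.text as a separate chat message.
# The API key, service URL, and workspace ID are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_APIKEY")
assistant = AssistantV1(version="2019-02-28", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

response = assistant.message(
    workspace_id="YOUR_WORKSPACE_ID",
    input={"text": "Hello"},
).get_result()

# output.text is a list; render every entry as its own message bubble.
for message in response["output"]["text"]:
    print(message)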

If by "chained response" you mean several messages that are printed below each other, you can simply use \n to insert a line break:
{
  "output": {
    "text": {
      "append": true,
      "values": [
        "Hi, I am a chatbot.\nWhat can I do for you?"
      ],
      "selection_policy": "sequential"
    }
  }
}
This would render to:
Hi, I am a chatbot.
What can I do for you?
I have a couple of such responses in this workspace. It is part of this tutorial for building a database-backed Slackbot. In that example, several of those output lines are computed dynamically and would, e.g., produce a formatted ASCII table.

Related

Amazon Lex & Dialogflow suggestion text as quick buttons for conversation

Just like Suggestion (quick button) in Actions on Google, is there any class/function we can use in Amazon Lex?
Actions on Google: agent.add(new Suggestion('Your text here')); // to provide a quick suggestion to the user.
Like Google, Amazon Lex also has quick suggestions, i.e. response cards. Below is a sample JSON object from AWS Lex that will generate response cards (quick suggestions) for a slot.
{
  "sampleUtterances": [],
  "slotType": "",
  "slotTypeVersion": "4",
  "slotConstraint": "Optional",
  "valueElicitationPrompt": {
    "messages": [
      {
        "contentType": "PlainText",
        "content": "what do you want to schedule"
      }
    ],
    "responseCard": "{\"version\":1,\"contentType\":\"application/vnd.amazonaws.card.generic\",\"genericAttachments\":[{\"buttons\":[{\"text\":\"sales\",\"value\":\"sales consultation\"}]}]}",
    "maxAttempts": 2
  },
  "priority": 11,
  "name": "appointment"
}
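If the response card should be built dynamically instead of being configured on the slot, it can also be returned from a Lambda fulfillment hook. Below is a minimal sketch in Python, assuming the Lex V1 Lambda response format; the message text, card title, and button values are illustrative placeholders:

# Minimal sketch of an Amazon Lex (V1) Lambda fulfillment handler that returns
# a response card, so the client renders quick-reply buttons.
# The message text, card title, and button values are illustrative placeholders.
def lambda_handler(event, context):
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "What do you want to schedule?"
            },
            "responseCard": {
                "version": 1,
                "contentType": "application/vnd.amazonaws.card.generic",
                "genericAttachments": [
                    {
                        "title": "Appointments",
                        "buttons": [
                            {"text": "sales", "value": "sales consultation"},
                            {"text": "support", "value": "support call"}
                        ]
                    }
                ]
            }
        }
    }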

IBM Watson Assistant: Where to set output.user_defined object in JSON editor?

The IBM Watson Assistant apidoc v2 describes output.user_defined as
"An object containing any custom properties included in the response. This object includes any arbitrary properties defined in the dialog JSON editor as part of the dialog node output."
But it does not say where in the JSON editor to set it up. Does it go under output?
{
  "output": {
    "text": {
      "values": [],
      "selection_policy": "sequential"
    },
    "xxx": "aaa"
  },
  "context": {}
}
I can't set it at the root level in the JSON editor, as the editor complains that only output, output.generic, actions, and context are allowed.
Where should I put it in the JSON editor so that it appears as output.user_defined in the response to the /message REST call?
You can move that into the output section as user_defined. Here is what I tried:
"output": {
"text": {
"values": [],
"selection_policy": "sequential"
},
"user_defined": {
"test": "henrik"
}
}
I then used the V2 API of my test tool to verify. Here is the relevant part of how it was reported:
"output": {
"generic": [
{
"text": "Ok, checking the event information.",
"response_type": "text"
},
{
"text": "ok.",
"response_type": "text"
}
],
"debug": {...
},
"intents": [...
],
"user_defined": {
"test": "henrik"
},
"entities": [
{...
See also this section in the IBM Watson Assistant documentation with some more information on the response JSON.
As data_henrik stated above, extra JSON elements added via the JSON editor to the output section of the response are moved into the user_defined section of the V2 output response.
These "extra" JSON elements do not have to be labelled user_defined. In my own case I have output.extra elements within my dialog responses. In V1 they remain output.extra, but in V2 they become output.user_defined.extra.
As you are just starting, it would be best to stay consistent and use output.user_defined as your starting point.
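To check this from your own client, here is a minimal sketch using the Python ibm-watson SDK against the V2 /message API; the API key, service URL, and assistant ID are placeholders:

# Minimal sketch: call the Watson Assistant V2 /message API and read
# output.user_defined from the response. Credentials and IDs are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(
    version="2019-02-28",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()

response = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "Hello"},
).get_result()

# Anything extra you placed under output in the dialog JSON editor shows up here.
print(response["output"].get("user_defined"))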

IBM Watson Conversation & IBM Cloud Functions : User Input For Parameters

I have already created a function in IBM Cloud Functions, but how would I implement the parameters from user input?
What I'm trying to do is, for example: when a user types "I need product", "Buy product now", or "Show me products", the product term is taken as a parameter and passed to my Cloud Function, which returns all products that use product as a keyword.
The response text would then use information from the Cloud Function's return value (which is a JSON array):
(res.body.items[?].name)
Example layout from IBM:
{
  "context": {
    "variable_name": "variable_value"
  },
  "actions": [
    {
      "name": "getProducts",
      "type": "client | server",
      "parameters": {
        "<parameter_name>": "<parameter_value>"
      },
      "result_variable": "<result_variable_name>",
      "credentials": "<reference_to_credentials>"
    }
  ],
  "output": {
    "text": "response text"
  }
}
There is a full tutorial I wrote available in the IBM Cloud docs which features IBM Cloud Functions and a backend database. The code is provided on GitHub in this repository: https://github.com/IBM-Cloud/slack-chatbot-database-watson/.
Here is the relevant part from the workspace file that shows how a parameter could be passed into the function:
{
  "type": "response_condition",
  "title": null,
  "output": {
    "text": {
      "values": []
    }
  },
  "actions": [
    {
      "name": "_/slackdemo/fetchEventByShortname",
      "type": "server",
      "parameters": {
        "eventname": [
          "<? $eventName.substring(1,$eventName.length()-1) ?>"
        ]
      },
      "credentials": "$private.icfcreds",
      "result_variable": "events"
    }
  ],
  "context": {
    "private": {}
  },
Later on, the result is presented, e.g., in this way:
"output": {
"text": {
"values": [
"ok. Here is what I got:\n ```<? $events['result'] ?>```",
"Data:\n ``` <? $events['data'] ?> ```"
],
"selection_policy": "sequential"
},
"deleted": "<? context.remove('eventDateBegin') ?><? context.remove('eventDateEnd') ?> <? context.remove('queryPredicate') ?>"
},
Some fancier formatting can be done, of course, by iterating over the result. Some tricks are here. The code also shows how to use a child node to process the result and to clean up context variables.
To obtain the parameter, in your case a product name or type, you would either need to access the input string and find the part after "product", or use the beta feature "contextual entities", which is designed for such cases.
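For reference, the Cloud Functions action on the other end can stay small. Here is a minimal sketch of a Python action following the OpenWhisk/IBM Cloud Functions convention (a main function receiving the parameters); the parameter name keyword and the in-memory product list are illustrative only and not taken from the tutorial:

# Minimal sketch of an IBM Cloud Functions (OpenWhisk) Python action.
# Watson passes the dialog parameters as the params dict; the parameter name
# "keyword" and the in-memory product list are illustrative only -- in
# practice you would query your product database here.
PRODUCTS = [
    {"name": "Product A", "keywords": ["widget", "product"]},
    {"name": "Product B", "keywords": ["gadget", "product"]},
]

def main(params):
    keyword = params.get("keyword", "").lower()
    items = [p for p in PRODUCTS if keyword in p["keywords"]]
    # The returned dict becomes the JSON result stored in the result_variable
    # (e.g. $events in the example above), so the dialog can reference it.
    return {"items": items}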

Why is the carousel not showing in the console simulator?

I am trying to figure out how I can embed Google Actions responses, such as the carousel, in a webhook response for DialogFlow.
I am using V2 of the REST protocol, so I fill in ACTIONS_ON_GOOGLE in the source field, and the payload field contains the Google Actions fields as specified (as per "How can I integrate the Google Actions responses in a webhook response in Dialogflow?"). I am sending the following response:
{
  "fulfillmentText": "This is a carousel.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "conversationToken": "",
    "expectUserResponse": true,
    "expectedInputs": [
      {
        "inputPrompt": {
          "initialPrompts": [
            {
              "textToSpeech": "This is a carousel."
            }
          ],
          "noInputPrompts": []
        },
        "possibleIntents": [
          {
            "intent": "actions.intent.OPTION",
            "inputValueData": {
              "#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
              "carouselSelect": {
                "items": [
                  {
                    "optionInfo": {
                      "key": "key1",
                      "synonyms": [
                        "Option 1"
                      ]
                    },
                    "title": "Option 1",
                    "description": "Option 2"
                  },
                  {
                    "optionInfo": {
                      "key": "key2",
                      "synonyms": [
                        "Option 2"
                      ]
                    },
                    "title": "Option 2",
                    "description": "Option 2"
                  }
                ]
              }
            }
          }
        ]
      }
    ]
  }
}
When trying this out in the console, no carousel is shown. Only the text This is a carousel. is displayed. For your information, I did not include the image field, as it is optional according to the specification, but even with images there is no carousel displayed.
It's hard to debug this, as my actions.intent.OPTION intent is not displayed in the possibleIntents[] array in the response tab. I expected this actions.intent.OPTION intent to be merged with the other intents (such as assistant.intent.action.TEXT) generated by the Dialogflow response.
What am I doing wrong here? Am I maybe shooting myself in the foot by using V2 instead of V1 of the Dialogflow REST protocol?
Update after initial feedback by Prisoner:
I tried with the following response, but still not getting any carousel:
{
  "fulfillmentText": "Here you go.",
  "source": "ACTIONS_ON_GOOGLE",
  "payload": {
    "expectUserResponse": true,
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "Here are your results."
          }
        }
      ]
    },
    "systemIntent": {
      "intent": "actions.intent.OPTION",
      "data": {
        "carouselSelect": {
          "items": [
            {
              "optionInfo": {
                "key": "Option1",
                "synonyms": [
                  "Option2"
                ]
              },
              "title": "Option3",
              "description": "Option4"
            },
            {
              "optionInfo": {
                "key": "Option5",
                "synonyms": [
                  "Option6"
                ]
              },
              "title": "Option7",
              "description": "Option8"
            }
          ]
        },
        "#type": "type.googleapis.com/google.actions.v2.OptionValueSpec"
      }
    }
  }
}
I also tried to manually create an intent in Dialogflow which returns a 'hardcoded' carousel (that is, without a fulfillment callback), and this carousel is shown perfectly. So I am sure that the console is correctly configured.
I am also comparing my response with the ones in "Google Assistant flow with multiple actions_intent_OPTION handlers", but without success so far.
You haven't shot yourself in the foot - but you have made something that was already a bit complex even more complex. There are two things to look out for:
CORRECTION: Make sure the payload is the same as what used to be data, but other fields have changed format.
You need to make sure the simulator is in Phone mode and not Speaker mode.
Details follow.
Documented Dialogflow Differences
The Actions on Google documentation for the Dialogflow response is a little confusing. Instead of providing the full example, it just says that the response that would be under expectedInputs.possibleIntents should, instead, be under data.google.systemIntent. For V2, that would be payload.google.systemIntent.
The inputPrompt object is also restructured somewhat so you should be sending a richResponse that contains a simpleResponse object.
UPDATE: I've tested this. This is the entire JSON that should be returned. Note that the contents of payload are exactly what the contents of data used to be. The source field is ignored and has nothing to do with the payload, apparently.
See also https://github.com/dialogflow/fulfillment-webhook-json which contains some examples.
{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "This is a carousel"
            }
          }
        ]
      },
      "systemIntent": {
        "intent": "actions.intent.OPTION",
        "data": {
          "#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
          "carouselSelect": {
            "items": [
              {
                "optionInfo": {
                  "key": "key1",
                  "synonyms": [
                    "Option 1"
                  ]
                },
                "title": "Option 1",
                "description": "Option 2"
              },
              {
                "optionInfo": {
                  "key": "key2",
                  "synonyms": [
                    "Option 2"
                  ]
                },
                "title": "Option 2",
                "description": "Option 2"
              }
            ]
          }
        }
      }
    }
  }
}
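For completeness, a webhook can produce exactly this payload. Below is a minimal sketch assuming a Flask-based webhook (the route name is arbitrary); the returned dict mirrors the JSON above:

# Minimal sketch of a webhook (Flask assumed here) that returns the payload
# above so Dialogflow forwards the carousel to Actions on Google.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    _ = request.get_json(silent=True)  # the Dialogflow request, unused in this sketch
    return jsonify({
        "payload": {
            "google": {
                "expectUserResponse": True,
                "richResponse": {
                    "items": [
                        {"simpleResponse": {"textToSpeech": "This is a carousel"}}
                    ]
                },
                "systemIntent": {
                    "intent": "actions.intent.OPTION",
                    "data": {
                        "#type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
                        "carouselSelect": {
                            "items": [
                                {
                                    "optionInfo": {"key": "key1", "synonyms": ["Option 1"]},
                                    "title": "Option 1",
                                    "description": "Option 2"
                                },
                                {
                                    "optionInfo": {"key": "key2", "synonyms": ["Option 2"]},
                                    "title": "Option 2",
                                    "description": "Option 2"
                                }
                            ]
                        }
                    }
                }
            }
        }
    })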
Simulator Surface Setting
Make sure that your simulator is set for the "Phone" surface rather than the "Speaker" surface. Options won't display on a Speaker.

How do you get arguments of correct type in Google Actions SDK?

Google's Actions API seems to find the right pattern in my intent and bind to the right type, but does not return the parsed type data. For example, if I have the intent defined below in my actions.json file:
{
  "description": "",
  "initialTrigger": {
    "intent": "RepeatIntent",
    "queryPatterns": [
      {
        "queryPattern": "say $SchemaOrg_Number:mynumber"
      },
      {
        "queryPattern": "say $SchemaOrg_Date:mydate"
      },
      {
        "queryPattern": "say $SchemaOrg_Time:mytime"
      }
    ]
  },
  "httpExecution": {
    "url": "https://myurl/repeat"
  }
}
and I enter "at my action say tomorrow" into the simulator, I receive the following arguments:
"arguments": [
{
"name": "mydate",
"raw_text": "tomorrow",
"text_value": "tomorrow"
},
{
"name": "trigger_query",
"raw_text": "say tomorrow",
"text_value": "say tomorrow"
}
]
Note that the Actions SDK correctly identified "tomorrow" as type "$SchemaOrg_Date" and bound it to the mydate variable; however, it did not return the "date_value" element in the returned JSON as specified in the documentation. I would have expected that "date_value" element to contain a parsed date structure (per the documentation).
The same is true of numbers, although they behave slightly differently. For example, if I use the phrase "at my action say fifty", I'll receive these arguments:
"arguments": [
{
"name": "mynumber",
"raw_text": "50",
"text_value": "50"
},
{
"name": "trigger_query",
"raw_text": "say fifty",
"text_value": "say fifty"
}
]
Note that the $SchemaOrg_Number was recognized and "fifty" was correctly parsed to "50", but the int_value wasn't populated in the argument JSON per the documentation.
Google is actively parsing these complex types and has documented that they should be returned, so I don't want to go to the trouble of parsing them myself. Any thoughts as to whether this will be fixed any time soon?
The Actions SDK does not support NLU for actions. You have to use your own NLU. If you don't have your own NLU, we recommend using API.AI.
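Until the typed fields are populated, one practical option is to read the text_value entries from the webhook request and convert them yourself. Below is a minimal sketch assuming the arguments array sits under inputs as in the conversation webhook request; the helper names and the tiny relative-date parser are illustrative only:

# Minimal sketch: since the typed fields (date_value, int_value) are not
# populated, pull the text_value for each argument out of the webhook request
# and convert it yourself. Helper names and the date parser are illustrative.
from datetime import date, timedelta

def arguments_by_name(request_json):
    """Map argument name -> text_value from the request's inputs[].arguments[]."""
    args = {}
    for inp in request_json.get("inputs", []):
        for arg in inp.get("arguments", []):
            args[arg["name"]] = arg.get("text_value", arg.get("raw_text"))
    return args

def parse_relative_date(text):
    """Very small example parser, for demonstration only."""
    if text == "today":
        return date.today()
    if text == "tomorrow":
        return date.today() + timedelta(days=1)
    return None  # fall back to a real date parser / your own NLU here

# Example with a simplified version of the payload from the question:
args = arguments_by_name({"inputs": [{"arguments": [
    {"name": "mydate", "raw_text": "tomorrow", "text_value": "tomorrow"}
]}]})
print(parse_relative_date(args["mydate"]))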