Currently I'm working with the chatbot service provided by the IBM Watson Conversation API. I'm facing a problem related to adding a new line in the text reply from the chatbot. Can anyone tell me how to do this?
In this case, you can use HTML for that inside the conversation flow, e.g. with <br>.
You can see that the <br> is not rendered inside the "Try it out" panel, but if you open the chat in a browser it works as expected. Check this JSON example:
{
  "output": {
    "text": {
      "values": [
        "Hey, <br>Can I help you?",
        "",
        ""
      ],
      "selection_policy": "random"
    }
  }
}
You can use other tags as well, for example <button>, <id>, etc.
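If you are calling the workspace from your own client application, the sketch below shows where the <br> ends up. It is only a hedged example, assuming the newer ibm-watson Python SDK (AssistantV1); the API key, service URL and workspace ID are placeholders:

# Minimal sketch, assuming the ibm-watson Python SDK (AssistantV1);
# API_KEY, the service URL and WORKSPACE_ID are placeholders.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import AssistantV1

assistant = AssistantV1(version="2019-02-28",
                        authenticator=IAMAuthenticator("API_KEY"))
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

response = assistant.message(workspace_id="WORKSPACE_ID",
                             input={"text": "hello"}).get_result()

for text in response["output"]["text"]:
    # The dialog returns the raw "<br>"; an HTML client renders it as a line
    # break, while a plain-text client can translate it itself:
    print(text.replace("<br>", "\n"))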
I have been trying to make my own chatbot with Dialogflow CX, but I can't seem to find enough documentation about this tool.
I am trying to make the bot start the conversation when I join the session, but I can't find a way to do it.
Right now my chatbot needs a "hello" or some training phrase to start the dialog, but I want the chatbot to start it.
I think you can do it with a "Custom payload", but I can't find an example of how to do it.
Also, I know that in Dialogflow ES there was a "Suggestion Chip" option where you could put the answer options in a button, but I can't find it in CX. Do I have to code it now? Can any kind heart give me an example or extra documentation about how to code this bot?
PS: I am new and still learning how to build this chatbot, Google Cloud and object-oriented programming, so I need advice in general, thanks!
Right now I am using the official docs at https://cloud.google.com/dialogflow/cx/docs
Custom buttons with hints/suggestions, as you outline in your question, are only available in Dialogflow CX for integrated services. You can find information about which services are supported on this page. Otherwise, if you are able, you can develop your own integration through their API; I'm using the Python one.
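For instance, a very rough sketch of such a custom integration with the Python client (google-cloud-dialogflow-cx) could look like the code below. It also shows one way to make the bot speak first: instead of sending user text, you trigger an event. The project/agent IDs are placeholders and "session-start" is a hypothetical custom event you would have to define in the agent:

# Rough sketch, assuming the google-cloud-dialogflow-cx package.
# PROJECT_ID, LOCATION and AGENT_ID are placeholders; "session-start" is a
# hypothetical event handler defined in the agent.
import uuid
from google.cloud import dialogflowcx_v3 as dfcx

PROJECT_ID, LOCATION, AGENT_ID = "my-project", "global", "my-agent-id"

# For non-global agents you would also have to point the client at the
# regional endpoint (e.g. "europe-west1-dialogflow.googleapis.com").
client = dfcx.SessionsClient()
session = (f"projects/{PROJECT_ID}/locations/{LOCATION}/agents/{AGENT_ID}"
           f"/sessions/{uuid.uuid4()}")

# Trigger an event instead of sending user text, so the bot opens the dialog.
query_input = dfcx.QueryInput(
    event=dfcx.EventInput(event="session-start"),
    language_code="en",
)
response = client.detect_intent(
    request=dfcx.DetectIntentRequest(session=session, query_input=query_input)
)

for message in response.query_result.response_messages:
    if message.text.text:
        print(" ".join(message.text.text))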
If you decide, for example, to activate the Messenger integration to make your bot available via FB Messenger, you can visit its specific page and find that buttons can be set up this way.
There are many other response types; you can browse them on the same page (list, button, description, image, card). For each of them Google provides sample code to put in the "Custom Payload" box of the fulfillment. For example, a button to www.yoursite.org would look like this:
{
  "richContent": [
    [
      {
        "type": "button",
        "icon": {
          "type": "chevron_right",
          "color": "#FF9800"
        },
        "text": "Button text",
        "link": "https://yoursite.org",
        "event": {
          "name": "",
          "languageCode": "en",
          "parameters": {}
        }
      }
    ]
  ]
}
By specifying "parameters" or "event" you can trigger Dialogflow events to manage the conversation flow.
Hope this made things clearer for you!
I have an application that communicates with Google Assistant via webhook. When a user asks for something, my app sends the question to an AI (IBM Watson). After it gets the response, I want to show it to the user and end the conversation, so I send the text from Watson along with nextScene = actions.scene.END_CONVERSATION. But Google Assistant just ends the conversation without showing the response to the user. Is it possible to show the response message to the user and then end the conversation?
Example of my app's response in JSON format:
GAResponse(prompt=GAPrompt(override=false, firstSimple=GAFirstSimple(speech=<speak>You are very smart bro,y<break time="100ms"/> and i love monsters like you.</speak>, text=You are very smart bro and i love monsters like you), content=null, lastSimple=null, link=null, canvas=null, orderUpdate=null), scene=GAScene(name=null, slotFillingStatus=null, slots=null, next=actions.scene.END_CONVERSATION) ...)
Yes, this is possible.
I'm not sure which library you're using to generate your response JSON, but below is an example that provides the speech and text data and ends the conversation. You can learn more about fulfillment (aka the webhook) in the reference documentation.
{
  "session": {
    "id": "example_session_id",
    "params": {}
  },
  "prompt": {
    "override": false,
    "firstSimple": {
      "speech": "<speak>You are very smart bro, <break time=\"100ms\"/> and i love monsters like you.</speak>",
      "text": "You are very smart bro and i love monsters like you"
    }
  },
  "scene": {
    "name": "SceneName",
    "slots": {},
    "next": {
      "name": "actions.scene.END_CONVERSATION"
    }
  }
}
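If it helps, here is a hedged sketch of a fulfillment (using Flask; the route name and the hard-coded answer are placeholders) that returns a response shaped like the JSON above, so the text is shown first and the conversation ends afterwards:

# Hedged sketch of a webhook that answers and then ends the conversation.
# Flask is assumed; "/webhook" and the hard-coded answer are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    answer = "You are very smart bro and i love monsters like you"  # e.g. from Watson
    return jsonify({
        "session": {"id": body["session"]["id"], "params": {}},
        "prompt": {
            "override": False,
            "firstSimple": {"speech": answer, "text": answer},
        },
        "scene": {"next": {"name": "actions.scene.END_CONVERSATION"}},
    })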
If you're interested in using the Assistant Conversation library, check out this link to see an example response.
I have created a webhook (https://moviebotdf.herokuapp.com/get-movie-details); it is tested with Postman and Dialogflow and works properly.
I want to integrate it with IBM Watson Assistant via a programmatic call, but it is not returning anything (i.e. the output is "").
I checked the IBM support docs (https://cloud.ibm.com/docs/services/assistant?topic=assistant-dialog-actions&locale=en) and also other solutions such as calling a function that could call the webhook, but I am having even less success there. As I understand from the docs, a direct call from the Assistant to the webhook should be possible (and easier for newbies like me), hence that is the solution I seek. The code in the Assistant node is as follows:
{
  "context": {
    "skip_user_input": true,
    "prodname": "$prodname"
  },
  "output": {
    "text": {
      "values": [
        "$dataToSend"
      ],
      "selection_policy": "sequential"
    }
  },
  "actions": [
    {
      "name": "https://moviebotdf.herokuapp.com/get-movie-details",
      "type": "client",
      "parameters": {
        "prodname": "$prodname"
      },
      "result_variable": "context.dataToSend"
    }
  ]
}
So "prodname" is captured by Watson Assistant in the previous node (I checked that and it is working correctly) and sent to the Webhook. The variable used in the Webhook is also called "prodname". The expected output from the Webhook is stored in the variable "dataToSend", but as said above the answer in Watson is only "" as "$dataToSend" is "".
I tried also with "result_variable": "dataToSend" and "result_variable": "$dataToSend" without success, so what I guess is that the webhook is not being called...
I am new in the topic, so please do not hesitate to correct any problems in my post.
Thanks in any case in advance!
Adrià
IBM Watson Assistant lists three different options for making a programmatic call from within a dialog node:
client: your app is in charge of calling out to the action
server or cloud_function: IBM Cloud Functions action is invoked from Watson Assistant
web_action: The web action of an IBM Cloud Functions action is invoked from Watson Assistant
If you host your webhook on IBM Cloud Functions, then Watson Assistant can call it directly. With your current hosting and "client" specified, your app is in charge. In that case your app needs to check that the context includes the information about a client action, extract the related metadata, invoke the webhook and send the data back to Watson Assistant.
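As a hedged illustration of that flow (assuming the Watson Assistant V1 Python SDK and the requests library; the credentials, the workspace ID and the HTTP method accepted by your webhook are placeholders/assumptions), the client-side loop could look roughly like this:

# Rough sketch of the "client" action pattern: the app detects the requested
# action, calls the webhook itself and feeds the result back via the context.
import requests
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import AssistantV1

assistant = AssistantV1(version="2019-02-28",
                        authenticator=IAMAuthenticator("API_KEY"))
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

response = assistant.message(workspace_id="WORKSPACE_ID",
                             input={"text": "tell me about The Matrix"}).get_result()

for action in response.get("actions", []):
    if action["type"] == "client":
        # The app is in charge: call the webhook with the node's parameters
        # (assuming here that it accepts a JSON POST) ...
        data = requests.post(action["name"], json=action["parameters"]).json()
        # ... and store the result where the node expects it ("context.dataToSend").
        response["context"]["dataToSend"] = data

# Send the updated context back so the node can fill in "$dataToSend".
response = assistant.message(workspace_id="WORKSPACE_ID",
                             input={"text": ""},
                             context=response["context"]).get_result()
print(response["output"]["text"])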
I have written an example for such a client action for my Watson conversation tool. See that repo for instructions.
I'm trying to create a chatbot which, once the "greetings" process is done, goes on and initiates a new topic without any user query. It has to be something like the following:
bot : hello
user : hello
bot : how old are you?
user : 35
bot : Great.
bot : Let's talk about politics. Are you american?
Until the "great" line everything works but then I cannot trigger the event that will prompt the line "Let's talk about politics...."
The doc is vague, can I do this without webhooks? And if not, how would a webhook like this look like?
You can define multiple responses in Dialogflow's console, as seen in the screenshots below, by clicking the Add Message Content button in the response section of the intent you'd like to add the response to. You can also send multiple messages on some platforms (depending on platform feature availability) with webhook fulfillment, using the rich messaging responses documented here: https://dialogflow.com/docs/rich-messages
Go to the response section of the intent you'd like to add a 2nd response to:
Click ADD MESSAGE CONTENT and select Text response:
Enter your second message in the second text box provided:
Yes, you can define multiple responses. If you are planning to use the Facebook Messenger platform to show the responses, you can use the code below. Change "Response 1" and "Response 2" to your desired text, dump the my_result object as JSON, and return it. You need to change the "platform" value if you want to use a platform other than Messenger.
my_result = {
    "fulfillmentMessages": [
        {
            "text": {
                "text": [
                    "Response 1"
                ]
            },
            "platform": "FACEBOOK"
        },
        {
            "text": {
                "text": [
                    "Response 2"
                ]
            },
            "platform": "FACEBOOK"
        }
    ]
}
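To return it from the fulfillment, a minimal sketch (assuming a Flask-based webhook; the route name is arbitrary) could look like this:

# Hedged sketch of a Flask fulfillment that returns the object above as JSON.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/fulfillment", methods=["POST"])
def fulfillment():
    my_result = {
        "fulfillmentMessages": [
            {"text": {"text": ["Response 1"]}, "platform": "FACEBOOK"},
            {"text": {"text": ["Response 2"]}, "platform": "FACEBOOK"},
        ]
    }
    return jsonify(my_result)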
Question related to an API.AI bot and Facebook Messenger.
When a Quick Reply is tapped, a text message is sent to your webhook's Message Received callback. The text of the message corresponds to the title of the Quick Reply when the content type is 'text'. How can we get the text of the message when the content type is 'location'? It is mentioned that when we use a location quick reply, we don't add a title field, so how can we get a text message without using a title?
I am unable to call the webhook because I am not getting the text of the message.
Please help me out. I have been stuck on this for the last 2 days.
You can work with ChannelData for this.
Hehe, I get your problem.
I hope you know what the responses coming from the webhook look like; if not, here are sample responses.
Response for a quick_reply with type text:
"message": {
"quick_reply": {
"payload": "productId-12345678"
},
"mid": "mid.$cAAFXVKjn1KtjtBAtHFdgsrkbGWwm",
"seq": 15453,
"text": "buy this"
}
Response for a quick_reply with type location:
"message": {
"mid": "mid.$cAAFXVLGKMJ1jtApB51dgsTnITNet",
"seq": 25413,
"attachments": [
{
"title": "Hi-tech city Hyderabad",
"url": "https://l.facebook.com/l.php?u=https%3A%2F%2Fwww.bing.com%2Fmaps%2Fdefault.aspx%3Fv%3D2%26pc%3DFACEBK%26mid%3D8100%26where1%3DHyderabad%2B500081%26FORM%3DFBKPL1%26mkt%3Den-US&h=ATPXrPSDsyApPyqD9ozWt82dL9M28VZPQCqmICpsmBfXY0BCffiP4ychQ36sSWUNNBOeiJZq8tq8DLF7-A0_7VViPwwC64LM1XR-uAUN0sXdcgP5rDg&s=1&enc=AZPs1nCI5B8J4s27b7zAJKJDYaa2KSlhxQ5ppN30fb5lI3KUFcnQlSn_g4796j3p4ShwnzPvRyqXS470lEluzN06",
"type": "location",
"payload": {
"coordinates": {
"lat": 17.44521051,
"long": 78.38363399
}
}
}
]
}
As you can see, for a text-type quick_reply the title the user tapped comes back as the message text, and we can use the corresponding payload for processing. This is how Facebook carries the chat context for a single turn for quick replies; they haven't done the same for location, maybe because context is used more often with text quick replies than with location ones.
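For example (a hedged sketch in plain Python, not tied to any particular framework; messaging_event is assumed to be one entry of the webhook's "messaging" array), a handler could distinguish the two cases like this:

def handle_message(messaging_event):
    # messaging_event is assumed to be one entry of the webhook's "messaging" array.
    message = messaging_event.get("message", {})

    if "quick_reply" in message:
        # Text quick reply: the tapped title comes back in message["text"],
        # and the developer-defined payload identifies the choice.
        return {"kind": "text",
                "text": message.get("text"),
                "payload": message["quick_reply"]["payload"]}

    for attachment in message.get("attachments", []):
        if attachment.get("type") == "location":
            # Location quick reply: there is no title/text, only coordinates.
            coords = attachment["payload"]["coordinates"]
            return {"kind": "location",
                    "lat": coords["lat"], "long": coords["long"]}

    return {"kind": "unknown"}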
Now, the thing you are really wondering about is context. Yes, you need to maintain the context of the chat; that is what makes a real bot. You can maintain chat context using many free NLP engines like wit.ai, api.ai and others.