I'm using Watson Conversation to build a Messenger chatbot, and I need something like this:
(screenshot of the desired option buttons: https://i.stack.imgur.com/UTOyI.png)
Watson Conversation API does not have built-in UI tools to create the type of buttons or options that you want in a response.
In order to achieve that, what you need to do is send back a flag or variable in your context object inside Watson Conversation's response. Then, in your frontend code you can test that variable and programmatically decide if you need to display certain HTML components like buttons, options, etc.
Watson's response in your dialog node should look something like this:
{
  "context": {
    "showOptions": true
  },
  "output": {
    "text": {
      "values": [
        "Hi, do you want to hear a joke?"
      ],
      "selection_policy": "sequential"
    }
  }
}
Then, in your code, check the context: if the showOptions property is true, display the options you need (Yes, No, I don't know) alongside the answer to the user's input.
In your case, you don't even need to display output.text.values[0], just the options.
Remember to set showOptions back to null in a later dialog node, or your code will keep displaying the options even when you don't need them.
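For illustration, here is a minimal frontend sketch; renderOptions and renderText are hypothetical UI helpers, and response is the parsed reply from the Watson message endpoint:
// Minimal sketch, not from the original answer: response is the parsed
// reply from Watson's message endpoint; renderOptions/renderText are
// hypothetical UI helpers.
function handleWatsonResponse(response) {
  const context = response.context || {};
  if (context.showOptions) {
    // Skip the plain text and show clickable options instead.
    renderOptions(['Yes', 'No', "I don't know"]);
  } else {
    renderText(response.output.text[0]); // at runtime, text is an array of strings
  }
}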
I have been trying to make my own chatbot with Dialogflow CX, but I can't seem to find enough documentation about this tool.
I am trying to make the bot start the conversation when I join the session, but I can't find a way to do it.
Right now my chatbot needs a "hello" or some training phrase to start the dialog, but I want the chatbot to start it.
I think you can do it with a "Custom payload", but I can't find an example of how to do it.
Also, I know Dialogflow ES had a "Suggestion Chip" option where you could put the answer options in a button,
but I can't find it in CX. Do I have to code it now? Can any kind soul give me an example or extra documentation about how to code this bot?
PS: I am new and still learning how to build this chatbot, Google Cloud, and object-oriented programming, so I need advice in general. Thanks!
Right now I am using the official docs at https://cloud.google.com/dialogflow/cx/docs
Custom buttons with hints/suggestions, as you outline in your question, are only available in Dialogflow CX for integrated services. You can find information about which services are supported on this page. Otherwise, if you are able, you can develop your own integration through their API; I'm using the Python one.
If you decide, for example, to activate the Messenger integration to make your bot available via FB Messenger, you can visit the specific page and find that buttons can be set up this way.
There are many other response types that you can browse on the same page (list, button, description, image, card): for each of them Google provides sample code to put in the "Custom Payload" box for the fulfillment. For example, a button to www.yoursite.org would work like this:
{
  "richContent": [
    [
      {
        "type": "button",
        "icon": {
          "type": "chevron_right",
          "color": "#FF9800"
        },
        "text": "Button text",
        "link": "https://yoursite.org",
        "event": {
          "name": "",
          "languageCode": "en",
          "parameters": {}
        }
      }
    ]
  ]
}
By specifying "parameters" or "event" you can trigger Dialogflow events to manage the conversation flow; this is also how you can make the bot speak first, by firing an event programmatically when the session opens (see the sketch below).
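I use the Python client, but the same idea works in any client library. Here is a hedged Node.js sketch; all IDs and the event name are placeholders for your own values:
// Hedged sketch using the @google-cloud/dialogflow-cx Node.js client:
// send an event input when the session opens so the agent replies first.
const { SessionsClient } = require('@google-cloud/dialogflow-cx');

async function startConversation() {
  const client = new SessionsClient();
  const session = client.projectLocationAgentSessionPath(
      'my-project', 'global', 'my-agent-id', 'my-session-id');

  const [response] = await client.detectIntent({
    session,
    queryInput: {
      event: { event: 'session-start' },  // a custom event defined in your agent
      languageCode: 'en'
    }
  });

  // Print whatever fulfillment messages the matched event handler produced.
  for (const message of response.queryResult.responseMessages) {
    if (message.text) console.log(message.text.text.join(' '));
  }
}

startConversation();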
Hope this made things clearer for you!
We have a chatbot for our website today that is not built using Google technology. The bot has a JSON REST API where you can send the question and which replies with the corresponding answers. So all the intents and entities are being resolved by the existing chatbot.
What is the best way to wrap this functionality in Google Assistant / for Google Home?
To me it seems I need to extract the "original" question from the JSON that is sent to our webservice (when I enable fulfillment).
But since context is used to exchange "state", I have to find a way to exchange the context between Dialogflow and our own chatbot (see above).
But maybe there are other ways? Can our chatbot be invoked directly (without Dialogflow as man in the middle)?
This is one of those responses that may not be enough for someone who doesn't know what I am talking about and too much for someone who does. Here goes:
It sounds to me as if you need to build an Action with the Actions SDK rather than with Dialogflow. Then you implement a text "intent" in your Action, i.e. one that runs every time the user says something. In that text intent you ask the AoG platform for the text; see getRawInput(). Now you do two things. One, you take that raw input and pass it to your bot. Two, you return a promise to tell AoG that you are working on a reply but don't have it yet. Once the promise is fulfilled, i.e. when your bot replies, you respond with the text you got from your bot.
I have a sample Action called the French Parrot here https://github.com/unclewill/french_parrot. As far as speech goes it simply speaks back whatever it hears as a parrot would. It also goes to a translation service to translate the text and return the (loose) French equivalent.
Your mission, should you choose to accept it, is to take the sample, rip out the code that goes to the translation service and insert the code that goes to your bot. :-)
Two things I should mention. One, it is not "idiomatic" Node or JavaScript you'll find in my sample. What can I say - I think the rest of the world is confused. Really. Two, I have a minimal sample of about 50 lines that eschews the translation here: https://github.com/unclewill/parrot. Another option is to use that as a base and add the code to call your bot, plus the Promise-y code to wait on it.
If you go the latter route remove the trigger phrases from the action package (action.json).
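To make the promise-based relay concrete, here is a rough sketch in the style of the v1 actions-on-google client used later in this thread; callMyBot is a hypothetical helper that POSTs to your bot's REST API and resolves with the reply text:
// Rough sketch: callMyBot(text) is a hypothetical wrapper around your
// existing bot's JSON REST API that returns a Promise of the reply text.
function textIntent(app) {
  const userText = app.getRawInput();         // what the user said or typed
  return callMyBot(userText)                  // forward it to your bot...
    .then((reply) => app.ask(reply))          // ...and speak its answer back
    .catch(() => app.tell('Sorry, something went wrong.'));
}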
So you already have a backend that processes user inputs and sends responses back, and you want to use it to process a new input flow (coming from Google Assistant)?
That's actually my case: I have a service running as a Facebook Messenger chatbot, and I recently started developing a Google Home Action for it.
It's quite simple. You just need to:
Create an action here https://console.actions.google.com
Download GActions-Cli from here https://developers.google.com/actions/tools/gactions-cli
Create a JSON file action.[fr/en/de/it].json (choose a language). This file is where you define your intents and the URL of your webhook (a middleware between your backend and Google Assistant). It may look like this:
{
  "locale": "en",
  "actions": [
    {
      "name": "MAIN",
      "description": "Default Welcome Intent",
      "fulfillment": {
        "conversationName": "app name"
      },
      "intent": {
        "name": "actions.intent.MAIN",
        "trigger": {
          "queryPatterns": [
            "Talk to app name"
          ]
        }
      }
    }
  ],
  "conversations": {
    "app name": {
      "name": "app name",
      "url": "https://your_nodejs_middleware.com/"
    }
  }
}
Upload the JSON file using gactions update --action_package action.en.json --project PROJECT_ID
AFAIK, there is only a Node.js client library for Actions on Google, https://github.com/actions-on-google/actions-on-google-nodejs, which is why you need a Node.js middleware before hitting your backend.
Now, user inputs will be sent to your Node.js middleware (app.js) hosted at https://your_nodejs_middleware.com/ which may look like:
// Require Express and everything else needed to build a Node.js server;
// look up how to build a simple web server in Node.js if you are new
// to this domain.
const { ActionsSdkApp } = require('actions-on-google');

app.post('/', (req, res) => {
  req.body = JSON.parse(req.body);
  const app = new ActionsSdkApp({
    request: req,
    response: res
  });

  // Create functions to handle requests here
  function mainIntent(app) {
    let inputPrompt = app.buildInputPrompt(false,
      'Hey! Welcome to app name!');
    app.ask(inputPrompt);
  }

  function respond(app) {
    let userInput = app.getRawInput();
    // HERE you get what the user typed/said to Google Assistant.
    // NOW you can send the input to your BACKEND, process it, get the
    // response_from_your_backend and send it back.
    app.ask(response_from_your_backend);
  }

  let actionMap = new Map();
  actionMap.set('actions.intent.MAIN', mainIntent);
  actionMap.set('actions.intent.TEXT', respond);
  app.handleRequest(actionMap);
});
Hope that helped!
Thanks for all the help. The main parts of the solution are already given, but I summarize them here:
an action.json that passes on everything to the fulfillment service
a man in the middle (in my case an IBM Cloud Function) to map the JSON between services
sharing context/state through the conversationToken property (see the sketch below)
You can find the demo here: Hey Google talk to Watson
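On the conversationToken point, here is a hedged sketch, assuming the v1 actions-on-google client used above; callWatson is a hypothetical wrapper around Watson's message endpoint, and the dialog-state object passed to ask() is round-tripped for you via conversationToken:
// Hedged sketch: in the v1 ActionsSdkApp, ask() accepts an optional
// dialog-state object that the platform echoes back via conversationToken.
function respond(app) {
  const state = app.getDialogState() || {};   // state echoed from the last turn
  const userInput = app.getRawInput();
  // callWatson is a hypothetical wrapper around Watson's message endpoint.
  callWatson(userInput, state.watsonContext).then((watson) => {
    // Hand Watson's context back as dialog state so it survives the turn.
    app.ask(watson.output.text[0], { watsonContext: watson.context });
  });
}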
I'm trying to create a chatbot which, once the "greetings" process is done, goes on and initiates a new topic without any user query. It has to be something like the following:
bot : hello
user : hello
bot : how old are you?
user : 35
bot : Great.
bot : Let's talk about politics. Are you american?
Until the "Great." line everything works, but then I cannot trigger the event that will prompt the line "Let's talk about politics...".
The docs are vague. Can I do this without webhooks? And if not, what would such a webhook look like?
You can define multiple responses in Dialogflow's console by clicking the Add Message Content button in the response section of the intent you'd like to add the response to, as described in the steps below. You can also send multiple messages on some platforms (depending on platform feature availability) with webhook fulfillment, using the rich messaging responses documented here: https://dialogflow.com/docs/rich-messages
Go to the response section of the intent you'd like to add a second response to.
Click ADD MESSAGE CONTENT and select Text response.
Enter your second message in the second text box provided.
Yes, you can define multiple responses. If you are planning to use the Facebook Messenger platform to show the responses, you can use the code below. Change "Response 1" and "Response 2" to your desired text, dump the my_result object as JSON, and return it. You need to change "platform" if you want to use any platform other than Messenger.
my_result = {
    "fulfillmentMessages": [
        {
            "text": {
                "text": [
                    "Response 1"
                ]
            },
            "platform": "FACEBOOK"
        },
        {
            "text": {
                "text": [
                    "Response 2"
                ]
            },
            "platform": "FACEBOOK"
        }
    ]
}
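The snippet above builds the payload in Python; for completeness, here is a minimal hedged Node.js/Express webhook sketch returning the same fulfillmentMessages structure (the endpoint path and port are arbitrary choices):
// Minimal hedged sketch: an Express webhook returning the same
// fulfillmentMessages structure shown above.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // Dialogflow POSTs the request here; reply with both messages.
  res.json({
    fulfillmentMessages: [
      { text: { text: ['Response 1'] }, platform: 'FACEBOOK' },
      { text: { text: ['Response 2'] }, platform: 'FACEBOOK' }
    ]
  });
});

app.listen(3000);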
Is it possible yet to initiate a phone call? E.g. if I'm making a branch-finder action, a dialogue might go like this:
"Hi, where's my nearest store?"
"Your nearest store is our Oxford Street branch, at 300 Oxford St, Marylebone."
"Call it"
"Sure"
It then initiates a call to the store, like an Android app using an ACTION_DIAL intent.
I would think something like this should be possible, especially considering that the current devices supporting Assistant are phones and Google Home, both of which can make calls (I guess future devices with Assistant built in might not, but then there could be a check like app.phoneCapabilities). I've tried using .addSuggestionLink with a tel: address, with no luck.
I actually made a dodgy workaround for this, if anyone comes back to this and is interested.
You can suggest a webpage URL pointing to a page that contains a tel: link. Using either server-side work or simple JavaScript (mine uses simple JavaScript), you can update that link.
My link is below; feel free to use it, I use it in my app. It's pretty basic, and the documentation is in the comments:
https://domdomegg.github.io/linkgenerator?href=tel%3A%2B442070313000&bgcolor=607d8b&buttontext=Click%20to%20call%20Google
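The page behind a link like that only needs a few lines of client-side JavaScript. A rough sketch of what such a page's script might do (this is not the actual linkgenerator source, and the element id is an assumption):
// Rough sketch of such a redirect page (not the actual linkgenerator
// source): read href, buttontext and bgcolor from the query string,
// matching the parameters in the URL above.
const params = new URLSearchParams(window.location.search);
const button = document.getElementById('call-button'); // assumed element id

button.href = params.get('href') || '#';                // e.g. tel:+442070313000
button.textContent = params.get('buttontext') || 'Call';
document.body.style.backgroundColor = '#' + (params.get('bgcolor') || 'ffffff');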
For starters, the Google Home cannot (yet) make calls. That feature was announced at I/O and will be rolling out later this year. It is not yet known if there will be API access to that feature when it does roll out. (There is certainly potential for abuse of the feature, although there are some ways that can mitigate that abuse.)
I haven't tested, but I'm a little surprised that the tel: URL form didn't work, since I thought that would just launch an intent on Android (though I don't know how iOS would handle it), and tel: goes to the dialer intent.
You can show a call button which will redirect to the specified number on the dialer app.
Here's a way to do that from fulfillment response:
"buttons": [
{
"title": "Call",
"openUrlAction": {
"url": "tel:+91123456789",
"androidApp": {
"packageName": "com.android.phone"
},
"versions": []
}
}
]
Add this JSON to your response and it will show a button that redirects to the default call app with the number +91123456789 pre-filled.
EXTRA
Similarly, if you want to send mail then you can add:
{
  "title": "Send Mail to Jay",
  "openUrlAction": {
    "url": "mailto:Jp9573#gmail.com",
    "androidApp": {
      "packageName": "android.intent.extra.EMAIL"
    },
    "versions": []
  }
}
My boss gave me the task of creating a chatbot, not made with Telegram or Slack, that uses the Watson Conversation service.
Furthermore, the chatbot has to be inserted inside a web page, so it has to be embeddable in HTML as JavaScript.
Does anyone know other good platforms for performing these tasks?
Thanks for any help.
After replying in the comments I had another look and realised the Microsoft Bot Framework could work with minimal dev investment (in the beginning).
https://docs.botframework.com/en-us/support/embed-chat-control2/
This little guy is fun. You should give him a try.
http://www.program-o.com/
I strongly suggest you build an assistant rather than a simple bot, using a language-understanding tool like Microsoft LUIS, which is part of Microsoft Cognitive Services.
You can then combine this natural language processing tool with a bot SDK like the Microsoft Bot Framework mentioned above, so that you can easily run queries in natural language, parse the response in a dialog into entities and intents, and provide a response in natural language.
As an example, a parsed dialog response will contain JSON like this:
{
  "intent": "MusicIntent",
  "score": 0.0006564476,
  "actions": [
    {
      "triggered": false,
      "name": "MusicIntent",
      "parameters": [
        {
          "name": "ArtistName",
          "required": false,
          "value": [
            {
              "entity": "queen",
              "type": "ArtistName",
              "score": 0.9402311
            }
          ]
        }
      ]
    }
  ]
}
where you can see that this MusicIntent has an entity queen of type ArtistName that has been recognized by the language-understanding system.
That is, using the Bot Framework you would do something like:
var artistName = BotBuilder.EntityRecognizer.findEntity(args.entities, Entity.Type.ArtistName);
A good modern bot/assistant framework should support at least a multi-turn dialog mode, that is, a dialog with an interaction between the two parties like:
>User: Which artist plays Stand By Me?
(intents=SongIntent, songEntity=`Stand By Me`)
>Assistant: The song `Stand by Me` was played by several artists. Do you mean the first recording?
>User: Yes, that one!
(intents=YesIntent)
>Assistant: The first recording was by `Ben E. King` in 1962. Do you want to play it?
>User: Which is the first album composed by Ben E. King?
(intents=MusicIntent, entity:ArtistName)
>Assistant: The first album by Ben E. King was "Double Decker" in 1960.
>User: Thank you!
(intents=Thankyou)
>Assistant: You are welcome!
Some bot frameworks then use a waterfall model to handle these kinds of language-model interactions:
self.dialog.on(Intent.Type.MusicIntent, [
    // Waterfall step 1
    function (session, args, next) {
        // prompts something to the user...
        BotBuilder.Prompts.text(session, msg);
    },
    // Waterfall step 2
    function (session, args, next) {
        // get the response
        var response = args.response;
        // do something...
        next(); // trigger the next interaction
    },
    // Waterfall step 3 (last)
    function (session, args) {
    }
]);
Other features to consider are:
support for multiple languages and automatic translation;
3rd-party service integrations (Slack, Messenger, Telegram, Skype, etc.);
rich media (images, audio, video playback, etc.);
security (cryptography);
cross-platform SDKs.
I've started to do some work in this space using this open source project called Talkify:
https://github.com/manthanhd/talkify
It is a bot framework intended to help orchestrate the flow of information between bot providers like Microsoft (Skype), Facebook (Messenger), etc. and your backend services.
I'd really like people's input to see how it can be improved.