IBM Watson chat bot

I am currently working on a Watson chat bot with the aim of creating a virtual assistant for customers that will hopefully be capable of handling their requests. Within a node I ask the customer a question like "Can you provide me the serial number?". What I want Watson to do is save that serial number as a variable, so that I can respond to the customer with something like "Okay, so the number you provided is "that number", do you confirm?". I would be very happy if someone could help me out with this. How can I integrate a variable capable of storing the customer's input?
Thank you in advance!

I recommend starting with this tutorial on building a database-driven Slack chatbot. The source code is available on GitHub.
The tutorial shows how to gather event data. The typical way of getting data from a user is to save the relevant parts in so-called context variables. You can access those variables from within dialog nodes, print their values in responses, or pass them to other code for backend processing (for example, to a database system).
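For the serial number case, a dialog node can copy the user's reply into a context variable and echo it back in the node's response. A minimal sketch of such a node, as you would enter it in the node's JSON editor, expressed here as a JavaScript object (the variable name serial_number is just an example):

```javascript
// <? input.text ?> is a Watson expression that evaluates to the user's last
// utterance; $serial_number prints the context variable in the response text.
const confirmNode = {
  context: {
    serial_number: '<? input.text ?>'   // save the customer's reply
  },
  output: {
    text: 'Okay, so the number you provided is $serial_number, do you confirm?'
  }
};
```

A child node conditioned on a yes/no intent you define can then read the serial_number context variable and continue the dialog.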

Watson Chat Bot: see logs of conversations

I have a question regarding the IBM Watson chat bot. All my intents and entities are ready, and in some of my nodes I used the "literal" function to store the user's input. Now I want to document the chat somehow, and I especially want to be able to see those stored values within the documentation. Is documenting the chat possible (e.g. as a notepad file, etc.)? Thank you for your support in advance. Ciao!
I don't have any code to actually help you, but you can get the logs via the /logs API here:
https://www.ibm.com/watson/developercloud/conversation/api/v1/curl.html?curl
which contains all the info you're looking for; you could then write it to a file or database or whatever you want.
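As a rough sketch of what that could look like with the Watson Developer Cloud Node SDK (assuming its listLogs method; the credentials and workspace ID below are placeholders):

```javascript
// Fetch the conversation logs via the /logs API and dump them to a file.
const fs = require('fs');
const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: '<service-username>',
  password: '<service-password>',
  version_date: '2017-05-26'
});

conversation.listLogs({ workspace_id: '<workspace-id>' }, (err, result) => {
  if (err) throw err;
  // Each log entry holds the request, the response, and the context;
  // the context is where your "literal"-captured values end up.
  fs.writeFileSync('chat-logs.json', JSON.stringify(result.logs, null, 2));
});
```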

Using "speechBiasingHints" with Dialogflow Webhook

First time posting, so feel free to give me feedback if I could improve something about this post... Now on to my question.
I am currently developing a Google Action. The Action will allow the user to define important events, such as Bob's Birthday or Fred's Graduation, and save data about said events. Later, the user will be able to ask for info about an event and get it returned to them.
I am using the Dialogflow API with the "Inline Editor" fulfillment to keep it as simple as possible for now. The problem I am running into is this: the event has an entity type of @sys.any, so anything the user says is accepted as valid input. However, I would like some way to bias towards events I already have stored for the user, so that they are more likely to find the event they are looking for.
I found another answer on here discussing speech biasing (What is meant by speech bias and how to use speechBiasHints in google-actions appResponse), which defined speech biasing as the ability to "influence the speech to text recognition," which is exactly what I believe I want. While that answer provided sample code, it was for the Actions SDK, not the Dialogflow SDK, which I am using.
Can anyone provide an example of how to fill the "speechBiasingHints" section of the ExpectedInput response of the Conversation Webhook using the Dialogflow webhook?
Note: This is for a student project, and I'm new to developing Google Actions and still very much learning about everything that is possible with Google Actions. Any feedback or suggestions are very welcome.
The question you link to does quite a few things differently than the approach you're taking. The Actions SDK provides more low-level control, but doesn't have much in the way of Natural Language Processing (NLP) capabilities, which Dialogflow provides.
Dialogflow handles biasing a little differently through the use of Entities, so you don't need to control the speech biasing directly; Dialogflow can handle that for you, to some extent.
Since each user may have different event names, you'll probably want to use a User Entity, which is an entity you define and then populate on a user-by-user basis through Dialogflow's API. In your sample phrases, you can then use this entity name instead of @sys.any, or create another set of phrases that use this entity in addition. A rough sketch of the API call follows below.
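Here is a rough sketch against the Dialogflow V1 REST API of that era. The endpoint, the developer access token header, and the "event" entity name are all assumptions you should check against the current docs:

```javascript
// Register the user's stored event names as a user entity for one session.
const fetch = require('node-fetch');

function uploadUserEvents(sessionId, eventNames) {
  return fetch('https://api.dialogflow.com/v1/userEntities?v=20150910', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer <client-access-token>',  // placeholder token
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      sessionId: sessionId,
      entities: [{
        name: 'event',   // must match the entity used in your intent phrases
        entries: eventNames.map(e => ({ value: e, synonyms: [e] }))
      }]
    })
  });
}

// e.g. uploadUserEvents('my-session-id', ["Bob's Birthday", "Fred's Graduation"]);
```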

Ready-to-use intents & dialogs for a chatbot

I'm using IBM Watson Conversation to build a bot. I'm looking for ready-to-use dialogs & intents covering the most commonly used conversation statements.
Like: Welcome, Good Morning, Aha, Good Evening, How are you, Who are you, etc...
Actually, when I used api.ai from Google, there was a default WELCOME intent; it's AMAZING. I'm looking for something similar.
In the Conversation service, you get the default "welcome" message node and another node for "anything_else", which is executed when no other intent matches the user query. These two nodes are created for you the moment you go to the Dialog tab of the service for the first time.
This gives you a skeleton for adding new nodes as per your needs. Currently there aren't any other intents that the Conversation service provides by default, which also makes sense in some way, as everyone's needs might be different.
But the service does provide some default entities, which are called "system entities". These are common entities like a person's name, location, currency, etc. that might be used in almost all sorts of chatbot scenarios. By default they are disabled, but you can turn them on from the "Entities" tab by clicking on system entities. For better design, I recommend you check this documentation.
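For reference, here is roughly what those two default nodes look like in a workspace export, sketched as a JavaScript object (structure simplified; "welcome" and "anything_else" are special conditions evaluated by the service):

```javascript
// The two dialog nodes the Conversation service creates for you by default.
const defaultNodes = [
  {
    conditions: 'welcome',        // fires on the first turn of a conversation
    output: { text: 'Hello. How can I help you?' }
  },
  {
    conditions: 'anything_else',  // fires when no other node matches
    output: { text: "I didn't understand. You can try rephrasing." }
  }
];
```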

Can we configure our Bluemix chatbot application with multiple Conversation workspaces?

Can we configure our Bluemix chatbot application with multiple Conversation workspaces? If yes, how can we call a particular Conversation workspace based on the questions the user asks in the chatbot?
This can be done by your application. A scenario could be:
- The user input is analyzed by your app, or by sending it to the Natural Language Classifier or Natural Language Understanding service.
- Based on the results of the analysis, your app then sends the input to the specific workspace.
- Calls into a Conversation workspace are stateless, but carry an ID for the individual conversation (chat) and metadata about where in the dialog you are.
- That info could later be used to jump back to where the user was in the conversation for a given workspace.
IMHO, that technique could be used to support multiple spoken languages or to separate different, more complex subjects into individual workspaces. Take a look at the architecture diagram in the documentation for the general idea; a minimal routing sketch follows below.
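A minimal routing sketch with the watson-developer-cloud Node SDK (the workspace IDs are placeholders, and the keyword test is just a naive stand-in for a real NLC/NLU call):

```javascript
const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: '<service-username>',
  password: '<service-password>',
  version_date: '2017-05-26'
});

// One workspace per subject (placeholder IDs).
const WORKSPACES = {
  billing: '<billing-workspace-id>',
  support: '<support-workspace-id>'
};

// Naive stand-in for NLC/NLU: pick a workspace by keyword.
function pickWorkspace(text) {
  return /bill|invoice|payment/i.test(text) ? WORKSPACES.billing : WORKSPACES.support;
}

// Keep one context object per workspace so each conversation can be resumed.
const contexts = {};

function sendToWatson(text, callback) {
  const workspace_id = pickWorkspace(text);
  conversation.message({
    workspace_id: workspace_id,
    input: { text: text },
    context: contexts[workspace_id] || {}
  }, function (err, response) {
    if (err) return callback(err);
    contexts[workspace_id] = response.context;  // remember where we are in the dialog
    callback(null, response.output.text.join(' '));
  });
}
```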

Create a chatbot for the stock market using IBM Watson

I would like to create a stock bot which can have a basic conversation and give me stock prices in the conversation.
To get stock prices I am using the Yahoo Finance API.
For the basic conversation I am using the IBM Watson Conversation API. I have also used the IBM NLU (Natural Language Understanding) API to verify different company names asked in different ways, but I am not getting the expected results.
For example, if I ask
"What is price of INFY?"
then it should give me the correct answer, and the company name should be filtered out so that my action can pass INFY to the Yahoo Finance API. This should also work if I change the format of the question asked.
Below is the flow-chart setup which I made on the Node-RED panel of Bluemix (IBM).
Could you help me find the exact APIs and flow which could help me achieve my goal?
This is a pretty big one, but here are at least some first impressions...
The Watson Conversation service is already integrated with an NLU component - the Intents and Entities tabs. Company names can be extracted from the input text with the use of entities and entity synonyms. The drawback here is that the user needs to list all the possible variants of what a company name can look like, but on the other hand, the entity specification can be imported into Conversation through a CSV file.
In general, the integration of the Watson Conversation service with 3rd-party services needs to be done outside the Conversation service - as of now, it does not explicitly support calling 3rd-party APIs - so the Node.js solution here seems a sound one. What you need to specify is how the integration of WCS and the 3rd-party services will look. The general pipeline could look like this (see the sketch after this list):
1. The user inputs text to the system.
2. The text goes to the Watson Conversation service.
3. The intent and the company name are extracted in WCS.
4. WCS sends the text output and sets a special variable in the node's output field, such as "stocks": "Google", telling the Node.js component that sits after the Conversation service to look up the stock market value of Google and include it in the output text.
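A rough sketch of that last step (assuming the watson-developer-cloud Node SDK, a dialog node that sets "stocks" in its output JSON, and a hypothetical getQuote() helper wrapping whatever finance API you use):

```javascript
const ConversationV1 = require('watson-developer-cloud/conversation/v1');

const conversation = new ConversationV1({
  username: '<service-username>',
  password: '<service-password>',
  version_date: '2017-05-26'
});

// Hypothetical helper: look up the latest price for a company/symbol.
function getQuote(company, callback) {
  // ... call the Yahoo Finance (or another) API here ...
  callback(null, 123.45);
}

conversation.message({
  workspace_id: '<workspace-id>',
  input: { text: 'What is price of INFY?' }
}, function (err, response) {
  if (err) throw err;
  const reply = response.output.text.join(' ');
  const company = response.output.stocks;   // set by the dialog node
  if (!company) return console.log(reply);
  getQuote(company, function (err2, price) {
    if (err2) throw err2;
    console.log(reply + ' ' + company + ' is trading at ' + price + '.');
  });
});
```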
Now, back to your solution: it might also make sense to have a dedicated NLC service that is used only to extract the company names in the system. However, I would use this only if it turned out that, e.g., the entities in the WCS service are not robust enough to capture the companies properly (my feeling here is that for this particular use case the entities with synonyms might work OK).