I am currently building a dialog with Watson Assistant for a store. I need the assistant to be able to save the description of a product in a context variable using slots. The description should be a sentence of about 50 characters; any idea on how I can approach this?
Related
I can't figure out a problem with IBM Watson Assistant. I've chosen to use an option response type. That way, I can show a list in my chatbot where each item is clickable and has an associated value.
When a user clicks one of the options, the associated user input value is sent to the assistant. How can I give this value to a context variable? Is it possible?
Yes, that is possible for option responses and is the typical use case. I have demonstrated that in a simple chatbot I use in some of my talks.
I use different language codes as options. Users can click on them and the result is saved in the context variable langcode. Because users could have already specified the language before I ask for it, I check for that and save it.
The simple bot has more uses of options and the full skill is available in case you want to see all the details.
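To illustrate, a dialog node with an option response and a follow-up node that stores the clicked value could look roughly like the sketch below. The variable name langcode is the one mentioned above; the labels, values and node layout are made-up examples rather than the actual skill, and the JSON-editor content is shown here as Python dicts.

```python
# Hypothetical sketch of a dialog node with an "option" response,
# written as Python dicts in the same shape as the node's JSON editor.
# The context variable name "langcode" comes from the answer above;
# labels and values are made-up examples.
language_node = {
    "output": {
        "generic": [
            {
                "response_type": "option",
                "title": "Which language?",
                "options": [
                    {"label": "English", "value": {"input": {"text": "en"}}},
                    {"label": "German", "value": {"input": {"text": "de"}}},
                ],
            }
        ]
    }
}

# A follow-up node that handles the click could then save the submitted text
# into the context using a SpEL expression:
save_langcode_node = {
    "context": {
        "langcode": "<? input.text ?>"  # store whatever the user (or the click) sent
    }
}
```

When a user clicks an option, the client sends the associated value as regular user input, so the follow-up node can pick it up with input.text just as if it had been typed.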
I need to add more than 5 skills to Watson Assistant, but I don't know how to add them.
I have a Plus plan with 100 assistants and 5 skills. When I add 5 skills to 5 assistants, the system responds "You have limit 5 skills". How can I use the other 95 assistants?
See the pricing and plan overview for IBM Watson Assistant for how many skills can be created in each service plan. The documentation details how to add skills to an assistant.
A dialog skill defines the intents, entities and the dialog structure. A search skill can be used for integrating IBM Watson Discovery. Once you have skills defined, typically only a single dialog skill, you add them to an assistant. The assistant is the wrapper around the skill for building the chatbot and integrating it into a website, Slack or Facebook Messenger or hooking it up with a phone system.
You say that you have a Plus plan. Note that the docs point out restrictions for the "Plus Trial" plan, which is limited to 5 skills.
Create a new resource group, then choose this new resource group while creating the new service.
I made an Action on Google where the assistant asks a question about a country name and the user gives a country name to describe its demography. But the Flash Card template I used already knows the answer and doesn't take the user's answer. I want to make it user-driven and not assistant-driven.
I tried other templates, but none of them solves this.
I am using IBM Watson Assistant to create a troubleshooting guide. I wish to put a disclaimer at the beginning, and only if the user checks a checkbox saying they agree to the statements in the disclaimer will they be able to go further in the conversation.
I tried including the HTML checkbox code, but it doesn't seem to work. I don't want the "options" response that is present in the Assistant; I wish to have a checkbox.
Note that Watson Assistant is typically part of a larger solution, not a chatbot on its own. What app are you using as the user interface? You could let your app react to a context variable and display the checkbox. Not all user interfaces and integrations may support displaying a checkbox.
Another option is to ask the user to agree to the terms. If they answer "I agree" or "Yes", then the dialog moves forward.
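As a rough sketch of the first idea, here is how a client app could react to a context variable and only continue once the user has agreed. It assumes the ibm-watson Python SDK (V1 workspace API), a made-up context variable named show_disclaimer, and placeholder credentials; none of these names come from the original question.

```python
# Minimal sketch, assuming the ibm-watson Python SDK (V1 workspace API) and a
# dialog whose first node sets a made-up context variable "show_disclaimer".
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator


def render_disclaimer_checkbox() -> bool:
    """Hypothetical UI hook: display the disclaimer with a checkbox and return
    True once the user ticks it. Here it is just a console stand-in."""
    answer = input("Type 'agree' to accept the disclaimer: ")
    return answer.strip().lower() == "agree"


assistant = AssistantV1(
    version="2021-06-14",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
assistant.set_service_url("YOUR_SERVICE_URL")

# First round trip: the dialog's welcome node could set $show_disclaimer = true.
response = assistant.message(
    workspace_id="YOUR_WORKSPACE_ID",
    input={"text": "start troubleshooting"},
).get_result()
context = response.get("context", {})

if context.get("show_disclaimer"):
    # The UI, not Watson Assistant, renders the checkbox and only lets the
    # conversation continue once the user has agreed.
    if render_disclaimer_checkbox():
        assistant.message(
            workspace_id="YOUR_WORKSPACE_ID",
            input={"text": "I agree"},
            context=context,  # pass the context back so the dialog can move on
        )
```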
I'm evaluating Watson, and part of this is to upload Wikipedia data and then ask questions of this data. To achieve this I created a conversation service:
Note the text: 'You can input: Your domain expertise in the form of intents, entities and crafted conversation'.
My understanding was that I could upload a piece of text, in this case a Wikipedia article, and then train Watson on this text. After training I could then ask questions of Watson in relation to the text.
This article seems to suggest so: https://developer.ibm.com/answers/questions/11659/whats-the-easiest-way-to-populate-a-corpus-with-content-like-wikipedia-or-twitter.html. With regard to uploading data it says: 'You could always pull the latest wikipedia dump and upload it. You do the upload through the experience manager, which is a web UI.'
Reading https://developer.ibm.com/answers/questions/29133/access-to-watson-experience-manager-and-watson-developer-portal.html, it states: 'Currently Watson Experience Manager is only available to Watson Ecosystem and Watson Developer Cloud Enterprise partners.' The article is dated 2014; is this still valid? Can I not upload a piece of text and train Watson against it unless I'm a 'Watson Ecosystem and Watson Developer Cloud Enterprise' partner? Is my only alternative to train Watson using 'intents, entities and crafted conversation'?
The Watson conversation service has three components.
1. Intents. These are example user questions grouped under an intent. For example, "I can't log in" would map to an intent such as USER_ACCESS. For more details on this, read up on the NLC service.
2. Entities. These are keywords that share a common theme. For example, a weather entity could have the values "sunny", "rain" and "cloudy".
3. Dialog. This is the conversational flow, i.e. the answers returned for a given combination of intent and entities.
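To make the three pieces concrete, here is a minimal sketch of one message round trip, assuming the ibm-watson Python SDK (V1 workspace API) and a workspace that already contains intents and entities like the ones described above. The credentials and IDs are placeholders; only USER_ACCESS and the weather example come from this answer.

```python
# Minimal sketch of one message round trip, assuming the ibm-watson Python SDK
# (V1 workspace API). Credentials and IDs are placeholders.
from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV1(
    version="2021-06-14",
    authenticator=IAMAuthenticator("YOUR_APIKEY"),
)
assistant.set_service_url("YOUR_SERVICE_URL")

result = assistant.message(
    workspace_id="YOUR_WORKSPACE_ID",
    input={"text": "I can't log in and it is sunny outside"},
).get_result()

# 1. Intents: which trained intent matched, e.g. USER_ACCESS, with a confidence.
for intent in result.get("intents", []):
    print(intent["intent"], intent["confidence"])

# 2. Entities: which keywords were spotted, e.g. weather:sunny.
for entity in result.get("entities", []):
    print(entity["entity"], entity["value"])

# 3. Dialog: the answer text produced by the dialog tree for this intent/entity.
print(result["output"]["text"])
```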
The conversation service documentation has a demo which explains it better. There is also a useful blog post outlining how to get started in six steps, and a video demonstrating how to quickly build a chatbot.
If your use case is analysing documents, there is Watson Explorer or Retrieve & Rank.
In relation to Watson Experience Manager: that is the older, pre-Bluemix version of Watson and it is no longer accessible. It had the functionality of NLC, Dialog, Retrieve & Rank and Document Conversion.