I am working with IBM Watson Assistant for Korean and found that the failure rate for detecting the correct intent is very high. I therefore checked the language support and noticed an important missing feature, Entity Fuzzy Matching:
Partial match - With partial matching, the feature automatically suggests substring-based synonyms present in the user-defined entities, and assigns a lower confidence score as compared to the exact entity match.
This results in a chatbot that is not very intelligent, since we need to provide synonyms for every word. Check out the example below, where Watson Assistant in English can detect an intent from words that are not included in the examples at all. I tested this and found it is not possible for Korean.
I wonder if I have misunderstood something, or whether there is a way to work around this issue that I do not know of?
By default, you start with IBM Watson Assistant and an untrained dialog. You can significantly improve intent and entity detection by providing more examples and then using the dashboard to tag correctly understood conversations and to change incorrect intents / entities to the right ones. This is the preferred way and is simply part of the regular development process, which includes training the model.
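For illustration, here is a minimal sketch of adding more training examples to an existing intent through the V1 API with the Node SDK; the workspace ID, credentials, service URL, and the turn_on intent are placeholders, not values from your workspace:

```typescript
// Sketch: add more training examples to an existing intent via the V1 API.
// Workspace ID, API key, URL, and the 'turn_on' intent are placeholders.
import AssistantV1 from 'ibm-watson/assistant/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

const assistant = new AssistantV1({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY || '' }),
  serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',
});

async function addExamples(): Promise<void> {
  // Each new example triggers retraining of the workspace in the background.
  const examples = ['불 켜줘', '조명 켜 줄래?', '전등 좀 켜줘'];
  for (const text of examples) {
    await assistant.createExample({
      workspaceId: process.env.WORKSPACE_ID || '',
      intent: 'turn_on',
      text,
    });
  }
}

addExamples().catch(console.error);
```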
Another method, this time as a workaround, is to preprocess the dialog input using Watson Natural Language Understanding, which supports Korean, too.
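As a rough sketch of that idea (the service URL, credentials, and the choice of features are assumptions, not a prescribed setup):

```typescript
// Sketch: pre-process Korean user input with Watson Natural Language Understanding
// before passing it to the assistant. Credentials and URL are placeholders.
import NaturalLanguageUnderstandingV1 from 'ibm-watson/natural-language-understanding/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

const nlu = new NaturalLanguageUnderstandingV1({
  version: '2021-08-01',
  authenticator: new IamAuthenticator({ apikey: process.env.NLU_APIKEY || '' }),
  serviceUrl: 'https://api.us-south.natural-language-understanding.watson.cloud.ibm.com',
});

async function preprocess(text: string) {
  const { result } = await nlu.analyze({
    text,
    language: 'ko', // force Korean instead of auto-detection
    features: { keywords: {}, entities: {} },
  });
  // The extracted keywords/entities can be normalized and appended to the
  // message sent to Watson Assistant, or used to pick an intent directly.
  return result;
}

preprocess('거실 불 좀 켜줘').then(r => console.log(JSON.stringify(r, null, 2)));
```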
BTW: I use German for some of my bots and it requires training for some scenarios.
In addition to Henrik's answer, here are a couple of tips for creating intents:
Provide at least five examples for each intent.
Always re-train your system
If the system does not recognize the correct intent, you can correct it. To correct the recognized intent, select the displayed intent and then select the correct intent from the list. After your correction is submitted, the system automatically retrains itself to incorporate the new data.
Remember, the Watson Assistant service scores each intent's confidence independently, not in relation to other intents.
Avoid conflicts, and resolve any that arise - the Watson Assistant application detects a conflict when two or more intent examples in separate intents are so similar that Watson Assistant is confused about which intent to use.
Related
I am currently building a Watson Assistant in which I need to store collected data in a database. I am trying to do this by calling an endpoint I have created that will insert the data into my database; however, I do not know how I can use webhooks in my Actions skill.
The documentation has an overview of where and how webhooks are supported by Watson Assistant. Action skills only offer pre / post message hooks. When you compare an Action skill with a Dialog skill, Action skills are simpler to get started with, but Dialog skills offer all the features. What you are looking for is available for dialog nodes in Dialog skills.
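As an illustration only, here is a rough sketch of creating such a dialog node programmatically with the V1 Node SDK; the node name, condition, parameters, and result variable are made up, and the webhook URL itself is configured separately in the Dialog skill's webhook settings:

```typescript
// Sketch: create a dialog node (Dialog skill) that calls the skill-level webhook
// and stores the response in a context variable. All names below are placeholders.
import AssistantV1 from 'ibm-watson/assistant/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

const assistant = new AssistantV1({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY || '' }),
  serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',
});

assistant.createDialogNode({
  workspaceId: process.env.WORKSPACE_ID || '',
  dialogNode: 'save_order',
  conditions: '#place_order',
  title: 'Save order to database',
  actions: [
    {
      name: 'main_webhook',                // the skill-level webhook
      type: 'webhook',
      parameters: { item: '<? input.text ?>' },
      result_variable: 'webhook_result_1', // response lands in this context variable
    },
  ],
}).catch(console.error);
```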
The Actions skill is still quite new compared to the Dialog skill, and we have a lot of really exciting plans for Actions that will be coming out over time. The ability to call out to external systems is being totally revamped, at least compared to Dialog webhooks, to be much easier to use and to scale across your business. It's one of our top priorities for the future but will take some time to release. I can't give a date now, but if you need this for your production assistant, then, as data_henrik said, we recommend you use the Dialog skill.
Can the IBM Watson Conversation / Assistant service detect more than one intent in a single sentence?
Example of input:
play music and turn on the light
Intent 1 is #Turn_on
Intent 2 is #Play
==> the answer must address both intents at once: music is played and the light is turned on
If so, how can I do that?
Yes, Watson Assistant returns all detected intents with their associated confidence. See here for the API definition. The response returned by Watson Assistant contains an array of intents recognized in the user input, sorted in descending order of confidence.
The documentation has an example of how to deal with multiple intents and their confidence. Also be aware of the alternate_intents setting, which allows even more intents with lower confidence to be returned.
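For illustration, a minimal sketch of a message call with alternate intents enabled (workspace ID, credentials, and service URL are placeholders):

```typescript
// Sketch: request alternate intents so more than one intent (with its confidence)
// comes back for a single utterance.
import AssistantV1 from 'ibm-watson/assistant/v1';
import { IamAuthenticator } from 'ibm-watson/auth';

const assistant = new AssistantV1({
  version: '2021-06-14',
  authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY || '' }),
  serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com',
});

async function detectIntents(text: string) {
  const { result } = await assistant.message({
    workspaceId: process.env.WORKSPACE_ID || '',
    input: { text },
    alternateIntents: true, // return several intents instead of only the top one
  });
  // The intents array is already sorted by descending confidence.
  for (const i of result.intents ?? []) {
    console.log(`${i.intent}: ${i.confidence}`);
  }
  return result.intents ?? [];
}

detectIntents('play music and turn on the light');
```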
While #data_henrik is correct in how to get the other intents, it doesn't mean that the second question is related.
Take the following example graph, where we map the intents versus confidence that comes back:
Here you can clearly see that there are two intents in the person's question.
Now look at this one:
You can clearly see that there is only one intent.
So how do you solve this? There are a couple of ways.
You can check whether the first and second intents fall within a certain percentage of each other. This is the easiest to detect, but trickier to code when you want to act on two different intents. It can get messy, and you will sometimes get false positives.
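A minimal sketch of that check (the 10% margin is an arbitrary example, not a recommended value):

```typescript
// Sketch: treat the input as a compound question when the top two intents are
// within a chosen margin of each other.
interface RuntimeIntent { intent: string; confidence: number; }

function compoundByMargin(intents: RuntimeIntent[], margin = 0.10): RuntimeIntent[] {
  if (intents.length < 2) return intents.slice(0, 1);
  const [first, second] = intents;    // already sorted by descending confidence
  return first.confidence - second.confidence <= margin
    ? [first, second]                 // close enough: handle both intents
    : [first];                        // otherwise handle only the top intent
}
```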
At the application layer you can run K-Means on the intent results. K-Means allows you to group intents into buckets: you create two buckets (K=2), and if there is more than one intent in the first bucket, you have a compound question. I wrote about this, with a sample, on my site.
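A small sketch of that idea, using a plain one-dimensional K-Means with K=2 and no libraries; the "more than one intent in the top bucket" rule follows the description above:

```typescript
// Sketch: cluster the confidence scores into two buckets (K=2); if the top
// bucket holds more than one intent, treat the input as a compound question.
interface RuntimeIntent { intent: string; confidence: number; }

function topBucket(intents: RuntimeIntent[]): RuntimeIntent[] {
  if (intents.length < 2) return intents;
  const scores = intents.map(i => i.confidence);
  let high = Math.max(...scores);
  let low = Math.min(...scores);
  let assignment: number[] = [];

  // Iterate until the two centroids stop moving.
  for (let iter = 0; iter < 50; iter++) {
    assignment = scores.map(s => (Math.abs(s - high) <= Math.abs(s - low) ? 0 : 1));
    const highGroup = scores.filter((_, idx) => assignment[idx] === 0);
    const lowGroup = scores.filter((_, idx) => assignment[idx] === 1);
    const newHigh = highGroup.reduce((a, b) => a + b, 0) / (highGroup.length || 1);
    const newLow = lowGroup.length ? lowGroup.reduce((a, b) => a + b, 0) / lowGroup.length : low;
    if (newHigh === high && newLow === low) break;
    high = newHigh;
    low = newLow;
  }
  return intents.filter((_, idx) => assignment[idx] === 0);
}

// A compound question is one where the high-confidence bucket has 2+ intents.
const isCompound = (intents: RuntimeIntent[]) => topBucket(intents).length > 1;
```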
There is a new feature you can play with in Beta called "Disambiguation". It allows you to flag dialog nodes with a question to ask when the choice is unclear. Then, if two candidates are found, it will ask "Did you mean ...?" and the user can select one.
Is this disambiguation feature available in non-production environments, in Beta?
I know that Dialogflow can be trained on particular entities, but I wanted to find out whether or not Google Assistant can understand my entities.
I've tried searching the official site but could not get a clear understanding of whether or not I need to go with Dialogflow.
Actions on Google allows you to extend the Google Assistant by writing your own app (i.e. an Action). In your Action, you can tailor the conversational experience between the Google Assistant and a user. To write an Action you will need a natural language understanding mechanism, which is what Dialogflow provides.
You can learn more about Actions on Google development in the official docs. There are also official informational talks about Actions on Google and Dialogflow online, such as
"An introduction to developing Actions for the Google Assistant (Google I/O '18)"
I'm not quite sure what you mean by your last sentence; there is no way to define entities for the Google Assistant other than through Dialogflow. Regarding your question, there is indeed no information on how entities are handled and how good one can reasonably expect the recognition to be. This is especially frustrating for the automated expansion feature, where it is basically a lottery which values will be picked up and which will not. Extensive testing is really the only thing one can do there.
We've been using Microsoft's LUIS cognitive service as an ML tool for our chatbot. We've observed that whenever a swear word is entered, there is no response from the bot. I couldn't find anything about this in the documentation, except that LUIS can identify slang words.
I would also like to know if anyone knows how to customize your Chatbot's response in such a scenario?
Any help would be great. Thank you!
LUIS doesn't filter swear words. To explain the lack of response from your chatbot, it would be necessary to see the code for the bot. If the user isn't in a dialog and utters a swear word, your bot should either map it to a defined intent, map it to the crowd-favorite "None" intent, or do nothing with it. To my knowledge, the only time the chatbot will do nothing is when a handler for the "None" intent isn't defined.
To handle an utterance that contains swearwords it's necessary to know the context behind it.
At certain points, the SDK you're using may block swearwords indirectly. E.g. a user saying, "#$%! yes!" to a confirm prompt may have the bot asking the user to repeat themselves with either a yes or no response.
An extremely simple and intrusive way to handle swear words in the Node SDK is to create a bot.dialog() that activates through .triggerAction(). You can use a regexp so the chatbot responds to swear words by switching to this dialog. You can also use a custom Intent Recognizer to recognize swear words.
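As an illustration of that approach, a minimal sketch with the Bot Builder Node SDK (v3); the word list, connector credentials, and responses are placeholders:

```typescript
// Sketch: a dialog that is triggered whenever an utterance matches a profanity
// regex, regardless of where the user currently is in the conversation.
import * as builder from 'botbuilder';

const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD,
});

const bot = new builder.UniversalBot(connector, (session) => {
  // Default handler: forward to LUIS, fall back, etc.
  session.send("Sorry, I didn't get that.");
});

bot.dialog('profanityDialog', (session) => {
  session.send("Let's keep it friendly. How can I help you?");
  session.endDialog();
}).triggerAction({
  // Global trigger: a (placeholder) regex of words you consider profanity.
  matches: /\b(badword1|badword2)\b/i,
});
```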
The 'Swear' intent needs to be implemented by hand in LUIS. I suggest separating it from the None intent in LUIS.
In the Bot, it is possible to have the same handler for None and Swear intents, or to have separate handlers and potentially different Bot behaviors for these two intents.
I am wondering how to go about implementing Location Automation Field, as suggested in this article: http://uxmovement.com/forms/new-form-techniques-proven-to-save-time-and-money/
Are there libraries or services that can help me figure out the City/State given the zip code? I know Google has the Geocoder/decoder and Google Places search, which could potentially be useful, but their Terms of Use mandate that you use their services in conjunction with displaying the results on their map, which is a weird thing to do when the user is filling out billing info...
Cheers to you for observing TOS. That's good business.
You could use SmartyStreets. LiveAddress API supports city/state and ZIP code lookups. It now has auto-complete as you type an address, too.
I work at SmartyStreets. In fact, we developed the jQuery plugin which does address validation and auto-complete for you. It even supports freeform address fields.
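As a rough sketch of such a lookup: the endpoint, query parameters, and response shape below are assumptions based on the SmartyStreets US ZIP Code API and should be checked against the current documentation.

```typescript
// Sketch: ZIP -> City/State lookup to prefill a billing form.
// Endpoint, auth parameters, and response fields are assumptions; verify in the docs.
async function cityStateFromZip(zip: string): Promise<{ city: string; state: string } | null> {
  const url = new URL('https://us-zipcode.api.smartystreets.com/lookup');
  url.searchParams.set('auth-id', process.env.SMARTY_AUTH_ID || '');
  url.searchParams.set('auth-token', process.env.SMARTY_AUTH_TOKEN || '');
  url.searchParams.set('zipcode', zip);

  const res = await fetch(url.toString());
  if (!res.ok) return null;
  const data = await res.json();
  const match = data?.[0]?.city_states?.[0]; // assumed response shape
  return match ? { city: match.city, state: match.state_abbreviation } : null;
}

// Example: fill the city/state inputs once the user types a ZIP code.
cityStateFromZip('90210').then(r => console.log(r));
```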
Related question by yours truly, over on UX stack exchange: https://ux.stackexchange.com/questions/22196/combining-all-the-address-fields-into-one