Does IBM Watson Assistant support Indian regional languages?

I have an idea to develop a health care assistant that can suggest remedies for health-related ailments. I think it would be better if it could understand regional languages like Tamil, Malayalam, Hindi, etc., to provide better insights. Is this possible with IBM Watson Assistant?

Those regional languages are not (yet) supported by IBM Watson Assistant. For reference, this is the list of supported languages for IBM Watson Assistant.
It is possible to switch between languages during a dialog. The IBM Watson Natural Language Understanding service supports Hindi, Tamil, and Malayalam. You could use it to at least detect which language the user is asking in, react accordingly, and maybe ask whether it is okay to continue in another language or provide some basic help.
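As a minimal sketch of that workaround, assuming the ibm-watson Python SDK (the API key, service URL, and version string below are placeholders), you could let Natural Language Understanding report the detected language of an incoming message before deciding how to respond:

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholders: replace with your own credentials and service URL.
authenticator = IAMAuthenticator('your-api-key')
nlu = NaturalLanguageUnderstandingV1(version='2021-08-01', authenticator=authenticator)
nlu.set_service_url('your-service-url')

def detect_language(text):
    # Request any lightweight feature; the response also carries the detected language.
    result = nlu.analyze(text=text, features=Features(keywords=KeywordsOptions(limit=1))).get_result()
    return result.get('language')  # e.g. 'hi', 'ta', 'ml'

user_message = "incoming user text in Tamil, Malayalam, or Hindi"
if detect_language(user_message) != 'en':
    # Ask whether it is okay to continue in English, or return some basic help.
    pass
```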

Related

IBM Watson Assistant for non-English language - Intent is not recognized

I am working with IBM Watson Assistant for Korean and found the failure rate for detecting the correct intent is very high. Therefore, I decided to check the language support and noticed an important missing feature, Entity Fuzzy Matching:
Partial match - With partial matching, the feature automatically suggests substring-based synonyms present in the user-defined entities, and assigns a lower confidence score as compared to the exact entity match.
This results in a chatbot that is not very intelligent, and for which we need to provide synonyms for every word. Check out the example below, where Watson Assistant in English can detect an intent from words that are not included in the examples at all. I tested this and found it is not possible for Korean.
I wonder if I have misunderstood something, or if there is a way to work around this issue that I do not know of.
By default, you start with IBM Watson Assistant and an untrained dialog. You can significantly improve the recognized intents and entities by providing more examples and then using the dashboard to tag correctly understood conversations and to change incorrect intents/entities to the right ones. This is the preferred way and is simply part of the regular development process, which includes training the model.
Another method, this time as a workaround, is to preprocess the dialog input using Watson Natural Language Understanding, which has Korean support, too.
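A compact sketch of that preprocessing step, assuming the same ibm-watson Python SDK setup as in the earlier example (nlu is an already-configured NaturalLanguageUnderstandingV1 client; the feature limits are arbitrary):

```python
from ibm_watson.natural_language_understanding_v1 import Features, EntitiesOptions, KeywordsOptions

def preprocess_korean(nlu, text):
    # Pull entities and keywords out of the Korean input before it reaches the dialog,
    # so the Assistant skill can match on normalized values instead of raw text.
    return nlu.analyze(
        text=text,
        features=Features(entities=EntitiesOptions(limit=5), keywords=KeywordsOptions(limit=5)),
        language='ko',
    ).get_result()
```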
BTW: I use German for some of my bots and it, too, requires training for some scenarios.
In addition to Henrik's answer, here are a couple of tips for creating intents:
Provide at least five examples for each intent.
Always re-train your system.
If the system does not recognize the correct intent, you can correct it. To correct the recognized intent, select the displayed intent and then select the correct intent from the list. After your correction is submitted, the system automatically retrains itself to incorporate the new data.
Remember, the Watson Assistant service scores each intent's confidence independently, not in relation to other intents (a sketch after this list shows how to inspect these scores).
Avoid conflicts and, if there are any, resolve them. The Watson Assistant tooling detects a conflict when two or more intent examples in separate intents are so similar that Watson Assistant is confused as to which intent to use.
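To illustrate those independent confidence scores, here is a minimal sketch assuming the ibm-watson Python SDK and the Assistant v2 API (API key, service URL, and assistant ID are placeholders); requesting alternate intents returns every detected intent with its own confidence:

```python
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholders: replace with your own credentials, URL, and assistant ID.
authenticator = IAMAuthenticator('your-api-key')
assistant = AssistantV2(version='2021-06-14', authenticator=authenticator)
assistant.set_service_url('your-service-url')

session_id = assistant.create_session(assistant_id='your-assistant-id').get_result()['session_id']
response = assistant.message(
    assistant_id='your-assistant-id',
    session_id=session_id,
    input={'text': 'I have a headache', 'options': {'alternate_intents': True}},
).get_result()

# Each intent carries its own confidence; scores are not normalized against each other.
for intent in response['output']['intents']:
    print(intent['intent'], intent['confidence'])
```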

Can I make Google Assistant understand my entities and train it for the same, or do I need Dialogflow?

I know that Dialogflow can be trained for particular entities, but I wanted some insight on whether or not Google Assistant itself can understand my entities.
I've tried searching the official site but could not get a clear understanding of whether or not I need to go for Dialogflow.
Actions on Google allows you to extend Google Assistant by writing your own app (i.e. an Action). In your Action, you can tailor the conversational experience between the Google Assistant and a user. To write an Action you will need a natural language understanding mechanism, which is what Dialogflow provides.
You can learn more about Actions on Google development in the official docs. There are also official informational talks about Actions on Google and Dialogflow online, such as
"An introduction to developing Actions for the Google Assistant (Google I/O '18)"
I'm not quite sure what you mean with your last sentence; there is no way to define entities for Google Assistant other than Dialogflow. Regarding your question, there is indeed no information on how entities are handled and how good one can reasonably expect the recognition to be. This is especially frustrating for the automated expansion feature, where it is basically a lottery which values will be picked up and which will not. Extensive testing is really the only thing one can do there.
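As a hedged illustration of where entities are defined, here is a minimal sketch assuming the google-cloud-dialogflow (v2) Python client; the project ID, entity name, and values are made-up examples, and the automated expansion flag is the feature whose matching behavior is hard to predict:

```python
from google.cloud import dialogflow

def create_symptom_entity(project_id: str):
    client = dialogflow.EntityTypesClient()
    parent = dialogflow.AgentsClient.agent_path(project_id)

    entity_type = dialogflow.EntityType(
        display_name="symptom",
        kind=dialogflow.EntityType.Kind.KIND_MAP,
        # Automated expansion lets Dialogflow match values not listed below,
        # but which values get picked up is unpredictable, hence the need for testing.
        auto_expansion_mode=dialogflow.EntityType.AutoExpansionMode.AUTO_EXPANSION_MODE_DEFAULT,
        entities=[
            dialogflow.EntityType.Entity(value="headache", synonyms=["headache", "head pain"]),
            dialogflow.EntityType.Entity(value="fever", synonyms=["fever", "high temperature"]),
        ],
    )
    return client.create_entity_type(parent=parent, entity_type=entity_type)
```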

How to make voice assistants handle scientific terminology?

As a POC, I would like to explore how I can make an Alexa skill or Google Home action understand non-English words, like chemical names and scientific terminology.
The Google Cloud Speech-to-Text API allows adding a context, but that is not a full-grown solution.
How should I approach this problem?
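For reference, the "context" mentioned above refers to speech contexts (phrase hints) that bias recognition toward domain terms. A minimal sketch, assuming the google-cloud-speech Python client; the bucket path and phrase list are placeholders:

```python
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    # Phrase hints bias recognition toward domain terms such as chemical names.
    speech_contexts=[speech.SpeechContext(phrases=["acetylsalicylic acid", "ibuprofen", "benzaldehyde"])],
)
audio = speech.RecognitionAudio(uri="gs://your-bucket/your-audio.wav")

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```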

Is API.AI the native way to build conversational skills for Google Assistant?

I have developed a conversational skill using API.AI and deployed it to Google Home, but API.AI's support seems limited and I am unable to do certain things, like playing an audio file. The question I have is whether it's better to stick with API.AI or switch to Actions on Google for the long term.
Google has said that API.AI is the recommended way to build an agent for Actions on Google for those who don't need/want to do their own NLU. They seem to expect that most developers will use API.AI because it does some of the work for you, with the NLU being the prime example; cf. Alexa, where the developer is expected to specify all the different utterance variations for an intent (well, almost all - it will do some minor interpretation for you).
On the other hand, keep in mind that API.AI was created/designed before Actions on Google existed and before it was purchased by Google - it was designed to be a generic bot creation service. So, where you gain something in creating a single bot that can fulfill many different services and having it do some of the messy work for you, you will certainly lose something compared to the power and control you have when writing to the API of one specific service - something more than just the NLU, IMO, though I can't speak to playing an audio file specifically.
So, if you plan to target just the one service (and an audio bot is not relevant to most of the other services supported by API.AI) and you are finding the API.AI interface to be limiting, then you should certainly consider writing your service with the Actions on Google SDK.

Does Personality Insights have language support for Hebrew?

I am using Personality Insights and I do not see language support for Hebrew. Does Personality Insights have language support for Hebrew? If so, what is the language code? If not, any idea when it will be supported?
Currently, Personality Insights only has support for content in English and Spanish.
We are working on adding more languages, but don't have anything we can announce yet.
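Regarding "what is the code": the language is passed as a content_language code when calling the service. A minimal sketch, assuming the ibm-watson Python SDK (credentials and URL are placeholders); per the answer above, only 'en' and 'es' are accepted for content, so there is no Hebrew code to pass:

```python
from ibm_watson import PersonalityInsightsV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholders: replace with your own credentials and service URL.
authenticator = IAMAuthenticator('your-api-key')
personality_insights = PersonalityInsightsV3(version='2017-10-13', authenticator=authenticator)
personality_insights.set_service_url('your-service-url')

profile = personality_insights.profile(
    content='Sample input text (the service needs a few hundred words for a meaningful profile).',
    accept='application/json',
    content_type='text/plain',
    content_language='en',   # 'en' or 'es' per the answer above; 'he' is not accepted
    accept_language='en',
).get_result()
```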