How to train the Watson Conversation service? - ibm-cloud

I'm evaluating Watson, and part of this is to upload Wikipedia data and then ask questions of that data. To achieve this I created a Conversation service:
Note the text: 'You can input: Your domain expertise in the form of intents, entities and crafted conversation'.
My understanding was that I could upload a piece of text, in this case a Wikipedia article, and then train Watson on this text. After training I could then ask questions of Watson in relation to the text.
With regard to uploading data, this answer seems to suggest as much: https://developer.ibm.com/answers/questions/11659/whats-the-easiest-way-to-populate-a-corpus-with-content-like-wikipedia-or-twitter.html ('You could always pull the latest wikipedia dump and upload it. You do the upload through the experience manager, which is a web UI.').
However, https://developer.ibm.com/answers/questions/29133/access-to-watson-experience-manager-and-watson-developer-portal.html states: 'Currently Watson Experience Manager is only available to Watson Ecosystem and Watson Developer Cloud Enterprise partners.' That answer is dated 2014; is this still valid? Can I not upload a piece of text and train Watson against it unless I'm a 'Watson Ecosystem and Watson Developer Cloud Enterprise' partner? Is my only alternative to train Watson using 'intents, entities and crafted conversation'?

The Watson Conversation service has three components:
1. Intents. These are example questions grouped under an "intent" label. For example, "I can't log in" would have an intent of USER_ACCESS. For more details on this, read up on the NLC service.
2. Entities. These are keywords that share a common theme. For example, a weather entity might have the values "sunny", "rain" and "cloudy".
3. Dialog. This is the conversational flow, i.e. the answers given to questions, in relation to intent + entity (see the sketch below).
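For a concrete picture, here is a minimal sketch of roughly how these three pieces might appear in a workspace definition. The workspace name, intent, entity and node names below are made-up examples:

```python
# A minimal sketch of the three components as they might appear in a
# Conversation workspace definition. All names here are made-up examples.
workspace = {
    "name": "support-bot",
    "intents": [
        # Intents: example questions grouped under an intent label.
        {"intent": "USER_ACCESS",
         "examples": [{"text": "I can't log in"},
                      {"text": "my password does not work"}]},
    ],
    "entities": [
        # Entities: keywords that share a common theme.
        {"entity": "weather",
         "values": [{"value": "sunny"},
                    {"value": "rain"},
                    {"value": "cloudy"}]},
    ],
    "dialog_nodes": [
        # Dialog: the response returned when a condition matches.
        {"dialog_node": "reset_password",
         "conditions": "#USER_ACCESS",
         "output": {"text": "Let's reset your password."}},
    ],
}
```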
The conversation service documentation has a demo which explains it better. There is also a useful blog post outlining how to get started in six steps, and a video demonstrating how to quickly build a chatbot.
If your use case is analysing documents, look at Watson Explorer or Retrieve & Rank instead.
In relation to Watson Experience Manager: that is the older, pre-Bluemix version of Watson, and it is no longer accessible. It combined the functionality of NLC, Dialog, Retrieve & Rank and Document Conversion.

Related

Building a custom chatbot with GPT-3

I want to build a chatbot by training it on my custom data. The procedure is provided by OpenAI at https://platform.openai.com/docs/guides/fine-tuning. I would like the bot to recommend my Facebook page (which is about movies) at the end, after having some meaningful conversation about movies with a customer. I want to leverage the power of the Davinci model, as it is the most powerful.
My only question is: can I include all the important information, like the features of my page, my website, etc., in the custom model's training data (so I don't have to rewrite the whole prompt for each user)? If I repeat the whole prompt with all the information every time, my tokens will be consumed very quickly. Thank you.
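For reference, the legacy fine-tuning flow that supported davinci trained on prompt/completion pairs in a JSONL file, which is where the page details would live so that run-time prompts stay short. A rough sketch, with a made-up file name, page name and examples:

```python
# Sketch of preparing prompt/completion pairs for the legacy OpenAI
# fine-tuning flow. The idea: bake the page details into the training
# completions once, so each user prompt at run time stays short.
# The file name, page name and example texts are all made up.
import json

examples = [
    {"prompt": "Can you recommend a place to discuss movies?\n\n###\n\n",
     "completion": " Sure! Check out my Facebook page MovieTalk for daily "
                   "movie discussions and reviews. END"},
    {"prompt": "Where can I find more movie reviews?\n\n###\n\n",
     "completion": " My Facebook page MovieTalk posts new reviews every "
                   "week. END"},
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# The file would then be passed to the legacy fine-tuning endpoint, e.g.:
#   openai api fine_tunes.create -t training_data.jsonl -m davinci
```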

Watson Assistant does not transfer conversations correctly in Intercom

I'm developing a chatbot on the Watson Assistant platform, currently with the "Plus trial" plan.
I am using the integration with Intercom, as it is offered by Watson Assistant.
The chatbot has its own seat and inbox in Intercom, and conversations between users and the bot work without problems. The problem appears when I want to transfer conversations from the bot to a specific human agent in Intercom.
To do this, I have followed the instructions in the Watson Assistant documentation (https://cloud.ibm.com/docs/assistant?topic=assistant-deploy-intercom#deploy-intercom-config-backup).
The default team inbox to receive transferred messages is set to "Unassigned". I then created a rule stating that, for a specific dialog branch, the conversation should be transferred to a specific agent. The transfers are triggered with a "Connect to human agent" action in the dialog branch specified in the rule.
Unfortunately this setup does not work. The transfers do happen, but to the unassigned inbox, not to the inbox of the agent specified in the rule.
How could I make this setup work?
More information:
No code is involved in this integration. I attach two screenshots from the relevant configuration pages in Watson.
Home page of the Intercom integration
Page of the transfer settings

IBM Watson Assistant: How to add more than 5 skills?

I need to add more than 5 skills in Watson Assistant, but I don't know how to add them.
I have a Plus plan with 100 assistants and 5 skills. When I add 5 skills in 5 assistants, the system responds "You have limit 5 skills". How can I use the other 95 assistants?
See the pricing and plan overview for IBM Watson Assistant for how many skills can be created in each service plan. The documentation details how to add skills to an assistant.
A dialog skill defines the intents, entities and the dialog structure. A search skill can be used for integrating IBM Watson Discovery. Once you have skills defined, typically only a single dialog skill, you add them to an assistant. The assistant is the wrapper around the skill for building the chatbot and integrating it into a website, Slack or Facebook Messenger or hooking it up with a phone system.
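As an illustration of that wrapper idea, here is a minimal sketch of talking to an assistant (rather than to an individual skill) with the ibm-watson Python SDK; the API key, service URL and assistant ID are placeholders:

```python
# Sketch of messaging an assistant (the wrapper around your skills)
# with the ibm-watson Python SDK. The API key, service URL and
# assistant_id below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(
    version="2021-06-14",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
assistant.set_service_url(
    "https://api.us-south.assistant.watson.cloud.ibm.com")

# A session holds the conversation state across turns.
session = assistant.create_session(
    assistant_id="YOUR_ASSISTANT_ID").get_result()

response = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "Hello"},
).get_result()
print(response["output"])
```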
You say that you have a Plus plan. Note that the docs point out restrictions for the "Plus Trial" plan, which is limited to 5 skills.
Create a new resource group.
Then choose this new resource group while creating the new service.
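If you prefer to script those two steps, a rough sketch with the ibm-platform-services Python SDK follows; the API key, account ID and plan GUID are placeholders, and both steps can equally be done in the IBM Cloud console:

```python
# Sketch of the two steps above: create a resource group, then create
# the new service instance in it. The API key, account ID and plan
# GUID are placeholders.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_platform_services import ResourceControllerV2, ResourceManagerV2

authenticator = IAMAuthenticator("YOUR_API_KEY")

# Step 1: create the new resource group.
manager = ResourceManagerV2(authenticator=authenticator)
group = manager.create_resource_group(
    name="assistant-group-2",
    account_id="YOUR_ACCOUNT_ID",
).get_result()

# Step 2: create the new service instance in that resource group.
controller = ResourceControllerV2(authenticator=authenticator)
instance = controller.create_resource_instance(
    name="my-second-assistant",
    target="us-south",                       # region
    resource_group=group["id"],
    resource_plan_id="WATSON_ASSISTANT_PLAN_GUID",  # plan GUID from catalog
).get_result()
print(instance["id"])
```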

Being able to interrupt a chatbot in realtime

My goal is to make a chatbot capable of reading out a list of article titles (a long list; it could take the chatbot 5 minutes to read the entire list aloud), where the user can navigate the answer in real time by saying: next, previous, share Facebook, share Twitter, read full article, go back to article list, etc. I think a rule-based chatbot could manage it; no NLP needed.
I started to develop my own conversation logic with Google Dialogflow. The chatbot reads the article list, but it is not possible (orally!) to cut the chatbot off in the middle of the list and say 'next', 'previous', etc. without at least repeating the name of the article or the article number, which is not convenient for my needs at all.
For instance, the bot starts reading the article titles: "article 1, Elon Musk reads books. article 2, self driving car industry hot. article 3, ... article n." While the bot is reading "self driving car industry hot", the user says "tell me more" (orally). The bot then understands it was reading article 2 when it was interrupted, so it goes to the article itself and reads the full article.
I would be happy to hear how to build this kind of chatbot (only voice is needed); Python would be appreciated. Ideally I would be able to deploy the bot in as many places as possible (Amazon Alexa, Google Home, etc.).
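Whichever voice platform handles the barge-in itself, the bookkeeping described above boils down to remembering which title was being read when the interruption arrived. A platform-agnostic sketch, with all class and method names made up:

```python
# Platform-agnostic sketch of the bookkeeping described above: track
# which title is being read so a barge-in phrase like "tell me more"
# can resolve to that article. The speech layer (barge-in detection,
# text-to-speech) is assumed to come from the voice platform.
class ArticleReader:
    def __init__(self, titles):
        self.titles = titles
        self.index = 0  # which title is currently being read aloud

    def next_utterance(self):
        """The next title the TTS layer should speak."""
        return f"article {self.index + 1}, {self.titles[self.index]}"

    def handle_interruption(self, phrase):
        """Map a barge-in phrase to an action on the current position."""
        if phrase == "next":
            self.index = min(self.index + 1, len(self.titles) - 1)
            return self.next_utterance()
        if phrase == "previous":
            self.index = max(self.index - 1, 0)
            return self.next_utterance()
        if phrase == "tell me more":
            return f"reading full article: {self.titles[self.index]}"
        return "say next, previous, or tell me more"


reader = ArticleReader(["Elon Musk reads books",
                        "self driving car industry hot"])
print(reader.next_utterance())                    # article 1, ...
print(reader.handle_interruption("next"))         # article 2, ...
print(reader.handle_interruption("tell me more"))
```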

Where are questions for a Microsoft Azure QnA bot stored?

I run a QnA bot (Animal-Rights-Bot) that answers questions via Skype or email, but there is a discrepancy between the number of questions shown in the Azure dashboard and the bot's mailbox. I'm wondering where I can find the questions that were sent to the bot via Skype and by email.
You can get all the chat logs from QnA Maker in the Test tab on the portal; there is a "download chat logs" link.
All knowledge base content is stored in Azure storage by the QnA Maker tool. You need the combination of knowledge base ID and subscription key to access a knowledge base. The knowledge base contents are not used by the tool for any other purpose.
Here is the API reference documentation for QnA Maker; you can use it to programmatically query the knowledge base:
https://qnamaker.ai/Documentation/ApiReference
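For example, at the time of that documentation the runtime generateAnswer call looked roughly like the sketch below; the region, knowledge base ID, subscription key and question are placeholders, so check the linked reference for the exact URL shape of your QnA Maker version:

```python
# Rough sketch of querying a knowledge base programmatically via the
# QnA Maker generateAnswer endpoint of that era. The host, knowledge
# base ID, subscription key and question are placeholders.
import requests

KB_ID = "YOUR_KNOWLEDGE_BASE_ID"
SUBSCRIPTION_KEY = "YOUR_SUBSCRIPTION_KEY"

url = (f"https://westus.api.cognitive.microsoft.com/qnamaker/v2.0/"
       f"knowledgebases/{KB_ID}/generateAnswer")
response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    json={"question": "Are animals sentient?"},
)
for answer in response.json().get("answers", []):
    print(answer["answer"], answer["score"])
```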