I am new to Watson and have a fairly basic question. I understand that custom models can be created for Discovery, but I'm not sure about the sentiment model.
In my domain there are certain verbs and adjectives that indicate positives and negatives, and I'd like to train the sentiment model to identify these. Is this possible?
Thanks in advance,
JDG
For sentiment models, I'm assuming you're referring to Watson NLU. If so, then yes, you can create custom models via Watson Knowledge Studio and use them in the API. Full documentation can be found here.
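As a rough sketch of how a custom model is referenced once deployed, this builds the JSON body that NLU's /v1/analyze call accepts, requesting the built-in sentiment feature alongside entity extraction backed by a Knowledge Studio model. The model ID and text here are placeholders, not real values:

```python
import json

def build_analyze_payload(text, wks_model_id):
    """Build the JSON body for NLU's /v1/analyze call: request the
    built-in sentiment feature alongside entity extraction backed by
    a custom model deployed from Watson Knowledge Studio."""
    return {
        "text": text,
        "features": {
            "sentiment": {},  # built-in document-level sentiment
            # Passing a model ID switches entity extraction from the
            # default system model to the custom WKS model.
            "entities": {"model": wks_model_id},
        },
    }

# "my-wks-model-id" is a placeholder for the ID WKS shows after deployment.
payload = build_analyze_payload("The outcome was favorable.", "my-wks-model-id")
print(json.dumps(payload, indent=2))
```

You would POST this body to the service's /v1/analyze endpoint with your credentials; the response then carries both the sentiment score and the custom entities.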
Related
Can you tell me how to use a Watson Knowledge Studio model to improve Watson Discovery Service's query ability? I am not sure whether a WKS model works with Natural Language queries, the Discovery Query Language, or both. If it works with both, how can we see its effect directly, by confidence or other parameters? Thank you.
You can definitely add entity and relation enrichments by applying a custom model to improve, or more accurately customize, Watson Discovery Service. Train your model in WKS and deploy it to Discovery. Then, in the Discovery tooling, click Configure data at the top right, go to the Enrichments tab, add entity and relation enrichments, enter the ID of the custom model you deployed from WKS, and apply. That should customize your Discovery service.
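For reference, the UI steps above amount to writing an enrichment entry into the collection's configuration. This is a sketch of that entry in the Discovery v1 configuration shape as I understand it; the model ID is a placeholder for the one WKS shows after deployment:

```python
import json

# Placeholder: substitute the model ID shown in WKS after deploying.
WKS_MODEL_ID = "your-deployed-wks-model-id"

def wks_enrichment(source_field="text"):
    """Build the enrichment entry that the Discovery 'Configure data'
    UI writes when you add entity and relation enrichments backed by
    a custom WKS model."""
    return {
        "source_field": source_field,
        "destination_field": "enriched_text",
        "enrichment": "natural_language_understanding",
        "options": {
            "features": {
                # Both features point at the same deployed WKS model.
                "entities": {"model": WKS_MODEL_ID},
                "relations": {"model": WKS_MODEL_ID},
            }
        },
    }

print(json.dumps(wks_enrichment(), indent=2))
```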
For an IBM chatbot, can I tell the chatbot that a word in a conversation is an entity, or do I have to make it an intent?
For example
Question: What are your interests?
Answer: Sports
Can I add sports as an entity right from the menu?
Entities by themselves will work if you know what the person is going to respond with and the responses don't deviate much.
Where you don't know the exact wording but you do know the structure of how they ask, you can use contextual entities.
The last option is to shape your message to the end user to change their behavior.
For example, "What are your interests?" is very broad. Two examples:
"Do you like to play sports?" gives a yes/no answer, which you can drill down on.
"What kind of sports do you like?" allows you to make a narrow entity to catch the answer.
I would recommend making interests an entity, because it makes more sense.
This quote from the IBM developer blog talks about the use case for entities:
In some scenarios, it makes sense to define the entities of interest in your Conversation workspace; for example, when defining a “toppings” entity for a pizza ordering bot. However, in other scenarios, it is impractical to define all possible variations for an entity and it is best to rely on a service to extract the entity; for example, when extracting a “city” entity for a weather bot.
Also, How to build a Chatbot with Watson Assistant (free chatbot course) is a great beginner course.
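To make the recommendation concrete, here is a sketch of what an "interests" entity could look like in the Watson Assistant workspace JSON format; the values and synonyms are invented for illustration:

```python
import json

def interests_entity():
    """An 'interests' entity in the Watson Assistant workspace JSON
    shape: each value lists synonyms the service should match when
    they appear in user input."""
    return {
        "entity": "interests",
        "values": [
            # Matching any synonym maps the input to the parent value.
            {"value": "sports", "synonyms": ["football", "soccer", "tennis"]},
            {"value": "music", "synonyms": ["concerts", "bands"]},
        ],
    }

print(json.dumps(interests_entity(), indent=2))
```

In a dialog node you would then branch on `@interests:sports` rather than training a separate intent for each interest.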
One simple question: how can I create more than one classifier within an instance of Natural Language Classifier using the beta toolkit?
I ask because I don't know how to upload and train a new classifier after I've just deployed one.
Thanks for the help.
Your question is about the Toolkit. You can manage your training data and classifiers by using the IBM Watson™ Natural Language Classifier Toolkit web application. The toolkit gives you a unified view of all the classifiers that are running in the same Bluemix service instance. So you need to create another classifier and use the toolkit to manage it.
You can also look at this document about using the Natural Language Classifier Toolkit.
Note: the first classifier is free, but you will need to pay for each additional one.
See the API Reference to use NLC.
As @Sayuri mentions above, use the Toolkit to manage your Classifiers.
Something to keep in mind: when you create the first NLC instance (the little box in Bluemix), this is called a service instance. Within this service instance, you can have up to 7 unique classifiers. If you need to create an 8th classifier, you will need to create a new service instance.
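Each additional classifier in the instance just needs its own metadata and training data; the toolkit drives the same underlying "create classifier" call. As a sketch (the class labels and texts are made up), this builds the two pieces that call takes, a small JSON metadata object and two-column CSV training data:

```python
import csv
import io
import json

def training_csv(examples):
    """Serialize (text, class) pairs into the two-column CSV format
    that NLC expects as training data: text in the first column,
    class label in the second."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for text, label in examples:
        writer.writerow([text, label])
    return buf.getvalue()

# Metadata names the new classifier and sets its language.
metadata = json.dumps({"language": "en", "name": "my-second-classifier"})
data = training_csv([
    ("How hot will it be today?", "temperature"),
    ("Is it going to rain?", "conditions"),
])
print(metadata)
print(data)
```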
I need to create a UML class diagram and a use case diagram for a RESTful API that I developed using API Platform and Symfony 3.2 (backend) and Ionic 2 (frontend).
But I don't know exactly how to describe the structure of my backend API through the class diagram.
If anyone has any idea or could be of any help, I'd truly appreciate it. Thank you!
So the solution to my problem was a bit of all of the following:
https://www.ibm.com/developerworks/rational/library/design-implement-restful-web-services/ :
I concluded from this that, although it's difficult to model a RESTful API in a class diagram (since it's basically just a bunch of methods), you can treat the resources as classes, add the methods (basically the HTTP methods), and add the paths to each resource.
This was also of huge help:
https://firstinfinity.wordpress.com/modeling_rest_web_services/
Another (simpler) way is to use a tool like Pikturr, which transforms your Swagger spec into a UML diagram.
Software for designing REST APIs:
Visual Paradigm
IBM Rational Software Architect
I hope this helps.
API Platform automatically generates Swagger documentation for your API at the URL http://localhost/docs.json.
You can generate a UML diagram from the Swagger documentation using tools like https://github.com/nrekretep/pikturr
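For a sense of what such tools consume, here is a minimal, invented Swagger 2.0 document of the kind API Platform serves at /docs.json; the paths and definitions are placeholders, not from a real project:

```python
import json

# A minimal Swagger 2.0 document: "paths" become the operations and
# "definitions" become the classes in a generated UML diagram.
swagger_spec = {
    "swagger": "2.0",
    "info": {"title": "Example API", "version": "1.0"},
    "paths": {
        "/books": {
            "get": {
                "summary": "List books",
                "responses": {"200": {"description": "Book collection"}},
            }
        }
    },
    "definitions": {
        "Book": {
            "type": "object",
            "properties": {
                "id": {"type": "integer"},
                "title": {"type": "string"},
            },
        }
    },
}
print(json.dumps(swagger_spec, indent=2))
```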
This is late, but here is another approach.
If you do not know about Visual Paradigm, you can give it a try.
You can use this content: Visual Paradigm - How to Design REST API with UML? It makes it easy to represent your REST API. I prefer Postman collections, by the way (though they are not UML).
And if you are producing other kinds of UML artifacts for documentation purposes, Visual Paradigm gives you a bunch of UML diagrams to do that, so you stay within that ecosystem of diagrams.
See ya.
The real answer to the question is to use a UML Component Diagram, since its very purpose is to model architectures based on services. Check this link for more info: https://diagramasuml.com/componentes/
I have been learning by building a simple application that replicates the Watson Health Q/A application (https://watsonhealthqa.mybluemix.net/), but the Q/A application seems to have been deprecated, and the documentation and forums seem to say that I should use the Dialog service with the Natural Language Classifier. Any pointers if this is the case? Second, where can I get the template file (the dialog XML file with the classifier) for a dialog based on the already trained health data that previously existed in the Q/A Health application? That is, would I need to train the classifier with the same data they used for their Q/A application? Is this now under health services?
@simon's answer is correct and has links to the new services. The older QA service has been deprecated and replaced by other, singularly focused services. The new services are more flexible and more composable.
Side note... I guess I should update my demos (the URL you mention above) because those old services are no longer active.