IBM Watson Natural Language Classifier - ibm-cloud

One simple question: how can I create more than one classifier within an instance of Natural Language Classifier using the beta toolkit?
I ask because I don't know how to upload and train a new classifier after I've just deployed one.
Thanks for the help.

Your question is about the Toolkit. You can manage your training data and classifiers by using the IBM Watson™ Natural Language Classifier Toolkit web application. The toolkit gives you a unified view of all the classifiers that are running in the same Bluemix service instance. So you need to create another classifier and then use the toolkit to manage it.
You can also consult the documentation about using the Natural Language Classifier toolkit.
Note: the first classifier is free, but you will need to pay for each additional one.
See the API Reference to use NLC.
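As a rough illustration of what that API expects, the sketch below builds the two parts that the "Create classifier" call (POST /v1/classifiers) takes as a multipart upload: a training_metadata JSON part and a training_data CSV part. The classifier name and training phrases here are invented examples; the actual upload would be done with an HTTP client or one of the Watson SDKs.

```java
import java.nio.charset.StandardCharsets;

// Sketch of the payload pieces for POST /v1/classifiers.
// The upload itself (multipart/form-data) is left to an HTTP client.
public class NlcTrainingPayload {

    // training_metadata part: a name for the new classifier plus its language
    static String trainingMetadata(String name, String language) {
        return "{\"name\": \"" + name + "\", \"language\": \"" + language + "\"}";
    }

    // training_data part: CSV rows of "text,class" (example phrases, made up)
    static String trainingCsv() {
        return "How hot will it be today?,temperature\n"
             + "Is it windy outside?,conditions\n";
    }

    public static void main(String[] args) {
        String metadata = trainingMetadata("my-second-classifier", "en");
        byte[] csv = trainingCsv().getBytes(StandardCharsets.UTF_8);
        System.out.println(metadata);
        System.out.println(csv.length + " bytes of training data");
    }
}
```

Each classifier created this way shows up as a separate entry in the toolkit view of the same service instance.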

As @Sayuri mentions above, use the Toolkit to manage your classifiers.
Something to keep in mind is that when you create the first NLC instance (the little box in Bluemix), this is called a service instance. Within this service instance, you can have up to 7 unique classifiers. If you need to create an 8th classifier, you will need to create a new service instance.

Related

Integrate SuccessFactors and S4HANA using S4 SDK

Currently we are referring to the https://blogs.sap.com/2017/05/10/first-steps-with-sap-s4hana-cloud-sdk/ blog series for our side-by-side extensibility use cases.
We are trying to build a side-by-side extensibility application that integrates SuccessFactors and an S/4HANA system using the S4 SDK.
We couldn't find any blogs or sample code implementations for integrating SuccessFactors with an S/4HANA system. We found the following relevant sample:
https://github.com/SAP/cloud-s4-sdk-examples/tree/master/Employee-Browser-Neo
But we couldn't understand the detailed scenario and output of this GitHub sample code.
It would be great if someone could help us with blogs/URLs/code.
Currently, there is no dedicated material for SuccessFactors. However, I recommend the following blog post:
https://blogs.sap.com/2018/04/30/deep-dive-10-with-sap-s4hana-cloud-sdk-generating-java-vdm-for-s4hana-custom-odata-service/
It describes how to generate the virtual data model for a custom OData service.
You can use the same approach to connect to SuccessFactors.
As input for the generator, you would use the metadata file provided by SuccessFactors; for example, you can access the $metadata path of the OData service.
Afterwards, you can use the generated Java classes to access the SuccessFactors API.
To connect to SuccessFactors, create a destination for your SuccessFactors system, e.g. called SuccessFactorsODataEndpoint. That works locally via an environment variable, or in the destination service on SAP Cloud Platform, as explained here: https://blogs.sap.com/2017/05/21/step-4-with-sap-s4hana-cloud-sdk-calling-an-odata-service/
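For local testing, such a destination can be supplied through the destinations environment variable. A sketch of what that could look like (the URL and credentials are placeholders; the exact format is described in the blog post linked above):

```
destinations=[{name: "SuccessFactorsODataEndpoint", url: "https://<your-sf-host>/odata/v2", username: "<user>", password: "<password>"}]
```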
In the execute method of the virtual data model you can define which destination to use:
.execute(new ErpConfigContext("SuccessFactorsODataEndpoint"))

RDF or OWL based Rapid Application Development Framework?

I am looking for an easy-to-implement solution for form-based ontology editing and I wonder if there are any active projects and which of them is the right path to follow.
I need to create instances of an ontology (let's call it ontology A) using forms (either web or desktop) and store them in a triple store (e.g. Virtuoso). I would like to hide as many details as possible regarding the ontological relationships between the entities defined in ontology A and provide a plain, simple user interface for CRUD (Create-Retrieve-Update-Delete) operations based on the entity schema defined in ontology A.
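To make the intent concrete, here is a minimal sketch of how such a form submission could be turned into a SPARQL INSERT DATA update for a store such as Virtuoso. The ontology namespace, class, and property IRIs are invented for illustration; a real implementation would also need literal escaping and datatype handling.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: map form fields (property IRI -> value) onto a SPARQL update
// that creates one instance of an ontology-A class in the triple store.
public class FormToSparql {

    static String insertData(String subjectIri, String classIri, Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("INSERT DATA {\n");
        sb.append("  <").append(subjectIri).append("> a <").append(classIri).append("> ;\n");
        int i = 0;
        for (Map.Entry<String, String> f : fields.entrySet()) {
            sb.append("    <").append(f.getKey()).append("> \"").append(f.getValue()).append("\"");
            // semicolon between properties, full stop after the last one
            sb.append(++i < fields.size() ? " ;\n" : " .\n");
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("http://example.org/ontologyA#name", "Alice");
        fields.put("http://example.org/ontologyA#role", "curator");
        System.out.println(insertData("http://example.org/data/person1",
                                      "http://example.org/ontologyA#Person", fields));
    }
}
```

The update string would then be sent to the store's SPARQL endpoint; the Retrieve/Update/Delete parts of CRUD map to SELECT, DELETE/INSERT, and DELETE DATA in the same way.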
For example, I have found two possible solutions in the Protege ecosystem:
PropertyFormPortlet: this is not a live feature in the current WebProtege version.
facsimile project: as described in the respective paper, this is a solution that was implemented for a specific context, so adapting it to another domain would not be as straightforward as I would like.
I wonder: is there another solution (even outside the Protege ecosystem) that could facilitate such form-based ontology editing? Could somebody provide some guidance?
Just in case someone lands on this question, I am writing down my conclusions. Practically none of the options I tried worked fully, but I still found out some interesting things.
OpenLink Structured Data Editor:
OSDE is a browser plugin that aims at populating RDF graphs in the form of files, based on linked vocabularies. In my case it didn't work, as my locally hosted ontology cannot play the role of a "Linked Vocabulary". However, the OpenLink team said they would work on it.
OData2SPARQL:
In our test, the WebIDE did not manage to create the forms out of the box as suggested by the respective video tutorial. However, we managed to use the OpenUI5 library as a client of the OData services automatically created by OData2SPARQL, providing a web service interface for our ontology.
Ontowiki:
In our test environment, OntoWiki partially worked: we could save data, but there were some bugs when trying to add properties, etc. The OntoWiki developers said that they plan a refactoring in order to actively support it in newer hosting environments, but this is not the case right now.

Watson Custom sentiment

I am new to Watson and have a fairly basic question. I understand that custom models can be created for Discovery, but I'm not sure about the sentiment model.
In my domain there are certain verbs and adjectives that indicate positives and negatives, and I'd like to train the sentiment model to identify these. Is this possible?
Thanks in advance,
JDG
For sentiment models, I'm assuming you're referring to Watson NLU. If so, then yes, you can create custom models via Watson Knowledge Studio and use them in the API. Full documentation can be found here.
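For reference, a sketch of the JSON body that NLU's /v1/analyze endpoint accepts, with a Knowledge Studio custom model referenced under the entities feature. The text and the model ID below are placeholders; the actual request would be sent with an HTTP client or a Watson SDK using your service credentials.

```java
// Sketch: the request body for Watson NLU /v1/analyze, combining the
// built-in sentiment feature with a custom Knowledge Studio entities model.
public class NluAnalyzeBody {

    static String analyzeBody(String text, String customModelId) {
        return "{\n"
             + "  \"text\": \"" + text + "\",\n"
             + "  \"features\": {\n"
             + "    \"sentiment\": {},\n"
             + "    \"entities\": {\"model\": \"" + customModelId + "\"}\n"
             + "  }\n"
             + "}";
    }

    public static void main(String[] args) {
        // "your-wks-model-id" is a placeholder for a deployed WKS model ID
        System.out.println(analyzeBody("The throughput degraded badly after the patch.",
                                       "your-wks-model-id"));
    }
}
```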

Watson Q/A Health to Dialog Service/Natural language classifier

I have been learning by building a simple application that replicates the Watson Health Q/A application (https://watsonhealthqa.mybluemix.net/), but the Q/A application seems to have been deprecated, and the documentation/forums seem to say that I should use the Dialog service with the Natural Language Classifier. Any pointers on whether this is the case? Second, where can I get the template file (the dialog XML file with the classifier) for the dialog based on the already trained health data that previously existed in the Q/A Health application? That is, would I need to train the classifier with the same data they used for their Q/A application? Is this now under health services?
@simon's answer is correct and has links to the new services. The older QA service has been deprecated and replaced by other, singularly focused services. The new services are more flexible and more composable.
Side note: I guess I should update my demos (the URL you mention above) because those old services are no longer active.

WSO2 CEP Extension ML with Collaborative Filtering

Is it possible to integrate a collaborative filtering (explicit data) model generated with the WSO2 Machine Learner module? I want to query the model with Siddhi, but in the WSO2 docs I could not find any way to do so.
Yes, it is possible to integrate machine learning models with WSO2 CEP and use Siddhi to get the predictions. Please use this guide.
Tishan
No. The current released versions of WSO2 Machine Learner (1.0.0 and 1.1.0) do not provide support for collaborative filtering as a CEP extension, therefore you cannot use collaborative filtering models created with Machine Learner with a Siddhi query.
At the moment, only models created for numerical prediction, classification, anomaly detection and deep learning can be used with a Siddhi query.
It is not only the collaborative filtering algorithm: any machine learning model you develop using WSO2 Machine Learning Server can easily be integrated with other products in the WSO2 ecosystem. For instance, you can integrate WSO2 ML models with WSO2 ESB using a special mediator called the predict mediator [1]. We have also written an extension for the WSO2 CEP Server [2]. In addition, we are planning to add a few more extensions in upcoming releases.
Sometimes you might want to use machine learning models built with WSO2 ML Server outside of the WSO2 ecosystem. For this purpose, we provide two options, namely Predictive Model Markup Language (PMML) and pure Java serialized object support.
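A minimal sketch of the Siddhi form such a prediction query takes with the CEP extension (the stream definitions, attribute names, and model storage path below are placeholders; see [2] for the exact syntax of your version):

```
define stream InputStream (feature1 double, feature2 double);

from InputStream#ml:predict('registry://path/to/my.model', 'double')
select *
insert into PredictionStream;
```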
[1]. https://docs.wso2.com/display/ML110/Predict+Mediator+for+WSO2+ESB
[2]. https://docs.wso2.com/display/ML110/WSO2+CEP+Extension+for+ML+Predictions