Does the model get trained in Watson Machine Learning, in some other third-party engine, or in a micro-service within OpenScale?
After the model is trained, where does it get served? Can I inspect/modify the model?
The model is trained in the Fairness micro-service of IBM AI OpenScale and is not deployed anywhere. It is used to de-bias the output of the user's model as and when required. The model is not available for users to inspect or modify, as it is not a replica of the user's model.
To train myself on the Bonsai platform, I've used a model called "Activity Based Cost Analysis" and developed a Brain with Bonsai to optimize the model's performance.
The Brain has been fully trained and exported to an Azure Web App. Locally it worked well, but when I upload the model to the AnyLogic cloud I get the following error message:
This is how the Bonsai Connector block properties look in AnyLogic:
The AnyLogic public cloud prohibits any access to the Web, so this will not work.
I've found an API labeled aws-ecomm-customers-who-viewed-x-also-viewed but I can't figure out if it's for any product or only works for products on your own store.
I ask because the Amazon website no longer seems to display "Customers Also Viewed" for any products at all.
Amazon Personalize trains ML models to provide recommendations based on your data only. These models are private to your AWS environment and do not include data from Amazon.com or any other source outside of what you provide.
https://docs.aws.amazon.com/personalize/latest/dg/how-it-works.html
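Because a Personalize campaign is trained only on interaction data you upload, a recommendation request is scoped entirely to your own AWS resources. A minimal sketch using boto3 (the campaign ARN and user ID are placeholders, and the optional `client` parameter exists only so the function can be exercised without AWS credentials):

```python
def get_user_recommendations(user_id, campaign_arn, num_results=10, client=None):
    """Return item IDs recommended for a user by a Personalize campaign.

    The campaign only knows the interaction data you uploaded; nothing
    from Amazon.com is included.
    """
    if client is None:
        import boto3  # real call path; requires AWS credentials
        client = boto3.client("personalize-runtime")
    resp = client.get_recommendations(
        campaignArn=campaign_arn,
        userId=user_id,
        numResults=num_results,
    )
    return [item["itemId"] for item in resp["itemList"]]
```

In production you would call it as `get_user_recommendations("some-user", "arn:aws:personalize:...:campaign/my-campaign")` and let it create the real `personalize-runtime` client.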
I’m trying to analyze customer feedback data (unstructured data) using IBM Data Science Experience (DSX) and Bluemix services. The objective is to do sentiment analysis.
Is it possible to call the Bluemix instances from DSX for this exercise? If yes, I’m looking for a sample Watson Machine Learning Flow.
Any alternative idea?
We are able to provide an initial training model and ask for recommendations. When asking for recommendations we can provide new usage events. Are these persisted into the model at all? Do they modify the model?
Is there another way the data is supposed to be updated or do we need to retrain a new model every time we want to enrich the model?
https://azure.microsoft.com/en-us/services/cognitive-services/recommendations/
EDIT:
We are trying to use the "Recommendations Solution Template" which deploys a solution to Azure and provides a swagger endpoint for working with the model (https://gallery.cortanaintelligence.com/Tutorial/Recommendations-Solution)
It appears the Cognitive Services API is much richer than this. Can the swagger version's models be updated?
After more experience with this I discovered a few things as of August 21st, 2017:
While not intuitive for the uninitiated, new data is only persisted into the model by training a new model build that includes it.
This allows a form of versioning the model, and means when you make new models you can switch recommendations to work how they did before if they don't work as well.
The recommended method appears to be to batch usage data and create new builds of the model on an interval.
The APIs do allow passing in recent usage data so that it is accounted for at scoring time; it's just not persisted.
The "upload usage events" call in the cognitive services API does not seem to work. Uploading the new usage data via a file does appear to work.
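The batch-and-rebuild pattern described above can be sketched as follows. Note the endpoint path, header name, and usage-file column layout here are illustrative assumptions, not documented guarantees; check the API reference for your deployment before relying on them:

```python
import csv
import io
import urllib.request


def usage_events_to_csv(events):
    """Serialize batched usage events (dicts with userId/itemId/timestamp)
    into the CSV layout assumed for the usage-file upload."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    for e in events:
        writer.writerow([e["userId"], e["itemId"], e["timestamp"]])
    return buf.getvalue()


def upload_usage_file(base_url, model_id, api_key, csv_body):
    """POST a batched usage file to the (assumed) usage endpoint, after
    which a new build would be triggered to persist the data."""
    req = urllib.request.Request(
        f"{base_url}/models/{model_id}/usage",
        data=csv_body.encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": api_key,
            "Content-Type": "text/csv",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

On an interval (say nightly), you would serialize the accumulated events, upload the file, and kick off a new build so the data is persisted into a fresh, versioned model.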
The Recommendations Solution Template vs. the Cognitive Services API
It appears the Recommendations Solution Template is a packaged version of the SAR (Smart Adaptive Recommendations) model from the Cognitive Services API, optimized for ease of use.
For other popular recommendation models such as FBT, I presume the Cognitive Services API should be used, since the deployable template only allows one model type.
Additional note on the Preview Status of the API
It seems Microsoft is deprecating the datamart as of February and directing people to this preview API instead. It therefore seems reasonable to presume this preview is likely to graduate to general availability rather than be killed.
Looking at the Retrieve and Rank web UI bound to the conversation-enhanced application:
https://github.com/watson-developer-cloud/conversation-enhanced
no questions have been uploaded for training, though there is a trainingdata.csv.
I would like to understand how trainingdata.csv was constructed.
Thank you!
That training data was created manually rather than with the UI, following the approach described in https://www.ibm.com/watson/developercloud/doc/retrieve-rank/training_data.shtml (it was prepared before the tooling was available).
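A file like trainingdata.csv can be assembled by hand from relevance judgements. A minimal sketch, assuming a ground-truth layout of each row holding the question text followed by alternating candidate answer IDs and integer relevance labels (check the linked training-data doc for the exact format your service version expects):

```python
import csv
import io


def build_training_rows(labelled_questions):
    """Build ground-truth CSV text for a ranker.

    labelled_questions: list of (question_text, [(answer_id, relevance), ...]),
    where higher relevance means a better answer for that question.
    """
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    for question, judgements in labelled_questions:
        row = [question]
        for answer_id, relevance in judgements:
            row.extend([answer_id, relevance])
        writer.writerow(row)
    return buf.getvalue()
```

The judgements themselves typically come from humans reviewing candidate answers returned by the retrieval step and scoring how well each one answers the question.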