Trying to upload an AnyLogic model to AnyLogic Cloud that uses an AI Brain developed in Bonsai - anylogic

As personal training for working with the Bonsai platform, I've used a model called "Activity Based Cost Analysis" and developed a Brain using Bonsai to optimize the model's performance.
The Brain has been fully trained and exported into an Azure Web App. Locally it worked well, but when I upload the model to AnyLogic Cloud I get the following error message:
This is how the Bonsai Connector block properties look in AnyLogic:

The AnyLogic public cloud prohibits any access to the Web, so this will not work.

Related

Where does the de-biased model get trained in Watson OpenScale?

Does the model get trained in Watson Machine Learning, some other 3rd party engines or micro services in OpenScale?
After the model is trained, where does it get served? Can I inspect/modify the model?
The model is trained in the Fairness micro-service of IBM AI OpenScale and is not deployed anywhere. It is used to de-bias the output of the user's model as and when required. The model is not available for users to use, as it is not a replica of the user's model.

Sentiment analysis using IBM Data Science Experience (DSX) and Bluemix Services

I’m trying to analyze customer feedback data (unstructured data) using IBM Data Science Experience (DSX) and Bluemix services. The objective is to do sentiment analysis.
Is it possible to call the Bluemix instances from DSX for this exercise? If yes, I’m looking for a sample Watson Machine Learning Flow.
Any alternative idea?

Does the Recommendation service allow enriching an existing model with new data?

We are able to provide an initial training model and ask for recommendations. When asking for recommendations we can provide new usage events. Are these persisted at all into the model? Do they manipulate the model at all?
Is there another way the data is supposed to be updated or do we need to retrain a new model every time we want to enrich the model?
https://azure.microsoft.com/en-us/services/cognitive-services/recommendations/
EDIT:
We are trying to use the "Recommendations Solution Template" which deploys a solution to Azure and provides a swagger endpoint for working with the model (https://gallery.cortanaintelligence.com/Tutorial/Recommendations-Solution)
It appears the Cognitive Services API is much richer than this. Can the swagger version's models be updated?
After more experience with this I discovered a few things as of August 21st, 2017:
While not intuitive for the uninitiated, new usage data is only persisted into the model by training a new model.
This allows a form of model versioning: when you build new models, you can switch recommendations back to an earlier build if the new one does not perform as well.
The recommended method appears to be to batch usage data and create new builds of the model on an interval.
The APIs do allow passing in recent usage data to allow recent data to be accounted for at scoring time, it's just not persisted.
The "upload usage events" call in the Cognitive Services API does not seem to work. Uploading the new usage data via a file does appear to work.
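The batching workflow described above can be sketched as follows. This is a minimal illustration, not the official client: the usage-file schema (user ID, item ID, timestamp per row) and the endpoint paths in the comments are assumptions based on the Recommendations Solution Template's swagger endpoint, so adjust them to your own deployment.

```python
# Sketch: batch recent usage events into a usage file, which can then be
# uploaded to the deployed swagger endpoint to create a new model build.
import csv
import io

def usage_file_from_events(events):
    """Serialize (user_id, item_id, iso_timestamp) tuples into a
    comma-separated usage file (assumed schema; check your swagger docs)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for user_id, item_id, timestamp in events:
        writer.writerow([user_id, item_id, timestamp])
    return buf.getvalue()

# Hypothetical batch of usage events collected since the last build.
events = [
    ("u1", "item42", "2017-08-21T10:00:00Z"),
    ("u2", "item17", "2017-08-21T10:05:00Z"),
]
payload = usage_file_from_events(events)
print(payload)

# Uploading the file and kicking off a new build would then look roughly like
# (URL, header name, and build type are placeholders for your deployment):
# import requests
# base = "https://<your-deployment>.azurewebsites.net/api/models/<modelId>"
# requests.post(f"{base}/usage?usageDisplayName=batch-2017-08-21",
#               headers={"x-api-key": API_KEY},
#               data=payload.encode("utf-8"))
# requests.post(f"{base}/builds", headers={"x-api-key": API_KEY},
#               json={"buildType": "recommendation"})
```

Running the batch on an interval (e.g. nightly) and switching the active build once the new one finishes matches the versioning behaviour described above.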
The Recommendations Solution Template vs. the Cognitive Services API
It appears the Recommendations Solution Template is a packaged version of the SAR (Smart Adaptive Recommendations) model inside the Cognitive Services API that is optimized for ease of use.
I'm presuming that for other popular recommendation models like FBT, the Cognitive Services API should be used, as the deployable template only allows one model type.
Additional note on the Preview Status of the API
It seems Microsoft is deprecating the datamart as of February and sending people to this preview API instead. It therefore seems reasonable to presume this preview is highly likely to graduate from preview rather than be killed.

Training data for Conversation Enhanced Watson Application

Looking at Retrieve and Rank Web UI bound to the conversation-enhanced application:
https://github.com/watson-developer-cloud/conversation-enhanced
no questions have been uploaded for training, though there is a trainingdata.csv.
I would like to understand how trainingdata.csv was constructed.
Thank you!
That training data was created manually, not using the UI, following the approach described in https://www.ibm.com/watson/developercloud/doc/retrieve-rank/training_data.shtml (because it was prepared before the tooling was available).
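For reference, a manually built Retrieve and Rank training file generally pairs each question with candidate answer document IDs and graded relevance labels (higher = more relevant). The rows below are made-up examples, not taken from trainingdata.csv, and the exact quoting and relevance scale should be checked against the linked documentation.

```csv
"what is the capital of france","doc_paris","4","doc_lyon","1","doc_berlin","0"
"how do I reset my password","doc_reset_guide","3","doc_faq","2"
```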

Use Azure ML methods like an API

Is it possible to use machine learning methods from Microsoft Azure Machine Learning as an API from my own code (without ML Studio), with everything computed on their side?
You can publish an experiment (machine learning functions you hooked together in Azure ML Studio) as an API. When you call that API in your custom code you give it your data and all the computation runs in the cloud in Azure ML.
I am reasonably new to Azure Machine Learning, but I do not believe it is possible to use the API without using ML Studio at all. For example, you will need the API key to call the API and authenticate with it, and the only way I am aware of to obtain this key is via ML Studio (after you have published a trained experiment).
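Calling such a published experiment can be sketched as below. The `Inputs`/`GlobalParameters` body shape matches the classic Azure ML Studio Request-Response endpoint; the URL placeholders, the API key, and the column names are assumptions standing in for your own experiment's schema, which you copy from the web service page in ML Studio.

```python
# Sketch: build the request for a published Azure ML Studio web service.
# All computation happens in Azure when the POST (commented out) is sent.
import json

def build_request(api_key, column_names, rows):
    """Construct headers and JSON body for the Request-Response endpoint."""
    headers = {
        "Content-Type": "application/json",
        # The API key comes from the ML Studio web service dashboard.
        "Authorization": "Bearer " + api_key,
    }
    body = {
        "Inputs": {
            "input1": {"ColumnNames": column_names, "Values": rows}
        },
        "GlobalParameters": {},
    }
    return headers, json.dumps(body)

# Placeholder key and input schema for illustration only.
headers, body = build_request(
    "YOUR_API_KEY",
    ["feature1", "feature2"],
    [["0.5", "1.2"]],
)
print(body)

# The actual call, with the endpoint URL copied from ML Studio:
# import requests
# url = ("https://<region>.services.azureml.net/workspaces/<workspace-id>"
#        "/services/<service-id>/execute?api-version=2.0&details=true")
# response = requests.post(url, data=body, headers=headers)
# scored = response.json()["Results"]["output1"]
```

This is consistent with the answer above: the experiment is authored and published in ML Studio once, after which any custom code holding the endpoint URL and key can score data in the cloud.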