Running Simulink with inputs in real time - matlab

I am trying to test a controller I have developed. I would like to use Simulink to create a model of the plant and run it on my computer. Separately, I would like to have the controller implemented in C++ on an embedded chip. I would like to send data from the Simulink model to the controller and then feed the controller's outputs back into Simulink. Is there a Simulink feature that supports this?

Related

How to save BERT Huggingface Question Answer transformer pipeline as a reusable model

I have written a Question/Answer BERT application that uses the transformers pipeline API. I would like to port this to the Raspberry Pi 4. Is there a way to capture the complete cached inference transformers pipeline model, quantize it, save it, and convert it to a TensorFlow Lite model? I want the user to be able to ask multiple ad hoc questions against the fine-tuned model.
You might want to look at pickle.
That's a practical way to encapsulate objects so you can save/export and import/load them.
You can save your whole model/pipeline in a pickle file and load it on your Raspberry Pi.

NetLogo LevelSpace Extension for Passing Agents Between Models

I'm using NetLogo's LevelSpace extension to pass agents between two models.
Specifically, I want agents to leave one model and appear, with all of their attributes, in another model, and vice versa.
For example, one model might represent a work environment where an agent spends 8 hours, and the other model the agent's home. The same agent would have all of the same attributes, just in a different environment. The agent's attributes would also need to be updated in each model.
I currently have a controller set up to control both model environments and, hopefully, will be able to record and plot data received from both models, but I need to be able to pass the agents back and forth first. Does anyone know whether this is actually possible and how to achieve it? The LevelSpace dictionary doesn't really offer a solution and I can't find a tutorial.

How to exclude (ignore) a GIS route programmatically in AnyLogic?

While working with AnyLogic, the user can open the properties of a GIS route (or some other object) and tick the "Ignore" checkbox. The GIS route is then excluded from the model, and after running the model the user won't see the object because it won't exist.
Is it possible to exclude some GIS routes interactively? For example, I drew a big GIS network with many GIS routes in AnyLogic, and after running the model (!) I want to be able to choose which routes should not be included in the network; e.g. this could be implemented in the Simulation window.
I was looking for suitable Java code but did not find anything except the Visible property.
You cannot programmatically "ignore" something; that is not how Java works.
In your case, the only way is to create the GIS network programmatically as well, at the start of the model. Then you have full control, because you can decide to create only part of your network depending on the user's choice.
Check the AnyLogic API on how to create GIS routes programmatically.
cheers

How can I allow only one instance of an app created in MATLAB App Designer to run?

I created an application in MATLAB App Designer and installed it in Apps.
I can run one or more instances of this app, but I need only one. For example, MATLAB's GUIDE has the option "GUI allows only one instance to run (singleton)".
If you check the code GUIDE generates, you will find that it uses global application data to keep count of application instances. So you could implement something similar with setappdata(0, ...) and getappdata(0, ...).

Continuously train CoreML model after shipping

In looking over the new CoreML API, I don't see any way to continue training the model after generating the .mlmodel and bundling it in your app. This makes me think that I won't be able to perform machine learning on my user's content or actions because the model must be entirely trained beforehand.
Is there any way to add training data to my trained model after shipping?
EDIT: I just noticed you could initialize a generated model class from a URL, so perhaps I can post new training data to my server, re-generate the trained model and download it into the app? Seems like it would work, but this completely defeats the privacy aspect of being able to use ML without the user's data leaving the device.
The .mlmodel file is compiled by Xcode into a .mlmodelc structure (which is actually a folder inside your app bundle).
Your app might be able to download a new .mlmodel from a server but I don't think you can run the Core ML compiler from inside your app.
Maybe it is possible for your app to download the compiled .mlmodelc data from a server, copy it into the app's Documents directory, and instantiate the model from that. Try it out. ;-)
(This assumes the App Store does not do any additional processing on the .mlmodelc data before it packages up your app and ships it to the user.)
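If the App Store does leave the bundle alone, the loading side is only a few lines. Here is a minimal sketch, assuming your own code has already downloaded and unpacked the compiled model into the Documents directory; the folder name MyModel.mlmodelc is a hypothetical placeholder:
import CoreML
import Foundation

// Minimal sketch: instantiate a model from a pre-compiled .mlmodelc folder
// that earlier code has placed in Documents ("MyModel.mlmodelc" is hypothetical).
let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
let modelcURL = documentsURL.appendingPathComponent("MyModel.mlmodelc")

do {
    let model = try MLModel(contentsOf: modelcURL)
    print("Loaded model: \(model.modelDescription)")
} catch {
    print("Failed to load model: \(error)")
}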
Apple has recently added a new API for on-device model compilation: you can now download your model and compile it on the device.
Core ML 3 also supports on-device model personalization. You can improve your model for each user while keeping their data private.
https://developer.apple.com/machine-learning/core-ml/
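For the personalization route, here is a minimal sketch of what an on-device update looks like (iOS 13+). The function, the file URLs, and the training batch are assumptions for illustration; the model must also have been marked updatable when it was converted:
import CoreML

// Hypothetical helper: runs one on-device update pass and saves the result.
// `compiledModelURL` points at an updatable compiled model; `trainingBatch`
// wraps the user's new examples as an MLBatchProvider.
func personalizeModel(compiledModelURL: URL,
                      trainingBatch: MLBatchProvider,
                      savingTo updatedModelURL: URL) throws {
    let updateTask = try MLUpdateTask(
        forModelAt: compiledModelURL,
        trainingData: trainingBatch,
        configuration: nil,
        completionHandler: { context in
            do {
                // Persist the improved model so the next launch picks it up.
                try context.model.write(to: updatedModelURL)
            } catch {
                print("Could not save updated model: \(error)")
            }
        })
    updateTask.resume()
}
The update runs entirely on the device, which is what preserves the privacy aspect raised in the question.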
In order to update the model dynamically (without updating the whole app), you need to use MPS (Metal Performance Shaders) directly instead of relying on an .mlmodel, which must be bundled with the app.
That means you have to build the neural network manually by writing some Swift code (instead of using coremltools to convert an existing model directly) and feed in the weights for each layer, which is a bit of work, but not rocket science.
This is a good video to watch if you want to know more about MPS.
https://developer.apple.com/videos/play/wwdc2017/608/
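To give a flavor of the Swift involved, here is a minimal sketch of the basic MPS encoding pattern. It runs a single weight-free ReLU kernel rather than a full network; a real model would chain layers such as MPSCNNConvolution, each fed with your own weight buffers, and the image sizes here are arbitrary:
import Metal
import MetalPerformanceShaders

guard let device = MTLCreateSystemDefaultDevice(),
      MPSSupportsMTLDevice(device),
      let queue = device.makeCommandQueue(),
      let commandBuffer = queue.makeCommandBuffer() else {
    fatalError("Metal is not available on this device")
}

// MPSImages hold the activations flowing between layers.
let imageDesc = MPSImageDescriptor(channelFormat: .float16,
                                   width: 8, height: 8, featureChannels: 4)
let srcImage = MPSImage(device: device, imageDescriptor: imageDesc)
let dstImage = MPSImage(device: device, imageDescriptor: imageDesc)

// Encode one kernel; convolution layers are encoded the same way.
let relu = MPSCNNNeuronReLU(device: device, a: 0)
relu.encode(commandBuffer: commandBuffer,
            sourceImage: srcImage,
            destinationImage: dstImage)
commandBuffer.commit()
commandBuffer.waitUntilCompleted()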
Core ML supports inference but not training on device.
You can update the model by replacing it with a new one from a server, but that deserves its own question.
Now, with iOS 11 beta 4, you can compile a model downloaded from a server.
(Details)
As an alternative to bundling an mlmodel with the application, you can download and then compile models within your CoreML app. For this you just need to download the model definition file onto the user's device using, for example, URLSession, and then compile the model definition by calling the throwing class method compileModel(at:).
let compiledModelURL = try MLModel.compileModel(at: modelDescriptionURL)
This returns the URL of a new compiled model file with the same name as the model description, but ending in mlmodelc. Finally, create a new MLModel instance by passing the compiled model URL to its initializer.
let model = try MLModel(contentsOf: compiledModelURL)
However, remember that the compilation process can be time-consuming and shouldn't be done on the main thread.
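Putting the pieces together, here is a minimal sketch of the whole download-and-compile flow. The server URL and file names are hypothetical, and the compiled model is moved to a permanent location because compileModel(at:) writes its output into a temporary directory:
import CoreML
import Foundation

// Hypothetical location of the model definition on your server.
let modelDownloadURL = URL(string: "https://example.com/models/MyModel.mlmodel")!

let task = URLSession.shared.downloadTask(with: modelDownloadURL) { tempURL, _, error in
    guard let tempURL = tempURL, error == nil else {
        print("Download failed: \(String(describing: error))")
        return
    }
    let fileManager = FileManager.default
    do {
        // URLSession delivers a temporary file without the .mlmodel
        // extension; rename it before handing it to the compiler.
        let definitionURL = tempURL.deletingLastPathComponent()
            .appendingPathComponent("MyModel.mlmodel")
        try? fileManager.removeItem(at: definitionURL)
        try fileManager.moveItem(at: tempURL, to: definitionURL)

        // Compilation is synchronous and can be slow, but URLSession
        // completion handlers already run off the main thread.
        let compiledURL = try MLModel.compileModel(at: definitionURL)

        // Move the .mlmodelc somewhere permanent so it survives this launch.
        let appSupport = try fileManager.url(for: .applicationSupportDirectory,
                                             in: .userDomainMask,
                                             appropriateFor: nil,
                                             create: true)
        let permanentURL = appSupport.appendingPathComponent(compiledURL.lastPathComponent)
        try? fileManager.removeItem(at: permanentURL)
        try fileManager.moveItem(at: compiledURL, to: permanentURL)

        let model = try MLModel(contentsOf: permanentURL)
        print("Loaded downloaded model: \(model.modelDescription)")
    } catch {
        print("Compile or load failed: \(error)")
    }
}
task.resume()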