Is it possible to determine model input at runtime with a Seldon deployment?

I'm thinking of deploying ML models with Seldon Core on Kubernetes. Seldon provides ways to do pre-processing, post-processing, predicting, combining and routing of models. But, as far as I can tell, these all assume that the input data is fixed. Is the input data for an entire Seldon graph fixed, or is it possible to decide at runtime which models get called? In other words, is it possible to use the output of one model to determine which other model should be called?
What I'm trying to do is run one model that has a variable number of outputs (let's say an image instance segmentation model) and then run a second model for each of those outputs (let's say an image classification model). In this case the input of the second model depends on the output of the first model.
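To illustrate what I mean, here is a rough sketch of the orchestration I have in mind, written as a single custom Python component in the style of Seldon's Python wrapper. The service names and URLs are made up; only the predict() signature and the /api/v1.0/predictions REST path follow Seldon Core conventions, as far as I know:

import requests

class SegmentThenClassify:
    # Hypothetical in-cluster endpoints of the two deployed models.
    SEGMENTER = "http://segmenter:9000/api/v1.0/predictions"
    CLASSIFIER = "http://classifier:9000/api/v1.0/predictions"

    def predict(self, X, features_names=None):
        # First model: instance segmentation, variable number of instances.
        seg = requests.post(self.SEGMENTER,
                            json={"data": {"ndarray": X.tolist()}}).json()
        instances = seg["data"]["ndarray"]

        # Second model: classify each instance the first model produced.
        labels = []
        for crop in instances:
            resp = requests.post(self.CLASSIFIER,
                                 json={"data": {"ndarray": crop}}).json()
            labels.append(resp["data"]["ndarray"])
        return labels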
Is something like this supported by Seldon, or is there another way to do it with Seldon?

Related

Prepping to update a Core ML Model

Core ML 3 now gives us the ability to perform on-device training. After creating an updatable Core ML model, we need to update it on-device using MLUpdateTask, which requires three parameters: the model URL, an MLBatchProvider and an MLModelConfiguration.
Since Core ML 3 was just released, documentation for it is very limited, specifically on how to prepare the training data for an MLBatchProvider.
Question: How do you prepare training data, or create an MLBatchProvider?
If your model is named TestModel, a TestModelTrainingInput class should be available.
let singleTrainingData = try TestModelTrainingInput(input: .init([1,2,3]), output_true: .init([4,5,6]))
let trainingData = MLArrayBatchProvider(array: [singleTrainingData])
To provide data to Core ML, you create an MLFeatureProvider object. This returns one or more MLFeatureValue objects, one for each input in your model. Normally the auto-generated class does this behind the scenes.
If you want to work with a batch, you create an MLBatchProvider that has multiple MLFeatureProviders, one for each example.
Making an MLBatchProvider for predictions isn't so hard: just put your MLFeatureProviders in an array and then use MLArrayBatchProvider. Again, the auto-generated class has a helper method for this.
For training you probably want to load the data on-the-fly, do random augmentations, and so on. In that case, you'll want to make a new class that adopts the MLBatchProvider protocol. It should return an MLFeatureProvider for each example. This time the MLFeatureProvider does not just have the MLFeatureValue for the example but also an MLFeatureValue for the target / true label. (The auto-generated class has a helper class for this training feature provider, but not for the training batch provider.)
I haven't actually gotten any of the new training APIs to work yet (they crash on beta 2), so I'm not 100% sure yet how the MLBatchProvider would cycle through an entire epoch of training examples.

Share data between two Simulink models

Let's say I have the following model.
I want the block in red to come from another model. I want the two models to run independently while having them talk to each other.
I have read https://www.mathworks.com/help/simulink/ug/share-data-with-other-matlab-system-blocks.html but it didn't help me.
You're not really sharing data; you are asking about using one model inside another model. For this you want to use the Model Reference block.

Archimate - Application Layer - Interfaces + Database

I am quite new to ArchiMate 3.0 and I am trying to build my model in it; I put an example below. My goal is to model a stream of data where a source system creates output files in a specific format -> a processing component pulls the data and looks up some values in a connected DB -> the final step is for the processing component to push this data to the target systems.
Q1: Which relationship is correct between the Application Component and the Interface? In the picture it is Triggering (but maybe Flow fits better)?
Q2: Is the database joined via an Access relationship?
Q3: For my purposes the diagram will need to hold information about the DB structure (columns + types + notes); any tips on how to manage that in ArchiMate?
Diagram Example Here:
First, it would be nice to read the specification (http://pubs.opengroup.org/architecture/archimate3-doc/chap09.html#_Toc489946063).
Are you sure that you should use only the application layer? If so, then the interface is not defined correctly; the interface that you specified is rather related to the technology layer.
So:
Q1: Wrong interface (IMHO). Figure 67 (Application Layer Metamodel) of the spec shows you how elements can be linked on this layer.
Q2: The database can be represented as a component, not a DataObject.
Q3: In my experience there is no good way. Use the standard reverse-engineering mechanism, and associate the resulting UML objects with the ArchiMate elements if you need it.

Difference between a Simulink library and a model reference

What are the differences (if any) between a Simulink library and a model reference? Are there advantages to using one or the other in different situations?
The main purpose of libraries and model references is the same: to facilitate the reuse of Simulink models. When you work with libraries, Simulink "imports" the content of the referenced models into the main model. Sometimes this leaves the developer dealing with gigantic models (more than 50k blocks), which can be time consuming. Also, a library file cannot be run on its own; you have to "instantiate" it in the main model. Model reference, on the other hand, deals with separate models: they are put together when you press the simulate button, but during design time you work with completely separate models. With model reference you can also select acceleration methods (it basically compiles the model), and this can't be done with libraries.
Adding some more to danielmsd's answer:
- Configuration Management: Model references can easily be put into version control and developers can work independently from each other. A library is one file, so the blocks cannot be versioned individually and developers cannot work in parallel.
- You can protect model references.
- Code Generation: Incremental build is only possible with model referencing.
BUT:
Model referencing has quite a few limitations, so check them out carefully before picking this option. See Model Referencing Limitations.
From a system design perspective, model references should be used for the components of your system, that is, the different parts your system is made from. Libraries should be used as utilities, that is, reusable blocks that are used throughout your design.
For example, a robot control system includes components such as navigation, control, path_planner, etc. These are components and should be implemented with model references; that way they are developed as independent models and can be tested independently.
Inside the components you might need utility blocks such as low_pass_filter, error_state_handler and check_input_range; these belong in libraries.
Advantages of mdl ref:
- Code generation: Model references allow partial builds when using the coder product. Assuming you have a really large model with 100k blocks that takes 20 minutes to build, splitting it up into model references will reduce the build time, since only the changed models need to be rebuilt.
- Model update: Only changed model references are updated on Ctrl+D, which helps with really large models.
- Simulation: In simulations, mdl refs are compiled as DLLs, which makes your simulations much faster (the effect is much bigger than the difference between normal and accelerator mode).
Disadvantages / pains:
- In general, mdl referencing is somewhat of a pain to use due to its limitations.
- There is no way to pass a Simulink.Parameter object which has a tree structure. (When using type BusObject, only the value property has a structure; the other properties don't.)
- When a subsystem has a bus signal as input, a mdl ref needs a bus object to specify the input interface, while a library block doesn't (even if it's quite ugly to use unspecified bus inputs in a lib block). (Note that bus objects are always global in the base workspace, so there is a risk of naming collisions.)

Business logic and data mappers in Zend Framework

I reluctantly adopted a data mapper approach for my current project but I've been left a little confused.
Business Model
A worker has a name, a set of working hours, and a cost.
Data Model
A worker is made up of a labour type and a working pattern (three different tables). The cost of the worker is calculated based on a combination of the labour type and the working pattern.
My problem...
I seem to have two different models: one that represents business logic and one that represents data structure. I was under the impression that my model should represent the business logic, but what happens when I want to insert a new worker? This is done using a form with two drop-downs, the working pattern and the cost, whose ids are not needed by the business model.
Confused? I am.
There is no real support for data models in Zend Framework, but weierophinney does a really good job of showing how they could be implemented. Another very good description is this one.
Usually a model represents the data and includes the logic. The data model is a backend-independent way to write/get data: for the model and the application it doesn't matter where the data comes from, so the data storage can be exchanged without having to touch anything else.
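To make that concrete, here is a minimal sketch of the separation (shown in Python for brevity; in ZF these would be plain PHP classes, and all names and the three-table schema here are just illustrative):

import sqlite3

class Worker:
    # Domain model: business logic only, no knowledge of the database.
    def __init__(self, name, hours, cost):
        self.name = name
        self.hours = hours
        self.cost = cost

class WorkerMapper:
    # Data mapper: translates between Worker objects and the tables.
    def __init__(self, conn):
        self.conn = conn  # any DB-API backend with the same interface

    def find(self, worker_id):
        # Hypothetical schema: worker, labour_type, working_pattern tables.
        row = self.conn.execute(
            "SELECT w.name, p.hours, l.rate * p.hours "
            "FROM worker w "
            "JOIN labour_type l ON l.id = w.labour_type_id "
            "JOIN working_pattern p ON p.id = w.pattern_id "
            "WHERE w.id = ?", (worker_id,)).fetchone()
        return Worker(name=row[0], hours=row[1], cost=row[2])

    def insert(self, worker, labour_type_id, pattern_id):
        # The form's drop-down ids live in the mapper, not in the model.
        self.conn.execute(
            "INSERT INTO worker (name, labour_type_id, pattern_id) "
            "VALUES (?, ?, ?)",
            (worker.name, labour_type_id, pattern_id))

mapper = WorkerMapper(sqlite3.connect("workers.db"))  # swap in any backend

The point is that Worker never sees the drop-down ids from the form; the mapper accepts them and hides the table structure, which is exactly the split between your business model and your data model.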
The default data modelling in Zend Framework (Zend_Db_Table) is probably not the best choice for object-oriented data modelling.
Try using an ORM like Doctrine (http://www.doctrine-project.org/); it allows you to create a domain object model and store it almost transparently in the database. This way you can have the business model and data model combined in single classes.