Referenced Model Block with varying inport dimensions - matlab

I am having some severe trouble with Simulink right now. I basically use parameterized referenced models so that I can easily reuse Model blocks inside a larger model. However, I have run into a problem:
Model Arguments can't be used on non-tunable parameters (like inport dimensions)
Variable-sized signals cannot simply be created by Muxing signals together. (So using variable-sized inports is also difficult.)
The required dimensional information sadly gets lost at the model boundary.
A few pictures to showcase my problem:
It is easy enough to define the port dimensions inside this model. But I want this model to be reusable across various dimensionalities (maybe I want to Mux together 3 signals instead of 2).
The math inside should be fine with any dimension, and in a subsystem this would work just fine, but the model boundary makes it hard for me to build this in an easy-to-reuse way...
Any tips would be appreciated.

Related

create deep network in matlab with logsig layer instead of softmax layer

I want to create a deep classification net, but my classes aren't mutually exclusive (which is what softmaxLayer assumes).
Is it possible to define a non-mutually-exclusive classification layer (i.e., a sample can belong to more than one class)?
One way to do it would be with a logsig function in the classification layer instead of a softmax, but I have no idea how to accomplish that....
In a CNN you can have multiple classes in the last layer, as you know. But if I understand correctly, you need the last layer to output a value in a range of numbers instead of a 1 or 0 for each class. That means you need regression. If your labels support this task, it's OK, and you can do it with regression, just like what happens in bounding-box regression for localization. You then don't need softmax in the last layer; just use another activation function that produces suitable output for your task.
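As a concrete illustration, in recent releases of the Deep Learning Toolbox you can end the network with a sigmoid (logsig) activation followed by a regression layer, so each class score is independent. This is only a sketch: the input size and numClasses are placeholders, and older releases without sigmoidLayer would need a custom layer instead.

```matlab
% Multi-label head: sigmoid instead of softmax, so the per-class
% outputs are independent values in (0,1) that need not sum to 1.
numClasses = 5;                          % placeholder
layers = [
    imageInputLayer([28 28 1])           % placeholder input size
    convolution2dLayer(3, 16, 'Padding', 'same')
    reluLayer
    fullyConnectedLayer(numClasses)      % one output per class
    sigmoidLayer                         % logsig activation per class
    regressionLayer];                    % train against 0/1 label vectors
```

Training against multi-hot 0/1 label vectors with this head is effectively per-class binary classification, which is what non-mutually-exclusive classes require.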

How to add a custom layer and loss function into a pretrained CNN model by matconvnet?

I'm new to MatConvNet. Recently, I've wanted to try a new loss function instead of the existing one in a pretrained model, e.g. VGG-16, which usually uses a softmax loss layer. What's more, I want to use a new feature-extractor layer instead of the pooling layer or max layer. I know there are two CNN wrappers in MatConvNet, SimpleNN and DagNN; since VGG-16 is a linear model with a linear sequence of building blocks, I'm using the SimpleNN wrapper. So, how do I create a custom layer in detail, especially the procedure and the relevant concepts? E.g., do I need to remove the layers behind the new feature-extractor layer, or just leave them? I know how to compute the derivative of the loss function, so the details of the computation inside the layer are not important in this question; I just want to see the procedure represented in code. Could someone help me? I'd appreciate it a lot!
You can remove the old error (objective) layer. In the SimpleNN wrapper, net.layers is a cell array of layer structs, so you can drop the last layer with
net.layers(end) = [];
and then add your new error code in the vl_nnloss() file.
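For the custom feature-extractor layer, the SimpleNN wrapper also accepts a layer of type 'custom' whose forward and backward functions you supply yourself. A minimal sketch, where myForward and myBackward are placeholder names for your own functions:

```matlab
% Custom layer for the SimpleNN wrapper: vl_simplenn calls the
% 'forward' handle on the forward pass and 'backward' on the
% backward pass, passing the layer plus the input/output res structs.
layer.type     = 'custom' ;
layer.forward  = @(layer, resIn, resOut) myForward(layer, resIn, resOut) ;
layer.backward = @(layer, resIn, resOut) myBackward(layer, resIn, resOut) ;
net.layers{end+1} = layer ;
```

Inside myBackward you would fill in resIn.dzdx using the derivative you already know how to compute.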

Can I use next layer's output as current layer's input by Keras?

In text-generation tasks, we usually use the model's last output as the current input to generate the next word. More generally, I want to build a neural network that treats the next layer's final hidden state as the current layer's input, just like the following (what confuses me is the decoder part):
But I have read Keras document and haven't found any functions to achieve it.
Can I achieve this structure by Keras? How?
What you are describing is an autoencoder; you can find similar structures in Keras.
But there are certain details that you will need to figure out on your own, including the padding strategy and the preprocessing of your input and output data. Keras models cannot take dynamically sized input, so you need a fixed length for inputs and outputs. I don't know what you mean by the arrows that join into one circle, but I suggest you take a look at the Merge layers in Keras (basically adding, concatenating, etc.).
You probably need four Sequential models and one final model that represents the combined structure.
One more thing: the decoder setup of an LSTM language model is not dynamic by design. In your model definition you introduce fixed inputs and outputs for it, and then you prepare the training data accordingly, so you don't need anything dynamic. During testing you can predict each decoded word in a loop: run the model once to predict the next output step, then run it again for the next time step, and so on.
The structure you have shown is a custom structure, so Keras doesn't provide any class or wrapper to build it directly. But yes, you can build this kind of structure in Keras.
It looks like you need an LSTM model running in the backward direction. I didn't understand the other part, which probably amounts to incorporating the previous sentence embedding as input at the next time-step of the LSTM unit.
I would rather encourage you to work through simple language modelling with an LSTM first. You can then tweak the architecture to build the one depicted in your figure.
Example:
Text generation with LSTM in Keras

Heterogeneous data from workspace into Simulink

I have several matrices to import from the workspace into a Simulink MATLAB Function block. These matrices all have different dimensions, which I don't know a priori.
At the beginning I tried using a Constant block, putting all the data together in a structure like this:
But then I cannot pick the right matrix, since I don't know the dimensions of each element (and a Mux block cannot be used to split matrices).
I think I would have the same problem with the From Workspace block.
I was wondering if there is a smart way to import heterogeneous structures like these. I also tried cell arrays, but they do not seem to be supported by Simulink.
Thanks for any suggestions.
If the data is to be used in a MATLAB Function block, you could define the workspace matrices as parameters in the Model Explorer and in the MATLAB Function block's port editor. You then have them accessible inside that function without even needing Constant blocks or drawing any signals.
Even if your final intent is not to feed the data into a MATLAB Function block, those blocks are quite useful for extracting signals from heterogeneous data, since you can do some size/type checking in them and then output Simulink-friendly signals for use elsewhere.
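A minimal sketch of that approach: declare a variable, say data, with Parameter scope in the block's Ports and Data Manager and bind it to a base-workspace struct; inside the block you can then index into it and emit a fixed-size signal. The function and field names here are placeholders.

```matlab
function y = pickMatrix(idx)
% MATLAB Function block body (sketch). 'data' is declared as a
% Parameter in the Ports and Data Manager, so it resolves to a
% struct in the base workspace, e.g. data.A and data.B with
% different sizes, without any Constant blocks or drawn signals.
if idx == 1
    y = data.A(1:2, 1:2);   % slice to a fixed size Simulink can accept
else
    y = data.B(1:2, 1:2);
end
end
```

The explicit fixed-size slicing is what turns the heterogeneous workspace data into a "Simulink-friendly" signal for use elsewhere in the model.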

What is the best way to implement a tree in matlab?

I want to write an implementation of a (non-binary) tree and run some algorithms on it. The reason for using MATLAB is that all the rest of my programs are in MATLAB, and it would be useful for analysis and plotting. From an initial search it seems that there is nothing like pointers in MATLAB. So I'd like to know the most convenient way to do this in MATLAB, or any other approaches?
You can do this with MATLAB objects, but you must make sure you use handle objects and not value objects, because your nodes will contain cross-references to other nodes (i.e. parent, next sibling, first child).
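A minimal sketch of such a handle class (the class and property names are my own):

```matlab
classdef TreeNode < handle
    % Handle class: variables hold references to nodes, so the
    % parent/child links below share nodes instead of copying
    % subtrees (which is what a value class would do).
    properties
        Value
        Parent       % parent TreeNode, or [] for the root
        Children     % cell array of child TreeNode handles
    end
    methods
        function obj = TreeNode(value)
            obj.Value    = value;
            obj.Parent   = [];
            obj.Children = {};
        end
        function child = addChild(obj, value)
            child = TreeNode(value);
            child.Parent = obj;
            obj.Children{end+1} = child;
        end
    end
end
```

Usage: root = TreeNode('root'); a = root.addChild('a'); — root.Children{1} and a are the same node, which is exactly the pointer-like behavior you need for tree algorithms.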
This question is very old but still open. So I would just like to point readers to this implementation in plain MATLAB made by yours truly. Here is a tutorial that walks you through its use.
MATLAB is very well suited to handling any kind of graph (not only trees) represented as an adjacency matrix or an incidence matrix.
Matrices (representing graphs) can be either dense or sparse, depending on the properties of your graphs.
Last but not least, graph theory and linear algebra are related to each other in very fundamental ways, so MATLAB will be able to provide a very nice platform for harnessing such relationships.
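For example, a small tree stored as a sparse adjacency matrix (the node numbering is arbitrary):

```matlab
% A(i,j) = 1 means node i is the parent of node j.
n = 5;
A = sparse(n, n);
A(1,2) = 1; A(1,3) = 1;    % node 1 has children 2 and 3
A(3,4) = 1; A(3,5) = 1;    % node 3 has children 4 and 5

childrenOf3 = find(A(3,:));   % children of node 3: [4 5]
parentOf4   = find(A(:,4));   % parent of node 4: 3
```

Rows give children and columns give parents, so traversals reduce to indexing, and sparse storage keeps large, shallow trees cheap.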