Create new merge layer operation for Keras - neural-network

I would like to create my own operations for merging networks. So I've taken a look at the code, and I modified engine/topology.py to create my new operation.
I didn't modify layers/wrappers.py because it's only for RNNs, and when I modify it I get an error.
Are there other files/classes to modify? Don't I have to specify somewhere else what to do during the backward pass?

You don't have to change any other files if you have implemented your operation properly with backend operations only. The backend is clever and takes care of computing the gradients for backpropagation by itself.
This means that all parameters that change over time have to be defined with K.variable, and that you must only use mathematical operations defined in keras.backend. Otherwise the backend will not be able to perform the backpropagation properly.
Side note: instead of modifying the source code of Keras, you could implement your own class that extends the Merge class and overrides the call function with your custom operation.
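For example, here is a minimal sketch of such a custom merge layer, subclassing Layer directly (the Merge internals differ between Keras versions; the layer name and the weighting operation are purely illustrative, and in Keras 1.x the shape method is get_output_shape_for instead of compute_output_shape):

from keras import backend as K
from keras.engine.topology import Layer

class WeightedAdd(Layer):
    """Illustrative merge: y = a*x1 + (1 - a)*x2 with a trainable scalar a."""
    def build(self, input_shape):
        self.a = K.variable(0.5)                 # changes over time -> K.variable
        self.trainable_weights = [self.a]
        super(WeightedAdd, self).build(input_shape)

    def call(self, inputs, mask=None):
        x1, x2 = inputs
        return self.a * x1 + (1.0 - self.a) * x2  # keras.backend ops only

    def compute_output_shape(self, input_shape):
        return input_shape[0]

Because call is expressed entirely in backend operations on a K.variable, the gradients for the backward pass are derived automatically.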

How to extract an MSL model, modify the code, and use locally?

I am interested in replacing my own PID-regulator models with MSL/Blocks/Continuous/LimPID. The problem is that this model restricts the output limits to be parameters and thus does not allow time-varying limits, which I need.
Studying the code, I understand that the output limitation is implemented by the block MSL/Blocks/Nonlinear/Limiter, and I just want to change this to the block VariableLimiter.
I can imagine that you need to ensure that the output limits vary on a time scale slower than the regulator, in order not to excite unwanted behaviour of the controller. Still, there is a class of problems where it would be very useful to allow these limits to vary slowly.
Thanks for the good input to my question; below is a very simple example to refine it. (The LimPID is more complicated and I will come back to that.)
Let us instead just turn the block Add into a local block in MyModel.
I copy the code from Modelica.Blocks.Math.Add and call it Addb in MyModel. Since there is a dependency on Interfaces.SI2SO, I need to add an import before the extends-clause. This import refers to the ordinary general MSL package, instead of copying that into MyModel as well. Then I introduce a new parameter "bias" and modify the equation. The annotation may need some updates as well, but we do not bother with that now.
model MyModel
  ...
  block Addb "Output the sum of the two inputs"
    import Modelica.Blocks.Interfaces;
    extends Interfaces.SI2SO;
    parameter Real k1=+1 "Gain of input signal 1";
    parameter Real k2=+1 "Gain of input signal 2";
    parameter Real bias=0 "Bias term";
  equation
    y = k1*u1 + k2*u2 + bias;
    annotation (...);
  end Addb;
end MyModel;
This code seems to work.
My added question is whether it is enough to look up extends-clauses and other references to MSL and make the proper imports, since the code is now local, or whether there are more aspects to think of. The LimPID code is rather complex, with procedures for initialization etc., so I just wonder if there is more to do than bring in a number of import-clauses.
The models in the Modelica Standard Library (MSL) should only be seen as exemplary models, not covering all possible applications. MSL is write-protected, and it is not possible to replace the limiter block in LimPID (and add max/min input connectors). It also wouldn't work out if you shared your simulation model with others, expecting their MSL to work like your modified MSL.
Personally, I have my own libraries of components for cases where the MSL models are inadequate. For example, I have PID controllers with variable limits, manual/automatic switching and many other functions needed in my applications.
Often, I create a copy of an MSL model, place it in the same package path in my own library and make the necessary modifications and additions, e.g. MyLibrary.Blocks.Continuous.PID.
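As a sketch of how that pattern could apply to the original question (purely illustrative; the real LimPID internals are considerably more involved), the copied controller can swap the internal Limiter for Modelica.Blocks.Nonlinear.VariableLimiter and expose the limits as inputs:

within MyLibrary.Blocks.Continuous;
block LimPIDVar "Copy of MSL LimPID with time-varying output limits (sketch)"
  import Modelica.Blocks.Interfaces;
  Interfaces.RealInput yMax_in "Upper limit of output";
  Interfaces.RealInput yMin_in "Lower limit of output";
  // VariableLimiter takes its limits as signals instead of parameters
  Modelica.Blocks.Nonlinear.VariableLimiter limiter;
  ...
equation
  connect(yMax_in, limiter.limit1);
  connect(yMin_in, limiter.limit2);
  ...
end LimPIDVar;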

can on_epoch_end just be defined as a regular function in fastai?

Why does it have to use a callback? Wouldn't it do the job just as well to define on_epoch_end as a regular function and call it at the end of each epoch in the train function in fastai?
The callback architecture is there to make fastai extendable without the need for modifying the code of the framework itself.
If you add a function call to the end of the train method, you will obviously change the code of fastai. Not only will you have to maintain another version of the repository (a fork with your changes), but this is also bad from an architectural point of view: the fastai framework becomes dependent on your code (tightly coupled with it; you can google "low coupling").
Of course, it is fine if you just want to try things out quickly. However, you should not do it if you want to build your project on top of fastai.
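For comparison, here is a minimal sketch of the intended mechanism, assuming the fastai v1-style callback API (class and argument names may differ between versions):

from fastai.callback import Callback  # fastai v1-style API (assumed)

class PrintEpochEnd(Callback):
    "Toy callback: hooks custom logic into the end of every epoch."
    def on_epoch_end(self, **kwargs):
        print('epoch finished:', kwargs.get('epoch'))

# Passed to the training call instead of being hard-coded into the train loop:
# learn.fit(1, callbacks=[PrintEpochEnd()])

This way fastai's training loop stays untouched, and your logic still runs at the end of each epoch.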

Initialize buildable Matlab Simulink Model with parameters from SQLite Database

Context: I have a huge Simulink model that is going to be used for automated simulations on Debian 10. It therefore has to be built as standalone C code using the MATLAB Coder. This code is then called to start the simulation.
What I need: I need to find a way to initialize the built model with ~500 parameters. These change with each simulation run and are stored in an SQLite file. The goal is to have the parameters written to the database, then start the model, which reads the parameters from SQLite during initialization (presumably using the InitFcn model callback, although I'm open to alternatives).
What I have tried:
Direct SQL interface: I tried to use a direct MATLAB-SQL interface such as JDBC (since I don't have access to the Database Toolbox), but those are not supported for code generation.
Writing a C function that reads the SQLite file, then calling that function during initialization in the InitFcn callback using coder.ceval, like this:
data = 0;                                        % preallocate the output
err = coder.ceval('read_function', 4, 2, 12, coder.wref(data));  % call the external C function
parameter = data;
The problem here is that coder.wref is not supported in MATLAB itself and therefore doesn't work in the InitFcn (please correct me if I'm wrong); it only seems to work inside a MATLAB Function block. In the InitFcn I get:
Error evaluating 'InitFcn' callback of block_diagram 'Model'.
Caused by:
The coder.wref function is not supported in MATLAB.
So my problem with the second approach is that I can't call the C function during initialization.
Using a MATLAB Function block to read the parameters isn't really an option, since I would have to route all the signals out, which makes maintaining and further developing the model really hard. I also suspect that the model would not even run, because the parameters are needed to initialize it.
Questions:
Is there a way to make one of the above approaches work? If yes, how? Where is my mistake?
Is there another (simpler) option to pass the data as an array or struct to my model?
The database looks like this:

Identifier               Default
latitude                 52.5
longitude                13.4
electricity_consumption  4000.0
ventilation_stream       50.0
PV_peak                  30.0
PV_orientation           0.0
no_vessels               28.0
heatpump_exists          1.0
hotwater_consumption     1000.0
...
After having spent so much time on this issue, I would like to share my experience with this problem:
SQLite: This approach did not work out for me, because the direct SQL-MATLAB interfaces are not supported for code generation.
It is in fact possible to write a C function that reads from SQLite and to call that function in a MATLAB Function block via coder.ceval, which allows reading in a signal during simulation. This works for code generation (Simulink Coder) as well. However, it will not work for initialization (see question).
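For reference, a sketch of what such a C function could look like (the function name, table name and column names here are illustrative, not the actual ones from the question):

#include <sqlite3.h>

/* Illustrative reader: fetch one value by identifier from the SQLite file. */
int read_param(const char *dbfile, const char *identifier, double *value)
{
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_open_v2(dbfile, &db, SQLITE_OPEN_READONLY, NULL);
    if (rc != SQLITE_OK) return rc;
    rc = sqlite3_prepare_v2(db,
        "SELECT \"Default\" FROM parameters WHERE Identifier = ?;",
        -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, identifier, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            *value = sqlite3_column_double(stmt, 0);
        else
            rc = SQLITE_NOTFOUND;   /* identifier not present */
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return rc;
}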
So none of my original approaches ended up working.
Workaround: I ended up switching to an approach based on the Simulink RSIM target, which generates code (also for Linux) and can be parametrized via a .mat file containing all the parameters. The .mat file can be modified to update the parameters. This required some additional code to automate that step. The model configuration for RSIM is also a bit tricky.
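The parameter-update step then looks roughly like this (a sketch; rsimgetrtp and the -p switch are documented for the RSIM target, while the file name and the exact layout of rtP.parameters may vary by release):

% In MATLAB, capture the model's tunable parameter set once:
rtP = rsimgetrtp('Model');   % structure with model checksum and parameter values
% ... overwrite the entries of rtP.parameters.values with the values
%     read from the SQLite database ...
save('params.mat', 'rtP');

% On the Debian target, the generated RSIM executable is started with the
% updated parameter file:
%   ./Model -p params.mat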

#karate How to pass parameters to a feature file in a Gatling simulation class?

Let's consider a scenario: we have to run a performance test for a "create account" API which takes an auth token as a header/path parameter and input data such as the user account name. So for the above scenario, to run a performance test for POST http://baseUrl/auth_param/create/input_data, we have two feature files:
1. One feature file (e.g. generateAuth.feature) which obtains the auth token.
2. A second feature file (createAccount.feature) which takes the auth token and the input data as parameters.
Here is my simulation class:
class <MyClass> extends Simulation {

  before {
    println("Simulation is about to start!")
  }

  val generateAuthTest = scenario("generateAuth").exec(karateFeature("classpath:path/generateAuth.feature"))
  val createAccountTest = scenario("test").exec(karateFeature("classpath:path/createAccount.feature"))

  setUp(
    createAccountTest.inject(rampUsers(1) over (10 seconds))
  ).maxDuration(1 minutes)

  after {
    println("Simulation is finished!")
  }
}
Here, can I read the auth token from generateAuth.feature, which is the input for createAccount.feature, and pass it along as a parameter?
Please suggest how to pass parameters to createAccount.feature when calling it with the karateFeature method.
Let me put a requirement here: let's say we have some feature files for CRUD operations on particular data. For a functional scenario I would write it like this: I create a new feature file for the scenario and just use the CRUD files to test a SINGLE flow.
Now if I go for performance tests of the individual operations, I see two ways:
1. Create four new performance-test feature files (one for each CRUD method) and call the CRUD feature files from the respective test feature files. Finally, we just call the test feature files in the respective Gatling simulation classes. (In this case I end up creating more test feature files as well as more simulation classes for performance, which I want to avoid.)
2. Just call the CRUD files in the respective Gatling simulation classes and pass the required parameters to them. (In this case we only need to create four simulation classes and run them based on the operation: create, read, delete and so on.)
Here I just wanted to ask about the second way: is it achievable in Karate, and if yes, how?
Summary: I think it is achievable using a third (extra) feature file per use case, but I do not want to make an extra feature file for each case, so that I can avoid maintenance work and reuse the existing feature files from the functional tests for the performance tests.
Just use the normal Karate concepts such as karate-config.js
You can easily switch environments by setting the karate.env system property.
For example:
mvn test -DargLine="-Dkarate.env=e2e"
EDIT: After you edited your question, it is clear you have a SINGLE flow you want to test. Please use a SINGLE feature. I suggest you move the generateAuth part into the Background of the feature. Also refer to the docs on callSingle() for advanced options.
If you are expecting two feature files to magically share data, that is not possible, and it is not needed if you structure your tests correctly.
If you really, really need this, create a Java singleton and access it from each feature. I totally don't recommend this, though.
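For instance, the callSingle() pattern mentioned above could look like this in karate-config.js (a sketch; the feature path and the authToken key are illustrative):

function fn() {
  var config = {};
  // runs the auth feature only once for the entire test run
  // and shares the result across all scenarios and threads
  var result = karate.callSingle('classpath:path/generateAuth.feature');
  config.authToken = result.authToken;
  return config;
}

createAccount.feature can then read authToken like any other variable, whether it is run from a functional test or from a Gatling simulation.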
EDIT: From Karate 0.9.0 onwards, you can call a single scenario within a feature if it has a tag:
classpath:animals/cats/create.feature#sometagname

How to pass heatPorts.T to DynamicPipe flowModel?

When implementing flow models that work with the Modelica Standard Library DynamicPipe (or a similar model built from PartialTwoPortFlow), there are examples of flow models operating in an environment with heat transfer that need wall properties (e.g., heatPorts.T and/or heatPorts.Q_flow) in order to calculate the pressure drop.
For example, a pressure-drop model may need to calculate a new viscosity or Prandtl number based on the medium pressure and the wall temperature, to capture cooling/heating effects, etc.
The heat transfer model obtains properties of the medium by being passed the "states"; however, there is no existing connection in DynamicPipe or PartialTwoPortFlow that goes the other way.
I've tried numerous variations of ideas and have had no success, including creating a new PartialTwoPortFlow that contains all the heat transfer calls that exist in DynamicPipe.
I hesitated to post this question, as I am surprised I am having so much difficulty with this and would not be surprised to find a straightforward solution. Nevertheless, I need this ability and am curious whether others have already solved this issue, as I am running short on ideas.
So my question is:
What is a proper/efficient means of passing the heatPorts.T values to the flowModel?
For those familiar with the MSL Fluids library and more specifically the Pipe models provided, this answer should (hopefully) make sense.
Aside: It seems the dynamic pipe could be improved a little by not restricting the heat transfer area to perimeter*lengths, and instead introducing a parameter (e.g., heatTransferArea) that permits the user to define it, with perimeter*lengths as the default:

parameter SI.Area heatTransferArea = perimeter*lengths "Total heat transfer area";

HeatTransfer heatTransfer(
  ...
  final surfaceAreas = heatTransferArea,  // perimeter*lengths <- replaced
  ...);

End aside.
In order to communicate heatPorts.T to the flowModel, and to have no errors when I checked each of the models, I had to do the following:
1. Make an "input" in the flowModel for Ts_w, not a parameter (take a look at how medium states are passed). You might have to do some finagling with it, like with "diameters" (see DetailedPipeFlow), to make it be used the way you expect.
2. Duplicate PartialTwoPortFlow and pass final Ts_w = Ts_wFM to the flowModel. Additionally, define the variable SI.Temperature[nFM+1] Ts_wFM in PartialTwoPortFlow and give it definitions in the equation section similar to statesFM. This also requires adding a HeatPorts model.
3. Duplicate DynamicPipe and change its extends-clause to the new PartialTwoPortFlow. Set use_HeatTransfer to true (as I've set it up, this has to be true for it to work, which isn't ideal but manageable; it might be good to make it a final parameter so it can't be changed). Don't forget to connect heatPorts to the heat ports added in step 2.
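A condensed sketch of steps 1 and 2 (the names follow the answer above; the surrounding declarations are elided):

// In the custom flow model (step 1): wall temperatures arrive as an input,
// not a parameter, so they can vary during simulation
input SI.Temperature[n] Ts_w "Wall temperatures from heatPorts";

// In the duplicated PartialTwoPortFlow (step 2):
SI.Temperature[nFM+1] Ts_wFM "Wall temperatures at the flow-model nodes";
FlowModel flowModel(
  ...
  final Ts_w = Ts_wFM,
  ...);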
I believe this captures a quick version of how I was able to get the wall temperature passed to the flowModel. Perhaps there is a more elegant way, but I thought this was pretty serviceable. I now simply have one more partial model and one more pipe model, called PartialTwoPort_wTemp and GenericDynamicPipe (I also incorporated my surfaceArea correction in the new pipe).