Backflow Orifice (Zeta) - Modelica

I would like to simulate 3 pumps and a pressure sink. The problem is that when I want a pump to move the medium in the opposite direction (backflow), the orifice no longer works.
Can anyone please help me?
[Picture of the model]
[Picture of the signal]
This is the error message:
Log-file of program ./dymosim
(generated: Thu Mar 08 09:21:33 2018)
dymosim started
... "dsin.txt" loading (dymosim input file)
Residual component 1 of system 1 is 1.255542e+002
Residual component 2 of system 1 is 1.255542e+002
Residual component 3 of system 1 is 1.255542e+002
LINEAR SYSTEM OF EQUATIONS IS SINGULAR AT TIME = 0
... Linear system of equations number =
... Variables which cannot be uniquely computed:
pump.port_a.h_outflow = 83680
... NOT ACCEPTING SINCE TOO LARGE RESDIUAL
Trying to solve non-linear system using global homotopy-method.
Residual component 1 of system 1 is 1.255542e+002
Residual component 2 of system 1 is 1.255542e+002
Residual component 3 of system 1 is 1.255542e+002
LINEAR SYSTEM OF EQUATIONS IS SINGULAR AT TIME = 0
... Linear system of equations number = 1
... Variables which cannot be uniquely computed:
pump.port_a.h_outflow = 83680
... NOT ACCEPTING SINCE TOO LARGE RESDIUAL
Error: Failed to start model.
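The singular linear system at initialization is consistent with an orifice law that is only valid for one flow direction: sqrt(dp) has no real value for dp < 0 and an infinite slope at dp = 0. Below is a minimal Python sketch (a guess at the likely cause, not a verified fix for this particular model) of the usual remedy, a signed square root regularized around zero, the same idea as Modelica.Fluid's regRoot; x_small is an illustrative band width.

```python
import math

# Sketch of a flow-direction-robust orifice law:
#     m_flow = A * sign(dp) * sqrt(2 * rho * |dp| / zeta)
# is valid for both flow directions, but its slope is infinite at dp = 0,
# which can leave the initialization system singular. Modelica.Fluid handles
# this with a regularized square root (regRoot): linear inside a small band
# |x| < x_small, the signed square root outside it.

def reg_root(x, x_small=1.0):
    """Signed square root, linearized around zero (same idea as
    Modelica.Fluid.Utilities.regRoot; this simple version matches the
    value, not the slope, at +/- x_small)."""
    if abs(x) >= x_small:
        return math.copysign(math.sqrt(abs(x)), x)
    return x / math.sqrt(x_small)  # continuous linear segment through zero

def orifice_m_flow(dp, zeta, area, rho, x_small=1.0):
    """Mass flow through an orifice, valid for forward and reverse flow."""
    return area * reg_root(2.0 * rho * dp / zeta, x_small)
```

In the Modelica model itself, the equivalent change is to use an orifice or pressure-loss component whose characteristic is declared for both flow directions (with allowFlowReversal = true and a regularized characteristic) rather than one that assumes dp >= 0.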

Related

Calculating the Q-value in DQN with experience replay

consider the Deep Q-Learning algorithm
1 initialize replay memory D
2 initialize action-value function Q with random weights
3 observe initial state s
4 repeat
5 select an action a
6 with probability ε select a random action
7 otherwise select a = argmax_a' Q(s, a')
8 carry out action a
9 observe reward r and new state s’
10 store experience <s, a, r, s’> in replay memory D
11
12 sample random transitions <ss, aa, rr, ss’> from replay memory D
13 calculate target for each minibatch transition
14 if ss’ is terminal state then tt = rr
15 otherwise tt = rr + γ max_a' Q(ss', a')
16 train the Q network using (tt - Q(ss, aa))^2 as loss
17
18 s = s'
19 until terminated
In step 16 the value of Q(ss, aa) is used to calculate the loss. When is this Q-value calculated: at the time the action was taken, or during the training itself?
Since replay memory only stores <s, a, r, s'> and not the Q-value, is it safe to assume the Q-value is calculated at training time?
Yes. In step 16, when training the network, you use the loss function (tt - Q(ss, aa))^2 because you want to update the network weights to approximate the most recent Q-values, computed as rr + γ max_a' Q(ss', a') and used as the target. Therefore Q(ss, aa) is the current estimate, which is computed at training time.
Here you can find a Jupyter Notebook with a simple Deep Q-learning implementation that may be helpful.
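To make the timing concrete, the sketch below evaluates both the target tt and the current estimate Q(ss, aa) at training time, from transitions sampled out of replay memory. Here q_net is a stand-in for any function approximator that maps a state to a vector of Q-values (an assumption for illustration, not a specific library API):

```python
import numpy as np

def td_targets(q_net, batch, gamma=0.99):
    """batch is a list of (s, a, r, s_next, done) transitions."""
    targets = []
    for s, a, r, s_next, done in batch:
        if done:                       # ss' is a terminal state -> tt = rr
            tt = r
        else:                          # tt = rr + gamma * max_a' Q(ss', a')
            tt = r + gamma * np.max(q_net(s_next))
        targets.append(tt)
    return np.array(targets)

def squared_loss(q_net, batch, gamma=0.99):
    # Q(ss, aa) is re-evaluated here with the *current* weights,
    # not the value the network produced when the action was taken.
    tt = td_targets(q_net, batch, gamma)
    q_sa = np.array([q_net(s)[a] for s, a, *_ in batch])
    return np.mean((tt - q_sa) ** 2)
```

In a real implementation the squared error would be fed to an optimizer that updates q_net's weights; the point here is only that both terms of the loss are computed from the sampled minibatch during training.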

Modeling an HRF time series in MATLAB

I'm attempting to model fMRI data so I can check the efficacy of an experimental design. I have been following a couple of tutorials and have a question.
I first need to model the BOLD response by convolving a stimulus input time series with a canonical haemodynamic response function (HRF). The first tutorial I checked said that one can make an HRF of any amplitude as long as its 'shape' is correct, so they created the following HRF in MATLAB:
hrf = [ 0 0 1 5 8 9.2 9 7 4 2 0 -1 -1 -0.8 -0.7 -0.5 -0.3 -0.1 0 ]
And then convolved the HRF with the stimulus by just using 'conv' so:
hrf_convolved_with_stim_time_series = conv(input,hrf);
This is very straightforward, but I want my model to eventually be as accurate as possible, so I checked a more advanced tutorial, which did the following: first they created a vector of 20 timepoints, then used the 'gampdf' function to create the HRF.
t = 1:1:20; % MEASUREMENTS
h = gampdf(t,6) + -.5*gampdf(t,10); % HRF MODEL
h = h/max(h); % SCALE HRF TO HAVE MAX AMPLITUDE OF 1
Is there a benefit to doing it this way over the simpler one? I suppose I have 3 specific questions.
The 'gampdf' help page is super short and only says the '6' and '10' in each function call represents 'A' which is a 'shape' parameter. What does this mean? It gives no other information. Why is it 6 in the first call and 10 in the second?
This question is directly related to the above one. This code is written for a situation where there is a TR = 1 and the stimulus is very short (like 1s). In my situation my TR = 2 and my stimulus is quite long (12s). I tried to adapt the above code to make a working HRF for my situation by doing the following:
t = 1:2:40; % 2s timestep with the 40 to try to equate total time to above
h = gampdf(t,6) + -.5*gampdf(t,10); % HRF MODEL
h = h/max(h); % SCALE HRF TO HAVE MAX AMPLITUDE OF 1
Because I have no idea what the 'gampdf' parameters mean (or what that line actually does), I'm not sure this gives me what I'm looking for. I essentially get 20 values, where values 1-14 contain some number but 15-20 are all 0. I'm assuming there will be a response during the entire 12 s stimulus period (the first 6 TRs, so values 1-6), with the appropriate rectification, which could be the rest of the values, but I'm not sure.
Final question. The other code does not 'scale' the HRF to have an amplitude of 1. Will that matter, ultimately?
The canonical HRF you choose is dependent upon where in the brain the BOLD signal is coming from. It would be inappropriate to choose just any HRF. Your best source of a model is going to come from a lit review. I've linked a paper discussing the merits of multiple HRF models. The methods section brings up some salient points.
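For reference, the gamma-pdf construction from the question can be sketched in Python, on the assumption that MATLAB's gampdf(t, A) corresponds to scipy.stats.gamma.pdf(t, a=A) (both default to unit scale). The shape parameter a sets the time-to-peak of each lobe: with unit scale the positive lobe (a = 6) peaks near t = a - 1 = 5 s and the negative undershoot lobe (a = 10) near t = 9 s, which is why the two calls use different values. The TR of 2 s and 12 s block stimulus below are taken from the question:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

TR = 2.0                               # repetition time in seconds
t = np.arange(0, 30, TR)               # HRF support, sampled every TR
hrf = gamma_dist.pdf(t, a=6) - 0.5 * gamma_dist.pdf(t, a=10)
hrf = hrf / hrf.max()                  # scale to peak amplitude 1

# 12 s block stimulus sampled at the same TR (6 samples of "on")
stim = np.zeros(50)
stim[:6] = 1.0
bold = np.convolve(stim, hrf)          # predicted BOLD time series
```

Sampling the HRF at the TR (rather than changing the gamma shapes) is what adapts the model to a different TR; the long stimulus is handled by the convolution itself, not by the HRF.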

Simulink discrete 0 and 1 function

I am running the model in external mode on real-time hardware with a discrete fixed-step solver. I need to generate an input signal to my hardware as boolean 0 and 1 with a sample time of 0.01 s.
1. Is there a block in Simulink that generates a 0/1 input signal?
2. A related doubt: can the signal follow a pattern, e.g. 1 for the first 2 seconds, then 0 for 5 seconds, and then switch between 1 and 0 at user-specified times?
Any suggestions are appreciated.
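Outside Simulink, the requested waveform is just a zero-order hold over (switch time, value) breakpoints, sampled every 0.01 s; a Python sketch of that idea is below. In Simulink itself, blocks such as Repeating Sequence Stair or Signal Editor with a 0.01 s sample time play the same role (which block fits best depends on the hardware setup, so treat this as a sketch of the signal, not of the model).

```python
import numpy as np

def boolean_signal(breakpoints, t_end, ts=0.01):
    """breakpoints: list of (start_time, value 0/1), sorted by time.
    Returns the sample times and the held 0/1 signal."""
    t = np.arange(0.0, t_end, ts)
    u = np.zeros_like(t)
    for start, value in breakpoints:
        u[t >= start] = value          # hold each value until the next switch
    return t, u

# 1 for the first 2 s, 0 for the next 5 s, then 1 again:
t, u = boolean_signal([(0.0, 1), (2.0, 0), (7.0, 1)], t_end=10.0)
```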

What would be a good neural net model for pick 3 lotteries?

This is just for fun, to see whether neural network predictions increase my odds of getting pick 3 lotteries correct.
Right now I just have a simple model of 30 input units, 30 hidden units, and 30 output units.
Why 30? If the pick 3 result was something like 124, I would make it so that all my inputs are 0 except input[1] = 1 (because I assign indices 0 to 9 to the first digit), input[12] = 1 (because I assign 10 to 19 to the middle digit), and input[24] = 1 (because I assign 20 to 29 to the last digit). I do that so the inputs can store the placement of each digit.
I am training it so that if I enter the inputs for one draw, it gives me the outputs for the next draw.
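The 30-unit one-hot encoding described above can be sketched as:

```python
def encode_draw(draw):
    """Encode a 3-digit draw like '124' as a 30-unit one-hot vector:
    indices 0-9 hold the first digit, 10-19 the middle, 20-29 the last."""
    v = [0] * 30
    for position, digit in enumerate(draw):
        v[10 * position + int(digit)] = 1
    return v

# '124' -> units 1, 12 and 24 are set, all others are 0
```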
Do you know of a better model (if you have had experience with neural networks that dealt with pick3 lotteries)?

Time series forecasting

I have an input and target series. However, the target series lags 3 steps behind the input. Can I still use narx or some other network?
http://www.mathworks.co.uk/help/toolbox/nnet/ref/narxnet.html
Predict: y(t+1)
Input:
x(t) |?
x(t-1)|?
x(t-2)|?
x(t-3)|y(t-3)
x(t-4)|y(t-4)
x(t-5)|y(t-5)
...
During my training, I have y(t-2), y(t-1), y(t) in advance, but when I do the prediction in real life, those values are only available 3 steps later, because I calculate y from the next 3 inputs.
Here are some options
1) You could have two inputs and one output:
x(t), y(t-3) -> y(t)
x(t-1),y(t-4) -> y(t-1)
x(t-2),y(t-5) -> y(t-2)
...
and predict the single output y(t)
2) You could also use ar or arx with na = 0, nb > 0, and nk = 3.
3) You could have four inputs, two of them estimated, and one output:
x(t), y(t-3), ye(t-2), ye(t-1) -> y(t)
x(t-1),y(t-4), y(t-3), ye(t-2) -> y(t-1)
x(t-2),y(t-5), y(t-4), y(t-3) -> y(t-2)
...
and predict the single output y(t), using line 3 and higher as training data
4) You could set up the input/output as in options 1 or 3 and use n4sid
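Option 1 above is an ordinary lagged regression; a minimal sketch in Python (using numpy's least squares for self-containment, though any regression routine works) that fits y(t) from x(t) and y(t-3):

```python
import numpy as np

def fit_lagged(x, y, lag=3):
    """Fit y(t) ~ a*x(t) + b*y(t-lag) + c on the overlapping samples."""
    X = np.column_stack([x[lag:], y[:-lag], np.ones(len(x) - lag)])
    coef, *_ = np.linalg.lstsq(X, y[lag:], rcond=None)
    return coef

def predict_lagged(coef, x_t, y_lagged):
    """One-step prediction of y(t) from x(t) and the 3-step-old y."""
    a, b, c = coef
    return a * x_t + b * y_lagged + c
```

At prediction time this only needs x(t) and y(t-3), both of which are available 3 steps late, which matches the constraint described in the question.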
I have a similar problem, but without any measurable inputs, and I'm trying to see how much error there is as the forecast distance and model complexity are increased. So far I've only tried approach 2, setting nb = 5 to 15 in steps of 5, varying nk from 20 to 150 in steps of 10, and plotting contours of the maximum error. In my case, I'm not interested in predictions of less than 20 time steps.
Define a window of your choice (you need to try different sizes to see which works best) and turn this into a regression problem. Use values of x(t) and y(t) from t = T-2 ... T-x, where x-2 is the size of the window. Then use regress() to train a regression model and use it for prediction.