Simulink Slip Rate Control (ABS Modelling)

I am trying to limit one input (Tb, which is a step input) while comparing some other variable (s, calculated in the model) to a set of conditions:
https://www.dropbox.com/s/4y2ni6dd1bf6ui9/11.jpg
The picture shows the variation of Tb required for a given s curve.
How do I implement this logic in a Simulink model? I need to add this logic to an existing model.
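The exact switching conditions are only given in the linked figure, but slip-dependent torque limiting of this kind usually maps onto a Relational Operator (or Compare To Constant) block feeding a Switch block placed between the step source and the rest of the model. A minimal sketch of that rule in plain Python, with a purely hypothetical slip threshold s_target, just to make the logic explicit:

# Hypothetical sketch of slip-dependent brake-torque limiting.
# The threshold s_target and the "cut to zero" rule are assumptions,
# since the exact conditions are only shown in the linked figure.
def limit_brake_torque(Tb_step, s, s_target=0.2):
    if s < s_target:
        return Tb_step   # slip acceptable: pass the commanded step torque through
    return 0.0           # slip too high: limit (cut) the brake torque

print(limit_brake_torque(100.0, 0.10))  # -> 100.0
print(limit_brake_torque(100.0, 0.35))  # -> 0.0

In Simulink the same structure is the comparison block producing the condition and the Switch selecting between the step input and the limited value.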

Related

Integrating pump curve in Modellica.Fluid.Machines.ControlledPump

I'm a new user of OpenModelica. I'm trying to initialise variable-speed hydraulic pump curves in Modelica. I have 15 volumetric flow rates (m3/h) and 15 pump heads (m) for each pump speed (rev/min). I have been trying to use the following code for inputting these values into my model.
redeclare function flowCharacteristic =
  Modelica.Fluid.Machines.BaseClasses.PumpCharacteristics.quadraticFlow(V_flow_nominal={}, head_nominal={})
I encounter a translation error because the read-only system library only allows 3 elements in the V_flow_nominal and head_nominal arrays.
How do I avoid the error and still enter all of the data points I have?
How do I convert to SI units (m3/s and Pa)?
Setting the flowCharacteristic to Modelica.Fluid.Machines.BaseClasses.PumpCharacteristics.polynomialFlow should solve your problem for one speed. Values for other speeds are interpolated from this set of values. If you want to enter different values for other speeds you need a map-based model.
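On the unit-conversion part of the question, the arithmetic is simply q [m3/s] = q [m3/h] / 3600, and head converts to pressure via p = rho*g*h. A small sketch in Python with placeholder values (not your actual data):

# Convert pump-curve data to SI units (placeholder values, not the real data).
RHO = 1000.0   # kg/m^3, water density
G = 9.81       # m/s^2

v_flow_m3h = [10.0, 20.0, 30.0]   # volumetric flow rates, m3/h
head_m = [55.0, 50.0, 40.0]       # pump heads, m

v_flow_m3s = [q / 3600.0 for q in v_flow_m3h]   # m3/h -> m3/s
pressure_pa = [RHO * G * h for h in head_m]     # h [m] -> p = rho*g*h [Pa]

print(v_flow_m3s)   # [0.00278, 0.00556, 0.00833] (approximately)
print(pressure_pa)  # [539550.0, 490500.0, 392400.0]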

Parameter Variation in AnyLogic: Data for a specific variation

I am using parameter variation in AnyLogic (in a system dynamics model). I am interested in how one parameter changes across the various iterations. The parameter is binary: 0 when the supply of water is greater than demand and 1 when supply is lower than demand. The parameters being varied are a given percentage decrease in outdoor irrigation, a given percentage decrease in indoor water use, and a given percentage of households that have rainwater harvesting systems. Visually, I need a time plot with time on the x-axis (10,950 days, i.e. 30 years) and the binary variable on the y-axis. This should essentially show which iteration pushes a 1 further into the future.
I have watched videos and seen how histograms and 2D data are used to visualize the results of the iterations, but this does not show which iteration produced which output specifically. Is there a way to, first, visually show the output as described above and, second, return the data for a specific iteration?
Many thanks!
Parameter variation experiments have After Iteration and After Simulation Run actions that are executed after each iteration and simulation run respectively. There, it is possible to access values inside the simulation object after it has finished but before it is destroyed. There is also a getCurrentIteration() method which can be used to control the parameter variation experiment and retrieve the data.
For more detail please consult here and see the "SIR Agent Based Calibration" example model in the AnyLogic example models library (Help -> Example Models).
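Once the per-iteration series has been exported from the After Iteration action (for example to a file), the plot described above can also be produced outside AnyLogic. A minimal sketch in Python with made-up data, assuming one binary series per iteration:

# Minimal sketch: overlay the binary supply-shortage indicator of each
# iteration over the 10,950-day horizon. The data here is made up; in
# practice it would come from the exported experiment results.
import matplotlib.pyplot as plt
import numpy as np

days = np.arange(10950)
iterations = {
    "iteration 1": (days > 4000).astype(int),   # shortage starts at day 4000
    "iteration 2": (days > 7500).astype(int),   # shortage pushed further into the future
}

for name, series in iterations.items():
    plt.step(days, series, where="post", label=name)

plt.xlabel("time (days)")
plt.ylabel("supply < demand (0/1)")
plt.legend()
plt.show()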

Calculate number of parameters in neural network

I am wondering whether the number of parameters in models like ResNet18, VGG16, and DenseNet201 would change if we change the input size to the model.
I measured the number of parameters with the following command:
pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
I have also tried this snippet, and the number of parameters did not change for different input sizes:
import torchvision.models as models
from torchsummary import summary  # summary() comes from the torchsummary package
model = models.resnet18(pretrained=False)
model.cuda()
summary(model, (3, 64, 64))  # stock ResNet18 expects a 3-channel input; vary the 64x64 spatial size
No, it would not. The parameters of a model have the purpose of processing the input as it propagates through the network pipeline.
The parameters are trained to serve that purpose, which is defined by the training task. Consider an increase in the number of parameters based on the input: what would their values be? Would they be random? How would these new parameters with new values affect the inference of the model?
Such a sudden, random change to the fine-tuned, well-trained parameters of the model would be impractical. Maybe there are other algorithms, which I am unaware of, that change their parameter collection based on the input, but the architectures mentioned in the question do not support such functionality.
Trainable parameters do not change with a change in input size. If you inspect the weights of the first layer of the model with list(model.parameters())[0].shape, you can see that they do not depend on the height and width of the input, only on the number of input channels (e.g. grayscale, RGB, hyperspectral), which usually accounts for an insignificant fraction of the parameters in bigger models. For further information about getting the input shape, you can see this toy example.
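A quick way to see both points at once, in the spirit of the snippets above: count the parameters once, look at the first-layer weight shape, and run forward passes with different spatial sizes. This sketch assumes the stock torchvision ResNet18.

# Parameter count of ResNet18 is the same regardless of the input H x W;
# only the first conv's in_channels (3 here) is tied to the input.
import torch
import torchvision.models as models

model = models.resnet18(pretrained=False)

n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)                           # same number whatever input size you feed

print(list(model.parameters())[0].shape)  # torch.Size([64, 3, 7, 7]) -> no H/W dependence

# Forward passes with different spatial sizes work without changing any weights:
for size in (64, 128, 224):
    out = model(torch.randn(1, 3, size, size))
    print(size, out.shape)                # always torch.Size([1, 1000])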

Difference in Workspace and Simulink step response. Why this difference?

My primary objective was to design a controller for the transfer function 5.551*s^2, and using root locus I made the controller shown below. Analyzing the step response in the Workspace using the step() function I got a satisfactory result, but when I transfer this to Simulink the response behaves differently. At steady state, for example, I want the smallest possible error, as obtained in the Workspace, but in Simulink there is a large error, and for some reason at 8 seconds (the Simulink simulation time) there is a "jump", as shown on the display. When I change the simulation time this "jump" changes too, and I do not know why there are these differences between one environment and the other.
Step response in Workspace
Step response in Simulink with 8s of simulation
Step response in Simulink with 12s of simulation
Simulink controller
Simulink transfer function
I expected to make a controller with an error of less than 5% and an overshoot smaller than 25%. I first made a controller with two integrators to cancel the effect of the zeros at the origin, and after that I added two more integrators at the origin to try to decrease the error; for the zero at -0.652 I used the angle condition, and for the gain of 0.240251 I used the magnitude condition.
I wasn't expecting the most optimized behavior possible, just something that meets the imposed requirements, so I didn't worry, for example, about the four integrators at the origin.
I tried using the sisotool() command, thinking that I had done something wrong, but the result still changed a lot when simulating in Simulink, so I discarded this option and kept the controller I designed using root locus.
Your MATLAB code and your Simulink model are not the same, and hence the different results.
MATLAB allows you to define the non-causal plant model P_ball, then form the causal closed loop CL, which can have its step response generated.
Simulink does not allow you to model non-causal blocks (even if the overall model is causal) and hence will not allow you to implement s^2, which I assume is why you have used two differentiation blocks. But a numerical differentiation is not the same as a Laplace s operator.
You would have to make the plant causal by incorporating two poles that are large enough not to adversely affect the overall simulation. So your plant model needs to be something like 5.551*s^2/((s/1000 + 1)(s/1000 + 1)), which can be implemented using a Transfer Function block with a numerator of 5.551*1000*1000*[1 0 0] and a denominator of [1 2*1000 1000*1000].
Alternatively you could just implement PID * P_ball (where you manually do the 2 zero/pole cancellations) which is causal.
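As a sanity check on that approximation (done here in plain Python/NumPy rather than MATLAB, purely to illustrate the idea), the causal transfer function with the two added poles at 1000 rad/s is numerically very close to 5.551*s^2 at frequencies well below 1000 rad/s:

# Compare the ideal non-causal plant 5.551*s^2 with the causal
# approximation 5.551e6*s^2 / (s^2 + 2000*s + 1e6) at low frequencies.
import numpy as np

num = [5.551 * 1000 * 1000, 0.0, 0.0]   # 5.551*1000*1000*[1 0 0]
den = [1.0, 2 * 1000, 1000 * 1000]      # [1 2*1000 1000*1000]

for w in (0.1, 1.0, 10.0, 100.0):
    s = 1j * w
    causal = np.polyval(num, s) / np.polyval(den, s)
    ideal = 5.551 * s**2
    print(w, abs(causal), abs(ideal))   # nearly identical for w << 1000 rad/s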

Confusion about many-to-one, many-to-many LSTM architectures in Keras

My task is the following: I have a (black box) method which computes a sequence starting from an initial element. At each step, my method reads an input from an external source of memory and outputs an action which potentially changes this memory. You can think of this method as a function f: (external state, reading) -> action. I want to train an ANN to learn f(), which means I want to be able to take my trained model, feed it an input, get the predicted action, use it to change the external state and repeat this process indefinitely, one step at a time.
Because of the nature of f() I know that the ANN must be recurrent and stateful, but I'm not so sure about the rest. It makes sense to train it to map sequences of readings into sequences of actions, but that only works if the model is able to fuse each reading with the action it output in the previous step, and I'm not sure how to enforce that.
But most importantly: After training my model with a given sequence length (readings^N -> actions^N), how can I make it output predictions one step at a time (sequence length = 1)? Is this possible?
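One common Keras pattern for this situation (a sketch under assumed shapes and layer sizes, not the only way) is to train a non-stateful model on length-N windows with return_sequences=True, then build a structurally identical stateful model with batch_input_shape=(1, 1, n_features), copy the trained weights across, and call predict() one step at a time while you update the external state yourself:

# Sketch of the train-on-windows / predict-one-step-at-a-time pattern.
# Shapes and layer sizes are assumptions for illustration.
import numpy as np
import tensorflow as tf

n_features, n_actions, seq_len = 8, 4, 32

# Training model: many-to-many over fixed-length windows.
train_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.Dense(n_actions),
])
train_model.compile(optimizer="adam", loss="mse")
# train_model.fit(readings, actions, ...)  # readings: (batch, seq_len, n_features)

# Inference model: same layers, but stateful and fed one step at a time.
step_model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, stateful=True,
                         batch_input_shape=(1, 1, n_features)),
    tf.keras.layers.Dense(n_actions),
])
step_model.set_weights(train_model.get_weights())

step_model.layers[0].reset_states()
reading = np.zeros((1, 1, n_features), dtype="float32")
for _ in range(5):
    action = step_model.predict(reading, verbose=0)  # shape (1, 1, n_actions)
    # ...apply `action` to the external memory, then build the next `reading`...

Because the LSTM weights do not depend on the sequence length, the weight copy is valid, and the stateful layer carries its hidden state across the one-step predict() calls until reset_states() is called.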