Consider the following parts of a Simulink model. (Note that I have removed the contents of the blocks.)
I want to know the meaning of the numbers:
- above and below the arrows: 14{14}, 25, [25x121]
- in the top right-hand corner of the blocks: 0:B, 0:8
The 14{14}, 25 and [25x121] above the signals are the signal dimensions:
- In the case of 25, it's a single vector signal of width 25;
- In the case of [25x121], it's a matrix signal of dimensions [25x121];
- In the case of 14{14}, it's a bus signal carrying 14 signals and 14 signal elements in total (i.e. each signal is of width 1).
Consider the following example in the Simulink documentation:
As for the other numbers inside the blocks: they represent the sorted execution order of the model, i.e. the order in which each block gets executed. It's difficult to explain what each number means without seeing the whole model and copying the documentation verbatim (the notation changes depending on the type of block and its level in the model hierarchy), so I'll just refer you to the documentation page, which should provide all the necessary explanations.
Howdy, just a numerical analysis question about RKF45. The documentation is a little mysterious about whether it is 4th or 5th order, and even the Wikipedia page only says it "is a method of order O(h^4) with an error estimator of order O(h^5)":
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.RK45.html
https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method
Is the idea that you first do the 4th order method, and then the interpolant of the numerical solution is 5th order accurate to the true solution? So overall the output of this numerical method is 5th order?
Thanks a lot!
Classical embedded methods like Fehlberg 4(5), aka RKF45
The method can proceed with the state updates of either its 4th or its 5th order method; it was designed to advance with the 4th order result, but nowadays it is usually run with the 5th. The step size is variable: the optimal step size is always determined for the 4th order method, with the difference to the 5th order result serving as an estimate of the local error, which in most cases is very accurate.
The relation between error tolerance and number of ODE function evaluations will mostly correspond to the 4th order: for example, dividing the error tolerance by 16 = 2^4 should result in a step size sequence that is locally about half of the original one, doubling the number of ODE function evaluations.
If an exact solution is computable, the error of the numerical integration should be proportional to the tolerance if the 4th order steps are taken, and proportional to the power 5/4 of the tolerance if the 5th order steps are taken.
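To spell out the scaling argument behind these exponents (my own summary, not part of the original answer): the step-size controller keeps the 4th order method's error per unit step near the tolerance, so

$$ C\,h^4 \approx \mathrm{tol} \quad\Longrightarrow\quad h \propto \mathrm{tol}^{1/4}. $$

Advancing with the 4th order steps then gives a global error proportional to $h^4 \propto \mathrm{tol}$, while advancing with the 5th order steps gives a global error proportional to $h^5 \propto \mathrm{tol}^{5/4}$.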
Extrapolation methods like DoPri (4)5, aka ode45 or RK45
The method proceeds with the values of the 5th order method. It uses a variable step size that is controlled via the difference between the 4th and 5th order results, augmented to behave loosely like the actual local (unit step) error of the 5th order method. Despite being more of a guide to the local error size, it is good enough for the actual error to remain in the region of or below the error tolerance, and for the step size not to leave the stability region of the 5th order method. The method was explicitly designed to show this behavior.
For test equations that are smooth, one can thus expect that in a log-log diagram of the number of ODE function evaluations vs. the given error tolerance (or the actual error against an exact solution) you get a curve that is mostly linear with slope about 5. (One would of course compute it the other way around: determine the number of function evaluations and the actual error for some spread of error tolerances.)
See https://personal.math.ubc.ca/%7Efeldman/math/vble.pdf for some experiments with low-order embedded methods, starting with using the Euler method as embedded method in the explicit midpoint and Heun method.
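To check this scaling numerically with the scipy implementation linked in the question, here is a minimal sketch (my own illustration, not part of the original answer; the test ODE and the tolerance values are arbitrary choices):

import numpy as np
from scipy.integrate import solve_ivp

# Smooth test problem with a known exact solution:
# y' = y*cos(t), y(0) = 1, so y(t) = exp(sin(t)).
def f(t, y):
    return y * np.cos(t)

t_end = 10.0
exact = np.exp(np.sin(t_end))

# For each tolerance, record the number of ODE function evaluations and
# the actual error; in a log-log plot of error vs. evaluations the points
# should fall near a line of slope about -5 for this 5th order method.
for tol in [1e-3, 1e-5, 1e-7, 1e-9]:
    sol = solve_ivp(f, (0.0, t_end), [1.0], method="RK45", rtol=tol, atol=tol)
    err = abs(sol.y[0, -1] - exact)
    print(f"tol={tol:.0e}  nfev={sol.nfev:5d}  error={err:.3e}")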
I have a battery model in Modelica. PNet is the power flowing through the battery (positive for charging, negative for discharging). It oscillates based on a load. I want to calculate the number of cycles that the battery is put through and also the depth of discharge coming from each of these cycles.
This is a pretty generic question, so my answer will be rather generic as well. Also, it is not clear to me what you are referring to as a cycle: Wikipedia mentions deep and shallow discharge, and there are some others as well.
A general note: in Modelica the when statement is useful for counting. You can read through Section 8.3.5 of the Modelica Language Specification for full information on this.
The example below computes how often the variable PNet turns positive, which should correspond to the number of shallow cycles mentioned above. Some description of the model:
The model noiseSource computes a random number, which is then filtered by a first order (PT1) element to compute PNet. The filter should likely be skipped in the original application; it is only there to smooth the trajectory a bit.
The code in the when statement is executed once at the time when the condition turns true, which enables the counting.
The pre() operator accesses the value of cycles just before the when clause became active, which enables counting how often the condition has occurred.
The start=0 in cycles(start=0) sets the start value of the variable cycles. This is necessary because you cannot simply write cycles = 0; that would generate an equation for cycles, which is not what you want.
The inner model globalSeed is necessary for the noiseSource to work.
Here is the actual code:
model CycleCounter
  inner Modelica.Blocks.Noise.GlobalSeed globalSeed;
  Modelica.Blocks.Noise.NormalNoise noiseSource;
  parameter Modelica.SIunits.Time T = 1e-3
    "Time constant of PT1 element to filter random signal to compute PNet";
  Integer cycles(start=0) "Counts the number of times PNet turns positive";
  Real PNet "Random value";
equation
  // PT1 element filtering the noise signal to produce PNet
  der(PNet) = (noiseSource.y - PNet)/T;
  // executed once every time PNet crosses zero from below
  when PNet > 0 then
    cycles = pre(cycles) + 1;
  end when;
  annotation (uses(Modelica(version="3.2.3")));
end CycleCounter;
And the result from simulating in Dymola:
I need to make a Simulink block which receives a concatenation of a number of bus signals and performs the same math operations on the signals contained in the bus for each pair of consecutive buses. The bus signals are of the same type and are non-virtual.
For the sake of the question, let's assume we have a concatenation of 4 simple buses, each containing an x and a y field. A bus of signals composed of a = x1 + x2 and b = y1 - y2 needs to be made out of input buses 1,2 and 3,4. So the output of the block should be a concatenation of 2 buses, the first containing information from the first pair of input buses, and the second from the second pair.
A hard-to-scale way to do it is the following.
Are there any built-in bus math operation possibilities, or better ways to implement this? I could not find anything in the MathWorks documentation, and simple math operation blocks generate incompatibility errors.
You need to use a For Each Subsystem block, as shown in this example. Note that I called the bus BusTest and made the dimensions and data types of the signals visible:
Now set the Signal Width parameter of that block to 2 so that it divides the input array into chunks of length 2:
Then move your logic into that block:
I am new to Simulink and I am trying to model an oscillator to control an automation controller.
The question is:
I created a pulse generator that outputs a square wave. To design the oscillator I need two other channels (one is the same signal, the other is its inverse) to remain at zero while the input (the square wave) is oscillating. The problem is that I can't make the other 2 signals remain at zero. I tried using the blocks for discrete elements in the library, such as "Delay", "Unit Delay", and even "Zero-Order Hold". Every block just shifted the entire curve, while what I need is to delay the signal only when it assumes the value 1.
Here are some screenshots:
I don't have enough reputation to post all the images, so: the subsystem consists of 3 pulse generators, and there's a scope linked to the subsystem.
Please help!
It sounds like you're asking for a signal that rises some pre-specified delay after the pulse generator rises, but falls at the same time as the pulse. This is shown in the picture below.
If that's correct, then it can be created using an enabled subsystem, where the subsystem contains only a Unit Delay block, as shown in this picture.
Within the subsystem you must also:
Set the Enable block to reset its states.
Set the Outport block to reset its value when disabled AND set an initial value of 0.
Specify an appropriate sample time in the Unit Delay block (this acts as the amount by which the rising signal is delayed).
How to use Morton Order in range search?
From the Wikipedia article, in the paragraph "Use with one-dimensional data structures for range searching", it says:

"the range being queried (x = 2, ..., 3, y = 2, ..., 6) is indicated by the dotted rectangle. Its highest Z-value (MAX) is 45. In this example, the value F = 19 is encountered when searching a data structure in increasing Z-value direction. [...] BIGMIN (36 in the example) [...] only search in the interval between BIGMIN and MAX [...]"
My questions are:
1) Why is F 19? Why should F not be 16?
2) How do you get the BIGMIN?
3) Are there any blog posts demonstrating how to do the range search?
EDIT: The AWS Database Blog now has a detailed introduction to this subject.
This blog post does a reasonable job of illustrating the process.
When searching the rectangular space x=[2,3], y=[2,6]:
The minimum Z-value (12) is found by interleaving the bits of the lowest x and y values: 2 and 2, respectively.
The maximum Z-value (45) is found by interleaving the bits of the highest x and y values: 3 and 6, respectively.
Having found the min and max Z values (12 and 45), we now have a linear range that we can iterate across that is guaranteed to contain all of the entries inside of our rectangular space. The data within the linear range is going to be a superset of the data we actually care about: the data in the rectangular space. If we simply iterate across the entire range, we are going to find all of the data we care about and then some. You can test each value you visit to see if it's relevant or not.
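As a concrete sketch of this naive scan (my own illustration, not from the blog post; it assumes 3 bits per coordinate with x on the even bit positions, matching the Wikipedia example):

def interleave(x, y, bits=3):
    # Morton-encode (x, y): x occupies the even bit positions, y the odd ones.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def deinterleave(z, bits=3):
    # Recover (x, y) from a Morton code.
    x = y = 0
    for i in range(bits):
        x |= ((z >> (2 * i)) & 1) << i
        y |= ((z >> (2 * i + 1)) & 1) << i
    return x, y

print(interleave(2, 2))  # 12, the minimum Z-value of the query rectangle
print(interleave(3, 6))  # 45, the maximum Z-value

# Naive range search: walk the whole linear range and filter out the
# superfluous values that fall outside the rectangle.
hits = [z for z in range(12, 45 + 1)
        if 2 <= deinterleave(z)[0] <= 3 and 2 <= deinterleave(z)[1] <= 6]
print(hits)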
An obvious optimization is to try to minimize the amount of superfluous data that you must traverse. This is largely a function of the number of 'seams' that you cross in the data -- places where the 'Z' curve has to make large jumps to continue its path (e.g. from Z value 31 to 32 below).
This can be mitigated by employing the BIGMIN and LITMAX functions to identify these seams and navigate back to the rectangle. To minimize the amount of irrelevant data we evaluate, we can:
Keep a count of the number of consecutive pieces of junk data we've visited.
Decide on a maximum allowable value (maxConsecutiveJunkData) for this count. The blog post linked at the top uses 3 for this value.
If we encounter maxConsecutiveJunkData pieces of irrelevant data in a row, we initiate BIGMIN and LITMAX. Importantly, at the point at which we've decided to use them, we're now somewhere within our linear search space (Z values 12 to 45) but outside the rectangular search space. In the Wikipedia article, they appear to have chosen a maxConsecutiveJunkData value of 4; they started at Z=12 and walked until they were 4 values outside of the rectangle (beyond 15) before deciding that it was now time to use BIGMIN. Because maxConsecutiveJunkData is left to your tastes, BIGMIN can be used on any value in the linear range (Z values 12 to 45). Somewhat confusingly, the article only shows the area from 19 on as crosshatched because that is the subrange of the search that will be optimized out when we use BIGMIN with a maxConsecutiveJunkData of 4.
When we realize that we've wandered outside of the rectangle too far, we can conclude that the rectangle is non-contiguous in the linear range. BIGMIN and LITMAX are used to identify the nature of the split. BIGMIN is designed to, given any value in the linear search space (e.g. 19), find the next smallest value that will be back inside the half of the split rectangle with larger Z values (i.e. jumping us from 19 to 36). LITMAX is similar, helping us to find the largest value that will be inside the half of the split rectangle with smaller Z values. The implementations of BIGMIN and LITMAX are explained in depth in the zdivide function explanation in the linked blog post.
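For reference, here is a minimal BIGMIN sketch (my own adaptation of the Tropf/Herzog decision table that the article and the blog post describe; the helper names are mine, and it assumes the 2-D layout from the sketch above with x on the even bit positions):

def load_bits(code, pos, top_bit, ndims=2):
    # Set bit `pos` of `code` to `top_bit` and all lower bits of the same
    # dimension to the complement: the LOAD(1000...) / LOAD(0111...)
    # operations from the Tropf/Herzog paper.
    mask = 0
    p = pos - ndims
    while p >= 0:            # lower bits belonging to the same dimension
        mask |= 1 << p
        p -= ndims
    if top_bit:
        return (code | (1 << pos)) & ~mask   # pattern 1000...
    return (code & ~(1 << pos)) | mask       # pattern 0111...

def bigmin(z, zmin, zmax, nbits=6):
    # Smallest Z-value greater than z that lies inside the box [zmin, zmax],
    # for a z that has wandered outside the query rectangle.
    result = None
    for pos in range(nbits - 1, -1, -1):
        bits = ((z >> pos) & 1, (zmin >> pos) & 1, (zmax >> pos) & 1)
        if bits == (0, 0, 1):
            result = load_bits(zmin, pos, 1)  # candidate in the upper half
            zmax = load_bits(zmax, pos, 0)    # keep scanning the lower half
        elif bits == (0, 1, 1):
            return zmin                       # z lies below the whole box
        elif bits == (1, 0, 0):
            return result                     # z lies above this sub-box
        elif bits == (1, 0, 1):
            zmin = load_bits(zmin, pos, 1)    # narrow to the upper half
        # (0,0,0) and (1,1,1): continue; (0,1,0) and (1,1,0): impossible
    return result

print(bigmin(19, 12, 45))  # 36, as in the Wikipedia example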
It appears that the quoted example in the Wikipedia article has not been edited to clarify the context and assumptions. The approach used in that example is applicable to linear data structures that only allow sequential (forward and backward) seeking; that is, it is assumed that one cannot randomly seek to a storage cell in constant time using its Morton index alone.
With that constraint, one's strategy begins with the full range between the minimum Morton index (16) and the maximum Morton index (45). To optimize, one tries to find and eliminate large swaths of subranges that lie outside the query rectangle. The hatched area in the diagram shows what would have been accessed (sequentially) if no such optimization (eliminating subranges) had been applied.
After discussing this main optimization strategy for linear sequential data structures, the article goes on to talk about other data structures with better seeking capability.