I am running the very same model but with a different number of equations:
Case 1. - number of equations < 10000
Case 2. - number of equations > 10000
The number of equations is changed by changing the discretization (number of nodes).
Case 1. (9974 eq.) - Runs without any problems and the simulation is rather quick (total time 18 sec).
Case 2. (10410 eq.) - The simulation time increases drastically, to roughly 1 h or more.
I tried using:
--maxSizeNonlinearTearing=11000
by adding this flag under “Simulation Setup” -> “Translation Flags” -> “Additional Translation Flags”.
However, the simulation time hasn't changed; it is still extremely long compared to the 18 sec, which doesn't make sense. So I assume the tearing flag didn't work.
In addition, I tried to use two flags at the same time:
--maxSizeNonlinearTearing=11000 --maxSizeLinearTearing=1000
I separated the flags with a single space, as indicated in the pop-up window that appears when hovering over "Additional Translation Flags".
After initiating the simulation I get the message:
"Invalid type of flag maxSizeNonlinearTearing, expected an integer value but got a list of values.."
So most likely I am not using them correctly. I tried to find some information on https://www.openmodelica.org/doc/OpenModelicaUsersGuide/latest/omchelptext.html
but I couldn't find any example. As a non-computer scientist, I think that there is a scarcity of simple, real-life examples.
It would be great if anyone could advise how to deal with OpenModelica flags and models with over 10000 equations.
Related
I would like to know how we can set a limit on the number of iterations for solving nonlinear systems in OpenModelica. Is it possible to log the average number of iterations per time step, or the number of iterations per time step?
As far as I know, there is no way to limit the number of iterations of the nonlinear solvers in OpenModelica (you can limit many other iteration types, though).
To see the execution times, counts, and maximum time to solve each system (to get the average, divide the total time by the execution count), enable Profiling (under the Simulation Flags). Note that profiling adds some overhead to the total simulation time, especially if you profile many small systems. You may need to press the refresh button in the transformational debugger window that pops up if the information isn't populated automatically.
I need to create some kind of counter that counts how many times each of the transistors I have used (signal_1, signal_2, signal_3, signal_4) turns ON (and OFF) per second. I need to show the difference in switching frequency between a 2-level and a 3-level inverter; the signal is PWM. I have no clue how to do it, I'm really lost here.
This is my schematic (just 1/3 of it; the other 2/3 simply duplicates this).
My primary objective was to make a controller for the transfer function (5.551*s^2); using root locus I made the controller shown below. Analyzing the step response in the Workspace using the step() function, I got a satisfactory answer, but when I transfer this design to Simulink the response behaves differently. At steady state, for example, I want the smallest possible error, as was obtained in the Workspace, but in Simulink there is a large error, and for some reason at 8 seconds (the Simulink simulation time) there is a "jump", as shown on the display. When I change the simulation time this "jump" changes too, and I do not know why there are these differences between one environment and the other.
Step response in Workspace
Step response in Simulink with 8s of simulation
Step response in Simulink with 12s of simulation
Simulink controller
Simulink transfer function
I wanted to make a controller with an error of less than 5% and an overshoot smaller than 25%. So I first made a controller with two integrators to cancel the effect of the plant's zeros at the origin, and after that I added two more integrators at the origin to try to decrease the error. The zero at -0.652 was placed using the angle condition, and the gain of 0.240251 was found using the magnitude condition.
I wasn't expecting the most optimized behavior possible, just something that meets the imposed conditions, so I didn't worry, for example, about the four integrators at the origin.
I tried using the sisotool() command, thinking that I had done something wrong, but the result still changed a lot when simulating in Simulink, so I discarded this option and kept the controller I made using root locus.
Your MATLAB code and your Simulink model are not the same, and hence the different results.
MATLAB allows you to define the non-causal plant model P_ball, then form the causal closed loop CL, which can have its step response generated.
Simulink does not allow you to model non-causal blocks (even if the overall model is causal) and hence will not allow you to implement s^2, which I assume is why you have used two differentiation blocks. But a numerical differentiation is not the same as a Laplace s operator.
You would have to make the plant causal by incorporating two poles that are large enough not to adversely affect the overall simulation. So your plant model needs to be something like 5.551*s^2/((s/1000 + 1)(s/1000 + 1)), which can be implemented using a Transfer Function block with a numerator of 5.551*1000*1000*[1 0 0] and a denominator of [1 2*1000 1000*1000].
Alternatively, you could just implement PID * P_ball (where you manually do the 2 zero/pole cancellations), which is causal.
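For reference, here is a minimal MATLAB sketch of that comparison. The controller C below is only an assumed reconstruction from the question (a zero at -0.652, a gain of 0.240251 and four integrators), so substitute your actual root-locus design:

s = tf('s');                               % requires the Control System Toolbox
P_ball   = 5.551*s^2;                      % ideal, non-causal plant
P_causal = 5.551*s^2/((s/1000 + 1)^2);     % causal approximation with two fast poles at -1000 rad/s
C = 0.240251*(s + 0.652)/s^4;              % assumed controller form; replace with your own design
L_ideal  = minreal(C*P_ball);              % do the two zero/pole cancellations at the origin
L_causal = minreal(C*P_causal);
CL_ideal  = feedback(L_ideal, 1);          % what step() shows in the Workspace
CL_causal = feedback(L_causal, 1);         % what a causal Simulink model can reproduce
step(CL_ideal, CL_causal, 12)              % compare both responses over 12 s
legend('ideal s^2 plant', 'causal approximation')

P_causal here is the same plant that the Transfer Function block with numerator 5.551*1000*1000*[1 0 0] and denominator [1 2*1000 1000*1000] implements in Simulink.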
I have two data sets; let us name them "actual speed" and "desired speed". My main objective is to match the actual speed with the desired speed.
But to do that in my case, I need to tune an FF table (1x10), an Integral gain table (10x8) and a Proportional gain table (10x8).
My approach so far has been as follows (a MATLAB sketch of this loop is given after the steps):
First, start the iteration with 0.1 as the initial value in the first cell (FF[0]) of the FF table.
Then find the R-squared or correlation between the two data sets (i.e. actual speed and desired speed).
Increment the value of the first cell (FF[0]) by 0.25 and compute the R-squared or correlation of the two data sets again.
Once the cell (FF[0]) value reaches 2 (the maximum gain value, already defined by the lab), compare the R-squared values and write into FF[0] the gain value that gives the minimum error between the two curves.
Then tune the Integral and Proportional tables in the same way for the same RPM range.
Once they are tuned, go to the next higher RPM range and repeat steps 2-5 (RPM ranges: 800-1000; 1000-1200; ...; 3000-3200).
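Here is a minimal MATLAB sketch of that per-cell search (R2010a-compatible). The function runControllerAndMeasure is a hypothetical placeholder for whatever routine sends the trial gain table to the controller and records the actual-speed trace; desired and actual are assumed to be sampled on the same grid:

candidates = 0.1:0.25:2;                  % allowed gain values (lab-defined maximum of 2)
bestGain   = candidates(1);
bestR2     = -Inf;
for k = 1:numel(candidates)
    FF(1) = candidates(k);                % trial value for the first cell, FF[0]
    actual = runControllerAndMeasure(FF); % hypothetical: run the test and log the actual speed
    resid  = desired - actual;            % both signals on the same sample grid
    R2 = 1 - sum(resid.^2) / sum((desired - mean(desired)).^2);   % R-squared against desired speed
    if R2 > bestR2                        % keep the best gain seen so far
        bestR2   = R2;
        bestGain = candidates(k);
    end
end
FF(1) = bestGain;                         % write back the gain with the smallest error

Since 0.1:0.25:2 gives only eight candidate values, the hour per cell is dominated by the eight real-time runs themselves rather than by this computation.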
Now the problem is that this process takes far too long to complete. For example, it takes around 1 hour to tune one cell of FF, which is very slow.
If possible, please suggest any other approach I could try for tuning the tables. I am using MATLAB R2010a and I can't switch to any other version of MATLAB, because my controller can communicate with this version only. I also can't use any app for tuning, since my GUI is already communicating with the controller and those two data sets are being generated in real time.
In the given figure, let us take the (X1,Y1) curve as the desired speed and the (X2,Y2) curve as the actual speed.
I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation is advancing about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried to increase the grid interval to reduce the relative error, but the simulation is still advancing too fast. I have only read about approaches that reduce model complexity to allow simulation within the defined time steps.
Note that the simulation does keep up with real time and is even faster. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked in the Realtime tab. I have the realtime simulation licence option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
However, I did not use inline integration.
Do I need hard real-time or inline integration to improve these results? Should it not be possible to get a deviation lower than 4.5% using soft real-time?
Edit 2:
I took the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's progress. The result shows that 36 hours after the simulation start, the simulation slows down slightly (compared to real time). About 72 hours after the start of the simulation it becomes about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
Next steps will be:
- changing to a fixed-step solver (this might well be a big part of the solution)
- changing from a DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation seems to be equal to the deviation with a variable-step solver. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of these huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" that is in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set this to 1/0.95 (about 1.053) to compensate for the roughly 5% by which the simulation runs too fast.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will look for it again later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola problems with an embedded OPC server requires administrator rights (which I did not have before). The active folder of Dymola must not be write-protected.