Limit of iterations for nonlinear systems in OpenModelica

I would like to know how we can set a limit on the number of iterations when solving nonlinear systems in OpenModelica. Is it possible to log the average number of iterations per time step, or the number of iterations per time step?

As far as I know, there is no way to limit the number of iterations of the nonlinear solvers in OpenModelica (you can limit many other iteration types, though).
To see the execution times, execution counts, and maximum time to solve the system (to get the average, divide the total time by the execution count), enable Profiling (under the Simulation Flags). Note that profiling adds some overhead to the total simulation time, especially if you profile many small systems. You may need to press the refresh button in the transformational debugger window that pops up if the information isn't populated automatically.
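If you script your runs, profiling can also be switched on outside OMEdit. Here is a minimal sketch using OMPython, assuming the --profiling compiler option of your OpenModelica version; MyModel and the file name are placeholders:

    from OMPython import OMCSessionZMQ

    # Sketch: enable profiling from a script instead of OMEdit.
    # "MyModel"/"MyModel.mo" are placeholders; check which values your
    # OpenModelica version accepts for the --profiling option.
    omc = OMCSessionZMQ()
    omc.sendExpression('loadFile("MyModel.mo")')
    omc.sendExpression('setCommandLineOptions("--profiling=all")')
    print(omc.sendExpression('simulate(MyModel, stopTime=1.0)'))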

Related

The impact of time step size on the results in OpenModelica

Hello Stack Overflow community,
I am curious to know how the time step size can impact the OpenModelica simulation results, and how to optimize the simulation period so that we can accelerate the simulation and get results in a shorter time.
Also, what impacts the simulation time, such as computer performance and the complexity of the code?
If you use an explicit (fixed-step) solver such as Euler, the step size will have a major impact on the stability of the results.
If you use an implicit (usually multi-step) solver such as Dassl, the step size will not really impact performance or results, except that the values printed to the result file are interpolated to these points by the solver. If you want it to run faster at the cost of accuracy, increase the tolerance of the solver.
https://www.openmodelica.org/doc/OpenModelicaUsersGuide/1.16/solving.html#integration-methods
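As a concrete, hedged example of that tolerance trade-off, here is how a scripted run via OMPython might loosen the tolerance; MyModel and the numeric values are placeholders:

    from OMPython import OMCSessionZMQ

    # Sketch: trade accuracy for speed by loosening the solver tolerance.
    # dassl is the default variable-step solver; numberOfIntervals only
    # controls the output grid, not the steps the solver actually takes.
    omc = OMCSessionZMQ()
    omc.sendExpression('loadFile("MyModel.mo")')
    print(omc.sendExpression(
        'simulate(MyModel, method="dassl", tolerance=1e-4, '
        'numberOfIntervals=500, stopTime=1.0)'))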
Just to clarify the nomenclature: when you say 'step size' in this post, are you referring to the Interval parameter?
Moreover, I have a couple of questions:
What is the scope of 'Initial time step' and 'Maximum time step', and how are they correlated with Interval and Tolerance?
What is the scope of 'Equidistant time grid' and 'Store variables at events' in the Output tab?
Thanks

AnyLogic - How to measure work in process (WIP) inventory within a simulation

I am currently working on a simple simulation that consists of 4 manufacturing workstations with different processing times, and I would like to measure the WIP inside the system. The model is PennyFab2, in case anybody knows it.
So far, I have measured throughput and cycle time, and I am calculating WIP using Little's law; however, the results don't match the expectations. The cycle time is measured using the time measure start and time measure end agents, and the throughput by simply counting how many pieces flow through the end of the simulation.
Any ideas on how to directly measure WIP without using Little's law?
Thank you!
For Little's law you count the arrivals, not the exits... but maybe it doesn't make a difference.
Otherwise, there are many ways, for example (see the sketch below):
You can count the number of agents inside your system using a RestrictedAreaStart block and the entitiesInside() function.
You can just have a variable that adds +1 if something enters and -1 if something exits.
No matter what, you need to add the information to a dataset or a statistics object to get the mean number of agents in your system.
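AnyLogic block actions are written in Java, but the +1/-1 counting logic itself is tool-independent. Here is a sketch of it in Python, including the time-weighted average a statistics object would otherwise give you (all names are made up for illustration):

    # Hedged sketch of the +1/-1 WIP counter described above. A
    # time-weighted average is kept so long and short stays are
    # weighted correctly.
    class WipTracker:
        def __init__(self):
            self.wip = 0
            self.last_time = 0.0
            self.area = 0.0          # integral of WIP over time

        def _advance(self, now):
            self.area += self.wip * (now - self.last_time)
            self.last_time = now

        def on_enter(self, now):     # call from the enter action
            self._advance(now)
            self.wip += 1

        def on_exit(self, now):      # call from the exit action
            self._advance(now)
            self.wip -= 1

        def average_wip(self, now):
            self._advance(now)
            return self.area / now if now > 0 else 0.0

    # Example: two units enter at t=0 and t=1, one leaves at t=3.
    t = WipTracker()
    t.on_enter(0.0); t.on_enter(1.0); t.on_exit(3.0)
    print(t.average_wip(4.0))  # 1.5 = time-averaged WIP over [0, 4]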
Little's Law defines the relationship between:
Work in Process (WIP)
Throughput (or Flow Rate)
Lead Time (or Flow Time)
as WIP = Throughput x Lead Time. This means that if you have two of the three, you can calculate the third.
Since you have a simulation model, you can record all three items explicitly, and this would be my advice.
Little's Law should then be used to validate whether you are recording the three values correctly.
You can record them as follows.
WIP = the average number of items in your system.
The simplest way would be to count the number of items that entered the system and subtract the number of items that left it. You simply do this calculation every time unit that makes sense for the resolution of your model (hourly, daily, weekly, etc.) and save the values to a DataSet or Statistics object.
Lead Time = the time a unit takes from entering the system to leaving it.
If you are using the Process Modelling Library (PML), simply use the timeMeasureStart and timeMeasureEnd blocks; see the example model in the help file.
Throughput = the number of units that leave the system per time unit.
If you run the model and your average WIP is 10 units and on average a unit takes 5 days to exit the system, your throughput will be 10 units / 5 days = 2 units/day.
You can validate this by taking the total number of units that exited your system at the end of the simulation and dividing it by the number of time units your model ran:
if you run a model with the above characteristics for 10 days, you would expect 20 units to have exited the system.
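To make the validation step concrete, here is the arithmetic from the example above as a small script (the numbers are the measured values from the example, not part of any API):

    # Cross-checking the three recorded quantities with Little's Law
    # (Throughput = WIP / Lead Time). Numbers from the example above.
    avg_wip = 10.0        # average units in the system (measured)
    lead_time = 5.0       # days from entry to exit (measured)
    throughput = avg_wip / lead_time
    print(throughput)     # 2.0 units/day

    # Validation against the direct count at the sink:
    run_days = 10.0
    exits = 20            # units that left the system during the run
    print(exits / run_days)  # 2.0 units/day -> records are consistent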

How to avoid using the To Workspace block

I ran the profiler on my Simulink model and realized that the "To Workspace" block is using 20% of the total simulation time. Because this model is run more than once, I'm looking for a way to increase its performance.
Hence, is there an alternate solution to using the "To Workspace" block that would increase my model global performance?
Yes, you can use Signal Logging. The various approaches to logging simulation results are discussed in the documentation under Export Simulation Data; see also View Simulation Results for alternative approaches. My personal recommendation would be signal logging or a To File block.
According to my general understanding of memory management, reserving a fixed memory block takes less time than expanding it at each timestep. So it might be useful to limit the number of data points to be logged, so that the memory reserved for your data set is not dynamically increased at every timestep. Of course, this is only valid if you know the number of data points, and therefore the number of steps, prior to the start of the simulation, which can be achieved with a fixed-step solver (if applicable to your simulation setup). Thus, pre-allocating the workspace array might save you some time by not hitting the memory management system at each timestep.
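To illustrate the pre-allocation argument (in Python/NumPy rather than Simulink, so purely as an analogy), compare growing a log at every step with writing into a buffer sized up front. In Simulink itself, the To Workspace block's "Limit data points to last" parameter plays a similar role, if a bound on the number of points is acceptable:

    import time
    import numpy as np

    # Growing the log at every "timestep" forces repeated reallocation
    # and copying; a buffer sized in advance is written in place.
    n_steps = 10_000

    start = time.perf_counter()
    grown = np.empty(0)
    for k in range(n_steps):
        grown = np.append(grown, k * 0.001)   # reallocates every step
    print("growing     :", time.perf_counter() - start)

    start = time.perf_counter()
    prealloc = np.empty(n_steps)              # size known in advance
    for k in range(n_steps):
        prealloc[k] = k * 0.001               # in-place write
    print("preallocated:", time.perf_counter() - start)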

Dymola/Modelica real-time simulation advances too fast

I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation is advancing about 5% too fast:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried to increase the grid interval to reduce the relative error, but the simulation still advances too fast. I have only read about approaches that reduce model complexity to allow simulation within the defined time steps.
Note that the simulation does keep up with real time and is even faster. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked in the Realtime tab. I have the real-time simulation license option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
However, I did not use inline integration.
Do I need hard real time or inline integration to improve those results? It should be possible to get a deviation lower than 4.5% using soft real time, or not?
Edit 2:
I used the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's advance. The result shows that 36 hours after simulation start, the simulation slows down slightly compared to real time. About 72 hours after the start, it becomes about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
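For reference, the comparison logic I used boils down to something like this sketch, where read_simulation_time() is a hypothetical stand-in for however you obtain the simulated time (the Python27 block in my case):

    import time

    # Periodically compare the wall clock with the simulated time
    # reported by the tool and log the relative deviation.
    def log_drift(read_simulation_time, interval_s=60.0, steps=10):
        t0 = time.time()
        for _ in range(steps):
            time.sleep(interval_s)
            wall = time.time() - t0
            sim = read_simulation_time()
            print("wall=%.1fs sim=%.1fs deviation=%+.2f%%"
                  % (wall, sim, 100.0 * (sim - wall) / wall))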
Next steps will be:
- changing to a fixed-step solver (might well be that this is a big part of the solution)
- changing from a DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time, the deviation is about equal to the deviation with a variable-step solver. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of those huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (I am not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set this to 1/0.95 to compensate for a simulation that runs about 5% too fast.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will have another look later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola models with an embedded OPC server requires administrator rights (which I did not have before). The active folder of Dymola must not be write-protected.

Need help with the Hopfield simulation function

I have trained a Hopfield network using the newhop function. When I simulate this network on my test input data, [y,Pf,Af] = sim(net,{1 repeatnum},{},{im1}); it works properly, but the problem is that it takes the number of iterations as an input argument, e.g. 100 iterations. The network may converge on the input data in, for example, the 5th iteration, and then there is no need to continue the simulation. Is there any way to simulate only until the network converges?
Regards!
Check out net.adaptParam.goal.
If necessary, set net.adaptFcn and net.adaptParam properly, according to help nntrain.
From help traingd:
Training stops when any of these conditions occurs:
1) The maximum number of EPOCHS (repetitions) is reached.
2) The maximum amount of TIME has been exceeded.
3) Performance has been minimized to the GOAL.
4) The performance gradient falls below MINGRAD.
5) Validation performance has increased more than MAX_FAIL times
since the last time it decreased (when using validation).
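If the adapt parameters do not give you an early stop, another option is to step the recurrent network manually and stop at the first fixed point. Here is a tool-independent sketch in Python (W and b stand in for the weights and biases newhop produced; newhop's actual activation is a saturating linear one, sign() just keeps the sketch short):

    import numpy as np

    # Apply the Hopfield update one step at a time and stop at the
    # first fixed point instead of fixing the iteration count ahead
    # of time.
    def run_until_converged(W, b, state, max_iters=100):
        for it in range(1, max_iters + 1):
            new_state = np.sign(W @ state + b)   # one synchronous update
            new_state[new_state == 0] = 1        # break ties consistently
            if np.array_equal(new_state, state): # fixed point reached
                return state, it
            state = new_state
        return state, max_iters

The same loop can be written in MATLAB by calling sim for a single step at a time, feeding the output back in as the next initial condition, and breaking as soon as the output stops changing.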