I tried to run a model in AnyLogic, but the error below keeps appearing (see picture).
What does it mean? I have several schedules which control the arrival rates, and I have already checked that they all use the same settings. The delays are measured in minutes, the arrival rates in weeks and months, and the simulation runs for 3 months. I have events for the hold: startevent and stopevent.
Thank you for your help.
Error in the console: (console screenshot)
Main.java: (screenshot)
Related
I am new to CARLA and have one question regarding the statement (quoted below) from the CARLA documentation:
How should I understand the simulation time and time-step mentioned in the statement? Are they the same as the rendering frame?
"There is a difference between real time, and simulation time. The simulated world has its own clock and time, conducted by the server. Computing two simulation steps takes some real time. However, there is also the time span that went by between those two simulation moments, the time-step."
Thank you for your help!
So you might set a simulation time of 5 seconds to model something - like how far the ripples progress in water when a stone hits the surface.
But the computer/processor may take 5 minutes to complete all the calculations to provide that solution.
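In the CARLA Python API this distinction becomes visible when the server runs in synchronous mode with a fixed time-step: every world.tick() advances the simulation clock by exactly fixed_delta_seconds, however long the computation takes in real time. A minimal sketch (assuming a CARLA 0.9.x server at the default host and port; the 0.05 s step is just an example value):

    import time
    import carla

    client = carla.Client('localhost', 2000)   # example host/port
    client.set_timeout(10.0)
    world = client.get_world()

    # Synchronous mode with a fixed time-step: the simulation clock only
    # advances when the client calls world.tick().
    settings = world.get_settings()
    settings.synchronous_mode = True
    settings.fixed_delta_seconds = 0.05        # 50 ms of simulated time per step
    world.apply_settings(settings)

    wall_start = time.time()
    for _ in range(100):
        world.tick()                           # compute one simulation step

    sim_elapsed = 100 * settings.fixed_delta_seconds   # 5 s of simulated time
    wall_elapsed = time.time() - wall_start             # real time spent computing
    print(sim_elapsed, wall_elapsed)           # the two are generally different

After 100 ticks exactly 5 s have passed in the simulated world (the ripple example above), while the wall-clock time depends entirely on how fast the server could compute those steps.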
I am running a parameter variation experiment that varies five parameters with two levels each, yielding 32 iterations. During the run, the error in the attached image occurred.
-When running this design three separate times with no replications, the error occurred in only one of the three runs.
-When adding replications to the run (even as few as two), the error always occurs, and early in the run.
-The maximum available memory I selected for the experiment is 60,000 MB, out of the device's total 46 GB of RAM.
-Disabling parallel execution doesn't seem appealing because of the resulting slow run speed; I use the Material Handling Library, and a single simulated day takes around 18 minutes to run.
How can I overcome this error?
Thanks
When I run my simulation in MATLAB Simulink, it takes a long time, stays in the processing state, and does not finish. Simulink shows a message suggesting that I run the command below:
After entering this command, the problem remains; I can only see results in the Scope block for the length of time shown beside the processing status, which is very short. How can I fix this problem?
The message is:
I can only see results up to a simulation time of T = 0.222.
I am trying to run a 1 ns simulation using VMD/NAMD on top of my 200 ps simulation, so I set the program to run 800000 steps with a timestep of 1. However, the next day (it took about 12 hours) it was complete, but I only had ~16500 frames. Does anyone know why the program collected so few frames? I have a similar issue with other simulations: the length I ask them to run and the number of frames I get do not match.
I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation advances about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
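(The ~5% figure follows directly from the two numbers in the log, assuming the reported CPU-time is effectively the wall-clock time of the synchronized run:)

    sim_time = 691200.0          # simulated seconds, from "terminated at T = 691200"
    cpu_time = 6.57e5            # seconds, from "CPU-time for integration"
    print(sim_time / cpu_time)   # ~1.052, i.e. about 5% faster than real time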
I already tried to increase the grid interval to reduce the relative error, but the simulation still advances too fast. So far I have only read about approaches that reduce model complexity so that the simulation can be completed within the defined time steps.
Note that the simulation does keep up with real time and is even faster. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked in the Realtime tab. I have the real-time simulation licence option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
However, I did not use inline integration.
Do I need hard real time or inline integration to improve these results? It should be possible to get a deviation lower than 4.5% using soft real time, shouldn't it?
Edit 2:
I took the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's progress. The result shows that 36 hours after simulation start, the simulation slows down slightly (compared to real time). About 72 hours after the start, it becomes about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
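(The drift is obtained by logging pairs of system time and simulation time at each sample instant and comparing the increments; a minimal post-processing sketch with purely illustrative sample values:)

    # Logged (wall-clock seconds, simulated seconds) pairs - example values only.
    samples = [(0.0, 0.0), (3600.0, 3598.0), (7200.0, 7215.0)]

    for (w0, s0), (w1, s1) in zip(samples, samples[1:]):
        drift = (s1 - s0) / (w1 - w0) - 1.0    # >0: simulation runs faster than real time
        print("drift over interval: %+.1f %%" % (100.0 * drift))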
Any explanations?
Next steps will be:
-changing to a fixed-step solver (this might well be a big part of the solution)
-changing from a DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation is about the same as with a variable-step solver. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of these huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" that is in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set this to 1/0.95.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will have another look later.
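(As a rough rule of thumb derived from the suggestion above: if the run advances by a fraction d too fast, a Slowdown factor of 1/(1 - d) compensates for it. That this is exactly how Dymola applies the factor is an assumption here.)

    # Hypothetical helper: pick the Slowdown factor for an observed drift d,
    # assuming (as suggested above) that 1/(1 - d) compensates a run that
    # advances a fraction d too fast.
    def slowdown_factor(d):
        return 1.0 / (1.0 - d)

    print(slowdown_factor(0.05))   # ~1.0526, i.e. the suggested 1/0.95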
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola problems with an embedded OPC server requires administrator rights (which I did not have before), and the active Dymola folder must not be write protected.