How to understand simulation time and time step in carla simulator - simulation

I am new to CARLA and have one question regarding the statement (quoted below) from the CARLA documentation:
How should I understand the simulation time and time-step mentioned in the statement? Are they the same as a rendering frame?
"There is a difference between real time, and simulation time. The simulated world has its own clock and time, conducted by the server. Computing two simulation steps takes some real time. However, there is also the time span that went by between those two simulation moments, the time-step."
Thank you for your help!

So you might set a simulation time of 5 seconds to model something - like how far the ripples spread in the water when a stone hits the surface.
But the computer/processor may take 5 minutes of real time to complete all the calculations that produce that solution.
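As a rough sketch of how this looks in the CARLA Python API (assuming a CARLA server running on localhost:2000; the concrete values are only examples), a fixed time-step in synchronous mode makes the distinction explicit: every tick advances simulation time by exactly the chosen amount, while the real time each tick takes depends on how long the server needs to compute it.

import carla

# Connect to a running CARLA server (assumed to be on localhost:2000).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Fix the simulation time-step and let the client drive the clock.
settings = world.get_settings()
settings.fixed_delta_seconds = 0.05   # 50 ms of simulated time per tick
settings.synchronous_mode = True      # the server waits for world.tick() from the client
world.apply_settings(settings)

# 100 ticks advance simulation time by exactly 100 * 0.05 s = 5 s,
# while the real (wall-clock) time spent in this loop depends on the hardware.
for _ in range(100):
    world.tick()

Each tick corresponds to one simulation step (and, with rendering enabled, one rendered frame), so the 5 seconds of simulated ripples from the example above may take more or less than 5 seconds of real time to compute.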

Related

AnyLogic - stop simulation at specific model time

I want to stop the model at a specific model time.
Do I have to work with a counter variable for the time and then call stopSimulation(), or is there another possibility? My simulation will run for one week in model time, and I want to stop it 5 minutes before the end, i.e. 5 minutes before one week of model time is over.
You can specify the stop time in the simulation experiment properties: in the settings of Simulation:main you can define exactly when you want your simulation to stop.
To stop a week less 5 minutes, replace the number 100 with 10,075 (one week is 7 * 24 * 60 = 10,080 minutes, assuming your model runs in units of minutes).
Good luck

Silk Performer - recalculating a specific time period

Is it possible to have Silk Performer recalculate results based on a specific time period of the test? What I am seeing is a spike at the end of my test. That needs to be investigated, but I would like to show that prior to the spike the average time was good. Because of this spike the average is a couple of seconds higher - at least that is my assumption right now. I would like to show that before the spike the times were good.
In the Silk Performance Explorer there is a resample option under results. You can set the specific time period for calculation there.

Possible way to speed up SUMO simulation

Hi all, I am a new SUMO user. I am running simulations iteratively with DUAROUTER and SUMO. The simulation consists of 20,000 trips in the Singapore network and it is very slow: one simulation takes more than an hour to complete.
Does anyone know a way to speed up the process? I need to do 50 iterations, and 1 hour per iteration is too slow.
My commands are as follows:
duarouter --net-file sg_left_v1.net.xml --trip-files trips20000_merged.trips.xml --output-file 0.20000.route.xml --ignore-errors true --no-warnings true --repair true
sumo -c simulation_sg_20000.sumocfg --tripinfo-output 0.20000.trip.output.xml --no-warnings true --tripinfo-output.write-unfinished true --vehroute-output 0.20000.individual.output.xml --link-output 0.20000.link-state.output.xml
The number X in X.20000.something.xml is increased on each iteration by my Python code.
Thank you all in advance.
There are different things you can do to speed up the process by analyzing the bottlenecks. I would do the following:
1. Check whether the traffic flow is smooth. If big jams pile up, the simulation slows down.
2. Check whether the vehicles depart at the times you expect them to. Even if there is no visible jam, the backlog slows the simulation down. A good indicator is that vehicles with an intended departure time near the end of the simulation take much longer to depart (this is also in the tripinfo output).
3. Recheck whether you need all outputs. To get a feeling for whether it helps, disable them one by one and look at the running time.
3a. Extend SUMO to aggregate your data. It is open source after all, so if the outputs are the bottleneck, aggregate inside the simulation.
4. Think about parallel execution. Maybe you do not need to start the iterations one after another (see the sketch after this list)?
5. Make the scenario smaller.
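As a sketch of point 4, assuming the 50 runs are actually independent (for example different seeds rather than a DUAROUTER/SUMO feedback loop where each iteration needs the previous routes) and that the same configuration file can be reused for every run, a Python process pool could launch several sumo instances at once:

import subprocess
from multiprocessing import Pool

def run_iteration(i):
    # File names follow the X.20000.something.xml pattern from the question.
    cmd = [
        "sumo", "-c", "simulation_sg_20000.sumocfg",
        "--tripinfo-output", f"{i}.20000.trip.output.xml",
        "--no-warnings", "true",
        "--tripinfo-output.write-unfinished", "true",
        "--vehroute-output", f"{i}.20000.individual.output.xml",
        "--link-output", f"{i}.20000.link-state.output.xml",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Run four simulations at a time; adjust to the available CPU cores and RAM.
    with Pool(processes=4) as pool:
        pool.map(run_iteration, range(50))

If the iterations do feed into each other (as in iterative assignment), they have to stay sequential, and the other points are the ones to focus on.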
To accelerate the simulation you can also pass the step-length parameter to sumo. It sets the length of one simulation time step in seconds (the default is 1), so a larger value means fewer steps per simulated second, at the cost of coarser vehicle updates:
sumo your-other-args-here --step-length 1
This should give you the result you want.

Dymola/Modelica real-time simulation advances too fast

I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation is advancing about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried to increase the grid interval to reduce the relative error, but the simulation still advances too fast. I have only read about approaches that reduce model complexity so that the simulation can keep up with the defined time steps.
Note that the simulation does keep up with real time and is even faster: the integration needed 6.57e+005 s of CPU time for 691,200 s of simulated time, i.e. it ran roughly 5% faster than real time. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked on the Realtime tab. I have the real-time simulation license option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
I did, however, not use inline integration.
Do I need hard real-time or inline integration to improve these results? It should be possible to get a deviation lower than 4.5% using soft real-time, or not?
Edit 2:
I used the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation advance. The result shows that 36 hours after the simulation start, the simulation slows down slightly (compared to real time). About 72 hours after the start of the simulation it becomes about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
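For anyone trying to reproduce this kind of measurement, a minimal drift logger could look like the sketch below; get_simulation_time() is a hypothetical callback standing in for whatever interface exposes the current model time in seconds (in the setup above, the Python27 block of the Buildings library plays that role).

import time

def log_drift(get_simulation_time, interval_s=60.0, duration_s=3600.0):
    # Periodically compare elapsed wall-clock time with elapsed simulation time.
    t0_wall = time.time()
    t0_sim = get_simulation_time()
    while time.time() - t0_wall < duration_s:
        time.sleep(interval_s)
        wall = time.time() - t0_wall
        sim = get_simulation_time() - t0_sim
        # A positive deviation means the simulation runs ahead of real time.
        print(f"wall={wall:9.1f}s  sim={sim:9.1f}s  deviation={100.0 * (sim - wall) / wall:+.2f}%")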
Next steps will be:
- changing to a fixed-step solver (this might well be a big part of the solution)
- changing from the DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
No luck... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation is about the same as with the variable-step solver. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of these huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" that is in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set it to 1/0.95 (about 1.053) to compensate for the 5% that the simulation runs too fast.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it now; I will have another look for it later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola models with an embedded OPC server requires administrator rights (which I did not have before), and the active folder of Dymola must not be write-protected.

Synchronise Real-Time Workshop in MATLAB for the grt target

I am trying to run a real-time simulation in Simulink using Real-Time Workshop. The target is grt (I have tried rtwin, but my simulation refuses to compile for it). I need the simulation to run in real time, so that one second in the simulation lasts one second of real time. grt ignores real time and finishes the simulation in the shortest time possible. Is there any way to synchronise it?
I have tried http://www.mathworks.com/matlabcentral/fileexchange/3175 but could not get it to work (it does not compile).
Thank you for any suggestions.
It looks like it is impossible. I was able to slow down the execution by using the Sleep(time in ms) function from the WinAPI and the clock function from time.h, which looked quite good for low sample rates. However, when I increased the sample rate the Sleep function slept for too long, which resulted in errors, with one second in the simulation lasting more than one real-world second.
The idea was that one iteration period should last, let's say, 200 ms. Then time how long one iteration of the code takes to execute using the clock function, and call Sleep(200 - u), where u is the length of the iteration. The problem is that the Sleep function suspends the process and wakes it up when the scheduler decides to, not exactly after the time given in the argument.
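For illustration, the pacing idea described above looks roughly like the Python sketch below (time.sleep stands in for the WinAPI Sleep, and step_simulation is a placeholder for one iteration of the generated model code); the same limitation applies, since sleep calls only guarantee a minimum waiting time.

import time

PERIOD = 0.2                              # desired length of one iteration: 200 ms

def step_simulation():
    # Placeholder for one iteration of the generated model code.
    time.sleep(0.05)

for _ in range(100):                      # pace 100 iterations, i.e. 20 s of model time
    start = time.perf_counter()
    step_simulation()
    elapsed = time.perf_counter() - start
    if elapsed < PERIOD:
        # Like Sleep(), time.sleep() only guarantees a minimum delay; at high
        # sample rates the oversleep becomes comparable to the period and the
        # loop falls behind real time.
        time.sleep(PERIOD - elapsed)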
I know this is not a solution, but I am posting it so that anyone who faces the same problem will not try this dead-end approach. I had to rewrite the simulation for rtwin and now it works fine.
Another idea would be to somehow use interrupts, but I guess that would be quite complicated and not worth the trouble.