Reduce execution time in LabVIEW MathScript node on a real-time target

Executing this MATLAB code in a MathScript node on a real-time target takes 2700 microseconds, but I need to finish the computation within 1000 microseconds. Please help me achieve this goal. Thanks in advance!

Related

How to understand simulation time and time step in the CARLA simulator

I am new to CARLA and have a question regarding the following statement in the CARLA documentation:
"There is a difference between real time, and simulation time. The simulated world has its own clock and time, conducted by the server. Computing two simulation steps takes some real time. However, there is also the time span that went by between those two simulation moments, the time-step."
How should I understand the simulation time and time-step mentioned in this statement? Are they the same thing as the rendering frame?
Thank you for your help!
So you might set a simulation time of 5 seconds to model something - like how far the ripples progress in water when a stone hits the surface.
But the computer/processor may take 5 minutes to complete all the calculations to provide that solution.
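To make the distinction concrete, here is a minimal Python sketch using the CARLA client API (assuming CARLA 0.9.x with a server running on localhost:2000; the 0.05 s step and 100 ticks are just example numbers). Every call to world.tick() advances simulation time by exactly the fixed time-step, while the real time each tick takes depends on how long the server needs to compute it:

import time
import carla

# Connect to a running CARLA server (assumed to be on localhost:2000).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Fix the simulation time-step and let the client drive the clock.
settings = world.get_settings()
settings.synchronous_mode = True      # the server only steps when we tell it to
settings.fixed_delta_seconds = 0.05   # simulated seconds per step
world.apply_settings(settings)

wall_start = time.perf_counter()
sim_start = world.get_snapshot().timestamp.elapsed_seconds

for _ in range(100):
    world.tick()   # advance the simulation by exactly 0.05 simulated seconds

sim_elapsed = world.get_snapshot().timestamp.elapsed_seconds - sim_start
wall_elapsed = time.perf_counter() - wall_start

# sim_elapsed is always 100 * 0.05 = 5.0 s of simulation time;
# wall_elapsed is however long the server actually needed to compute those steps.
print(f"simulation time: {sim_elapsed:.2f} s, real time: {wall_elapsed:.2f} s")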

Possible way to speed up SUMO simulation

Hi all, I am a new SUMO user. I am running simulations iteratively with DUAROUTER and SUMO. The simulation consists of 20000 trips on a Singapore network and it is very slow: it takes more than one hour to complete a single simulation.
Does anyone know a way to speed up the process? I need to do 50 iterations, and one hour per iteration is too slow.
My commands are as follows:
duarouter --net-file sg_left_v1.net.xml --trip-files trips20000_merged.trips.xml --output-file 0.20000.route.xml --ignore-errors true --no-warnings true --repair true
sumo -c simulation_sg_20000.sumocfg --tripinfo-output 0.20000.trip.output.xml --no-warnings true --tripinfo-output.write-unfinished true --vehroute-output 0.20000.individual.output.xml --link-output 0.20000.link-state.output.xml
The number X in X.20000.something.xml is incremented on each iteration by my Python code.
Thank you all in advance.
There are several things you can do to speed up the process by analyzing the bottlenecks. I would do the following:
1. Check whether the traffic flow is smooth. If big jams pile up, the simulation slows down.
2. Check whether the vehicles depart at the times you expect them to. Even if there is no visible jam, the insertion backlog slows the simulation down. A good indicator is when vehicles with an intended departure time near the end of the simulation take much longer to depart (this is also in the tripinfo output).
3. Recheck whether you really need all the outputs. To get a feeling for whether this helps, disable them one by one and look at the running time.
3a. Extend SUMO to aggregate your data. It is open source after all, so if the outputs are the bottleneck, aggregate inside the simulation.
4. Think about parallel execution. Maybe you do not need to start the iterations one after another (see the sketch after this list)?
5. Make the scenario smaller.
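For point 4, if the 50 iterations are actually independent of each other (i.e. iteration N does not build on the routes of iteration N-1), a rough Python sketch like the one below can launch several of them at once with subprocess. The commands follow the question; the worker count, the assumption of independence, and the extra --route-files option (so each parallel run reads its own route file instead of whatever the shared sumocfg names) are mine.

import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_iteration(i: int) -> int:
    """Run duarouter and sumo for iteration i, using the file names from the question."""
    route_file = f"{i}.20000.route.xml"
    subprocess.run(
        ["duarouter",
         "--net-file", "sg_left_v1.net.xml",
         "--trip-files", "trips20000_merged.trips.xml",
         "--output-file", route_file,
         "--ignore-errors", "true",
         "--no-warnings", "true",
         "--repair", "true"],
        check=True)
    subprocess.run(
        ["sumo", "-c", "simulation_sg_20000.sumocfg",
         "--route-files", route_file,   # assumption: point each run at its own routes
         "--tripinfo-output", f"{i}.20000.trip.output.xml",
         "--no-warnings", "true",
         "--tripinfo-output.write-unfinished", "true",
         "--vehroute-output", f"{i}.20000.individual.output.xml",
         "--link-output", f"{i}.20000.link-state.output.xml"],
        check=True)
    return i

if __name__ == "__main__":
    # Run up to 4 iterations at a time; adjust max_workers to your CPU cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for i in pool.map(run_iteration, range(50)):
            print(f"iteration {i} finished")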
To accelerate the simulation itself, you can also pass the --step-length parameter to SUMO. It sets the simulated time that passes in one simulation step (in seconds, default 1); a larger step means fewer steps for the same simulated period, at the cost of accuracy.
sumo your-other-args-here --step-length 1
This should let you get the result you want.

Elapsed time in PLC (Omron CX-Programmer)

I am trying to measure elapsed time.
The ladder logic above tries to measure the time for which bit 6000.03 is ON.
It reads around 6000 ms, whereas my stopwatch showed around 11 seconds.
What is wrong with the logic?
EDIT:
I had tried the logic below as well, but again got different results:
The timer is reset within the PLC scan cycle, and that causes the error you are seeing.
Make the timer run longer, about 1 s, and count 600 pulses; I am sure you will get a lower error.
Another solution is to find the system clock bit and use it, since it is not dependent on the PLC cycle. Right now I cannot remember the system bit addresses for Omron. If you still have a problem, just let me know and I will look them up for you.
The idea is right, but maybe the start/stop/reset bits are causing the problem?

Synchronise Real-Time Workshop in MATLAB for the grt target

I am trying to run a real-time simulation in Simulink using Real-Time Workshop. The target is grt (I have tried rtwin, but my simulation refuses to compile for it). I need the simulation to run in real time so that one second in the simulation lasts one second of real time. grt ignores real time and finishes the simulation in the shortest time possible. Is there any way to synchronise it?
I have tried http://www.mathworks.com/matlabcentral/fileexchange/3175 but could not get it to work (it does not compile).
Thank you for any suggestions.
It looks like it is impossible. I was able to slow down the execution by using the Sleep(time in ms) function from the WinAPI and the clock function from time.h, which looked quite good at low sample rates. However, when I increased the sample rate the Sleep function slept for too long, which resulted in errors, with one second in the simulation lasting more than one real-world second.
The idea was that one iteration period should last, say, 200 ms. Time how long one iteration of the code takes to execute using the clock function, then call Sleep(200 - u), where u is the length of the iteration. The problem is that Sleep suspends the process and wakes it up when the scheduler decides to, not exactly after the time you pass as the argument.
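For illustration, here is a rough Python sketch of that pacing idea (the original used the WinAPI Sleep and clock from time.h; Python's time.sleep has the same "at least this long" behaviour, and the 10 ms step duration is just a made-up placeholder):

import time

PERIOD = 0.200  # one simulation step should last 200 ms of real time

def simulation_step():
    """Placeholder for one iteration of the generated model code."""
    time.sleep(0.010)  # pretend the computation takes about 10 ms

wall_start = time.perf_counter()
for k in range(25):                    # 25 * 200 ms = 5 s of simulated time
    t0 = time.perf_counter()
    simulation_step()
    u = time.perf_counter() - t0       # how long this iteration actually took

    # Naive pacing as described above: sleep away whatever is left of the period.
    # Like WinAPI Sleep, time.sleep only guarantees it sleeps at least this long;
    # each oversleep accumulates, so at small periods one simulated second ends
    # up lasting more than one real second.
    if PERIOD - u > 0:
        time.sleep(PERIOD - u)

elapsed = time.perf_counter() - wall_start
print(f"5.0 s of simulated time took {elapsed:.3f} s of real time")

Pacing against absolute deadlines (sleeping until wall_start + k * PERIOD rather than for PERIOD - u) reduces the accumulated drift somewhat, but a plain OS sleep is still not a real-time guarantee.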
I know this is not a solution, but I am posting it so that anyone who faces the same problem as me won't try this dead-end approach. I had to rewrite the simulation for rtwin and now it works fine.
Another idea would be to somehow use interrupts, but I guess it would be quite complicated and not worth the trouble.

LabVIEW Real-Time Timed Loop resolution

We are using LabVIEW Real-Time with the PXI-8110 Controller.
I am facing the following problem:
I have a loop with a 500 µs period (a timed loop) and no other task. On each loop iteration I write the time into RAM, and I save the data afterwards.
It is necessary that the period is exact, but I see that it is 500 µs with +/- 25 µs of jitter.
The clock for the timed loop is 1 MHz.
How is it possible to get 500 µs - 25 µs? I would understand getting 500 µs + xx µs if my computation were too heavy, but so far I only do an addition, nothing more.
So does anyone have a clue what is going wrong?
I thought it would be possible to get a resolution of 1 µs, as NI advertises (if the computation isn't too heavy).
Thanks.
You may need to check which thread the code is running in. An easier approach is to use the Timed Loop, as it will try to correct for overruns. Also, pre-allocate the array you are storing the data into and then use Replace Array Subset for each new value. You should see a massive improvement this way.
If you display that value while running in development mode, you will see jitter of +/- some amount of time because you are reporting everything back to the host. Build the code into an executable and the jitter will shrink again.