I am trying to measure elapsed time.
The ladder logic above tries to measure the time for which bit 6000.03 is on.
It reads around 6000 ms, whereas my stopwatch showed around 11 seconds.
What is wrong with the logic?
EDIT:
I had tried the logic below as well, but again got different results:
The timer is reset every PLC scan cycle, which causes the delays.
Make the timer run longer, about 1 s, and count 600 pulses; I am sure you will get a lower error.
Another solution is to find a system clock bit and use it; it is not dependent on the PLC cycle. Right now I cannot remember the system bit addresses for Omron. If you still have a problem, just let me know and I will look them up for you.
The idea is right, but maybe the start/stop/reset bits are causing the problem?
In Dymola, I'm able to do something like:
when time > 100 then
assert(false,"Simulation taking too long");
end when;
to stop simulations based on the time variable itself.
However, what I'd like to do is stop the simulation based on elapsed CPU time. Dymola has a way to output the CPU time, and it shows up in the results as CPUtime, but I don't know how to access that variable. In other words, this is what I'd like to do, but the CPUtime variable isn't in scope:
when CPUtime > 100 then
assert(false,"Simulation taking too long");
end when;
Any suggestions, either on how to access CPUtime or on other workarounds for killing simulations based on CPU time?
As already noted:
You can set this in Dymola 2022 in the simulation setup, or alternatively by setting the flag Advanced.Simulation.MaxRunTime.
Note that it is wall-clock time: if you have a parallel simulation, it will stop after 10 s have passed, not when the cores together have spent 10 s, and if for some odd reason you have a long sleep call in the model, the simulation will still end.
(This was already noted in a comment - thanks Priyanka. However, Stack Overflow for some reason warns that answers in comments may be lost.)
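For completeness, a minimal sketch of how that flag might be set from a Dymola script (.mos) or the command line; "MyModel" and the numeric values are placeholders, not anything from the question:
// Abort if the simulation runs longer than 100 s of wall-clock time
Advanced.Simulation.MaxRunTime = 100;
simulateModel("MyModel", stopTime=1000);   // hypothetical model and stop time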
I'm trying to implement a high-precision, millisecond-timescale timer in matlab. Every T seconds, I want to query a camera linked to matlab, and if there is an image in the memory, I want to pull it out. The actual connection to the camera is straightforward - but a problem arises because images are coming in every ~60ms, and need to all be pulled off before another image enters the camera buffer. This essentially means that I need to be checking the camera buffer at least every ~30ms, and ideally every ~5ms.
While MATLAB's built-in timer function ostensibly allows millisecond timing, it suffers from poor precision. In >95% of executions the built-in MATLAB timer will indeed pause for ~5 ms between runs, but in ~5% of cases it hovers around ~30 ms, and in ~1% of cases it takes >100 ms between executions, which unacceptably kills the performance. I should clarify in MATLAB's defense that there are simultaneously two other timers running (both with 1 s periods), as well as a number of figure windows open, so even though my machine is beefy (16-core, 64 GB RAM), there is certainly a lot going on at once. I have tried timers based on .NET timers (System.Timers.Timer(period)) as well as the Java sleep function (java.lang.Thread.sleep(period)), both of which should theoretically be more precise, and while both are better than the MATLAB timer (at the cost of being more unwieldy), neither is able to consistently achieve <60 ms execution delay across thousands of iterations.
Maybe I'm asking for something which is not implementable - but I hope that there is some way to implement a high-precision timer in MATLAB which will continue executing at a ms time-scale even when there are other figures/timers/commands being executed semi-simultaneously. I should maybe clarify that when running just a timer with no other timers/figures open I am able to consistently achieve <60ms execution (and really, consistent <10ms execution for a 5ms timer period). This is possible even when all those timers/figures are open in a different instance of MATLAB, so it seems the problem is to somehow separate the timer from the rest of MATLAB. Any advice or guidance would be appreciated in this regard.
Depending on what exactly you are doing, the timing system of Psychtoolbox may help you.
Specifically, check out the WaitSecs function and its documentation. It is supposed to be more precise than timer, and the documentation contains some tips about achieving high precision timing in general.
Also related is the GetSecs function.
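For illustration, a minimal sketch of a WaitSecs-based polling loop; pollCamera() is a hypothetical placeholder for whatever call actually reads your camera buffer:
% Minimal sketch: poll the camera roughly every 5 ms using Psychtoolbox timing.
% pollCamera() is a hypothetical stand-in for the actual buffer read.
period = 0.005;                    % target period in seconds
tNext  = GetSecs + period;         % absolute deadline for the next poll
for k = 1:1000                     % fixed number of polls, just for the example
    pollCamera();                  % pull any pending image off the buffer
    WaitSecs('UntilTime', tNext);  % sleep until the absolute deadline
    tNext = tNext + period;        % step the deadline forward to avoid drift
end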
It might however happen that switching to WaitSecs will not help you. In that case you can be quite sure that your machine is just too loaded to do what you are trying to do.
I am trying to run a real-time simulation in Simulink using Real-Time Workshop. The target is grt (I have tried rtwin, but my simulation refuses to compile for it). I need the simulation to run in real time so that one second in simulation lasts one second of real time. grt ignores real time and finishes the simulation in the shortest time possible. Is there any way to synchronise it?
I have tried http://www.mathworks.com/matlabcentral/fileexchange/3175 but could not get it to work (it does not compile).
Thank you for any suggestions.
Looks like it is impossible. I was able to slow down the execution by using the Sleep(time in ms) function from the WinAPI and the clock function from time.h, which looked quite good at low sample rates. However, when I increased the sample rate, the Sleep function slept for too long, which resulted in errors: one second in the simulation lasted more than one real-world second.
The idea was that one iteration period should last, say, 200 ms. Time how long one iteration of the code takes to execute using the clock function, then call Sleep(200 - u), where u is the duration of that iteration. The problem is that the Sleep function suspends the process and wakes it up when the scheduler decides to, not exactly after the time you pass as the argument.
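A rough sketch of that pacing idea, assuming a Windows build and a hypothetical do_one_simulation_step() standing in for the generated model step; as noted below, this turned out to be a dead end, not a fix:
/* Pacing sketch: time each iteration with clock() and sleep away the rest of
   a 200 ms period with the WinAPI Sleep(). Sleep() may oversleep, so the
   loop drifts at higher sample rates. */
#include <windows.h>
#include <time.h>

#define PERIOD_MS 200

void do_one_simulation_step(void);   /* hypothetical: one step of the generated model */

void run_paced_loop(int steps)
{
    for (int i = 0; i < steps; ++i) {
        clock_t start = clock();
        do_one_simulation_step();
        long elapsed_ms = (long)((clock() - start) * 1000 / CLOCKS_PER_SEC);
        if (elapsed_ms < PERIOD_MS)
            Sleep((DWORD)(PERIOD_MS - elapsed_ms));   /* may sleep longer than asked */
    }
}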
I know this is not a solution, but I am posting it so that anyone who faces the same problem won't waste time on this dead end. I had to rewrite the simulation for rtwin and now it works fine.
Another idea would be to somehow use interrupts, but I guess it would be quite complicated and not worth the trouble.
We are using LabVIEW Real-Time with the PXI-8110 Controller.
I am facing the following problem:
I have a loop with a 500 µs period (a timed loop) and no other task. I write the time of each loop iteration into RAM and then save the data afterwards.
It is necessary that the period is exact, but I see that it is 500 µs with +/- 25 µs.
The clock for the timed loop is 1 MHz.
How is it possible to get 500 µs - 25 µs? I would understand getting 500 µs + xx µs if my computation were too heavy, but so far I only do an addition, nothing more.
So does anyone have a clue what is going wrong?
I thought it would be possible to get a resolution of 1 µs, as NI advertises (if the computation isn't too heavy).
Thanks.
You may need to check which thread the code is running in. An easier approach is to use the Timed Loop, as this will try to correct for overruns. Also, pre-allocate the array that you are storing the data into and then use Replace Array Subset with each new value. You should see a massive improvement this way.
If you display that value while running in development mode, you will see jitter of +/- some time, because you are reporting everything back to the host. Build the executable and the jitter will shrink.
What is the fastest I can run an NSTimer and still get reliable results? I've read that approaching 30ms it STARTS to become useless, so where does it "start to start becoming useless"...40ms? 50ms?
Say the docs:
the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds
Sounds like if you want to be safe, you shouldn't use timers below 0.1 sec. But why not try it in your own app and see how low you can go?
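If you do try it, a small probe along these lines logs the actual spacing between firings; this is only an illustrative sketch, TimerProbe is a made-up class name, and -start must be called on a thread with a running run loop (e.g. the main thread):
#import <Foundation/Foundation.h>
#import <QuartzCore/QuartzCore.h>   // for CACurrentMediaTime()

@interface TimerProbe : NSObject
@property (nonatomic) CFTimeInterval lastFire;
@end

@implementation TimerProbe
- (void)start {
    self.lastFire = CACurrentMediaTime();
    [NSTimer scheduledTimerWithTimeInterval:0.005   // ask for 5 ms
                                     target:self
                                   selector:@selector(tick:)
                                   userInfo:nil
                                    repeats:YES];
}
- (void)tick:(NSTimer *)timer {
    // Log the measured interval between consecutive firings in milliseconds
    CFTimeInterval now = CACurrentMediaTime();
    NSLog(@"interval = %.1f ms", (now - self.lastFire) * 1000.0);
    self.lastFire = now;
}
@end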
You won't find a guarantee on this. NSTimers are opportunistic by nature since they run with the event loop, and their effective finest granularity will depend on everything else going on in your app in addition to the limits of whatever the Cocoa timer dispatch mechanisms are.
What's your definition of reliable? A 16 ms error in a 1-second timer is under 2% error, but in a 30 ms timer it is over 50% error.
NSTimers will wait for whatever is happening in the current run loop to finish, and any errors in time can accumulate. For example, if you touch the display N times, all subsequent repeating NSTimer firings may be late by the cumulative time taken by 0 to N touch handlers (plus anything else that was running at the "wrong" time), and so on.
CADisplayLink timers will attempt to quantize time to the frame rate, assuming that no set of foreground tasks takes as long as a frame time.
It depends on what kind of results you are trying to achieve. With the NSTimer class, an interval of 0.5-1.0 s is a good place to start for reliable results.