How to make a periodic task in MATLAB?

I have a function that prints reports to the console with fprintf. But it writes so much that the output is impossible to follow by eye, so I want to make it "slower", i.e. reduce the amount of writing. Can I make that decision based on mod(current_time, something)?
What I want to achieve is this: the system should be verbose for 0.5 seconds, then silent for 1 second, and repeat.
Is this possible?
Thanks for any help!
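A minimal sketch of the mod-based gating asked about above, assuming the printing happens inside a loop and elapsed time comes from tic/toc; the 1.5 s cycle length and all names here are illustrative:

    % Verbose for 0.5 s, then silent for 1 s, repeating every 1.5 s.
    t0 = tic;                       % start a stopwatch once, before the loop
    while true                      % stand-in for the existing processing loop
        phase = mod(toc(t0), 1.5);  % position within the repeating 1.5 s cycle
        if phase < 0.5              % first 0.5 s of each cycle: print reports
            fprintf('status: %s\n', datestr(now, 'HH:MM:SS.FFF'));
        end                         % remaining 1.0 s of each cycle: stay silent
        pause(0.01);                % placeholder for the real work
    end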

Related

Use number of solutions rather than maximum time to end solve attempts

I am using the CP-SAT solver on a JSP.
I am iterating, so the solver runs many times (basically simulating each day for a year). I do not need to find the optimal solution, just a reasonably good one, so I would like to be a bit smarter about ending the solver than simply letting it run for X seconds each time. For example, I would like to take the 5th solution each time, or even to stop once the current solution's makespan is only 5% (for example) shorter than the previous solution's.
Is this possible? I am only aware of solver.parameters.max_time_in_seconds as a way of limiting the calculation time. Intermediate solutions are printed by SolutionPrinter, but I think this is output only and there is no way to break the solver during a run?
That is not quite right: you can stop the search from a callback. See this recipe:
https://github.com/google/or-tools/blob/stable/ortools/sat/docs/solver.md#stopping-search-early

High-precision millisecond timer in MATLAB

I'm trying to implement a high-precision, millisecond-timescale timer in MATLAB. Every T seconds, I want to query a camera linked to MATLAB, and if there is an image in memory, I want to pull it out. The actual connection to the camera is straightforward, but a problem arises because images come in every ~60 ms and all need to be pulled off before another image enters the camera buffer. This essentially means that I need to be checking the camera buffer at least every ~30 ms, and ideally every ~5 ms.
While MATLAB's built-in timer function ostensibly allows millisecond timing, it suffers from poor precision. In >95% of executions the built-in MATLAB timer will indeed pause for ~5 ms between runs, but in ~5% of cases it hovers around ~30 ms, and in ~1% of cases it takes >100 ms between executions, which unacceptably kills the performance. I should clarify in MATLAB's defense that there are simultaneously two other timers running (both with 1 s periods), as well as a number of figure windows open, so even though my machine is beefy (16-core, 64 GB RAM), there is certainly a lot going on at once. I have tried timers based on .NET timers (System.Timers.Timer(period)) as well as on the Java sleep function (java.lang.Thread.sleep(period)), both of which should theoretically be more precise, and while both are better than the MATLAB timer (at the cost of being more unwieldy), neither is able to consistently achieve <60 ms execution delay across thousands of iterations.
Maybe I'm asking for something that is not implementable, but I hope there is some way to implement a high-precision timer in MATLAB that will keep executing on a millisecond timescale even when other figures/timers/commands are being executed semi-simultaneously. I should maybe clarify that when running just a timer, with no other timers/figures open, I am able to consistently achieve <60 ms execution (and really, consistent <10 ms execution for a 5 ms timer period). This is possible even when all those timers/figures are open in a different instance of MATLAB, so it seems the problem is to somehow separate the timer from the rest of MATLAB. Any advice or guidance would be appreciated in this regard.
Depending on what exactly you are doing, the timing system of Psychtoolbox may help you.
Specifically, check out the WaitSecs function and its documentation. It is supposed to be more precise than timer, and the documentation contains some tips about achieving high precision timing in general.
Also related is the GetSecs function.
It might, however, happen that switching to WaitSecs does not help you. In that case you can be fairly sure that your machine is simply too loaded to do what you are trying to do.
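If you do go down the Psychtoolbox route, a rough sketch of what a WaitSecs/GetSecs polling loop might look like is below; the camera check and the done flag are placeholders for your own code, and the 5 ms period is just the figure from the question:

    % Rough sketch of a fixed-rate polling loop using Psychtoolbox timing.
    period   = 0.005;                    % target polling period: 5 ms
    done     = false;                    % placeholder: set true from your own logic
    deadline = GetSecs + period;         % absolute time of the next poll
    while ~done
        % --- check the camera buffer and pull any pending image here ---
        WaitSecs('UntilTime', deadline); % sleep until the absolute deadline
        deadline = deadline + period;    % schedule the next poll
    end

Scheduling against absolute deadlines rather than sleeping for a fixed interval keeps any single late wake-up from accumulating into long-term drift.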

A way/tool to estimate the execution time of an app/task

I'm trying to run a real-life experiment for an application on a Raspberry Pi, and I need to estimate or predict the application's execution time. In other words, before running the task I need to know roughly how long it will take to return a result. I have identified several existing techniques and works, but most of them are simulation studies that do not carry over to real-life experiments. Can anyone help me with any idea or technique (no code)? Thank you in advance.
Estimating the execution time of an application or function is going to be difficult in any context. You might want to look up the halting problem for some insight into why: it is impossible in general to determine whether a given program will even finish executing, and therefore you can't really tell how long it will take to finish.
For general computing, the varying hardware capabilities of any given system will always affect a program's execution time. A Raspberry Pi is a more fixed, known quantity than that, and therefore more predictable in that sense, but its specifications are not consistent across its various versions, which adds to the complexity of determining a run time.
Practically, the most reliable way to determine how long a process will take is simply to run it and time it. If you absolutely need predicted times, you might be able to build a composite estimate: time the smaller chunks of the application separately, and then use those to estimate how long you expect the application as a whole to run. For most situations, though, it is much faster to just run the program than to try to predict it.
Store the time before and after the execution? Then you would know the execution time.
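In MATLAB terms (to match the rest of this page), that before-and-after measurement is tic/toc; the sketch below also folds in the composite estimate suggested in the previous answer, with pause calls and the repeat count standing in for the real application stages:

    % Illustration only: time each stage once, then combine into a rough prediction.
    tA = tic; pause(0.2); timeA = toc(tA);   % measure stage A (placeholder work)
    tB = tic; pause(0.1); timeB = toc(tB);   % measure stage B (placeholder work)
    nRepeats = 10;                           % assumed: the full job runs each stage 10 times
    predictedTotal = nRepeats * (timeA + timeB);
    fprintf('predicted total runtime: %.2f s\n', predictedTotal);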

Synchronise Real-Time Workshop in MATLAB for the grt target

I am trying to run a real-time simulation in Simulink using Real-Time Workshop. The target is grt (I have tried rtwin, but my simulation refuses to compile for it). I need the simulation to run in real time, so that one second in simulation lasts one second of real time. The grt target ignores real time and finishes the simulation in the shortest time possible. Is there any way to synchronise it?
I have tried http://www.mathworks.com/matlabcentral/fileexchange/3175 but could not get it to work (it does not compile).
Thank you for any suggestions.
Looks like it is impossible. I was able to slow down the execution by using the Sleep(time in ms) function from the WinAPI and the clock function from time.h, which looked quite good at low sample rates. However, when I increased the sample rate the Sleep function slept for too long, which resulted in errors, with one second in simulation lasting more than one real-world second.
The idea was that one iteration period should last, say, 200 ms. Time how long one iteration of the code takes to execute using the clock function, then call Sleep(200 - u), where u is the length of the iteration. The problem is that the Sleep function suspends the process and wakes it up when it wants to, not exactly when the argument tells it to.
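The same pacing idea, sketched in MATLAB with pause rather than the WinAPI Sleep used in the generated C code; pause suffers from the same problem, in that it may wake up later than requested:

    % Soft real-time pacing: measure one iteration, then sleep away the rest of the period.
    period = 0.2;                   % target iteration period: 200 ms
    for k = 1:100                   % stand-in for the simulation loop
        tIter = tic;
        pause(0.05);                % placeholder for one iteration of real work
        u = toc(tIter);             % how long the work actually took
        pause(max(period - u, 0));  % sleep the remainder of the period, if any
    end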
I know this is not a solution, but I post it so that anyone who faces the same problem won't try this dead-end approach. I had to rewrite the simulation for rtwin and now it works fine.
Another idea would be to somehow use interrupts, but I guess it would be quite complicated and not worth the trouble.

What's the best way to measure and track performance over various calls at runtime?

I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers, or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime?
Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time?
Many thanks.
Edit: Maybe this is better asked by saying: how do I use the Xcode debug tools to do a stack trace?
You want to use the built-in performance tools called Instruments; check out Apple's guide to Instruments. Specifically, you probably want the System Instruments. There's also the Tuning Guide, which could be useful to you, and Shark.
"Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time?"
Short version of previous answer.
Learn an IDE or debugger. Make sure it has a "pause" button, or that you can interrupt it while your program is running and taking too long.
If your code runs too quickly to be manually paused, wrap a temporary loop of 10 to 1000 times around it.
When you pause it, make a copy of the call stack, into some text editor. Repeat several times.
Your answer will be in those stacks. If the CPU is spending most of its time in a statement, that statement will be at the bottom of most of the stack samples. If there is some function call that causes most of the time to be used, that function call will be on most of the stacks. It doesn't matter if it's recursive - that just means it shows up more than once on a stack.
Don't think about measuring microseconds, or counting calls. Think about "percent of time active". That's what stack samples tell you, and that's roughly what you'll save if you fix it.
It's that simple.
BTW, when you fix that problem, you will get a speedup factor. Then, other issues in your code will be magnified by that factor, so they will be easier to find. This way, you can keep going until you've squeezed every cycle out of it.
The first thing I tell people is to recognize the difference between
1) timing routines and counting how many times they are called, and
2) finding code that you can fruitfully optimize.
For (1) there are instrumenting profilers.
To be really successful at (2) you need a rare type of profiler.
You need a sampling profiler that
samples the entire call stack, not just the program counter
samples at random wall clock times, not just CPU, so as to capture possible I/O problems
samples when you want it to (not when waiting for user input)
for output, gives you, for each line of code that appears on stack samples, the percent of samples containing that line. That is a direct measure of the total time that could be saved if that line were not there.
(I actually do it by hand, interrupting the program under the debugger.)
Don't get sidetracked by problems you don't have, such as
accuracy of measurement. If a line of code appears on 30% of call stack samples, its actual cost could be anywhere in a range around 30%. If you can find a way to eliminate it or invoke it a lot less, you will save what it costs, even if you don't know in advance exactly what that cost is.
efficiency of sampling. Since you don't need accuracy of time measurement, you don't need a large number of samples. Even if you get a large number of samples, they don't skew the results significantly, because they don't fail to spot the costly lines of code.
call graphs. They make nice graphics, but are not what you need to know. An arc on a call graph corresponds to a line of code in the best case, usually multiple lines, so knowing cost of an arc only tells the cost of a line in the best case. Call graphs concentrate on functions, when what you need to find is lines of code. Call graphs get wrapped up in the issue of recursion, which is irrelevant.
It's important to understand what to expect. Many programmers, using traditional profilers, can get a 20% improvement, consider that terrific, count the profiler a winner, and stop there. Others, working with large programs, can often get speedup factors of 20 times.
This is done by fixing a series of problems, each one giving a multiplicative speedup factor. As soon as the profiler fails to find the next problem, the process stops. That's why "good enough" isn't good enough.
Here is a brief explanation of the method.