How compatible are Matlab and .EXE files? Is it possible for win32 APIs to send messages to Matlab, and for Matlab to read them in real time?
More precisely, can I make Matlab receive and handle messages from another application in real time?
When handling such messages, I also have a concern about the type of loop I would have to use in Matlab. Is an infinite for/while loop good practice?
For example:
while true
    if message_received
        % do something
    end
end
Note: the above is only an outline of the algorithm, not intended as real code.
The first part of your question: the Matlab Engine seems to be what you are after.
The second part of your question: in many coding standards, it's often recommended to avoid infinite loops. The problem with infinite loops is, well, that they may never end :) It's all too easy to code the exit condition(s) incorrectly or incompletely, causing your loop to never end and the program's execution to stall. This sort of bug can pop up in unit testing (an often-failing exit condition), or only after the first batch of your customers start complaining about your program hanging (a not-so-often-failing exit condition). These (and many other) pitfalls of infinite loops are often avoidable by
translating the infinite loop into a finite one
setting a maximum on the number of iterations
using a different paradigm altogether.
With IPC, where part of the program listens for messages from other parts of the program or from other programs altogether, the last option is best. Using an event-based approach avoids an infinite loop entirely. MATLAB supports this in the form of events and listeners. This is part of MATLAB's OOP functionality, so you'd have to be using OOP already, or convert everything you have to OOP, in order to use it.
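A minimal sketch of what that can look like (the class and event names here are illustrative, not from the question; a classdef normally lives in its own file):

classdef MessageSource < handle
    events
        MessageReceived   % fired whenever a new message arrives
    end
    methods
        function receive(obj)
            % in a real application this would be triggered by the
            % external message, e.g. one delivered via the Matlab Engine
            notify(obj, 'MessageReceived');
        end
    end
end

% usage: react to messages without any polling loop
src = MessageSource();
addlistener(src, 'MessageReceived', @(~,~) disp('got a message'));
src.receive();   % fires the event and runs the callback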
I am using Matlab to automatically parameterize and launch a finite element method (FEM) code. I write a parameter text file that the FEM code will read, and then call the FEM code with:
[status,cmdout]=system(['FEMApp ' current_folder '\MyFile']);
Sometimes, the FEM app will be unable to complete its task and will print an error message in the command window. Until now, I was able to detect the error message in cmdout and proceed to the next parameter set.
For an unknown reason, the system command started to behave differently: it gets stuck seemingly forever (Matlab stays in "busy" mode). Did I change anything without realizing it?
For now, I am using the following solution :
[status,cmdout]=system(['FEMApp ' current_folder '\MyFile &']);
pause(45)
system('taskkill /F /IM FEMProcessus.exe');
It works correctly, but it slows my computation down a lot (~5x), because Matlab always waits 45 seconds even when the task completes in much less time.
Can anyone explain the change in behaviour of Matlab ?
Does anyone have a cleverer workaround than mine?
It should be noted that Matlab is an interpreter rather than a compiler. That means it performs a lot of internal operations, hidden from the developer, some of which may require a lot of CPU resources. Finite element applications are numerically intensive, demanding in both CPU and RAM. It might not be a good idea to use Matlab for FEM programming; try a numerically oriented language such as C or Fortran, where you will have full control over memory allocation and arithmetic operations.
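As a side note on the timeout workaround itself: instead of a fixed 45-second pause, the script could poll for the process and kill it only if it overruns. A rough sketch, assuming the process image is actually named FEMProcessus.exe (adjust to the real name):

system(['FEMApp ' current_folder '\MyFile &']);    % launch asynchronously
maxWait = 45;                                      % seconds before giving up
for t = 1:maxWait
    [~, tasks] = system('tasklist /FI "IMAGENAME eq FEMProcessus.exe"');
    if ~contains(tasks, 'FEMProcessus.exe')
        break;                                     % the FEM run finished early
    end
    pause(1);                                      % wait and poll again
end
system('taskkill /F /IM FEMProcessus.exe');        % kill it only if still running

This way a run that finishes in 5 seconds costs about 5 seconds, not 45.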
I am working on the development of an Iterative Learning Controller for a simple transfer function.
The iterations are controlled by the external matlab loop.
But the error e(k) (where k is the trial number) does not update as the number of trials increases.
Please help me find the error I've committed.
Thanks and Regards.
You might have solved the problem. But as the question is still open, I would like to add something here.
First of all, you might want to check the usage of the Memory block: "The Memory block holds and delays its input by one major integration time step." The reason the error wasn't updating is that the output your plant produced was the same in each iteration (of your externally defined loop): the Memory block only delays your U(k) by one step, not by a whole iteration.
You might want to store the error of each iteration to the workspace, and use it for the next iteration.
The memory should be a vector with the length of a single iteration, not just a single value. The Delay block can store multiple past samples.
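A minimal sketch of that idea in plain Matlab (the plant, the learning gain, and all sizes are assumptions, just to show the shape of the outer loop; tf, c2d, and lsim are Control System Toolbox functions):

N      = 100;                       % samples per trial (assumption)
nTrial = 20;                        % number of learning trials
L      = 0.5;                       % learning gain (assumption)
yd     = ones(N, 1);                % desired trajectory, here a step
u      = zeros(N, 1);               % input signal for the current trial
Pd     = c2d(tf(1, [1 1]), 0.01);   % example plant, 10 ms sample time
for k = 1:nTrial
    y = lsim(Pd, u);                % simulate one whole trial
    e = yd - y;                     % keep the FULL error vector of trial k
    u = u + L * e;                  % ILC update: u_{k+1} = u_k + L*e_k
end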
This guy probably did what you were looking for: https://github.com/arthurrichards77/iterative-learning-control
I have an application that takes voltages and temperatures as analog inputs and does some processing using an algorithm which involves signal processing such as low-pass filtering, exponential smoothing, and other steps which might typically be done in a high-level programming language such as C or C++.
I'm curious how I could perform these same steps using a PLC, and in particular, the Allen-Bradley ControlLogix system. It seems to me that the ladder logic instruction set is too limited for this. Could I perform this using structured text?
Ladder logic can do the computation just fine, although it isn't the nicest programming language in the world. It has a full complement of conditionals, arithmetic, arrays, etc.
Your real problem is fitting your computation into the cyclic execution model that most ladder logic engines (including ControlLogix) run: a repeated execution of the program in the controller from top to bottom, with each rung or computation being executed just once per scan.
If you need to loop over a set of values repeatedly before producing a result, you will likely have difficulty reconciling the ladder engine's desire to execute everything "just once" per scan with your need to execute a loop to produce a result. In fact, I believe there are FOR-loop instructions that can repeat a block of ladder code just as a conventional loop would; you need to ensure that the amount of time spent in your loops/algorithm doesn't materially affect the scan rate.
What may work well is to let the scan rate act as one of your loops; typically you compute a filter by accepting a new value into an array and then computing a result over that array. Since you basically can't accept values faster than one per scan cycle anyway, you can compute at most one filter result per scan cycle without losing any precision. If your array is of modest size (e.g., 10 values), you can in effect simply code a polynomial over the array as an equation to produce your filter result, and then code that polynomial (clunkily but straightforwardly) as ladder logic.
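For concreteness, here is the arithmetic of that one-result-per-scan filter, sketched in Matlab (the coefficients and the input-read function are made-up placeholders; on the PLC, each line would become a rung or an assignment):

b   = [0.1 0.15 0.25 0.25 0.15 0.1];    % example low-pass coefficients
buf = zeros(1, numel(b));               % the last N samples, newest first

% one "scan cycle": shift in the newest sample, evaluate the polynomial
newSample = read_analog_input();        % hypothetical input read
buf = [newSample, buf(1:end-1)];        % software shift register
y   = sum(b .* buf);                    % filter output for this scan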
ControlLogix PLCs do not have to execute on a cyclic sweep. I don't have RSLogix 5000 in front of me right now, but when defining the project, you are required to create one program that executes on a cyclic sweep. But you can create other programs that do not: you can run them off a trigger (not useful for regular input scanning) or off a fixed timer (very useful for input scanning). Keep in mind that there is no point in setting the input scan timer faster than your instrumentation updates; modern PLCs can frequently execute a scan much faster than a meter can update the data.
One good technique I have used for this is to create a program called one_second or something similar. This program scans all your inputs, performs all your signal processing, and then writes to buffered memory locations. The rest of your program looks only at those buffered memory locations and never monitors the inputs directly. You can set the input-buffering program to execute as fast as needed for your process, up to whatever the PLC can handle before it faults.
It would also be a good idea to write your signal processing functions themselves as Add-On Instructions (AOIs), and then call them with whatever parameters you need.
So you could have an AOI with a call interface like this:
input_1_buffered := input_smooth(low_pass, input_1);
This would call your input_smooth function, using input_1 as the value and input_1_buffered as the final location. low_pass would be used within the input_smooth function to jump to the appropriate logic.
Then you can write your actual smoothing logic in structured text, without anyone needing to understand it, because it will only exist in that one AOI.
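As an illustration, the body of such an AOI might compute an exponential smoother like the one below, shown here as Matlab-style pseudocode rather than actual structured text (alpha is an assumed tuning value; y_prev would be a persisted AOI tag):

y_prev = 0;                                  % persisted tag (initial value)
alpha  = 0.2;                                % smoothing factor, 0 < alpha <= 1
x      = read_analog_input();                % hypothetical raw sample
y      = alpha * x + (1 - alpha) * y_prev;   % smoothed output for this scan
y_prev = y;                                  % persist for the next scan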
Is there a way to end a matlab process that is taking too long to run?
Ctrl+Alt+Delete is all I know right now, and that shuts down the program entirely.
It's Ctrl+C.
Apparently it's inconsistent at times:
http://www.mathworks.com/support/solutions/en/data/1-188VX/
Ctrl+C is the answer. It will break in. However, there are cases where it may still take a while to interrupt. For example, if the process is trying to solve a huge linear system of equations or allocate a huge block of virtual memory, Matlab will not see the interrupt until the solver returns control to Matlab itself. So it may take a while before the break happens. If this is just a long-running iterative process, the break will happen quickly.
I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime?
Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time?
Many thanks.
Edit: Maybe this is better asked by saying: how do I use the Xcode debug tools to do a stack trace?
You want to use the built-in performance tools called Instruments; check out Apple's guide to Instruments. Specifically, you probably want the System Instruments. There's also the Tuning Guide, which could be useful to you, and Shark.
Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time?
Short version of previous answer.
Learn an IDE or debugger. Make sure it has a "pause" button, or some other way to interrupt the program while it is running and taking too long.
If your code runs too quickly to be manually paused, wrap a temporary loop of 10 to 1000 iterations around it.
When you pause it, copy the call stack into a text editor. Repeat several times.
Your answer will be in those stacks. If the CPU is spending most of its time in a statement, that statement will be at the bottom of most of the stack samples. If there is some function call that causes most of the time to be used, that function call will be on most of the stacks. It doesn't matter if it's recursive - that just means it shows up more than once on a stack.
Don't think about measuring microseconds, or counting calls. Think about "percent of time active". That's what stack samples tell you, and that's roughly what you'll save if you fix it.
It's that simple.
BTW, when you fix that problem, you will get a speedup factor. Then, other issues in your code will be magnified by that factor, so they will be easier to find. This way, you can keep going until you've squeezed every cycle out of it.
The first thing I tell people is to recognize the difference between
1) timing routines and counting how many times they are called, and
2) finding code that you can fruitfully optimize.
For (1) there are instrumenting profilers.
To be really successful at (2) you need a rare type of profiler.
You need a sampling profiler that
samples the entire call stack, not just the program counter
samples at random wall-clock times, not just CPU time, so as to capture possible I/O problems
samples when you want it to (not when waiting for user input)
for output, gives you, for each line of code that appears on stack samples, the percent of samples containing that line. That is a direct measure of the total time that could be saved if that line were not there.
(I actually do it by hand, interrupting the program under the debugger.)
Don't get sidetracked by problems you don't have, such as
accuracy of measurement. If a line of code appears on 30% of call stack samples, its actual cost could be anywhere in a range around 30%. If you can find a way to eliminate it or invoke it a lot less, you will save what it costs, even if you don't know in advance exactly what that cost is.
efficiency of sampling. Since you don't need accuracy of time measurement, you don't need a large number of samples. Even a small number of samples won't skew the results significantly, because they don't fail to spot the costly lines of code.
call graphs. They make nice graphics, but are not what you need to know. An arc on a call graph corresponds to a line of code in the best case, usually multiple lines, so knowing cost of an arc only tells the cost of a line in the best case. Call graphs concentrate on functions, when what you need to find is lines of code. Call graphs get wrapped up in the issue of recursion, which is irrelevant.
It's important to understand what to expect. Many programmers, using traditional profilers, can get a 20% improvement, consider that terrific, count the profiler a winner, and stop there. Others, working with large programs, can often get speedup factors of 20 times.
This is done by fixing a series of problems, each one giving a multiplicative speedup factor. As soon as the profiler fails to find the next problem, the process stops. That's why "good enough" isn't good enough.
Here is a brief explanation of the method.