A bug in the Simulink serial acquisition block in MATLAB

I just wanted to share how I solved a bug in Simulink (in MATLAB R2010a; the same pause instruction that causes the problem is present in MATLAB R2014a as well).
When I read serial input via the Simulink serial acquisition block and the input data rate is moderately fast (more than about 100 samples/sec), the first 3 seconds or so of data are good; after those few seconds, very strange noise appears.
Digging into the source code of this serial acquisition block, I saw that it uses a delay instruction, pause(0.001), apparently intended to delay execution by 1 ms after each sample acquisition.

Answering my own question:
An article I found on MSDN titled "Windows Time" states: "GetTickCount and GetTickCount64 are limited to the resolution of the system timer, which is approximately 10 milliseconds to 16 milliseconds." [1]. That means the delay effectively limits the maximum number of samples per second this block can read.
I deleted this pause line in the Simulink serial acquisition block (line 331 of the 'Serial Receive' block script, 'sserialrb.m', in MATLAB R2010a) and everything worked well.
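The effect of the coarse system timer is easy to see with a small sketch (illustrative only; the measured value depends on the machine and OS):

```matlab
% Measure the real duration of pause(0.001) calls.
% On Windows, each 1 ms request is typically rounded up to the
% system timer tick (~10-16 ms), which caps the achievable sample rate.
N = 100;
t0 = tic;
for k = 1:N
    pause(0.001);            % requested: 1 ms, as in sserialrb.m
end
fprintf('Mean delay per pause(0.001): %.1f ms\n', 1000*toc(t0)/N);
```

If the mean delay comes out near 15 ms rather than 1 ms, the block can only fetch roughly 60-70 samples per second, which matches the corruption seen above 100 samples/sec.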
Hope this will help somebody!
[1] MSDN, "Windows Time," http://msdn.microsoft.com/en-us/library/windows/desktop/ms725496%28v=vs.85%29.aspx , 2012.

Related

export to workspace problems using rtwin.tlc

I'm using a Sensoray 626 card with Simulink Real-Time (RTWin). The problem is that when I try to plot a graph using the Scope block in real time, no more than 800 points are plotted. In other words, the scope seems to update the graph by deleting the old points and starting a new frame from zero, again and again.
I tried exporting the data from Simulink to the workspace in order to plot it after the real-time simulation finishes, but unfortunately the same problem occurred: I got no more than 800 points in the workspace (in some attempts, fewer than 200).
The weird thing is that the problem doesn't occur with the same MATLAB version on the same PC when using a DAS-1002 card instead; both the Scope and To Workspace blocks work well.
I'm using MATLAB 2009 on Windows XP.
I would have used the DAS-1002 card, but it doesn't have an encoder input.
PS: the solver configuration was properly set, and the necessary libraries were loaded.
Any help that can solve this problem would be appreciated.
Thanks in advance.
(Screenshots attached: solver configuration, solver, scope properties, and a simple Simulink example.)
The Scope can only display a number of samples equal to the external mode buffer length. So please go to Tools -> External Mode Control Panel -> Signal & Triggering and check the Duration parameter there. I'd bet it is 1000, so 1000 samples at a 0.001 s sampling rate gives the 1 second of data you get. If you want more, try increasing this number.

MATLAB script node in LabVIEW with different timing

I have a DAQ for temperature measurement. I sample continuously and, during acquisition, calculate the temperature difference per minute (cooling rate, CR). The CR and temperature values are fed into a MATLAB script that runs a physical model (predicting the temperature drop for the next 30 s). Then I record and compare the predicted and experimental values in LabVIEW.
What I am trying to do is have the MATLAB model execute every 30 s and send out its predictions as outputs from the MATLAB script node. One of these outputs is used to change the air blower motor speed until the next MATLAB run (which in turn affects the temperature drop over the next 30 s, so the whole thing becomes a closed loop). After 30 s, while the main process is still running, the CR and temperature values are sent to the MATLAB model again, and so on.
I have a case structure around this MATLAB script node, and inside the case structure I applied an Elapsed Time function to control the timing of the MATLAB script, but this is not working.
Yes. Short answer: I believe (one of) the reasons the program behaves weirdly when the timing changes is that there are several race conditions in the code.
The part of the diagram you posted shows several big problems:
Local variables lead to race conditions; use dataflow instead. E.g., you are writing to the Tinitial local variable and reading from the Tinitial local variable in a chunk of code with no data dependencies between the two, so it is not defined whether the read or the write happens first. This may not manifest itself badly with small delays, while with big delays it may be an issue. Solution: rewrite your program along the lines of the following example:
From Bad:
To Good:
(never mind the broken wires)
The MATLAB script node executes in the main UI execution system. If it executes for a long time, it may freeze indicators/controls as well as the execution of other pieces of code. Change the execution system of the other VIs in your program (say, to "other 1") and see whether the situation improves.

Dymola/Modelica real-time simulation advances too fast

I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation is advancing about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried increasing the grid interval to reduce the relative error, but the simulation still advances too fast. I have only read about approaches that reduce model complexity so that the simulation can keep up with the defined time steps.
Note that the simulation does keep up with real time and is even faster than it. How can I, in this case, match simulated time to real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked on the Realtime tab. I have the real-time simulation license option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
However, I did not use inline integration.
Do I need hard real time or inline integration to improve these results? It should be possible to get a deviation lower than 4.5% using soft real time, shouldn't it?
Edit 2:
I took the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's progress. The result shows that 36 hours after simulation start, the simulation slows down slightly (compared to real time). About 72 hours after the start, it becomes about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
Next steps will be:
- changing to a fixed-step solver (this might well be a big part of the solution)
- changing from the DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation appears equal to the deviation with a variable-step solver. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of these huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set it to 1/0.95 (about 1.053) to compensate for the roughly 5% the simulation runs fast.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will look for it again later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling a Dymola model with an embedded OPC server requires administrator rights (which I did not have before), and the active folder of Dymola must not be write-protected.

How to fully use the CPU in MATLAB [improving performance of a repetitive, time-consuming program]

I'm working on an adaptive, fully automatic segmentation algorithm for varying light conditions. The core of the algorithm uses particle swarm optimization (PSO) to tune the fuzzy system, and believe me, it's very time-consuming: for only 5 particles and 100 iterations I have to wait 2 to 3 hours, and that's just for processing one image from my data set of over 100 photos!
I'm using MATLAB R2013 with an Intel Core i7-2670QM @ 2.2 GHz, 8.00 GB RAM, and a 64-bit operating system.
The problem is that when I start the program it uses only 12%-16% of my CPU, and only one core is working!
I searched a lot and came across matlabpool, so I added this line to my code:
matlabpool open 8
Now when I start the program, Task Manager shows 98% CPU usage, but only for a few seconds! After that it drops back to 12-13% CPU usage.
Do you have any idea how I can make this code run faster?
12 percent sounds like MATLAB is using only one thread/core, and that one at full load, which is normal.
matlabpool open 8 is not enough; this simply opens the workers. You have to use commands like parfor to assign work to them.
Further to Daniel's suggestion, to apply parfor you would ideally find a time-consuming for-loop in your algorithm whose iterations are independent and convert that to parfor. Generally, parfor works best when applied at the outermost level possible. It's also definitely worth using the MATLAB profiler to help you optimise your serial code before you start adding parallelism.
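As a minimal sketch of that advice (assuming the Parallel Computing Toolbox is installed; the svd workload is a stand-in for one fitness/segmentation evaluation, not the asker's actual code):

```matlab
matlabpool open 8              % R2013-era syntax; later releases use parpool(8)
n = 100;                       % e.g. images, or particles within one PSO iteration
cost = zeros(1, n);
parfor k = 1:n                 % iterations must be independent of each other
    % stand-in workload; replace with one segmentation/fitness evaluation
    cost(k) = sum(svd(rand(200)));
end
matlabpool close
```

The loop you parallelise has to be one whose iterations do not feed each other; in PSO that usually means the per-particle fitness evaluations within one iteration, not the outer iteration loop itself.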
With my own simulations I find that I cannot recode them using parfor; the for-loops I have are too intertwined to take advantage of multiple cores.
HOWEVER:
You can open a second (and third, and fourth, etc.) instance of MATLAB and tell each additional instance to run another job. Each open instance of MATLAB will use a different core, so if you have a quad-core you can have 4 instances open and use all 4 cores by running code in each of them.
So I gained efficiency by having multiple instances of MATLAB open at the same time, each running a job. My jobs took 8 to 27 hours at a time and, as one might imagine, without liquid cooling I burnt out my CPU fan and had to replace it.
Also do look into optimizing your MATLAB code; I recently optimized mine and now it runs 40% faster.

LabVIEW Real-Time Timed Loop resolution

We are using LabVIEW Real-Time with the PXI-8110 controller.
I am facing the following problem:
I have a loop with a 500 µs period (timed loop) and no other task. Each iteration I write the time into RAM, and I save the data afterwards.
It is necessary that the period be exact, but I see that it is 500 µs +/- 25 µs.
The clock for the timed loop is 1 MHz.
How is it possible to get 500 µs - 25 µs? I would understand getting 500 µs + xx µs if my computation were too heavy, but so far I just do an addition, nothing more.
So does anyone have a clue what is going wrong?
I thought it would be possible to get a resolution of 1 µs, as NI advertises (if the computation isn't too heavy).
Thanks.
You may need to check which thread the code is working in. An easier way is to use the Timed Loop, as it will try to correct for overruns. Also, pre-allocate the array you are storing the data into and then use Replace Array Subset with each new value. You should see a massive improvement this way.
If you display that value while running in development mode, you will see +/- jitter in the timing because everything is reported back to the host. Build the executable and the jitter will shrink.