Running a filter at a high speed - filtering

I'm writing signal-processing software in CVI.
I've got a signal that is transmitted to the computer via USB at a very high rate (~50K samples per second).
I want to filter it in real time (RT).
To do that, I created a filter in Simulink and generated C code from it, which I run in CVI using:
FuncName_initialize()
FuncName.in
FuncName_step()
FuncName.Out
The problem is that after a while (about 5-7 minutes) the filter starts producing wrong output, i.e. inaccurate results and artifacts. I believe this is a consequence of running it too fast, because at lower rates it worked fine.
Any suggestions on what might be the problem? And how can I implement a real-time filter directly in CVI, meaning one that takes one point at the input and produces one point at the output while maintaining some window (a sketch follows below)?
I know the data is transmitted correctly at this speed, since recording the signal works fine and displaying the raw data on screen works fine as well.
Thank you
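For reference, a per-sample filter in plain C (which is what CVI compiles) only needs a small amount of state that persists between calls. Below is a minimal sketch of one direct-form-II-transposed biquad section; the coefficients are placeholders that would come from a filter-design tool, not from the Simulink model mentioned above.

#include <stddef.h>

/* One second-order section (biquad), Direct Form II Transposed.
   The coefficients below are placeholders; real values would come
   from a filter-design tool. */
typedef struct {
    double b0, b1, b2;   /* numerator coefficients    */
    double a1, a2;       /* denominator (a0 == 1.0)   */
    double z1, z2;       /* filter state (delay line) */
} Biquad;

/* Process exactly one input sample and return one output sample.
   Call this once per point arriving from the USB stream. */
static double Biquad_Step(Biquad *f, double x)
{
    double y = f->b0 * x + f->z1;
    f->z1 = f->b1 * x - f->a1 * y + f->z2;
    f->z2 = f->b2 * x - f->a2 * y;
    return y;
}

/* Example usage: one persistent filter object, fed sample by sample. */
static Biquad lowpass = { 0.0675, 0.1349, 0.0675,   /* placeholder b */
                         -1.1430, 0.4128,           /* placeholder a */
                          0.0,    0.0 };

double ProcessOneSample(double newSample)
{
    return Biquad_Step(&lowpass, newSample);
}

Calling ProcessOneSample() once per incoming point keeps the "window" implicitly in the z1/z2 state, so no sample buffer is needed; a higher-order filter is just several such sections in series.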

Related

High latency when transmitting with HackRF One from GNU Radio

So I reverse-engineered an RC toy's wireless protocol and wrote a flow graph in GNU Radio Companion to reproduce the toy's commands.
The sink block is an osmocom sink, and the transmitting device is a HackRF One.
The problem I've encountered is severe lag between pressing a button and the toy reacting, about 1-2 seconds.
Quickly pressing, for example, the "forward" button twice results in the toy jerking forward twice in quick succession as well, about a second after the presses. So it's not that the flow graph itself is slow. The CPU usage is also fairly low. This looks like some buffering issue.
I'm pretty sure the flow graph itself reacts to button presses instantly, as debug prints appear at the same time the button is pressed/depressed, and square waves appear in the GUI scope sink instantly as well.
I tried lowering the number of HackRF buffers to one (by setting the device arguments to hackrf,buffers=1), but it didn't help.
I also set "Max Number of Outputs" to 10 for the flow graph, and that didn't make a difference either (I tried some other values, too). Given that the signal appears in the GUI scope much sooner than 1 second after a press, it probably couldn't have helped anyway.
How can I reduce the latency then?
EDIT: I followed @Manos's suggestion and tried adjusting the sample rate.
Adding a rational resampler block with an interpolation of 8 (and adjusting the sink sample rate accordingly) AND setting hackrf,buffers=1 makes the latency almost non-existent.
However, reducing the output sample rate of my custom block to 500k with 16x interpolation still causes noticeable lag, maybe 400-500 ms (still not as dramatic as when both source and sink run at 1M). I'm not sure how to fix that. Unfortunately, running my custom block at 1M consumes 100% CPU and causes occasional underflows.
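For a rough sense of where whole seconds of latency can come from: every sample queued in output buffers delays the signal by one sample period, so latency is roughly the number of buffered samples divided by the sink sample rate. The buffer numbers in the sketch below are illustrative assumptions, not gr-osmosdr or libhackrf defaults:

#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions only -- the real values depend on
       gr-osmosdr / libhackrf defaults and on GNU Radio's own
       inter-block buffers, which are not shown in the question. */
    double sample_rate     = 1e6;      /* samples per second at the sink */
    double samples_per_buf = 131072;   /* assumed samples per USB buffer */
    int    num_buffers     = 4;        /* assumed number of USB buffers  */

    /* Every sample queued ahead of "now" delays the signal by 1/fs. */
    double buffered_samples = samples_per_buf * num_buffers;
    double latency_s        = buffered_samples / sample_rate;

    printf("Worst-case buffering latency: %.3f s\n", latency_s);
    /* With these assumed numbers: ~0.524 s.  Fewer or smaller buffers,
       or a higher sink sample rate, shrink this figure. */
    return 0;
}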

How to synchronize readout of binary streams on a serial port in Matlab

I'm having an issue that is partially Matlab-related and partially general-programming-related; I'm hoping somebody can help me brainstorm solutions.
I have an external microcontroller that generates a large stream of binary data (~40 kB) every 400 ms and sends it via UART to a PC running Matlab scripts. The data is not encoded as hex or decimal characters but is raw binary (hence no terminator can be defined, since all 256 byte values are possible, valid data). The baud rate is 1024000. In short, it takes roughly 375 ms for a whole stream to be sent, with 25 ms of dead time between streams.
In Matlab, the serial port is configured to match (also 1024000 baud, 8 data bits, 1 stop bit, no parity, no hardware flow control, etc.). I am able to read out the data sent by the microcontroller correctly (i.e. there is no data corruption), but I am not able to synchronize the serial readout in Matlab. My script is as follows:
function DATA_show = GetDATA
    % Close any serial objects left open from a previous run
    if ~isempty(instrfind)
        fclose(instrfind);
    end

    DATA_TOTAL_SIZE = 38400;
    DATA_buffer = uint8(zeros(DATA_TOTAL_SIZE,1));
    DATA_show = reshape(DATA_buffer(1:2:end)',[160,120])';
    f_data_in = false;
    f_data_out = true;

    serialport = serial('COM11','BaudRate',1024000,'DataBits',8,'FlowControl','none','Parity','none','StopBits',1,...
        'BytesAvailableFcnCount',DATA_TOTAL_SIZE,'BytesAvailableFcnMode','byte','InputBufferSize',DATA_TOTAL_SIZE * 2,...
        'BytesAvailableFcn',@GetPortData);
    fopen(serialport);

    while (get(serialport,'BytesAvailable') ~= 0) % Skip first packet, which might be incomplete
        fread(serialport,DATA_TOTAL_SIZE,'uint8');
    end
    f_data_out = true;

    while (1)
        if (f_data_in)
            DATA_buffer = fread(serialport,DATA_TOTAL_SIZE,'uint8');
            DATA_show = reshape(DATA_buffer(1:2:end)',[160,120])'; % Reshape array as matrix
            DATAsc(DATA_show);
            disp('DATA');
        end
        pause(0.01);
    end

    fclose(serialport);
    delete(serialport);

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % BytesAvailable callback: just raises a flag for the main loop
    function GetPortData(obj,~)
        if f_data_out
            f_data_in = true;
        end
    end
end
The problem I see is that what I end up reading is always the correct size, but it belongs to multiple streams, because I haven't found a way to tell Matlab that those 25 ms with no data should be used for synchronization (i.e. data from before and after that blank period belongs to different streams).
Does anyone have any suggestions for this?
Thanks a lot!
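For what it's worth, the generic way to exploit that 25 ms of silence is timeout-based framing: treat any read timeout of a few milliseconds as a packet boundary and restart the packet buffer there. A minimal sketch in POSIX C, shown only to illustrate the idea (the device path is made up, and the termios port setup is omitted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

#define PACKET_SIZE 38400   /* one full stream from the microcontroller */
#define GAP_MS      10      /* anything quieter than this marks the gap */

int main(void)
{
    /* Port setup (baud rate, raw mode, etc.) via termios is omitted here. */
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char packet[PACKET_SIZE];
    size_t filled = 0;

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv = { 0, GAP_MS * 1000 };

        int ready = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (ready < 0) { perror("select"); break; }

        if (ready == 0) {
            /* Silence longer than GAP_MS: we are in the 25 ms dead time.
               Whatever was collected so far was a (possibly partial)
               packet, so start a fresh one at the next byte to arrive. */
            filled = 0;
            continue;
        }

        ssize_t n = read(fd, packet + filled, PACKET_SIZE - filled);
        if (n <= 0) break;
        filled += (size_t)n;

        if (filled == PACKET_SIZE) {
            /* One complete, correctly aligned packet: hand it off here. */
            printf("got a full packet\n");
            filled = 0;
        }
    }

    close(fd);
    return 0;
}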
For completeness, I would like to post the implementation I currently have for this issue. It is probably not a suitable solution in all cases, but it might be useful in some.
The approach I took consists of moving to a bi-directional communication protocol, in which Matlab initiates the streaming by sending a very short command as a trigger (e.g. a single, non-printable character). Given the high baud rate, this does not add significant delay due to processing on the microcontroller's side.
Upon receiving this trigger, the microcontroller transmits exactly one full packet (as opposed to continuously streaming packets at a 5 Hz rate). By having Matlab pick up a serial packet of the known length right after issuing the trigger, it is ensured that only one packet is received, without synchronization issues.
Then it becomes just a matter of wrapping the Matlab script in a routine driven by a 5 Hz timer tick, in which the sequence is repeated (send trigger, retrieve packet, do whatever processing is needed, and repeat).
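The microcontroller side of this trigger protocol is small; a sketch in C, where uart_read_byte(), uart_write() and build_packet() are hypothetical stand-ins for the real firmware routines:

#include <stdint.h>
#include <stddef.h>

#define PACKET_SIZE 38400
#define TRIGGER     0x01   /* the single non-printable trigger character */

/* Hypothetical firmware hooks -- replace with the real UART driver and
   data-acquisition code. */
extern int  uart_read_byte(uint8_t *out);           /* non-blocking, returns 1 if a byte arrived */
extern void uart_write(const uint8_t *buf, size_t n);
extern void build_packet(uint8_t *buf, size_t n);   /* fill one fresh packet */

static uint8_t packet[PACKET_SIZE];

void uart_poll_task(void)
{
    uint8_t c;
    if (uart_read_byte(&c) && c == TRIGGER) {
        /* Only ever send a packet that was built after the trigger, so a
           half-updated buffer can never go out on the wire. */
        build_packet(packet, PACKET_SIZE);
        uart_write(packet, PACKET_SIZE);
    }
}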
Advantages of this:
It solves the synchronization problems
Disadvantages of this:
Having Matlab run on a timer tick does not guarantee perfect periodicity, so the triggers might not always be sent at exactly 5 Hz. If triggers arrive at "inconvenient" times for the microcontroller, packets may have to be skipped in order to avoid a packet being updated in memory while it is still being transmitted (since transmission takes up a significant part of the 200 ms time slot).
From experience, performance can vary a lot depending on what else the PC running Matlab is doing. For example, it works fine when the PC is left alone to do the acquisition, but if another program is used (e.g. Chrome), Matlab starts to lag, and that results in delays in sending the triggers.
As mentioned above, it's not a complete answer, but it is an approach that might be sufficient in some situations. If someone has a more efficient option, please feel free to share!

Micromaster 440: ways to limit output frequency on the fly?

I need to control a conveyor (driven by a Micromaster 440) from a PC program using SFC14/15.
The scheme will be: supervisor's PC -> (Ethernet) -> S7-1200 -> (Profibus) -> Micromaster 440.
At the moment, the Micromaster's output frequency is controlled by the "field" operator via a potentiometer (analog input). The problem is that the operator sometimes increases the conveyor speed in order to do his job faster, and this affects production negatively. The supervisor wants to be able to limit the output frequency from the PC program.
Of course I've looked through the list of MM440 parameters and I know about P1082, but I've discovered that, unfortunately, the MM440 has to be stopped before a new value of P1082 takes effect. In my case it's preferable to be able to change the value on the fly.
Fortunately, it seems that P0757 - P0760 (input scaling) can be changed on the fly, but these parameters carry the "first confirm" flag, which means that the "P" button on the operator panel (BOP or AOP) must be pressed before the changes take effect.
But the MM440 has only one slot for the Profibus/BOP/AOP panel, and I'll be using Profibus. So what will the MM440's behaviour be in this case? I want to believe that, perhaps, this confirmation is not required when the Profibus module is used...
I would opt for a solution where the operator no longer controls the belt speed directly but instead tells the S7-1200 PLC at what speed he would like the belt to run (either by using two +/- buttons or a potentiometer). The PLC can then control the speed of the belt (either via an analog output or two digital (+/-) outputs) and enforce the supervisor's limit.
As an added bonus, you can stop the belt when it is accidentally left on, and things like that...
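The limiting logic itself is then trivial once it lives in the PLC. Expressed in C only to show the idea (real S7-1200 logic would be written in LAD/FBD/SCL, and every name below is made up for the example):

/* Illustration only -- all names and units are made up. */
typedef struct {
    double operator_request_hz;   /* what the field operator asks for      */
    double supervisor_limit_hz;   /* ceiling set from the supervisor's PC  */
} ConveyorCmd;

/* The setpoint actually sent to the drive never exceeds the supervisor's
   limit, no matter what the operator dials in. */
double conveyor_setpoint_hz(const ConveyorCmd *cmd)
{
    if (cmd->operator_request_hz > cmd->supervisor_limit_hz)
        return cmd->supervisor_limit_hz;
    return cmd->operator_request_hz;
}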

Dymola/Modelica real-time simulation advances too fast

I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation advances about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried increasing the grid interval to reduce the relative error, but the simulation still advances too fast. So far I have only read about approaches that reduce model complexity so that the simulation can be computed within the defined time steps.
Note that the simulation does keep up with real time and is in fact even faster. How can I match simulated time and real time in this case?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked on the Realtime tab. I have the real-time simulation licence option. I am using Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
I did not, however, use inline integration.
Do I need hard real time or inline integration to improve these results? It should be possible to get a deviation lower than 4.5% using soft real time, shouldn't it?
Edit 2:
I took the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's progress. The result shows that 36 hours after simulation start the simulation slows down slightly (compared to real time). About 72 hours after the start it becomes roughly 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
Next steps will be:
- changing to a fixed-step solver (this might well be a big part of the solution)
- changing from the DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however.
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation appears to be equal to the deviation with a variable-step solver. In this example I used the Rkfix3 solver with an integrator step size of 0.1.
Does nobody know how to get rid of these huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set it to 1/0.95 (≈ 1.053) to compensate for the roughly 5% drift.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will have another look for it later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola models with an embedded OPC server requires administrator rights (which I did not have before), and the active folder of Dymola must not be write-protected.

LabVIEW Real-Time Timed Loop resolution

We are using LabVIEW Real-Time with the PXI-8110 controller.
I am facing the following problem:
I have a timed loop with a 500 µs period and no other task. On each loop iteration I write the time into RAM and save the data afterwards.
It is necessary that the period be exact, but what I see is 500 µs +/- 25 µs.
The clock for the timed loop is 1 MHz.
How is it possible to get 500 µs - 25 µs? I would understand getting 500 µs + xx µs if my computation were too heavy, but so far I only do an addition, nothing more.
So does anyone have a clue what is going wrong?
I thought it would be possible to get a resolution of 1 µs, as NI advertises (provided the computation isn't too heavy).
Thanks.
You may need to check which thread the code is running in. An easier approach is to use the Timed Loop, as it will try to correct for overruns. Also, pre-allocate the array you are storing the data into and then use Replace Array Subset with each new value instead of growing the array. You should see a massive improvement that way.
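The pre-allocation advice is the part that matters most for jitter, and the idea is language-independent. In C terms (LabVIEW itself is graphical, so this is only an analogy), it amounts to allocating the whole log once and writing into it by index instead of growing it inside the loop:

#include <stdint.h>
#include <stdlib.h>

#define ITERATIONS 2000000   /* e.g. a 500 us loop logged for ~1000 s */

/* Pre-allocate the whole log outside the timed loop ... */
static uint64_t *log_buf = NULL;

int init_log(void)
{
    log_buf = malloc(ITERATIONS * sizeof *log_buf);
    return log_buf != NULL;
}

/* ... so the loop body only does an indexed store (the LabVIEW analogue
   is Replace Array Subset on a pre-sized array), never an allocation,
   which keeps the iteration time deterministic. */
void log_sample(size_t i, uint64_t timestamp)
{
    if (i < ITERATIONS)
        log_buf[i] = timestamp;
}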
If you display that value while running in development mode, you will see +/- jitter on the timing because everything is being reported back to the host. Build the executable and the jitter will shrink.