Force a Specific Camera Acquisition Framerate in a Real-Time MATLAB Loop - matlab

I have a function running in real time that controls hardware timing, etc. I want to be able to record video as well, so I've created a global video object in MATLAB and set the triggering to manual. Right now, each iteration of the real-time loop records a frame and writes it to disk. I'm changing it so that I'll record to memory until I hit some limit, and then write to disk. However, I'd like to guarantee that no matter how fast the real-time loop runs, I only capture 15 frames per second.
Thinking about it, though, how can I make sure the 15 frames aren't captured so dramatically close to each other? If the real-time loop is running lightning fast, the 15 frames will just be gathered at the "beginning" of each second and capture almost none of the change that occurs during that second. In other words, the faster the real-time loop is, the more my sampling will act like 1 frame per second (with 14 near-identical copies).
For example,
% Main File
function startRecording()   % renamed from start() so it does not shadow start(vid) below
    global vid;
    global myLogger;

    vid = videoinput('winvideo', 1, 'MJPG_160x120');
    src = getselectedsource(vid);
    triggerconfig(vid, 'manual');
    vid.FramesPerTrigger = 1;
    vid.LoggingMode = 'disk&memory';
    imaqmem(512000000);   % 512 MB acquisition memory limit
    myLogger = VideoWriter('C:\Users\myname\Desktop\output.avi', 'Motion JPEG AVI');
    myLogger.Quality = 50;
    myLogger.FrameRate = 15;
    vid.DiskLogger = myLogger;
    src.FrameRate = '15.0000';
    vid.ReturnedColorspace = 'grayscale';

    start(vid);
    open(myLogger);
    initiateFastLoop();
    close(myLogger);
    stop(vid);
end
The real-time piece:
function initiateFastLoop
    global vid;
    global myLogger;

    % 'flag' is assumed to be set and cleared elsewhere (e.g. by another global or callback)
    while flag
        % perform lightning fast stuff
        frame = getsnapshot(vid);
        writeVideo(myLogger, frame);
    end
end
The generated video ends up with a much higher effective frame rate. I don't want to capture a frame every single time the real-time loop runs, and I don't want to set a simple per-second cap because of the bunching problem described above. Any help would be great!
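For illustration, a time-based throttle around getsnapshot, which spaces captures by elapsed wall-clock time rather than by loop iteration, might look like the following sketch (the 1/15 s interval constant and the flag handling are illustrative, not part of the original code):

function initiateFastLoopThrottled()
    global vid;
    global myLogger;

    captureInterval = 1/15;   % target spacing between recorded frames, in seconds
    lastCapture = -inf;       % time of the last recorded frame
    loopTimer = tic;
    flag = true;              % illustrative stop condition
    while flag
        % perform lightning fast stuff

        % only record a frame if at least captureInterval seconds have passed
        if toc(loopTimer) - lastCapture >= captureInterval
            lastCapture = toc(loopTimer);
            frame = getsnapshot(vid);
            writeVideo(myLogger, frame);
        end
    end
end

This keeps the 15 frames spread across each second of wall-clock time no matter how fast the surrounding loop spins.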

Related

Matlab real time audio processing

I'm trying to record my microphone input and process it at the same time.
I tried a loop with this inside:
recordblocking(recorder, 1);
y = getaudiodata(recorder);
% any processing on y
But while I'm doing something with y, I'm losing information, since I'm not recording continuously.
Is there something I could do to continuously record sound coming in my microphone, store it in some kind of buffer, and process chunks of it at the same time?
A delay isn't a problem, but I really need the recording and processing done simultaneously.
Thanks in advance for any help.
I think you should use stream processing, like this:
% Visualization of audio spectrum frame by frame
Microphone = dsp.AudioRecorder;
Speaker = dsp.AudioPlayer;
SpecAnalyzer = dsp.SpectrumAnalyzer;
tic;
while toc < 30
    audio = step(Microphone);    % grab the next frame of microphone samples
    step(SpecAnalyzer, audio);   % display its spectrum
    step(Speaker, audio);        % play it back
end
You can find more information in the DSP System Toolbox documentation on stream processing.
You can try the block processing framework in LTFAT
http://ltfat.github.io/doc/demos/demo_blockproc_basicloop_code.html
Edit:
This is the main gist of the code:
% Basic control panel (Java object)
p = blockpanel({
    {'GdB','Gain',-20,20,0,21},...
    });
% Set up the block stream
fs = block('playrec','loadind',p);
% Set the buffer length to 30 ms
L = floor(30e-3*fs);
flag = 1;
% Loop until the end of the stream (flag) and while the panel is open
while flag && p.flag
    gain = blockpanelget(p,'GdB');
    gain = 10^(gain/20);
    % Read the block
    [f,flag] = blockread(L);
    % Play the block and do the processing
    blockplay(f*gain);
end
blockdone(p);
Note that it is possible to specify input and output devices and their channels by passing additional arguments to the block function. A list of available audio devices can be obtained by calling blockdevices.

Matlab: exactly timed getsnapshot for real-time event analyzing

I have a camera triggered by an external source at a constant rate of one frame per 0.14 s, and a MATLAB for-loop is used to take timed pictures for real-time measurements. However, the elapsed time for one execution of getsnapshot varies widely each time: sometimes I get a picture in less than 0.14 s, and sometimes it takes 0.5 s. Is there any way to synchronize getsnapshot with the external trigger, or at least make getsnapshot exactly timed?
The following is my code:
vid = videoinput('camera');
preview(vid);
for i = 1:100
    data = getsnapshot(vid);
    % ...data processing...
    % ....
    clear data
end
First, delete the preview(vid) line; that is probably why the repetition rate you are getting is erratic. When you take data you don't need the preview window open, as it takes resources from your CPU.
Then, you may need to set the camera properties in the imaq toolbox so that the camera runs in triggered mode. For example, for a GenTL camera this might look something like:
triggerconfig(vid, 'hardware', 'DeviceSpecific', 'DeviceSpecific');
src = getselectedsource(vid);
src.FrameStartTriggerMode = 'On';
src.FrameStartTriggerActivation = 'RisingEdge';
src.FrameStartTriggerDelayAbs = 0;
src.FrameStartTriggerSource = 'Line1';
src.FrameStartTriggerOverlap = 'Off';
Then, with some cameras you can read their trigger output; that is, whenever the camera is exposing, it sends a TTL pulse to some output. The MATLAB way to define it is something like:
src.SyncOut1SyncOutPolarity = 'Normal';
src.SyncOut1SyncOutSource = 'Exposing';
Again, you'll need to play with your camera's options in the imaq tool.
Also, the data processing step that you run afterwards may take some time, so benchmark it to check that you can take data and analyze it on the fly without bottlenecks; a rough sketch of such a benchmark is below.
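The following is only an illustrative timing sketch (the 50-iteration count is arbitrary and the processing body is a placeholder):

nTest = 50;
tAcq  = zeros(nTest,1);
tProc = zeros(nTest,1);
for k = 1:nTest
    t0 = tic;
    data = getsnapshot(vid);
    tAcq(k) = toc(t0);            % time spent acquiring the frame

    t0 = tic;
    % ... your data processing on 'data' goes here ...
    tProc(k) = toc(t0);           % time spent processing it
end
fprintf('acquisition: mean %.3f s, max %.3f s; processing: mean %.3f s\n', ...
    mean(tAcq), max(tAcq), mean(tProc));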
Last, you can use getdata instead of getsnapshot (read the documentation to see the difference), in the form: [img, time, metadata] = getdata(vid);
This will give you timestamps for each image taken, so you can see what's happening. Also, instead of clear data use flushdata(vid) to keep the vid object from completely filling the memory buffer (though if you only run 100 iterations in a loop, you should be fine).
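To see how evenly the triggered frames actually arrive, a small sketch along those lines might be (the 100-frame count mirrors the loop above; the TriggerRepeat value and the 60 s wait are assumptions):

vid.FramesPerTrigger = 1;
vid.TriggerRepeat = 99;           % expect 100 hardware triggers in total
start(vid);
wait(vid, 60);                    % wait (up to 60 s) for the acquisition to finish
[imgs, t] = getdata(vid, 100);    % t holds one relative timestamp (in seconds) per frame
flushdata(vid);
dt = diff(t);                     % inter-frame intervals; these should cluster around 0.14 s
fprintf('mean dt = %.4f s, max dt = %.4f s\n', mean(dt), max(dt));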

Can you synchronize the data acquisition toolbox and the image acquisition toolbox of Matlab?

I'd like to simultaneously get data from a camera (i.e. an image) and an analog voltage using MATLAB. For the camera I use the imaq toolbox; for reading the voltage I use the daq toolbox (reading an NI-USB device), with the following code:
clear all
% Prepare camera
vid = videoinput('gentl', 1, 'Mono8');
src = getselectedsource(vid);
vid.FramesPerTrigger = 1;
vid.TriggerRepeat = Inf;
triggerconfig(vid, 'hardware', 'DeviceSpecific', 'DeviceSpecific');
src.FrameStartTriggerMode = 'On';
src.FrameStartTriggerActivation = 'RisingEdge';
% prepare DAQ
s=daq.createSession('ni');
s.addAnalogInputChannel('Dev1','ai1','Voltage');
fid = fopen('log.txt','w');
lh = s.addlistener('DataAvailable',@(src,event)SaveData(fid,event));
s.IsContinuous = true;
% Take data
s.startBackground();
start(vid)
N=10;
for ii = 1:N
    im(:,:,ii) = getsnapshot(vid);
end
% end code
delete(lh );
fclose('all');
stop(vid)
delete(vid)
where the function SaveData is:
function SaveData(fid,event)
    time = event.TimeStamps;
    data = event.Data;
    % transpose so that fprintf writes one (time, data) pair per line
    fprintf(fid, '%f,%f\n', [time data].');
end
I do get images and a log.txt file with the daq trace (time and data), but how can I use the external triggering (that triggers the camera) or some other clock to synchronize the two?
For this example, the daq reads the camera-triggering TTL signal (~50 Hz), so I want to assign each TTL pulse to an image.
Addendum:
I've been searching and have found a few discussions on the subject, and read the examples on the MathWorks website, but haven't found an answer. The documentation shows how to Start a Multi-Trigger Acquisition on an External Event, but the acquisition discussed is only relevant for the DAQ-based input, not a camera-based input (and it also runs in the foreground).
This will not entirely solve your problem, but it might be good enough. Since the synchronization signal you are after is at 50 Hz, you can use clock to create timestamps for both types of data (camera image and analog voltage). Since the function clock takes practically no time (i.e. below 1e-7 s), you can try editing your SaveData function accordingly, for example:
fprintf(fid, '%.9f,%f,%f\n', [repmat(datenum(clock),numel(time),1) time data].'); % prepend an absolute datenum timestamp to every logged sample
And in the for loop add:
timestamp(ii,:) = clock; % one absolute timestamp per getsnapshot call (the loop variable is ii above)
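Building on that, a sketch of how the two streams could then be paired by absolute time (this assumes SaveData now writes the three-column format above; nearest-sample matching is only an approximation):

frameT = datenum(timestamp);               % N-by-1 absolute frame times, in days
daqLog = dlmread('log.txt', ',');          % columns: absolute time, relative time, voltage
daqT   = daqLog(:,1);
voltageAtFrame = zeros(size(frameT));
for ii = 1:numel(frameT)
    [~, k] = min(abs(daqT - frameT(ii)));  % nearest DAQ sample to frame ii
    voltageAtFrame(ii) = daqLog(k,3);
end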
Can you use the sync to trigger the AD board? From the USB-6009 manual...
Using PFI 0 as a Digital Trigger--
When an analog input task is defined, you can configure PFI 0 as a digital trigger input. When the digital trigger is enabled, the AI task waits for a rising or falling edge on PFI 0 before starting the acquisition. To use AI Start Trigger (ai/StartTrigger) with a digital source, specify PFI 0 as the source and select a rising or falling edge.
My experience suggests that the delay between the trigger and the acquisition is very short.
I'm sorry, I use Python or C for this, so I can't give you MATLAB code, but you want to look at functions like:
/* Select trigger source */
Select_Signal(deviceNumber, ND_IN_START_TRIGGER, ND_PFI_0, ND_HIGH_TO_LOW);
/* specify that a start trigger is to be used */
DAQ_Config(deviceNumber, startTrig, extConv); // set startTrig = 1
/* start the acquisition */
DAQ_Start(deviceNumber, …)
If you want to take this route you could get more ideas from:
http://www.ni.com/white-paper/4326/en
Hope this helps,
Carl
This is not yet a complete solution, but here are some thoughts that might be useful.
I do get images and a log.txt file with the daq trace (time and data), but how can I use the external triggering (that triggers the camera) or some other clock to synchronize the two?
Can you think of a way to calibrate your setup? I.e., modify your experiment to create a distinct event in both your image stream and voltage measurements, which can then be used for synchronization.

Continuous Video Recording in Matlab, Saving/Restarting On a Memory Cap, Ending on a Flag

I would like to record a continuous video in MATLAB until some other flag changes, while allowing MATLAB to continue performing other tasks during video acquisition (like deciding whether or not the flag should be set). Since these recordings could last upwards of 3 hours, I would perhaps close the recording every hour, write it to a file video_1, then record for another hour and dump that to video_2, and so on for as long as the flag isn't set. However, from what I've seen of MATLAB's Image Acquisition Toolbox, you have to specify some kind of number of frames to capture, or frames per trigger, etc. I'm not really sure how to proceed.
Some simple video-recording code I have is:
% create video obj
video = videoinput('winvideo',1);
% create writer obj
writerObj = VideoWriter('output.avi');
% set video properties
video.LoggingMode = 'disk';
video.DiskLogger = writerObj;
% start recording video
start(video);
% wait
wait(video, inf)
% save video
close(video.DiskLogger);
delete(video);
clear video;
However, the output video is only 0.3 seconds long. I've followed a tutorial for getting a 30-second recording down to a 3-second video, but I can't figure out how to make the recording go on continuously.
Any help would be appreciated!
aviObject = avifile('myVideo.avi');      % Create a new AVI file
for iFrame = 1:100                       % Capture 100 frames
    % ...
    % You would capture a single image I from your webcam here
    % ...
    F = im2frame(I);                     % Convert I to a movie frame
    aviObject = addframe(aviObject,F);   % Add the frame to the AVI file
end
aviObject = close(aviObject);            % Close the AVI file
source: How do I record video from a webcam in MATLAB?
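avifile/addframe is the older interface; with VideoWriter, the same frame-by-frame idea extended with the hourly rollover described in the question might look roughly like this sketch (the flag handling, segment length, and file naming are illustrative assumptions):

vid = videoinput('winvideo', 1);
segmentSeconds = 3600;                   % roll over to a new file every hour
segment = 1;
keepRecording = true;                    % illustrative flag, set/cleared elsewhere
while keepRecording
    writerObj = VideoWriter(sprintf('video_%d.avi', segment));
    open(writerObj);
    segTimer = tic;
    while keepRecording && toc(segTimer) < segmentSeconds
        frame = getsnapshot(vid);        % grab one frame manually
        writeVideo(writerObj, frame);
        % ... other tasks, including deciding whether keepRecording stays true ...
    end
    close(writerObj);                    % finish this hour's file
    segment = segment + 1;
end
delete(vid);

Because frames are grabbed with getsnapshot, no frame count or frames-per-trigger value has to be fixed in advance.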

Scope for improvement in this code

I have written the following code in MATLAB to process large images on the order of 3000x2500 pixels. Currently the operation takes more than half an hour to complete. Is there any scope to improve the code so that it takes less time? I have heard that parallel processing can make things faster, but I have no idea how to implement it. How do I do that, given the following code?
function dirvar(subfn)
    [fn,pn] = uigetfile({'*.TIF; *.tiff; *.tif; *.TIFF; *.jpg; *.bmp; *.JPG; *.png'}, ...
        'Select an image', '~/');
    I = double(imread(fullfile(pn,fn)));
    ld = input('Enter the lag distance = '); % prompt for lag distance
    fh = eval(['@' subfn]); % Function handles
    I2 = uint8(nlfilter(I, [7 7], fh));
    imshow(I2); % Texture Layer Image
    imwrite(I2,'result_mat.tif');

    % Zero Degree Variogram
    function [gamma] = ewvar(I)
        c = (size(I)+1)/2; % Finds the central pixel of moving window
        EW = I(c(1),c(2):end); % Determines the values from central pixel to margin of window
        h = length(EW) - ld; % Number of lags
        gamma = 1/(2 * h) * sum((EW(1:ld:end-1) - EW(2:ld:end)).^2);
    end
end
The input lag distance is usually 1.
You really need to use the profiler to find out where the time actually goes. My first guess (as I haven't run the profiler, which you should do, as already suggested) would be to use as few length/size operations as possible. Since you are processing the image with a [7 7] window, you can precalculate some parts so that you don't repeat these actions:
function dirvar(subfn)
    [fn,pn] = uigetfile({'*.TIF; *.tiff; *.tif; *.TIFF; *.jpg; *.bmp; *.JPG; *.png'}, ...
        'Select an image', '~/');
    I = double(imread(fullfile(pn,fn)));
    ld = input('Enter the lag distance = '); % prompt for lag distance
    fh = eval(['@' subfn]); % Function handles

    %% precalculations
    wind = [7 7];
    center = (wind+1)/2; % Finds the central pixel of moving window
    EWlength = (wind(2)+1)/2;
    h = EWlength - ld; % Number of lags

    %% calculations
    I2 = nlfilter(I, wind, fh);
    imshow(I2); % Texture Layer Image
    imwrite(I2,'result_mat.tif');

    % Zero Degree Variogram
    function [gamma] = ewvar(I)
        EW = I(center(1),center(2):end); % Determines the values from central pixel to margin of window
        gamma = 1/(2 * h) * sum((EW(1:ld:end-1) - EW(2:ld:end)).^2);
    end
end
Note that by doing so, you trade clarity for performance and add coupling (between the function dirvar and the nested function ewvar). However, since I haven't profiled your code, you should do that yourself, using your own inputs, to find out which lines of your code consume the most time.
For batch processing, I would also recommend leaving out the input, imshow, imwrite and uigetfile calls. Those are commands that you typically call from a higher-level function/script, and they will force you to enter these inputs even when you want them to stay the same. So instead, make each of the variables they produce (or consume) a parameter (or return value) of your function. That way, you could leave MATLAB running over the weekend to process everything (without having to enter all those values manually), even if you are unable to speed up the code.
A few general purpose tricks:
1 - use the MATLAB profiler to determine all the computational bottlenecks
2 - parallel processing can make things faster and there are a lot of tools that you can use, but it depends on how your entire code is set up and whether the code is optimized for it. By far the easiest trick to learn is parfor, where you replace the top-level for loop with parfor. This does mean you must open the MATLAB pool with matlabpool open.
3 - If you have a rather recent Nvidia GPU as well as MATLAB 2011, you can also write some CUDA code.
All in all, 30 minutes is peanuts to me, so don't fret about it too much.
First of all, I strongly suggest you follow the advice by @Egon: Write a separate function that collects a list of files (the excellent UIPICKFILES from the FEX is your friend here), and then runs your filtering code in a loop for each image. Note that you should definitely keep the call to imwrite in your filtering code: In case the analysis crashes at image 48 (e.g. due to power failure), you don't want to lose all the previous work.
Running in batch mode like this has two big advantages: (1) you can start your code and go home for the weekend, and (2) you can easily parallelize this outer loop using PARFOR (see the sketch below). However, with only a dual-core machine, it is unlikely that you will get any significant improvement from parallelization: your OS also wants to run stuff at times, and the overhead of parallelization might be more than the gain from running two workers. Also, 2.5 GB of RAM is seriously limiting.
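As an illustration only, the batch-plus-PARFOR idea might be sketched like this (the file list, the output naming, and a standalone ewvar that takes ld as an argument are all assumptions):

files = {'img001.tif', 'img002.tif', 'img003.tif'};   % e.g. collected with uipickfiles
ld = 1;                                               % lag distance
matlabpool open                                       % older pool syntax, as mentioned above
parfor k = 1:numel(files)
    I  = double(imread(files{k}));
    I2 = uint8(nlfilter(I, [7 7], @(w) ewvar(w, ld)));   % ewvar refactored into a standalone function
    imwrite(I2, sprintf('result_%03d.tif', k));          % keep writing results as you go
end
matlabpool close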
As to your specific code: in my experience, using IM2COL is often faster than NLFILTER. im2col creates an nElementsInMask-by-nMasks array out of your image, so that you can apply the filtering in one single operation. With a 7x7 window, the output of im2col will have 3000*2500*49 elements, which is roughly 350 MB as uint8 but close to 3 GB as doubles, so with limited RAM you may have to process the image in horizontal strips. All you need to do is rewrite ewvar so that it works on the 49x1 array of pixels that makes up each mask, which will require some index juggling, if I understand your code correctly.
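To make that index juggling concrete, here is a rough, untested sketch of the im2col route for the zero-degree (east-west) variogram; the border handling and indices are my assumptions, so check them against nlfilter's output:

wind = [7 7];
cols = im2col(I, wind, 'sliding');            % 49-by-nMasks, one column per 7x7 window
                                              % (for a 3000x2500 double image this is ~3 GB;
                                              %  process the image in strips if RAM is tight)
cIdx  = (prod(wind)+1)/2;                     % linear index of the centre pixel (25)
EWidx = cIdx : wind(1) : prod(wind);          % centre pixel plus the pixels east of it (25,32,39,46)
EW = cols(EWidx, :);                          % 4-by-nMasks
h  = size(EW,1) - ld;                         % number of lags, as in ewvar
gamma = 1/(2*h) * sum((EW(1:ld:end-1,:) - EW(2:ld:end,:)).^2, 1);
I2 = zeros(size(I));                          % nlfilter pads the borders with zeros
I2(4:end-3, 4:end-3) = reshape(gamma, size(I) - wind + 1);
I2 = uint8(I2);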