I have written computer vision code in MATLAB that uses MSER to detect features. I used the built-in 'detectMSERFeatures' function to process a locally saved video. Now I want to port it to C using MATLAB Coder, but MATLAB Coder doesn't support this function. I've attached a screenshot of the output. Any help would be appreciated.
My code is as follows:
function count = master
% Clear workspace and Initialize Frame count
clear all;
F=0;
count=0;
% 1.Input Video Object Handler Definition
inputVideo = vision.VideoFileReader('video.mp4');
videoPlayer = vision.VideoPlayer;
% 2.Cropping height and width of frame. Subject to convenient
% adjustments according to the position of the camera. Mainly used to
% crop out street lights from the top of the frame.
height = floor(inputVideo.info().VideoSize(2)*0.7);
width = inputVideo.info().VideoSize(1);
crop = vision.ImagePadder(...
'SizeMethod','Output size', ...
'NumOutputRowsSource','Property', ...
'RowPaddingLocation','Top', ...
'NumOutputRows', height, ...
'NumOutputColumns', width);
% 3.Frame Conversion from True Colour to Grayscale
gray = vision.ColorSpaceConverter;
gray.Conversion = 'RGB to intensity';
% 4.Implementation on individual frames till the end of video.
while(~isDone(inputVideo))
    % Current frame number
    F = F + 1;
    %flag=0
    % Current frame
    currentFrame = step(inputVideo);
    % Crop
    currentFrame = step(crop, currentFrame);
    % Convert to grayscale
    currentFrame = step(gray, currentFrame);
    % Threshold
    currentFrame(currentFrame<0.7843) = 0;
    % Detect MSER regions
    regions = detectMSERFeatures(currentFrame, ...
        'RegionAreaRange', [800 3000], ...
        'ThresholdDelta', 4);
    % Check for 'big bright blob(s)', or high incoming beam,
    % and output detected blob count and corresponding frame
    if(regions.Count >= 2 && regions.Count <= 6)
        disp([regions.Count, F]);
        %flag=1;
        count = count + 1;
    end
    % Port frame to player
    step(videoPlayer, currentFrame);
end
%5.Release both player and video file instances
release(inputVideo);
release(videoPlayer);
I'm using MATLAB R2013a.
The only way to fix this is to upgrade to a more recent version of MATLAB: code generation support for detectMSERFeatures was added in the R2013b release.
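Once you are on R2013b or later, the code generation call could look roughly like this (a minimal sketch; the configuration choices are illustrative, and other constructs in the function, such as clear all and vision.VideoPlayer, may also need codegen-compatible replacements):
% Minimal sketch, assuming R2013b+ with MATLAB Coder installed.
cfg = coder.config('lib');            % target: a static C library
cfg.TargetLang = 'C';                 % generate C rather than C++
codegen master -config cfg -report    % master() takes no inputs, so no -args needed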
I am trying to draw a disk on objects in a video (the objects are cars moving on a road from left to right).
My code:
obj = VideoReader('Cars.avi');
get(obj)
im = read(obj,71);
nframes = get(obj,'NumberOfFrames');
sedisk = strel('disk',10);
im_new = imopen(im,sedisk);
stats = regionprops(im_new);
area_array = [stats.Area];
im2 = read(obj,1);
figure,imagesc(im2);
for i = 1:nframes-1
    stats(i).Centroid
    frame = read(obj,i);
    imshow(frame);
end
I see the frames but not the disk on the cars. Why isn't it working? Maybe something in the logic is wrong?
Thanks, everyone.
You need to put the regionprops code into the for loop, and plot the regions over the image.
Start by following the MATLAB regionprops documentation.
I built a sample code with traffic.avi as input file.
traffic.avi file comes with my MATLAB installation (in folder toolbox/images/imdata/).
Here is a code sample (please read the comments):
clear
close all
%obj = VideoReader('Cars.avi');
obj = VideoReader('traffic.avi');
nframes = get(obj, 'NumberOfFrames');
%sedisk = strel('disk', 10);
sedisk = strel('disk', 2); %Smaller disk fits traffic.avi (you can keep sedisk = strel('disk', 10))
%Read first image (assume no cars). In your code you can keep: im = read(obj,71);
im1 = read(obj,1);
for i = 2:nframes
    im = read(obj, i);
    imshow(im); %Show the frame
    %Subtract frame from the first frame - assume the cars will pop up in the difference image.
    diff_im = uint8(abs(double(im) - double(im1)));
    I = rgb2gray(diff_im); %Convert to grayscale image
    BW = imbinarize(I); %Convert to binary image.
    im_new = imopen(BW, sedisk);
    stats = regionprops('table', im_new, 'Centroid', 'MajorAxisLength', 'MinorAxisLength'); %MATLAB documentation code sample
    %imshow(im_new);
    if (~isempty(stats))
        %stats(i).Centroid
        centers = stats.Centroid; %Get centers (MATLAB documentation code sample)
        %Get radius of the circles (MATLAB documentation code sample).
        diameters = mean([stats.MajorAxisLength stats.MinorAxisLength],2);
        radii = diameters/2;
        %Plot the circles on the displayed video frame.
        hold on
        viscircles(centers,radii);
        hold off
    end
    pause(0.1); %Pause 0.1 seconds
end
The code does not mark the cars accurately; it just demonstrates the stages.
Here is a sample frame:
I want to rectify an image with perspective distortion. I have the coordinates of the corners, and I also have an algorithm that performs what I need, but it executes really slowly. It uses the 'imtransform' and 'maketform' functions, for which MATLAB now has faster alternatives. I tried to replace them but couldn't get it right. Any help will be appreciated.
Here is the Images to make this question clearer:
Input Image with known Coordinates(x,y):
and Desired Output:
This process takes about 2 seconds per image; I need to replace it with the newer MATLAB functions, but I couldn't make it work.
The old algorithm was:
%X has the clockwise X coordinates
%Y has the clockwise Y coordinates
A=zeros(8,8);
A(1,:)=[X(1),Y(1),1,0,0,0,-1*X(1)*x(1),-1*Y(1)*x(1)];
A(2,:)=[0,0,0,X(1),Y(1),1,-1*X(1)*y(1),-1*Y(1)*y(1)];
A(3,:)=[X(2),Y(2),1,0,0,0,-1*X(2)*x(2),-1*Y(2)*x(2)];
A(4,:)=[0,0,0,X(2),Y(2),1,-1*X(2)*y(2),-1*Y(2)*y(2)];
A(5,:)=[X(3),Y(3),1,0,0,0,-1*X(3)*x(3),-1*Y(3)*x(3)];
A(6,:)=[0,0,0,X(3),Y(3),1,-1*X(3)*y(3),-1*Y(3)*y(3)];
A(7,:)=[X(4),Y(4),1,0,0,0,-1*X(4)*x(4),-1*Y(4)*x(4)];
A(8,:)=[0,0,0,X(4),Y(4),1,-1*X(4)*y(4),-1*Y(4)*y(4)];
v=[x(1);y(1);x(2);y(2);x(3);y(3);x(4);y(4)];
u=A\v;
%our transfer function
U=reshape([u;1],3,3)';
w=U*[X';Y';ones(1,4)];
w=w./(ones(3,1)*w(3,:));
T=maketform('projective',U');
%apply the transform and rectify (flatten) the image
P2=imtransform(I,T,'XData',[1 n],'YData',[1 m]);
If it helps, here is how I generated the A matrix and the U matrix:
Out Link
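For reference, each point pair fills two rows of A through the standard direct linear transform (DLT) constraint; a sketch of the relation the rows of A encode, with $u_9 = 1$ fixed:

$$\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} \sim U \begin{bmatrix} X_i \\ Y_i \\ 1 \end{bmatrix}, \qquad x_i = \frac{u_1 X_i + u_2 Y_i + u_3}{u_7 X_i + u_8 Y_i + 1}, \quad y_i = \frac{u_4 X_i + u_5 Y_i + u_6}{u_7 X_i + u_8 Y_i + 1}$$

Clearing the denominators yields the two linear equations per correspondence that make up the rows of A, and u = A\v solves for the eight unknowns.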
Using the built-in MATLAB functions (fitgeotrans, imref2d, and imwarp), the following code runs in 0.06 seconds on my laptop:
% read the image
im = imread('paper.jpg');
tic
% set the moving points := the original image control points
x = [1380;2183;1282;422];
y = [727;1166;2351;1678];
movingPoints = [x,y];
% set the fixed points := the desired image control points
xfix = [1;1000;1000;1];
yfix = [1;1;1000;1000];
fixedPoints = [xfix,yfix];
% generate geometric transform
tform = fitgeotrans(movingPoints,fixedPoints,'projective');
% generate reference object (full desired image size)
R = imref2d([1000 1000]);
% warp image
outputImage = imwarp(im,tform,'OutputView',R);
toc
% show image
imshow(outputImage);
I have the following code:
a = imaqhwinfo;
%[camera_name, camera_id, format] = getCameraInfo(a);
% Capture the video frames using the videoinput function
% You have to replace the resolution & your installed adaptor name.
% vid = videoinput('winvideo', 1);
vid = VideoReader('C:\VHDL Project\VHDL Course\Logtel\Image processing\Sacramento Kings vs Golden State Warriors.mp4')
% Set the properties of the video object
set(vid, 'FramesPerTrigger', Inf);
set(vid, 'ReturnedColorspace', 'rgb')
vid.FrameGrabInterval = 5;
%start the video acquisition here
start(vid)
% Set a loop that stops after 200 frames of acquisition
while(vid.FramesAcquired<=200)
    % Get the snapshot of the current frame
    data = getsnapshot(vid);
    % Now to track red objects in real time
    % we have to subtract the red component
    % from the grayscale image to extract the red components in the image.
    diff_im = imsubtract(data(:,:,1), rgb2gray(data));
    %Use a median filter to filter out noise
    diff_im = medfilt2(diff_im, [3 3]);
    % Convert the resulting grayscale image into a binary image.
    diff_im = im2bw(diff_im,0.18);
    % Remove all those pixels less than 300px
    diff_im = bwareaopen(diff_im,300);
    % Label all the connected components in the image.
    bw = bwlabel(diff_im, 8);
    % Here we do the image blob analysis.
    % We get a set of properties for each labeled region.
    stats = regionprops(bw, 'BoundingBox', 'Centroid');
    % Display the image
    imshow(data)
    hold on
    %This is a loop to bound the red objects in a rectangular box.
    for object = 1:length(stats)
        bb = stats(object).BoundingBox;
        bc = stats(object).Centroid;
        rectangle('Position',bb,'EdgeColor','r','LineWidth',2)
        plot(bc(1),bc(2), '-m+')
        a=text(bc(1)+15,bc(2), strcat('X: ', num2str(round(bc(1))), ' Y: ', num2str(round(bc(2)))));
        set(a, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'yellow');
    end
    hold off
end
% Both the loops end here.
% Stop the video acquisition.
stop(vid);
% Flush all the image data stored in the memory buffer.
flushdata(vid);
% Clear all variables
clear all
sprintf('%s','That was all about Image tracking, Guess that was pretty easy :) ')
I'm trying to read an MP4 file, run through it, and find red objects.
Unfortunately, the tool pops up the following error:
Error using VideoReader/set
The name 'FramesPerTrigger' is not an accessible property for an instance of class 'VideoReader'.
Error in RedObjectTracking (line 11)
set(vid, 'FramesPerTrigger', Inf);
I would be happy if someone could tell me where my mistake is.
Thanks for the help.
If you look at the documentation for VideoReader you will see that (as your error says) FramesPerTrigger is not listed as a valid property. Based on the documentation for FramesPerTrigger, it is a property used only for video devices in the Image Acquisition Toolbox (similar to the videoinput call that you have commented out on the line before). So the line where you attempt to set the FramesPerTrigger value shouldn't be there when you're only using a video file input via VideoReader.
This also makes sense, as your image acquisition system will have triggers and it is important for MATLAB to know how many frames to grab for each trigger. A video file, on the other hand (which VideoReader is intended to handle), has no triggers, and the frame information is readily available from the file itself.
It looks, based on your code, like you're trying to simply substitute VideoReader for videoinput, but this will not work, so there are many points throughout your code which will result in errors.
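For the file-reading part specifically, a minimal sketch of a VideoReader-based frame loop (assuming R2014b or later for hasFrame/readFrame; the file name is a placeholder):
vid = VideoReader('myvideo.mp4');   % placeholder file name
while hasFrame(vid)
    data = readFrame(vid);          % one RGB frame; no triggers or snapshots needed
    % ... run the red-object detection on 'data' here, as in the loop above ...
end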
Here is a model for face detection. The bounding box is used to track and detect the face. Active contours need to be added to get the exact shape of the face, not just its features. To do this, we need to segment the frames within the video. I have applied segmentation to a still image, where you can choose where to initialize it; with video, however, the initialization needs to be dynamic and fast at the same time, since the loop runs over live frames that are not stored anywhere. I want both the bounding box and the active contours. Can anyone guide me on how to achieve this? Here is the code so far:
tic;
clear all
close all
clc
%Create the face detector object.
faceDetector = vision.CascadeObjectDetector();
%Get the input device using image acquisition toolbox,resolution = 640x480 to improve performance
obj =imaq.VideoDevice('winvideo', 1, 'YUY2_320x240','ROI', [1 1 320 240]);
set(obj,'ReturnedColorSpace', 'rgb'); % Set object to RGB colours
figure('menubar','none','tag','webcam');
%preview (obj)
while (true)
    frame=step(obj);
    %----IT IS TO DO WITH THIS PART OF CODE
    %m = zeros(size(frame,1),size(frame,2)); %-- create initial mask
    %m(20:222,20:250) = 1; %show the specific image with the given parameters
    %m = imresize(m,.5); % for fast computation
    %seg = region_seg(frame, m, 300); %-- run segmentation
    bbox=step(faceDetector,frame);
    boxInserter = insertObjectAnnotation(frame,'rectangle',bbox,'Face Detected');
    imshow(boxInserter,'border','tight');
    f=findobj('tag','webcam');
    if (isempty(f))
        close(gcf)
        break
    end
    pause(0.05)
end
release(obj)
toc;
An example of what I want to do: http://groups.inf.ed.ac.uk/calvin/FastVideoSegmentation/
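One possible direction, shown as a hypothetical sketch rather than a fix: seed the segmentation mask from the detected face bounding box on every frame, so initialization is automatic, and use the Image Processing Toolbox's activecontour here in place of the third-party region_seg (the iteration count is an illustrative guess):
% Inside the while loop, after bbox = step(faceDetector, frame):
if ~isempty(bbox)
    gray = rgb2gray(frame);
    mask = false(size(gray));
    b = round(bbox(1,:));                 % [x y width height] of the first face
    mask(b(2):b(2)+b(4)-1, b(1):b(1)+b(3)-1) = true;
    seg = activecontour(gray, mask, 100); % refine the box mask to the face outline
end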
I have a real-time skin detection algorithm that gives me a bounding box around the skin region, drawn with rectangle('Position',bb,'EdgeColor','r','LineWidth',2) on the original image. I want to first detect the skin region in the original image, and then use Viola-Jones to detect the face in the cropped skin region. After I crop the skin region and run the face detection algorithm on it, how can I map the bounding box of the face back to the original image?
function cameraon_Callback(hObject, eventdata, handles)
% hObject handle to cameraon (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
global video;
global videoFrame;
axes(handles.axes1);
video = videoinput('winvideo',1,'YUY2_320x240');
set(video,'ReturnedColorSpace','rgb');
handles.video=video;
triggerconfig(video,'manual');
video.FramesPerTrigger = 1;
guidata(hObject,handles);
faceDetector=vision.CascadeObjectDetector('FrontalFaceCART');
faceDetector.MinSize=[20 20];
faceDetector.MergeThreshold = 20;
videoFrame=getsnapshot(video);
bbox=step(faceDetector,videoFrame);
if numel(bbox) == 0
    errordlg('Face not detected. Please try again.');
    set(handles.cameraon,'String','Start Camera')
    stop(video);
    delete(video);
    clear;
else
    axes(handles.axes1);
    start(video);
end
while(true)
    frame=getsnapshot(video);
    %Detect faces.
    data = frame; % you could instead read an image from a database; just make sure it is on the path
    diff_im = imsubtract(data(:,:,1), rgb2gray(data)); % remove the grayscale component from the image
    diff_im = medfilt2(diff_im, [3 3]); % apply a median filter
    diff_im = imadjust(diff_im); % adjust the image contrast (check each function's documentation for details)
    level = graythresh(diff_im); % extract threshold level
    bw = im2bw(diff_im,level);
    BW5 = imfill(bw,'holes');
    bw6 = bwlabel(BW5, 8);
    stats = regionprops(bw6,['basic']); % 'basic' is not important here
    measurements = regionprops(bw6, 'boundingbox');
    BB1=struct2cell(measurements);
    BB2=cell2mat(BB1);
    a = BB2(1);
    b = BB2(2);
    c = BB2(3);
    d = BB2(4);
    [N,M]=size(stats);
    if (bw==0) % if there is no skin color, exit
        break;
    else
        tmp = stats(1);
        for i = 2:N % find the biggest region and mark it as the face
            if stats(i).Area > tmp.Area
                tmp = stats(i);
            end
        end
        bb = tmp.BoundingBox; % bounding box marking the skin-color region
        bc = tmp.Centroid;
        videoFrame=getsnapshot(video);
This is the place where I cannot map the bounding box back onto the original image:
        skinImage = imcrop(videoFrame,bb(1,:));
        bbox = step(faceDetector,skinImage);
        bbox(1,1:2) = bbox(1,1:2) + bb(1,1:2);
        videoOut = insertObjectAnnotation(videoFrame,'rectangle',bbox,'Face');
        cla;
        imshow(videoOut,[]);
        drawnow;
        pause(0.0001);
    end
end
guidata(hObject,handles);
I want to put the rectangle I got from the face detector onto the full-size image, at the original location the cropped image came from.
You simply add the coordinates of the top-left corner of the cropped region to the top-left corners of the detected bounding boxes.
Also, in the latest versions of MATLAB, vision.CascadeObjectDetector supports passing in the region of interest where you want to detect objects, so you do not need to crop; it will then adjust the coordinates for you. Check the documentation for the step() method of vision.CascadeObjectDetector.
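Putting both suggestions together, a hypothetical sketch (the 'UseROI' property name is taken from the vision.CascadeObjectDetector documentation in newer releases; verify that your release supports it):
% (a) Manual offset: detect in the crop, then shift every box back.
skinImage = imcrop(videoFrame, bb(1,:));
bbox = step(faceDetector, skinImage);
bbox(:,1:2) = bsxfun(@plus, bbox(:,1:2), bb(1,1:2)); % shift all detections, not just the first

% (b) ROI-based detection (newer releases): no cropping, no manual shift.
roiDetector = vision.CascadeObjectDetector('FrontalFaceCART', 'UseROI', true);
bbox = step(roiDetector, videoFrame, round(bb(1,:))); % boxes come back in full-frame coordinates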