Capture an image of a detected face using a webcam? - MATLAB

I am new to MATLAB. I need to capture an image and save it into a folder. This is my MATLAB code for detecting a face.
% Create the face detector object.
faceDetector = vision.CascadeObjectDetector();
% Create the point tracker object.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
% Create the webcam object.
cam = webcam();
% Capture one frame to get its size.
videoFrame = snapshot(cam);
frameSize = size(videoFrame);
% Create the video player object.
videoPlayer = vision.VideoPlayer('Position', [100 100 [frameSize(2), frameSize(1)]+30]);
runLoop = true;
numPts = 0;
frameCount = 0;
%%x = 0;
while runLoop && frameCount < 400
    %% while(x<1)
    % Get the next frame.
    videoFrame = snapshot(cam);
    videoFrameGray = rgb2gray(videoFrame);
    frameCount = frameCount + 1;

    if numPts < 10
        % Detection mode.
        bbox = faceDetector.step(videoFrameGray);

        if ~isempty(bbox)
            % Find corner points inside the detected region.
            points = detectMinEigenFeatures(videoFrameGray, 'ROI', bbox(1, :));

            % Re-initialize the point tracker.
            xyPoints = points.Location;
            numPts = size(xyPoints, 1);
            release(pointTracker);
            initialize(pointTracker, xyPoints, videoFrameGray);

            % Save a copy of the points.
            oldPoints = xyPoints;

            % Convert the rectangle into a list of 4 points that shows
            % the orientation of the face.
            bboxPoints = bbox2points(bbox(1, :));

            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon = reshape(bboxPoints', 1, []);

            % Display a bounding box around the detected face.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, 'LineWidth', 3);

            % Display detected corners.
            videoFrame = insertMarker(videoFrame, xyPoints, '+', 'Color', 'white');
        end
    else
        % Tracking mode.
        [xyPoints, isFound] = step(pointTracker, videoFrameGray);
        visiblePoints = xyPoints(isFound, :);
        oldInliers = oldPoints(isFound, :);
        numPts = size(visiblePoints, 1);

        if numPts >= 10
            % Estimate the geometric transformation between the old points
            % and the new points.
            [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
                oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);

            % Apply the transformation to the bounding box.
            bboxPoints = transformPointsForward(xform, bboxPoints);

            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon = reshape(bboxPoints', 1, []);

            % Display a bounding box around the face being tracked.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon, 'LineWidth', 3);

            % Display tracked points.
            videoFrame = insertMarker(videoFrame, visiblePoints, '+', 'Color', 'white');

            % Reset the points.
            oldPoints = visiblePoints;
            setPoints(pointTracker, oldPoints);
        end
    end

    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);

    % Check whether the video player window has been closed.
    runLoop = isOpen(videoPlayer);
end
% Clean up.
clear cam;
release(videoPlayer);
release(pointTracker);
release(faceDetector);
Please help me capture and save the image.
I tried this code to capture and save an image:
vid = videoinput('dcam',1,'RGB24_640x480');
preview(vid);
start(vid);
im=getdata(vid);
figure,imshow(im);
write(im,'test1image.jpg');
When I try this code it gives an error:
Error using videoinput (line 233)
There are no devices installed for the specified ADAPTORNAME. See IMAQHWINFO.
Error in takeimage (line 1)
vid = videoinput('dcam',1,'RGB24_640x480');

You may need to download a support package for the Image Acquisition Toolbox for your particular camera. If you are using a regular USB webcam, then you probably need the "OS Generic Video Interface" support package.
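For completeness, here is a minimal sketch of the save step once the support package is installed. It reuses webcam/snapshot and the cascade detector from your script, and saves with imwrite (write is not an image-saving function); the 'faces' folder and file name are only examples:
% Sketch: capture one frame, crop the first detected face, save it to a folder.
faceDetector = vision.CascadeObjectDetector();
cam = webcam();
videoFrame = snapshot(cam);
bbox = step(faceDetector, videoFrame);
if ~isempty(bbox)
    % bbox rows are [x y width height], the same rectangle format imcrop expects.
    faceImage = imcrop(videoFrame, bbox(1, :));
    if ~exist('faces', 'dir')
        mkdir('faces');
    end
    imwrite(faceImage, fullfile('faces', 'face_1.jpg'));
end
clear cam;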

Video Stabilization Using Point Feature Matching WITHOUT LOSING RGB COLORS on frames on MATLAB

I'd like to stabilize a 13-minute video captured by a quadcopter over a traffic crossroads without losing its 3 color channels (RGB). MATLAB's own function leads to a grayscale video, which is unwanted for the main and future objective, vehicle tracking. New thoughts are appreciated.
Below you can find my own code (it works, but converts the video to grayscale), edited from MATLAB's own script on the following page:
Matlab's related Webpage : Video Stabilization Using Point Feature Matching
clc; clear all; close all;
filename = 'Quad_video_erst';
hVideoSrc = vision.VideoFileReader('Quad_video_erst.mp4', 'ImageColorSpace', 'Intensity');
% Create and open video file
myVideo = VideoWriter('vivi.avi');
open(myVideo);
hVPlayer = vision.VideoPlayer;
%% Step 1: Read Frames from a Movie File
for i = 1:10 % testing for a short run
    imgA = step(hVideoSrc); % Read first frame into imgA
    imgB = step(hVideoSrc); % Read second frame into imgB

    %% Step 2: SURF Detection
    pointsA = surf_function_CAN(imgA);
    pointsB = surf_function_CAN(imgB);

    %% Step 3: Select Correspondences Between Points
    % Extract feature descriptors for the corners
    [featuresA, pointsA] = extractFeatures(imgA, pointsA);
    [featuresB, pointsB] = extractFeatures(imgB, pointsB);
    indexPairs = matchFeatures(featuresA, featuresB);
    pointsA = pointsA(indexPairs(:, 1), :);
    pointsB = pointsB(indexPairs(:, 2), :);

    %% Step 4: Estimating Transform from Noisy Correspondences
    [tform, pointsBm, pointsAm] = estimateGeometricTransform(...
        pointsB, pointsA, 'affine');
    imgBp = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
    pointsBmp = transformPointsForward(tform, pointsBm.Location);

    %% Step 5: Transform Approximation and Smoothing
    % Extract scale and rotation part sub-matrix.
    H = tform.T;
    R = H(1:2,1:2);
    % Compute theta from mean of two possible arctangents
    theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
    % Compute scale from mean of two stable mean calculations
    scale = mean(R([1 4])/cos(theta));
    % Translation remains the same:
    translation = H(3, 1:2);
    % Reconstitute new s-R-t transform:
    HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)]; ...
        translation], [0 0 1]'];
    tformsRT = affine2d(HsRt);
    imgBold = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
    imgBsRt = imwarp(imgB, tformsRT, 'OutputView', imref2d(size(imgB)));

    %% Write the Video
    writeVideo(myVideo, imfuse(imgBold, imgBsRt, 'ColorChannels', 'red-cyan'));
end
And the function:
function [ surf_points ] = surf_function_CAN(img)
    surfpoints_raw = detectSURFFeatures(img);
    [featuresOriginal, validPtsOriginal] = extractFeatures(img, surfpoints_raw);
    strongestPoints = validPtsOriginal.selectStrongest(1600);
    array = strongestPoints.Location;
    % New - Get X and Y coordinates
    X = array(:,1);
    Y = array(:,2);
    % New - Determine a mask to grab the points we want
    ind = (((X>156-9-70 & X<156+9+70) & (Y>406-9-70 & Y<406+9+70)) | ...
           ((X>684-11-70 & X<684+11+70) & (Y>274-11-70 & Y<274+11+70)) | ...
           ((X>1066-15-70 & X<1066+15+70) & (Y>67-15-70 & Y<67+15+70)) | ...
           ((X>1559-15-70 & X<1559+15+70) & (Y>867-15-70 & Y<867+15+70)) | ...
           ((X>1082-18-70 & X<1082+18+70) & (Y>740-18-100 & Y<740+18+100)));
    % New - Create new SURFPoints structure that contains all information
    % from the points we need
    array_filtered = strongestPoints(ind);
    surf_points = array_filtered;
end
Firstly, if you look through their example, you should use the part where they perform the loop, not the part where they show how to implement it between two frames, as the two are not exactly compatible. Other than that, the only thing you need to do is perform the analysis on a grayscale image but apply the transformation to the color image:
%% Load Video and Open Save File
filename = 'shaky_car.avi';
hVideoSrc = vision.VideoFileReader(filename);
myVideo = VideoWriter('vivi.avi');
open(myVideo);
% Get next Image
colorImg = step(hVideoSrc);
% Try to Convert to Grayscale
try
    imgB = rgb2gray(colorImg);
    RGB = true;
catch % Image is not RGB
    imgB = colorImg;
    RGB = false;
end
Hcumulative = eye(3);
ptThresh = 0.1;
% Loop Through Video
while ~isDone(hVideoSrc)
    imgA = imgB;
    % Get Next Image
    colorImg = step(hVideoSrc);
    % Convert to Grayscale
    if RGB
        imgB = rgb2gray(colorImg);
    else
        imgB = colorImg;
    end

    %% Calculate Transformation
    % Generate Prospective Points
    pointsA = detectFASTFeatures(imgA, 'MinContrast', ptThresh);
    pointsB = detectFASTFeatures(imgB, 'MinContrast', ptThresh);

    % Extract Features for the Corners
    [featuresA, pointsA] = extractFeatures(imgA, pointsA);
    [featuresB, pointsB] = extractFeatures(imgB, pointsB);
    indexPairs = matchFeatures(featuresA, featuresB);
    pointsA = pointsA(indexPairs(:, 1), :);
    pointsB = pointsB(indexPairs(:, 2), :);
    [tform] = estimateGeometricTransform(pointsB, pointsA, 'affine');

    % Extract Rotation & Translations
    H = tform.T;
    R = H(1:2,1:2);
    theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
    scale = mean(R([1 4])/cos(theta));
    translation = H(3, 1:2);

    % Reconstitute Transform
    HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)]; ...
        translation], [0 0 1]'];
    Hcumulative = HsRt*Hcumulative;

    % Perform Transformation on Color Image
    img = imwarp(colorImg, affine2d(Hcumulative), 'OutputView', imref2d(size(imgB)));

    % Save Transformed Color Image to Video File
    writeVideo(myVideo, img)
end
close(myVideo)

I want to make a panorama image but it shows the error message Undefined function 'imageSet' for input arguments of type 'char'

Undefined function 'imageSet' for input arguments of type 'char'.
Error in build (line 3)
buildingScene = imageSet(buildingDir);
% Load images.
buildingDir = fullfile(toolboxdir('vision'), 'visiondata', 'building');
buildingScene = imageSet(buildingDir);
% Display images to be stitched
montage(buildingScene.ImageLocation)
% Read the first image from the image set.
I = read(buildingScene, 1);
% Initialize features for I(1)
grayImage = rgb2gray(I);
points = detectSURFFeatures(grayImage);
[features, points] = extractFeatures(grayImage, points);
% Initialize all the transforms to the identity matrix. Note that the
% projective transform is used here because the building images are fairly
% close to the camera. Had the scene been captured from a further distance,
% an affine transform would suffice.
tforms(buildingScene.Count) = projective2d(eye(3));
% Iterate over remaining image pairs
for n = 2:buildingScene.Count
    % Store points and features for I(n-1).
    pointsPrevious = points;
    featuresPrevious = features;
    % Read I(n).
    I = read(buildingScene, n);
    % Detect and extract SURF features for I(n).
    grayImage = rgb2gray(I);
    points = detectSURFFeatures(grayImage);
    [features, points] = extractFeatures(grayImage, points);
    % Find correspondences between I(n) and I(n-1).
    indexPairs = matchFeatures(features, featuresPrevious, 'Unique', true);
    matchedPoints = points(indexPairs(:,1), :);
    matchedPointsPrev = pointsPrevious(indexPairs(:,2), :);
    % Estimate the transformation between I(n) and I(n-1).
    tforms(n) = estimateGeometricTransform(matchedPoints, matchedPointsPrev,...
        'projective', 'Confidence', 99.9, 'MaxNumTrials', 2000);
    % Compute T(1) * ... * T(n-1) * T(n)
    tforms(n).T = tforms(n-1).T * tforms(n).T;
end
% Compute the output limits for each transform.
imageSize = size(I); % all the images are the same size
for i = 1:numel(tforms)
    [xlim(i,:), ylim(i,:)] = outputLimits(tforms(i), [1 imageSize(2)], [1 imageSize(1)]);
end
% Find the image that is roughly in the center of the panorama.
avgXLim = mean(xlim, 2);
[~, idx] = sort(avgXLim);
centerIdx = floor((numel(tforms)+1)/2);
centerImageIdx = idx(centerIdx);
% Apply the inverse of the center image's transform to all the transforms.
Tinv = invert(tforms(centerImageIdx));
for i = 1:numel(tforms)
    tforms(i).T = Tinv.T * tforms(i).T;
end
% Recompute the output limits using the updated transforms.
for i = 1:numel(tforms)
    [xlim(i,:), ylim(i,:)] = outputLimits(tforms(i), [1 imageSize(2)], [1 imageSize(1)]);
end
% Find the minimum and maximum output limits
xMin = min([1; xlim(:)]);
xMax = max([imageSize(2); xlim(:)]);
yMin = min([1; ylim(:)]);
yMax = max([imageSize(1); ylim(:)]);
% Width and height of panorama.
width = round(xMax - xMin);
height = round(yMax - yMin);
% Initialize the "empty" panorama.
panorama = zeros([height width 3], 'like', I);
% Step 4 - Create the Panorama:
% Use imwarp to map images into the panorama and vision.AlphaBlender to
% overlay the images together.
blender = vision.AlphaBlender('Operation', 'Binary mask', ...
'MaskSource', 'Input port');
% Create a 2-D spatial reference object defining the size of the panorama.
xLimits = [xMin xMax];
yLimits = [yMin yMax];
panoramaView = imref2d([height width], xLimits, yLimits);
% Create the panorama.
for i = 1:buildingScene.Count
    I = read(buildingScene, i);
    % Transform I into the panorama.
    warpedImage = imwarp(I, tforms(i), 'OutputView', panoramaView);
    % Create a mask for the overlay operation.
    warpedMask = imwarp(ones(size(I(:,:,1))), tforms(i), 'OutputView', panoramaView);
    % Clean up edge artifacts in the mask and convert to a binary image.
    warpedMask = warpedMask >= 1;
    % Overlay the warpedImage onto the panorama.
    panorama = step(blender, panorama, warpedImage, warpedMask);
end
figure
imshow(panorama)
imageSet requires the Computer Vision Toolbox from MATLAB R2014b or higher. See the release notes from the Computer Vision Toolbox here: http://www.mathworks.com/help/vision/release-notes.html#R2014b
If you have R2014a or lower, imageSet does not come with your distribution. The only option you have is to upgrade your MATLAB distribution. Sorry if this isn't what you wanted to hear!
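If you are not sure which release and toolbox version you have, a quick check along these lines will tell you (a sketch using only standard MATLAB functions):
% Sketch: verify the release and whether imageSet exists before running the example.
version('-release')      % e.g. '2014a'; imageSet needs R2014b or newer
v = ver('vision');       % Computer Vision System Toolbox info
disp(v.Version)
if ~exist('imageSet', 'class')
    warning('imageSet is not available in this MATLAB installation.');
end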

How to refresh imshow in MATLAB?

I want to convert this answer's code to use imshow.
It creates a movie with movie2avi by
%# preallocate
nFrames = 20;
mov(1:nFrames) = struct('cdata',[], 'colormap',[]);
%# create movie
for k=1:nFrames
    surf(sin(2*pi*k/20)*Z, Z)
    mov(k) = getframe(gca);
end
close(gcf)
movie2avi(mov, 'myPeaks1.avi', 'compression','None', 'fps',10);
My pseudocode
%# preallocate
nFrames = 20;
mov(1:nFrames) = struct('cdata',[], 'colormap',[]);
%# create movie
for k=1:nFrames
    imshow(signal(:,k,:),[1 1 1]) % or simply imshow(signal(:,k,:))
    drawnow
    mov(k) = getframe(gca);
end
close(gcf)
movie2avi(mov, 'myPeaks1.avi', 'compression','None', 'fps',10);
However, this creates the animation on the screen but saves only an AVI file whose size is 0 kB. The file myPeaks1.avi is stored properly when using the surf command, but not with imshow.
I am not sure about the drawnow command.
Actual case code
%% HSV 3rd version
% https://stackoverflow.com/a/29801499/54964
rgbImage = imread('http://i.stack.imgur.com/cFOSp.png');
% Extract blue using HSV
hsvImage=rgb2hsv(rgbImage);
I=rgbImage;
R=I(:,:,1);
G=I(:,:,2);
B=I(:,:,3);
R((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
G((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
B((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
I2= cat(3, R, G, B);
% Binarize image, getting all the pixels that are "blue"
bw=im2bw(rgb2gray(I2),0.9999);
% The label most repeated will be the signal.
% So we find it and separate the background from the signal using label.
% Label each "blob"
lbl=bwlabel(~bw);
% Find the blob with the highest amount of data. That will be your signal.
r=histc(lbl(:),1:max(lbl(:)));
[~,idxmax]=max(r);
% Profit!
signal=rgbImage;
signal(repmat((lbl~=idxmax),[1 1 3]))=255;
background=rgbImage;
background(repmat((lbl==idxmax),[1 1 3]))=255;
%% Error Testing
comp_image = rgb2gray(abs(double(rgbImage) - double(signal)));
if ( sum(sum(comp_image(32:438, 96:517))) > 0 )
    break;
end
%% Video
% 5001 units so 13.90 (= 4.45 + 9.45) seconds.
% In RGB, original size 480x592.
% Resize to 480x491
signal = signal(:, 42:532, :);
% Show 7 seconds (298 units) at a time.
% imshow(signal(:, 1:298, :));
%% Video VideoWriter
% movie2avi deprecated in Matlab
% https://stackoverflow.com/a/11054155/54964
% https://stackoverflow.com/a/29952648/54964
%# figure
hFig = figure('Menubar','none', 'Color','white');
Z = peaks;
h = imshow(Z, [], 'InitialMagnification',1000, 'Border','tight');
colormap parula; axis tight manual off;
set(gca, 'nextplot','replacechildren', 'Visible','off');
% set(gcf,'Renderer','zbuffer'); % on some Windows
%# preallocate
N = 40; % 491;
vidObj = VideoWriter('myPeaks3.avi');
vidObj.Quality = 100;
vidObj.FrameRate = 10;
open(vidObj);
%# create movie
for k=1:N
    set(h, 'CData', signal(:,k:k+40,:))
    % drawnow
    writeVideo(vidObj, getframe(gca));
end
%# save as AVI file
close(vidObj);
How can I substitute the drawing function with imshow or an equivalent?
How can I store the animation correctly?
Here is some code to try:
%// plot
hFig = figure('Menubar','none', 'Color','white');
Z = peaks;
%h = surf(Z);
h = imshow(Z, [], 'InitialMagnification',1000, 'Border','tight');
colormap jet
axis tight manual off
%// preallocate movie structure
N = 40;
mov = struct('cdata',cell(1,N), 'colormap',cell(1,N));
%// animation
for k=1:N
    %set(h, 'ZData',sin(2*pi*k/N)*Z)
    set(h, 'CData',sin(2*pi*k/N)*Z)
    drawnow
    mov(k) = getframe(hFig);
end
close(hFig)
%// save AVI movie, and open video file
movie2avi(mov, 'file.avi', 'Compression','none', 'Fps',10);
winopen('file.avi')
Result (not really the video, just a GIF animation):
Depending on the codecs installed on your machine, you can apply video compression, e.g:
movie2avi(mov, 'file.avi', 'Compression','XVID', 'Quality',100, 'Fps',10);
(assuming you have the Xvid encoder installed).
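Since movie2avi is deprecated in newer releases (as you note in your own code), the same save step can be written with VideoWriter; a minimal sketch (writeVideo accepts the frame struct array returned by getframe):
% Sketch: VideoWriter equivalent of the movie2avi call above.
vid = VideoWriter('file.avi', 'Uncompressed AVI');
vid.FrameRate = 10;
open(vid);
writeVideo(vid, mov);   % mov is the struct array of captured frames
close(vid);
winopen('file.avi')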
EDIT:
Here is my implementation of the code you posted:
%%// extract blue ECG signal
%// retrieve picture: http://stackoverflow.com/q/29800089
imgRGB = imread('http://i.stack.imgur.com/cFOSp.png');
%// detect axis lines and labels
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%%// create sliding-window video
len = 40;
signal = imgRGB2(:,42:532,:);
figure('Menubar','none', 'NumberTitle','off', 'Color','k')
hImg = imshow(signal(:,1:1+len,:), ...
'InitialMagnification',100, 'Border','tight');
vid = VideoWriter('signal.avi');
vid.Quality = 100;
vid.FrameRate = 60;
open(vid);
N = size(signal,2);
for k=1:N-len
    set(hImg, 'CData',signal(:,k:k+len,:))
    writeVideo(vid, getframe());
end
close(vid);
The result looks like this:

Matlab and ImageJ detecting bone fracture

I'm trying to detect a fracture in a picture using ImageJ and MATLAB; both of them are required. Here is the original image:
I already established the connection between MATLAB and ImageJ, and I've opened the image in ImageJ and started processing it. First, I used the Find Edges function in the ImageJ menu to get an outline of the bone. Then I applied a contrast enhancement to strengthen the outline. My problem now is: having only an outline on a black background, how can I make an algorithm that will tell me the lines don't connect (meaning there is a fracture in the bone)? I did something similar to what is shown in this video, in the part where he ticks the Sobel edge detection.
https://www.youtube.com/watch?v=Hxn2atZl5us
Gave it a shot in MATLAB only. You can use the Hough transform to find the angles that contribute most strongly to the edge-filtered image, then use that information to go one step further and detect where the break is with some typical image-processing tricks.
No promises on how this will work on images other than the one you provided, but the steps are reasonable to refine for additional sample breadth.
img = imread('http://i.stack.imgur.com/mHo7s.jpg');
ImgBlurSigma = 2; % Amount to denoise input image
MinHoughPeakDistance = 5; % Distance between peaks in Hough transform angle detection
HoughConvolutionLength = 40; % Length of line to use to detect bone regions
HoughConvolutionDilate = 2; % Amount to dilate kernel for bone detection
BreakLineTolerance = 0.25; % Tolerance for bone end detection
breakPointDilate = 6; % Amount to dilate detected bone end points
%%%%%%%%%%%%%%%%%%%%%%%
img = (rgb2gray(img)); % Load image
img = imfilter(img, fspecial('gaussian', 10, ImgBlurSigma), 'symmetric'); % Denoise
% Do edge detection to find bone edges in image
% Filter out all but the two longest lines
% This feature may need to be changed if break is not in middle of bone
boneEdges = edge(img, 'canny');
boneEdges = bwmorph(boneEdges, 'close');
edgeRegs = regionprops(boneEdges, 'Area', 'PixelIdxList');
AreaList = sort(vertcat(edgeRegs.Area), 'descend');
edgeRegs(~ismember(vertcat(edgeRegs.Area), AreaList(1:2))) = [];
edgeImg = zeros(size(img, 1), size(img,2));
edgeImg(vertcat(edgeRegs.PixelIdxList)) = 1;
% Do hough transform on edge image to find angles at which bone pieces are
% found
% Use max value of Hough transform vs angle to find angles at which lines
% are oriented. If there is more than one major angle contribution there
% will be two peaks detected but only one peak if there is only one major
% angle contribution (ie peaks here = number of located bones = Number of
% breaks + 1)
[H,T,R] = hough(edgeImg,'RhoResolution',1,'Theta',-90:2:89.5);
maxHough = max(H, [], 1);
HoughThresh = (max(maxHough) - min(maxHough))/2 + min(maxHough);
[~, HoughPeaks] = findpeaks(maxHough,'MINPEAKHEIGHT',HoughThresh, 'MinPeakDistance', MinHoughPeakDistance);
% Plot Hough detection results
figure(1)
plot(T, maxHough);
hold on
plot([min(T) max(T)], [HoughThresh, HoughThresh], 'r');
plot(T(HoughPeaks), maxHough(HoughPeaks), 'rx', 'MarkerSize', 12, 'LineWidth', 2);
hold off
xlabel('Theta Value'); ylabel('Max Hough Transform');
legend({'Max Hough Transform', 'Hough Peak Threshold', 'Detected Peak'});
% Locate site of break
if numel(HoughPeaks) > 1
    BreakStack = zeros(size(img, 1), size(img, 2), numel(HoughPeaks));
    % Convolute edge image with line of detected angle from hough transform
    for m = 1:numel(HoughPeaks)
        boneKernel = strel('line', HoughConvolutionLength, T(HoughPeaks(m)));
        kern = double(bwmorph(boneKernel.getnhood(), 'dilate', HoughConvolutionDilate));
        BreakStack(:,:,m) = imfilter(edgeImg, kern).*edgeImg;
    end
    % Take difference between convolution images. Where this crosses zero
    % (within tolerance) should be where the break is. Have to filter out
    % regions elsewhere where the bone simply ends.
    brImg = abs(diff(BreakStack, 1, 3)) < BreakLineTolerance*max(BreakStack(:)) & edgeImg > 0;
    [BpY, BpX] = find(abs(diff(BreakStack, 1, 3)) < BreakLineTolerance*max(BreakStack(:)) & edgeImg > 0);
    brImg = bwmorph(brImg, 'dilate', breakPointDilate);
    brReg = regionprops(brImg, 'Area', 'MajorAxisLength', 'MinorAxisLength', ...
        'Orientation', 'Centroid');
    brReg(vertcat(brReg.Area) ~= max(vertcat(brReg.Area))) = [];
    % Calculate bounding ellipse
    brReg.EllipseCoords = zeros(100, 2);
    t = linspace(0, 2*pi, 100);
    brReg.EllipseCoords(:,1) = brReg.Centroid(1) + brReg.MajorAxisLength/2*cos(t - brReg.Orientation);
    brReg.EllipseCoords(:,2) = brReg.Centroid(2) + brReg.MinorAxisLength/2*sin(t - brReg.Orientation);
else
    brReg = [];
end
% Draw ellipse around break location
figure(2)
imshow(img)
hold on
colormap('gray')
if ~isempty(brReg)
    plot(brReg.EllipseCoords(:,1), brReg.EllipseCoords(:,2), 'r');
end
hold off

Video Stabilization with MATLAB

I have a video in which, at some point, the image rotates ... I don't know the angle or in what direction it moves. I tried to use:
function [ output_args ] = aaa( filename )
hVideoSrc = vision.VideoFileReader(filename, 'ImageColorSpace', 'Intensity');
imgA = step(hVideoSrc); % Read first frame into imgA
imgB = step(hVideoSrc); % Read second frame into imgB
figure; imshowpair(imgA, imgB, 'montage');
title(['Frame A', repmat(' ',[1 70]), 'Frame B']);
figure; imshowpair(imgA,imgB,'ColorChannels','red-cyan');
title('Color composite (frame A = red, frame B = cyan)');
ptThresh = 0.1;
pointsA = detectFASTFeatures(imgA, 'MinContrast', ptThresh);
pointsB = detectFASTFeatures(imgB, 'MinContrast', ptThresh);
% Display corners found in images A and B.
figure; imshow(imgA); hold on;
plot(pointsA);
title('Corners in A');
figure; imshow(imgB); hold on;
plot(pointsB);
title('Corners in B');
% Extract FREAK descriptors for the corners
[featuresA, pointsA] = extractFeatures(imgA, pointsA);
[featuresB, pointsB] = extractFeatures(imgB, pointsB);
indexPairs = matchFeatures(featuresA, featuresB);
pointsA = pointsA(indexPairs(:, 1), :);
pointsB = pointsB(indexPairs(:, 2), :);
figure; showMatchedFeatures(imgA, imgB, pointsA, pointsB);
legend('A', 'B');
[tform, pointsBm, pointsAm] = estimateGeometricTransform(...
pointsB, pointsA, 'affine');
imgBp = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
pointsBmp = transformPointsForward(tform, pointsBm.Location);
figure;
showMatchedFeatures(imgA, imgBp, pointsAm, pointsBmp);
legend('A', 'B');
% Extract scale and rotation part sub-matrix.
H = tform.T;
R = H(1:2,1:2);
% Compute theta from mean of two possible arctangents
theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
% Compute scale from mean of two stable mean calculations
scale = mean(R([1 4])/cos(theta));
% Translation remains the same:
translation = H(3, 1:2);
% Reconstitute new s-R-t transform:
HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)]; ...
translation], [0 0 1]'];
tformsRT = affine2d(HsRt);
imgBold = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
imgBsRt = imwarp(imgB, tformsRT, 'OutputView', imref2d(size(imgB)));
figure(2), clf;
imshowpair(imgBold,imgBsRt,'ColorChannels','red-cyan'), axis image;
title('Color composite of affine and s-R-t transform outputs');
% Reset the video source to the beginning of the file.
reset(hVideoSrc);
hVPlayer = vision.VideoPlayer; % Create video viewer
% Process all frames in the video
movMean = step(hVideoSrc);
imgB = movMean;
imgBp = imgB;
correctedMean = imgBp;
ii = 2;
Hcumulative = eye(3);
while ~isDone(hVideoSrc) && ii < 10
    % Read in new frame
    imgA = imgB; % z^-1
    imgAp = imgBp; % z^-1
    imgB = step(hVideoSrc);
    movMean = movMean + imgB;

    % Estimate transform from frame A to frame B, and fit as an s-R-t
    H = cvexEstStabilizationTform(imgA,imgB);
    HsRt = cvexTformToSRT(H);
    Hcumulative = HsRt * Hcumulative;
    imgBp = imwarp(imgB,affine2d(Hcumulative),'OutputView',imref2d(size(imgB)));

    % Display as color composite with last corrected frame
    step(hVPlayer, imfuse(imgAp,imgBp,'ColorChannels','red-cyan'));
    correctedMean = correctedMean + imgBp;

    ii = ii+1;
end
correctedMean = correctedMean/(ii-2);
movMean = movMean/(ii-2);
% Here you call the release method on the objects to close any open files
% and release memory.
release(hVideoSrc);
release(hVPlayer);
figure; imshowpair(movMean, correctedMean, 'montage');
title(['Raw input mean', repmat(' ',[1 50]), 'Corrected sequence mean']);
end
Code from here
http://www.mathworks.com/help/vision/examples/video-stabilization-using-point-feature-matching.html,
but MATLAB doesn't recognize the function detectFASTFeatures.
Can someone help me?
Maybe someone has another function that finds these points.
It seems to be a function in the Computer Vision System Toolbox that only comes with MATLAB R2014a:
http://www.mathworks.com/help/vision/ref/detectfastfeatures.html
If you have an older version of MATLAB with the Computer Vision System Toolbox, you can use the vision.CornerDetector object.
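As a rough sketch of that fallback (the exact 'Method' string is an assumption here; check the vision.CornerDetector documentation in your release), the detector returns an M-by-2 matrix of [x y] corner locations, which extractFeatures accepts in place of a cornerPoints object:
% Sketch: FAST-like corner detection on older Computer Vision System Toolbox versions.
cornerDetector = vision.CornerDetector( ...
    'Method', 'Local intensity comparison (Rosten & Drummond)');
imgA = im2single(imread('cameraman.tif'));   % any grayscale frame
pointsA = step(cornerDetector, imgA);        % M-by-2 [x y] corner locations
[featuresA, validPtsA] = extractFeatures(imgA, double(pointsA));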