Video Stabilization with MATLAB

I have a video in which, at some point, the image rotates ... I don't know the angle or in which direction it moves. I tried to use:
function [ output_args ] = aaa( filename )
hVideoSrc = vision.VideoFileReader(filename, 'ImageColorSpace', 'Intensity');
imgA = step(hVideoSrc); % Read first frame into imgA
imgB = step(hVideoSrc); % Read second frame into imgB
figure; imshowpair(imgA, imgB, 'montage');
title(['Frame A', repmat(' ',[1 70]), 'Frame B']);
figure; imshowpair(imgA,imgB,'ColorChannels','red-cyan');
title('Color composite (frame A = red, frame B = cyan)');
ptThresh = 0.1;
pointsA = detectFASTFeatures(imgA, 'MinContrast', ptThresh);
pointsB = detectFASTFeatures(imgB, 'MinContrast', ptThresh);
% Display corners found in images A and B.
figure; imshow(imgA); hold on;
plot(pointsA);
title('Corners in A');
figure; imshow(imgB); hold on;
plot(pointsB);
title('Corners in B');
% Extract FREAK descriptors for the corners
[featuresA, pointsA] = extractFeatures(imgA, pointsA);
[featuresB, pointsB] = extractFeatures(imgB, pointsB);
indexPairs = matchFeatures(featuresA, featuresB);
pointsA = pointsA(indexPairs(:, 1), :);
pointsB = pointsB(indexPairs(:, 2), :);
figure; showMatchedFeatures(imgA, imgB, pointsA, pointsB);
legend('A', 'B');
[tform, pointsBm, pointsAm] = estimateGeometricTransform(...
pointsB, pointsA, 'affine');
imgBp = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
pointsBmp = transformPointsForward(tform, pointsBm.Location);
figure;
showMatchedFeatures(imgA, imgBp, pointsAm, pointsBmp);
legend('A', 'B');
% Extract scale and rotation part sub-matrix.
H = tform.T;
R = H(1:2,1:2);
% Compute theta from mean of two possible arctangents
theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
% Compute scale from mean of two stable mean calculations
scale = mean(R([1 4])/cos(theta));
% Translation remains the same:
translation = H(3, 1:2);
% Reconstitute new s-R-t transform:
HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)]; ...
translation], [0 0 1]'];
tformsRT = affine2d(HsRt);
imgBold = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
imgBsRt = imwarp(imgB, tformsRT, 'OutputView', imref2d(size(imgB)));
figure(2), clf;
imshowpair(imgBold,imgBsRt,'ColorChannels','red-cyan'), axis image;
title('Color composite of affine and s-R-t transform outputs');
% Reset the video source to the beginning of the file.
reset(hVideoSrc);
hVPlayer = vision.VideoPlayer; % Create video viewer
% Process all frames in the video
movMean = step(hVideoSrc);
imgB = movMean;
imgBp = imgB;
correctedMean = imgBp;
ii = 2;
Hcumulative = eye(3);
while ~isDone(hVideoSrc) && ii < 10
    % Read in new frame
    imgA = imgB; % z^-1
    imgAp = imgBp; % z^-1
    imgB = step(hVideoSrc);
    movMean = movMean + imgB;
    % Estimate transform from frame A to frame B, and fit as an s-R-t
    H = cvexEstStabilizationTform(imgA,imgB);
    HsRt = cvexTformToSRT(H);
    Hcumulative = HsRt * Hcumulative;
    imgBp = imwarp(imgB,affine2d(Hcumulative),'OutputView',imref2d(size(imgB)));
    % Display as color composite with last corrected frame
    step(hVPlayer, imfuse(imgAp,imgBp,'ColorChannels','red-cyan'));
    correctedMean = correctedMean + imgBp;
    ii = ii+1;
end
correctedMean = correctedMean/(ii-2);
movMean = movMean/(ii-2);
% Here you call the release method on the objects to close any open files
% and release memory.
release(hVideoSrc);
release(hVPlayer);
figure; imshowpair(movMean, correctedMean, 'montage');
title(['Raw input mean', repmat(' ',[1 50]), 'Corrected sequence mean']);
end
The code is from here:
http://www.mathworks.com/help/vision/examples/video-stabilization-using-point-feature-matching.html
but MATLAB doesn't recognize the function detectFASTFeatures.
Can someone help me?
Maybe someone has another function that finds these points.

It seems to be a function in the Computer Vision System Toolbox that only comes with newer MATLAB releases (it is documented for R2014a):
http://www.mathworks.com/help/vision/ref/detectfastfeatures.html

If you have an older version of MATLAB with the Computer Vision System Toolbox, you can use the vision.CornerDetector object instead.
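A minimal sketch of that substitution, assuming a release where vision.CornerDetector is available (the exact 'Method' string may differ by version). The detector returns an M-by-2 matrix of [x y] corner locations, which extractFeatures also accepts in place of a cornerPoints object:
% Hypothetical drop-in replacement for the detectFASTFeatures calls above
cornerDetector = vision.CornerDetector( ...
    'Method', 'Local intensity comparison (Rosten & Drummond)'); % FAST-style detector
locationsA = double(step(cornerDetector, imgA)); % M-by-2 [x y] corner locations
locationsB = double(step(cornerDetector, imgB));
% extractFeatures accepts plain [x y] locations as well as point objects
[featuresA, pointsA] = extractFeatures(imgA, locationsA);
[featuresB, pointsB] = extractFeatures(imgB, locationsB);
indexPairs = matchFeatures(featuresA, featuresB);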

Related

gif of sphere with changing radius in Matlab

I wish to generate a .gif file of a sphere that grows in size (radius) over time, based on a specified function relating radius and time. I'm having some trouble with how to formulate the animation.
Here is what I have so far:
%% Parameters
dt = 0.05;
time = 0:dt:1;
radius = 1
%% generate sphere
[X, Y, Z] = sphere(25);
X=X*radius;
Y=Y*radius;
Z=Z*radius;
mySphere = surf(X,Y,Z, 'FaceLighting','gouraud');
axis equal
shading interp
mySphere.FaceAlpha = 0.3
view([61 15])
colormap bone
hold on
%% generate gif
filename = 'Sizechange.gif';
for n = 1:20
    radius = 1 + time(n)
    im = frame2im(getframe(1));
    [imind,cm] = rgb2ind(im,256);
    if n == 1
        imwrite(imind,cm,filename,'gif', 'Loopcount',inf,'DelayTime',dt);
    else
        imwrite(imind,cm,filename,'gif','WriteMode','append','DelayTime',dt);
    end
end
Here I am trying to get it to go from radius 1 to radius 2 in steps of 0.05.
When I run this however, the gif stays still at 1 and there is no animation.
Any help would be greatly appreciated.
As @Cris Luengo said, you should redraw your sphere for each new value of the radius.
%% Parameters
dt = 0.05;
time = 0:dt:1;
radius = 1 ;
%% generate sphere
[X, Y, Z] = sphere(25);
X=X*radius;
Y=Y*radius;
Z=Z*radius;
%figure;
%mySphere = surf(X,Y,Z, 'FaceLighting','gouraud');
% axis equal
% shading interp
% mySphere.FaceAlpha = 0.3;
% view([61 15])
% colormap bone
% hold on
%% generate gif
filename = 'Sizechange.gif';
figure;
for n = 1:20
    radius = 1 + time(n);
    %====================================================
    % Scale the base sphere coordinates by the current radius instead of
    % reassigning X, Y and Z, otherwise the scaling compounds every frame.
    mySphere = surf(X*radius, Y*radius, Z*radius, 'FaceLighting','gouraud');
    axis equal
    axis([-2 2 -2 2 -2 2]) % fix the limits so the growth is visible in the gif
    shading interp
    mySphere.FaceAlpha = 0.3;
    view([61 15])
    colormap bone
    %====================================================
    im = frame2im(getframe(1));
    [imind,cm] = rgb2ind(im,256);
    if n == 1
        imwrite(imind,cm,filename,'gif', 'Loopcount',inf,'DelayTime',dt);
    else
        imwrite(imind,cm,filename,'gif','WriteMode','append','DelayTime',dt);
    end
end

Plot the MajorAxisLength and the MinorAxisLength - Matlab

I found this script below in https://blogs.mathworks.com/steve/2010/07/30/visualizing-regionprops-ellipse-measurements/
I also wish to plot the MajorAxisLength and the MinorAxisLength. How can I do it?
Script:
url = 'https://blogs.mathworks.com/images/steve/2010/rice_binary.png';
bw = imread(url);
imshow(bw)
s = regionprops(bw, 'Orientation', 'MajorAxisLength', ...
'MinorAxisLength', 'Eccentricity', 'Centroid');
imshow(bw)
hold on
phi = linspace(0,2*pi,50);
cosphi = cos(phi);
sinphi = sin(phi);
for k = 1:length(s)
    xbar = s(k).Centroid(1);
    ybar = s(k).Centroid(2);
    a = s(k).MajorAxisLength/2;
    b = s(k).MinorAxisLength/2;
    theta = pi*s(k).Orientation/180;
    R = [ cos(theta) sin(theta)
         -sin(theta) cos(theta)];
    xy = [a*cosphi; b*sinphi];
    xy = R*xy;
    x = xy(1,:) + xbar;
    y = xy(2,:) + ybar;
    plot(x,y,'r','LineWidth',2);
end
hold off
You have a lot of unnecessary code in what you posted, and I don't think you actually attempted to adjust it for what you say you want to plot. Given what you said you are trying to do in the comments, I think this is the code you want in order to plot the major axis length against the minor axis length:
url = 'https://blogs.mathworks.com/images/steve/2010/rice_binary.png';
bw = imread(url);
s = regionprops(bw, 'Orientation', 'MajorAxisLength', ...
'MinorAxisLength', 'Eccentricity', 'Centroid'); %Get region props
Major=zeros(size(s));
Minor=zeros(size(s));
for k = 1:length(s)
    Major(k) = s(k).MajorAxisLength; %get your y values
    Minor(k) = s(k).MinorAxisLength; %get your x values
end
figure %create a new figure because you don't want to plot on top of the image
plot(Minor, Major, 'o') %plot
xlabel('Minor Axis Length')
ylabel('Major Axis Length')

Video Stabilization Using Point Feature Matching WITHOUT LOSING RGB COLORS on frames on MATLAB

I'd like to stabilize a 13-minute video captured by a quadcopter over a traffic crossroads without losing its three color channels (RGB). MATLAB's own example produces a grayscale video, which is unwanted for the main and future objective, vehicle tracking. New ideas are appreciated.
Below you can find my own code (it works, but converts the video to grayscale), edited from MATLAB's own script on the following page:
MATLAB's related webpage: Video Stabilization Using Point Feature Matching
clc; clear all; close all;
filename = 'Quad_video_erst';
hVideoSrc = vision.VideoFileReader('Quad_video_erst.mp4', 'ImageColorSpace', 'Intensity');
% Create and open video file
myVideo = VideoWriter('vivi.avi');
open(myVideo);
hVPlayer = vision.VideoPlayer;
%% Step 1: Read Frames from a Movie File
for i=1:10 % testing for a short run
    imgA = step(hVideoSrc); % Read first frame into imgA
    imgB = step(hVideoSrc); % Read second frame into imgB
    %% Step 2: SURF DETECTION
    pointsA = surf_function_CAN(imgA);
    pointsB = surf_function_CAN(imgB);
    %% Step 3: Select Correspondences Between Points
    % Extract FREAK descriptors for the corners
    [featuresA, pointsA] = extractFeatures(imgA, pointsA);
    [featuresB, pointsB] = extractFeatures(imgB, pointsB);
    indexPairs = matchFeatures(featuresA, featuresB);
    pointsA = pointsA(indexPairs(:, 1), :);
    pointsB = pointsB(indexPairs(:, 2), :);
    %% Step 4: Estimating Transform from Noisy Correspondences
    [tform, pointsBm, pointsAm] = estimateGeometricTransform(...
        pointsB, pointsA, 'affine');
    imgBp = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
    pointsBmp = transformPointsForward(tform, pointsBm.Location);
    %% Step 5: Transform Approximation and Smoothing
    % Extract scale and rotation part sub-matrix.
    H = tform.T;
    R = H(1:2,1:2);
    % Compute theta from mean of two possible arctangents
    theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
    % Compute scale from mean of two stable mean calculations
    scale = mean(R([1 4])/cos(theta));
    % Translation remains the same:
    translation = H(3, 1:2);
    % Reconstitute new s-R-t transform:
    HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)];...
        translation], [0 0 1]'];
    tformsRT = affine2d(HsRt);
    imgBold = imwarp(imgB, tform, 'OutputView', imref2d(size(imgB)));
    imgBsRt = imwarp(imgB, tformsRT, 'OutputView', imref2d(size(imgB)));
    %% Write the Video
    writeVideo(myVideo,imfuse(imgBold,imgBsRt,'ColorChannels','red-cyan'));
end
And the function:
function [ surf_points ] = surf_function_CAN(img)
surfpoints_raw= detectSURFFeatures(img);
[featuresOriginal, validPtsOriginal] = extractFeatures(img, surfpoints_raw);
strongestPoints = validPtsOriginal.selectStrongest(1600);
array=strongestPoints.Location;
% New - Get X and Y coordinates
X = array(:,1);
Y = array(:,2);
% New - Determine a mask to grab the points we want
ind = (((X>156-9-70 & X<156+9+70) & (Y>406-9-70 & Y<406+9+70)) | ...
((X>684-11-70 & X<684+11+70) & (Y>274-11-70 & Y<274+11+70)) | ...
((X>1066-15-70 & X<1066+15+70) & (Y>67-15-70 & Y<67+15+70)) | ...
((X>1559-15-70 & X<1559+15+70) & (Y>867-15-70 & Y<867+15+70)) | ...
((X>1082-18-70 & X<1082+18+70) & (Y>740-18-100 & Y<740+18+100))) ;
% New - Create new SURFPoints structure that contains all information
% from the points we need
array_filtered =strongestPoints(ind);
surf_points= array_filtered;
end
Firstly, if you look through their example, you should use the part where they process the whole video in a loop, not the part that shows how to stabilize just two frames, as the two are not exactly compatible. Other than that, the only thing you need to do is perform the analysis on a grayscale image but apply the transformation to the color image:
%% Load Video and Open Save File
filename = 'shaky_car.avi';
hVideoSrc = vision.VideoFileReader(filename);
myVideo = VideoWriter('vivi.avi');
open(myVideo);
% Get next Image
colorImg = step(hVideoSrc);
% Try to Convert to Grayscale
try
    imgB = rgb2gray(colorImg);
    RGB = true;
catch % Image is not RGB
    imgB = colorImg;
    RGB = false;
end
Hcumulative = eye(3);
ptThresh = 0.1;
% Loop Through Video
while ~isDone(hVideoSrc)
    imgA = imgB;
    % Get Next Image
    colorImg = step(hVideoSrc);
    % Convert to Grayscale
    if RGB
        imgB = rgb2gray(colorImg);
    else
        imgB = colorImg;
    end
    %% Calculate Transformation
    % Generate Prospective Points
    pointsA = detectFASTFeatures(imgA, 'MinContrast', ptThresh);
    pointsB = detectFASTFeatures(imgB, 'MinContrast', ptThresh);
    % Extract Features for the Corners
    [featuresA, pointsA] = extractFeatures(imgA, pointsA);
    [featuresB, pointsB] = extractFeatures(imgB, pointsB);
    indexPairs = matchFeatures(featuresA, featuresB);
    pointsA = pointsA(indexPairs(:, 1), :);
    pointsB = pointsB(indexPairs(:, 2), :);
    [tform] = estimateGeometricTransform(pointsB, pointsA, 'affine');
    % Extract Rotation & Translations
    H = tform.T;
    R = H(1:2,1:2);
    theta = mean([atan2(R(2),R(1)) atan2(-R(3),R(4))]);
    scale = mean(R([1 4])/cos(theta));
    translation = H(3, 1:2);
    % Reconstitute Transform
    HsRt = [[scale*[cos(theta) -sin(theta); sin(theta) cos(theta)]; ...
        translation], [0 0 1]'];
    Hcumulative = HsRt*Hcumulative;
    % Perform Transformation on Color Image
    img = imwarp(colorImg, affine2d(Hcumulative),'OutputView',imref2d(size(imgB)));
    % Save Transformed Color Image to Video File
    writeVideo(myVideo,img)
end
close(myVideo)

To refresh imshow in Matlab?

I want to convert this answer's code to use imshow.
It creates a movie with movie2avi by:
%# preallocate
nFrames = 20;
mov(1:nFrames) = struct('cdata',[], 'colormap',[]);
%# create movie
for k=1:nFrames
    surf(sin(2*pi*k/20)*Z, Z)
    mov(k) = getframe(gca);
end
close(gcf)
movie2avi(mov, 'myPeaks1.avi', 'compression','None', 'fps',10);
My pseudocode
%# preallocate
nFrames = 20;
mov(1:nFrames) = struct('cdata',[], 'colormap',[]);
%# create movie
for k=1:nFrames
    imshow(signal(:,k,:),[1 1 1]) % or simply imshow(signal(:,k,:))
    drawnow
    mov(k) = getframe(gca);
end
close(gcf)
movie2avi(mov, 'myPeaks1.avi', 'compression','None', 'fps',10);
However, this creates the animation on the screen, but it saves only an AVI file whose size is 0 kB. The file myPeaks1.avi is stored properly after running the surf command, but not with imshow.
I am not sure about the command drawnow.
Actual case code
%% HSV 3rd version
% https://stackoverflow.com/a/29801499/54964
rgbImage = imread('http://i.stack.imgur.com/cFOSp.png');
% Extract blue using HSV
hsvImage=rgb2hsv(rgbImage);
I=rgbImage;
R=I(:,:,1);
G=I(:,:,2);
B=I(:,:,3);
R((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
G((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
B((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
I2= cat(3, R, G, B);
% Binarize image, getting all the pixels that are "blue"
bw=im2bw(rgb2gray(I2),0.9999);
% The label most repeated will be the signal.
% So we find it and separate the background from the signal using label.
% Label each "blob"
lbl=bwlabel(~bw);
% Find the blob with the highes amount of data. That will be your signal.
r=histc(lbl(:),1:max(lbl(:)));
[~,idxmax]=max(r);
% Profit!
signal=rgbImage;
signal(repmat((lbl~=idxmax),[1 1 3]))=255;
background=rgbImage;
background(repmat((lbl==idxmax),[1 1 3]))=255;
%% Error Testing
comp_image = rgb2gray(abs(double(rgbImage) - double(signal)));
if ( sum(sum(comp_image(32:438, 96:517))) > 0 )
    break;
end
%% Video
% 5001 units so 13.90 (= 4.45 + 9.45) seconds.
% In RGB, original size 480x592.
% Resize to 480x491
signal = signal(:, 42:532, :);
% Show 7 seconds (298 units) at a time.
% imshow(signal(:, 1:298, :));
%% Video VideoWriter
% movie2avi deprecated in Matlab
% https://stackoverflow.com/a/11054155/54964
% https://stackoverflow.com/a/29952648/54964
%# figure
hFig = figure('Menubar','none', 'Color','white');
Z = peaks;
h = imshow(Z, [], 'InitialMagnification',1000, 'Border','tight');
colormap parula; axis tight manual off;
set(gca, 'nextplot','replacechildren', 'Visible','off');
% set(gcf,'Renderer','zbuffer'); % on some Windows
%# preallocate
N = 40; % 491;
vidObj = VideoWriter('myPeaks3.avi');
vidObj.Quality = 100;
vidObj.FrameRate = 10;
open(vidObj);
%# create movie
for k=1:N
    set(h, 'CData', signal(:,k:k+40,:))
    % drawnow
    writeVideo(vidObj, getframe(gca));
end
%# save as AVI file
close(vidObj);
How can I substitute the drawing function with imshow or something corresponding?
How can I store the animation correctly?
Here is some code to try:
%// plot
hFig = figure('Menubar','none', 'Color','white');
Z = peaks;
%h = surf(Z);
h = imshow(Z, [], 'InitialMagnification',1000, 'Border','tight');
colormap jet
axis tight manual off
%// preallocate movie structure
N = 40;
mov = struct('cdata',cell(1,N), 'colormap',cell(1,N));
%// animation
for k=1:N
    %set(h, 'ZData',sin(2*pi*k/N)*Z)
    set(h, 'CData',sin(2*pi*k/N)*Z)
    drawnow
    mov(k) = getframe(hFig);
end
close(hFig)
%// save AVI movie, and open video file
movie2avi(mov, 'file.avi', 'Compression','none', 'Fps',10);
winopen('file.avi')
Result (not really the video, just a GIF animation):
Depending on the codecs installed on your machine, you can apply video compression, e.g.:
movie2avi(mov, 'file.avi', 'Compression','XVID', 'Quality',100, 'Fps',10);
(assuming you have the Xvid encoder installed).
EDIT:
Here is my implementation of the code you posted:
%%// extract blue ECG signal
%// retrieve picture: http://stackoverflow.com/q/29800089
imgRGB = imread('http://i.stack.imgur.com/cFOSp.png');
%// detect axis lines and labels
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%%// create sliding-window video
len = 40;
signal = imgRGB2(:,42:532,:);
figure('Menubar','none', 'NumberTitle','off', 'Color','k')
hImg = imshow(signal(:,1:1+len,:), ...
'InitialMagnification',100, 'Border','tight');
vid = VideoWriter('signal.avi');
vid.Quality = 100;
vid.FrameRate = 60;
open(vid);
N = size(signal,2);
for k=1:N-len
    set(hImg, 'CData',signal(:,k:k+len,:))
    writeVideo(vid, getframe());
end
close(vid);
The result looks like this:

hough transform for lines

I'm trying to get a Hough transform to work in MATLAB, but I'm having problems. I have a really bad way of detecting peaks that needs to be fixed, but before that I need to be able to reverse the Hough transform to create the lines again properly. This is the type of stuff I'm getting right now:
It looks like it's rotated by 90 degrees, but I'm not sure why. I'm not sure if my Hough space is wrong, or if it's the way I de-Hough and draw the lines. Also, could someone help me improve my peak detection?
The images used in the code are here.
Thank you.
%% load a sample image; convert to grayscale; convert to binary
%create 'x' image (works well)
a = eye(255);
b = flipud(eye(255));
x = a + b;
x(128,128) = 1;
%image = rgb2gray(imread('up.png')) < 255;
%image = rgb2gray(imread('hexagon.png')) < 255;
%image = rgb2gray(imread('traingle.png')) < 255;
%%% these work
%image = x;
%image = a;
image = b;
%% set up variables for hough transform
theta_sample_frequency = 0.01;
[x, y] = size(image);
rho_limit = norm([x y]);
rho = (-rho_limit:1:rho_limit);
theta = (0:theta_sample_frequency:pi);
num_thetas = numel(theta);
num_rhos = numel(rho);
hough_space = zeros(num_rhos, num_thetas);
%% perform hough transform
for xi = 1:x
    for yj = 1:y
        if image(xi, yj) == 1
            for theta_index = 1:num_thetas
                th = theta(theta_index);
                r = xi * cos(th) + yj * sin(th);
                rho_index = round(r + num_rhos/2);
                hough_space(rho_index, theta_index) = ...
                    hough_space(rho_index, theta_index) + 1;
            end
        end
    end
end
%% show hough transform
subplot(1,2,1);
imagesc(theta, rho, hough_space);
title('Hough Transform');
xlabel('Theta (radians)');
ylabel('Rho (pixels)');
colormap('gray');
%% detect peaks in hough transform
r = [];
c = [];
[max_in_col, row_number] = max(hough_space);
[rows, cols] = size(image);
difference = 25;
thresh = max(max(hough_space)) - difference;
for i = 1:size(max_in_col, 2)
    if max_in_col(i) > thresh
        c(end + 1) = i;
        r(end + 1) = row_number(i);
    end
end
%% plot all the detected peaks on hough transform image
hold on;
plot(theta(c), rho(r),'rx');
hold off;
%% plot the detected line superimposed on the original image
subplot(1,2,2)
imagesc(image);
colormap(gray);
hold on;
for i = 1:size(c,2)
    th = theta(c(i));
    rh = rho(r(i));
    m = -(cos(th)/sin(th));
    b = rh/sin(th);
    x = 1:cols;
    plot(x, m*x+b);
    hold on;
end
Cross Posted:
https://dsp.stackexchange.com/questions/1958/help-understanding-hough-transform
If your regenerated image looks rotated by 90 degrees or otherwise flipped, it's likely because the plotting isn't happening as you expect. You can try axis ij; to move the origin of the plot and/or you can reverse your plot command:
plot(m*x+b, x);
As for peak detection, you might want to look at imregionalmax.
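A minimal sketch of what that could look like on your hough_space matrix, assuming the Image Processing Toolbox; the half-of-maximum threshold is just an illustrative choice, not from your code:
% Keep only accumulator cells that are local maxima and reasonably strong.
threshold = 0.5 * max(hough_space(:));               % assumed cut-off for illustration
peakMask = imregionalmax(hough_space) & (hough_space > threshold);
[r, c] = find(peakMask);                             % r indexes rho, c indexes theta
r = r.'; c = c.';                                    % row vectors, matching how r and c are used above
This should replace the column-wise max loop while feeding the same r and c index vectors into your existing plotting code.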