Motion vectors calculation - MATLAB

I am working on the following code:
filename = 'C:\li_walk.avi';
hVidReader = vision.VideoFileReader(filename, 'ImageColorSpace', 'RGB','VideoOutputDataType', 'single');
hOpticalFlow = vision.OpticalFlow('OutputValue', 'Horizontal and vertical components in complex form', 'ReferenceFrameDelay', 3);
hMean1 = vision.Mean;
hMean2 = vision.Mean('RunningMean', true);
hMedianFilt = vision.MedianFilter;
hclose = vision.MorphologicalClose('Neighborhood', strel('line',5,45));
hblob = vision.BlobAnalysis('CentroidOutputPort', false, 'AreaOutputPort', true, 'BoundingBoxOutputPort', true, 'OutputDataType', 'double','MinimumBlobArea', 250, 'MaximumBlobArea', 3600, 'MaximumCount', 80);
herode = vision.MorphologicalErode('Neighborhood', strel('square',2));
hshapeins1 = vision.ShapeInserter('BorderColor', 'Custom', 'CustomBorderColor', [0 1 0]);
hshapeins2 = vision.ShapeInserter( 'Shape','Lines', 'BorderColor', 'Custom','CustomBorderColor', [255 255 0]);
htextins = vision.TextInserter('Text', '%4d', 'Location', [1 1],'Color', [1 1 1], 'FontSize', 12);
sz = get(0,'ScreenSize');
pos = [20 sz(4)-300 200 200];
hVideo1 = vision.VideoPlayer('Name','Original Video','Position',pos);
pos(1) = pos(1)+220; % move the next viewer to the right
hVideo2 = vision.VideoPlayer('Name','Motion Vector','Position',pos);
pos(1) = pos(1)+220;
hVideo3 = vision.VideoPlayer('Name','Thresholded Video','Position',pos);
pos(1) = pos(1)+220;
hVideo4 = vision.VideoPlayer('Name','Results','Position',pos);
% Initialize variables used in plotting motion vectors.
lineRow = 22;
firstTime = true;
motionVecGain = 20;
borderOffset = 5;
decimFactorRow = 5;
decimFactorCol = 5;
while ~isDone(hVidReader) % Stop when end of file is reached
frame = step(hVidReader); % Read input video frame
grayFrame = rgb2gray(frame);
ofVectors = step(hOpticalFlow, grayFrame); % Estimate optical flow
% The optical flow vectors are stored as complex numbers. Compute their
% magnitude squared which will later be used for thresholding.
y1 = ofVectors .* conj(ofVectors);
% Compute the velocity threshold from the matrix of complex velocities.
vel_th = 0.5 * step(hMean2, step(hMean1, y1));
% Threshold the image and then filter it to remove speckle noise.
segmentedObjects = step(hMedianFilt, y1 >= vel_th);
% Thin-out the parts of the road and fill holes in the blobs.
segmentedObjects = step(hclose, step(herode, segmentedObjects));
% Estimate the area and bounding box of the blobs.
[area, bbox] = step(hblob, segmentedObjects);
% Select boxes inside ROI (below white line).
Idx = bbox(:,1) > lineRow;
% Based on blob sizes, filter out objects which can not be cars.
% When the ratio between the area of the blob and the area of the
% bounding box is above 0.4 (40%), classify it as a car.
ratio = zeros(length(Idx), 1);
ratio(Idx) = single(area(Idx,1))./single(bbox(Idx,3).*bbox(Idx,4));
ratiob = ratio > 0.4;
count = int32(sum(ratiob)); % Number of cars
bbox(~ratiob, :) = int32(-1);
% Draw bounding boxes around the tracked cars.
y2 = step(hshapeins1, frame, bbox);
% Display the number of cars tracked and a white line showing the ROI.
y2(22:23,:,:) = 1; % The white line.
y2(1:15,1:30,:) = 0; % Background for displaying count
result = step(htextins, y2, count);
% Generate coordinates for plotting motion vectors.
if firstTime
[R C] = size(ofVectors); % Height and width in pixels
RV = borderOffset:decimFactorRow:(R-borderOffset);
CV = borderOffset:decimFactorCol:(C-borderOffset);
[Y X] = meshgrid(CV,RV);
firstTime = false;
sumu=0;
sumv=0;
end
grayFrame = rgb2gray(frame);
[ra ca na] = size(grayFrame);
ofVectors = step(hOpticalFlow, grayFrame); % Estimate optical flow
ua = real(ofVectors);
ia = ofVectors - ua;
va = ia/complex(0,1);
sumu=ua+sumu;
sumv=va+sumv;
[xa ya]=meshgrid(1:1:ca,ra:-1:1);
% Calculate and draw the motion vectors.
tmp = ofVectors(RV,CV) .* motionVecGain;
lines = [Y(:), X(:), Y(:) + real(tmp(:)), X(:) + imag(tmp(:))];
motionVectors = step(hshapeins2, frame, lines);
% Display the results
step(hVideo1, frame); % Original video
step(hVideo2, motionVectors); % Video with motion vectors
step(hVideo3, segmentedObjects); % Thresholded video
step(hVideo4, result); % Video with bounding boxes
quiver(xa,ya,sumu,sumv)
end
release(hVidReader);
Please help me understand the following statements in the above code:
ua = real(ofVectors);
ia = ofVectors - ua;
va = ia/complex(0,1);
These are the horizontal (ua) and vertical (va) components of the motion vectors. What will the real part of ofVectors be? Please help me understand this code segment.

When the object hOpticalFlow is constructed in the third line of the code, the OutputValue property is set to 'Horizontal and vertical components in complex form'. This has the effect that when you apply the step command to hOpticalFlow and the image (frame), you get not just the magnitudes of the flow vectors, but complex numbers that represent these planar flow vectors. It is just a compact way for the command to return the information. Once you have the complex numbers in ofVectors, which is the output of the step command, the command
ua = real(ofVectors);
stores the horizontal component of each vector in ua. After the command
ia = ofVectors - ua;
is executed, ia contains the imaginary parts (i.e., the vertical components of the flow vectors), because the real parts in ua have been subtracted from the complex numbers in ofVectors. However, you still need to get rid of the imaginary unit in ia, so you divide by 0+1i. This is what the command
va = ia/complex(0,1);
does.
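To see concretely what the real and imaginary parts hold, here is a minimal sketch with made-up values (note that imag(ofVectors) is an equivalent, more direct way to obtain va, and abs(ofVectors) would give the vector magnitudes):
% Minimal sketch: decomposing complex flow vectors into components.
ofVectors = [1+2i, -3+0.5i];          % two made-up flow vectors
ua = real(ofVectors);                 % horizontal components: [1, -3]
va = (ofVectors - ua)/complex(0,1);   % vertical components: [2, 0.5]
isequal(va, imag(ofVectors))          % should return true (logical 1)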

Related

Chessboard distance in the image matrix

Given an image matrix, how can I get the locations of the pixels whose chessboard distance from pixel A is less than D? I need to perform this for all pixels.
Using the MATLAB function bwdist I couldn't get the desired result. What's the solution?
[D,idx] = bwdist(Img,'chessboard');
Given an image, a pixel, and a maximum distance:
% Test image
Image = zeros(20,30);
% Maximum chessboard distance from image
maxDist = 7;
% The pixel from which to measure distance
pix = [4,19];
To find the pixels whose chessboard distance from pix is less than maxDist and which lie within the image bounds:
Option 1: Using bwdist
% Create a binary image with all pixels zero except 'pix'
bw = zeros(size(Image));
bw(pix(1), pix(2)) = 1;
% Get the chessboard distance transform
[D,idx] = bwdist(bw,'chessboard');
% Get the linear index of 'pix'
pixInd = sub2ind(size(bw), pix(1), pix(2));
% Find linear indices of pixels whose chessboard distance from the pixel is
% less than 'maxDist'
pointsInd = find(idx == pixInd & D < maxDist);
% Remove 'pix'
pointsInd(pointsInd == pixInd) = [];
% Get the pairs of (x,y) of the pixels
[pointsX, pointsY] = ind2sub(size(bw), pointsInd);
Option 2: Using meshgrid
% Get the range of x and y indices whose chessboard distance from the pixel is
% less than 'maxDist' and within the image bounds
xRange = max((pix(1)-(maxDist-1)),1):min((pix(1)+(maxDist-1)),size(Image,1));
yRange = max((pix(2)-(maxDist-1)),1):min((pix(2)+(maxDist-1)),size(Image,2));
% Create a meshgrid to get the pairs of (x,y) of the pixels
[pointsX, pointsY] = meshgrid(xRange, yRange);
pointsX = pointsX(:);
pointsY = pointsY(:);
% Remove 'pix'
pixIndToRemove = (pointsX == pix(1) & pointsY == pix(2));
pointsX(pixIndToRemove) = [];
pointsY(pixIndToRemove) = [];
Displaying result:
% Get linear indices of pixels
pointsInd = sub2ind(size(Image), pointsX, pointsY);
% To display the result, create a binary image with all found pixels
% colored white
bwPoints = zeros(size(Image));
bwPoints(pointsInd) = 1;
% Show points
imshow(bwPoints, 'InitialMagnification', 2000)
% Show pixel grid lines
hold on
[rows, cols] = size(bwPoints);
for row = 0.5 : 1 : (rows + 0.5)
line([0.5, cols+0.5], [row, row], 'Color', 'r', 'LineWidth', 0.5);
end
for col = 0.5 : 1 : (cols + 0.5)
line([col, col], [0.5, rows+0.5], 'Color', 'r', 'LineWidth', 0.5);
end
Efficiency and running in a loop over all image pixels:
Option 2 is much faster than Option 1. I wrote Option 1 first because bwdist was mentioned in the question. Running Option 2 in a loop can be improved by calculating the offsets once and then shifting them to the location of each pixel:
% Get the range of x and y offsets whose chessboard distance from pixel
% (0,0) is less than 'maxDist'
xRange = (-(maxDist-1)):(maxDist-1);
yRange = (-(maxDist-1)):(maxDist-1);
% Create a meshgrid to get the pairs of (x,y) of the pixels
[pointsX, pointsY] = meshgrid(xRange, yRange);
pointsX = pointsX(:);
pointsY = pointsY(:);
% Remove pixel (0,0)
pixIndToRemove = (pointsX == 0 & pointsY == 0);
pointsX(pixIndToRemove) = [];
pointsY(pixIndToRemove) = [];
for x=1:size(Image, 1)
for y=1:size(Image, 2)
% Get a shifted copy of 'pointsX' and 'pointsY' that is centered
% around (x, y)
pointsX1 = pointsX + x;
pointsY1 = pointsY + y;
% Remove the pixels that are out of the image bounds
inBounds =...
pointsX1 >= 1 & pointsX1 <= size(Image, 1) &...
pointsY1 >= 1 & pointsY1 <= size(Image, 2);
pointsX1 = pointsX1(inBounds);
pointsY1 = pointsY1(inBounds);
% Do stuff with 'pointsX1' and 'pointsY1'
% ...
end
end
"The aim is to access the location of the pixels whose chessboard distances from pixel A is less than D. The process should be
performed for all pixels..."
Since D creates a square selection area, just use simple maths.
For example: if D is 3, then from the [x,y] position of pixel A...
//# we subtract 1 from D since you want less than D (not equal / higher)
Start-X = pixelA.x - (D-1); //from the left
End-X   = pixelA.x + (D-1); //to the right
Start-Y = pixelA.y - (D-1); //from the top
End-Y   = pixelA.y + (D-1); //to the bottom
That will give you a square perimeter that represents your required selection area.
Look at this example image below:
Each square is a pixel. If the "crown" icon represents pixel A and D is 3 (so "less than D" means a maximum offset of 2 pixels), can you see how the pseudo-code above applies?
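In MATLAB, the same bounds (clamped to the image, as in Option 2 earlier) might look like the sketch below, where pixA, D, rows and cols are placeholder names:
% Sketch: square neighbourhood of pixel A with chessboard distance < D,
% clamped to an image of size [rows, cols].
xRange = max(pixA(1)-(D-1), 1) : min(pixA(1)+(D-1), rows);
yRange = max(pixA(2)-(D-1), 1) : min(pixA(2)+(D-1), cols);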

How to speed up runtime of code that searches for data between several arrays within a 'moving' sphere

I am trying to average my CFD data (which is in the form of a scalar N x M x P array; N corresponds to y, M to x, and P to z) over a subset of time steps. I've tried to simplify the description of my desired averaging process below.
1. Rotate the grid at each time step by a specified angle. (The flow has a coherent structure that rotates and changes shape/size at each time step, and I want to overlap the structures to find a time-averaged form that takes the change of shape/size over time into account.)
2. Draw a sphere centered on the original unrotated grid.
3. Identify the grid points from all the rotated grids that lie within the sphere.
4. Identify the indices of those grid points in each rotated grid.
5. Use the indices to find the scalar data at the rotated grid points within the sphere.
6. Take an average of the values within the sphere.
7. Put that averaged value at the corresponding location on the unrotated grid.
I have code that seems to do what I want correctly, but it takes far too long to finish the calculations. I would like to make it run faster, and I am open to changing the code if necessary. Below is a version of my code that works with a smaller version of the data.
x = -5:5; % x - position data
y = -2:.5:5; % y - position data
z = -5:5; % z - position data
% my grid is much bigger actually
[X,Y,Z] = meshgrid(x,y,z); % mesh for plotting data
dX = diff(x)'; dX(end+1) = dX(end); % x grid intervals
dY = diff(y)'; dY(end+1) = dY(end); % y grid intervals
dZ = diff(z)'; dZ(end+1) = dZ(end); % z grid intervals
TestPoints = combvec(x,y,z)'; % you need the Matlab Neural Network Toolbox to run this
dXYZ = combvec(dX',dY',dZ')';
% TestPoints is the unrotated grid
M = length(x); % size of grid x - direction
N = length(y); % size of grid y - direction
P = length(z); % size of grid z - direction
D = randi([-10,10],N,M,P,3); % placeholder for data for 3 time steps (I have more than 3, this is a subset)
D2{3,M*N*P} = [];
PosAll{3,2} = [];
[xSph,ySph,zSph] = sphere(50);
c = 0.01; % 1 cm
nu = 8e-6; % 8 cSt
s = 3*c; % span for Aspect Ratio 3
r_g = s/sqrt(3);
U_g = 110*nu/c; % velocity for Reynolds number 110
Omega = U_g/r_g; % angular velocity
T = (2*pi)/Omega; % period
dt = 40*T/1920; % time interval
DeltaRotAngle = ((2*pi)/T)*dt; % angle interval
timesteps = 121:123; % time steps 121, 122, and 123
for ti=timesteps
tj = find(ti==timesteps);
Theta = ti*DeltaRotAngle;
Rotate = [cos(Theta),0,sin(Theta);...
0,1,0;...
-sin(Theta),0,cos(Theta)];
PosAll{tj,1} = (Rotate*TestPoints')';
end
for i=1:M*N*P
aa = TestPoints(i,1);
bb = TestPoints(i,2);
cc = TestPoints(i,3);
rs = 0.8*sqrt(dXYZ(i,1)^2 + dXYZ(i,2)^2 + dXYZ(i,3)^2);
handles.H = figure;
hs = surf(xSph*rs+aa,ySph*rs+bb,zSph*rs+cc);
[Fs,Vs,~] = surf2patch(hs,'triangle');
close(handles.H)
for ti=timesteps
tj = find(timesteps==ti);
f = inpolyhedron(Fs,Vs,PosAll{tj,1},'FlipNormals',false);
TestPointsR_ti = PosAll{tj,1};
PointsInSphere = TestPointsR_ti(f,:);
p1 = [aa,bb,cc];
p2 = [PointsInSphere(:,1),...
PointsInSphere(:,2),...
PointsInSphere(:,3)];
w = 1./sqrt(sum(...
(p2-repmat(p1,size(PointsInSphere,1),1))...
.^2,2));
D_ti = reshape(D(:,:,:,tj),M*N*P,1);
D2{tj,i} = [D_ti(f),w];
end
end
D3{M*N*P,1} = [];
for i=1:M*N*P
D3{i} = vertcat(D2{:,i});
end
D4 = zeros(M*N*P,1);
for i=1:M*N*P
D4(i) = sum(D3{i}(:,1).*D3{i}(:,2))/...
sum(D3{i}(:,2));
end
D_ta = reshape(D4,N,M,P);
I expect to get an N x M x P array where each index is the weighted average of all the points covering all of the time steps at that specific position in the unrotated grid. As you can see, this is exactly what I get. The major problem, however, is the length of time it takes when I use the larger set of my 'real' data. The code above takes only a couple of minutes to run, but when M = 120, N = 24, P = 120, and the number of time steps is 24, it takes much longer; based on my estimates, it would take approximately 25+ days to finish the entire calculation.
I can help you with the math. What you are trying to do here is find things inside a sphere. You have a well-defined sphere, so this makes things easy: just find the distance of all points from the center point. There is no need to plot or use inpolyhedron. Note the line in the inner loop below where I offset the points by the center point of the sphere, compute their distances, and compare them to the radius of the sphere.
% x = -5:2:5; % x - position data
x = linspace(-5,5,120);
% y = -2:5; % y - position data
y = linspace(-2,5,24);
% z = -5:2:5; % z - position data
z = linspace(-5,5,120);
% my grid is much bigger actually
[X,Y,Z] = meshgrid(x,y,z); % mesh for plotting data
dX = diff(x)'; dX(end+1) = dX(end); % x grid intervals
dY = diff(y)'; dY(end+1) = dY(end); % y grid intervals
dZ = diff(z)'; dZ(end+1) = dZ(end); % z grid intervals
TestPoints = combvec(x,y,z)'; % you need the Matlab Neural Network Toolbox to run this
dXYZ = combvec(dX',dY',dZ')';
% TestPoints is the unrotated grid
M = length(x); % size of grid x - direction
N = length(y); % size of grid y - direction
P = length(z); % size of grid z - direction
D = randi([-10,10],N,M,P,3); % placeholder for data for 3 time steps (I have more than 3, this is a subset)
D2{3,M*N*P} = [];
PosAll{3,2} = [];
[xSph,ySph,zSph] = sphere(50);
c = 0.01; % 1 cm
nu = 8e-6; % 8 cSt
s = 3*c; % span for Aspect Ratio 3
r_g = s/sqrt(3);
U_g = 110*nu/c; % velocity for Reynolds number 110
Omega = U_g/r_g; % angular velocity
T = (2*pi)/Omega; % period
dt = 40*T/1920; % time interval
DeltaRotAngle = ((2*pi)/T)*dt; % angle interval
timesteps = 121:123; % time steps 121, 122, and 123
for ti=timesteps
tj = find(ti==timesteps);
Theta = ti*DeltaRotAngle;
Rotate = [cos(Theta),0,sin(Theta);...
0,1,0;...
-sin(Theta),0,cos(Theta)];
PosAll{tj,1} = (Rotate*TestPoints')';
end
tic
for i=1:M*N*P
aa = TestPoints(i,1);
bb = TestPoints(i,2);
cc = TestPoints(i,3);
rs = 0.8*sqrt(dXYZ(i,1)^2 + dXYZ(i,2)^2 + dXYZ(i,3)^2);
% handles.H = figure;
% hs = surf(xSph*rs+aa,ySph*rs+bb,zSph*rs+cc);
% [Fs,Vs,~] = surf2patch(hs,'triangle');
% close(handles.H)
for ti=timesteps
tj = find(timesteps==ti);
% f = inpolyhedron(Fs,Vs,PosAll{tj,1},'FlipNormals',false);
f = sqrt(sum((PosAll{tj,1}-[aa,bb,cc]).^2,2))<rs;
TestPointsR_ti = PosAll{tj,1};
PointsInSphere = TestPointsR_ti(f,:);
p1 = [aa,bb,cc];
p2 = [PointsInSphere(:,1),...
PointsInSphere(:,2),...
PointsInSphere(:,3)];
w = 1./sqrt(sum(...
(p2-repmat(p1,size(PointsInSphere,1),1))...
.^2,2));
D_ti = reshape(D(:,:,:,tj),M*N*P,1);
D2{tj,i} = [D_ti(f),w];
end
if ~mod(i,10)
toc
end
end
D3{M*N*P,1} = [];
for i=1:M*N*P
D3{i} = vertcat(D2{:,i});
end
D4 = zeros(M*N*P,1);
for i=1:M*N*P
D4(i) = sum(D3{i}(:,1).*D3{i}(:,2))/...
sum(D3{i}(:,2));
end
D_ta = reshape(D4,N,M,P);
In terms of runtime, on my computer, the old code takes 57 hours to run. The new code takes 2 hours. At this point, the main calculation is the distance, so I doubt you'll get much better.
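If that still isn't fast enough, one further idea (an untested sketch, not from the original answer) is to batch the in-sphere queries with rangesearch from the Statistics and Machine Learning Toolbox, which answers all range queries from a Kd-tree instead of computing a full distance vector per center. rangesearch takes a single radius, so the sketch queries with the largest radius and then filters each result down to the per-center radius; rsAll is a name introduced here for illustration:
% Sketch: Kd-tree range queries instead of one full distance computation
% per center. Assumes TestPoints, PosAll, dXYZ and timesteps as above.
rsAll = 0.8*sqrt(sum(dXYZ.^2, 2));   % per-center sphere radii
for tj = 1:numel(timesteps)
    % indices and distances of all rotated points within max(rsAll) of each center
    [idx, dst] = rangesearch(PosAll{tj,1}, TestPoints, max(rsAll));
    for i = 1:numel(idx)
        keep = dst{i} < rsAll(i);    % enforce this center's own radius
        f = idx{i}(keep);            % points inside sphere i at this time step
        % ... compute weights and accumulate into D2 as in the loop above ...
    end
end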

How can I render lineseries/contour/etc. objects to an array of pixel data?

I have an array of pixel data frames for use with VideoWriter. I want to overlay lineseries/contour objects into each frame. I don't want to make the movie by iteratively drawing each frame to a figure and capturing it with getframe, because that gives poor resolution and is slow. I tried using getframe on a plot of just the contour, but that returns images scaled to the wrong size with weird margins, especially when using 'axis equal,' which I need.
Updated to accommodate feedback from OP
Getting the contour data as pixel data is not trivial (if possible at all), since using getframe doesn't yield predictable results.
What we can do is extract the contour data and then overlay it on the pixel data frames, forcing them to the same scale, and then call getframe on the resulting merged image. This will at least ensure that the two data sets are aligned.
The following code shows the principle, though you'd need to modify it for your own needs:
%% Generate some random contours to use
x = linspace(-2*pi,2*pi);
y = linspace(0,4*pi);
[X,Y] = meshgrid(x,y);
Z = sin(X)+cos(Y);
[~,h] = contour(X,Y,Z);
This yields the following contours
Now we get the handles of the children of this contour object. These will all be 'patch' type objects.
patches = get(h,'Children');
Also get the axis limits for the contours
lims = axis;
Next, create a new figure and render the pixel frame data into it. In this example I'm just loading an image, but you get the idea.
%% Render frame data
figure
i = imread( some_image_file_png );
This image is actually 194 x 259 x 3. I can display it and rescale the X and Y axes using
%% Set image axes
image(flipdim(i,1),'XData',[lims(1) lims(2)],'YData',[lims(4) lims(3)]);
Note the use of flipdim() to vertically flip the image since the image Y-axis runs in the opposite sense to the contour Y axis. This gives me:
Now I can plot the contours (patches) from the contour plot over the top of the image in the same coordinate space.
%% Plot patches
for p =1:length(patches)
xd = get( patches(p), 'XData' );
yd = get( patches(p), 'YData' );
% This causes all contours to be rendered in white. You may
% want to play with this a little
cd = zeros(size(xd));
patch( xd, yd, cd, 'EdgeColor', 'w');
end
This yields
You can now use getframe to extract the frame. If it's important to have coloured contours, you will need to extract colour data from the original contour map and use it to apply an appropriate colouring in the overlaid image.
As a short cut, it's also possible to compile all patch data into a single MxN matrix and render with a single call to patch but I wrote it this way to demonstrate the process.
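For reference, a minimal sketch of that shortcut, here using a single NaN-separated line() call rather than one patch() per contour (NaN entries break the stroke, which avoids having to pad polygons to equal length):
% Sketch: draw all contour outlines in one call. Assumes 'patches' from
% the loop above; each XData/YData is a column vector.
allX = []; allY = [];
for p = 1:length(patches)
    allX = [allX; get(patches(p), 'XData'); NaN];
    allY = [allY; get(patches(p), 'YData'); NaN];
end
line(allX, allY, 'Color', 'w');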
Well, here's a Bresenham-esque solution based on the ContourMatrix. It's not ideal because it doesn't handle line width, antialiasing, or more than a single color, but it's pretty efficient (though not quite Bresenham-efficient).
function renderContour
clc
close all
x = randn(100,70);
[c,h] = contour(x,[0 0],'LineColor','r');
axis equal
if ~isnumeric(h.LineColor)
error('not handled')
end
cs = nan(size(c,2),4);
k = 0;
ci = 1;
for i = 1:size(c,2)
if k <= 0
k = c(2,i);
else
if k > 1
cs(ci,:) = reshape(c(:,i+[0 1]),[1 4]);
ci = ci + 1;
end
k = k - 1;
end
end
pix = renderLines(cs(1:ci-1,:),[1 1;fliplr(size(x))],10,h.LineColor);
figure
image(pix)
axis equal
end
function out = renderLines(cs,rect,res,color)
% cs = [x1(:) y1(:) x2(:) y2(:)]
% rect = [x(1) y(1);x(2) y(2)]
% doesn't handle line width, antialiasing, etc.
% could do those with imdilate, imfilter, etc.
test = false;
if test
if false
cs = [0 0 5 5; 0 5 2.5 2.5];
rect = [0 0; 10 10];
else
cs = 100 * randn(1000,4);
rect = 200 * randn(2);
end
res = 10;
color = [1 .5 0];
end
out = nan(abs(res * round(diff(fliplr(rect)))));
cs = cs - repmat(min(rect),[size(cs,1) 2]);
d = [cs(:,1) - cs(:,3) cs(:,2) - cs(:,4)];
lens = sqrt(sum(d.^2,2));
for i = 1:size(cs,1)
n = ceil(sqrt(2) * res * lens(i));
if false % equivalent but probably less efficient
pts = linspace(0,1,n);
pts = round(res * (repmat(cs(i,1:2),[length(pts) 1]) - pts' * d(i,:)));
else
pts = round(res * [linspace(cs(i,1),cs(i,3),n);linspace(cs(i,2),cs(i,4),n)]');
end
pts = pts(all(pts > 0 & pts <= repmat(fliplr(size(out)),[size(pts,1) 1]),2),:);
out(sub2ind(size(out),pts(:,2),pts(:,1))) = 1;
end
out = repmat(flipud(out),[1 1 3]) .* repmat(permute(color,[3 1 2]),size(out));
if test
image(out)
axis equal
end
end

I want to make a panorama image but it is showing the error message Undefined function 'imageSet' for input arguments of type 'char'

Undefined function 'imageSet' for input arguments of type 'char'.
Error in build (line 3)
buildingScene = imageSet(buildingDir);
% Load images.
buildingDir = fullfile(toolboxdir('vision'), 'visiondata', 'building');
buildingScene = imageSet(buildingDir);
% Display images to be stitched
montage(buildingScene.ImageLocation)
% Read the first image from the image set.
I = read(buildingScene, 1);
% Initialize features for I(1)
grayImage = rgb2gray(I);
points = detectSURFFeatures(grayImage);
[features, points] = extractFeatures(grayImage, points);
% Initialize all the transforms to the identity matrix. Note that the
% projective transform is used here because the building images are fairly
% close to the camera. Had the scene been captured from a further distance,
% an affine transform would suffice.
tforms(buildingScene.Count) = projective2d(eye(3));
% Iterate over remaining image pairs
for n = 2:buildingScene.Count
% Store points and features for I(n-1).
pointsPrevious = points;
featuresPrevious = features;
% Read I(n).
I = read(buildingScene, n);
% Detect and extract SURF features for I(n).
grayImage = rgb2gray(I);
points = detectSURFFeatures(grayImage);
[features, points] = extractFeatures(grayImage, points);
% Find correspondences between I(n) and I(n-1).
indexPairs = matchFeatures(features, featuresPrevious, 'Unique', true);
matchedPoints = points(indexPairs(:,1), :);
matchedPointsPrev = pointsPrevious(indexPairs(:,2), :);
% Estimate the transformation between I(n) and I(n-1).
tforms(n) = estimateGeometricTransform(matchedPoints, matchedPointsPrev,...
'projective', 'Confidence', 99.9, 'MaxNumTrials', 2000);
% Compute T(1) * ... * T(n-1) * T(n)
tforms(n).T = tforms(n-1).T * tforms(n).T;
end
avgXLim = mean(xlim, 2);
[~, idx] = sort(avgXLim);
centerIdx = floor((numel(tforms)+1)/2);
centerImageIdx = idx(centerIdx);
Tinv = invert(tforms(centerImageIdx));
for i = 1:numel(tforms)
tforms(i).T = Tinv.T * tforms(i).T;
end
for i = 1:numel(tforms)
[xlim(i,:), ylim(i,:)] = outputLimits(tforms(i), [1 imageSize(2)], [1 imageSize(1)]);
end
% Find the minimum and maximum output limits
xMin = min([1; xlim(:)]);
xMax = max([imageSize(2); xlim(:)]);
yMin = min([1; ylim(:)]);
yMax = max([imageSize(1); ylim(:)]);
% Width and height of panorama.
width = round(xMax - xMin);
height = round(yMax - yMin);
% Initialize the "empty" panorama.
panorama = zeros([height width 3], 'like', I);
Step 4 - Create the Panorama
Use imwarp to map images into the panorama and use vision.AlphaBlender to overlay the images together.
blender = vision.AlphaBlender('Operation', 'Binary mask', ...
'MaskSource', 'Input port');
% Create a 2-D spatial reference object defining the size of the panorama.
xLimits = [xMin xMax];
yLimits = [yMin yMax];
panoramaView = imref2d([height width], xLimits, yLimits);
% Create the panorama.
for i = 1:buildingScene.Count
I = read(buildingScene, i);
% Transform I into the panorama.
warpedImage = imwarp(I, tforms(i), 'OutputView', panoramaView);
% Create a mask for the overlay operation.
warpedMask = imwarp(ones(size(I(:,:,1))), tforms(i), 'OutputView', panoramaView);
% Clean up edge artifacts in the mask and convert to a binary image.
warpedMask = warpedMask >= 1;
% Overlay the warpedImage onto the panorama.
panorama = step(blender, panorama, warpedImage, warpedMask);
end
figure
imshow(panorama)
imageSet requires the Computer Vision Toolbox from MATLAB R2014b or higher. See the release notes from the Computer Vision Toolbox here: http://www.mathworks.com/help/vision/release-notes.html#R2014b
If you have R2014a or lower, imageSet does not come with your distribution. The only option you have is to upgrade your MATLAB distribution. Sorry if this isn't what you wanted to hear!
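If you want to check this programmatically before running the stitching code, a quick sketch:
% Sketch: confirm the toolbox and release that imageSet needs.
assert(~isempty(ver('vision')), 'Computer Vision Toolbox is not installed.');
assert(~verLessThan('matlab', '8.4'), ... % R2014b is MATLAB 8.4
    'imageSet requires MATLAB R2014b or newer.');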

Super resolution of low resolution images using Delaunay triangulation: negative pixel values in the resultant high resolution image

I have to do super resolution of two low resolution images to obtain a high resolution image. The 2nd image is taken as the base image, and the first image is registered with respect to it. I used the SURF algorithm for image registration. A Delaunay triangulation is constructed over the points using the built-in MATLAB delaunay function. The HR grid is constructed for a prespecified resolution enhancement factor R. The HR algorithm for interpolating the pixel values on the HR grid is summarized next.
HR Algorithm Steps:
1. Construct the Delaunay triangulation over the set of scattered vertices in the irregularly sampled raster formed from the LR frames.
2. Estimate the gradient vector at each vertex of the triangulation by calculating the unit normal vector of each neighbouring triangle using the cross product method. The sum of the unit normal vectors of the neighbouring triangles, each multiplied by its triangle's area, is divided by the total area of all neighbouring triangles to get the vertex normal.
3. Approximate each triangle patch in the triangulation by a continuous and, possibly, continuously differentiable surface, subject to some smoothness constraint. Bivariate polynomials or splines could be the approximants, as explained below.
4. Set the resolution enhancement factor along the horizontal and vertical directions, and then calculate the pixel value at each regularly spaced HR grid point to construct the initial HR image.
The bivariate polynomial I used is shown in the code. Using the pixel values at each vertex of a triangle and the corresponding gradients in the x and y directions, I calculated the nine constants associated with each triangle, then defined a high resolution grid and calculated the pixel values at each grid point using those constants.
I am attaching my code. The problem I am facing is that I just get a gray image as the output HR image, because the constants I calculated have negative values, resulting in negative pixel values.
Another problem I noticed with my code is in the gradient estimation: I get a lot of NaN values as a result of the gradient calculation.
If anyone can please spend some time to help me out.
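For clarity, the bivariate cubic implied by the A matrix constructed in the code below can be written out explicitly (a sketch; the coefficient ordering matches the rows of A):
% Sketch: the bivariate cubic evaluated on the HR grid, with coefficients
% c = [c1 ... c9] solved per triangle in the code below.
f = @(c,x,y) c(1) + c(2)*x + c(3)*y + c(4)*x.^2 + c(5)*y.^2 + ...
             c(6)*x.^3 + c(7)*x.^2.*y + c(8)*x.*y.^2 + c(9)*y.^3;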
close all
clear all
K = 2;
P1 = imread('C:\Users\Javeria Farooq\Desktop\project images\a.pgm');
%reads the image to be registered
P2 = imread('C:\Users\Javeria Farooq\Desktop\project images\b.pgm');
%reads the base image
image1_gray = makelr(P1, 1, 100, 1/2);
%image1_gray = P1;
% makes lr image of first
image2_gray= makelr(P2, 1, 100, 1/2);
%image2_gray= P2;
%makes lr image of second
figure(1),imshow(image1_gray)
axis on;
grid on;
title('Unregistered image');
figure(2),imshow(image2_gray)
axis on;
grid on;
title('Base image ');
impixelinfo
% both image displayed with pixel info
hold on
points_image1= detectSURFFeatures(image1_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
%detects surf features of first image
points_image2 = detectSURFFeatures(image2_gray, 'NumScaleLevels', 100, 'NumOctaves', 12, 'MetricThreshold', 500 );
%detects surf features of second image
[features_image1, validPoints_image1] = extractFeatures(image1_gray, points_image1);
[features_image2, validPoints_image2] = extractFeatures(image2_gray, points_image2);
%extracts features of both images
indexPairs = matchFeatures(features_image1, features_image2, 'Prenormalized', true) ;
% get matching points
matched_pts1 = validPoints_image1(indexPairs(:, 1));
matched_pts2 = validPoints_image2(indexPairs(:, 2));
figure; showMatchedFeatures(image1_gray,image2_gray,matched_pts1,matched_pts2,'montage');
%matched features of both images are displayed
legend('matched points 1','matched points 2');
% Compute the transformation matrix
tform = estimateGeometricTransform(matched_pts1,matched_pts2,'projective')
%calculate transformation matrix using projective transform
T=tform.T;
r=[];
A=[];
l=1
[N1 N2]=size(image2_gray)
registeredPts = zeros(N1*N2,2);
% s= zeros(N1*N2,2);
pixelVals = zeros(N1*N2,1);
[N1 N2]=size(image2_gray)
for row = 1:N1
for col = 1:N2
pixNum = (row-1)*N2 + col;
pixelVals(pixNum,1) = image2_gray(row,col);
registeredPts(pixNum,:) = [col,row];
end
end
[r]=transformPointsForward(tform,registeredPts);
%coordinates of base image
image2_gray=double(image2_gray);
R=2;
r1=r(:,1);
r2=r(:,2);
for row = 1:N1
for col = 1:N2
pixNum = N1*N2 + (row-1)*N2 + col;
pixelVals(pixNum,1) = image1_gray(row,col);
registeredPts(pixNum,:) = [r1(row,1),r2(row,1)];
end
end
% all pixel values are saved in pixelVals
%all registered points are saved first base image then unregistered image
%delaunay triangulation of all coordinates passing x and y coordinates from registered Points
tri = delaunay(registeredPts(:,1),registeredPts(:,2));
figure(3), triplot(tri,registeredPts(:,1),registeredPts(:,2))
save tri
% Estimate the gradient vector at each vertex
[totalTris,three] = size(tri);
[totalPoints,two] = size(registeredPts);
vGradientVecs = zeros(totalPoints,2);
triAreas = zeros(totalTris,1);
triUnitNormals = zeros(totalTris,3);
vUnitNormals = zeros(totalPoints,3);
% 1. Find the unit normal vectors and the areas of all triangles,
% then find the product of these two numbers for each triangle
for triNum = 1:totalTris
v = tri(triNum,:);
% 3D triangle points: x,y,pixel
b=pixelVals(v);
b=b(:);
p = [registeredPts(v,:),b];
% triangle area
triAreas(triNum) = polyarea([p(:,1)],[p(:,2)]);
% directional vectors representing the surface of the plane
d1 = p(2,:)-p(1,:);
d2 = p(3,:)-p(1,:);
% cross product of these vectors
crossp = cross(d1,d2);
% If u = [u1 u2 u3] and v = [v1 v2 v3], the cross product w is defined as
% w = [u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1]
% normalized cross product = unit normal vector for the triangle
dist = sqrt(sum(crossp.^2));
triUnitNormals(triNum,:) = crossp./dist;
end
% 2. Estimate the unit normal vector at each vertex
% a. Find the triangle patches that neighbor the vertex
% b. Find the unit normal vectors of these regions
% c. Multiply each of these vectors by the area of the
% associated region, then sum these numbers and divide
% by the total area of all the regions
for pointNum = 1:totalPoints
[neighbors,x] = find(tri==pointNum);
areas = triAreas(neighbors);
areas3 = [areas,areas,areas];
triNormsSum = sum(triUnitNormals(neighbors,:).*areas3);
triAreasSum = sum(areas);
vUnormalized = triNormsSum./triAreasSum;
vUnitNormals(pointNum,:) = ...
vUnormalized./sqrt(sum(vUnormalized.^2));
if( triAreasSum == 0 )
triAreasSum = 0.0001;
vUnormalized = triNormsSum./triAreasSum;
% re-normalize
vUnitNormals(pointNum,:) = ...
vUnormalized./sqrt(sum(vUnormalized.^2));
end
% 3. Find the gradients along the x and y directions for each vertex
% vertex's unit normal: n = [nx,ny,nz]
% x-direction gradient: dz/dx = -nx/nz
% y-direction gradient: dz/dy = -ny/nz
%
for pointNum = 1:totalPoints
nz = vUnitNormals(pointNum,3);
if( nz == 0 )
nz = 0.0001;
end
vGradientVecs(pointNum,1) = -vUnitNormals(pointNum,1)./nz;
vGradientVecs(pointNum,2) = -vUnitNormals(pointNum,2)./nz;
% end
end
end
% 1. Find the 3 equations for each vertex, and
% place them in c_equations matrix;
% c_equations = [A for vertex 1;
% A for vertex 2; ...
% A for vertex totalPoints]
% c(point,row,:) gives one row from an A matrix
Btotal = zeros(3,totalPoints);
c_equations = zeros(3*totalPoints,3,9);
for pointNum = 1:totalPoints
% % B = [pixVal; x gradient; y gradient] at this vertex
z = pixelVals(pointNum);
B = [z; vGradientVecs(pointNum,1); vGradientVecs(pointNum,2)];
%
% % Compile all B matrices into a vector
Btotal(:,pointNum) = B;
% B = Ac to calculate c which is c=[c1 c2 .....c9]' take invA and
% multiply by B
x = registeredPts(pointNum,1);
y = registeredPts(pointNum,2);
A = [1 x y x^2 y^2 x^3 (x^2)*y x*(y^2) y^3; ...
0 1 0 2*x 0 3*(x^2) 2*x*y y^2 0; ...
0 0 1 0 2*y 0 x^2 2*x*y 3*(y^2)];
% Compile all A matrices into a vector
c_equations(pointNum,1,:) = A(1,:);
c_equations(pointNum,2,:) = A(2,:);
c_equations(pointNum,3,:) = A(3,:);
end
% 2. Find the c values for each triangle patch
c = zeros(totalTris,9);
c9 = zeros(9,9);
for triNum = 1:totalTris
p1 = tri(triNum,1);
p2 = tri(triNum,2);
p3 = tri(triNum,3);
B9 = [Btotal(:,p1); Btotal(:,p2); Btotal(:,p3)];
c9 = [(c_equations(p1,1,:)); (c_equations(p1,2,:)); (c_equations(p1,3,:)); ...
(c_equations(p2,1,:)); (c_equations(p2,2,:));( c_equations(p2,3,:)); ...
(c_equations(p3,1,:)); (c_equations(p3,2,:));( c_equations(p3,3,:))];
C9=squeeze(c9);
c(triNum,:) = pinv(C9)*B9; %linsolve(c9,B9);
end
% xc = findBPolyCoefficients1(tri,registeredPts,pixelVals,vGradientVecs);
% save xc
% % 2. For each point on the HR grid, find the associated triangle patch,
% % extract its c values, and use these values as the coefficients
% % in a bivariate polynomial to calculate the HR pixel value at
% % each grid point (x,y)
[N1,N2]=size(image1_gray);
[totalTris,three] = size(tri);
M = N1*R-1;
N = N2*R-1;
HRimage = zeros(M,N);
HRtriangles = zeros(M,N);
[X,Y] = meshgrid(1:1/R:N2,1:1/R:N1);
% Check all the triangles in order noting in which triangle each HR
% grid point occurs.
for triNum = 1:totalTris
pts = registeredPts(tri(triNum,:),:);
IN = inpolygon(X,Y,pts(:,1),pts(:,2)); % NxM
HRtriangles(ind2sub(size(IN),find(IN==1))) = triNum;
end
% there is a problem with this part of the code:
for y = 1:M % row
for x = 1:N % col
% For testing, average the pixels from the vertices of the
% triangle the HR point is in.
% pix = pixelVals(tri(HRtriangles(x,y),:));
% HRimage(x,y) = (pix(1) + pix(2) + pix(3))/3;
% Extract appropriate set of 9 c values
HRptC = c(HRtriangles(x,y),:);
% Bivariate polynomial
HRimage(x,y) = sum(HRptC.*[1,x,y,x^2,y^2,x^3,(x^2)*y,x*(y^2),y^3]);
g(x,y)=HRimage(x,y);
%changd xy with yx
end
end
% HRimage = estimateGridVals1(tri,registeredPts,R,N1,N2,pixelVals);
% %Estimating Grid values at each patch
% %save HRimage
g(g(:,:)<0)=0;
figure(8),imshow(g,[]);
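Two small checks that may relate to the symptoms described above (a sketch, under the assumption that the NaN gradients come from degenerate, zero-area triangles, e.g. duplicate registered points):
% Sketch: remove duplicate registered points before calling delaunay, so
% the cross-product norm is never zero (a likely source of NaN gradients).
[registeredPtsU, iu] = unique(registeredPts, 'rows');
pixelValsU = pixelVals(iu);
% For display, rescale the HR image into [0,1] instead of clipping
% negatives to zero, so a mostly-negative result is not shown as flat gray.
figure(9), imshow(mat2gray(g));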