Sum of Absolute Differences between images in MATLAB

I want to implement the sum of absolute differences (SAD) in MATLAB to establish a similarity metric between one video frame and the 5 frames on either side of it (i.e. past and future frames). I only need the SAD value for the co-located pixel in each frame, rather than a full search routine.
Obviously I could implement this as nested loops such as:
```
bs = 2; % block size
for z_i = -bs:1:bs
    for z_j = -bs:1:bs
        I1(1+bs:end-bs, 1+bs:end-bs) = F1(1+bs+z_i:end-bs+z_i, 1+bs+z_j:end-bs+z_j);
        I2(1+bs:end-bs, 1+bs:end-bs) = F2(1+bs+z_i:end-bs+z_i, 1+bs+z_j:end-bs+z_j);
        sad(:,:) = sad(:,:) + abs(I1(:,:) - I2(:,:));
    end
end
```
However, I'm wondering if there is a more efficient way of doing it than this. At the very least, I guess I should define the above code snippet as a function?
Any recommendations would be gratefully accepted!

You should use the im2col command in MATLAB; it lets you do this in a vectorized manner.
Just arrange each neighborhood in columns (for each frame).
Put them in a 3D matrix and apply your operation along the 3rd dimension.
Code Snippet
I used Wikipedia's definition of "Sum of Absolute Differences".
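(Presumably the definition meant here is the block-matching form: SAD(x, y) = Σ_{(i, j) in block} |I(x + i, y + j) − T(i, j)|, where T is the reference block.)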
The demo script:
```
% Sum of Absolute Differences Demo
numRows = 10;
numCols = 10;
refBlockRadius = 1;
refBlockLength = (2 * refBlockRadius) + 1;
mImgSrc = randi([0, 255], [numRows, numCols]);
mRefBlock = randi([0, 255], [refBlockLength, refBlockLength]);
mSumAbsDiff = SumAbsoluteDifferences(mImgSrc, mRefBlock);
```
The Function SumAbsoluteDifferences:
```
function [ mSumAbsDiff ] = SumAbsoluteDifferences( mInputImage, mRefBlock )
% Computes the Sum of Absolute Differences (SAD) between each sliding
% neighborhood of mInputImage and the reference block mRefBlock.
numRows = size(mInputImage, 1);
numCols = size(mInputImage, 2);
blockLength = size(mRefBlock, 1);
blockRadius = (blockLength - 1) / 2;
mInputImagePadded = padarray(mInputImage, [blockRadius, blockRadius], 'replicate', 'both');
mBlockCol = im2col(mInputImagePadded, [blockLength, blockLength], 'sliding');
mSumAbsDiff = sum(abs(bsxfun(@minus, mBlockCol, mRefBlock(:))));
mSumAbsDiff = col2im(mSumAbsDiff, [blockLength, blockLength], [(numRows + blockLength - 1), (numCols + blockLength - 1)]);
end
```
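For the two-frame case in the question (a per-pixel SAD between co-located blocks of two frames), a minimal sketch along the same lines might look like this (assuming F1 and F2 are equal-sized grayscale frames; the identifiers are illustrative):
```
% Per-pixel SAD between co-located blocks of two frames, vectorized with
% im2col. bs is the block radius, as in the question's nested-loop code.
bs = 2;
blockLength = (2 * bs) + 1;
F1p = padarray(F1, [bs, bs], 'replicate', 'both');
F2p = padarray(F2, [bs, bs], 'replicate', 'both');
mCols1 = im2col(F1p, [blockLength, blockLength], 'sliding');
mCols2 = im2col(F2p, [blockLength, blockLength], 'sliding');
sad = reshape(sum(abs(mCols1 - mCols2), 1), size(F1));
```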
Enjoy...

Related

Very slow execution of user defined convolution function for neural network in MATLAB

I have an implementation of a convolutional neural network in MATLAB (from the open source DeepLearnToolbox). The following code finds the convolution of different weights and parameters:
```
z = z + convn(net.layers{l - 1}.a{i}, net.layers{l}.k{i}{j}, 'valid');
```
To update the tool, I have implemented my own fixed-point convolution scheme using the following code:
```
function result = convolution(image, kernal)
% find dimensions of output
row = size(image,1) - size(kernal,1) + 1;
col = size(image,2) - size(kernal,2) + 1;
zdim = size(image,3);
% create output matrix
output = zeros(row, col, zdim);
% flip the kernal
kernal_flipped = fliplr(flipud(kernal));
% find rows and cols of kernal for loop iteration
row_ker = size(kernal_flipped,1);
col_ker = size(kernal_flipped,2);
for k = 1 : zdim
    for i = 0 : row-1
        for j = 0 : col-1
            sum = fi(0,1,8,7);
            for k_row = 1 : row_ker
                for k_col = 1 : col_ker
                    a = image(k_row+i, k_col+j, k);
                    b = kernal_flipped(k_row, k_col);
                    prod = a * b;
                    % convert to fixed point
                    prod = fi((prod/16384), 1, 8, 7);
                    sum = fi((sum + prod), 1, 8, 7);
                end
            end
            output(i+1, j+1, k) = sum;
        end
    end
end
result = output;
end
```
The problem is that when I use my convolution implementation in the bigger application, it is super slow.
Any suggestions on how to improve its execution time?
MATLAB doesn't support fixed-point 2D convolution, but convolution can be written as matrix multiplication, and MATLAB does support fixed-point matrix multiplication. You can therefore use im2col to convert the image into column format and multiply it by the kernel to convolve them.
```
row = size(image,1) - size(kernal,1) + 1;
col = size(image,2) - size(kernal,2) + 1;
zdim = size(image,3);
output = zeros(row, col, zdim);
kernal_flipped = fliplr(flipud(kernal));
fi_kernel = fi(kernal_flipped(:).', 1, 8, 7) / 16384;
sz = size(kernal_flipped);
sz_img = size(image);
% Use the generated indexes to convert the image into column format
idx_col = im2col(reshape(1:numel(image)/zdim, sz_img(1:2)), sz, 'sliding');
image = reshape(image, [], zdim);
for k = 1:zdim
    output(:,:,k) = reshape(double(fi_kernel * reshape(image(idx_col,k), size(idx_col))), row, col);
end
```
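To see why this works, here is a hedged sketch of the same im2col trick in plain double precision (illustrative sizes and identifiers, not part of the original answer): each column of the im2col output holds one image neighborhood, so a single vector-matrix product evaluates the convolution at every position at once.
```
% im2col turns 'valid' convolution into one matrix product.
img   = rand(6, 6);
ker   = rand(3, 3);
kflip = fliplr(flipud(ker));               % flip: correlation -> convolution
ref   = conv2(img, ker, 'valid');          % built-in reference result
cols  = im2col(img, size(ker), 'sliding'); % each column = one neighborhood
res   = reshape(kflip(:).' * cols, size(ref));
max(abs(res(:) - ref(:)))                  % ~1e-15, i.e. the two agree
```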

MATLAB function for image filtering

I'm looking to implement my own MATLAB function to perform image filtering with a 3x3 kernel.
It has to have this signature: function [ output_args ] = fFilter( img, mask ), where img is the original image and mask is the kernel (for example B = [1,1,1; 1,4,1; 1,1,1]).
I'm not supposed to use any built-in functions from the Image Processing Toolbox.
I have to use this formula:
s(i, j) = (1 / N) * Σ_{k, l} M(k, l) * p(i + k, j + l)
where:
s is the image after the filter,
p is the image before the filter,
M is the kernel,
and N = sum(sum(M)), unless sum(sum(M)) == 0, in which case N = 1.
I'm new to MATLAB and this is like black magic for me -_-
This should do the job (not verified):
```
function [ mO ] = ImageFilter( mI, mMask )
% Correlation-based image filtering (no Image Processing Toolbox needed)
% with replicate boundary handling.
numRows = size(mI, 1);
numCols = size(mI, 2);
% Assuming an odd number of mask rows / columns
maskRadius = floor(size(mMask, 1) / 2);
sumMask = sum(mMask(:));
if (sumMask ~= 0)
    mMask = mMask / sumMask;
end
mO = zeros([numRows, numCols]);
for jj = 1:numCols
    for ii = 1:numRows
        for kk = -maskRadius:maskRadius
            nn = kk + maskRadius + 1; %<! Mask Index
            colIdx = min(max(1, jj + kk), numCols); %<! Replicate Boundary
            for ll = -maskRadius:maskRadius
                mm = ll + maskRadius + 1; %<! Mask Index
                rowIdx = min(max(1, ii + ll), numRows); %<! Replicate Boundary
                mO(ii, jj) = mO(ii, jj) + (mMask(mm, nn) * mI(rowIdx, colIdx));
            end
        end
    end
end
end
```
The above is classic correlation (image filtering) with a replicate boundary condition.
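The same correlation can also be vectorized without any toolbox calls by accumulating one shifted, boundary-clamped copy of the image per mask element. A sketch under the same assumptions (odd, square mask; not verified against the loop version):
```
function [ mO ] = ImageFilterVec( mI, mMask )
% Vectorized correlation with replicate boundaries: one shifted copy of
% the image is accumulated per mask element, with indices clamped.
[numRows, numCols] = size(mI);
maskRadius = floor(size(mMask, 1) / 2);
sumMask = sum(mMask(:));
if (sumMask ~= 0)
    mMask = mMask / sumMask;
end
mO = zeros(numRows, numCols);
for kk = -maskRadius:maskRadius
    colIdx = min(max(1, (1:numCols) + kk), numCols);      % clamp columns
    for ll = -maskRadius:maskRadius
        rowIdx = min(max(1, (1:numRows) + ll), numRows);  % clamp rows
        mO = mO + mMask(ll + maskRadius + 1, kk + maskRadius + 1) ...
                  * mI(rowIdx, colIdx);
    end
end
end
```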

Applying a (Gaussian) Filter on an Image in Parallel - MATLAB

I created the following MATLAB function to apply a Gaussian blur to an image:
```
function [ mBlurredImage ] = ApplyGaussianBlur( mInputImage, gaussianKernelStd, stdToRadiusFactor )
gaussianBlurRadius = ceil(stdToRadiusFactor * gaussianKernelStd); % Imitating Photoshop - See Reference
vGaussianKernel = exp(-([-gaussianBlurRadius:gaussianBlurRadius] .^ 2) / (2 * gaussianKernelStd * gaussianKernelStd));
vGaussianKernel = vGaussianKernel / sum(vGaussianKernel);
mInputImagePadded = padarray(mInputImage, [gaussianBlurRadius, gaussianBlurRadius], 'replicate', 'both');
mBlurredImage = conv2(vGaussianKernel, vGaussianKernel.', mInputImagePadded, 'valid');
end
```
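As an aside, the conv2(u, v, A, 'valid') form used here exploits the separability of the Gaussian: A is convolved with u along its columns and then with v along its rows, which is much cheaper than convolving with the full 2-D kernel. A quick sketch of the equivalence (illustrative sizes):
```
% Separable vs. full 2-D convolution: both forms should agree to round-off.
v = rand(1, 7);                        % a 1-D kernel
A = rand(32, 32);
full2d = conv2(A, v.' * v, 'valid');   % outer product -> full 2-D kernel
sep    = conv2(v, v.', A, 'valid');    % column pass with v, row pass with v.'
max(abs(full2d(:) - sep(:)))           % ~1e-15
```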
I'm trying to find the best approach to create a parallel version of it.
I want to find a method/strategy that applies to OpenMP as well.
I tried padding the image, then dividing it into 4 sections and applying the blur to each.
Then I gathered all the pieces.
Here's the code:
```
function [ mBlurredImage ] = ApplyGaussianBlurParallel( mInputImage, gaussianKernelStd, stdToRadiusFactor, numThreads )
numRows = size(mInputImage, 1);
numCols = size(mInputImage, 2);
gaussianKernelRadius = ceil(stdToRadiusFactor * gaussianKernelStd); % Imitating Photoshop - See Reference
vGaussianKernel = exp(-([-gaussianKernelRadius:gaussianKernelRadius] .^ 2) / (2 * gaussianKernelStd * gaussianKernelStd));
vGaussianKernel = vGaussianKernel / sum(vGaussianKernel);
numRowsPadded = numRows + (2 * gaussianKernelRadius);
numColsPadded = numCols + (2 * gaussianKernelRadius);
mInputImagePadded = padarray(mInputImage, [gaussianKernelRadius, gaussianKernelRadius], 'replicate', 'both');
vColIdxImageBlock = round(linspace(1, numCols, (numThreads + 1)));
vFirstColIdxImageBlock = vColIdxImageBlock(1:numThreads);
% Going from Image Axis to Padded Image Axis
vFirstColIdxImagePaddedBlock = vFirstColIdxImageBlock + gaussianKernelRadius;
% Adding Pixels to the left
vFirstColIdxImagePaddedBlock = vFirstColIdxImagePaddedBlock - gaussianKernelRadius;
vLastColIdxImageBlock = vColIdxImageBlock(2:(numThreads + 1));
% Going from Image Axis to Padded Image Axis
vLastColIdxImagePaddedBlock = vLastColIdxImageBlock + gaussianKernelRadius;
% Adding Pixels to the right
vLastColIdxImagePaddedBlock = vLastColIdxImagePaddedBlock + gaussianKernelRadius;
vRowsIdxImage = 1:numRows;
vRowsImagePadded = 1:numRowsPadded;
cImageBlock = cell(numThreads, 1);
cImageBlockProcessed = cell(numThreads, 1);
for iBlockIdx = 1:numThreads
    firstColIdxImagePaddedBlock = vFirstColIdxImagePaddedBlock(iBlockIdx);
    lastColIdxImagePaddedBlock = vLastColIdxImagePaddedBlock(iBlockIdx);
    vColsIdxImagePadded = firstColIdxImagePaddedBlock:lastColIdxImagePaddedBlock;
    cImageBlock{iBlockIdx} = mInputImagePadded(vRowsImagePadded, vColsIdxImagePadded);
end
parfor iBlockIdx = 1:numThreads
    cImageBlockProcessed{iBlockIdx} = conv2(vGaussianKernel, vGaussianKernel.', cImageBlock{iBlockIdx}, 'valid');
end
mBlurredImage = zeros(numRows, numCols);
for iBlockIdx = 1:numThreads
    firstColIdxImageBlock = vFirstColIdxImageBlock(iBlockIdx);
    lastColIdxImageBlock = vLastColIdxImageBlock(iBlockIdx);
    vColsIdxImage = firstColIdxImageBlock:lastColIdxImageBlock;
    mBlurredImage(vRowsIdxImage, vColsIdxImage) = cImageBlockProcessed{iBlockIdx};
end
end
```
I also created the following script to analyze the performance:
```
% `ApplyGaussianBlurParallel` Test Case
clear();
vInputImageSize = [720, 1280, 1920, 2560];
numIterations = 20;
vRunTimeParallelGaussianBlur = zeros(numIterations, length(vInputImageSize));
vRunTimeSerialGaussianBlur = zeros(numIterations, length(vInputImageSize));
gaussianKernelStd = 10;
stdToRadiusFactor = 3.5;
numThreads = 4;
for iImageSizeIdx = 1:length(vInputImageSize)
    imageSize = vInputImageSize(iImageSizeIdx);
    mInputImage = randn(imageSize, 'single');
    maxNumCompThreads(1);
    for iIter = 1:numIterations
        hTimeStart = tic();
        mBlurredImage1 = ApplyGaussianBlur(mInputImage, gaussianKernelStd, stdToRadiusFactor);
        vRunTimeSerialGaussianBlur(iIter, iImageSizeIdx) = toc(hTimeStart);
    end
    maxNumCompThreads(numThreads);
    for iIter = 1:numIterations
        hTimeStart = tic();
        mBlurredImage1 = ApplyGaussianBlurParallel(mInputImage, gaussianKernelStd, stdToRadiusFactor, numThreads);
        vRunTimeParallelGaussianBlur(iIter, iImageSizeIdx) = toc(hTimeStart);
    end
end
vRunTimeParallelGaussianBlurMean = mean(vRunTimeParallelGaussianBlur);
vRunTimeParallelGaussianBlurStd = std(vRunTimeParallelGaussianBlur);
vRunTimeParallelGaussianBlurMedian = median(vRunTimeParallelGaussianBlur);
vRunTimeSerialGaussianBlurMean = mean(vRunTimeSerialGaussianBlur);
vRunTimeSerialGaussianBlurStd = std(vRunTimeSerialGaussianBlur);
vRunTimeSerialGaussianBlurMedian = median(vRunTimeSerialGaussianBlur);
figure();
plot(vInputImageSize, [vRunTimeParallelGaussianBlurMean(:), vRunTimeSerialGaussianBlurMean(:)], ...
    'LineStyle', 'none', 'Marker', 'o');
title('Mean Runtime');
legend({'Parallel', 'Serial'});
figure();
plot(vInputImageSize, [vRunTimeParallelGaussianBlurMedian(:), vRunTimeSerialGaussianBlurMedian(:)], ...
    'LineStyle', 'none', 'Marker', 'o');
title('Median Runtime');
legend({'Parallel', 'Serial'});
```
Yet the runtimes I measure show that I can't make the parallel version efficient enough.
Can anyone think of a better, more efficient approach, or of a way of doing this better?
Thank you.
At some point you are mixing up the number of threads in your MATLAB process with the number of computation workers the Parallel Computing Toolbox is using.
maxNumCompThreads sets the number of threads each MATLAB process is allowed to use. This is not associated with the Parallel Computing Toolbox.
parpool (or the older matlabpool) sets the number of workers (individual processes) that process jobs generated via one of the Parallel Computing Toolbox functions like parfor.
ApplyGaussianBlurParallel needs the number of workers, not the number of threads you are currently passing.
Fixing this I got slightly better results, but parallel computing was still slower. I totally removed maxNumCompThreads; I don't see a reason to use it here.
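A minimal sketch of sizing the pool by workers (this assumes the Parallel Computing Toolbox is available; the kernel parameters are the ones from the question):
```
numWorkers = 4;                 % size the pool by workers, not threads
pool = gcp('nocreate');         % get the current pool without creating one
if isempty(pool)
    pool = parpool(numWorkers); % workers feed parfor; maxNumCompThreads does not
elseif pool.NumWorkers ~= numWorkers
    delete(pool);
    pool = parpool(numWorkers);
end
mBlurredImage = ApplyGaussianBlurParallel(mInputImage, 10, 3.5, numWorkers);
```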
The most efficient way is probably using your GPU:
```
function [ mBlurredImage ] = ApplyGaussianBlur( mInputImage, gaussianKernelStd, stdToRadiusFactor )
gaussianBlurRadius = ceil(stdToRadiusFactor * gaussianKernelStd); % Imitating Photoshop - See Reference
vGaussianKernel = exp(-([-gaussianBlurRadius:gaussianBlurRadius] .^ 2) / (2 * gaussianKernelStd * gaussianKernelStd));
vGaussianKernel = vGaussianKernel / sum(vGaussianKernel);
mInputImagePadded = padarray(mInputImage, [gaussianBlurRadius, gaussianBlurRadius], 'replicate', 'both');
GvGaussianKernel = gpuArray(vGaussianKernel);
GmInputImagePadded = gpuArray(mInputImagePadded);
mBlurredImage = conv2(GvGaussianKernel, GvGaussianKernel.', GmInputImagePadded, 'valid');
end
```
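Note that with gpuArray inputs, conv2 returns a gpuArray, so if the caller needs the result back in host memory it has to be gathered, e.g.:
```
mBlurredImage = gather(mBlurredImage); % copy the result back from GPU memory
```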
Same benchmark, run on a Core i5-4690 (4x 3500 MHz) with a GT730 GPU.

Implementation of shadow free 1d invariant image

I implemented a method for removing shadows based on invariant color features, found in the paper Entropy Minimization for Shadow Removal. My implementation sometimes yields computational results similar to theirs, but it is always somewhat off, and my grayscale image is blocky, maybe as a result of incorrectly taking the geometric mean.
For reference, I plotted the information potential from the horse image in the paper alongside my invariant image (multiply the x-axis by 3 to get theta, which goes from 0 to 180), the grayscale image my code outputs for the correct maximum theta (mine is off by 10), and their information potential. [Plots omitted.] You can see a blockiness in my output that their image doesn't have.
When dividing by the geometric mean, I have tried using NaN and thresholding the image so the smallest possible value is .01, but neither seems to change my output.
Here is my code:
```
I = im2double(imread(strname));
[m, n, d] = size(I);
I = max(I, .01);
chrom = zeros(m, n, 3, 'double');
for i = 1:m
    for j = 1:n
        % if ((I(i,j,1)*I(i,j,2)*I(i,j,3)) ~= 0)
        chrom(i,j,1) = I(i,j,1) / ((I(i,j,1)*I(i,j,2)*I(i,j,3))^(1/3));
        chrom(i,j,2) = I(i,j,2) / ((I(i,j,1)*I(i,j,2)*I(i,j,3))^(1/3));
        chrom(i,j,3) = I(i,j,3) / ((I(i,j,1)*I(i,j,2)*I(i,j,3))^(1/3));
        % else
        %     chrom(i,j,1) = 1;
        %     chrom(i,j,2) = 1;
        %     chrom(i,j,3) = 1;
        % end
    end
end
p1 = mat2gray(log(chrom(:,:,1)));
p2 = mat2gray(log(chrom(:,:,2)));
p3 = mat2gray(log(chrom(:,:,3)));
X1 = mat2gray(p1*1/(sqrt(2)) - p2*1/(sqrt(2)));
X2 = mat2gray(p1*1/(sqrt(6)) + p2*1/(sqrt(6)) - p3*2/(sqrt(6)));
maxinf = 0;
maxtheta = 0;
data2 = zeros(1, 61);
for theta = 0:3:180
    M = X1*cos(theta*pi/180) - X2*sin(theta*pi/180);
    s = sqrt(std2(X1)^(2)*cos(theta*pi/180) + std2(X2)^(2)*sin(theta*pi/180));
    s = abs(1.06*s*((m*n)^(-1/5)));
    [m, n] = size(M);
    length = m*n;
    sources = zeros(1, length, 'double');
    count = 1;
    for x = 1:m
        for y = 1:n
            sources(1, count) = M(x, y);
            count = count + 1;
        end
    end
    weights = ones(1, length);
    sigma = 2*s;
    [xc, Ak] = fgt_model(sources, weights, sigma, 10, sqrt(length), 6);
    sum1 = sum(fgt_predict(sources, xc, Ak, sigma, 10));
    sum1 = sum1/sqrt(2*pi*2*s*s);
    data2(theta/3 + 1) = sum1;
    if (sum1 > maxinf)
        maxinf = sum1;
        maxtheta = theta;
    end
end
InvariantImage2 = cos(maxtheta*pi/180)*X1 + sin(maxtheta*pi/180)*X2;
```
Assume the Fast Gauss Transform is correct.
I don't know whether this makes any difference now that more than a month has passed, but the blockiness and the different information potential plot are simply caused by compression of the image you used. You can't expect to get the same results with this image as they did, because they used a raw, high-resolution, uncompressed version of it. I have to say I am fairly impressed with your results, especially with implementing the information potential; that part went over my head a little.
John.

Computing object statistics from the second central moments

I'm currently working on writing a version of MATLAB's regionprops function for GNU Octave. I have most of it implemented, but I'm still struggling with a few parts. I had previously asked about the second central moments of a region.
That was helpful theoretically, but I'm having trouble actually implementing the suggestions. I get results wildly different from MATLAB's (or from common sense, for that matter) and really don't understand why.
Consider this test image (an ellipse, from the file 135deg100by30ell.png): it slants at 45 degrees from the x-axis, with minor and major axes of 30 and 100 respectively.
Running it through MATLAB's regionprops function confirms this:
```
MajorAxisLength: 101.3362
MinorAxisLength: 32.2961
   Eccentricity: 0.9479
    Orientation: -44.9480
```
Meanwhile, I don't even get the axes right. I'm trying to use these formulas from Wikipedia.
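(Presumably the formulas meant are the image moments: raw moments M_ij = Σ_x Σ_y x^i y^j I(x, y), central moments μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q I(x, y), and the covariance matrix cov(I(x, y)) = [μ′20, μ′11; μ′11, μ′02], where μ′pq = μ_pq / μ_00.)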
My code so far is:
raw_moments.m:
```
function outmom = raw_moments(im, i, j)
    total = int32(0);
    im = int32(im);
    [height, width] = size(im);
    for x = 1:width
        for y = 1:height
            amount = (x ** i) * (y ** j) * im(y,x);
            total = total + amount;
        end
    end
    outmom = total;
```
central_moments.m:
```
function cmom = central_moments(im, p, q)
    total = 0;
    im = int32(im);
    rawm00 = raw_moments(im, 0, 0);
    xbar = double(raw_moments(im, 1, 0)) / double(rawm00);
    ybar = double(raw_moments(im, 0, 1)) / double(rawm00);
    [height, width] = size(im);
    for x = 1:width
        for y = 1:height
            amount = ((x - xbar) ** p) * ((y - ybar) ** q) * double(im(y,x));
            total = total + double(amount);
        end
    end
    cmom = double(total);
```
And here's my code attempting to use these; I include comments showing the values I get at each step:
```
inim = logical(imread('135deg100by30ell.png'));
cm00 = central_moments(inim, 0, 0);          % 2567
up20 = central_moments(inim, 2, 0) / cm00;   % 353.94
up02 = central_moments(inim, 0, 2) / cm00;   % 352.89
up11 = central_moments(inim, 1, 1) / cm00;   % 288.31
covmat = [up20, up11; up11, up02];
% [ 353.94 288.31
%   288.31 352.89 ]
eigvals = eig(covmat);                       % [65.106 641.730]
minoraxislength = eigvals(1);                % 65.106
majoraxislength = eigvals(2);                % 641.730
```
I'm not sure what I'm doing wrong. I seem to be following those formulas correctly, but my results are nonsense. I haven't found any obvious errors in my moment functions, although honestly my understanding of moments isn't the greatest to begin with.
Can anyone see where I'm going astray? Thank you very much.
EDIT:
According to Wikipedia, "the eigenvalues [...] are proportional to the squared length of the eigenvector axes," which is explained by:
axisLength = 4 * sqrt(eigenValue)
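(One way to see the factor of 4, assuming a solid ellipse: with semi-axes a >= b, the normalized second central moments come out to μ′20 = a²/4 along the major axis and μ′02 = b²/4, so the full axis lengths are 2a = 4·sqrt(λ1) and 2b = 4·sqrt(λ2).)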
Shown below is my version of the code (I vectorized the moments functions):
my_regionprops.m
```
function props = my_regionprops(im)
    cm00 = central_moments(im, 0, 0);
    up20 = central_moments(im, 2, 0) / cm00;
    up02 = central_moments(im, 0, 2) / cm00;
    up11 = central_moments(im, 1, 1) / cm00;
    covMat = [up20 up11 ; up11 up02];
    [V, D] = eig(covMat);
    [D, order] = sort(diag(D), 'descend'); %# sort eigenvalues high to low
    V = V(:,order);
    %# D(1) = (up20+up02)/2 + sqrt(4*up11^2 + (up20-up02)^2)/2;
    %# D(2) = (up20+up02)/2 - sqrt(4*up11^2 + (up20-up02)^2)/2;
    props = struct();
    props.MajorAxisLength = 4*sqrt(D(1));
    props.MinorAxisLength = 4*sqrt(D(2));
    props.Eccentricity = sqrt(1 - D(2)/D(1));
    %# props.Orientation = -atan(V(2,1)/V(1,1)) * (180/pi); %# sign?
    props.Orientation = -atan(2*up11/(up20-up02))/2 * (180/pi);
end

function cmom = central_moments(im, i, j)
    rawm00 = raw_moments(im, 0, 0);
    centroids = [raw_moments(im,1,0)/rawm00 , raw_moments(im,0,1)/rawm00];
    cmom = sum(sum( (([1:size(im,1)]-centroids(2))'.^j * ...
                     ([1:size(im,2)]-centroids(1)).^i) .* im ));
end

function outmom = raw_moments(im, i, j)
    outmom = sum(sum( ((1:size(im,1))'.^j * (1:size(im,2)).^i) .* im ));
end
```
... and the code to test it:
test.m
```
I = imread('135deg100by30ell.png');
I = logical(I);

>> p = regionprops(I, {'Eccentricity' 'MajorAxisLength' 'MinorAxisLength' 'Orientation'})
p =
    MajorAxisLength: 101.34
    MinorAxisLength: 32.296
       Eccentricity: 0.94785
        Orientation: -44.948

>> props = my_regionprops(I)
props =
    MajorAxisLength: 101.33
    MinorAxisLength: 32.275
       Eccentricity: 0.94792
        Orientation: -44.948

%# these values are by hand only ;)
subplot(121), imshow(I), imdistline(gca, [17 88], [9 82]);
subplot(122), imshow(I), imdistline(gca, [43 67], [59 37]);
```
Are you sure about the core of your raw_moments function? You might try:
```
amount = ((x-1) ** i) * ((y-1) ** j) * im(y,x);
```
This doesn't seem like enough to cause the problems you're seeing, but it might be at least a part.
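(For what it's worth, a constant index shift like this cancels out of the central moments as long as it is applied consistently, since the centroid shifts by the same amount; that may be why the change alone doesn't look like the whole story.)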