Using MATLAB to calculate offset between successive images - matlab

I'm taking images with a tunneling microscope, but the scope drifts between successive images. I'm trying to use MATLAB to calculate the offset between images. The code below runs in seconds for small images (e.g. 64x64 pixels), but takes more than 2 hours for the 512x512 pixel images I'm actually dealing with. Do you have any suggestions for speeding up this code, or do you know of better ways to track images in MATLAB? Thanks for your help!
%Test templates
template = .5*ones(32);
template(25:32,:) = 0;
template(:,25:64) = 0;
data_A = template;
close all
imshow(data_A);
template(9:32,41:64) = .5;
template(:,1:24) = 0;
data_B = template;
figure, imshow(data_B);
tic
[m n] = size(data_B);
z = [];
% Loop over all possible displacements
for x = -n:n
    for y = -m:m
        paddata_B = data_B;
        ax = abs(x);
        zerocols = zeros(m,ax);
        if x > 0
            paddata_B(:,1:ax) = [];
            paddata_B = [paddata_B zerocols];
        else
            paddata_B(:,(n-ax+1):n) = [];
            paddata_B = [zerocols paddata_B];
        end
        ay = abs(y);
        zerorows = zeros(ay,n);
        if y < 0
            paddata_B(1:ay,:) = [];
            paddata_B = vertcat(paddata_B, zerorows);
        else
            paddata_B((m-ay+1):m,:) = [];
            paddata_B = vertcat(zerorows, paddata_B);
        end
        % Full matrix sum after array multiplication
        C = paddata_B.*data_A;
        matsum = sum(sum(C));
        % Populate array of matrix sums for each displacement
        z(x+n+1, y+m+1) = matsum;
    end
end
toc
% Plot matrix sums
figure, surf(z), shading flat
% Find maximum value of z matrix
[max_z, imax] = max(abs(z(:)));
[xpeak, ypeak] = ind2sub(size(z),imax(1))
% Calculate displacement in pixels
corr_offset = [(xpeak-n-1) (ypeak-m-1)];
xoffset = corr_offset(1)
yoffset = corr_offset(2)

What you're calculating is known as the cross-correlation of the two images. You can calculate the cross-correlation of all offsets at once using Discrete Fourier Transforms (DFT or FFT). So try something like
z = ifft2( fft2(data_B) .* conj(fft2(data_A)) );
If you pad with zeros in the Fourier domain, you can even use this sort of math to get offsets in fractions of a pixel, and apply offsets of fractions of a pixel to an image.
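Extending that one-liner to actually read off the drift, here is a minimal sketch using the question's data_A and data_B. Note this is circular correlation, so it assumes the shift wraps around; for non-periodic images you would zero-pad both arrays first.
% Circular cross-correlation via the FFT; the peak gives the shift
% such that data_B is approximately circshift(data_A, [dy dx]).
xc = ifft2(fft2(data_B) .* conj(fft2(data_A)));
[~, imax] = max(abs(xc(:)));
[ypeak, xpeak] = ind2sub(size(xc), imax);
[m, n] = size(data_A);
dy = ypeak - 1; if dy > m/2, dy = dy - m; end   % wrap to a signed shift
dx = xpeak - 1; if dx > n/2, dx = dx - n; end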

A typical approach to this kind of problem is to use the fact that it works quickly for small images to your advantage. When you have large images, decimate them to make small images. Register the small images quickly and use the computed offset as your initial value for the next iteration. In the next iteration, you don't decimate the images as much, but you're starting with a good initial estimate of the offset so you can constrain your search for solutions to a small neighborhood near your initial estimate.
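As a rough sketch of that loop (registerSmall here is a hypothetical helper, e.g. a windowed version of the correlation code above that only searches a small neighborhood around an initial guess; imresize is from the Image Processing Toolbox):
offset = [0 0];
for s = [8 4 2 1]                 % decimation factor, coarse to fine
    smallA = imresize(data_A, 1/s);
    smallB = imresize(data_B, 1/s);
    guess  = round(offset / s);   % carry the previous estimate down a level
    offset = s * registerSmall(smallA, smallB, guess, 4);  % hypothetical local search, +/-4 px around guess
end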
Although not written with tunneling microscopes in mind, a review paper that may be of some assistance is: "Mutual Information-Based Registration of Medical Images: A Survey" by Pluim, Maintz, and Viergever published in IEEE Transactions on Medical Imaging, Vol. 22, No. 8, p. 986.

The link below will help you find the transformation between two images and correct/recover the distorted one (in your case, the image with an offset):
http://in.mathworks.com/help/vision/ref/estimategeometrictransform.html
% Detect and extract features first (assumes grayscale images `original` and `distorted`)
ptsOriginal  = detectSURFFeatures(original);
ptsDistorted = detectSURFFeatures(distorted);
[featuresOriginal, validPtsOriginal]   = extractFeatures(original, ptsOriginal);
[featuresDistorted, validPtsDistorted] = extractFeatures(distorted, ptsDistorted);
index_pairs = matchFeatures(featuresOriginal, featuresDistorted, 'unique', true);
matchedPtsOriginal  = validPtsOriginal(index_pairs(:,1));
matchedPtsDistorted = validPtsDistorted(index_pairs(:,2));
[tform,inlierPtsDistorted,inlierPtsOriginal] = estimateGeometricTransform(matchedPtsDistorted,matchedPtsOriginal,'similarity');
figure; showMatchedFeatures(original,distorted,inlierPtsOriginal,inlierPtsDistorted);
inlierPtsDistorted and inlierPtsOriginal have a Location property (an M-by-2 array of [x y] coordinates).
These are simply the matching locations of one image on the other, and from them it is straightforward to calculate the offset.
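For pure drift (translation only), something like this sketch would do:
% Each row of Location is an [x y] coordinate; with pure translation the
% per-match displacements should all agree, so take a robust average.
offset = median(inlierPtsOriginal.Location - inlierPtsDistorted.Location, 1);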

The function below was my attempt to compute the cross-correlation of the two images manually. Something's not quite right though. Will look at it again this weekend if I have time. You can call the function with something like:
>> oldImage = rand(64);
>> newImage = circshift(oldImage, floor(64/2)*[1 1]);
>> offset = detectOffset(oldImage, newImage, 10)
offset =
32 -1
function offset = detectOffset(oldImage, newImage, margin)
    if ~isequal(size(oldImage), size(newImage))
        offset = [];
        error('Test images must be the same size.');
    end
    [imageHeight, imageWidth] = size(oldImage);
    corr = zeros(2 * imageHeight - 1, 2 * imageWidth - 1);
    for yIndex = [1:2*imageHeight-1; ...
                  imageHeight:-1:1 ones(1, imageHeight-1); ...
                  imageHeight*ones(1, imageHeight) imageHeight-1:-1:1]
        oldImage = circshift(oldImage, [1 0]);
        for xIndex = [1:2*imageWidth-1; ...
                      imageWidth:-1:1 ones(1, imageWidth-1); ...
                      imageWidth*ones(1, imageWidth) imageWidth-1:-1:1]
            oldImage = circshift(oldImage, [0 1]);
            numPoint = abs(yIndex(3) - yIndex(2) + 1) * abs(xIndex(3) - xIndex(2) + 1);
            corr(yIndex(1),xIndex(1)) = sum(sum(oldImage(yIndex(2):yIndex(3),xIndex(2):xIndex(3)) ...
                .* newImage(yIndex(2):yIndex(3),xIndex(2):xIndex(3)))) * imageHeight * imageWidth / numPoint;
        end
    end
    [value, yOffset] = max(corr(margin+1:end-margin, margin+1:end-margin));
    [dummy, xOffset] = max(value);
    offset = [yOffset(xOffset)+margin-imageHeight xOffset+margin-imageWidth];
end

Related

How to superimpose two images and get SSIM (similarity index) value for these two images?

I have a clean image and a noisy image. I created a denoiser and applied it to the noisy image; that denoised image is my final output. To measure how close this output is to the clean image I need to compare them using PSNR and SSIM, but because the two images are positioned differently I cannot compare them directly.
Right now I am getting an SSIM of 0.5, which is very low because the two images are not properly aligned. If the images were registered properly, I would expect an SSIM of around 0.80 or higher, but I have not been able to accomplish this.
How can I align these two images to obtain a good SSIM value?
I have two coin images for comparison: the first is the clean image and the second is the improved (denoised) noisy image.
Clean Img:
Noisy Img:
Because the two images are at different positions, ssim(img1,img2) gives a misleading value. I tried cropping, but that did not work.
Here is what I have tried so far:
Attempt 1:
function [valPSNR,valSSIM,badpict] = getSSIM(clean_img,img2)
    % pad reference image since object is so close to edges
    refpict = padarray(mat2gray(clean_img),[20 20],'replicate','both');
    % crop test image down to extract the object alone
    badpict = imcrop(mat2gray(img2),[2.5 61.5 357 363]);
    % maximize normalized cross-correlation to find offset
    szb = size(badpict);
    c = normxcorr2(badpict,refpict);
    [idxy, idxx] = find(c == max(c(:)));
    osy = idxy-szb(1);
    osx = idxx-szb(2);
    % crop the reference pict to the ROI
    refpict = refpict(osy:idxy-1,osx:idxx-1);
    %imshow(imfuse(badpict,refpict,'checkerboard'));
    %imagesc(badpict);
    valSSIM = ssim(badpict,refpict);
    valPSNR = getPSNR(badpict,refpict);
    img2 = badpict;
    clean_img = refpict;
    figure; imshowpair(clean_img,img2);
    figure; montage({mat2gray(clean_img),mat2gray(img2)}, 'Size', [1 2], 'BackgroundColor', 'w', 'BorderSize', [2 2]);
end
Attempt 2:
function [valPSNR,valSSIM,badpict] = getSSIM2(clean_img,img2)
    % binarize both images and keep the largest blob
    bw1 = im2bw(mat2gray(clean_img));
    bw2 = imclose(im2bw(mat2gray(img2),0.3),strel('disk',9));
    bw2 = bwareafilt(bw2,1);
    % make same size
    [r,c] = find(bw1);
    clean_img = clean_img(min(r):max(r),min(c):max(c));
    [r,c] = find(bw2);
    img2 = img2(min(r):max(r),min(c):max(c));
    img2 = imresize(img2, size(clean_img),'bilinear');
    valPSNR = getPSNR(mat2gray(clean_img),mat2gray(img2));
    valSSIM = ssim(mat2gray(clean_img),mat2gray(img2));
    badpict = img2;
    figure; imshowpair(clean_img,img2);
    figure; montage({mat2gray(clean_img),mat2gray(img2)}, 'Size', [1 2], 'BackgroundColor', 'w', 'BorderSize', [2 2]);
end
As others have pointed out, the resampling required by registration will have some non-zero error. But, here is some sample code that will take you through the registration part that is the crux of your question.
% SSIM isn't defined on RGB images, convert to grayscale.
ref = rgb2gray(imread('https://i.stack.imgur.com/tPKEJ.png'));
X = rgb2gray(imread('https://i.stack.imgur.com/KmU4y.png'));
% The input image data has bright borders at the edges that create
% artifacts in resampling; best to just crop those, or maybe there are
% acquisitions that don't have these borders?
X = X(3:end-2,3:end-2);
ref = ref(4:end-3,4:end-3);
figure
montage({X,ref});
tform = imregcorr(X,ref,"translation");
Xreg = imwarp(X,tform,OutputView=imref2d(size(ref)),SmoothEdges=true);
figure
imshowpair(Xreg,ref)
ssim(Xreg,ref)
Maybe you can refer to my GitHub repository.
I implemented a template matching algorithm with OpenCV; you can use the NCC-based pattern matching to find targets and get a similarity score for each match.
You can then use this score to decide whether the image is clean.
Translating the C++ code may be an issue for you, but you just need to find the corresponding functions in the MATLAB version.
Here are the results (red blocks are areas with similarity higher than the 0.85 threshold when compared with the golden sample):
The whole function is too long to post here.
Part of the function:
for (int i = 0; i < iSize; i++)
{
    Mat matRotatedSrc, matR = getRotationMatrix2D (ptCenter, vecAngles[i], 1);
    Mat matResult;
    Point ptMaxLoc;
    double dValue, dMaxVal;
    double dRotate = clock ();
    Size sizeBest = GetBestRotationSize (vecMatSrcPyr[iTopLayer].size (), pTemplData->vecPyramid[iTopLayer].size (), vecAngles[i]);
    float fTranslationX = (sizeBest.width - 1) / 2.0f - ptCenter.x;
    float fTranslationY = (sizeBest.height - 1) / 2.0f - ptCenter.y;
    matR.at<double> (0, 2) += fTranslationX;
    matR.at<double> (1, 2) += fTranslationY;
    warpAffine (vecMatSrcPyr[iTopLayer], matRotatedSrc, matR, sizeBest);
    MatchTemplate (matRotatedSrc, pTemplData, matResult, iTopLayer);
    minMaxLoc (matResult, 0, &dMaxVal, 0, &ptMaxLoc);
    vecMatchParameter[i * (m_iMaxPos + MATCH_CANDIDATE_NUM)] = s_MatchParameter (Point2f (ptMaxLoc.x - fTranslationX, ptMaxLoc.y - fTranslationY), dMaxVal, vecAngles[i]);
    for (int j = 0; j < m_iMaxPos + MATCH_CANDIDATE_NUM - 1; j++)
    {
        ptMaxLoc = GetNextMaxLoc (matResult, ptMaxLoc, -1, pTemplData->vecPyramid[iTopLayer].cols, pTemplData->vecPyramid[iTopLayer].rows, dValue, m_dMaxOverlap);
        vecMatchParameter[i * (m_iMaxPos + MATCH_CANDIDATE_NUM) + j + 1] = s_MatchParameter (Point2f (ptMaxLoc.x - fTranslationX, ptMaxLoc.y - fTranslationY), dValue, vecAngles[i]);
    }
}
FilterWithScore (&vecMatchParameter, m_dScore - 0.05 * iTopLayer);
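For reference, a rough MATLAB counterpart of the core matching step (a sketch only; sceneGray and templateGray are placeholder grayscale images, and this handles a single angle rather than the rotation pyramid above):
c = normxcorr2(templateGray, sceneGray);   % normalized cross-correlation map
[score, imax] = max(c(:));
[ypeak, xpeak] = ind2sub(size(c), imax);
if score > 0.85                            % same threshold as the red blocks
    topLeft = [xpeak - size(templateGray,2) + 1, ypeak - size(templateGray,1) + 1];
end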

Approximation of cosh and sinh functions that give large values in MATLAB

My calculation involves cosh(x) and sinh(x) with x around 700-1000, which overflows double precision (cosh(x) is Inf for x above about 710), so the result is NaN. The problem in the code is that 2*k_B*T/elastic_restor_coeff blows up when the radius is small (below 5e-9 in the code), since elastic_restor_coeff scales with radius^3. My eventual goal is to do another integral over a radius distribution from 1e-9 to 100e-9, which is still a work in progress because I am stuck on this problem.
My workaround right now is to approximate the real part of chi_para with a step function once threshold2 exceeds a value of about 300. The number 300 comes from using the lowest possible radius and reading the cut-off value from the plot. I don't think this approach is good enough for the actual calculation, since the cut-off changes with radius, so I am looking for a better approximation method. Also, the imaginary part of chi_para is difficult to approximate, since it looks like a pulse rather than a step.
Here is my code without an integration over a radius distribution.
k_B = 1.38e-23;
T = 296;
radius = [5e-9,10e-9, 20e-9, 30e-9,100e-9];
fric_coeff = 8*pi*1e-3.*radius.^3;
elastic_restor_coeff = 8*pi*1.*radius.^3;
time_const = fric_coeff/elastic_restor_coeff;
omega_ar = logspace(-6,6,60);
chi_para = zeros(1,length(omega_ar));
chi_perpen = zeros(1,length(omega_ar));
threshold = zeros(1,length(omega_ar));
threshold2 = zeros(1,length(omega_ar));
for i = 1:length(radius)
    for k = 1:length(omega_ar)
        omega = omega_ar(k);
        fric_coeff = 8*pi*1e-3.*radius(i).^3;
        elastic_restor_coeff = 8*pi*1.*radius(i).^3;
        time_const = fric_coeff/elastic_restor_coeff;
        G_para_func = @(t) ((cosh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))-1).*exp(1i.*omega.*t))./(cosh(2*k_B*T./elastic_restor_coeff)-1);
        G_perpen_func = @(t) ((sinh(2*k_B*T./elastic_restor_coeff.*exp(-t./time_const))).*exp(1i.*omega.*t))./(sinh(2*k_B*T./elastic_restor_coeff));
        chi_para(k) = (1 + 1i*omega*integral(G_para_func, 0, inf));
        chi_perpen(k) = (1 + 1i*omega*integral(G_perpen_func, 0, inf));
        threshold(k) = 2*k_B*T./elastic_restor_coeff*omega;
        threshold2(k) = 2*k_B*T./elastic_restor_coeff*(omega*time_const - 1);
    end
    figure(1);
    semilogx(omega_ar,real(chi_para),omega_ar,imag(chi_para));
    hold on;
    figure(2);
    semilogx(omega_ar,real(chi_perpen),omega_ar,imag(chi_perpen));
    hold on;
end
Here is the simplified function that I would like to approximate:
where x is iterated in a loop and the maximum value of x is about 700.
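For what it's worth, one standard way to keep these ratios finite is to multiply the numerator and denominator by 2*exp(-a), where a = 2*k_B*T/elastic_restor_coeff; since exp(-t/time_const) <= 1, every remaining exponent is non-positive and nothing overflows. A sketch using the same variable names as the code above:
a = 2*k_B*T./elastic_restor_coeff;   % the large argument (~700 and up)
u = @(t) exp(-t./time_const);
% (cosh(a*u)-1)/(cosh(a)-1) and sinh(a*u)/sinh(a), rewritten overflow-free
G_para_func   = @(t) (exp(a.*(u(t)-1)) + exp(-a.*(u(t)+1)) - 2*exp(-a)) ...
                     ./ (1 + exp(-2*a) - 2*exp(-a)) .* exp(1i.*omega.*t);
G_perpen_func = @(t) (exp(a.*(u(t)-1)) - exp(-a.*(u(t)+1))) ...
                     ./ (1 - exp(-2*a)) .* exp(1i.*omega.*t);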

Reverse-calculating original data from a known moving average

I'm trying to estimate the (unknown) original datapoints that went into calculating a (known) moving average. However, I do know some of the original datapoints, and I'm not sure how to use that information.
I am using the method given in the answers here: https://stats.stackexchange.com/questions/67907/extract-data-points-from-moving-average, but in MATLAB (my code below). This method works quite well for large numbers of data points (>1000), but less well with fewer data points, as you'd expect.
window = 3;
datapoints = 150;
data = 3*rand(1,datapoints)+50;
moving_averages = [];
for i = window:size(data,2)
    moving_averages(i) = mean(data(i+1-window:i));
end
length = size(moving_averages,2)+(window-1);
a = (tril(ones(length,length),window-1) - tril(ones(length,length),-1))/window;
a = a(1:length-(window-1),:);
ai = pinv(a);
daily = mtimes(ai,moving_averages');
x = 1:size(data,2);
figure(1)
hold on
plot(x,data,'Color','b');
plot(x(window:end),moving_averages(window:end),'Linewidth',2,'Color','r');
plot(x,daily(window:end),'Color','g');
hold off
axis([0 size(x,2) min(daily(window:end))-1 max(daily(window:end))+1])
legend('original data','moving average','back-calculated')
Now, say I know a smattering of the original data points. I'm having trouble figuring how might I use that information to more accurately calculate the rest. Thank you for any assistance.
You should be able to calculate the original data exactly if at any point you know n-1 consecutive samples, where n is the window length. (In your case) if you know A and B and the average (A+B+C)/3, you can solve for C. Once you have the next moving average (B+C+D)/3, you can solve for D. Rinse and repeat. The same logic works going backwards too.
Here is an example with the same idea:
% the actual vector of values
a = cumsum(rand(150,1) - 0.5);
% compute moving average
win = 3; % sliding window length
idx = hankel(1:win, win:numel(a));
m = mean(a(idx));
% coefficient matrix: m(i) = sum(a(i:i+win-1))/win
A = repmat([ones(1,win) zeros(1,numel(a)-win)], numel(a)-win+1, 1);
for i=2:size(A,1)
    A(i,:) = circshift(A(i-1,:), [0 1]);
end
A = A / win;
% solve linear system
%x = A \ m(:);
x = pinv(A) * m(:);
% plot and compare
subplot(211), plot(1:numel(a),a, 1:numel(m),m)
legend({'original','moving average'})
title(sprintf('length = %d, window = %d',numel(a),win))
subplot(212), plot(1:numel(a),a, 1:numel(a),x)
legend({'original','reconstructed'})
title(sprintf('error = %f',norm(x(:)-a(:))))
You can see the reconstruction error is very small, even using the data sizes in your example (150 samples with a 3-samples moving average).
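To fold the original samples you already know into the same system (which is what the question asks about), append one extra equation per known point; a sketch, where knownIdx and knownVal are hypothetical vectors holding the indices and values of the points you know:
% One extra row per known sample: a single 1 in that sample's column.
nKnown = numel(knownIdx);
Ak = zeros(nKnown, numel(a));
Ak(sub2ind(size(Ak), (1:nKnown)', knownIdx(:))) = 1;
% Solve the augmented system; scale the Ak rows up to enforce the known values more strongly.
x = pinv([A; Ak]) * [m(:); knownVal(:)];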

How can I convert my CPU code for the dot product of two matrices to GPU code in MATLAB?

I want to take a weighted sum of two matrices as a gpuArray so that it runs fast. For example, my CPU code is given below:
mat1 = rand(19,19);
mat2= rand(19,19);
Receptive_fieldsize = [4,3];
overlap = 1;
Output = GetweightedSum(mat1,mat2, Receptive_fieldsize,overlap); % this will output a 6x6 matrix
whereas my function body is:
function Output = GetweightedSum(mat1,mat2, RF,overlap)
    gap = RF(1) - overlap;
    size_mat = size(mat1);
    output_size = [6,6];
    for u = 1:output_size(1)
        for v = 1:output_size(2)
            min_u = (u - 1) * gap + 1;
            max_u = (u - 1) * gap + RF(1);
            min_v = (v - 1) * gap + 1;
            max_v = (v - 1) * gap + RF(2);
            input1 = mat1(min_u:max_u,min_v:max_v);
            input2 = mat2(min_u:max_u,min_v:max_v);
            Output(u,v) = sum(sum(input1 .* input2));
        end
    end
How can I convert this to a GPU function? Can I do it directly, or can I use a for loop in GPU code? I am totally new to GPU computing, so I don't know anything about it.
I would be thankful if someone could guide me, or convert the above code into a GPU function as a reference so that I can learn from it.
Regards
See if the code and the comments alongside it make sense to you:
function Output = GetweightedSumGPU(mat1,mat2, RF,overlap)
    %// Create parameters
    gap = RF(1) - overlap;
    output_size = [6,6];
    sz1 = output_size(1);
    sz2 = output_size(2);
    nrows = size(mat1,1); %// get number of rows in mat1
    %// Copy data to GPU
    gmat1 = gpuArray(mat1);
    gmat2 = gpuArray(mat2);
    start_row_ind = gpuArray([1:RF(1)]'); %// starting row indices for each block
    col_offset = gpuArray([0:RF(2)-1]*nrows); %// column offset for each block
    %// Linear indices for each block
    ind = bsxfun(@plus,start_row_ind,col_offset);
    %// Linear indices along rows and columns respectively
    ind_rows = bsxfun(@plus,ind(:),[0:sz1-1]*gap);
    ind_rows_cols = bsxfun(@plus,ind_rows,permute([0:sz2-1]*gap*nrows,[1 3 2]));
    %// Elementwise multiplication, summing and gathering back result to CPU
    Output = gather(reshape(sum(gmat1(ind_rows_cols).*gmat2(ind_rows_cols),1),sz1,sz2));
return;
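A quick way to sanity-check the GPU version against the original CPU loop, assuming both functions are saved on the path:
mat1 = rand(19,19);
mat2 = rand(19,19);
outCPU = GetweightedSum(mat1, mat2, [4,3], 1);
outGPU = GetweightedSumGPU(mat1, mat2, [4,3], 1);
max(abs(outCPU(:) - outGPU(:)))   % should be at numerical precision, ~1e-15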

How do I resize a Matlab matrix with a 3rd dimension?

I'd like to resize a matrix of size 72x144x156 onto a 180x360x156 grid. I tried to do it with this command: resizem(precip,2.5). The first two dimensions are latitude and longitude, while the last dimension is time; I don't want time to be resized.
This works if the matrix is of size 72x144, but it doesn't work for size 72x144x156. Is there a way to resize the first two dimensions without resizing the third?
Also, what is the fastest way to do this (preferably without a for loop)? If a for loop is necessary, then that's fine.
As I hinted in my comment, you could use interp3 like this:
outSize = [180 360 156];
[nrows,ncols,ntimes] = size(data);
scales = [nrows ncols ntimes] ./ outSize;
xq = (1:outSize(2))*scales(2) + 0.5 * (1 - scales(2));
yq = (1:outSize(1))*scales(1) + 0.5 * (1 - scales(1));
zq = (1:outSize(3))*scales(3) + 0.5 * (1 - scales(3));
[Xq,Yq,Zq] = meshgrid(xq,yq,zq);
dataLarge = interp3(data,Xq,Yq,Zq);
But the problem is simpler if you know you don't want to interpolate between time points, so you can loop as in Daniel R's answer. This answer will not increase the number of time points, though.
D = precip;   % existing 72x144x156 matrix
scale = 2.5;
E = zeros(size(D,1)*scale, size(D,2)*scale, size(D,3));
for depth = 1:size(D,3)
    E(:,:,depth) = resizem(D(:,:,depth), scale);
end
This should provide the expected output.
% s = zeros(72, 144, 156);
% whos s;
% news = resize2D(s, 2.5);
% whos news;
function [result] = resize2D(input, multiply)
    [d1, d2, d3] = size(input);
    result = zeros(d1*multiply, d2*multiply, d3);
    for k = 1:d3
        result(:,:,k) = resizem(input(:,:,k), multiply);  % resize each 2-D slice, leave time alone
    end
end