Implementing a filtered backprojection algorithm using the central slice theorem in Matlab

I'm working on a filtered back projection algorithm using the central slice theorem for a homework assignment, and while I understand the theory on paper, I've run into an issue implementing it in Matlab. I was provided with a skeleton to follow, but there is a step that I think I may be misunderstanding. Here is what I have:
function img = sampleFBP(sino,angs)
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
% Each contribution to the image's 2DFT will also begin as all zero.
imContrib = zeros(diagDim);
% Get the current row of the sinogram - use rowIndex.
curRow = sino(rowIndex,:);
% Take the 1D Fourier transform of the current row - be careful, as it's
% necessary to perform ifftshift and fftshift as Matlab tends to
% place zero-frequency components of a spectrum at the edges.
fourierCurRow = fftshift(fft(ifftshift(curRow)));
% Take the Fourier-transformed sino row and place it at the center of
% the next image contribution. Apply the ramp filter in the Fourier domain.
imContrib(floor(diagDim/2), :) = fourierCurRow;
imContrib = imContrib * fft(rampFilter_1d);
% Rotate the current image contribution to be at the correct angle on
% the 2D Fourier-space image.
imContrib = imrotate(imContrib, nn, 'crop');
% Add the current image contribution to the running representation of
% the image in Fourier space!
fimg = fimg + imContrib;
rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
The sinogram I'm inputting is just the output of the radon function on a Shepp-Logan phantom from 0 to 179 degrees. Running the code as it is now gives me a black image. I think I'm missing something in the loop where I add the FTs of rows to the image. From my understanding of the central slice theorem, what I think should be happening is this:
Initialize an array the same size as what the 2DFT will be (i.e., diagDim x diagDim). This is the Fourier space.
Take a row of the sinogram which corresponds to the line integral information from a single angle and apply a 1D FT to it
According to the Central Slice Theorem, the FT of this line integral is a line through the Fourier domain that passes through the origin at an angle that corresponds to the angle at which the projection was taken. So to emulate that, I take the FT of that line integral and place it in the center row of the diagDim x diagDim matrix I created
Next I take the FT of the 1D ramp filter I created and multiply it with the FT of the line integral. Multiplication in the Fourier domain is equivalent to a convolution in the spatial domain so this convolves the line integral with the filter.
Now I rotate the entire matrix by the angle the projection was taken at. This should give me a diagDim x diagDim matrix with a single line of information passing through the center at an angle. Matlab increases the size of the matrix when it is rotated but since the sinogram was padded at the beginning, no information is lost and the matrices can still be added
If all of these empty matrices with a single line through the center are added up together, it should give me the complete 2D FT of the image. All that needs to be done is take the inverse 2D FT and the original image should be the result.
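As a quick aside before the specific bug hunt, the theorem itself can be checked numerically in a few standalone lines (this sketch is independent of the skeleton above): for the 0-degree projection, summing the phantom along one dimension and taking a 1D FFT reproduces one slice of the 2D FFT exactly.
I = phantom('Modified Shepp-Logan', 128);
proj0 = sum(I, 1); % 0-degree parallel projection: integrate down each column
F2 = fft2(I);
sliceFromImage = F2(1, :); % the zero-frequency (k_y = 0) row of the 2D spectrum
sliceFromProj = fft(proj0); % 1D spectrum of the projection
max(abs(sliceFromImage - sliceFromProj)) % essentially zero (round-off only)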
If the problem I'm running into is something conceptual, I'd be grateful if someone could point out where I messed up. If instead this is a Matlab thing (I'm still kind of new to Matlab), I'd appreciate learning what it is I missed.

The code that you have posted is a pretty good example of filtered backprojection (FBP), and I believe it could be useful to people who want to learn the basics of FBP. One can use the function iradon(...) in MATLAB (see here) to perform FBP with a variety of filters. In your case, of course, the point is to learn the basis of the central slice theorem, so finding a shortcut is not the goal. I have also learned a lot and refreshed my knowledge by answering your question!
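Just for reference (a sanity check against your manual result, not the assignment solution), the built-in route applied to the sinogram straight out of radon, before any padding, would be something along these lines:
imgRef = iradon(sino, angs, 'linear', 'Ram-Lak'); % 'Ram-Lak' is the same ramp filter discussed below
figure; imshow(imgRef, []);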
Your code is well commented and describes the steps that need to be taken. There are a couple of subtle programming issues that need to be fixed so that it works correctly.
First, your image representation in the Fourier domain may end up with a missing row due to floor(diagDim/2), depending on the size of the sinogram. I would change this to round(diagDim/2) to keep the dataset in fimg complete. Be aware that this may lead to an error for certain sinogram sizes if not handled correctly. I would encourage you to visualize fimg to understand what that missing row is and why it matters.
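One quick way to do that visualization (just a suggested snippet, assuming fimg has already been accumulated by the loop):
figure; imagesc(log(1 + abs(fimg))); axis image; colormap gray;
% The accumulated slices should form a star of lines through the centre;
% an off-centre or missing row shows up as a gap in that pattern.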
The second issue is that your sinogram needs to be transposed to be consistent with your algorithm, hence the addition of sino = sino'. Again, I do encourage you to try the code without this to see what happens! Note that the zero padding must happen along the views to avoid artifacts due to aliasing. I will demonstrate an example of this in this answer.
Third, and most importantly, imContrib is a temporary holder for a single row of fimg. Therefore, it must keep the same size as fimg, so
imContrib = imContrib * fft(rampFilter_1d);
should be replaced with
imContrib(round(diagDim/2), :) = imContrib(round(diagDim/2), :)' .* rampFilter_1d;
Note that the ramp filter is linear in the frequency domain (thanks to @Cris Luengo for correcting this error). Therefore, you should drop the fft in fft(rampFilter_1d), as this filter is applied in the frequency domain (remember that fft(x) decomposes the domain of x, such as time or space, into its frequency content).
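Put differently, rampFilter_1d as designed above is already a frequency response; a quick illustrative plot (my own suggestion, using the diagDim and rampFilter_1d defined below) confirms it is the |f| ramp and should therefore multiply the projection's spectrum directly:
figure; plot(linspace(-1, 1, diagDim), rampFilter_1d);
xlabel('normalized frequency'); ylabel('filter gain'); % V-shaped ramp: zero at DC, maximal at the band edges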
Now a complete example to show how it works using the modified Shepp-Logan phantom:
angs = 0:359; % angles of rotation 0, 1, 2... 359
init_img = phantom('Modified Shepp-Logan', 100); % Initial image 2D [100 x 100]
sino = radon(init_img, angs); % Create a sinogram using radon transform
% Here is your function ....
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% Transpose (rotate by 90 degrees) the sinogram to be compatible with your code's definition of view and radial position
% dim 1 -> view
% dim 2 -> Radial position
sino = sino';
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
% fprintf('rowIndex = %g => nn = %g\n', rowIndex, nn);
% Each contribution to the image's 2DFT will also begin as all zero.
imContrib = zeros(diagDim);
% Get the current row of the sinogram - use rowIndex.
curRow = sino(rowIndex,:);
% Take the 1D Fourier transform of the current row - be careful, as it's
% necessary to perform ifftshift and fftshift as Matlab tends to
% place zero-frequency components of a spectrum at the edges.
fourierCurRow = fftshift(fft(ifftshift(curRow)));
% Take the Fourier-transformed sino row and place it at the center of
% the next image contribution. Apply the ramp filter in the Fourier domain.
imContrib(round(diagDim/2), :) = fourierCurRow;
imContrib(round(diagDim/2), :) = imContrib(round(diagDim/2), :)' .* rampFilter_1d; % <-- NOT fft(rampFilter_1d)
% Rotate the current image contribution to be at the correct angle on
% the 2D Fourier-space image.
imContrib = imrotate(imContrib, nn, 'crop');
% Add the current image contribution to the running representation of
% the image in Fourier space!
fimg = fimg + imContrib;
rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
Note that your image has complex values, so I use imshow(abs(rcon),[]) to show it. A couple of helpful images (food for thought) with the final reconstructed image rcon:
And here is the same image if you comment out the zero padding step (i.e. comment out sino = padarray(sino, floor(size(sino,1)/2), 'both');):
Note the different object size in the reconstructed images with and without zero padding. The object shrinks when the sinogram is zero padded since the radial contents are compressed.

Related

Why does the convolution of two arrays in MATLAB result in ‘NaN’ values?

I am trying to compute the convolution of a curve with a scaled wavelet. Using the MATLAB convolution function https://www.mathworks.com/help/matlab/ref/conv.html, the result has a large number of 'NaN' values in the beginning, and I'd like to understand where these are coming from.
My instinct with convolution is that this should not be the case as I have defined upper bounds at each point of the convolution. The way I visualize this convolution is that the wavelet is flipped, and then moved from left to right over the curve. As it moves, the area below the intersection of the wavelet with the curve is what is stored in the output. If this is the case, the resulting output should exist and be positive at all points of the convolution except where there is no overlap and the result is 0.
% The curve is a set of responses to presented stimuli,
% and the following is an example:
curve = [0.0500, 0.1000, 0.1500, 0.2000, 0.3000, 0.5000, 0.8000;
11.6465, 14.8354, 5.0695, 0.4856, 0.5858, 0.2863, 0.3864];
% The scaled Mexican hat wavelet is as follows:
acuteness = 1000;
[mexh_y, mexh_x] = mexihat(-5,5,acuteness);
max_response = max(curve(2,:));
wav_x = mexh_x/100;
wav_y = mexh_y * (max_response+1);
% Here wav_x are the wavelet’s x-values and wav_y are the wavelet’s y-values.
% I interpolate the original curve to have more values as follows:
max_presentation = max(curve(1,:));
step = max_presentation/acuteness;
sf_vals_for_interpolation = [step:step:max_presentation];
interp_curve = interp1(curve(1,:),curve(2,:),sf_vals_for_interpolation);
% Now I’d like to perform a convolution:
conv_res = conv(interp_curve, wav_y);
As noted in the summary, the resulting convolution contains a number of 'NaN' values that I would like to understand.
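For anyone reproducing this, two small standalone checks (made-up values, not the data above) show where NaN can enter this pipeline: conv propagates any NaN it is given, and interp1 by default returns NaN for query points that fall outside the range of the sample points.
% conv propagates NaN: every output sample whose window overlaps the NaN is NaN
conv([1 2 NaN 4], [1 1 1])
% interp1 returns NaN outside the sampled range unless an extrapolation method
% or fill value is supplied
interp1([0.05 0.1 0.2], [1 2 3], [0.01 0.05 0.15 0.30])
interp1([0.05 0.1 0.2], [1 2 3], [0.01 0.05 0.15 0.30], 'linear', 'extrap')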

Extract line shaped objects

I'm working on images with overlapping line shapes (left plot). Ultimately I want to segment single objects. I'm working with a Hough transform to achieve this and it works well in finding lines of (significantly) different orientation - e.g. represented by the two maxima in the hough space below (middle plot).
The green and yellow lines (left plot) and crosses (right plot) stem from an approach to do something with the thickness of the line. I couldn't figure out how to extract a broad line though, so I didn't follow up.
I'm aware of the ambiguity of assigning the "overlapping pixels". I will address that later.
Since I don't know how many line objects one connected region may contain, my idea is to iteratively extract the object corresponding to the Hough line with the highest activation (here painted in blue), i.e. remove the line-shaped object from the image, so that the next iteration will find only the other line.
But how do I detect which pixels belong to the line-shaped object?
The function hough_bin_pixels(img, theta, rho, P) (from here - shown in the right plot) gives pixels corresponding to the particular line. But that obviously is too thin of a line to represent the object.
Is there a way to segment/detect the whole object that is oriented along the strongest Hough line?
The key is knowing that thick lines in the original image translate to wider peaks on the Hough Transform. This image shows the peaks of a thin and a thick line.
You can use any strategy you like to group all the pixels/accumulator bins of each peak together. I would recommend using multithresh and imquantize to convert it to a BW image, and then bwlabel to label the connected components. You could also use any number of other clustering/segmentation strategies. The only potentially tricky part is figuring out the appropriate thresholding levels. If you can't get anything suitable for your application, err on the side of including too much because you can always get rid of erroneous pixels later.
Here are the peaks of the Hough Transform after thresholding (left) and labeling (right)
Once you have the peak regions, you can find out which pixels in the original image contributed to each accumulator bin using hough_bin_pixels. Then, for each peak region, combine the results of hough_bin_pixels for every bin that is part of the region.
Here is the code I threw together to create the sample images. I'm just getting back into matlab after not using it for a while, so please forgive the sloppy code.
% Create an image
image = zeros(100,100);
for i = 10:90
image(100-i,i)=1;
end;
image(10:90, 30:35) = 1;
figure, imshow(image); % Fig. 1 -- Original Image
% Hough Transform
[H, theta_vals, rho_vals] = hough(image);
figure, imshow(mat2gray(H)); % Fig. 2 -- Hough Transform
% Thresholding
thresh = multithresh(H,4);
q_image = imquantize(H, thresh);
q_image(q_image < 4) = 0;
q_image(q_image > 0) = 1;
figure, imshow(q_image) % Fig. 3 -- Thresholded Peaks
% Label connected components
L = bwlabel(q_image);
figure, imshow(label2rgb(L, prism)) % Fig. 4 -- Labeled peaks
% Reconstruct the lines
[r, c] = find(L(:,:)==1);
segmented_im = hough_bin_pixels(image, theta_vals, rho_vals, [r(1) c(1)]);
for i = 1:numel(r)
seg_part = hough_bin_pixels(image, theta_vals, rho_vals, [r(i) c(i)]);
segmented_im(seg_part==1) = 1;
end
region1 = segmented_im;
[r, c] = find(L(:,:)==2);
segmented_im = hough_bin_pixels(image, theta_vals, rho_vals, [r(1) c(1)]);
for i = 1:numel(r)
seg_part = hough_bin_pixels(image, theta_vals, rho_vals, [r(i) c(i)]);
segmented_im(seg_part==1) = 1;
end
region2 = segmented_im;
figure, imshow([region1 ones(100, 1) region2]) % Fig. 5 -- Segmented lines
% Overlay and display
out = cat(3, image, region1, region2);
figure, imshow(out); % Fig. 6 -- For fun, both regions overlaid on original image

Matlab Template Matching Using FFT

I am struggling with template matching in the Fourier domain in Matlab. Here are my images (the artist is RamalamaCreatures on DeviantArt):
My aim is to place a bounding box around the ear of the possum, like this example (where I performed template matching using normxcorr2):
Here is the Matlab code I am using:
clear all; close all;
template = rgb2gray(imread('possum_ear.jpg'));
background = rgb2gray(imread('possum.jpg'));
%% calculate padding
bx = size(background, 2);
by = size(background, 1);
tx = size(template, 2); % used for bbox placement
ty = size(template, 1);
%% fft
c = real(ifft2(fft2(background) .* fft2(template, by, bx)));
%% find peak correlation
[max_c, imax] = max(abs(c(:)));
[ypeak, xpeak] = find(c == max(c(:)));
figure; surf(c), shading flat; % plot correlation
%% display best match
hFig = figure;
hAx = axes;
position = [xpeak(1)-tx, ypeak(1)-ty, tx, ty];
imshow(background, 'Parent', hAx);
imrect(hAx, position);
The code is not functioning as intended - it is not identifying the correct region. This is the failed result - the wrong area is boxed:
This is the surface plot of the correlations for the failed match:
Hope you can help! Thanks.
What you're doing in your code is actually not correlation at all. You are using the template and performing convolution with the input image. If you recall from the Fourier Transform, the multiplication of the spectra of two signals is equivalent to the convolution of the two signals in time/spatial domain.
Basically, what you are doing is using the template as a kernel and using that to filter the image. You are then finding the maximum response of this output, and that's what is deemed to be where the template is. Where the response is being boxed makes sense, because that region is entirely white, and using the template as the kernel over a region that is entirely white will give you a very large response, which is why it most likely identified that area as the maximum. Specifically, the region will have a lot of high values (~255 or so), and naturally performing convolution between the template patch and this region will give you a very large output, because the operation is a weighted sum. Conversely, even if the template actually sat over a dark area of the image, the output there would be small simply because the pixels are dark - a misleading result, given that the template itself consists largely of dark pixels.
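As a side note (not the fix I recommend below), plain cross-correlation in the frequency domain would conjugate one of the spectra; a minimal sketch of what that single-line change to your code would look like:
% convolution (what the original line computes):
% c = real(ifft2(fft2(background) .* fft2(template, by, bx)));
% cross-correlation (conjugate one spectrum):
c = real(ifft2(fft2(background) .* conj(fft2(template, by, bx))));
% This is still dominated by bright regions, though.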
However, you can certainly use the Fourier Transform to locate where the template is, but I would recommend you use Phase Correlation instead. Basically, instead of computing the plain multiplication of the two spectra, you compute the cross power spectrum. The cross power spectrum R between two signals in the frequency domain is defined as:
R = (Ga o Gb*) / |Ga o Gb*|    (source: Wikipedia)
Ga and Gb are the original image and the template in the frequency domain, and the * denotes the conjugate. The o is what is known as the Hadamard, or element-wise, product. I'd also like to point out that the division of the numerator by the denominator of this fraction is also element-wise. Using the cross power spectrum, if you find the (x,y) location that produces the absolute maximum response, this is where the template should be located in the background image.
As such, you simply need to change the line of code that computes the "correlation" so that it computes the cross power spectrum instead. However, I'd like to point out something very important. When you perform normxcorr2, the correlation starts right at the top-left corner of the image. The template matching starts at this location and it gets compared with a window that is the size of the template, where the top-left corner is the origin. When finding the location of the template match, the location is with respect to the top-left corner of the matched window. Once you compute normxcorr2, you traditionally add half of the template's rows and half of its columns to the location of the maximum response to find the centre location.
Because we are more or less doing the same operations for template matching (sliding windows, correlation, etc.) with the FFT / frequency domain, when you finish finding the peak in this correlation array, you must also take this into account. However, your call to imrect to draw a rectangle around where the template matches takes in the top-left corner of a bounding box anyway, so there's no need to do the offset here. As such, we're going to modify that code slightly, but keep the offset logic in mind for later if you want to find the centre location of the match.
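If you do want the centre of the match later on, the offset would look something like this (a sketch using the variable names from the code below):
% xpeak/ypeak give the top-left corner of the matched window;
% shift by half the template size to get its centre
centerX = xpeak(1) + floor(tx/2);
centerY = ypeak(1) + floor(ty/2);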
I've modified your code as well to read in the images directly from StackOverflow so that it's reproducible:
clear all; close all;
template = rgb2gray(imread('http://i.stack.imgur.com/6bTzT.jpg'));
background = rgb2gray(imread('http://i.stack.imgur.com/FXEy7.jpg'));
%% calculate padding
bx = size(background, 2);
by = size(background, 1);
tx = size(template, 2); % used for bbox placement
ty = size(template, 1);
%% fft
%c = real(ifft2(fft2(background) .* fft2(template, by, bx)));
%// Change - Compute the cross power spectrum
Ga = fft2(background);
Gb = fft2(template, by, bx);
c = real(ifft2((Ga.*conj(Gb))./abs(Ga.*conj(Gb))));
%% find peak correlation
[max_c, imax] = max(abs(c(:)));
[ypeak, xpeak] = find(c == max(c(:)));
figure; surf(c), shading flat; % plot correlation
%% display best match
hFig = figure;
hAx = axes;
%// New - no need to offset the coordinates anymore
%// xpeak and ypeak are already the top left corner of the matched window
position = [xpeak(1), ypeak(1), tx, ty];
imshow(background, 'Parent', hAx);
imrect(hAx, position);
With that, I get the following image:
I also get the following when showing a surface plot of the cross power spectrum:
There is a clear defined peak where the rest of the output has a very small response. That's actually a property of Phase Correlation and so obviously, the location of the maximum value is clearly defined and this is where the template is located.
Hope this helps!
I just ended up implementing the same thing in Python, with ideas similar to @rayryeng's, using the scipy.fftpack.fftn() / ifftn() functions, with the following result on the same target and template images:
import numpy as np
import scipy.fftpack as fp
from skimage.io import imread
from skimage.color import rgb2gray, gray2rgb
import matplotlib.pylab as plt
from skimage.draw import rectangle_perimeter
im = 255*rgb2gray(imread('http://i.stack.imgur.com/FXEy7.jpg')) # target
im_tm = 255*rgb2gray(imread('http://i.stack.imgur.com/6bTzT.jpg')) # template
# FFT
F = fp.fftn(im)
F_tm = fp.fftn(im_tm, shape=im.shape)
# compute the best match location
F_cc = F * np.conj(F_tm)
c = (fp.ifftn(F_cc/np.abs(F_cc))).real
i, j = np.unravel_index(c.argmax(), c.shape)
print(i, j)
# 214 317
# draw rectangle around the best match location
im2 = (gray2rgb(im)).astype(np.uint8)
rr, cc = rectangle_perimeter((i,j), end=(i + im_tm.shape[0], j + im_tm.shape[1]), shape=im.shape)
for x in range(-2,2):
for y in range(-2,2):
im2[rr + x, cc + y] = (255,0,0)
# show the output image
plt.figure(figsize=(10,10))
plt.imshow(im2)
plt.axis('off')
plt.show()
Also, the below animation shows the result obtained while locating a bird's template image inside a set of (target) frames extracted from a video with a flock of birds.
One thing to note: the output depends heavily on how similar the size and shape of the object to be matched are to those in the template; if they differ substantially from the template image, the template may not be matched at all.

How can I use lsqcurvefit for image registration?

I have two 3D images and I need to register them using "lsqcurvefit". I know that I can use "imregister", but I want to write my own registration using "lsqcurvefit" in Matlab. My images follow a Gaussian distribution. It is not well documented how I should set this up; can anyone help me in detail?
Image registration is an iterative process of mapping the source image to the target image using, e.g., an affine transform. I want to use intensity-based registration, and I use all voxels of my image; therefore, I need to fit these two images as closely as possible.
Thanks
Here's an example of how to do point-wise image registration using lsqcurvefit. Basically you make a function that takes a set of points and an Affine matrix (we're just going to use the translate and rotate parts but you can use skew and magnify if desired) and returns a new set of points. There's probably a built-in function for this already but it's only two lines so it's easy to write. That function is:
function TformPts = TransformPoints(StartCoordinates, TransformMatrix)
% Note: lsqcurvefit calls this as fun(params, xdata), i.e. with the 3x3
% transformation matrix as the first argument and the 3xN point list as the
% second, so the product below works out to (transform matrix) * (points).
TformPts = StartCoordinates*TransformMatrix;
Here's a script that generates some points, rotates and translates them by a random angle and vector, then uses the TransformPoints function as the input for lsqcurvefit to fit the needed transformation matrix for the registration. Then it's just a matrix multiplication to generate the registered set of points. If we did this all right the red circles (original data) will line up with the black stars (shifted then registered points) very well when the code below is run.
% 20 random points in x and y between 0 and 100
% row of ones pads out third dimension
pointsList = [100*rand(2, 20); ones(1, 20)];
rotateTheta = pi*rand(1); % add rotation, in radians
translateVector = 10*rand(1,2); % add translation, up to 10 units here
% 2D transformation matrix
% last row pads out third dimension
inputTransMatrix = [cos(rotateTheta), -sin(rotateTheta), translateVector(1);
sin(rotateTheta), cos(rotateTheta), translateVector(2);
0 0 1];
% Transform starting points by this matrix to make an array of shifted
% points.
% For point-wise registration, pointsList represents points from one image,
% shiftedPoints points from the other image
shiftedPoints = inputTransMatrix*pointsList;
% Add some random noise
% Remove this line if you want the registration to be exact
shiftedPoints = shiftedPoints + rand(size(shiftedPoints, 1), size(shiftedPoints, 2));
% Plot starting sets of points
figure(1)
plot(pointsList(1,:), pointsList(2,:), 'ro');
hold on
plot(shiftedPoints(1,:), shiftedPoints(2,:), 'bx');
hold off
% Fitting routine
% Make some initial, random guesses
initialFitTheta = pi*rand(1);
initialFitTranslate = [2, 2];
guessTransMatrix = [cos(initialFitTheta), -sin(initialFitTheta), initialFitTranslate(1);
sin(initialFitTheta), cos(initialFitTheta), initialFitTranslate(2);
0 0 1];
% fit = lsqcurvefit(@fcn, initialGuess, referencePoints, shiftedPoints)
fitTransMatrix = lsqcurvefit(@TransformPoints, guessTransMatrix, pointsList, shiftedPoints);
% Un-shift second set of points by fit values
fitShiftPoints = fitTransMatrix\shiftedPoints;
% Plot it up
figure(1)
hold on
plot(fitShiftPoints(1,:), fitShiftPoints(2,:), 'k*');
hold off
% Display start transformation and result fit
disp(inputTransMatrix)
disp(fitTransMatrix)
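If you also want a quick measure of how well the registration converged, lsqcurvefit can return the residual norm as a second output; a small extension of the call above:
% resnorm is the squared 2-norm of the residual at the solution;
% small values mean the transformed points line up closely with shiftedPoints
[fitTransMatrix, resnorm] = lsqcurvefit(@TransformPoints, guessTransMatrix, pointsList, shiftedPoints);
disp(resnorm)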

Circular Hough transform using edge gradients and orientations

I'm currently busy implementing two circular Hough transforms. The second one is based on an idea described in a paper on iris segmentation (http://www.cse.iitk.ac.in/users/naditya/Aditya_Nigam_icic2012P_C.pdf).
I've tried to implement every step, but this kind of circular detection doesn't give accurate results. Also, when running this algorithm on an image containing a simple circle without noise, the maximum value that appears in the accumulator matrix is mostly around 3 (which means that only 3 edge points were found voting for this center point...). Based on my outcomes, I guess this has something to do with a mistake in my code? Or maybe I forgot to implement a crucial step?
% create binary edge image
edgeImage = double(edge(image,'sobel'));
% create sobel masks
horizontalMaskSobel = fspecial('sobel');
verticalMaskSobel = fspecial('sobel')';
% calculate gradients
gradientX = imfilter(edgeImage,horizontalMaskSobel);
gradientY = imfilter(edgeImage,verticalMaskSobel);
% calculate the orientation for each edge within the image
orientationImage = atan2(gradientY,gradientX);
% create accumulator space
accumulator = zeros(size(edgeImage,1), size(edgeImage,2), numel(radii));
for y = searchwindow % height
    for x = searchwindow % width
        if edgeImage(y,x) > 0 % bright enough?
            for rIdx = 1:numel(radii)
                r = radii(rIdx);
                % two candidate centres, one on either side of the edge point
                cy1 = round(y + r*sin(orientationImage(y,x)-pi/2));
                cx1 = round(x - r*cos(orientationImage(y,x)-pi/2));
                cy2 = round(y - r*sin(orientationImage(y,x)-pi/2));
                cx2 = round(x + r*cos(orientationImage(y,x)-pi/2));
                % vote only if the calculated coordinates fall inside the accumulator
                if cy1 >= 1 && cx1 >= 1 && cy1 <= size(accumulator,1) && cx1 <= size(accumulator,2)
                    accumulator(cy1,cx1,rIdx) = accumulator(cy1,cx1,rIdx) + 1;
                end
                if cy2 >= 1 && cx2 >= 1 && cy2 <= size(accumulator,1) && cx2 <= size(accumulator,2)
                    accumulator(cy2,cx2,rIdx) = accumulator(cy2,cx2,rIdx) + 1;
                end
            end
        end
    end
end
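One way to sanity-check a hand-rolled accumulator like this is to run MATLAB's built-in circle detector on a simple synthetic image and compare the peak locations; a standalone sketch (imfindcircles is from the Image Processing Toolbox and is not part of the algorithm above):
% Synthetic test image: one filled bright disk of radius 20 centred at (100,100)
[xx, yy] = meshgrid(1:200, 1:200);
testImage = double(sqrt((xx-100).^2 + (yy-100).^2) <= 20);
% Reference detection to compare the custom accumulator's maximum against
[centers, radiiFound] = imfindcircles(testImage, [15 25]);
disp(centers)    % ideally close to (100, 100)
disp(radiiFound) % ideally close to 20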