MATLAB ifanbeam() reconstructs blurry image - matlab

We have been using the ifanbeam function to produce an image of a simulated phantom for medical X-ray imaging. The phantom consists of a water disc with three smaller discs inserted, made of bone, fat, and air. The phantom is positioned midway between an X-ray source and a detector (the source-detector distance is 5 cm). The X-ray beam is a fan beam with a 56 deg opening angle, and the phantom rotates around its axis.
Our issue is that the inside of the reconstructed image looks blurry, and it is difficult to see the smaller discs. (Attached: image reconstructed using ifanbeam().)
I've also attached the ground truth image, which I obtained from a different simulation using a parallel beam rather than a fan beam. (Attached: ground truth image reconstructed using iradon().)
The MATLAB code is below. After preprocessing the raw data, we create a 3D array of size 180x240x20, i.e. a stack of single projection images of size 180x240. Note that the raw data consists of only 10 projections; we ran into issues with the FanCoverage parameter, so we padded the sinogram with zeros to artificially add another 10 projections and then set FanCoverage to "cycle".
Has anyone had a similar problem before, or does anyone know how to help?
n = max(size(indices_time)); % indices_time corresponds to the number of events in the simulation
images = zeros(180, 240, nrOfProjections);
for m = 1:n
    images(indices_y(m), indices_x(m), indices_time(m)) = ...
        images(indices_y(m), indices_x(m), indices_time(m)) + 1;
end

% Build the sinogram from the two central detector rows;
% zero-pad to twice the number of projections for FanCoverage "cycle"
sinogram = zeros(240, nrOfProjections*2);
for m = 1:nrOfProjections
    sinogram(:, m) = sum(images(89:90, :, m));
end

theta = 0:18:342; % projection angles in degrees

figure(1)
colormap(gray)
imagesc(sinogram)
movegui('northwest')

rec_fanbeam = ifanbeam(sinogram, 113.5, ...
    "FanCoverage", "cycle", ...
    "FanRotationIncrement", 18, ...
    "FanSensorGeometry", "line", ...
    "FanSensorSpacing", 0.25, ...
    "OutputSize", 100);

figure(2)
colormap(gray)
imagesc(rec_fanbeam)
xlabel('xPos')
ylabel('yPos')
title('Reconstructed image')
movegui('northeast')

Related

Qualitative and Quantitative analysis of filtered back projection / iradon in matlab

I was wondering if anyone has encountered this issue.
I can reconstruct images in MATLAB that resemble the original image; however, the actual values are always different.
For example, the original image has matrix values ranging from 0 to 1, while my reconstructed image ranges from, say, -0.2 to 0.4.
The reconstructed image looks similar to the original, though; the data are just on a different scale.
Here is sample code showing what I mean.
p = phantom(64);        % 64x64 Shepp-Logan phantom, values in [0, 1]
theta = 0:1:179;        % projection angles in degrees
r = radon(p, theta);    % forward projection (sinogram)
ir = iradon(r, theta);  % filtered back projection
figure
subplot(1,2,1); imagesc(p)
subplot(1,2,2); imagesc(ir)
Those results aren't quite what I found.
>> min(min(ir))
-0.0583
>> max(max(ir))
0.9658
Remember that the inverse Radon transform can only approximate the reconstruction of the original image. With only 180 views, there are bound to be some differences.
The Radon transform inherently causes some information to be lost because pixels have to be projected onto a new coordinate system and re-binned - both during projection and back projection. This causes the reconstructed image to be degraded slightly. The Radon transform is not identically invertible like the Fourier Transform.
For better results, try using a larger image size and more viewing angles.
p = phantom(256);
theta = 0:0.01:179;
Also try a different filter (the F in F.B.P.), such as the Shepp-Logan filter, which attenuates high frequencies and lessens overshoot. Remember to recompute r = radon(p, theta) with the new phantom and angles before calling iradon:
ir = iradon(r, theta, 'linear', 'Shepp-Logan');
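Putting those suggestions together in one runnable sketch (using a 0.5 deg step here rather than the extremes above, just to keep the run time modest; the exact output values will vary slightly):

```matlab
% FBP reconstruction demo: larger phantom, finer angular sampling, Shepp-Logan filter
p = phantom(256);                 % ground-truth image, values in [0, 1]
theta = 0:0.5:179.5;              % 360 views at a 0.5 deg increment
r = radon(p, theta);              % forward projection (sinogram)
ir = iradon(r, theta, 'linear', 'Shepp-Logan');

% The reconstructed range should now be much closer to [0, 1]
fprintf('original:      [%.3f, %.3f]\n', min(p(:)), max(p(:)));
fprintf('reconstructed: [%.3f, %.3f]\n', min(ir(:)), max(ir(:)));
```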

3D reconstruction based on stereo rectified edge images

I have two stereo-rectified edge images of a closed curve. Is it possible to find the disparity (along the x-axis, in image coordinates) between the edge images and do a 3D reconstruction, since I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, since these are binary images and window-based matching requires texture. The question is: how do I compute the disparity between the edge images? The images are available at the following links. Left edge image: https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0 Right edge image: https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0
For this type of image, you can easily map each edge pixel in the left image to its counterpart in the right image, and therefore calculate the disparity for those pixels as usual.
The mapping can be done in various ways, depending on how typical these images are; for example, using a DTW-like approach to match curvatures.
For all other pixels in the image, you simply don't have any information.
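As a rough illustration of the DTW idea (a sketch only: the short feature sequences c1, c2 and the use of curvature as the per-pixel feature are assumptions for the demo, not part of the original answer):

```matlab
% Hypothetical sketch: align two ordered edge traces with classic DTW.
% c1, c2 stand in for per-pixel features (e.g. curvature) along each edge.
c1 = [0.1 0.3 0.5 0.4 0.2];
c2 = [0.1 0.2 0.5 0.5 0.3 0.2];
N = numel(c1); M = numel(c2);
D = inf(N+1, M+1);          % accumulated cost matrix with an inf border
D(1,1) = 0;
for i = 1:N
    for j = 1:M
        cost = (c1(i) - c2(j))^2;
        D(i+1, j+1) = cost + min([D(i, j), D(i, j+1), D(i+1, j)]);
    end
end
% Backtrack from (N, M) to (1, 1) to recover the correspondence
i = N; j = M; path = [N M];
while i > 1 || j > 1
    [~, k] = min([D(i, j), D(i, j+1), D(i+1, j)]);
    switch k
        case 1, i = i - 1; j = j - 1;   % diagonal step (match both)
        case 2, i = i - 1;              % advance along c1 only
        case 3, j = j - 1;              % advance along c2 only
    end
    path = [[i j]; path]; %#ok<AGROW>
end
% path now lists matched index pairs (index into c1, index into c2);
% the x-difference of the corresponding pixels gives the disparity
```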
#Photon: Thanks for the suggestion. I did what you suggested and matched each edge pixel in the left and right images in a DTW-like fashion. However, there are some pixels whose y coordinates differ by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth by averaging those differing edge pixels (up to a 2-pixel difference along the y-axis) using a least-squares method. But I ended up getting this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it should actually look like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using the RGB images. I can't think of any other reason why this would be the case, since I compared by traversing along the 408 edge pixels.

Smooth circular data - Matlab

I am currently doing some image segmentation on a bone qCT picture; see for instance the images below.
I am trying to find the different borders in the picture, for instance the outer border separating the bone from the noisy background. From this analysis I get a list of points (vec(1,:) containing the x values and vec(2,:) containing the y values) in random order.
To get them into order, I use a block of code which takes the first point vec(1,1), vec(2,1), finds the closest point among the remaining points, and repeats.
Now my problem is that I want to smooth the data, but how do I do that when the points lie in a circular formation? (I do have the Curve Fitting Toolbox.)
Not exactly a smoothing procedure, but a way to simplify your data would be to compute the boundary of the convex hull of the data.
K = convhull(O(1,:), O(2,:));   % indices of the points on the convex hull
plot(O(1,K), O(2,K));
You could also consider using alpha shapes if you want more control.
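If you want actual smoothing rather than simplification, one option (a sketch only; vec would hold your ordered boundary points, and a noisy synthetic circle stands in for them here) is a moving average with wrap-around padding, so the closed curve has no seam at the start/end point:

```matlab
% Sketch: smooth an ordered closed contour with a circular moving average.
% vec is 2xN: vec(1,:) = x values, vec(2,:) = y values, in traversal order.
t = linspace(0, 2*pi, 200);
vec = [cos(t) + 0.05*randn(size(t)); sin(t) + 0.05*randn(size(t))];
w = 9;                                              % window length (odd)
h = (w-1)/2;
padded = [vec(:, end-h+1:end), vec, vec(:, 1:h)];   % wrap both ends
smoothed = zeros(size(vec));
for k = 1:size(vec, 2)
    smoothed(:, k) = mean(padded(:, k:k+w-1), 2);   % centered average
end
plot(vec(1,:), vec(2,:), '.', smoothed(1,:), smoothed(2,:), '-');
```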

Disparity calculation of two similar images in matlab

I have two images (both are exactly the same image), and I am trying to calculate the disparity between them using the sum of squared differences (SSD) and reconstruct the disparity in 3D space.
Do I need to rectify the images before calculating disparity?
The following are the steps I have done so far for the disparity map computation (I have tried with and without rectification, but both return an all-zeros disparity matrix).
For each pixel X in the left image:
    Take the pixels in the same row in the right image.
    Split that row of the right image into windows.
    For each window:
        Calculate the SSD between X and each pixel in that window.
        Select the pixel in the window which gives the minimum SSD with X.
    Take the pixel with the minimum SSD among all windows as the best match to X.
Am I doing it correctly?
How can I visualise the 3D reconstruction of the disparity as scatter plot in matlab?
Rectification guarantees that matches are found in the same row (for horizontally separated cameras). If you have doubts about the rectification of your images, you can compare rows by drawing horizontal lines across the horizontally separated images. If the lines hit the same features, you are fine; see the picture below, where the images are NOT rectified. The fact that they are distorted means lens distortion correction was applied and rectification was attempted, but not actually performed correctly.
Now, let's look at what you mean by "the same images". Did you mean images of the same object taken from different viewpoints? Note that if the images are literally the same (the same viewpoint), the disparity will be zero, as was noted in another answer. The definition of disparity (for horizontally separated cameras) is the shift (within the same row) between matching features. Disparity is related to depth (if the optical axes of the cameras are parallel) as d = f*B/z, where z is the depth, B is the baseline (the separation between the cameras), and f is the focal length. You can rewrite this as d/B = f/z, which says that disparity relates to camera separation as focal length relates to distance; in other words, the ratios of the horizontal and distance measures are equal.
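A quick numeric instance of that relation (all numbers hypothetical):

```matlab
f = 500;         % focal length in pixels
B = 0.10;        % baseline in metres
z = 5;           % depth of the point in metres
d = f * B / z    % disparity d = 500*0.10/5 = 10 pixels
```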
If your images are taken with the cameras shifted horizontally, the disparity (in a simple correlation algorithm) is typically calculated in 5 nested loops:
loop over image1 y
    loop over image1 x
        loop over disparity d
            loop over correlation window y
                loop over correlation window x
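A direct (unoptimized) MATLAB sketch of those five loops, using a synthetic pair where the left image is the right image shifted by a known disparity (the variable names and numbers are illustrative, not from the answer):

```matlab
% Brute-force SSD block matching (assumes rectified grayscale images).
I2 = rand(60, 80);                        % right image (synthetic stand-in)
I1 = [I2(:, end-3:end), I2(:, 1:end-4)];  % left image: I2 shifted right by 4 px
maxD = 8;                                 % disparity search range
h = 3;                                    % half window size (window = 7x7)
[rows, cols] = size(I1);
D_best = zeros(rows, cols);
for y = 1+h : rows-h                      % loop over image1 y
    for x = 1+h+maxD : cols-h             % loop over image1 x
        best = inf;
        for d = 0:maxD                    % loop over disparity d
            ssd = 0;
            for wy = -h:h                 % loop over correlation window y
                for wx = -h:h             % loop over correlation window x
                    diff = I1(y+wy, x+wx) - I2(y+wy, x+wx-d);
                    ssd = ssd + diff^2;
                end
            end
            if ssd < best
                best = ssd;
                D_best(y, x) = d;
            end
        end
    end
end
% D_best should equal 4 over the computed interior region
```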
Disparity, or D_best, gives you the best-matching window between image1 and image2 across all possible values of d. Finally, scatter plots are for 3D point clouds, while a disparity map is better visualized as a heat map. If you need to visualize a 3D reconstruction, or simply a 3D point cloud, calculate X, Y, Z as
Z = f*B/D, X = u*Z/f, Y = v*Z/f,
where u and v are related to the column and row of a w-by-h image as u = col - w/2 and v = h/2 - row; that is, (u, v) form an image-centered coordinate system.
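Those formulas can be vectorized in MATLAB as follows (a sketch: the focal length, baseline, and the synthetic disparity ramp are placeholder values, and D would normally come from the block matching above):

```matlab
% Sketch: back-project a disparity map D (in pixels) to a 3D point cloud.
f = 500;                          % focal length in pixels (hypothetical)
B = 0.10;                         % baseline in metres (hypothetical)
D = repmat(linspace(2, 10, 80), 60, 1);   % stand-in disparity map
[h, w] = size(D);
[col, row] = meshgrid(1:w, 1:h);
u = col - w/2;                    % image-centered coordinates
v = h/2 - row;
Z = f * B ./ D;                   % depth from disparity
X = u .* Z / f;
Y = v .* Z / f;
valid = D > 0;                    % skip pixels with no disparity estimate
scatter3(X(valid), Y(valid), Z(valid), 1, Z(valid));
```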
If your two images are exactly the same, then the disparity would be 0 for every pixel. You either have to use two separate cameras to take the images, or take them with a single camera from two different locations. The best way to do 3D reconstruction is to use a calibrated stereo pair of cameras. Here is an example of how to do that using the Computer Vision System Toolbox for MATLAB.

Artifacts in image after super resolution using delaunay triangulation in MATLAB

I have to do super-resolution of two low-resolution images to obtain a high-resolution image.
The second image is taken as the base image, and the first image is registered with respect to it. I used the SURF algorithm for image registration. A Delaunay triangulation is constructed over the points using MATLAB's built-in delaunay function. The HR grid is constructed for a prespecified resolution enhancement factor R. The algorithm for interpolating the pixel values on the HR grid is summarized next.
HR algorithm steps:
1. Construct the Delaunay triangulation over the set of scattered vertices in the irregularly sampled raster formed from the LR frames.
2. Estimate the gradient vector at each vertex of the triangulation by calculating the unit normal vectors of the neighbouring triangles using the cross-product method. The sum of each triangle's unit normal weighted by its area, divided by the total area of all neighbouring triangles, gives the vertex normal.
3. Approximate each triangle patch in the triangulation by a continuous and, possibly, continuously differentiable surface, subject to some smoothness constraint. Bivariate polynomials or splines could serve as the approximants.
4. Set the resolution enhancement factor along the horizontal and vertical directions, then calculate the pixel value at each regularly spaced HR grid point to construct the initial HR image.
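As a sanity check for the interpolation step, MATLAB's scatteredInterpolant can stand in for the hand-rolled triangle-patch approximant (a simplification, not the algorithm above; the synthetic sample points replace the registered LR pixels):

```matlab
% Sketch: interpolate irregularly sampled LR pixels onto a regular HR grid.
% xs, ys, vals would come from the registered LR frames; synthetic data here.
rng(0);
xs = rand(500, 1) * 10;
ys = rand(500, 1) * 10;
vals = sin(xs) .* cos(ys);                 % stand-in pixel values
R = 2;                                     % resolution enhancement factor
F = scatteredInterpolant(xs, ys, vals, 'natural', 'none');
[Xq, Yq] = meshgrid(0:1/R:10, 0:1/R:10);   % HR grid at R times the density
HR = F(Xq, Yq);                            % NaN outside the convex hull
imagesc(HR); axis image; colormap(gray);
```

If the scatteredInterpolant output is already clean, the black-and-white speckle or line artifacts are more likely coming from the custom surface fitting or from the input data itself.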
The results are shown below.
For one kind of data set I get a result with a few randomly scattered black and white pixels; for the other type I get thin parallel lines all over the image after super-resolution (results attached).
Can anyone tell me the reason? I suspect it may be demosaicing, but I am not sure because I don't have much understanding of it. It could also be a bug in my code, but then it behaves differently for different images. I increased the size by a factor of two with the super-resolution.