Consider the following code:
P = phantom(256);
theta = 0:1:179;
R = radon(P, theta);
I = iradon(R, theta);
iradon.m calculates the size of the reconstructed image using
N = 2*floor(size(R,1)/(2*sqrt(2)))
But why this formula? It gives N approximately equal to the number of projections divided by the square root of 2. But how does this give the size of the image? Is there a better way to find the size of the image given R and theta?
size(R,1) does not give you the number of projections, but the projection size. Number of projections would be size(R,2).
The output of radon is of size n x m, where n is the size of the individual projections and m is the number of projections. The projection size is larger than the image size: imagine taking a projection at 45 degrees; the projection needs to be about sqrt(2) times as large as the image in order not to lose any information. iradon is just doing the reverse calculation to get the original image size back.
In practice, possibly because of the way MATLAB has implemented radon, the size of your reconstructed image will be slightly larger than the original image.
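A quick way to see the relation numerically (a minimal sketch, assuming the Image Processing Toolbox):
P = phantom(256);
theta = 0:179;
R = radon(P, theta);
N = 2*floor(size(R,1)/(2*sqrt(2)));   % the formula iradon uses internally
I = iradon(R, theta);
fprintf('projection length = %d, N = %d, size(I) = %d x %d\n', ...
    size(R,1), N, size(I,1), size(I,2));
For a 256 x 256 phantom the projection length comes out to 367 and N to 258, so the reconstruction is indeed slightly larger than the original.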
I think it has to do with the largest square that can fit inside a circle whose diameter is the projection width fed into iradon: a circle of diameter D contains a square of side D/sqrt(2), and N = 2*floor(size(R,1)/(2*sqrt(2))) is just that side length rounded down to an even integer.
In MATLAB, I have two similar images, but one has a pixel shift compared to the other. How can I calculate the offset (in pixels) along the x and y axes?
The general problem is called image registration. You can use cross-correlation to find the subpixel image registration, for example with the function dftregistration from the MATLAB File Exchange.
To register the two images A and B to within 0.1 pixels, specify an upsampling factor of 10:
A = imread('img1.png');
B = imread('img2.png');
upsamplingFactor = 10;
% output = [error, diffphase, rowShift, colShift]
output = dftregistration(fft2(A), fft2(B), upsamplingFactor);
disp(output)
0.2584 0.0000 75.5000 -85.5000
We get an image shifted by y = 75.5 and x = -85.5 pixels, with a registration error measure of 0.258. This can be made more accurate by preprocessing the images to reduce noise, for example by thresholding or blurring.
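To then align the images, you can undo the measured shift (a sketch assuming dftregistration's [error, diffphase, rowShift, colShift] output order and the Image Processing Toolbox's imtranslate):
rowShift = output(3);                                % y shift (75.5 here)
colShift = output(4);                                % x shift (-85.5 here)
Baligned = imtranslate(B, [-colShift, -rowShift]);   % imtranslate expects [x, y]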
I have two sets of 3D images (they come in the form of 2D stacks). Image A is at 10 micron resolution, with size 1000 x 1024 x 1017, while image B is at 5 micron, with size 2004 x 2048 x 2036. I would like to run some computations on a randomly chosen set of the 2D slices of A, and then compare the results with the same set of slices of B. However, since B has twice the number of slices for each slice of A, would it be sensible to compare two slices of B to each slice of A? If so, how do I determine exactly which two slices of B make up a slice of A?
While contemplating this, I also thought of blowing A up by a factor of 2 using the imresize function for each 2D slice that I chose for the computation. Would it be okay to compare this resized A with B, considering that I have completely ignored what happens along the z-coordinate?
Thank you.
As you mentioned this is microCT, I am assuming that both images are scans of the same object at different resolutions. This means that pixels have a specific spatial location, not only a value; therefore, in this case there are no pixels (taking a pixel to be an infinitesimally small dot in the center of its cube) that coincide in both images.
So, let's assume that in image A the pixel centers are located at their indices: (1,1,1), (1,1,2), etc. This means that the image starts (pixel boundaries) at "world" coordinate 0.5 and ends at size(imgA)+0.5.
Now, first let's transform the desired pixel coordinates for interpolation into this range. The imgB pixel centers are then at locations (ind-0.5)*size(imgA)/size(imgB)+0.5.
Example: Assume
size(imgA,1)=3; size(imgB,1)=4;
therefore the pixels in imgA are at x locations 1:3. The pixels of imgB are, using the above formula, at [0.8750 1.6250 2.3750 3.1250].
Note how the first pixel center is 0.375 from 0.5 (our image border), and the spacing between adjacent pixel centers is 0.75 (double 0.375).
We scaled a higher resolution image to the same "real world" coordinates.
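You can verify the mapping for this 1-D example directly:
nA = 3; nB = 4;
xB = ((1:nB) - 0.5) * nA/nB + 0.5   % prints 0.8750 1.6250 2.3750 3.1250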
Now to the real stuff.
We need to create the desired coordinates in the reference (A) image. For that, we do:
[y, x, z] = ndgrid( ...
    ((1:size(imgB,1)) - 0.5) * size(imgA,1)/size(imgB,1) + 0.5, ...
    ((1:size(imgB,2)) - 0.5) * size(imgA,2)/size(imgB,2) + 0.5, ...
    ((1:size(imgB,3)) - 0.5) * size(imgA,3)/size(imgB,3) + 0.5);
Now these three arrays hold the coordinates we want. Caution: each of them is of size(imgB)! In total you need RAM for about 5*numel(imgB) doubles to work with this.
Now we can interpolate
imAinB = interp3(imgA, x, y, z, 'linear'); % or 'nearest'; queries outside imgA return NaN
It seems that the function you want is imresize3. You can change one volume to the other's dimensions with:
B = imresize3(V,[numrows numcols numplanes])
You can also explore interpolation methods.
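For the sizes in the question, a minimal sketch (variable names as in the question) would bring B onto A's grid so that slices correspond one-to-one:
BonA = imresize3(B, size(A));                  % resample B to 1000 x 1024 x 1017
k = 500;                                       % any slice index of A
d = double(A(:,:,k)) - double(BonA(:,:,k));    % now directly comparable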
I am using SURF on an image of size 60 x 83, with varying scale levels and MetricThreshold to generate more blobs. But the locations in the points2 vector show coordinates that lie beyond the dimensions of the input image. I really wonder why that is. I need to obtain the exact coordinates of the detected keypoints.
I2 = rgb2gray(Temp); %I2= 60*83 uint8
points2 = detectSURFFeatures(I2,'NumScaleLevels',6,'MetricThreshold',600);
When I display the locations of the detected points in the command window, some of the x coordinates exceed 60, which is what I expected the limit to be.
But if I use the following code instead, then all of the coordinates are inside the image dimensions.
points2 = detectSURFFeatures(I2);
I need to do this using varying scale levels and MetricThreshold. Thanks in advance.
MATLAB stores a matrix as nOfRows x nOfCols, but detectSURFFeatures returns positions as [x, y], i.e. [column, row]:
http://www.mathworks.com/help/vision/ref/surfpoints-class.html
So the results are in range.
What does size(I2) return? From what you wrote, I would expect it to return [60, 83], where 60 is the height of the image (number of rows), and 83 is the width (number of columns). If so, then your results make perfect sense, because the SURFPoints locations are [x,y].
You can also see if your points make sense by visualizing them:
imshow(I2)
hold on
plot(points2)
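You can also sanity-check the coordinates numerically (Location is an M-by-2 array of [x, y] pairs):
loc = points2.Location;    % [x, y] = [column, row]
max(loc(:,1))              % bounded by the image width,  size(I2,2) = 83
max(loc(:,2))              % bounded by the image height, size(I2,1) = 60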
I have an assignment to filter in the frequency domain. It gives me various filters to use in part of the question, but I am just trying to figure out how to add even one. I'm still learning MATLAB and image processing in general.
Assignment Question: Use ind2gray to convert "trees.tif" to a gray-level image. Filter the gray-level image in the frequency domain using the 2D FFT (fft2); after performing the operation you can use the 2D inverse FFT (ifft2) to display the filtered image in the spatial domain, for:
a- a rectangular low-pass filter using cutoff frequencies of (uc = N/8, vc = M/8), where N and M are the row and column sizes of the image.
My current code is:
[I, cmap] = imread('trees.tif');
img = ind2gray(I, cmap);          % indexed image -> grayscale
[rows, columns] = size(img);
imshow(img);
fftO = fft2(double(img));
shifted = fftshift(fftO);         % move the DC component to the centre
logged = log(1 + abs(shifted));   % log magnitude, for display only
imshow(logged, []);
If I understand this correctly, I now have the grey-level image in the frequency domain and need to filter it. I'm confused about the filtering part. I need to make a rectangular low-pass filter using cutoff frequencies. How do I go about applying the cutoff frequencies? I assume I will be using a Gaussian or Butterworth filter, with the filter the same size as the image.
After I figure out this filter thing, I should be able to do (H is the filter):
filtered = shifted .* H;                     % mask the centred spectrum (not its log)
invert = real(ifft2(ifftshift(filtered)));   % undo the fftshift before inverting
imshow(invert, []);
Anyone know how I need to proceed with the filter section?
KevinMc essentially told you what the answer is. fspecial certainly allows you to define certain 2D filters, but it doesn't have rectangular ones. You can create your own, though! You create a filter / mask that is the same size as your image, where the centre of the mask is a rectangle spanning -N/8 to N/8 in height and -M/8 to M/8 in width. Once you do this, you simply multiply it with your image in the frequency domain, and then take the ifft2 as you have specified in your code.
You've got the code correct... you just need to create the mask! As such, use meshgrid to generate a 2D grid of (x,y) co-ordinates, then use Boolean conditions to find those pixels that span from -N/8 to N/8 in height and from -M/8 to M/8 in width, making sure that the centre of the mask is at the origin (0,0). Therefore:
[X,Y] = meshgrid(1:columns, 1:rows);
H = (X - floor(columns/2) >= -columns/8) & (X - floor(columns/2) <= columns/8) & ...
(Y - floor(rows/2) >= -rows/8) & (Y - floor(rows/2) <= rows/8);
H = double(H);
The last line of code casts the mask to double. Strictly speaking, MATLAB will promote a logical array to double in elementwise arithmetic, so the multiplication would work either way, but the explicit cast makes the mask easier to reuse in later numeric operations. H is a logical after applying the Boolean conditions, so convert it to double before you proceed.
As an example, let's say rows = 200 and columns = 200. This means that the rectangular filter should span horizontal frequencies from -25 to 25, and the same for the vertical frequencies, so we should get a passband square of roughly 50 x 50. Running the code produces a white square of that size centred in a 200 x 200 black mask.
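If you want to reproduce and inspect that mask yourself, here is a self-contained version of the same code for the 200 x 200 example:
rows = 200; columns = 200;
[X, Y] = meshgrid(1:columns, 1:rows);
H = (X - floor(columns/2) >= -columns/8) & (X - floor(columns/2) <= columns/8) & ...
    (Y - floor(rows/2) >= -rows/8) & (Y - floor(rows/2) <= rows/8);
imshow(H);   % white ~50 x 50 square centred on a black background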
As such, just use that code, and your mask will be stored in H. You can then use it to filter your image.
Good luck!
I want to calculate the magnetic field implied by a given image using the Biot-Savart law. For example, if I have a picture of a triangle, I say that this triangle forms a closed wire carrying current. Using image derivatives I can get the coordinates and direction of the current (normals included). I am struggling with the implementation and need a bit of help with the logic too. Here is what I have:
Img = imread('littletriangle.bmp');
Img = Img(:,:,1);                 % keep a single channel
Img = double(Img);
[rows, cols] = size(Img);         % size returns rows first, then columns
[Ix, Iy] = gradient(Img);         % gradient marks the wire's location and direction
The Biot-Savart equation is
B = (mu0 / (4*pi)) * sum( I * dl x r_hat / r^2 )
where mu0/(4*pi) is a constant, I is the current magnitude, dl is the wire element, r_hat is the unit vector pointing from the current element to the pixel, and r^2 is the squared magnitude of the displacement between the pixel and the current element.
So, just to start off, I read the image in, turn it into a binary image, and then take the image gradient. This gives me the location and orientation of the 'current'. I now need to calculate the magnetic field from this 'current' at every pixel in the image. I am only interested in getting the magnetic field in the x-y plane. Anything just to start me off would be brilliant!
For a straight wire,
B = mu * I / (2*pi*r)
B is a vector; its direction is perpendicular to the line between the wire and the point of interest. The fastest way to rotate a vector by 90° is to swap the components and negate one, so (x, y) becomes (-y, x).
What about the current? If the current is homogeneous inside the wire (which can be your triangle), then the current per wire element is just the total current normalized over the number of elements.
So how to do this?
First, get the current per pixel (total current / number of wire pixels in the shape).
Then, for each point, calculate B as the sum over all the mini-wires (the wire pixels), using the equation above with r computed via Pythagoras. B is a vector and has a direction, so keep track of it as (x, y) components. A sketch is given at the end of this answer.
Having a 100*100 picture will yield (100*100)*(100*100) evaluations of the B equation, or somewhat fewer if you do not calculate the field from empty space. At the end, B is not just mu * I / (2*pi*r) but the sum over the whole wire, and I becomes dI.
You do not need to apply any derivatives, just integration (a sum).
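A minimal sketch of that summation (assuming Img is a binary image whose nonzero pixels form the wire, the total current is 1, and the constant mu is folded to 1):
[rows, cols] = size(Img);
[Y, X] = ndgrid(1:rows, 1:cols);             % field-point coordinates
wire = find(Img > 0);                        % linear indices of wire pixels
[wy, wx] = ind2sub(size(Img), wire);
dI = 1 / numel(wire);                        % current per wire pixel
Bx = zeros(rows, cols);
By = zeros(rows, cols);
for p = 1:numel(wire)
    rx = X - wx(p);                          % displacement from this wire pixel
    ry = Y - wy(p);
    r2 = rx.^2 + ry.^2;
    r2(r2 == 0) = Inf;                       % no self-contribution
    % magnitude dI/(2*pi*r); direction is (rx, ry) rotated by 90°: (-ry, rx)
    Bx = Bx + dI/(2*pi) * (-ry) ./ r2;       % (1/r) * (-ry/r) = -ry/r^2
    By = By + dI/(2*pi) * ( rx) ./ r2;
end
Bmag = hypot(Bx, By);                        % field magnitude at every pixel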