I took 200 projections at a step angle of 1.8 degrees using LabVIEW software. Each image is 2748 x 2748 pixels, uint16. Using MATLAB, I then load the projection images, do the flat-field correction, resize each image by 1/3 and save the result as a .mat file. Then I run the code below for the filtered backprojection.
interp = 'linear'; % interpolation: nearest, linear, spline, pchip, v5cubic
filter = 'Hann';   % filter: Ram-Lak, Shepp-Logan, Cosine, Hamming, Hann, None
for s = 1:916                       % loop over slices
    for i = 1:200                   % gather the 200 projections of slice s
        a(i,:) = proj065(:,s,i);
    end
    a = a';                         % sinogram: detector rows x projection columns
    %figure(3), imagesc(a)
    b = iradon(a, 1.8, interp, filter);
    imagesc(b);
    recon(:,:,s) = b;
    disp(s)                         % show progress
    clear a
end
If I use a filter in this code, I get negative pixel values.
But if I run the code without the filter, I get only positive pixel values.
Any idea why iradon returns negative pixel values in the filtered backprojection?
Thank you.
Nurul
Yes, the FBP (filtered backprojection) algorithm will do that. It can wrongly reconstruct voxels as having negative values, due to noise and discretization of the data. In general there is nothing you can do about it other than clipping those values.
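A minimal sketch of that clipping, assuming recon holds the reconstructed slices as in the code above:
recon(recon < 0) = 0; % clip negative FBP artifacts to zero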
As my PhD is about tomography reconstruction algorithms, I feel contractually obligated (joking) to suggest the use of iterative algorithms, which may give you better images (never worse, often considerably better). Check SART/SIRT or CGLS for this problem.
However, you are calling the function in a fragile way! In tomography, the step size alone is not enough to reconstruct an image; you generally need the exact angles. So rather than a step size, you should pass iradon the array of angles.
In your case, theta should be theta = 0:1.8:358.2 (equivalently, linspace(0, 360 - 360/200, 200)), and you should call iradon(a, theta, ...).
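For instance, a minimal corrected sketch for one slice s, reusing the question's variable names (proj065 is assumed to be laid out as detector x slice x angle, as in the loop above):
theta = 0:1.8:358.2;            % 200 angles at a 1.8-degree step
a = squeeze(proj065(:, s, :));  % sinogram for slice s: detector x angle
b = iradon(a, theta, 'linear', 'Hann');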
Related
I have a small project on moving-object detection with a moving camera, in which I have to use the negative optical-flow vector to compensate for ego motion. I have a video and some particular consecutive frames for which the average of the negative optical-flow vector has to be computed. I have already calculated the optical flow between, say, the (k-1)th and kth frames. I have also calculated the average optical-flow vector V = [u, v], where v is the average horizontal flow and u is the average vertical flow. Now I have to apply the inverse of the optical-flow vector, i.e. -V, to the (k-1)th frame. I'm new to MATLAB and don't know much about it. Please help.
I have tried the code segment below, but the results aren't as expected:
function I1 = reverseOF(I, V)
R = I(:,:,1);
G = I(:,:,2);
B = I(:,:,3);
[m, n] = size(rgb2gray(I));
for i = 1:m
    for j = 1:n
        v1 = [j i];   % current pixel position
        v2 = -V;      % negated average flow
        v3 = v1.*v2;
        R(floor(1+abs(v3(1,2))), floor(1+abs(v3(1,1)))) = R(i,j);
        G(floor(1+abs(v3(1,2))), floor(1+abs(v3(1,1)))) = G(i,j);
        B(floor(1+abs(v3(1,2))), floor(1+abs(v3(1,1)))) = B(i,j);
    end
end
I1 = cat(3, R, G, B);
end
I used the abs() function because otherwise an error was occurring like "attempted to access negative location; index must be a positive integer or logical".
Image A and image B are the images that I used to estimate the optical flow.
This is the result I obtain after applying the above function.
Unfortunately, you can't do this easily. This is quite an advanced research problem, because obtaining the inverse of a vector field on a mesh grid is not an easy problem; actually it's quite hard.
Notice that your vector field (optical flow) starts on a mesh grid, but it doesn't end on one; it ends at arbitrary subpixel positions. Just negating the field, i.e. computing -V, is not enough! The result won't be the inverse!
This is an open research problem; look for example at this 2010 paper, which addresses exactly this issue and proposes a method to create "pseudoinverses".
Now suppose you have that inverse, because you computed it somehow. Your code is quite unsuited to applying it, and the workarounds (abs!) show (no offense) that you are not really understanding what you are doing. For a known vector field {Vx, Vy} whose size equals the image size (if it's not, you can easily figure out how to interpolate it using interp2), the code would look something like:
newimg = zeros(size(I));
[ix, iy] = meshgrid(1:size(I,2), 1:size(I,1)); % pixel grid matching each channel's size
newimg(:,:,1) = interp2(double(I(:,:,1)), ix+Vx, iy+Vy); % this replaces your whole loop
newimg(:,:,2) = interp2(double(I(:,:,2)), ix+Vx, iy+Vy);
newimg(:,:,3) = interp2(double(I(:,:,3)), ix+Vx, iy+Vy);
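One caveat worth adding (my note, not part of the original answer): samples that map outside the image come back as NaN by default; interp2 accepts an extrapolation value as a final argument if you prefer a fill value instead:
newimg(:,:,1) = interp2(double(I(:,:,1)), ix+Vx, iy+Vy, 'linear', 0); % 0 = fill with black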
I want to evaluate the grid quality of a signal where, in the real case, all coordinates differ.
The signal is an ECG signal, and the average lifetime is 75 years.
My task is to evaluate the signal's age at the moment of measurement, which is an inverse problem.
I think a 2D approximation of the 3D case is hard (done here by Abo-Zahhad) with 3 leads (2 on the chest and one at the left leg; MIT-BIH arrhythmia database). The model is of the form
A = f + \epsilon,
where f is a piecewise continuous function in R^2, \epsilon is the error matrix and A is a 2D matrix.
Now I evaluate the average grid distance along the x-axis (time) and the average grid distance along the y-axis (energy).
I think this can be done with MATLAB's Image Processing Toolbox.
However, I am not sure how complete the toolbox's approaches are.
I think a transform approach must be used in the setting of uneven and non-continuous grids. One approach is the exact linear-time Euclidean distance transform of grid-line-sampled shapes by Joakim Lindblad et al.
The method presents a distance transform (DT) which assigns to each image point its smallest distance to a selected subset of image points.
This kind of approach is often the basis of algorithms for many methods in image analysis.
I unsuccessfully tested the case with bwdist (distance transform of a binary image): the chessboard option returns an empty square matrix, while cityblock, euclidean and quasi-euclidean return a full matrix.
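For reference, a minimal sketch of the kind of call I tested (BW here is assumed to be the binary mask of grid-line pixels):
D = bwdist(BW);                 % Euclidean distance to the nearest grid-line pixel
D_cb = bwdist(BW, 'cityblock'); % alternative metric
avg_gap = mean(D(~BW));         % rough measure of the average distance off the grid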
Another candidate approach:
% https://stackoverflow.com/a/29956008/54964
%// retrieve picture
imgRGB = imread('dummy.png');
%// detect lines
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
I think this approach will not work here, because the grids are not necessarily continuous and not necessarily ideal.
pdist, based on Lumbreras' answer
In the real examples all coordinates differ, so with real data the pdist options hamming and jaccard are always 1.
The options euclidean, cityblock, minkowski, chebychev, mahalanobis, cosine, correlation, and spearman offer some description of the data.
However, these options now make little sense to me for such full matrices.
I want to estimate how long the signal can live.
Sources
J. Müller, and S. Siltanen. Linear and nonlinear inverse problems with practical applications.
EIT with the D-bar method: discontinuous heart-and-lungs phantom. http://wiki.helsinki.fi/display/mathstatHenkilokunta/EIT+with+the+D-bar+method%3A+discontinuous+heart-and-lungs+phantom Visited 29-Feb 2016.
There is a function in MATLAB called pdist which computes the pairwise distance between all rows of a matrix and lets you choose the type of distance you want to use (Euclidean, cityblock, correlation, ...). Are you after something like this? Not sure I understood your question!
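A minimal sketch (my addition; X, t and e are hypothetical names for your grid coordinates):
X = [t(:) e(:)];              % N-by-2 matrix: time and energy coordinate per grid point
D = pdist(X);                 % Euclidean by default
D_cb = pdist(X, 'cityblock'); % other metrics by name
Dmat = squareform(D);         % N-by-N symmetric distance matrix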
cheers!
Simply put: do not do this in post-processing. Those artifacts can stem from the rasterized image, from the viewer, and/or other sources. Do quality assurance in the signal generation/processing step instead.
It is much easier to evaluate the original signal than its rendered views.
Due to the nature of my problem, I want to evaluate the numerical implementations of the Radon transform in Matlab (i.e. different interpolation methods give different numerical values).
While trying to code my own radon transform and compare it to MATLAB's output, I found that my radon projection sizes differ from MATLAB's.
So, a bit of intuition on how I compute the number of radon samples needed. Let's do the 2D case.
The idea is that the maximum extent occurs when the diagonal (of a rectangular image, at least) is projected in the radon transform, so diago = sqrt(size(I,1)^2 + size(I,2)^2). As we don't want anything left out, n_r = ceil(diago). n_r should be the number of discrete samples the radon transform needs so that no data is left out.
I noticed that the size of MATLAB's radon output is always odd, which makes sense, as you would always want a "ray" through the rotation center. And I noticed that there are 2 zeros at the endpoints of the array in all cases.
So in that case, n_r=ceil(diago)+mod(ceil(diago)+1,2)+2;
However, it seems that I get small discrepancies with Matlab.
A MWE:
% Try: 255,256
pixels=256;
I=phantom('Modified Shepp-Logan',pixels);
rd=radon(I,pi/4); % note: radon expects the angle in degrees; the output size does not depend on the angle
size(rd,1)
s=size(I);
diagsize=sqrt(sum(s.^2));
n_r=ceil(diagsize)+mod(ceil(diagsize)+1,2)+2
ans =
   367
n_r =
   365
As MATLAB's radon transform is a function I cannot look into, I wonder where this discrepancy could come from.
I took another look at the problem and I believe this is actually the right answer. From the "hidden documentation" of radon.m (type edit radon.m and scroll to the bottom):
Grandfathered syntax
R = RADON(I,THETA,N) returns a Radon transform with the
projection computed at N points. R has N rows. If you do not
specify N, the number of points the projection is computed at
is:
2*ceil(norm(size(I)-floor((size(I)-1)/2)-1))+3
This number is sufficient to compute the projection at unit
intervals, even along the diagonal.
I did not try to rederive this formula, but I think this is what you're looking for.
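As a quick check (my sketch; the output size does not depend on the projection angle), the documented formula reproduces the observed size:
I = phantom('Modified Shepp-Logan', 256);
N = 2*ceil(norm(size(I) - floor((size(I)-1)/2) - 1)) + 3 % gives 367
size(radon(I, 45), 1)                                    % also 367
For a 255-by-255 input, both this formula and the question's n_r give 363; the two only disagree for the even size.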
This is a fairly specialized question, so I'll offer up an idea without being completely sure it is the answer to your specific question (normally I would pass and let someone else answer, but I'm not sure how many readers of Stack Overflow have studied the radon transform). I think what you might be overlooking is the floor function in the documentation for the radon function call. From the doc:
The radial coordinates returned in xp are the values along the x'-axis, which is
oriented at theta degrees counterclockwise from the x-axis. The origin of both
axes is the center pixel of the image, which is defined as
floor((size(I)+1)/2)
For example, in a 20-by-30 image, the center pixel is (10,15).
This gives different behavior for odd- or even-sized problems that you pass in. Hence, in your example ("Try: 255, 256"), you would need a different case for odd versus even, and this might involve (in effect) padding with a row and column of zeros.
I couldn't find an answer for RGB images.
How can one get the SD, mean and entropy of an RGB image using MATLAB?
From Table 3 of http://airccse.org/journal/ijdms/papers/4612ijdms05.pdf, it seems the author got a single value per image; did he take the average of the RGB values?
Really in need of any help.
After reading the paper: because you are dealing with colour images, you have three channels of information to access. This means that you could alter one of the channels of a colour image and it could still affect the information it's trying to portray. The author wasn't very clear on how they obtained just a single value to represent the overall mean and standard deviation. Quite frankly, because this paper was published in a no-name journal, I'm not surprised they managed to get away with it. Had this been submitted to a better-known venue (IEEE, ACM, etc.), it would probably have been rejected outright due to that very ambiguity.
As for how I interpret this procedure: averaging all three channels doesn't make sense, because you want to capture the differences over all channels. Averaging will smear that information, and those differences get lost. Practically speaking, if you averaged all three channels and one channel changed its intensity by 1, the change in the reported average would be so small that it probably would not register as a meaningful difference.
In my opinion, what you should perhaps do is treat the entire RGB image as a 1D signal, then perform the mean, standard deviation and entropy of that image. As such, given an RGB image stored in image_rgb, you can unroll the entire image into a 1D array like so:
image_1D = double(image_rgb(:));
The double casting is important because you want to maintain floating point precision when calculating the mean and standard deviation. The images will probably be of an unsigned integer type, and so this casting must be done to maintain floating point precision. If you don't do this, you may have calculations that get saturated or clamped beyond the limits of that data type and you won't get the right answer. As such, you can calculate the mean, standard deviation and entropy like so:
m = mean(image_1D);
s = std(image_1D);
e = entropy(image_1D);
entropy is a function in MATLAB that calculates the entropy of images, so you should be fine here. As noted by @CitizenInsane in his answer, entropy unrolls a grayscale image into a 1D vector and applies the Shannon definition of entropy to this vector. By the same token, you can do the same thing with an RGB image; we have already unrolled the signal into a 1D vector anyway, so the input to entropy is certainly well suited for the unrolled RGB image.
I have no idea how the author actually did it. But what you could do is treat the image as a 1D array of size W*H*3 and then simply calculate the mean and standard deviation.
I don't know if Table 3 was obtained in the same way, but at least looking at the entropy routine in MATLAB's Image Processing Toolbox, the RGB values are vectorized into a single vector:
I = imread('rgb.png'); % Read RGB values ('rgb.png' is a placeholder filename)
I = I(:); % Vectorization of RGB values
p = imhist(I); % Histogram
p(p == 0) = []; % remove zero entries in p
p = p ./ numel(I); % normalize p so that sum(p) is one.
E = -sum(p.*log2(p));
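As a quick sanity check (my addition; I0 is assumed to be the original uint8 RGB image, kept before the vectorization step above), the built-in should report the same value, since it vectorizes N-D input internally:
E_builtin = entropy(I0); % should match E for a uint8 image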
I have real-world 3D points which I want to project onto a plane. Most of the intensity values [0-1] fall in the lower region (near zero).
Please see the 'before' image attached below.
I tried to normalize the values:
Col_ = Intensity;  % before: max(Col_) = 0.46, min(Col_) = 0.06
Col = (Col_ - min(Col_)) / (max(Col_) - min(Col_));
                   % after:  max(Col) = 1, min(Col) = 0
But most values still fall in the lower region (near zero).
Please see the second figure, after normalization.
The result is still mostly a black region. Any suggestions? How can I stretch my intensity information?
Regards!
It looks like you have already normalized as much as you can with linear scaling. If you want to get more contrast, you will have to give up preserving the original scaling and use a non-linear equalization.
For example: http://en.wikipedia.org/wiki/Histogram_equalization
If you have the image processing toolbox, matlab will do it for you:
http://www.mathworks.com/help/toolbox/images/ref/histeq.html
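A minimal sketch, assuming Col is the normalized [0,1] intensity vector from the question:
Col_eq = histeq(Col); % histogram equalization; histeq accepts double intensity data in [0,1]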
It looks like you have very few values outside the first bin. If you don't need to preserve the uniqueness of the intensities, you could just scale by a larger amount and clip the few values that exceed 1.
When I normalize intensities I do something like this:
Col = Col - min(Col(:));
Col = Col/max(Col(:));
This will normalize your data points to the range [0,1].
Now, since you have many small values, you might be able to make out small changes better through log scaling.
Col_scaled = log(1+Col);
Linear scaling with such data rarely works for me. Using the log function is akin to tweaking gamma for visualization purposes.
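If you want the log-compressed values back in [0,1] for display, note that with Col in [0,1], log(1+Col) spans [0, log(2)] (my addition, simple arithmetic):
Col_scaled = Col_scaled / log(2); % rescale to [0,1]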
I think the only thing you can do here is reduce the range.
After normalization do the following:
t = 0.1;
Col(Col > t) = t;
This will simply truncate the range of the data, which may be sufficient for what you are doing. Then you can re-normalize again if you wish.
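A minimal sketch of that re-normalization (the clipped data now spans [0, t]):
Col = Col / t; % back to [0,1]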