Due to the nature of my problem, I want to evaluate the numerical implementations of the Radon transform in Matlab (i.e. different interpolation methods give different numerical values).
While trying to code my own Radon transform and compare it to Matlab's output, I found out that my Radon projection sizes are different from Matlab's.
So here is a bit of intuition on how I compute the number of Radon samples needed. Let's do the 2D case.
The idea is that the maximum projection length occurs when the diagonal (of a rectangular image, at least) is projected, so diago=sqrt(size(I,1)^2+size(I,2)^2). As we don't want anything left out, n_r=ceil(diago). n_r should be the number of discrete samples of the Radon transform needed to ensure no data is left out.
I noticed that Matlab's radon output length is always odd, which makes sense, as you would always want a "ray" through the rotation center. I also noticed that there are 2 zeros at the endpoints of the array in all cases.
So in that case, n_r=ceil(diago)+mod(ceil(diago)+1,2)+2;
However, it seems that I get small discrepancies with Matlab.
A MWE:
% Try: 255,256
pixels=256;
I=phantom('Modified Shepp-Logan',pixels);
rd=radon(I,45); % radon expects theta in degrees
size(rd,1)
s=size(I);
diagsize=sqrt(sum(s.^2));
n_r=ceil(diagsize)+mod(ceil(diagsize)+1,2)+2
ans =
   367
n_r =
   365
As Matlab's Radon transform is a function I cannot look into, I wonder where this discrepancy comes from.
I took another look at the problem and I believe this is actually the right answer. From the "hidden documentation" of radon.m (type edit radon.m and scroll to the bottom):
Grandfathered syntax
R = RADON(I,THETA,N) returns a Radon transform with the
projection computed at N points. R has N rows. If you do not
specify N, the number of points the projection is computed at
is:
2*ceil(norm(size(I)-floor((size(I)-1)/2)-1))+3
This number is sufficient to compute the projection at unit
intervals, even along the diagonal.
I did not try to rederive this formula, but I think this is what you're looking for.
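As a quick sanity check (just a sketch, using the 256-by-256 size from the MWE above; no toolbox is needed since only the image size enters the formula), the documented expression reproduces the 367 rows reported by radon:
s = [256 256];                                      % size(I) from the MWE
n_doc = 2*ceil(norm(s - floor((s-1)/2) - 1)) + 3    % prints n_doc = 367, matching size(rd,1)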
This is a fairly specialized question, so I'll offer up an idea without being completely sure it is the answer to your specific question (normally I would pass and let someone else answer, but I'm not sure how many readers of Stack Overflow have studied the Radon transform). I think what you might be overlooking is the floor function in the documentation for the radon function call. From the doc:
The radial coordinates returned in xp are the values along the x'-axis, which is
oriented at theta degrees counterclockwise from the x-axis. The origin of both
axes is the center pixel of the image, which is defined as
floor((size(I)+1)/2)
For example, in a 20-by-30 image, the center pixel is (10,15).
This gives different behavior for odd- or even-sized problems that you pass in. Hence, in your example ("Try: 255, 256"), you would need a different case for odd versus even, and this might involve (in effect) padding with a row and column of zeros.
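To see the odd/even behaviour of that definition concretely, here is a minimal check (the values follow directly from the formula quoted above):
floor((size(zeros(255,255)) + 1)/2)   % [128 128] for a 255-by-255 image
floor((size(zeros(256,256)) + 1)/2)   % also [128 128] for a 256-by-256 image
floor((size(zeros(20,30)) + 1)/2)     % [10 15], matching the doc example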
Here is the given system I want to plot, to obtain the vector field and the angles the vectors make with the x-axis. I want to find the index of a closed curve.
I know how to do this theoretically by choosing convenient points and seeing what the vector looks like at each point. Also, I can always use
to compute the angles. However, I am having trouble trying to code it. Please don't mark me down if the question is unclear; I am asking it the way I understand it. I am new to Matlab. Can someone point me in the right direction, please?
This is a pretty hard challenge for someone new to Matlab; I would recommend taking on some smaller challenges first to get used to Matlab's conventions.
That said, Matlab is all about numerical solutions so, unless you want to go down the symbolic maths route (and in that case I would probably opt for Mathematica instead), your first task is to decide on the limits and granularity of your simulated space, then define them so you can apply your system of equations to it.
There are lots of ways of doing this - some more efficient - but for ease of understanding I propose this:
Define the axes individually first
xpts = -10:0.1:10;
ypts = -10:0.1:10;
tpts = 0:0.01:10;
The a:b:c syntax gives you the lower limit (a), the upper limit (c) and the spacing (b), so you'll get 201 points for x. You could use the linspace notation if that suits you better; look it up by typing doc linspace into the Matlab console.
Now you can create a grid of your coordinate points. You actually end up with three 3d matrices, one holding the x-coords of your space and the others holding the y and t. They look redundant, but it's worth it because you can use matrix operations on them.
[XX, YY, TT] = meshgrid(xpts, ypts, tpts);
From here on you can perform whatever operations you like on those matrices. So to compute x^2*y you could do
x2y = XX.^2 .* YY;
remembering that you'll get a 3d matrix out of it and all the slices in the third dimension (corresponding to t) will be the same.
Some notes
Matlab has a good builtin help system. You can type 'help functionname' to get a quick reminder in the console or 'doc functionname' to open the help browser for details and examples. They really are very good; they'll help enormously.
I used XX and YY because that's just my preference, but I avoid single-letter variable names as a general rule. You don't have to.
Matrix multiplication is the default so if you try to do XX*YY you won't get the answer you expect! To do element-wise multiplication use the .* operator instead. This will do a11 = b11*c11, a12 = b12*c12, ...
To raise each element of the matrix to a given power use .^ rather than ^, for similar reasons. The same goes for division (./).
You have to make sure your matrices are the correct size for your operations. To do elementwise operations on matrices they have to be the same size. To do matrix operations they have to follow the matrix rules on sizing, as will the output. You will find the size() function handy for debugging.
Plotting vector fields can be done with quiver. To plot the components separately you have more options: surf, contour and others. Look up the help docs and they will link to similar types. The plot family are mainly about lines so they aren't much help for fields without creative use of the markers, colours and alpha.
To plot the curve, or any other contour, you don't have to test the values of a matrix - it won't work well anyway because of the granularity - you can use the contour plot with specific contour values.
Solving systems of dynamic equations is completely possible, but you will be doing a numeric simulation and your results will again be subject to the granularity of your grid. If you have closed form solutions, like your phi expression, they may be easier to work with conceptually but harder to get working in matlab.
This kind of problem is tractable in Matlab, but it involves some non-basic uses which are pretty hard to follow until you've got your head around Matlab's syntax. I would advise starting with a 2d grid instead:
[XX, YY] = meshgrid(xpts, ypts);
and compute some functions of it, like x^2*y or x^2 - y^2. Get used to plotting them using quiver, or plotting the components separately as intensity maps or surfaces.
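For instance, a minimal 2d sketch along those lines, using an illustrative field (f, g) = (-y, x) rather than your actual system, and atan2 for the angle with the x-axis:
xpts = -10:0.5:10;
ypts = -10:0.5:10;
[XX, YY] = meshgrid(xpts, ypts);
FF = -YY;                                      % x-component of the example field
GG =  XX;                                      % y-component of the example field
phi = atan2(GG, FF);                           % angle each vector makes with the x-axis
quiver(XX, YY, FF, GG)                         % arrows of the vector field
figure
imagesc(xpts, ypts, phi), axis xy, colorbar    % the angles as an intensity map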
In my project I have huge surfaces of 20,000 points computed by an algorithm. This algorithm sometimes has an error, computing one or more points in a small area incorrectly.
This error cannot be fixed in the algorithm itself, but needs to be detected afterwards.
The error can be seen in the next figure:
As you can see, there is a wrongly computed point that not only breaks the otherwise homogeneous surface, but also destroys the aesthetics of the plot (which is also important in the project).
Sometimes it can be more than one point, but in general no more than 5 or 6. The error is always in the Z axis, so there is no need to check X and Y.
I have been racking my brain to find a reasonably generic algorithm to detect these points.
I thought that maybe taking patches of the surface and averaging the Z values, then detecting the points outside the variance, might work... but I don't think it will always work.
Any ideas?
NOTE: I don't want someone to write code for me, just an idea.
PS: relevant code for the above image:
[x,y] = meshgrid([-2:.07:2]);
Z = x.*exp(-x.^2-y.^2);
subplot(1,2,1)
surf(x,y,Z,gradient(Z))
subplot(1,2,2)
Z(35,35)=Z(35,35)+0.3;
surf(x,y,Z,gradient(Z))
The standard trick is to use a Laplacian, looking for the largest outliers. (This is not unlike what Mohsen proposed in his answer, but is actually a bit easier.) You could probably even do it with conv2, so it would be pretty efficient.
I could offer a few ways to implement the idea. A simple one is to use my gridfit tool, found on the File Exchange. (Gridfit essentially uses a Laplacian for its smoothing operation.) Fit the surface with all points included, then look for the single point that was perturbed the most by the fit. Exclude it, then rerun the fit, again looking for the largest outlier. (With gridfit, you can use weights to give points a zero weight, a simple way to exclude a point or list of points.) When the largest perturbation that was needed is small enough, you can decide to stop the process. A nice thing is gridfit will also impute new values for the outliers, filling in all of the holes.
A second approach is to use the Laplacian directly, in more of a filtering approach. Here, you simply compute a value at each point that is the average of its neighbors to the left, right, above, and below. The single value that disagrees most with its computed average is replaced with a new value, or you can use a weighted average of the new value with the old one. Again, iterate until the process no longer produces anything larger than some tolerance. (This is the basis of an old outlier detection and correction scheme that I recall from the Fortran IMSL libraries, probably dating back roughly 30 years.)
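For what it's worth, here is a minimal sketch of the conv2 version of this idea, run on the example surface from the question above; it only flags the single most suspicious point, and the exclude/refit iteration described above is left out:
[x, y] = meshgrid(-2:.07:2);
Z = x.*exp(-x.^2 - y.^2);
Z(35,35) = Z(35,35) + 0.3;             % inject the bad point, as in the question
K = [0 1 0; 1 -4 1; 0 1 0];            % discrete Laplacian kernel
L = conv2(Z, K, 'same');               % large |L| means a point disagrees with its neighbours
[~, idx] = max(abs(L(:)));             % the most suspicious point
[I, J] = ind2sub(size(L), idx)         % returns 35, 35 for this example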
Since your function seems to vary smoothly, these abrupt changes can be detected by looking at the derivatives. You can:
Take the derivative in one direction
Calculate mean and standard deviation of derivative
Find the outliers by looking for points that are further from the mean than a certain multiple of the standard deviation.
Here is the code:
U=diff(Z);                        % first difference of Z along the columns
V=(U-mean(U(:)))/std(U(:));       % standardize the derivative
surf(x(2:end,:),y(2:end,:),V)     % visualize it (diff drops one row)
V=[zeros(1,size(V,2)); V];        % pad back to the size of Z
V(abs(V)<10)=0;                   % keep only jumps beyond the threshold
V=sign(V);                        % +1 entering a bad patch, -1 just after it
W=cumsum(V);                      % nonzero only between the up- and down-jumps
[I,J]=find(W);                    % row/column indices of the outliers
outliers = [I, J];
For your example, you get this plot for V, with a peak at around 21.7 while the second largest peak is at around 1.9528, so a threshold of 10 seems reasonable.
and running the code returns
outliers =
35 35
The cumsum is needed for the cases where you have a patch of adjacent points that are all incorrect.
I was given this task; I am a noob and need some pointers to get started with centroid calculation in Matlab:
Instead of an image, I was first asked to simulate a 2-dimensional Gaussian distribution, add random noise, and plot the intensities. Now the position of the centroid changes due to the noise, and I need to bring it back to its original position by
- clipping at a level to get rid of the noise, noise reduction by clipping or smoothing, a sliding average (low-pass filter, averaging 3-5 samples), calculating the means, or using a convolution filter kernel, which does the matrix operations that represent the 2-D image.
Since you are a noob, even if we wrote down the answer verbatim you probably wouldn't understand how it works. So instead I'll do what you asked, give you pointers, and you'll have to read the related documentation:
a) to produce a 2-d Gaussian use meshgrid or ndgrid
b) to add noise to the image, look into rand, randn or randi, depending on what exactly you need.
c) to plot the intensities use imagesc
d) to find the centroid there are several ways; try searching SO further, you'll find many discussions. You can also check the MathWorks File Exchange for different implementations.
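To tie those pointers together, here is a rough sketch only; the grid, the 0.1 noise level, the 0.2 clipping threshold and the 3x3 averaging kernel are arbitrary choices, not part of the task:
[X, Y] = meshgrid(linspace(-5, 5, 101));
G  = exp(-(X.^2 + Y.^2)/2);                % 2-D Gaussian centred at (0,0)
Gn = G + 0.1*randn(size(G));               % add random noise
Gc = Gn .* (Gn > 0.2);                     % crude clipping below a threshold
Gs = conv2(Gc, ones(3)/9, 'same');         % 3x3 sliding-average (low-pass) filter
imagesc(Gs), axis image, colorbar          % plot the intensities
cx = sum(sum(Gs .* X)) / sum(Gs(:))        % intensity-weighted centroid, x
cy = sum(sum(Gs .* Y)) / sum(Gs(:))        % intensity-weighted centroid, y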
It's just a basic question. I am fitting lines to scatter points using polyfit.
I have some cases where my scatter points have the same X values and polyfit can't fit a line to them. There has to be something that can handle this situation; after all, it's just a line fit.
I can try swapping X and Y and then fitting a line. Is there an easier method? I have lots of sets of scatter points and want a general method for fitting lines.
The main goal is to find well-fitting lines and drop non-linear features.
First of all, this happens due to the fitting method you are using. When doing polyfit, you are using the least-squares method on the Y (vertical) distance from the line.
[image: least squares on vertical distances from the line; source: une.edu.au]
Obviously, it will not work for vertical lines. By the way, even when you have something close to vertical lines, you might get numerically unstable results.
There are 2 solutions:
Swap x and y, as you said, if you know that the line is almost vertical. Afterwards, compute the inverse linear function.
Use least-squares on the perpendicular distance from the line, instead of the vertical distance (see the image below and the sketch that follows it; more explanation here).
[image: least squares on perpendicular distances to the line; from MathWorld - A Wolfram Web Resource: wolfram.com]
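A minimal sketch of option 2 (total least squares via the principal direction of the centred point cloud); it handles vertical lines without any special-casing:
x = 3 + 0.01*randn(10,1);                  % nearly identical x values (an almost vertical line)
y = (1:10)';
C = [x - mean(x), y - mean(y)];            % centred data
[~, ~, V] = svd(C, 'econ');
d  = V(:,1);                               % direction of the fitted line
p0 = [mean(x), mean(y)];                   % a point on the line
% parametric form of the fit: [x(t), y(t)] = p0 + t*d'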
Polyfit uses linear ordinary least squares and will fail when the abscissae are all identical, since the resulting Vandermonde matrix is rank deficient. I would suggest looking for something of a more statistical nature.
If you wish to research Andrey's method, it usually goes by the names total least squares or orthogonal distance regression: http://en.wikipedia.org/wiki/Total_least_squares
I would tentatively also put forward the possibility of detecting when you have coincident x values, rotating your data about the origin, fitting the line, and then transforming the line back. I cannot say how poorly this would perform; only you can decide whether it is an option, based on your accuracy requirements.
I've got a theoretical curve which was calculated numerically, and an experimental curve (or rather, an array of experimental points). I need to calculate the residuals between these two curves to check the accuracy of the modeling with the least-squares sum method. These matrices (curves) are of different sizes. Is there any function in MATLAB that computes the residuals for two matrices of different sizes?
I thought I'll just elaborate a bit on what Aabaz said in case there are others who might find this useful (Although Aabaz's explanation is probably clear enough for people who have an understanding of the necessary math etc.).
First, I'm assuming you have a 2D plot but it shouldn't be difficult to generalize to ND case.
Basically, for each point in your experimental data (xi, yi), use your "theoretical curve" to estimate yi' for the value xi. This is probably what Aabaz is referring to by making grid step size the same so that you interpolate the points exactly at the x coordinate values xi of your experimental data using the formula for your curve(s).
Next, to measure whether the fit is good, you could for example compute the sum of squared differences:
error = sum( (yi' - yi)^2 ), where i ranges over all points in your experimental data.
Of course, error metrics other than least squares could be used to estimate how well the data fit your model (i.e. your curve), but for most applications least squares is by far the most common.
Hope this helps.
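In MATLAB terms, a sketch of that idea, assuming the theoretical curve is stored as column vectors x_th, y_th and the experimental points as x_exp, y_exp (those names are just placeholders):
y_model   = interp1(x_th, y_th, x_exp, 'linear');   % theoretical values at the experimental x
residuals = y_exp - y_model;
sse       = sum(residuals.^2);                      % least-squares sum of the residuals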