MATLAB's fminsearch function - matlab

I have two images I'm trying to co-register - i.e., one could be of a ball in the centre of the picture, the other is of the same ball near the edge, and I'm trying to find the number of pixels I have to move the second image so that the balls would be in the same place. (I'm actually using 3D MRI brain scans, but the principle is the same).
I've written a function that will move the ball left, right, up or down by a given number of pixels as well as another function that compares the correlation of the ball-in-the-centre image with the translated ball-at-the-edge image. When the two balls are in the same place the correlation function will return 0 and a number larger than 0 for other positions.
I'm trying to use fminsearch (documentation) to find the optimal translation for the correlation function's minimum (ie, the balls being in the same place) like so:
global reference_im unknown_im;
starting_trans = [0 0 0];
trans_vector = fminsearch(@correlate_images,starting_trans)
correlate_images.m:
function r = correlate_images(translate)
global reference_im unknown_im;
new_im = move_image(unknown_im,translate(1),translate(2),translate(3));
% This bit is unimportant to the question
% but you can see how I calculate my correlation
r = 1 - corr(reshape(new_im,[],1),reshape(reference_im,[],1));
There are two problems. Firstly, fminsearch insists on passing float values for the translation vector into the correlate_images function. Is there any way to inform it that only integers are necessary? (I would save a large number of CPU cycles!)
Secondly, when I run this program the resulting trans_vector is always the same as starting_trans - I assume this is because no minimum has been found, but is there another reason it's just plain not working?
Many thanks!
EDIT
I've discovered what I think is the reason the output trans_vector is always the same as starting_trans. fminsearch looks at the starting value, then at a small increment in each direction from there; this small increment is always less than one, which means that the result from the correlation will be a perfect match (as move_image will return the same as the input image for sub-pixel movements). I'm going to continue working on convincing MATLAB to only fminsearch over integer values!

First, I'd say that Matlab might not be the best tool for this problem. I'd look at Elastix, which is a pretty user-friendly wrapper around the registration functions in ITK. You get a variety of registration techniques, and the manuals for both programs do a good job of explaining the specifics of image registration.
Second, for this kind of simple translational registration, you can use the FFT. Forward transform both images, multiply one transform by the complex conjugate of the other (pointwise! That is, use .* rather than *, as those are different operations, and the first is what you want), and there should be a peak in the inverse transform whose offset from the origin is the translational amount you need. Numerical Recipes in C has a good explanation; here's a link to an index pdf. The speed difference between the FFT version and the direct correlation version is huge; the FFT is O(N log N), while the correlation method is O(N * M), where M is the number of pixels in your search neighborhood. If you want to allow the entire image to be searched, then correlation becomes O(N * N), which will take much longer than the FFT version. Changing parameters from floats to integers won't solve the problem.
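For concreteness, here is a minimal sketch of that approach in MATLAB (not from the question; it assumes A and B are same-sized 2-D grayscale arrays and treats the shift as circular, so pad the images first if wrap-around is a concern):
FA = fft2(A);
FB = fft2(B);
xc = ifft2(FA .* conj(FB));                     % pointwise product of the transforms, then inverse FFT
[~, idx] = max(abs(xc(:)));                     % the peak location encodes the offset
[dy, dx] = ind2sub(size(xc), idx);
dy = dy - 1; dx = dx - 1;                       % convert to zero-based offsets
if dy > size(A,1)/2, dy = dy - size(A,1); end   % offsets past the halfway point wrap to negative shifts
if dx > size(A,2)/2, dx = dx - size(A,2); end
% circshift(B, [dy dx]) should now roughly line up with A (the sign convention may need flipping).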
The reason the fminsearch function uses floats (if I can guess at the reasons behind the coders' decisions) is that for problems that aren't test problems (ie, spheres in a volume), you often need sub-pixel resolution to perform a correct registration. Take a look at the ITK documentation about the reasons behind this approach.
Third, I'd suggest that a good way to write this program in Matlab (if you still want to do so!) while still forcing integer correlations would be to avoid the fminsearch function, which will want to use floats. Try something like:
startXPos = -10; % these parameters dictate the size of your search neighborhood
startYPos = -10; % corresponds to M in the above explanation
endXPos = 10;
endYPos = 10;
optimalX = 0;
optimalY = 0;
maxCorrVal = -Inf; % the correlation can be negative, so start below any possible value
for i = startXPos:endXPos
    for j = startYPos:endYPos
        % test the correlation of the two images here, where one image is shifted relative to the other
        currCorrVal = Correlate(image1, image2OffsetByiAndj);
        if (currCorrVal > maxCorrVal)
            maxCorrVal = currCorrVal;
            optimalX = i;
            optimalY = j;
        end
    end
end
From here, you just have to write the offset function. This way, you avoid the float problem, and you're also incrementing your translation vector (I don't see any way for that vector to move in your provided functions, which probably explains your lack of movement).
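If circular wrap-around at the borders is acceptable for a first pass, that offset function could be as simple as a circular shift (a sketch using the names from the loop above; for a proper registration you would pad or crop so wrapped pixels don't contaminate the correlation):
image2OffsetByiAndj = circshift(image2, [j i]);   % shift rows by j, columns by i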

There is a very similar demo in the Image Processing Toolbox that uses the normalized cross-correlation function normxcorr2 to perform image registration. To avoid repeating the same thing, check out the demo directly:
Registering an Image Using Normalized Cross-Correlation
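The core of that approach is only a few lines (a rough sketch, not the demo code itself; it assumes template is a patch cut from the moving image and reference is the fixed image, both 2-D grayscale):
c = normxcorr2(template, reference);
[~, idx] = max(abs(c(:)));
[ypeak, xpeak] = ind2sub(size(c), idx);
yoffset = ypeak - size(template,1);   % top-left corner of the best match within reference
xoffset = xpeak - size(template,2);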

Related

Implementation of Radon transform in Matlab, output size

Due to the nature of my problem, I want to evaluate the numerical implementations of the Radon transform in Matlab (i.e. different interpolation methods give different numerical values).
While trying to code my own Radon and compare it to Matlab's output, I found out that my radon projection sizes are different from Matlab's.
So, a bit of intuition about how I compute the number of radon samples needed. Let's do the 2D case.
The idea is that the maximum size occurs when the diagonal (in a rectangular shape, at least) is projected in the radon transform, so diago=sqrt(size(I,1)^2+size(I,2)^2). As we don't want anything left out, n_r=ceil(diago). n_r should be the number of discrete samples the radon transform needs to ensure no data is left out.
I noticed that Matlab's radon output size is always odd, which makes sense, as you would always want a "ray" through the rotation center. And I noticed that there are 2 zeros at the endpoints of the array in all cases.
So in that case, n_r=ceil(diago)+mod(ceil(diago)+1,2)+2;
However, it seems that I get small discrepancies with Matlab.
A MWE:
% Try: 255,256
pixels=256;
I=phantom('Modified Shepp-Logan',pixels);
rd=radon(I,45); % radon expects theta in degrees
size(rd,1)
s=size(I);
diagsize=sqrt(sum(s.^2));
n_r=ceil(diagsize)+mod(ceil(diagsize)+1,2)+2
ans =
   367
n_r =
   365
As Matlab's Radon transform is a function I cannot look into, I wonder what could cause this discrepancy.
I took another look at the problem and I believe this is actually the right answer. From the "hidden documentation" of radon.m (type edit radon.m and scroll to the bottom):
Grandfathered syntax
R = RADON(I,THETA,N) returns a Radon transform with the
projection computed at N points. R has N rows. If you do not
specify N, the number of points the projection is computed at
is:
2*ceil(norm(size(I)-floor((size(I)-1)/2)-1))+3
This number is sufficient to compute the projection at unit
intervals, even along the diagonal.
I did not try to rederive this formula, but I think this is what you're looking for.
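A quick check (my own snippet, using the same 256-pixel phantom as in the question) shows that this formula reproduces the observed size:
I = phantom('Modified Shepp-Logan', 256);
N = 2*ceil(norm(size(I) - floor((size(I)-1)/2) - 1)) + 3   % gives 367
size(radon(I, 45), 1)                                      % also 367, matching the question's output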
This is a fairly specialized question, so I'll offer up an idea without being completely sure it is the answer to your specific question (normally I would pass and let someone else answer, but I'm not sure how many readers of stackoverflow have studied radon). I think what you might be overlooking is the floor function in the documentation for the radon function call. From the doc:
The radial coordinates returned in xp are the values along the x'-axis, which is
oriented at theta degrees counterclockwise from the x-axis. The origin of both
axes is the center pixel of the image, which is defined as
floor((size(I)+1)/2)
For example, in a 20-by-30 image, the center pixel is (10,15).
This gives different behavior for odd- or even-sized problems that you pass in. Hence, in your example ("Try: 255, 256"), you would need a different case for odd versus even, and this might involve (in effect) padding with a row and column of zeros.

Explaining corr2 function in Matlab

Can someone explain the correlation function corr2 in MATLAB to me? I know that it is for comparing the similarity of 2D objects, but in the equation I have doubts about what A and B are (probably the matrices being compared), and also about Amn and Bmn.
I'm not sure how MATLAB executes this function, because I have found several cases where the correlation is not computed for the entire image (matrix); instead the image is divided into blocks, and the blocks of one picture are then compared with the blocks of another picture.
Unlike other functions in MATLAB's documentation, which cite the book a formula is taken from and where it is explained, the documentation for corr2 gives the equation without any reference explaining how it is derived.
The correlation coefficient is a number representing the similarity between 2 images in terms of their pixel intensities.
As you pointed out, this coefficient is calculated with the following formula:
r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / sqrt( [Σ_m Σ_n (A_mn − Ā)²] [Σ_m Σ_n (B_mn − B̄)²] )
Here A and B are the images you are comparing, whereas the subscript indices m and n refer to the pixel location in the image. Basically what Matlab does is compute, for every pixel location in both images, the difference between the intensity value at that pixel and the mean intensity of the whole image, denoted by a letter with a straight line over it (Ā = mean of A, B̄ = mean of B).
As Kostya pointed out, typing edit corr2 in the command window will show you the code used by Matlab to compute the correlation coefficient. The formula is basically this:
a = a - mean2(a);
b = b - mean2(b);
r = sum(sum(a.*b))/sqrt(sum(sum(a.*a))*sum(sum(b.*b)));
where:
a is the input image and b is the image you wish to compare to a.
If we break down the formula, we see that a - mean2(a) and b-mean2(b) are the elements in the numerator of the above equation. mean2(a) is equivalent to mean(mean(a)) or mean(a(:)), that is the mean intensity of the whole image. This is only calculated once.
The 3rd line of code calculates the coefficient. Here sum(sum(a.*b)) calculates the double-sum present in the formula element-wise, that is considering each pixel location separately. Be aware that using sum(a) calculates the sum in every column individually, hence in order to get a single value you need to apply sum twice.
Pretty much the same happens in the denominator, except that the calculations are performed on (a - mean2(a)).^2 and (b - mean2(b)).^2. You can see this as a kind of normalization in which you account for the spread of pixel intensities within each individual image.
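As a quick sanity check (my own throwaway example, not from the answer), the three lines above reproduce what corr2 returns:
a = rand(64);                    % illustrative images
b = a + 0.1*randn(64);
a0 = a - mean2(a);
b0 = b - mean2(b);
r_manual  = sum(sum(a0.*b0)) / sqrt(sum(sum(a0.*a0)) * sum(sum(b0.*b0)));
r_builtin = corr2(a, b);         % agrees with r_manual up to rounding error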
As for your last comment, you can break down an image into small blocks and calculate the correlation coefficient on them; that might save some time for very large images but since everything is vectorized the calculation is quite fast. It might be useful in distributed processing I guess. Of course the correlation coefficient between 2 blocks of images is not necessarily identical to that of the whole image.
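If you do want a per-block map of coefficients, one possibility (a sketch assuming the image dimensions are divisible by the block size) is:
bs = 8;                                       % illustrative block size
rowsplit = bs*ones(1, size(a,1)/bs);
colsplit = bs*ones(1, size(a,2)/bs);
blocksA = mat2cell(a, rowsplit, colsplit);
blocksB = mat2cell(b, rowsplit, colsplit);
rmap = cellfun(@corr2, blocksA, blocksB);     % one coefficient per block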
For the sake of curiosity you can look at this paper which highlights some caveats in using the correlation coefficient for image comparison.
Hope that makes things a bit clearer!

Rate of change in matlab

I have plotted a graph of theta versus time. As time increases, theta decreases up to some point, after which it starts increasing. Now I want to find the rate at which it is decreasing. The equation is theta = exp(-t/tau), and I have to find tau. Can anyone help me, please?
It is not entirely clear from your question where you think that your problem is. But, when I read your question, it sounds like you are trying to fit an equation to some real data. Specifically, it sounds like: (1) you have some real data, (2) only part of the data is interesting to you, and (3) for that interesting data, you want to fit it to the equation theta=exp(-t/tau).
If that is indeed what you want, then you first must find just those data points that you think should be fit with the equation. I would plot your data points and then, by eye, decide which are the ones that are relevant to you. Discard the rest.
Next, you need to fit them to your equation. Since your equation is an exponential, the easiest way to find "tau" is to convert it to a linear equation. When you do this, you get 'log(theta) = -t / tau'. Or, said similarly, log(theta) = -1/tau * t.
If you take the log of all of your theta data points and plot them versus t, you should see a straight line. If this is truly the equation that will match your data, your data points should go through log(theta) = 0.0 at t = 0.0. If so, you can find tau by evaluating the slope of the line: slope = mean(log(theta)./t). Then, tau = -1/slope.
If your data points did not go through zero, you will need to shift them by some time offset so that they do go through zero. Then you can evaluate the slope and get your tau value.
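A rough sketch of that fit (my own illustration; it assumes t and theta are vectors holding only the decaying portion of your data, with theta > 0 so the log is defined):
p = polyfit(t(:), log(theta(:)), 1);   % fit log(theta) = p(1)*t + p(2)
tau = -1/p(1)                          % the slope is -1/tau
Letting polyfit estimate an intercept p(2) also covers the case where the data do not pass through log(theta) = 0 at t = 0.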
This isn't really a Matlab question, by the way. Computationally, this is a very simple problem, so if Matlab is new to you, you might be making this harder than it needs to be. It could just as easily be done in Excel (or any spreadsheet) or whatever tool might be easier to use.

Measuring the entropy of a transition probability matrix in matlab

I'm working on a project which requires analyzing certain graph properties of transition probability matrices, which are constructed as weighted directed graphs.
One of the properties of interest is the entropy of these graphs, which I have yet to find a proper way to measure. The general idea is that I need some sort of measure that allows me to quantify the extent to which a certain graph is "ordered", in order to ascertain the predictive value of the nodes within the graph (i.e. if all the nodes have exactly the same connection patterns, then effectively their predictive value is zero, though this is a very simplistic explanation, as there are many other factors contributing to a node's predictive power).
I've experimented with certain built-in MATLAB commands:
entropy - generally used to determine the entropy of an image
wentropy - to be honest I do not fully understand the proper use of this function, but I've tried using it with the 'shannon' and 'log energy' types and have produced some inconsistent results
This is a very basic script I whipped up to do some testing, which produces two matrices:
a 20*20 matrix constructed with values drawn entirely from a uniform distribution, intended to produce a matrix with a relatively low degree of order - unordgraph
a 20*20 matrix constructed with four 5*5 "patches" in which the values are integers drawn from a uniform distribution over a range significantly larger than one, while the rest of the values are drawn from a uniform distribution on the range 0-1 (as in the previous matrix); this form of graph is more "ordered" than the previous one - ordgraph
When I run the code:
clear all;
n = 50;
gsize = 20;
orderedrange = [100 200];
enttype = 'shannon';
for i = 1:n
    unordgraph = rand(gsize);
    % entvec(1,i) = entropy(unordgraph);
    entvec(1,i) = wentropy(unordgraph,enttype);
    % ordgraph = reshape(1:gsize^2,gsize,gsize);
    ordgraph = rand(gsize);
    ordgraph(1:5,1:5) = randi(orderedrange,5);
    ordgraph(6:10,6:10) = randi(orderedrange,5);
    ordgraph(11:15,11:15) = randi(orderedrange,5);
    ordgraph(16:20,16:20) = randi(orderedrange,5);
    % entvec(2,i) = entropy(ordgraph);
    entvec(2,i) = wentropy(ordgraph,enttype);
end
fprintf('the mean entropy of the unordered graph is: %.4f\n',mean(entvec(1,:)));
fprintf('the mean entropy of the ordered graph is: %.4f\n',mean(entvec(2,:)));
I get outputs such as:
the mean entropy of the unordered graph is: 88.8871
the mean entropy of the ordered graph is: -23936552.0113
I'm not really sure about the meaning of such negative values, as running the same script on a matrix comprised entirely of zeros or ones (and hence maximally ordered) produces a mean entropy of 0.
I have a pretty rudimentary background in graph theory, which makes this task that much more difficult, and I would be really grateful for any help, whether theoretical or algorithmic.
Thanks in advance,
Ron

Matlab recursive curve fitting with custom equations

I have an IxV (current versus voltage) curve. I also have an equation that I want to fit to this IxV curve, so I can adjust its constants. It is given by:
I = I01(exp((V-R*I)/(n1*vth))-1)+I02(exp((V-R*I)/(n2*vth))-1)
vth and R are constants that are already known, so I only need to find I01, I02, n1, n2. The problem is: as you can see, I is dependent on itself. I was trying to use the curve fitting toolbox, but it doesn't seem to work on recursive equations.
Is there a way to make the curve fitting toolbox work on this? And if there isn't, what can I do?
Assuming that I01 and I02 are variables and not functions, then you should set the problem up like this:
a0 = [I01 I02 n1 n2];   % initial guesses for the four parameters
MinFun = @(a) sum(abs(a(1)*(exp((V-R*I)/(a(3)*vth))-1) + a(2)*(exp((V-R*I)/(a(4)*vth))-1) - I));
aout = fminsearch(MinFun,a0);
By subtracting I and summing the absolute differences, the parameter values where the two sides agree everywhere are the values where MinFun is zero (minimized).
No, the CFTB cannot fit such recursively defined functions. And errors in I, since the true value of I is unknown for any point, will create a kind of errors-in-variables problem. All you have are the "measured" values for I.
The problem of errors in I MAY be serious, since any errors in I, or lack of fit, noise, model problems, etc., will be used in the expression itself. Then you exponentiate these inaccurate values, potentially causing a mess.
You may be able to use an iterative approach. Thus something like
% 0. Initialize I_pred
I_pred = I;
% 1. Estimate the values of your coefficients, for this model:
% (The curve fitting toolbox CAN solve this problem, given I_pred)
I = I01(exp((V-R*I_pred)/(n1*vth))-1)+I02(exp((V-R*I_pred)/(n2*vth))-1)
% 2. Generate new predictions for I_pred
I_pred = I01(exp((V-R*I_pred)/(n1*vth))-1)+I02(exp((V-R*I_pred)/(n2*vth))-1)
% Repeat steps 1 and 2 until the parameters from the CFTB stabilize.
The above pseudo-code will work only if your starting values are good, and there are not large errors/noise in the model/data. Even on a good day, the above approach may not converge well. But I see little hope otherwise.
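For illustration, here is one way that iteration might look if you drive the inner fit with fminsearch rather than the Curve Fitting Toolbox (purely a sketch: V and I are assumed to be column vectors of measured data, R and vth the known constants, and the starting guesses and iteration count are placeholders):
I_pred = I;                              % step 0: initialize the predictions with the measured I
a = [1e-9 1e-12 1 2];                    % illustrative starting guesses for [I01 I02 n1 n2]
for k = 1:10                             % repeat steps 1 and 2 a fixed number of times
    model = @(a) a(1)*(exp((V - R*I_pred)/(a(3)*vth)) - 1) + ...
                 a(2)*(exp((V - R*I_pred)/(a(4)*vth)) - 1);
    a = fminsearch(@(a) sum((model(a) - I).^2), a);   % step 1: fit with I_pred held fixed
    I_pred = model(a);                                % step 2: new predictions for the next pass
end
As with the pseudo-code above, this only converges if the starting values are reasonable and the data are not too noisy.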