I have two projection matrices P1 and P2 (for example, from the dinosaur dataset) and I need to compute the fundamental matrix F.
So I use two Matlab functions:
Peter Kovesi's function: www.csse.uwa.edu.au/~pk/Research/MatlabFns/Projective/fundfromcameras.m
Zisserman: www.robots.ox.ac.uk/~vgg/hzbook/code/vgg_multiview/vgg_F_from_P.m
These functions should do the same thing, but I get a different F value from each! How is that possible? Which is the right function?
If two points X1 and X2 are "the same" point in two different images, then X2^T F X1 = 0 ...
So I found corresponding points from two rotated images (5 degrees apart) by using SURF, but X2^T F X1 is never equal to zero with these two functions.
Any ideas?
Instead, if I use this function, which computes F from matched points:
ransac fit fundamental matrix by Peter Kovesi: ransacfitfundmatrix.m
I get X2^T F X1 = 0 .... Obviously this F is different from the two F matrices I got with the other two functions...
Well for one thing, it's overwhelmingly likely that the points aren't perfectly rotated versions of each other. SURF uses a lot of approximations, bi-linear interpolation and a whole slew of things that break true rotational invariance. So there might not exist such a fundamental matrix (if there's no linear relationship between the two sets of points). Yes, this is true even after you do point matching.
That said, your X2^T*F*X1 should probably be small if the matching is really good, but I'd be surprised if it's ever exactly zero for any real image.
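As a quick sanity check, here is a minimal sketch of that test (the names x1 and x2 are assumptions: 3xN homogeneous coordinates of the SURF matches, with F one of your estimates):

residuals = abs(sum(x2 .* (F * x1), 1));   % |x2_i' * F * x1_i| for every match at once
median(residuals)                          % small for good matches, but essentially never exactly zero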
The fundamental matrix is unique only up to a scale.
So, even if you have different fundamental matrices, both can be correct for your images.
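One way to check this in practice is to compare the two estimates after normalizing away the scale (a small sketch; F1 and F2 are assumed to be the outputs of fundfromcameras and vgg_F_from_P):

F1n = F1 / norm(F1, 'fro');                            % remove the arbitrary scale
F2n = F2 / norm(F2, 'fro');
min(norm(F1n - F2n, 'fro'), norm(F1n + F2n, 'fro'))    % near zero if they agree up to scale and sign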
This question of mine is not tightly related to Matlab, but is relevant to it:
I'm looking for how to fill in the matrix [[a,b,c],[d,e,f]] in a few nontrivial ways so that as many entries as possible in
corrcoef([a,b,c],[d,e,f])
are zero. My attempts yield a NaN result in most cases.
Given the current comments, you are trying to understand how two series of random draws from two distributions can have zero correlation. Specifically, exercise 4.6.9 to which you refer mentions draws from two normal distributions.
An issue with your approach is that you are hoping to derive a link between a theoretical property and experimentation, in this case using Matlab. And, as you seem to have noticed, unless you are looking at specific degenerate cases, your experimentation will fail. That is because although the true correlation parameter rho in the exercise might be zero, a sample of random draws will always have some level of correlation. Here is an illustration; as you'll notice if you run it, the actual correlations span the whole spectrum between -1 and 1, despite their average being zero (as it should be, since both generators are pseudo-uncorrelated):
n = 1e4;                                   % number of repeated experiments
experiment = nan(n,1);                     % preallocate the results
for i = 1:n
    r = corrcoef(rand(4,1), rand(4,1));    % sample correlation of two independent 4-point draws
    experiment(i) = r(2);                  % off-diagonal entry of the 2x2 matrix
end
hist(experiment);
title(sprintf('Average correlation: %.4f%%', 100*mean(experiment)));  % x100 so the %% sign is accurate
If you look at the definition of Pearson correlation on Wikipedia, you will see that the only way this can be zero is when the numerator is zero, i.e. E[(X-Xbar)(Y-Ybar)]=0. Though this might be the case asymptotically, you will be hard-pressed to find a non-degenerate case where this will happen in a small sample. Nevertheless, to show you that you can derive some such degenerate cases, let's dig a bit further. If you want the expectation of this product to be zero, you could make either the left-hand or the right-hand factor zero when the other is non-zero. For one side to be zero, the draw must be exactly equal to the average of the draws. Therefore we can imagine creating such a pair of variables using this technique:
we create two vectors of 4 variables, and alternate which draw will be equal to the average.
let's say we want X to average 1, and Y to average 2, and we make even-indexed draws equal to the average for X and odd-indexed draws equal to the average for Y.
one such generation would be: X=[0,1,2,1], Y=[2,0,2,4], and you can check (see the snippet after this list) that corrcoef([0,1,2,1],[2,0,2,4]) does in fact produce an identity matrix. This is because every time a component of X differs from its average of 1, the corresponding component of Y is equal to its average of 2.
another example, where the average of X is 3 and that of Y is 4 is: X=[3,-5,3,11], Y=[1008,4,-1000,4]. etc.
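You can verify both degenerate pairs directly in MATLAB:

corrcoef([0,1,2,1], [2,0,2,4])             % returns the 2x2 identity matrix
corrcoef([3,-5,3,11], [1008,4,-1000,4])    % also returns the identity matrix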
If you wanted to know how to create samples from non-correlated distributions altogether, that would be an entirely different question, though (perhaps) more interesting in terms of understanding statistics. If this is your case, and given the exercise you mention discusses normal distributions, I would suggest you take a look at generating antithetic variables using the Box-Muller transform.
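For reference, a minimal Box-Muller sketch (this generates two independent, hence uncorrelated, standard normals from uniform draws; the antithetic-variable refinement is left to the reading suggested above):

U1 = rand(1e4,1);  U2 = rand(1e4,1);        % independent uniform draws
Z1 = sqrt(-2*log(U1)) .* cos(2*pi*U2);      % Box-Muller pair of standard normals
Z2 = sqrt(-2*log(U1)) .* sin(2*pi*U2);
corrcoef(Z1, Z2)                            % off-diagonals close to, but not exactly, zero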
Happy randomizing!
Here is the given system. I want to plot it, obtain the vector field, and find the angles the vectors make with the x axis. I want to find the index of a closed curve.
I know how to do this theoretically by choosing convenient points and seeing what the vector looks like at each point. Also, I can always use
to compute the angles. However, I am having trouble trying to code it. Please don't mark me down if the question is unclear; I am asking it the way I understand it. I am new to Matlab. Can someone point me in the right direction, please?
This is a pretty hard challenge for someone new to matlab, I would recommend taking on some smaller challenges first to get you used to matlab's conventions.
That said, Matlab is all about numerical solutions so, unless you want to go down the symbolic maths route (and in that case I would probably opt for Mathematica instead), your first task is to decide on the limits and granularity of your simulated space, then define them so you can apply your system of equations to it.
There are lots of ways of doing this - some more efficient - but for ease of understanding I propose this:
Define the axes individually first
xpts = -10:0.1:10;
ypts = -10:0.1:10;
tpts = 0:0.01:10;
The a:b:c syntax gives you the lower limit (a), the upper limit (c) and the spacing (b), so you'll get 201 points for the x. You could use the linspace notation if that suits you better, look it up by typing doc linspace into the matlab console.
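For instance, this one-liner builds the same axis:

xpts = linspace(-10, 10, 201);   % same 201 points as -10:0.1:10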
Now you can create a grid of your coordinate points. You actually end up with three 3d matrices, one holding the x-coords of your space and the others holding the y and t. They look redundant, but it's worth it because you can use matrix operations on them.
[XX, YY, TT] = meshgrid(xpts, ypts, tpts);
From here on you can perform whatever operations you like on those matrices. So to compute x^2.y you could do
x2y = XX.^2 .* YY;
remembering that you'll get a 3d matrix out of it and all the slices in the third dimension (corresponding to t) will be the same.
Some notes
Matlab has a good builtin help system. You can type 'help functionname' to get a quick reminder in the console or 'doc functionname' to open the help browser for details and examples. They really are very good, they'll help enormously.
I used XX and YY because that's just my preference, but I avoid single-letter variable names as a general rule. You don't have to.
Matrix multiplication is the default so if you try to do XX*YY you won't get the answer you expect! To do element-wise multiplication use the .* operator instead. This will do a11 = b11*c11, a12 = b12*c12, ...
To raise each element of the matrix to a given power use .^ rather than ^ for similar reasons. Likewise division (./ rather than /).
You have to make sure your matrices are the correct size for your operations. To do elementwise operations on matrices they have to be the same size. To do matrix operations they have to follow the matrix rules on sizing, as will the output. You will find the size() function handy for debugging.
Plotting vector fields can be done with quiver. To plot the components separately you have more options: surf, contour and others. Look up the help docs and they will link to similar types. The plot family are mainly about lines so they aren't much help for fields without creative use of the markers, colours and alpha.
To plot the curve, or any other contour, you don't have to test the values of a matrix - it won't work well anyway because of the granularity - you can use the contour plot with specific contour values.
Solving systems of dynamic equations is completely possible, but you will be doing a numeric simulation and your results will again be subject to the granularity of your grid. If you have closed form solutions, like your phi expression, they may be easier to work with conceptually but harder to get working in matlab.
This kind of problem is tractable in matlab but it involves some non-basic uses which are pretty hard to follow until you've got your head round Matlab's syntax. I would advise starting with a 2d grid instead:
[XX, YY] = meshgrid(xpts, ypts);
and compute some functions of that like x^2.y or x^2 - y^2. Get used to plotting them using quiver or plotting the coordinates separately in intensity maps or surfaces.
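As a starting point, here is a minimal sketch along those lines (the field is made up purely for practice, since your actual system isn't reproduced here):

xpts = -10:0.5:10;                 % coarser spacing keeps the quiver plot readable
ypts = -10:0.5:10;
[XX, YY] = meshgrid(xpts, ypts);
UU = XX.^2 - YY.^2;                % x-component of a made-up field
VV = XX.^2 .* YY;                  % y-component
quiver(XX, YY, UU, VV);            % arrows of the vector field
hold on
contour(XX, YY, UU, [0 0], 'r');   % a specific contour value, as described above
theta = atan2(VV, UU);             % angle each vector makes with the x axis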
I want to solve the following optimization problem:
Cost Function: 1/2 ||W||^2
Subject to: Y_i(w.X_i - b) >= 1
where X is a 700x3 matrix, Y is a vector storing the class label (valued 1/-1) for each instance, and w.X_i is the dot product of w and X_i.
I am using CVX:
cvx_begin
variable W(3);
variable B;
minimize (0.5*W'*W)
subject to
Y'*(X*W - B) >= 1;
cvx_end
Then I am plotting w1.x1 + w2.x2 - b, which does not seem to be a separating hyperplane. What am I doing wrong?
In short:
when you are doing w1.x1 + w2.x2 - b you are trying to specify a hyperplane at a particular location, which is the same as specifying a particular point on a vector. To do either in a 3D space you need to use all three dimensions, so: w1.x1 + w2.x2 + w3.x3 - b
In more detail:
When performing a linear classification such as this, the task can be viewed in two ways:
Finding a separating hyperplane such that all samples of one class are on one side, and all samples of the other class are on the other side.
Finding a projection of the multidimensional space the samples lie in onto a one-dimensional line, such that there is a point on the line which clearly separates them.
These are identical tasks, since the single dimension in 2 is essentially how far each sample is from the separating hyperplane (and which side said sample is on). I find it helps to bear both of these viewpoints in mind, particularly since the separating hyperplane is the plane orthogonal to the single dimensional vector.
So, in the case you are dealing with, the weight vector w provided by the model is used to project the samples in matrix X onto a single dimensional line and the offset b indicates at which point along this vector the separating hyperplane occurs. By subtracting b from the projected values they are shifted such that this hyperplane is the one orthogonal to the line at point 0 which makes for simple thresholding.
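To see this concretely, here is a rough plotting sketch (assuming W, B, X and Y are the variables from the question, with all three components of W used):

[x1g, x2g] = meshgrid(linspace(min(X(:,1)), max(X(:,1)), 20), ...
                      linspace(min(X(:,2)), max(X(:,2)), 20));
x3g = (B - W(1)*x1g - W(2)*x2g) / W(3);   % solve w1*x1 + w2*x2 + w3*x3 = b for x3
surf(x1g, x2g, x3g, 'FaceAlpha', 0.5);    % the separating plane
hold on
scatter3(X(:,1), X(:,2), X(:,3), 20, Y, 'filled');   % samples colored by label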
Let's assume I have two gmdistribution models that I obtained using
modeldata1=gmdistribution.fit(data1,1);
modeldata2=gmdistribution.fit(data2,1);
Now I have an unknown 'data' observation, and I want to see if it belongs to data1 or data2.
Based on my understanding of these functions, the nlogL output from the posterior, cluster, or pdf commands wouldn't be a good measure, since I am comparing 'data' to two different distributions.
What measure or output should I use to find p(data|modeldata1) and p(data|modeldata2)?
If I understand you correctly, you want to assign a new, unknown, datapoint to either class 1 or class 2 with the descriptors for each class (in this case the mean vector and covariance matrix) found by gmdistribution.fit.
On seeing this new datapoint, let's call it x, you should ask yourself what
p(modeldata1 | x) and p(modeldata2 | x) are, and whichever of these is higher is the class you should assign x to.
So how do you find these? You just apply Bayes' rule and pick whichever one is the larger of:
p(modeldata1 | x) = p(x|modeldata1)p(modeldata1)/p(x)
p(modeldata2 | x) = p(x|modeldata2)p(modeldata2)/p(x)
Here you don't need to calculate p(x), as it is the same in each equation.
So, now you estimate the priors p(modeldata1) and p(modeldata2) from the number of training points in each class (or use some given information) and then calculate
p(x|modeldata1) = 1/((2*pi)^(d/2) * sqrt(det(Sigma1))) * exp(-0.5*(x-mu1)'*inv(Sigma1)*(x-mu1))
where d is the dimensionality of your data, Sigma1 is a covariance matrix, and mu1 is a mean vector. This is then your asked-for p(data|modeldata1). (Just remember to also use the priors p(modeldata1) and p(modeldata2) when you do the classification.)
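In practice you don't need to code the density by hand: MATLAB's pdf method on a gmdistribution object evaluates p(x | model) directly. A small sketch of the full rule (n1 and n2, the training-set sizes, are assumed names):

prior1 = n1 / (n1 + n2);                % estimated priors
prior2 = n2 / (n1 + n2);
lik1 = pdf(modeldata1, x);              % p(x | modeldata1), x given as a row vector
lik2 = pdf(modeldata2, x);              % p(x | modeldata2)
if lik1 * prior1 > lik2 * prior2        % compare unnormalized posteriors
    disp('assign x to modeldata1''s class');
else
    disp('assign x to modeldata2''s class');
end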
I know this was a bit unclear, but hopefully it can help you with a step in the right direction.
EDIT: Personally, I find a visualization such as the one below helpful (taken from Pattern Recognition by Theodoridis and Koutroumbas). Here you have two Gaussian mixtures with some priors and different covariance matrices. The blue area is where you would choose one class, while the gray area is where the other would be chosen.
I have some data, let's say the following vector:
[1.2 2.13 3.45 4.59 4.79]
And I want to get a polynomial function, say f, to fit this data. Thus I want to go with something like polyfit. However, what polyfit does is minimize the sum of squared errors. But what I want is to have
f(1)=1.2
f(2)=2.13
f(3)=3.45
f(4)=4.59
f(5)=4.79
That is to say, I want to manipulate the fitting algorithm so that it will give me the exact points that I already gave as well as some fitted values where exact values are not given.
How can I do that?
I think everyone is missing the point. You said: "I want to manipulate the fitting algorithm so that it will give me the exact points that I already gave as well as some fitted values where exact values are not given."
To me, this means you wish an exact (interpolatory) fit for a listed set, and for some other points, you want to do a least squares fit.
You COULD do that using LSQLIN, by setting a set of equality constraints on the points to be fit exactly, and then allowing the rest of the points to be fit in a least squares sense.
The problem is, this will require a high order polynomial. To be able to fit 5 points exactly, plus some others, the order of the polynomial will be quite a bit higher. And high order polynomials, especially those with constrained points, will do nasty things. But feel free to do what you will, just as long as you also expect a poor result.
Edit: I should add that a better choice is to use a least squares spline, which is something you CAN constrain to pass through a given set of points, while fitting other points in a least squares sense, and still not do something wild and crazy as a result.
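For concreteness, a rough sketch of the LSQLIN approach mentioned above (requires the Optimization Toolbox; the extra points xLS/yLS are made up for illustration):

xExact = 1:5;                              % points to interpolate exactly
yExact = [1.2 2.13 3.45 4.59 4.79];
xLS = [1.5 2.5 3.5];  yLS = [1.7 2.8 4.0]; % hypothetical extra data, fit in the LS sense
deg = 6;                                   % needs more freedom than the 5 constraints alone
V = @(x) bsxfun(@power, x(:), deg:-1:0);   % Vandermonde rows in polyval coefficient order
p = lsqlin(V(xLS), yLS(:), [], [], V(xExact), yExact(:));  % equality-constrained least squares
polyval(p, 3)                              % returns 3.45, one of the constrained points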
Polyfit does what you want. A degree N-1 polynomial can fit N points exactly; thus, when it minimizes the sum of squared errors, it gets 0 (which is what you want).
y=[1.2 2.13 3.45 4.59 4.79];
x = 1:5;
coeffs = polyfit(x,y,4);
Will get you a polynomial that goes through all of your points.
What you ask for is known as Lagrange interpolation. There is a MATLAB File Exchange submission available: http://www.mathworks.com/matlabcentral/fileexchange/899-lagrange-polynomial-interpolation
However, you should note that least squares polynomial fitting is generally preferred to Lagrange interpolation, since the data you have will in principle contain noise, and Lagrange interpolation will fit the noise as well as the data. So if you know that your data actually comes from a degree-M polynomial and you have N data points, where N >> M, then Lagrange interpolation will give you a degree N-1 polynomial that chases that noise.
You have options.
Use polyfit, just give it enough leeway to perform an exact fit. That is:
values = [1.2 2.13 3.45 4.59 4.79];
p = polyfit(1:length(values), values, length(values)-1);
Now
polyval(p,2)   % returns 2.13
Use interpolation / extrapolation
values = [1.2 2.13 3.45 4.59 4.79];
xInterp = 0:0.1:6;
valueInterp = interp1(1:length(values), values, xInterp ,'linear','extrap');
Interpolation provides a lot of options for smoothing, extrapolation etc. For example, try:
valueInterp = interp1(1:length(values), values, xInterp ,'spline','extrap');