Epipolar lines with known rotation and translation - matlab

I want to calculate the epipolar lines for the interest points between two images. I am working on the fountain dataset, so I have the rotation and translation matrices, as well as the camera matrix. I currently use Matlab in order to be fast, but the version I have is quite old (2009).
I am calculating the essential matrix through E=t*R and then the epipolar line with l=E*P, where P is the interest point/set of interest points. I then get a vector with three entries, which I guess are the line parameters of ax+by+c=0. The epipolar line drawn on the right image is totally wrong, far away from the corresponding point on the left image. Any idea???
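For context, a line given as l = [a; b; c] with a*x + b*y + c = 0 can be overlaid on the image with something like the following minimal sketch (imgRight and l are placeholder names, not from the original post):
x = 1:size(imgRight, 2);        % all column positions in the right image
y = -(l(1)*x + l(3)) / l(2);    % solve a*x + b*y + c = 0 for y
imshow(imgRight); hold on;
plot(x, y, 'r');                % overlay the epipolar line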
Edit: Used dataset --> fountain benchmark, images 0000 and 0001 http://cvlabwww.epfl.ch/~strecha/multiview/denseMVS.html
Output: Essential matrix e.g. for point P1=[433.36;861.15;1]
E =
0.761857065048902 1.969487475012598 40.418915885686594
-0.927781947178923 0.698934833377211 33.173562943087106
-45.044061511303227 -26.573128396975097 1.000000000000000
It has two complex-conjugate eigenvalues.
Epipolar line: 1.0e+004 *
0.206660143270238
0.023299771007641
-4.240274401559348

Finally I found the solution to my problem. I post it here in case somebody else is interested.
To calculate the relative rotation and translation matrices correctly, the roto-translation matrix has to be used. This is a 4x4 matrix per image: the upper-left 3x3 block is the rotation (wrt the world coordinate system), the fourth column holds the translation vector (wrt the world coordinate system), and the last row is [0 0 0 1]. Given two such matrices for two images, the relative roto-translation matrix is Qright-->left = inv(Qright)*Qleft. From this matrix we extract the relative translation t (fourth column) and the relative rotation R (upper-left 3x3 block). Then we create the skew-symmetric matrix T from t. The essential matrix is E = R*T. But this isn't enough: to calculate the epipolar lines correctly, the fundamental matrix F has to be found. For a dataset such as the one I used, the camera matrices K are given, so this is easy: F = inv(Kright')*E*inv(Kleft), where (') is the transpose and inv() the matrix inverse. The epipolar lines in the right image are then calculated as lines = F*P, where P is the point in homogeneous coordinates. A Matlab sketch of these steps follows below.
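A minimal sketch of the procedure above, assuming Qleft and Qright are the per-image 4x4 roto-translation matrices and Kleft, Kright the camera matrices (all variable names are placeholders, not from the original post):
Q = inv(Qright) * Qleft;            % relative roto-translation, right <-- left
R = Q(1:3, 1:3);                    % relative rotation
t = Q(1:3, 4);                      % relative translation
T = [   0   -t(3)  t(2);            % skew-symmetric matrix of t
      t(3)    0   -t(1);
     -t(2)  t(1)    0  ];
E = R * T;                          % essential matrix
F = inv(Kright') * E * inv(Kleft);  % fundamental matrix
l = F * P;                          % epipolar line [a; b; c] in the right image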
Thank you!

There are lots of documents that can be found online explaining epipolar geometry and how to find epipolar lines in stereo images. Here is one. It walks you through the different concepts decently. The trick to this topic, I found, is keeping track of the variables, which are ultimately the result of matrix transformations and implied (professor shortcuts) algebraic operations.
My recommendation would be looking at page 12 of the link I've provided and applying it to your scenario. Without any data to go off of other than the description you've provided, it's impossible to work out the problem.
Good luck.
Note: sorry to hear your Matlab version is old. I know that 2013 has built-in functions for this stuff, but I'm not sure if 2009 does, because MathWorks requires an account to read older documentation.

Related

How to compute distance and estimate quality of heterogeneous grids in Matlab?

I want to evaluate the grid quality in the real case, where all coordinates differ.
The signal is an ECG signal, for which the average lifetime is 75 years.
My task is to evaluate its age at the moment of measurement, which is an inverse problem.
I think the 2D approximation of the 3D case is hard (done here by Abo-Zahhad) with 3 leads (2 on the chest and one at the left leg; MIT-BIH arrhythmia database):
where f is a piecewise continuous function in R^2, \epsilon is the error matrix and A is a 2D matrix.
Now, I evaluate the average grid distance in x-axis (time) and average grid distance in y-axis (energy).
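One simple way to compute those averages, as a minimal sketch, assuming x and y are vectors holding the grid coordinates along the time and energy axes (placeholder names, not the real data):
dx = mean(diff(sort(x)));   % average grid distance along the x-axis (time)
dy = mean(diff(sort(y)));   % average grid distance along the y-axis (energy)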
I think this can be done by Matlab's Image Analysis toolbox.
However, I am not sure how complete the toolbox's approaches are.
I think a transform approach must be used in the setting of uneven and non-continuous grids. One approach is exact linear-time Euclidean distance transforms of grid-line-sampled shapes by Joakim Lindblad et al.
The method presents a distance transform (DT) which assigns to each image point its smallest distance to a selected subset of image points.
This kind of approach is often a basis of algorithms for many methods in image analysis.
I tested the case with bwdist (distance transform of binary image) unsuccessfully: the chessboard option returns an empty square matrix, while the cityblock, euclidean and quasi-euclidean options return a full matrix.
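For comparison, a minimal bwdist example on toy data (a 5x5 image with a single foreground pixel, not the real grids), showing the expected kind of output:
BW = false(5); BW(3,3) = true;           % one foreground pixel in the middle
D_euclidean = bwdist(BW)                 % Euclidean distance to nearest true pixel
D_chessboard = bwdist(BW, 'chessboard')  % same, with the chessboard metric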
Another piece of pseudocode:
% https://stackoverflow.com/a/29956008/54964
%// retrieve picture
imgRGB = imread('dummy.png');
%// detect lines
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
where I think the approach will not work here because the grids are not necessarily continuous and not necessarily ideal.
pdist based on Lumbreras' answer
In the real examples, all coordinates differ, such that the pdist options hamming and jaccard are always 1 with real data.
The options euclidean, cityblock, minkowski, chebychev, mahalanobis, cosine, correlation, and spearman offer some description of the data.
However, these options now make little sense to me for such full matrices.
I want to estimate how long the signal can live.
Sources
J. Müller, and S. Siltanen. Linear and nonlinear inverse problems with practical applications.
EIT with the D-bar method: discontinuous heart-and-lungs phantom. http://wiki.helsinki.fi/display/mathstatHenkilokunta/EIT+with+the+D-bar+method%3A+discontinuous+heart-and-lungs+phantom Visited 29 Feb 2016.
There is a function in Matlab called pdist which computes the pairwise distance between all row elements in a matrix and lets you choose the type of distance you want to use (Euclidean, cityblock, correlation). Are you after something like this? Not sure I understood your question!
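For example, a minimal sketch with toy data (assuming the Statistics Toolbox, which provides pdist and squareform):
X = [0 0; 1 0; 0 2];           % three 2-D points, one per row
D = pdist(X, 'cityblock')      % pairwise city-block distances as a vector
Dsq = squareform(D)            % the same distances as a symmetric matrix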
cheers!
Simply, do not do it in post-processing. Those artifacts of the body can be about the raster images, about the viewer and/or ... Do quality assurance in the signal generation/processing step.
It is much easier to evaluate the original signal than its views.

Implementation of Radon transform in Matlab, output size

Due to the nature of my problem, I want to evaluate the numerical implementations of the Radon transform in Matlab (i.e. different interpolation methods give different numerical values).
While trying to code my own Radon transform and compare it to Matlab's output, I found out that my Radon projection sizes are different from Matlab's.
So, a bit of intuition on how I compute the number of Radon samples needed. Let's do the 2D case.
The idea is that the maximum size occurs when the diagonal (in a rectangular shape at least) is projected in the Radon transform, so diago=sqrt(size(I,1)^2+size(I,2)^2). As we don't want anything left out, n_r=ceil(diago). n_r should be the number of discrete samples the Radon transform needs to ensure no data is left out.
I noticed that the size of Matlab's radon output is always odd, which makes sense, as you would always want a "ray" through the rotation center. And I noticed that there are 2 zeros at the endpoints of the array in all cases.
So in that case, n_r=ceil(diago)+mod(ceil(diago)+1,2)+2;
However, it seems that I get small discrepancies with Matlab.
A MWE:
% Try: 255,256
pixels=256;
I=phantom('Modified Shepp-Logan',pixels);
rd=radon(I,pi/4);
size(rd,1)
s=size(I);
diagsize=sqrt(sum(s.^2));
n_r=ceil(diagsize)+mod(ceil(diagsize)+1,2)+2
ans =
367
n_r =
365
As Matlab's Radon transform is a function I can not look into, I wonder why could it be this discrepancy.
I took another look at the problem and I believe this is actually the right answer. From the "hidden documentation" of radon.m (type in edit radon.m and scroll to the bottom)
Grandfathered syntax
R = RADON(I,THETA,N) returns a Radon transform with the
projection computed at N points. R has N rows. If you do not
specify N, the number of points the projection is computed at
is:
2*ceil(norm(size(I)-floor((size(I)-1)/2)-1))+3
This number is sufficient to compute the projection at unit
intervals, even along the diagonal.
I did not try to rederive this formula, but I think this is what you're looking for.
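A quick way to check the documented formula against the actual output, using the same phantom as in the question (the angle is irrelevant to the row count):
I = phantom('Modified Shepp-Logan', 256);
n = 2*ceil(norm(size(I) - floor((size(I)-1)/2) - 1)) + 3  % documented formula: 367
rd = radon(I, 45);
size(rd, 1)                                               % also 367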
This is a fairly specialized question, so I'll offer up an idea without being completely sure it is the answer to your specific question (normally I would pass and let someone else answer, but I'm not sure how many readers of stackoverflow have studied radon). I think what you might be overlooking is the floor function in the documentation for the radon function call. From the doc:
The radial coordinates returned in xp are the values along the x'-axis, which is
oriented at theta degrees counterclockwise from the x-axis. The origin of both
axes is the center pixel of the image, which is defined as
floor((size(I)+1)/2)
For example, in a 20-by-30 image, the center pixel is (10,15).
This gives different behavior for odd- or even-sized problems that you pass in. Hence, in your example ("Try: 255, 256"), you would need a different case for odd versus even, and this might involve (in effect) padding with a row and column of zeros.
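A small check of that center-pixel definition, including the 20-by-30 case from the doc, shows the odd/even asymmetry:
floor(([20 30] + 1) / 2)     % [10 15], the doc's example
floor(([255 255] + 1) / 2)   % [128 128] for the odd size
floor(([256 256] + 1) / 2)   % also [128 128] for the even size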

Explaining corr2 function in Matlab

Can someone explain the correlation function corr2 in MATLAB to me? I know that it is for comparing the similarities of 2D objects, but in the equation I have doubts about what A and B are (probably the matrices being compared), and also about Amn and Bmn.
I'm not sure how MATLAB executes this function, because I have found in several cases that the correlation is not computed for the entire image (matrix); instead, the image is divided into blocks, and the blocks of one picture are then compared with the blocks of another picture.
In MATLAB's documentation, the corr2 equation is given without a reference to how it is derived, unlike other functions in MATLAB's documentation, which refer to the book the method is taken from and where it is explained.
The correlation coefficient is a number representing the similarity between 2 images in relation with their respective pixel intensity.
As you pointed out, this equation is used to calculate the coefficient:
r = sum_m sum_n (Amn - Ā)(Bmn - B̄) / sqrt( [sum_m sum_n (Amn - Ā)^2] [sum_m sum_n (Bmn - B̄)^2] )
Here A and B are the images you are comparing, whereas the subscript indices m and n refer to the pixel location in the image. Basically what Matlab does is compute, for every pixel location in both images, the difference between the intensity value at that pixel and the mean intensity of the whole image, denoted by a letter with a straight line (a bar) over it.
As Kostya pointed out, typing edit corr2 in the command window will show you the code used by Matlab to compute the correlation coefficient. The formula is basically this:
a = a - mean2(a);
b = b - mean2(b);
r = sum(sum(a.*b))/sqrt(sum(sum(a.*a))*sum(sum(b.*b)));
where:
a is the input image and b is the image you wish to compare to a.
If we break down the formula, we see that a - mean2(a) and b-mean2(b) are the elements in the numerator of the above equation. mean2(a) is equivalent to mean(mean(a)) or mean(a(:)), that is the mean intensity of the whole image. This is only calculated once.
The 3rd line of code calculates the coefficient. Here sum(sum(a.*b)) calculates the double-sum present in the formula element-wise, that is considering each pixel location separately. Be aware that using sum(a) calculates the sum in every column individually, hence in order to get a single value you need to apply sum twice.
That's pretty much the same happening in the denominator, however calculations are performed on a-mean2(a)^2 and b-mean2(b)^2. You can see this a some kind of normalization process in which you consider the pixel intensity difference among each individual image.
As for your last comment, you can break down an image into small blocks and calculate the correlation coefficient on them; that might save some time for very large images but since everything is vectorized the calculation is quite fast. It might be useful in distributed processing I guess. Of course the correlation coefficient between 2 blocks of images is not necessarily identical to that of the whole image.
For the sake of curiosity you can look at this paper which highlights some caveats in using the correlation coefficient for image comparison.
Hope that makes things a bit clearer!

How can I generate a set of n dimensional vectors that contains all integer points in an n-dimensional rectangular prism

Okay, so I'm working on a problem related to quantum chaos and one of the things I need to do is to map the unit cube in n-dimensions to a parallelepiped in n-dimensions and find all integer points in the interior of this parallelepiped. I have been trying to do this using the following scheme:
1. Given the linear map B and the dimension of the cube n, we find the coordinates of the corners of the unit hypercube by converting the numbers j from 0 to 2^n - 1 into their binary representations and turning them into vectors that describe the vertices of the cube.
2. The next step is to apply the map B to each of these vectors, which gives a set of 2^n vectors describing the coordinates of the vertices of the parallelepiped in n dimensions.
3. Now we take the maximum and minimum values attained by any of these vertices in each coordinate direction, i.e. the first element of my vectors might have a maximum value of 4 across all of the vertices and a minimum value of -3, etc. This gives an n-dimensional rectangular prism that contains my parallelepiped plus some extra unwanted space.
4. I now find all points with integer coordinates in this bounding rectangular prism, described as vectors in n dimensions.
5. Finally, I apply the inverse of the map B to each of the points and throw away any points that have any coefficients greater than 1, as they must originally have lain outside my unit hypercube.
My issue arises in step 4: I'm struggling to come up with a way of generating all vectors with integer coordinates in my rectangular hyper-prism such that I can change the number of dimensions n on the fly. Ideally, I'd like to be able to increase n at will until it becomes too computationally heavy to do so, but every method of finding all integer points in the prism I've tried so far has relied on n for-loops to permute each element, and thus I need to rewrite the code every time.
So I guess my question is this: is there any way to code this up so that I can change n on the fly? Also, any thoughts on the idea of the algorithm itself would be appreciated :) It wouldn't surprise me if I've overcomplicated things massively...
EDIT:
Of course, as soon as I posted the question I saw a lovely little link in the side-bar where a clever method for this has already been given: Generate a matrix containing all combinations of elements taken from n vectors
I'll leave this up for the moment just in case anyone has any comments on the method in general, but otherwise (since I can't upvote yet, I'll just say it here): Luis Mendo, you are a hero!

computing PCA matrix for set of sift descriptors

I want to compute a general PCA matrix for a dataset, and I will use it to reduce the dimensionality of SIFT descriptors. I have already found some algorithms to compute it, but I couldn't find a way to compute it using MATLAB.
Can someone help me?
[coeff, score] = princomp(X)
is the right thing to do, but knowing how to use it is a little tricky.
My understanding is that you did something like:
sift_image = sift_fun(img)
which gives you a binary image: sift_feature?
(Even if not binary, this still works.)
Inputs, formulating X:
To use princomp/pca, formulate X so that each column is a numel(sift_image) x 1 vector (i.e. sift_image(:)).
Do this for all your images and line them up as columns in X. X will then be numel(sift_image) x num_images.
If your images aren't the same size (e.g. pixel dimensions different, more or less of a scene in the images), then you'll need to bring them into some common space, which is a whole different problem.
Unless your stuff is binary, you'll probably want to de-mean/normalize X, both in the column direction (i.e. normalizing each individual image) and row direction (de-meaning the whole dataset).
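That de-meaning/normalization could look something like this sketch, assuming X is num_pixels x num_images as built above (bsxfun keeps it compatible with older Matlab releases):
X = bsxfun(@minus, X, mean(X, 2));            % de-mean each pixel row across the dataset
X = bsxfun(@rdivide, X, sqrt(sum(X.^2, 1)));  % unit-normalize each image column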
Outputs
score is the set of eigenvectors: it will be num_pixels x num_images.
To get, say, the first eigenvector back into image shape, do:
first_component = reshape(score(:,1),size(im));
And so on for the rest of the components. There are as many components as input images.
Each row of coeff is the set of num_images (equal to num_components) weights that can be applied to generate each input image, i.e.
input_image_1 = reshape(score * coeff(1,:)', size(original_im));
where input_image_1 has the correct, original shape,
coeff(1,:)' is a vector (num_images x 1), and
score is num_pixels x num_images.
(Disclaimer: I may have the columns/rows mixed up, but the descriptions are correct.)
Does that help?
If you have access to Statistics Toolbox, you can use the command princomp, or in recent versions the command pca.
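A minimal sketch of how that could be applied to SIFT descriptors, assuming D is an N x 128 matrix with one descriptor per row (all names and the target dimension are placeholders):
D = rand(1000, 128);             % stand-in for real SIFT descriptors
d = 36;                          % target dimensionality
[coeff, score] = princomp(D);    % use pca(D) in recent Matlab versions
D_reduced = score(:, 1:d);       % the N x d reduced descriptors
% Project new descriptors with the same basis (princomp centers the data):
Dnew = rand(10, 128);
Dnew_reduced = bsxfun(@minus, Dnew, mean(D, 1)) * coeff(:, 1:d);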