How to calculate 'half' of an affine transformation matrix in MATLAB - matlab

I am looking to find 'half' of an affine transformation matrix using MATLAB. Yes, I understand that 'half' a matrix isn't really correct, but exactly what I'm looking for was explained very well here: stackexchange mathematics
So I'm looking for an affine transformation matrix (B) which when applied twice to my image, will give the same result as when applying my initial matrix (A) once.
Reflection will not be part of A, otherwise it would be impossible to find B.
My initial matrix (A) is calculated using A = estimateGeometricTransform(movingPoints,fixedPoints,'affine'), which gives me an affine2d object.
If there is no way to find the 'half' matrix from the initial matrix, maybe the arrays of matched points can be manipulated in a way to find B from them.
Cheers

I think there is a possibility to find the half matrix that you speak of. It is called the matrix square root. Suppose you have the matrix A. In MATLAB you can just do B = sqrtm(A), where the m stands for matrix. Then you get a matrix B, where norm(B*B - A) is very small, if the matrix A was well behaved.
If I understand correctly, you want to have half of an affine transformation aff = @(x) A*x + b. This can be done using homogeneous coordinates. Every such transformation aff can be represented by the matrix
M = [A b; zeros(1,length(b)) 1], where
normalize = @(y) y(1:end-1)/y(end);
affhom = @(x) normalize(M*[x; 1]);
Note that aff and affhom do exactly the same thing. Here we can use what I was talking about earlier. Half of affhom can be represented using
affhomhalf = @(x) normalize(sqrtm(M)*[x; 1])
where
affhomhalf(affhomhalf(y)) - aff(y)
is small for all y, if A and b were well behaved.
I'm not sure about this, but I think you can even decompose sqrtm(M) into a linear part and a translational part.
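To make this concrete for the affine2d case in the question, here is a minimal sketch, assuming tform is the affine2d object returned by estimateGeometricTransform and that it contains no reflection (so the principal square root is real):
T = tform.T;                    % 3x3 matrix in MATLAB's [x y 1]*T row convention
Th = real(sqrtm(T));            % principal matrix square root
Th(:,3) = [0; 0; 1];            % remove tiny numerical residue in the last column
halfTform = affine2d(Th);
norm(Th*Th - T)                 % sanity check: should be close to zero
% Applying halfTform twice should then match applying tform once, e.g.
% imwarp(imwarp(I, halfTform), halfTform) is approximately imwarp(I, tform)
% (up to the output view/reference chosen by imwarp).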

Related

applying the same solver for complex matrix

Let us assume an equation A x = b, where A is a real n x n matrix, b is a real-valued vector of length n, and x is the solution vector of this linear system.
We can find the solution x by computing the inverse of A:
B = inv(A)
and therefore x = A^{-1} b, i.e.
x = B*b
Can I apply the same solver if A and b are complex?
EDIT: I'm looking for explanation why it should work. Thanks :)
You can do that. Better in Matlab would be x = A\b. That will usually get the answer faster and more accurately.
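A quick illustrative check with a random complex system:
n = 4;
A = randn(n) + 1i*randn(n);       % random complex matrix
b = randn(n,1) + 1i*randn(n,1);   % random complex right-hand side
x1 = A\b;                         % preferred: solve the system directly
x2 = inv(A)*b;                    % also works, but slower and less accurate in general
norm(A*x1 - b)                    % residual, should be near machine precision
norm(x1 - x2)                     % the two solutions should agree closely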
In short yes, it works for any matrices irrespective of the field. (Well, at least real or complex field works.)
I think your question is whether B exists or not. It does as long as the determinant of A is non-zero, and vice versa.

Working with Givens rotations

Consider a matrix R of size p x p. We want to compute the product A'RA, where A is equal to (I + Givens rotation). Here I is the identity matrix and ' denotes the transpose operator.
We know that a Givens rotation is a sparse matrix written as:
To perform the multiplication A'RA in matlab, we can do this fast implementation:
%Fast implementation
ci = R(:,ik)*(cos(theta))+R(:,jk)*(sin(theta)); % R*A
cj = R(:,jk)*(cos(theta)) - R(:,ik)*(sin(theta));
R(:,ik) = ci;
R(:,jk) = cj;
ri = R(ik,:)*(cos(theta))+R(jk,:)*(sin(theta)); % A'*R*A
rj = R(jk,:)*(cos(theta)) - R(ik,:)*(sin(theta));
R(ik,:) = ri;
R(jk,:) = rj;
But I didn't understand how they wrote this MATLAB code. In other words, I do not understand how this MATLAB code applies the multiplication A'RA. Kindly, can someone help me understand this code?
One possible source of confusion is that either the signs in the Givens rotation matrix, or the side on which we need to transpose, is wrong in your example. I'll assume the latter: I'll use the same A matrix as you defined, but transform with A*R*A' (changing A to its transpose is equivalent to taking the rotation angle with the opposite sign).
The algorithm is relatively straightforward. For starters, as the comments in the code suggest, the transformation is performed in two steps:
Rnew = A * R * A' = A * (R * A')
First, we compute R*A'. For this, imagine the transformation matrix A = I + M with the Givens rotation matrix M. The formula which you showed basically says "Take a unit matrix, except for 2 specified dimensions in which you rotate by a given angle". Here's what the full A matrix looks like for a small problem (6d matrix, ik=2, jk=4, both in full and sparse form):
You can see that except for the (ik,jk) 2d subspace, this matrix is a unit matrix, leaving every other dimension intact. So the action of R*A' will result in R for every dimension except for columns ik and jk.
In these two columns the result of R*A' is the linear combination of R(:,ik) and R(:,jk) with these trigonometric coefficients:
[R*A'](:,ik) = R(:,ik)*cos(theta) + R(:,jk)*sin(theta)
[R*A'](:,jk) = -R(:,ik)*sin(theta) + R(:,jk)*cos(theta)
while the rest of the columns are left unchanged. If you look at the code you cited: this is exactly what it's doing. This is, by definition, what R*A' means with the A matrix shown above. All of this follows from the fact that the A matrix is a unit matrix except for a 2d subspace.
The next step is then quite similar: using this new R*A' matrix we multiply from the left with A. Again, the effect along most of the dimensions (rows, this time) will be identity, but in rows ik and jk we again get a linear combination:
[A*[R*A']](ik,:) = cos(theta)*[R*A'](ik,:) + sin(theta)*[R*A'](jk,:)
[A*[R*A']](jk,:) = -sin(theta)*[R*A'](ik,:) + cos(theta)*[R*A'](jk,:)
By noting that the code overwrites the R matrix with R*A' after the first step, it's again clear that the same is performed in the "fast implementation" code.
Disclaimer: A' is the adjoint (conjugate transpose) in matlab, so you should use A.' to refer to the transpose. For complex matrices there's a huge difference, and people often forget to use the proper transpose when eventually encountering complex matrices.
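As an illustrative check of the explanation above, here is a small sketch that builds the full A matrix explicitly (using the sign convention implied by the column/row formulas above, which is an assumption given the ambiguity already noted) and compares A*R*A' with the "fast implementation":
p = 6; ik = 2; jk = 4; theta = 0.3;
R = randn(p);
% Full transformation matrix: identity except in the (ik,jk) plane
A = eye(p);
A(ik,ik) =  cos(theta);  A(ik,jk) = sin(theta);
A(jk,ik) = -sin(theta);  A(jk,jk) = cos(theta);
Rfull = A * R * A';
% Fast implementation: only columns/rows ik and jk are touched
Rfast = R;
ci = Rfast(:,ik)*cos(theta) + Rfast(:,jk)*sin(theta);   % R*A'
cj = Rfast(:,jk)*cos(theta) - Rfast(:,ik)*sin(theta);
Rfast(:,ik) = ci;  Rfast(:,jk) = cj;
ri = Rfast(ik,:)*cos(theta) + Rfast(jk,:)*sin(theta);   % A*(R*A')
rj = Rfast(jk,:)*cos(theta) - Rfast(ik,:)*sin(theta);
Rfast(ik,:) = ri;  Rfast(jk,:) = rj;
norm(Rfull - Rfast)   % should be at round-off level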

Rotate a basis to align to vector

I have a matrix M of size NxP. Its P columns are orthogonal (M is a basis). I also have a vector V of size N.
My objective is to transform the first vector of M into V and to update the others in order to preserve their orthogonality. I know that the origins of V and M are the same, so it is basically a rotation by a certain angle. I assume we can find a matrix T such that T*M = M'. However, I can't figure out the details of how to do it (with MATLAB).
Also, I know there might be an infinite number of transforms doing that, but I'd like to get the simplest one (in which the other vectors of M approximately remain the same, i.e. no rotation around the first vector).
A small picture to illustrate. In my actual case, N and P can be large integers (not necessarily 3):
Thanks in advance for your help!
[EDIT] Alternative solution to Gram-Schmidt (accepted answer)
I managed to get a correct solution by retrieving a rotation matrix R by solving an optimization problem minimizing the 2-norm between M and R*M, under the constraints:
V is orthogonal to R*M[1] ... R*M[P-1] (i.e V'*(R*M[i]) = 0)
R*M[0] = V
Due to the solver constraints, I couldn't indicate that R*M[0] ... R*M[P-1] are all pairwise orthogonal (i.e (R*M)' * (R*M) = I).
Luckily, it seems that with this problem and with my solver (CVX using SDPT3), the resulting R*M[0] ... R*M[P-1] are also pairwise orthogonal.
I believe you want to use the Gram-Schmidt process here, which finds an orthogonal basis for a set of vectors. If V is not orthogonal to M[0], you can simply change M[0] to V and run Gram-Schmidt, to arrive at an orthogonal basis. If it is orthogonal to M[0], instead change another, non-orthogonal vector such as M[1] to V and swap the columns to make it first.
Mind you, the vector V needs to be in the column space of M, or you will always have a different basis than you had before.
Matlab doesn't have a built-in Gram-Schmidt command, although you can use the qr command to get an orthogonal basis. However, this won't work if you need V to be one of the vectors.
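A minimal sketch of that idea, assuming V is not orthogonal to M(:,1) (otherwise swap columns as described above) and not parallel to any later column of M:
Q = [V / norm(V), M(:, 2:end)];   % replace the first basis vector by V
for k = 2:size(Q, 2)
    % remove the components along the already-orthonormalized columns
    Q(:, k) = Q(:, k) - Q(:, 1:k-1) * (Q(:, 1:k-1)' * Q(:, k));
    Q(:, k) = Q(:, k) / norm(Q(:, k));
end
% Q now has orthonormal columns and Q(:,1) is V (normalized)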
Option #1: if you have some vector and, after some changes, you want to rotate the matrix to restore its orthogonality then, I believe, this method should work for you in MATLAB:
http://www.mathworks.com/help/symbolic/mupad_ref/numeric-rotationmatrix.html
(edit by another user: above link is broken, possible redirect: Matrix Rotations and Transformations)
If it does not, then ...
Option #2: I did not do this in MATLAB, but part of another task was to find the eigenvalues and eigenvectors of a matrix. To achieve this I used SVD. Part of the SVD algorithm was the Jacobi rotation, which rotates the matrix until it is almost diagonal (to some precision) and invertible.
https://math.stackexchange.com/questions/222171/what-is-the-difference-between-diagonalization-and-orthogonal-diagonalization
An approximate Jacobi-rotation algorithm in your case should be similar to this one. I may be wrong at some point, so you will need to double-check this in the relevant docs:
1) change values in existing vector
2) compute angle between actual and new vector
3) create a rotation matrix:
put cos(angle) on the diagonal of the rotation matrix
put sin(angle) in one off-diagonal entry of the matrix
put -sin(angle) in the other off-diagonal entry of the matrix
4) multiply the vector (or matrix of vectors) by the rotation matrix in a loop until your matrix of vectors is invertible and diagonalized. The ability to invert can be checked via the determinant (check for singularity), and orthogonality (the matrix is diagonalized) can be tested with this check: if the max value in the LU matrix is less than some constant, stop rotating; at this point the new matrix should contain only orthogonal vectors.
Unfortunately, I am not able to find the exact pseudocode that I was referring to in the past, but these links may help you to understand Jacobi rotation:
http://www.physik.uni-freiburg.de/~severin/fulltext.pdf
http://web.stanford.edu/class/cme335/lecture7.pdf
https://www.nada.kth.se/utbildning/grukth/exjobb/rapportlistor/2003/rapporter03/maleko_mercy_03003.pdf
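As an alternative to the Jacobi-style iteration, here is a sketch of a standard, non-iterative construction (not the exact pseudocode referred to above): build the rotation that maps M(:,1) onto V by rotating only in the plane spanned by the two vectors, which also matches the asker's wish of no rotation around the first vector:
u = M(:,1) / norm(M(:,1));
v = V / norm(V);
w = v - (u'*v) * u;                       % component of v orthogonal to u
if norm(w) > eps
    w = w / norm(w);
    theta = acos(max(min(u'*v, 1), -1));  % angle between u and v
    % 2D rotation on span{u,w}, identity on the orthogonal complement
    R = eye(length(u)) ...
        + (cos(theta) - 1) * (u*u' + w*w') ...
        + sin(theta)       * (w*u' - u*w');
else
    R = eye(length(u));   % u and v already parallel (handle the antiparallel case separately)
end
Mrot = R * M;             % Mrot(:,1) now points along V; the other columns stay orthogonal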

2D Self-Deconvolution in MATLAB

I have some data, a 2D matrix we'll call A, which I know in theory can be described by a self-convolution of another matrix we'll call B:
A=conv2(B,B)
I am trying to extract B. Is there a way to perform a self deconvolution of a 2D matrix in MATLAB? Can anyone point me in the right direction?
We can view A as the coefficients of a polynomial in two variables, and we want to find a polynomial B so that B^2 = A. This type of calculation is not what MATLAB was designed to do, but I think if you have the Symbolic Math Toolbox, you can make a symbolic polynomial from A, take the square root, and convert this back to a matrix of coefficients. If the coefficients of A are noisy, then you might evaluate A and then sqrt(A) on several (x,y) points away from where A is 0, fit a polynomial B to those values, and extract the coefficients from B. -B will also work. Try not to choose points separated by a curve where A is 0, or you might mix values of B and -B.
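A rough sketch of the symbolic idea (requires the Symbolic Math Toolbox; here factor is used to take the polynomial "square root", which should work for noise-free data whose polynomial is a perfect square):
syms x y
Btrue = [1 2; 3 4];
A = conv2(Btrue, Btrue);               % example data, A = conv2(B,B)
% Encode A as a bivariate polynomial: A(i,j) is the coefficient of x^(i-1)*y^(j-1)
[m, n] = size(A);
polyA = sym(0);
for i = 1:m
    for j = 1:n
        polyA = polyA + A(i,j) * x^(i-1) * y^(j-1);
    end
end
f = factor(polyA);                      % for a perfect square, the same factor should appear twice
polyB = f(1);                           % one "square root" of polyA (the other is -polyB)
simplify(expand(polyB^2 - polyA))       % should be 0; read B's entries off polyB (e.g. with coeffs)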

Exponential curve fit matlab

I have the following equation:
y = exp(A*u^2 + B*a^3)
I want to do an exponential curve fit using MATLAB for the above equation, where y = f(u,a). y is my output while (u,a) are my inputs. I want to find the coefficients A and B for a set of provided data.
I know how to do this for simple polynomials by defining states. As an example, if states = [ones(size(u)), u, u.^2], this will give me L + M*u + N*u^2, with L, M and N being regression coefficients.
However, this is not the case for the above equation. How could I do this in MATLAB?
Building on what @eigenchris said, simply take the natural logarithm (log in MATLAB) of both sides of the equation. If we do this, we would in fact be linearizing the equation in log space. In other words, given your original equation y = exp(A*u^2 + B*a^3), we get:
log(y) = A*u^2 + B*a^3
However, this isn't exactly polynomial regression. This is more of a least-squares fitting of your points. Specifically, given a set of y values and a set of (u,a) pairs, you would build a system of equations and solve this system via least squares. In other words, given the set y = (y_0, y_1, y_2, ..., y_N) and (u,a) = ((u_0, a_0), (u_1, a_1), ..., (u_N, a_N)), where N is the number of points that you have, you would build your system of equations like so:
log(y_i) = A*u_i^2 + B*a_i^3,  for i = 0, ..., N
This can be written in matrix form:
[log(y_0); log(y_1); ...; log(y_N)] = [u_0^2 a_0^3; u_1^2 a_1^3; ...; u_N^2 a_N^3] * [A; B]
To solve for A and B, you simply need to find the least-squares solution. You can see that the system is in the form:
Y = A*X
where Y is the vector of log(y) values, A here denotes the matrix of u^2 and a^3 values, and X = [A; B] holds the unknown coefficients.
To solve for X, we use what is called the pseudoinverse. As such:
X = A^{*} * Y
A^{*} is the pseudoinverse. This can eloquently be done in MATLAB using the \ or mldivide operator. All you have to do is build a vector of y values with the log taken, as well as building the matrix of u and a values. Therefore, if your points (u,a) are stored in the vectors u and a respectively, and the values of y are stored in y, you would simply do this:
x = [u.^2 a.^3] \ log(y);
x(1) will contain the coefficient for A, while x(2) will contain the coefficient for B. As A. Donda has noted in his answer (which I embarrassingly forgot about), the values of A and B are obtained assuming that the errors with respect to the exact curve you are trying to fit to are normally (Gaussian) distributed with a constant variance. The errors also need to be additive. If this is not the case, then your parameters achieved may not represent the best fit possible.
See this Wikipedia page for more details on what assumptions least-squares fitting takes:
http://en.wikipedia.org/wiki/Least_squares#Least_squares.2C_regression_analysis_and_statistics
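As a quick illustrative check with synthetic data (assuming the model y = exp(A*u^2 + B*a^3)):
N = 200;
u = rand(N,1);  a = rand(N,1);
Atrue = 1.5;  Btrue = -0.7;
y = exp(Atrue*u.^2 + Btrue*a.^3) .* exp(0.01*randn(N,1));   % small multiplicative noise
x = [u.^2, a.^3] \ log(y);                   % least-squares fit in log space
fprintf('A = %.3f, B = %.3f\n', x(1), x(2))  % should be close to 1.5 and -0.7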
One approach is to use a linear regression of log(y) with respect to u² and a³:
Assuming that u, a, and y are column vectors of the same length:
AB = [u .^ 2, a .^ 3] \ log(y)
After this, AB(1) is the fit value for A and AB(2) is the fit value for B. The computation uses Matlab's mldivide operator; an alternative would be to use the pseudo-inverse.
The fit values found this way are Maximum Likelihood estimates of the parameters under the assumption that deviations from the exact equation are constant-variance normally distributed errors additive to A u² + B a³. If the actual source of deviations differs from this, these estimates may not be optimal.