I am implementing a research paper in MATLAB and have run into a matrix transformation that I don't know how to perform in MATLAB.
Here it is,
P*L*Q = [I O]
where P,Q are transformation matrices, L is the given matrix, and I,O are identity and zero matrices respectively.
Can anyone help me do this in MATLAB, either with a built-in function or with an algorithm I can implement in my code?
I would say that the easiest way is to use the built-in function svd.
https://www.mathworks.com/help/matlab/ref/svd.html?s_tid=gn_loc_drop
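For illustration, here is a minimal sketch of how svd could be used to construct P and Q, assuming L is an m-by-n matrix with full row rank (m <= n), so that [I O] has an m-by-m identity block; the example sizes and variable names are my own:

% Minimal sketch, assuming L is m-by-n with full row rank (m <= n).
m = 3; n = 5;
L = randn(m, n);                      % placeholder matrix; replace with your own L
[U, S, V] = svd(L);                   % L = U*S*V'
P = S(1:m, 1:m) \ U';                 % P = inv(S1)*U', with S1 the m-by-m singular-value block
Q = V;
check = P*L*Q;                        % should equal [eye(m) zeros(m, n-m)] up to round-off
norm(check - [eye(m) zeros(m, n-m)])

If L is rank deficient, the singular-value block cannot be inverted in full, and the identity block can only be r-by-r with r = rank(L).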
I have the following problem. I have an N x N real matrix called Z(x; t), where x and t may be vectors in general. I have N_s observations (x_k, Z_k), k = 1, ..., N_s, and I'd like to find the vector of parameters t that best approximates the data in the least-squares sense, which means I want the t that minimizes
S(t) = \sum_{k=1}^{N_s} \sum_{i=1}^{N} \sum_{j=1}^{N} (Z_{k,ij} - Z_{ij}(x_k; t))^2
This is, in general, a non-linear fit of a matrix-valued function. I can only find examples that fit scalar functions, and these do not generalize immediately to a matrix function (nor to a vector function). I tried the scipy.optimize.leastsq function and the packages symfit and lmfit, but I still can't find a solution. I'm ending up writing my own code... any help is appreciated!
You can do curve fitting with multi-dimensional data. As far as I am aware, none of the low-level algorithms explicitly supports multidimensional data, but they do minimize a one-dimensional array in the least-squares sense. The fitting methods do not really care about the "independent variable(s)" x, except in that they help you calculate the array to be minimized - perhaps by calculating a model function to match to the y data.
That is to say: if you can write a function that takes the parameter values and calculates the matrix to be minimized, just flatten that 2-d (or n-d) array to one dimension. The fit will not mind.
I have been looking through Spark's documentation but still can't find how to get the covariance matrix after doing linear regression.
Given input training data, I did a very simple linear regression similar to this:
val lr = new LinearRegression()
val fit = lr.fit(training)
Getting the regression parameters is as easy as fit.coefficients, but there seems to be no information on how to get the covariance matrix.
And just to clarify, I am looking for function similar to vcov in R. With this, I should be able to do something like vcov(fit) to get the covariance matrix. Any other methods that can help to achieve this are okay too.
EDIT
The explanation of how to get the covariance matrix from linear regression is discussed in detail here. The standard deviation is easy to get, as it is provided by fit.summary.meanSquaredError. However, the term (X'X)^-1 is hard to get. It would be interesting to see if this can be used to somehow calculate the covariance matrix.
Although the whole covariance matrix is collected on the driver, there is no way to obtain it without writing your own solver. You can do that by copying WLS and adding additional "getters".
The closest you can get without digging into the code is lrModel.summary.coefficientStandardErrors, which is based on the diagonal of the inverted matrix (A^T * W * A), which in turn is based on the upper triangular matrix (covariance).
I don't think that is enough, so sorry about that.
I have a system of linear equations in the form of Ax = b to solve in Spark.
A is n by n
b is n by 1
I represent A as an IndexedRowMatrix or RowMatrix, and b as a DenseMatrix or DenseVector.
How can I solve this system to calculate the x vector?
If the suggested solution is Cholesky decomposition, would you please guide me through doing it, as it is not part of the public API? For example, if the original matrix A is:
1,2,3,4
2,1,5,6
3,5,1,7
4,6,7,1
and b is:
5,6,7,8
What is passed as an argument to the solve method?
Any solution other than inverting A would be very helpful.
I don't know if the question is still relevant to you or not, but another solution is to invert the coefficient matrix and then multiply the inverse by the vector b. There are many matrix inversion algorithms. One such algorithm can be found in the following paper:
SPIN: A Fast and Scalable Matrix Inversion Method in Apache Spark
You can also find the complete code on GitHub.
Cheers!!
I want to obtain the natural frequencies of a simple mechanical system with mass matrix M and stiffness matrix K (.mat-file -> Download):
M*x''(t) + K*x(t) = 0 (x = position).
Basically, this means I have to solve det(K - w^2*M) = 0. But how can I solve this in Matlab (or, if necessary, reduce it to a standard eigenvalue problem and solve that)? The problem is definitely solvable with Abaqus (FEM software), but I have to solve it in Matlab.
I tried the following without success: det(K - w^2*M) = 0 => det(M^-1*K - w^2*I) = 0 (I := identity matrix)
But solving this eigenvalue problem with
sqrt(eigs(K*M^-1))
delivers wrong values and the warning:
"Matrix is singular to working precision.
In matlab.internal.math.mpower.viaMtimes (line 35)"
Other wrong values can be obtained via det(K-w^2*M)=0 => det(I/(w^2)-M*K^-1)=0:
1./sqrt(eigs(M*K^-1))
Any hint would help me. Thanks in advance.
As @Arpi mentioned, you actually want to solve the generalized eigenvalue problem:
K*x = w^2*M*x
Since your matrices K and M are apparently singular (or at least one of them is), you cannot use eigs; you have to use eig:
V = eig(K,M);
w = sqrt(V);
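As a hedged usage sketch (assuming the downloaded .mat file stores the matrices under the variable names K and M; adjust the file and variable names to your data):

data = load('matrices.mat');          % hypothetical file name
K = data.K;
M = data.M;
V = eig(K, M);                        % generalized eigenvalues, i.e. the w^2 values
w = sqrt(sort(V));                    % angular natural frequencies in rad/s
f = w/(2*pi);                         % natural frequencies in Hz

Depending on the conditioning of K and M, some of the returned eigenvalues may be numerically negative, complex, or infinite; inspect or filter those before taking the square root.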
I have a problem.
I have been given the task of writing my own fft2 in Matlab without using for-loops.
The formula for computing it is:
F(u,v) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} f(m,n) e^{-i 2\pi (um/M + vn/N)}
Or for better reading:
http://www.directupload.net/file/d/3808/qs3r9ogz_png.htm
It is easy to do with two for-loops, but I have absolutely no idea how to do it without them.
We get no help from the teaching staff. They don't even give a hint or a reference to a book where we could read about it.
Now, I want to try to get help here.
Are you familiar with the matrix form of the DFT? Have a look here: http://en.wikipedia.org/wiki/DFT_matrix
You can do something similar in order to get a matrix form for 2D DFT.
You need two transformation matrices: an M-by-M DFT matrix Wm, built as explained in the link above, that is applied from the left and transforms each column of f; and an N-by-N DFT matrix Wn that is applied from the right and transforms each row of f. Your transformed signal is then given by
F = Wm * f * Wn;
without any loops.
Note that the DFT matrices can also be constructed without a loop, using something like
(1:M)*((1:M)')
Just a little correction to Thp's answer: (1:M)*((1:M)') is not the right way to create the matrix; (1:M)'*(1:M) is the correct way.
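Putting the answer and the correction together, a minimal sketch of the loop-free fft2 might look like this (note that the index vectors should be zero-based, 0:M-1 and 0:N-1, to match the exponents m, n = 0, ..., M-1 and 0, ..., N-1 in the DFT formula); the example size is my own:

f = rand(4, 6);                                 % example M-by-N input
[M, N] = size(f);
Wm = exp(-2i*pi/M * ((0:M-1)' * (0:M-1)));      % M-by-M DFT matrix (outer product of index vectors)
Wn = exp(-2i*pi/N * ((0:N-1)' * (0:N-1)));      % N-by-N DFT matrix
F = Wm * f * Wn;                                % 2D DFT without any loops
norm(F - fft2(f))                               % should be close to zero

The last line only checks the construction against the built-in fft2 and can be dropped in the final submission.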