I have two matrices, J1 (sparse) and J2 (full), which are equal.
The dimensions of the matrices are ~5200x2600.
Then when I do:
hlm1 = (J1'*J1 + u*I)\g, I = eye(n);
and
hlm2 = (J2'*J2 + u*I)\g, I = eye(n);
After that, norm(hlm1 - hlm2, Inf) is 4.8625e-05 ...
That difference is my problem. Is this the correct way to use the sparse matrix?
Thanks.
This is not a complete answer, but I think it could be useful. I can partially reproduce this difference using some random data:
H1 = sprand(1000,1000,.4);      % random sparse matrix
g  = sprand(1000,1,.5);         % known solution
x  = H1*g;                      % right-hand side built from the known solution
H2 = full(H1);                  % same matrix, dense storage
x2 = full(x);
g1 = H1\x;                      % sparse solve
g2 = H2\x2;                     % dense solve
difference  = norm(g1-g2,Inf)   % sparse vs dense solution
errorSparse = norm(g1-g,Inf)    % sparse solution vs known g
errorFull   = norm(g2-g,Inf)    % dense solution vs known g
The norms end up being roughly O(1e-12). I think the difference is due to the methods used to solve the two systems: the sparse system goes through MATLAB's sparse solvers, while the full matrix goes through a different set of routines. Naturally these algorithms differ, and I think this is probably what causes the small discrepancies. I can't explain why your errors are that large, though.
See the documentation for mldivide, which includes a short discussion of sparse matrices as well as some of the methods used to solve them.
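One way to check whether such a difference is just round-off amplified by the conditioning of the system, rather than a bug, is to compare it against the condition number of the normal-equations matrix. A rough sketch, reusing the variable names from the question (so J1, u, g, hlm1 and hlm2 are assumed to exist):
relDiff = norm(hlm1 - hlm2, Inf) / norm(hlm1, Inf);
kappa   = condest(J1'*J1 + u*speye(size(J1,2)));
% If relDiff is within a few orders of magnitude of kappa*eps, both solvers are
% equally accurate and the discrepancy is expected floating-point round-off.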
I have a linear system of about 2000 sparse equations in Matlab. For my final result, I only really need the value of one of the variables: the other values are irrelevant. While there is no real problem in simply solving the equations and extracting the correct variable, I was wondering whether there was a faster way or Matlab command. For example, as soon as the required variable is calculated, the program could in principle stop running.
Is there anyone who knows whether this is at all possible, or if it would just be easier to keep solving the entire system?
Most of the computation time is spent solving the linear system, so if we can avoid computing the full solution vector we may be able to improve the computation time. Let's assume I'm only interested in the solution for the last variable, x(N). Using the standard method we compute
x = A\b;
res = x(N);
Assuming A is full rank, we can instead use the LU decomposition of the augmented matrix [A b] to get x(N). That looks like this:
[~,U] = lu([A b]);
res = U(end,end)/U(end,end-1);
This is essentially performing Gaussian elimination and then solving for x(N) using back-substitution.
We can extend this to find any entry of x by swapping the columns of A before the LU decomposition:
x_index = 123; % the index of the solution we are interested in
A(:,[x_index,end]) = A(:,[end,x_index]);
[~,U] = lu([A b]);
res = U(end,end)/U(end,end-1);
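As a quick sanity check (a sketch on a small random dense system, not the original data), the value recovered from U should match the corresponding entry of A\b to machine precision:
N = 200;
A = randn(N);  b = randn(N,1);
xref = A\b;                                 % reference: full solve
x_index = 123;
A(:,[x_index,end]) = A(:,[end,x_index]);    % move the wanted column to the end
[~,U] = lu([A b]);
res = U(end,end)/U(end,end-1);
abs(res - xref(x_index))                    % should be around machine precision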
Benchmarking performance in MATLAB R2017a with 10,000 random 200-dimensional systems, we get a slight speed-up:
Total time direct method : 4.5401s
Total time LU method : 3.9149s
Note that you may experience some precision issues if A isn't well conditioned.
Also, this approach doesn't take advantage of the sparsity of A. In my experiments, even with 2000x2000 sparse matrices everything slowed down significantly and the LU method was noticeably slower than the direct solve. That said, the full matrix representation only requires about 30 MB, which shouldn't be a problem on most computers.
If you have access to the theory manuals for NASTRAN, I believe (from memory) there is coverage of partial solutions of linear systems. Also try looking for iterative or tridiagonal solvers for A*x = b. On this page, review the pqr solution answer by Shantachhani. Another reference.
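To illustrate the iterative-solver suggestion, here is a rough sketch using gmres on an artificially well-conditioned sparse system; the restart value, tolerance and maximum iteration count are assumptions you would tune for your own matrix, and note that the solver still computes the whole solution vector:
n = 2000;
A = sprandn(n, n, 0.01) + n*speye(n);    % random sparse matrix, made diagonally dominant
b = randn(n, 1);
x_index = 123;                           % the single component we care about
[x, flag] = gmres(A, b, 20, 1e-10, 200); % restarted GMRES (restart = 20); flag == 0 means converged
res = x(x_index);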
I have a linear equation such as
Ax=b
where A is a full-rank matrix of size 512x512, b is a vector of size 512x1, and x is the unknown vector. I want to find x, hence I have some options for doing that:
1. Using the normal way
inv(A)*b
2. Using the SVD (singular value decomposition)
[U S V]=svd(A);
x = V*(diag(diag(S).^-1)*(U.'*b))
Both methods give the same result, so what is the benefit of using the SVD to solve Ax=b, especially when A is a 2D matrix?
Welcome to the world of numerical methods, let me be your guide.
You, as a new person in this world, wonder: "Why would I do something as difficult as this SVD stuff instead of the commonly known inverse?! I'm going to try it in Matlab!"
And no answer was found. That is because you are not looking at the real problem! The trouble arises when you have an ill-conditioned matrix: then computing the inverse is not possible numerically.
example:
A=[1 1 -1;
1 -2 3;
2 -1 2];
Try to invert this matrix using inv(A). You'll get Inf values (and a warning that the matrix is singular to working precision).
That is because the condition number of the matrix is very high (check cond(A)).
However, if you try to solve it using the SVD method (with b=[1;-2;3]) you will get a result. Solving Ax=b systems with bad condition numbers is still a hot research topic.
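As a concrete sketch of what "solving with the SVD" means for the singular example above (this mirrors what pinv does and returns the minimum-norm least-squares solution, since no classical inverse exists):
A = [1 1 -1; 1 -2 3; 2 -1 2];
b = [1; -2; 3];
[U,S,V] = svd(A);
s   = diag(S);
tol = max(size(A)) * eps(max(s));            % pinv's default tolerance
r   = sum(s > tol);                          % numerical rank (2 for this A)
x   = V(:,1:r) * ((U(:,1:r)'*b) ./ s(1:r));  % minimum-norm least-squares solution
% equivalently: x = pinv(A)*b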
As @Stewie Griffin suggested, the best way to go is mldivide, as it does a couple of things behind the scenes.
(Yes, my example is not very good, because inv(A) simply blows up to Inf, but there is a much better example in this YouTube video.)
inv(A)*b has several drawbacks. The main one is that it explicitly calculates the inverse of A, which is both time-consuming and may result in inaccuracies if the values vary by many orders of magnitude.
Although it might be better than inv(A)*b, using svd is not the "correct" approach here. The MATLAB way to do this is with mldivide, \. Using this, MATLAB chooses the best algorithm to solve the linear system based on its properties (Hermitian, upper Hessenberg, real and positive diagonal, symmetric, diagonal, sparse, etc.). Often the solution will be an LU factorization with partial pivoting, but it varies. You'll have a hard time beating MATLAB's implementation of mldivide, but using svd might give you some more insight into the properties of the system if you actually investigate U, S, and V. If you don't want to do that, go with mldivide.
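A small illustration of the difference in practice (a sketch with random data, not a rigorous benchmark):
n = 512;
A = randn(n);  b = randn(n,1);
x1 = inv(A)*b;                   % explicit inverse: extra work, often less accurate
x2 = A\b;                        % mldivide: LU with partial pivoting for a general square A
[norm(A*x1 - b), norm(A*x2 - b)] % compare residuals; the backslash residual is usually smaller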
I have two matrices D and Y.
I want to find the matrix G according to this:
G*D = Y
Note that all of these matrices are not square matrices.
According to Matlab's documentation, if you want to solve an equation of the form
xA = b
you can solve it by doing
x = b/A
Note that your system is underdetermined, and you cannot simply find a single solution without additional constraints. An example:
A=[1;2;3];
b=[14;32];
x=b/A;
x*A==b % check if solution is correct
[1,2,3;4,5,6]*A==b % another, equally correct solution
It "works", but without restating the problem you're not going to get at anything better.
Note this is quite extensively explained in the same documentation.
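To make that concrete, continuing the example above: the slash operator returns one particular (basic) solution, while pinv returns the minimum-norm one; both satisfy the underdetermined system (a sketch):
A = [1; 2; 3];
b = [14; 32];
x1 = b / A;          % basic solution from mrdivide (contains zero columns)
x2 = b * pinv(A);    % minimum-norm solution
x1*A - b, x2*A - b   % both residuals are numerically zero
norm(x1,'fro') >= norm(x2,'fro')   % the pinv solution has the smallest Frobenius norm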
I'd like to find the principal components of a data matrix X in Matlab by solving the optimization problem min||X-XBB'||, where the norm is the Frobenius norm, and B is an orthonormal matrix. I'm wondering if anyone could tell me how to do that. Ideally, I'd like to be able to do this using the optimization toolbox. I know how to find the principal components using other methods. My goal is to understand how to set up and solve an optimization problem which has a matrix as the answer. I'd very much appreciate any suggestions or comments.
Thanks!
MJ
The thing about Optimization is that there are different methods to solve a problem, some of which can require extensive computation.
Your solution, given the constraints for B, is to use fmincon. Start by creating a file for the non-linear constraints:
function [c,ceq] = nonLinCon(x)
c = [];                                    % no inequality constraints
ceq = norm(x'*x - eye(size(x,2)), 'fro');  % equality constraint: checks that B is orthonormal
end
then call the routine:
B = fmincon(@(B) norm(X - X*B*B','fro'), B0, [],[],[],[],[],[], @nonLinCon)
with B0 being a good guess on what the answer will be.
Also, you need to understand that this algorithm tries to find a local minimum, which may not be the solution you ultimately want. For instance:
X = randn(1,2)
fmincon(@(B) norm(X - X*B*B','fro'), rand(2), [],[],[],[],[],[], @nonLinCon)
ans =
0.4904 0.8719
0.8708 -0.4909
fmincon(@(B) norm(X - X*B*B','fro'), rand(2), [],[],[],[],[],[], @nonLinCon)
ans =
0.9864 -0.1646
0.1646 0.9864
So be careful when using these methods, and try to select a good starting point.
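One way to get a good starting point (a sketch, reusing nonLinCon from above): seed the optimizer with a matrix that already satisfies the orthonormality constraint instead of rand(n):
n  = size(X,2);
B0 = orth(randn(n));   % random orthonormal matrix, feasible from the start
B  = fmincon(@(B) norm(X - X*B*B','fro'), B0, [],[],[],[],[],[], @nonLinCon);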
The Statistics toolbox has a built-in function 'princomp' that does PCA. If you want to learn (in general, without the optimization toolbox) how to create your own code to do PCA, this site is a good resource.
Since you've specifically mentioned wanting to use the Optimization Toolbox and to set this up as an optimization problem, there is a very well-trusted 3rd-party package known as CVX from Stanford University that can solve the optimization problem you are referring to at this site.
Do you have the optimization toolbox? The documentation is really good, just try one of their examples: http://www.mathworks.com/help/toolbox/optim/ug/brg0p3g-1.html.
But in general the optimization functions look like this:
[OptimizedMatrix, OptimizedObjectiveFunction] = optimize(@(MatrixToOptimize) MyObjectiveFunction(MatrixToOptimize), InitialConditionsMatrix, ...optional constraints and options... );
You must create MyObjectiveFunction() yourself, it must take the Matrix you want to optimize as an input and output a scalar value indicating the cost of the current input Matrix. Most of the optimizers will try to minimise this cost. Note that the cost must be a scalar.
fmincon() is a good place to start; once you are used to the toolbox, you should, if you can, choose a more specific optimization algorithm for your problem.
To optimize a matrix rather than a vector, reshape the matrix to a vector, pass this vector to your objective function, and then reshape it back to the matrix within your objective function.
For example, say you are trying to optimize the 3x3 matrix M, and you have defined the objective function MyObjectiveFunction(InputVector). Pass M as a vector:
MyObjectiveFunction(M(:));
And within MyObjectiveFunction you must reshape the vector (if necessary) back into a matrix:
function cost = MyObjectiveFunction(InputVector)
InputMatrix = reshape(InputVector, [3 3]);   % recover the matrix form
% Code that performs matrix operations on InputMatrix to produce a scalar cost,
% for example (placeholder):
cost = norm(InputMatrix, 'fro');
end
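Putting the pieces together for the 3x3 case (a sketch; M0 is just an arbitrary starting guess, and fminunc is used because this toy example has no constraints):
M0 = rand(3);
Mopt_vec = fminunc(@MyObjectiveFunction, M0(:));   % optimize over the vectorized matrix
Mopt = reshape(Mopt_vec, [3 3]);                   % reshape the result back to a matrix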
From a simulation problem, I want to calculate determinants of complex square matrices on the order of 1000x1000 in MATLAB. Since the values come from Bessel functions, the matrices are not at all sparse.
Since I am interested in the change of the determinant with respect to some parameter (the energy of a searched eigenfunction in my case), and the raw determinant of such a large matrix quickly under- or overflows double precision, I work around the problem at the moment by first searching for a rescaling factor for the studied range and then calculating the determinants as
result(k) = det(pre_factor*Matrix{k});
Now this is a very awkward solution and only works for matrix dimensions of, say, at most 500x500.
Does anybody know a nice solution to the problem? Interfacing to Mathematica might work in principle but I have my doubts concerning feasibility.
Thank you in advance
Robert
Edit: I did not find a convenient solution to the calculation problem, since this would have required switching to higher precision. Instead, I used the identity
ln(det M) = trace(ln M)
which, when I differentiate it with respect to k, gives
A = trace(inv(M(k)) * dM/dk)
So I at least had the change of the logarithm of the determinant with respect to k. From the physical background of the problem I could derive constraints on A, which in the end gave me a workaround valid for my problem. Unfortunately I do not know whether such a workaround could be generalized.
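A minimal sketch of that workaround, assuming Mfun is a function handle that builds the matrix for a given parameter value k, and using a forward difference for dM/dk:
dk      = 1e-6;                             % step size, an assumption to tune
dMdk    = (Mfun(k + dk) - Mfun(k)) / dk;    % numerical derivative of M w.r.t. k
dlogdet = trace(Mfun(k) \ dMdk);            % d/dk [ln det M] = trace(inv(M)*dM/dk), no explicit inverse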
You should realize that when you multiply a matrix by a constant k, you scale the determinant of the matrix by k^n, where n is the dimension of the matrix. So for n = 1000 and k = 2, you scale the determinant by
>> 2^1000
ans =
1.07150860718627e+301
This is of course a huge number, so you might expect the computation to fail, since in double precision MATLAB can only represent floating-point numbers as large as realmax.
>> realmax
ans =
1.79769313486232e+308
There is no need to do all the work of recomputing that determinant, not that computing the determinant of a huge matrix like that is a terribly well-posed problem anyway.
If speed is not a concern, you may want to use det(e^A) = e^(tr A) and take as A some scaling constant times your matrix (so that A - I has spectral radius less than one).
EDIT: In MATLAB, the log of a matrix (logm) is calculated via triangularization. So it is better for you to compute the eigenvalues of your matrix and multiply them (or better, add their logarithms). You did not specify whether your matrix is symmetric or not: if it is, finding the eigenvalues is easier than if it is not.
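A sketch of that suggestion: sum the logarithms of the eigenvalues instead of forming the determinant itself, which avoids overflow/underflow even at 1000x1000 (M stands for the complex matrix from the question):
lambda  = eig(M);            % complex eigenvalues of M
logdetM = sum(log(lambda));  % log(det(M)) as a complex number
% det(M) = exp(logdetM), but taking exp may overflow; compare logdetM across k instead.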
You said the current value of the determinant is about 10^-300.
Are you trying to get the determinant at a certain value, say 1? If so, rescaling is awkward: the matrix you are considering is ill-conditioned and, given machine precision, you should consider its determinant to be zero. In other words, it is impossible to get a reliable inverse.
I would suggest modifying the columns or rows of the matrix rather than rescaling it.
I used R to make a small test with a random matrix (random normal values); it seems the determinant should be clearly non-zero.
> n=100
> M=matrix(rnorm(n**2),n,n)
> det(M)
[1] -1.977380e+77
> kappa(M)
[1] 2318.188
This is not strictly a MATLAB solution, but you might want to consider using Mahout. It's specifically designed for large-scale linear algebra (1000x1000 is no problem for the scales it's used to).
You would call into Java to pass data to/from Mahout.