Solve Ax = b using MATLAB

I have a linear system of equations AX = B to solve in MATLAB. What I know is that A is sparse, symmetric, and positive definite. I know the command x = A \ b works, yet I am not sure MATLAB takes full advantage of A's good properties to maximize efficiency. Is there any way to specify the algorithm used to solve it, for example the Conjugate Gradient algorithm, in MATLAB?

If your matrix is sparse, you can use any of MATLAB's iterative solver functions, for example bicg for the biconjugate gradients method.
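Since your A is symmetric positive definite, pcg (preconditioned conjugate gradients) is the direct match for the Conjugate Gradient algorithm you mention. A minimal sketch, assuming a random SPD test matrix built with sprandsym:
n = 1000;
A = sprandsym(n, 0.01, 0.1, 1);   % sparse, symmetric positive definite
b = rand(n, 1);
x = pcg(A, b, 1e-8, 200);         % tolerance and maximum iterations
% Optionally precondition with an incomplete Cholesky factor
% (ichol can break down on some SPD matrices; adjust its options if so):
L = ichol(A);
x = pcg(A, b, 1e-8, 200, L, L');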

MATLAB's mldivide operator does indeed take advantage of properties of A. See the documentation for details - expand the "Algorithm" section.

Related

How to compute inverse of a matrix accurately?

I'm trying to compute the inverse of a matrix P, but if I multiply inv(P)*P, MATLAB does not return the identity matrix. It's almost the identity (off-diagonal values on the order of 10^(-12)). However, in my application I need more precision.
What can I do in this situation?
Use inv() only if you explicitly need the inverse of a matrix; otherwise, just use the backslash operator \.
The documentation on inv() explicitly states:
x = A\b is computed differently than x = inv(A)*b and is recommended for solving systems of linear equations.
This is because the backslash operator, or mldivide() uses whatever method is most suited for your specific matrix:
x = A\B solves the system of linear equations A*x = B. The matrices A and B must have the same number of rows. MATLAB® displays a warning message if A is badly scaled or nearly singular, but performs the calculation regardless.
Just so you know which algorithm MATLAB chooses depending on your input matrices, the full algorithm flowchart is provided in the mldivide documentation.
The versatility of mldivide in solving linear systems stems from its ability to take advantage of symmetries in the problem by dispatching to an appropriate solver. This approach aims to minimize computation time. The first distinction the function makes is between full (also called "dense") and sparse input arrays.
As a side note about errors on the order of magnitude 10^(-12): besides the above-mentioned inaccuracy of inv(), there is floating-point accuracy to consider. This post on MATLAB's floating-point issues is rather insightful, with a more general computer science post on it here. Basically, if you are computing numerics, don't worry (too much, at least) about errors 12 orders of magnitude smaller than your values.
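To see the difference in practice, here is a small hedged comparison (the well-conditioned test matrix is assumed); the backslash solution typically has a residual at least as small as the explicit-inverse one:
n = 500;
A = rand(n) + n*eye(n);   % diagonally dominant, hence well conditioned
b = rand(n, 1);
x1 = A \ b;               % recommended
x2 = inv(A) * b;          % discouraged
norm(A*x1 - b)            % typically the smaller residual
norm(A*x2 - b)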
You have what's called an ill-conditioned matrix. It's risky to try to take the inverse of such a matrix. In general, taking the inverse of anything but the smallest matrices (such as those you see in an introductory linear algebra textbook) is risky. If you must, you could try the Moore-Penrose pseudoinverse (see Wikipedia), but even that is not foolproof.
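For reference, the Moore-Penrose pseudoinverse is available in MATLAB as pinv; a sketch using the question's P:
P_pinv = pinv(P);        % Moore-Penrose pseudoinverse of P
P_pinv = pinv(P, tol);   % or with an explicit singular-value tolerance tol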

Linear program with double constraints on the same variable

I have a linear program of the form min(f*x) s.t. A1*x < d1; A2*x < d2. The form with a single constraint matrix is implemented in MATLAB's linprog command. What command can I use to solve a linear program with two constraint matrices?
I could of course create a block diagonal matrix and double the size of the variable x, but if there is a more efficient way I would like to use it, because the size of the matrix is quite large.
Possibly I don't understand the question right, but can't you combine the matrices A1 and A2 by A = [A1; A2]?
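In code, that stacking is a one-liner (a sketch assuming the question's names f, A1, d1, A2, d2):
A = [A1; A2];           % stack the constraint matrices vertically
d = [d1; d2];           % stack the right-hand sides the same way
x = linprog(f, A, d);   % one call: A*x <= d covers both constraint sets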
You may be interested in the Dantzig-Wolfe decomposition algorithm for solving linear programs; it takes advantage of this block diagonal structure. However, I don't think there is an out-of-the-box implementation of it in commercial software.

Solve the matrix equation

I have two matrices D and Y.
I want to find the matrix G according to this:
G*D = Y
Note that all of these matrices are not square matrices.
According to Matlab's documentation, if you want to solve an equation of the form
xA = b
you can solve it by doing
x = b/A
Note that your system is underdetermined, and you cannot simply find a single solution without additional constraints. An example:
A=[1;2;3];
b=[14;32];
x=b/A;
x*A==b % check if solution is correct
[1,2,3;4,5,6]*A==b % another, equally correct solution
It "works", but without restating the problem you're not going to get at anything better.
Note this is quite extensively explained in the same documentation.
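Applied to the names in this question, the solve is a one-liner (note that / returns a least-squares solution when no exact one exists, and just one of many when the system is underdetermined):
G = Y / D;   % solves G*D = Y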

Solve *sparse* upper triangular system

If I want to solve a full upper triangular system, I can call linsolve(A,b,'UT'). However, this is currently not supported for sparse matrices. How can I overcome this?
UT and LT systems are amongst the easiest systems to solve. Have a read on the wiki about it. Knowing this, it is easy to write your own UT or LT solver:
%# some example data
A = sparse( triu(rand(100)) );
b = rand(100,1);
%# solve UT system by back substitution
x = zeros(size(b));
for n = size(A,1):-1:1
    x(n) = ( b(n) - A(n,n+1:end)*x(n+1:end) ) / A(n,n);
end
The procedure is quite similar for LT systems.
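For completeness, the forward-substitution version (this assumes A is now lower triangular, e.g. A = sparse(tril(rand(100)))):
%# solve LT system by forward substitution
x = zeros(size(b));
for n = 1:size(A,1)
    x(n) = ( b(n) - A(n,1:n-1)*x(1:n-1) ) / A(n,n);
end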
Having said that, it is generally much easier and faster to use Matlab's backslash operator:
x = A\b
which also works for sparse matrices, as nate already indicated.
Note that this operator also solves UT systems that have a non-square A, or an A with some elements equal to zero (or < eps) on the main diagonal. It solves these cases in a least-squares sense, which may or may not be what you want. You could check for these cases before carrying out the solve:
if size(A,1)==size(A,2) && all(abs(diag(A)) > eps)
    x = A\b;
else
    %# error, warning, whatever you want
end
Read more about the (back)slash operator by typing
>> help \
or
>> help slash
on the Matlab command prompt.
Edit: Since what you need is a triangular solve procedure, also called backward/forward substitution, you can use the ordinary MATLAB backslash \ operator for that:
x = U\b
As mentioned in the original answer, MATLAB will recognise that your matrix is triangular. To be sure of that, you can compare the performance to the cs_usolve procedure found in SuiteSparse. It is a MEX function implemented in C that computes a sparse triangular solve for an upper triangular sparse matrix (there are similar functions there too: cs_lsolve, cs_utsolve and cs_ltsolve).
You can have a look at a performance comparison of native MATLAB and cs_l(t)solve in the context of sparse Cholesky factorization. Essentially, MATLAB performance is good. The only pitfall is if you want to solve a transposed system
x = U'\b
MATLAB does not recognize that and explicitly creates a transpose of U. In that case you should call cs_utsolve explicitly.
Original answer: If your system is symmetric and you only store the upper triangular matrix part (that is how I understood "full" in your question), and if Cholesky decomposition is suitable for you, then chol handles symmetric matrices, provided your matrix is positive definite. For indefinite matrices you can use ldl. Both handle sparse storage and work on the symmetric matrix parts.
Newer MATLAB versions use CHOLMOD from SuiteSparse for that. That is by far the best performing Cholesky factorization I know of. In MATLAB it is also parallelised using a parallel BLAS.
The factor you obtain from the above functions (e.g., chol(A,'lower')) is a lower triangular matrix L such that
A = L*L'
All you need to do now is perform forward and backward substitution, which is simple and cheap. In MATLAB this is done automatically by the backslash operator:
x = L'\(L\b)
The matrix can be sparse, and MATLAB will recognise that it is upper/lower triangular. You would also use this call for factors obtained with the Cholesky factorization.
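Putting it together, a minimal sketch of the full sparse Cholesky solve (the SPD test matrix is assumed):
A = sprandsym(1000, 0.01, 0.1, 1);   % sparse, symmetric positive definite
b = rand(1000, 1);
L = chol(A, 'lower');                % A = L*L', with L lower triangular
x = L' \ (L \ b);                    % forward, then backward substitution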
You can use the MLDIVIDE ( \ ) or MRDIVIDE ( / ) operators on your sparse matrices...

Doing a PCA using an optimization in Matlab

I'd like to find the principal components of a data matrix X in Matlab by solving the optimization problem min||X-XBB'||, where the norm is the Frobenius norm, and B is an orthonormal matrix. I'm wondering if anyone could tell me how to do that. Ideally, I'd like to be able to do this using the optimization toolbox. I know how to find the principal components using other methods. My goal is to understand how to set up and solve an optimization problem which has a matrix as the answer. I'd very much appreciate any suggestions or comments.
Thanks!
MJ
The thing about Optimization is that there are different methods to solve a problem, some of which can require extensive computation.
Your solution, given the constraints for B, is to use fmincon. Start by creating a file for the non-linear constraints:
function [c,ceq] = nonLinCon(x)
c = [];                                 % no inequality constraints
ceq = norm(x'*x - eye(size(x)),'fro');  % this checks that B is orthonormal
end
then call the routine:
B = fmincon(@(B) norm(X - X*B*B','fro'),B0,[],[],[],[],[],[],@nonLinCon)
with B0 being a good guess on what the answer will be.
Also, you need to understand that this algorithm tries to find a local minimum, which may not be the solution you ultimately want. For instance:
X = randn(1,2)
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
    0.4904    0.8719
    0.8708   -0.4909
fmincon(@(B) norm(X - X*B*B','fro'),rand(2),[],[],[],[],[],[],@nonLinCon)
ans =
    0.9864   -0.1646
    0.1646    0.9864
So be careful when using these methods, and try to select a good starting point.
The Statistics toolbox has a built-in function 'princomp' that does PCA. If you want to learn (in general, without the optimization toolbox) how to create your own code to do PCA, this site is a good resource.
Since you've specifically mentioned wanting to use the Optimization Toolbox and to set this up as an optimization problem, there is a well-trusted third-party package known as CVX from Stanford University that can solve the optimization problem you are referring to.
Do you have the optimization toolbox? The documentation is really good, just try one of their examples: http://www.mathworks.com/help/toolbox/optim/ug/brg0p3g-1.html.
But in general, the optimization functions look like this:
[OptimizedMatrix, OptimizedObjectiveFunction] = optimize( @(MatrixToOptimize) MyObjectiveFunction(MatrixToOptimize), InitialConditionsMatrix, ...optional constraints and options... );
You must create MyObjectiveFunction() yourself; it must take the matrix you want to optimize as input and output a scalar value indicating the cost of the current input matrix. Most of the optimizers will try to minimise this cost; note that the cost must be a scalar.
fmincon() is a good place to start; once you are used to the toolbox, you should choose a more specific optimization algorithm for your problem if you can.
To optimize a matrix rather than a vector, reshape the matrix into a vector, pass this vector to your objective function, and then reshape it back into a matrix within the objective function.
For example, say you are trying to optimize the 3 x 3 matrix M. You have defined an objective function MyObjectiveFunction(InputVector). Pass M as a vector:
MyObjectiveFunction(M(:));
And within the MyObjectiveFunction you must reshape M (if necessary) to be a matrix again:
function cost = MyObjectiveFunction(InputVector)
    InputMatrix = reshape(InputVector, [3 3]);   % rebuild the 3 x 3 matrix
    % ... matrix operations on InputMatrix that produce a scalar cost, e.g.:
    cost = norm(InputMatrix, 'fro');             % placeholder scalar cost
end
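Putting the pieces together, a minimal end-to-end sketch (the solver choice fminunc, the placeholder cost above, and the initial guess M0 are all illustrative):
M0 = eye(3);                                % assumed initial guess
v = fminunc(@MyObjectiveFunction, M0(:));   % optimize over the 9-vector
M = reshape(v, [3 3]);                      % recover the optimized matrix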