Is mldivide always the same as OLS in MATLAB?

I am comparing some alternative linear regression techniques. Naturally, these will be benchmarked against OLS (Ordinary Least Squares).
But I want a pure OLS method, with no preconditioning of the data to uncover ill-conditioning, such as you get when you use regress().
I had hoped to simply use the classic expression (X'X)^(-1) X'Y. However, that would necessitate using the inv() function, and the MATLAB documentation page for inv() recommends using mldivide when doing least-squares estimation, as it is superior in terms of both execution time and numerical accuracy.
However, I'm unsure whether it's okay to use mldivide to find the OLS estimates. Since it's an operator, I can't see what it is doing by stepping into it in the debugger.
Can I assume that mldivide will produce the same answers as theoretical OLS under all conditions, including in the presence of singular or ill-conditioned matrices?
If not, what is the best way to compute pure OLS answers in MATLAB without any preconditioning of the data?

The short answer is:
When the system A*x = b is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)), while MLDIVIDE will pick a basic solution with the fewest non-zero elements.
As for how mldivide works, MathWorks has also posted a description of the algorithms the function uses.
However, you might also want to have a look at this answer for the first part of the discussion about mldivide vs. other methods when the matrix A is square.
Depending on the shape and composition of the matrix, you would use Cholesky decomposition for a symmetric positive definite matrix, LU decomposition for other square matrices, or QR decomposition otherwise. You can then hold onto the factorization and use linsolve to essentially just do the back-substitution for you.
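For instance, here is a minimal sketch of that factor-once-solve-many pattern for least squares, assuming a tall design matrix X with full column rank and a response vector y (placeholder names, not variables from the question):

    % Sketch: reuse an economy-size QR factorization for least squares.
    % Assumes X is m-by-n with m > n and full column rank.
    [Q, R] = qr(X, 0);                 % factor once
    opts.UT = true;                    % tell linsolve that R is upper triangular
    beta = linsolve(R, Q' * y, opts);  % back-substitution only on later solves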
As to whether mldivide is preferable to pinv when A is rank deficient, whether rectangular or square but singular, the two options will simply give you two of the infinitely many least-squares solutions. According to those docs, both are exact:
Both of these are exact solutions in the sense that norm(A*x-b) and norm(A*y-b) are on the order of roundoff error.
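A small illustration of the difference, using a made-up rank-1 system (the numbers are invented for the example; expect a rank-deficiency warning from backslash):

    % mldivide vs. pinv on a rank-deficient system (example data made up)
    A = [1 2; 2 4; 3 6];    % rank 1: the second column is twice the first
    b = [3; 6; 9];          % consistent right-hand side, b = A*[1; 1]
    x_basic = A \ b;        % basic solution: at most rank(A) nonzero entries
    x_min   = pinv(A) * b;  % minimum-norm least-squares solution
    [norm(A*x_basic - b), norm(A*x_min - b)]  % both at roundoff level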

According to the help page, pinv gives a least-squares solution to a system of equations, so to solve the system Ax = b, just do x = pinv(A)*b.
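Tying this back to the original question, here is a sketch of how the three "pure OLS" routes compare on a well-conditioned problem (the data is randomly generated purely for illustration):

    % Three ways to get OLS estimates (random data, for illustration only)
    rng(0);                          % reproducible example
    X = randn(100, 3);               % well-conditioned design matrix
    y = X * [1; -2; 3] + 0.1 * randn(100, 1);
    b_normal = (X' * X) \ (X' * y);  % normal equations (squares the condition number)
    b_mldiv  = X \ y;                % QR-based least squares via mldivide
    b_pinv   = pinv(X) * y;          % SVD-based pseudo-inverse

On well-conditioned data all three agree to roundoff; they only diverge when X is rank deficient or badly conditioned.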

Related

MATLAB eig vs eigs vs svd vs svds

I'm running an MCMC scheme in which I calculate a lot of eigenvalues. The matrices range from around 10x10 to 200x200, so they are not massive, and definitely not at the size where I would need to consider using sparse matrices.
Each matrix I'm looking at has a 0 eigenvalue, and I just need to find the eigenvector corresponding to that 0 eigenvalue. Which function out of eig, eigs, svd, svds would be fastest for this?
eigs allows you to specify that you only want the smallest eigenvalue (or the n smallest eigenvalues), so instinctively I'd think it would be faster, though I don't know anything about the underlying methods. I think something similar can be done with svd/svds.
I've also run into issues with the other methods telling me that my system is almost singular, which doesn't occur when I use eig.
Does anyone have any suggestions on what the best method to use would be?
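As a sketch of the approaches being compared, assuming M stands in for one of the matrices described above (at these sizes a full dense decomposition is often fast enough):

    % Get the eigenvector for the (near-)zero eigenvalue of a small dense M
    [V, D] = eig(M);             % full dense eigendecomposition
    [~, k] = min(abs(diag(D)));  % index of the eigenvalue closest to 0
    v0 = V(:, k);                % corresponding eigenvector

    % Alternative: ask eigs for only the smallest-magnitude eigenpair.
    % (Iterative, so it mainly pays off for larger or sparse matrices.)
    [v0_alt, d0] = eigs(M, 1, 'smallestabs');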

Return elements of the Groebner Basis as they are found

This question could refer to any computer algebra system which has the ability to compute a Gröbner basis from a set of polynomials (Mathematica, Singular, GAP, Macaulay2, MATLAB, etc.).
I am working with an overdetermined system of polynomials for which the full Gröbner basis is too difficult to compute. However, it would be valuable for me to be able to print out the basis elements as they are found, so that I can tell whether a particular polynomial is in the basis. Is there any way to do this?
If you implement Buchberger's algorithm on your own, then you can simply print out the elements as they are found.
If you have Mathematica, you can use this code as your starting point.
https://www.msu.edu/course/mth/496/snapshot.afs/groebner.m
See the function BuchbergerSteps.
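If you do roll your own, a very rough MATLAB skeleton of the Buchberger loop with printing added might look like the following. Note that sPolynomial and reducePoly are hypothetical helpers, not built-in functions; a real implementation needs polynomial arithmetic under a fixed monomial order:

    % Skeleton of Buchberger's algorithm that prints each new basis element.
    % sPolynomial() and reducePoly() are hypothetical helpers (S-polynomial
    % and multivariate division) that you would have to implement yourself.
    % Assumes F is a cell array with at least two polynomials.
    function G = buchbergerVerbose(F)
        G = F;
        pairs = nchoosek(1:numel(G), 2);     % index pairs left to process
        while ~isempty(pairs)
            i = pairs(1, 1);  j = pairs(1, 2);
            pairs(1, :) = [];                % pop the next pair
            s = sPolynomial(G{i}, G{j});     % hypothetical S-polynomial
            r = reducePoly(s, G);            % hypothetical reduction modulo G
            if ~isequal(r, 0)                % nonzero remainder: new element
                G{end + 1} = r;
                fprintf('New basis element #%d found:\n', numel(G));
                disp(r);                     % print it as soon as it appears
                m = numel(G);                % pair the new element with the rest
                pairs = [pairs; (1:m-1)', repmat(m, m-1, 1)];
            end
        end
    end

Keep in mind the caveat in the next answer: the partial list printed before the loop terminates is not itself guaranteed to be a Gröbner basis.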
Due to the way the Buchberger algorithm works (see, for instance, Wikipedia or IVA), the partial results that you could obtain by printing intermediate results are not guaranteed to constitute a Gröbner basis.
Depending on your ultimate goal, you may want to try instead an algorithm for triangularization of ideals, such as Ritt-Wu's algorithm (see IVA or Shang-Ching Chou's book). This is somewhat similar to reduction to row echelon form in Linear Algebra, and you may interrupt the algorithm at any point to get a partially reduced system of polynomial equations.

MATLAB calculates INV wrong (for singular matrices)

MATLAB sometimes calculates INV wrong. See this example:
[ 8617412867597445*2^(-25), 5859840749966268*2^(-28);
  5859840749966268*2^(-28), 7969383419954132*2^(-32) ]
If you put this matrix into MATLAB, it says there is no inverse, but in a calculator it has one.
What is going on?
Please read "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
Next, don't compute the inverse anyway. An explicit inverse matrix is almost never necessary, except in textbooks, where it is convenient to write. Sadly, many authors do not appreciate this fact, because they learned from textbooks by other people who also failed to understand that forming an explicit inverse is a bad idea in general.
Since this matrix is numerically singular in double precision arithmetic, the inverse of that matrix is meaningless.
Using the MATLAB backslash operator will generally be better and faster than computing an inverse. Or use pinv, which is more robust to such problems.
Hi, I wanted to comment on Woodchips' answer, but since I'm a new user I can't seem to do that. That is one very interesting article, and I must read it in more detail when I have the time.
With regard to matrix inversion, you can always use the cond command to calculate the condition number of the matrix; for a well-conditioned matrix the value will be close to unity. As Woodchips suggested, pinv comes in handy if you need to find the pseudo-inverse of a non-square matrix.
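As a sketch of that check applied to the matrix from the question (the right-hand side in the last line is made up for illustration):

    % Diagnose the matrix from the question before trying to invert it
    A = [ 8617412867597445*2^(-25), 5859840749966268*2^(-28);
          5859840749966268*2^(-28), 7969383419954132*2^(-32) ];
    cond(A)   % enormous: A is numerically singular in double precision
    rank(A)   % numerical rank is 1, not 2
    x = pinv(A) * [1; 0];  % pseudo-inverse still gives a least-squares solution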

Why is Matlab's inv slow and inaccurate?

I have read in a few places (in the documentation and in this blog post: http://blogs.mathworks.com/loren/2007/05/16/purpose-of-inv/) that the use of inv in Matlab is not recommended because it is slow and inaccurate.
I am trying to find the reason for this inaccuracy. So far, Google has not given me any interesting results, so I thought someone here could guide me.
Thanks!
The inaccuracy I mentioned is with the method INV, not MATLAB's implementation of it. You should be using QR, LU, or other methods to solve systems of equations since these methods don't typically require squaring the condition number of the system in question. Using inv typically requires an operation that loses accuracy by squaring the condition number of the original system.
--Loren
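A quick numerical illustration of that squaring effect (the test matrix is generated just for this example):

    % Normal equations square the condition number (illustrative data)
    X = gallery('randsvd', 50, 1e6);  % 50x50 matrix with condition number ~1e6
    cond(X)       % about 1e6
    cond(X' * X)  % about 1e12, so solving through X'*X loses extra accuracy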
I think the point of Loren's blog is not that MATLAB's inv function is particularly slower or more inaccurate than any other numerical implementation of computing a matrix inverse; rather, that in most cases the inverse itself is not needed, and you can proceed by other means (such as solving a linear system using \ - the backslash operator - rather than computing an inverse).
inv() is certainly slower than \ unless you have multiple right-hand sides to solve for. However, the advice from MathWorks regarding inaccuracy is due to an overly conservative bound in a numerical linear algebra result. In other words, inv() is NOT inaccurate. This link elaborates further: http://arxiv.org/abs/1201.6035
Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers.
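A quick way to probe that claim yourself (the Hilbert matrix is just a standard ill-conditioned test case; expect a near-singularity warning from inv):

    % Compare backslash with an explicit inverse on an ill-conditioned system
    n = 10;
    A = hilb(n);                 % cond(A) is roughly 1e13 for n = 10
    x_true = ones(n, 1);
    b = A * x_true;
    x_bs  = A \ b;               % backward-stable solve
    x_inv = inv(A) * b;          % solve via the computed inverse
    relerr = @(x) norm(x - x_true) / norm(x_true);
    [relerr(x_bs), relerr(x_inv)]

Both relative errors are large because the problem itself is badly conditioned, but they are typically of similar magnitude, which is the paper's point.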