Is there a way to turn off pivoting when computing the inverse of a tridiagonal matrix in matlab? I'm trying to see if a problem I'm having with solving a tridiagonal system comes from not pivoting, and I could test that simply in matlab by solving the same system with pivoting turned off. Any help is appreciated!
The documentation for mldivide doesn't list any way of setting low-level options like that.
I'd imagine that is because automatic pivoting is not only desired but expected from most tools these days.
For a tridiagonal matrix that is full, MATLAB will use its Hessenberg solver (which I imagine is akin to this flow), and for a sparse tridiagonal matrix it will use a tridiagonal solver. In both cases, partial pivoting may be used to ensure an accurate solution of the system.
To get around the fact that MATLAB doesn't have a toggle for pivoting, you could implement your own tridiagonal solver (see above link) without pivoting and see how the solution is affected.
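For reference, a tridiagonal solve without pivoting is just the Thomas algorithm. Here is a minimal sketch in MATLAB (the function name and argument layout are my own, not anything built in):

function x = thomas_nopivot(a, b, c, d)
% Solve a tridiagonal system with no pivoting (Thomas algorithm).
% a: sub-diagonal (length n-1), b: main diagonal (n), c: super-diagonal (n-1),
% d: right-hand side (n). It will break down if a zero or tiny pivot appears,
% which is exactly the effect you want to expose.
n = numel(b);
for k = 2:n
    m = a(k-1) / b(k-1);          % elimination multiplier, no row exchange
    b(k) = b(k) - m * c(k-1);
    d(k) = d(k) - m * d(k-1);
end
x = zeros(n, 1);
x(n) = d(n) / b(n);
for k = n-1:-1:1                  % back substitution
    x(k) = (d(k) - c(k) * x(k+1)) / b(k);
end
end

Comparing the output of this against A\b for your system should show whether pivoting is what makes the difference.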
I have a linear program with order N^4 variables and order N^4 constraints. If I want to solve this in AMPL, I define the constraints one by one without having to bother about the exact coefficient matrices, and no memory issues arise. When using the standard LP solver in Matlab, however, I need to define the matrices explicitly.
When I have variables with four subscripts, this leads to a massively sparse matrix of dimension order N^4 x N^4. This matrix won't even fit in memory for non-trivial problem sizes.
Is there a way to get around this problem using Matlab, apart from various column generation/cutting plane techniques? Since AMPL manages to solve it, I suppose they're either automating some kind of decomposition, or they somehow solve the LP without explicitly working with this sparse monster matrix.
Apart from sparse, mentioned by m.s., you can also use the AMPL API for MATLAB. It is especially useful if you already have an AMPL model and want to work with it from MATLAB.
Converting my comment into an answer:
MATLAB supports sparse matrices using the sparse command which allows you to build your constraint matrix without exceeding memory limits.
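As a rough sketch of the idea (the four-index variable, the single constraint family and the random objective below are all made up for illustration, not the poster's model), the matrix is assembled from (row, column, value) triplets so the N^3-by-N^4 block is never formed densely:

N     = 10;
nvars = N^4;                                         % order N^4 variables
idx   = @(i,j,k,l) sub2ind([N N N N], i, j, k, l);   % linear index of x(i,j,k,l)
[i, j, k, l] = ndgrid(1:N, 1:N, 1:N, 1:N);
rows  = sub2ind([N N N], i(:), j(:), k(:));          % one constraint per (i,j,k)
cols  = idx(i(:), j(:), k(:), l(:));
Aeq   = sparse(rows, cols, 1, N^3, nvars);           % sum over l of x(i,j,k,l) = 1
beq   = ones(N^3, 1);
f     = rand(nvars, 1);                              % dummy objective
x     = linprog(f, [], [], Aeq, beq, zeros(nvars, 1), ones(nvars, 1));

Only the nonzero coefficients are ever stored, which keeps memory usage proportional to the number of nonzeros rather than to N^8.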
I have to identify an ARX model under some linear constraints, which means I have a quadratic programming problem with linear equality constraints.
One way is to use the equations in the red boxes. A possible disadvantage in this case is the calculation of the matrix inverses (sometimes Matlab gives me the warning: Matrix is close to singular or badly scaled).
Another way is to use in Matlab the command: quadprog()
Another way is to use in Matlab the command: lsqlin()
Which of these three methods is the best one?
Which is the most robust numerically?
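As a sanity check you can set the same equality-constrained least-squares problem up both ways and compare. A hedged sketch with invented data (Phi, y and the constraint below are placeholders, not your ARX regressors):

rng(0);
Phi = randn(100, 5);  theta_true = [1; -2; 0.5; 3; 0];
y   = Phi * theta_true + 0.01 * randn(100, 1);
Aeq = [1 1 1 1 1];  beq = 2.5;                 % example equality constraint

theta1 = lsqlin(Phi, y, [], [], Aeq, beq);     % least-squares form

H = Phi' * Phi;  f = -Phi' * y;                % QP form: 0.5*t'*H*t + f'*t
theta2 = quadprog(H, f, [], [], Aeq, beq);

norm(theta1 - theta2)                          % should be on the order of roundoff

Note that lsqlin works with the regressor matrix Phi directly, whereas the quadprog route squares the condition number by forming Phi'*Phi, so lsqlin is usually the safer of the two when you are already seeing near-singular warnings.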
I am doing a comparison of some alternate linear regression techniques.
Clearly these will be bench-marked relative to OLS (Ordinary Least Squares).
But I want a pure OLS method, with no preconditioning of the data, so that any ill-conditioning in the data is exposed rather than hidden as it is when you use regress().
I had hoped to simply use the classic (X'X)^-1 X'Y expression. However, this would necessitate using the inv() function, and the MATLAB documentation page for inv() recommends using mldivide for least squares estimation, as it is superior in terms of execution time and numerical accuracy.
However, I'm not sure whether it's okay to use mldivide to find the OLS estimates. Since it is an operator, it seems I can't see what it is doing by "stepping in" with the debugger.
Can I assume that mldivide will produce the same answers as theoretical OLS under all conditions, including in the presence of singular/ill-conditioned matrices?
If not what is the best way to compute pure OLS answers in MATLAB without any preconditioning of the data?
The short answer is:
When the system A*x = b is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)); MLDIVIDE will pick the solution with the fewest non-zero elements.
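A tiny made-up underdetermined example illustrates the difference:

A = [1 2 3; 4 5 6];            % 2 equations, 3 unknowns
b = [6; 15];
x_bs   = A \ b;                % mldivide: a basic solution, at most rank(A) nonzeros
x_pinv = pinv(A) * b;          % pinv: the minimum-norm solution
[norm(x_bs), norm(x_pinv)]     % the pinv norm is never larger
[A*x_bs - b, A*x_pinv - b]     % both residuals are essentially zero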
As for how mldivide works, MathWorks also posted a description of how the function operates.
However, you might also want to have a look at this answer for the first part of the discussion about mldivide vs. other methods when the matrix A is square.
Depending on the shape and composition of the matrix, you would use Cholesky decomposition for a symmetric positive definite matrix, LU decomposition for other square matrices, or QR otherwise. Then you can hold onto the factorization and use linsolve to essentially just do the back-substitution for you.
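A rough sketch of that factor-once idea for an overdetermined, full-rank least-squares problem (the data here is invented):

A = randn(1000, 20);
[Q, R] = qr(A, 0);                  % thin QR, computed once
opts.UT = true;                     % tell linsolve that R is upper triangular
for k = 1:50                        % many right-hand sides, no refactorization
    b = randn(1000, 1);
    x = linsolve(R, Q' * b, opts);  % just a triangular back-substitution
end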
As to whether mldivide is preferable to pinv when A is rank deficient (whether rectangular or square but singular), the two options will give you two of the infinitely many solutions. According to those docs, both are exact solutions:
Both of these are exact solutions in the sense that norm(A*x-b) and norm(A*y-b) are on the order of roundoff error.
According to the help page pinv gives a least squares solution to a system of equations, and so to solve the system Ax=b, just do x=pinv(A)*b.
There are many curve fitting and interpolation tools like polyfit (or even this nice logfit toolbox I found here), but I can't seem to find anything that will fit a sigmoid function to my x-y data.
Does such a tool exist or do I need to make my own?
If you have the Statistics Toolbox installed, you can use nonlinear regression with nlinfit:
sigfunc = @(A, x)(A(1) ./ (A(2) + exp(-x)));  % example sigmoid, coefficients A(1), A(2)
A0 = ones(2, 1); %// Initial values fed into the iterative algorithm
A_fit = nlinfit(x, y, sigfunc, A0);  % x, y are your data vectors
Here sigfunc is just an example of a sigmoid function, and A is the vector of the fitting coefficients.
nlinfit, and especially gatool, are big hammers for this problem. A sigmoid is not a specific function. Most commonly it is taken to be the same as the logistic function (also often the most efficient to calculate):
y = 1./(1+exp(-x));
or a generalized logistic. But all manner of curves can have sigmoidal shapes. If you know if your data corresponds to one in particular, fitting can be improved and more efficient methods can be applied. For example, the error function (erf) has a sigmoidal shape and shows up in the CDF of the normal distribution. If you know that your data is the result of a Gaussian process (i.e., the data is the CDF) and you have the Stats toolbox, you can use the normfit function. This function is based on maximum likelihood estimation (MLE). If you end up needing to write a custom fitting function - say, for performance reasons - I'd investigate MLE techniques for the particular form of sigmoid that you'd like to fit.
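If the plain logistic form is adequate and you have no toolboxes, a least-squares fit with fminsearch already works; a small sketch with fabricated data (the three-parameter form and starting guesses are only illustrative):

x = linspace(-5, 5, 50);
y = 2 ./ (1 + exp(-1.5 * (x - 0.5))) + 0.05 * randn(size(x));   % fake data
model = @(p, x) p(1) ./ (1 + exp(-p(2) * (x - p(3))));          % p = [L, k, x0]
sse   = @(p) sum((model(p, x) - y) .^ 2);                       % sum of squared errors
p_fit = fminsearch(sse, [max(y), 1, median(x)]);                % crude initial guess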
I would suggest you use MATLAB's Global Optimization Toolbox, and in particular the Genetic Algorithm Solver, which you can use for your problem by optimizing (i.e., finding the best fit for your data) the sigmoid function's parameters through a genetic algorithm. It has a GUI that is easy to use.
The Genetic Algorithm Solver's GUI can be opened by calling gatool.
I have a problem where I am fitting a high-order polynomial to (not very) noisy data using linear least squares. Currently I'm using polynomial orders around 15 - 25, which work surprisingly well: the dependence is very nearly linear, but the accuracy of modelling the 'very nearly' is critical. I'm using Matlab's polyfit() function, and (obviously) normalising the x-data. This generally works fine, but I have come across an issue with some recent datasets: the fitted polynomial has extrema within the x-data interval. For the application I'm working on this is a no-no. The polynomial model must have no stationary points over the x-interval.
So I need to add a constraint to the least-squares problem: the derivative of the fitted polynomial must be strictly positive over a known x-range (or strictly negative - this depends on the data but a simple linear fit will quickly tell me which it is.) I have had a quick look at the available optimisation toolbox functions, but I admit I'm at a loss to know how to go about this. Does anyone have any suggestions?
[I appreciate there are probably better models than polynomials for this data, but in the short term it isn't feasible to change the form of the model]
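One way to express the constraint, assuming the Optimization Toolbox is available: keep the linear least-squares objective but require the fitted polynomial's derivative to be non-negative on a grid over the x-range, and hand the problem to lsqlin. A sketch with invented, nearly linear data (R2016b+ implicit expansion assumed; replace the zero bound with a small positive number if you need strict positivity):

x  = linspace(0, 1, 200)';
y  = x + 0.02 * sin(20 * x) + 0.001 * randn(size(x));           % fake, nearly linear data
n  = 15;                                                        % polynomial degree
xs = (x - mean(x)) / std(x);                                    % normalise as with polyfit
V  = xs .^ (n:-1:0);                                            % Vandermonde, powers n..0
xg = linspace(min(xs), max(xs), 500)';                          % constraint grid
D  = [(xg .^ (n-1:-1:0)) .* (n:-1:1), zeros(numel(xg), 1)];     % rows evaluate p'(xg)
p  = lsqlin(V, y, -D, zeros(numel(xg), 1));                     % enforce p'(xg) >= 0
yfit = V * p;                                                   % same ordering as polyval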
[A closing note: I have finally got the go-ahead to replace this awful polynomial model! I am going to adopt a nonparametric approach, spline smoothing, using the excellent SPLINEFIT code by Jonas Lundgren. This has the advantage that I'm already using a spline model in the end-user application, so I already have C# code available to evaluate a spline model]
You could use cftool and its option to exclude data points.