Which numerical algorithm does MATLAB use to solve a set of linear equations when we write x = A\B? For example, Gauss-Jordan, the LU method, etc.?
Thank you
The best one!¹
The flow chart from the official documentation below shows how the algorithm is chosen for full matrices. The corresponding chart for sparse matrices is a bit larger.
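For the sparse case you can also watch the choice being made in practice. A small sketch, assuming a sparse matrix A and right-hand side b, using the sparse-monitor flag:

spparms('spumoni', 2);   % print diagnostic info from the sparse algorithms
x = A \ b;               % backslash now reports which solver/ordering it used
spparms('spumoni', 0);   % turn the diagnostics back off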
¹ Hopefully this will result in the best algorithm.
Is there a way to turn off pivoting when computing the inverse of a tridiagonal matrix in MATLAB? I'm trying to see whether a problem I'm having with solving a tridiagonal system comes from not pivoting, and I could test this simply in MATLAB by solving the same system with pivoting turned off. Any help is appreciated!
The documentation for mldivide doesn't list any way to set low-level options like that.
I'd imagine that is because automatic pivoting is not only desired but expected from most tools these days.
For a tridiagonal matrix that is full, MATLAB will use its Hessenberg solver (which I imagine is akin to this flow) and, for a sparse tridiagonal matrix, will use a tridiagonal solver. In both cases, partial pivoting may be used to ensure an accurate solution of the system.
To get around the fact that MATLAB doesn't have a toggle for pivoting, you could implement your own tridiagonal solver (see the link above) without pivoting and see how the solution is affected.
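For instance, here is a minimal sketch of the classic Thomas algorithm with no pivoting (the function name and argument layout are mine, not from any toolbox); a, b and c hold the sub-, main and super-diagonals, with a(1) and c(n) unused, and d is the right-hand side:

function x = thomas_nopivot(a, b, c, d)
    % Solve a tridiagonal system without any row interchanges.
    n = numel(d);
    for i = 2:n                       % forward elimination, no pivoting
        m = a(i) / b(i-1);
        b(i) = b(i) - m * c(i-1);
        d(i) = d(i) - m * d(i-1);
    end
    x = zeros(n, 1);
    x(n) = d(n) / b(n);
    for i = n-1:-1:1                  % back substitution
        x(i) = (d(i) - c(i) * x(i+1)) / b(i);
    end
end

Comparing its output against A\d for the same system should isolate whether pivoting is the source of the discrepancy.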
Does anybody know of an implementation of the Iterative Closest Point (ICP) algorithm in MATLAB that computes the covariance matrix?
All I have found is the icptoolboxformatlab, but it seems to be offline.
http://censi.mit.edu/research/robot-perception/icpcov/
There is MATLAB code for the 2D case. It provides the ICP covariance too.
I have implemented the 3D variant of point-to-point error-metric-based ICP covariance estimation, which is inspired by Andrea Censi's work.
Have a look at
https://sites.google.com/site/icpcovariance/home
I am doing a comparison of some alternate linear regression techniques.
Clearly these will be bench-marked relative to OLS (Ordinary Least Squares).
But I just want a pure OLS method, with none of the preconditioning of the data to uncover ill-conditioning that you find when you use regress().
I had hoped to simply use the classic (X'X)^(-1)X'Y expression. However, this would necessitate using the inv() function, and the MATLAB documentation page for inv() recommends using mldivide when doing least-squares estimation, as it is superior in terms of execution time and numerical accuracy.
However, I'm concerned as to whether it's okay to use mldivide to find the OLS estimates. Since it's an operator, it seems I can't see what it is doing by stepping into it in the debugger.
Can I assume that mldivide will produce the same answers as theoretical OLS under all conditions, including in the presence of singular/ill-conditioned matrices?
If not what is the best way to compute pure OLS answers in MATLAB without any preconditioning of the data?
The short answer is:
When the system A*x = b is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)), while MLDIVIDE will pick the solution with the least number of non-zero elements.
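A small example of the difference on an underdetermined system (made-up numbers, just to illustrate):

A = [1 2 3; 4 5 6];        % 2 equations, 3 unknowns
b = [7; 8];
x1 = A \ b;                % basic solution: at most rank(A) non-zero entries
x2 = pinv(A) * b;          % minimum-norm solution
norm(A*x1 - b)             % both residuals are on the order of roundoff
norm(A*x2 - b)
[norm(x1), norm(x2)]       % norm(x2) <= norm(x1)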
As for how mldivide works, MathWorks also posted a description of how the function operates.
However, you might also want to have a look at this answer for the first part of the discussion about mldivide vs. other methods when the matrix A is square.
Depending on the shape and composition of the matrix, you would use either Cholesky decomposition for a symmetric positive definite matrix, LU decomposition for other square matrices, or QR otherwise. You can then hold onto the factorization and use linsolve to essentially just do the back-substitution for you.
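For example, a sketch of that pattern for the symmetric positive definite case (assuming A is SPD and b is a matching right-hand side):

R = chol(A);                              % A = R'*R, with R upper triangular
optsT.UT = true; optsT.TRANSA = true;     % solve R'*y = b (forward substitution)
opts.UT = true;                           % solve R*x  = y (back substitution)
y = linsolve(R, b, optsT);
x = linsolve(R, y, opts);

Subsequent right-hand sides can reuse R, so only the two triangular solves are repeated.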
As to whether mldivide is preferable to pinv when A is either not square (underdetermined) or is square but singular, the two options will give you two of the infinitely many solutions. According to the docs, both are exact solutions:
Both of these are exact solutions in the sense that norm(A*x-b) and norm(A*y-b) are on the order of roundoff error.
According to the help page, pinv gives a least-squares solution to a system of equations, so to solve the system Ax = b, just do x = pinv(A)*b.
What is the k-nearest-neighbour regression function in MATLAB? Is only the k-NN classification function available? Does anybody know of any useful literature on this?
Regards
Farideh
I don't believe the k-NN regression algorithm is directly implemented in MATLAB, but if you do some googling you can find some valid implementations. The algorithm is fairly simple, though (a sketch follows the steps below):
Find the k nearest elements using whatever distance metric is suitable.
Compute the inverse-distance weight of each of the k elements.
Compute the weighted mean of the k elements using the inverse-distance weights.
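A minimal sketch of those steps (assuming the Statistics Toolbox for knnsearch; Xtrain is n-by-d, ytrain is n-by-1 and Xquery is m-by-d — the variable names are mine):

k = 5;
[idx, d] = knnsearch(Xtrain, Xquery, 'K', k);   % step 1: k nearest neighbours
w = 1 ./ max(d, eps);                           % step 2: inverse-distance weights
w = w ./ sum(w, 2);                             % normalise the weights per query
ypred = sum(w .* ytrain(idx), 2);               % step 3: weighted mean of targets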
There are many curve fitting and interpolation tools like polyfit (or even this nice logfit toolbox I found here), but I can't seem to find anything that will fit a sigmoid function to my x-y data.
Does such a tool exist or do I need to make my own?
If you have the Statistics Toolbox installed, you can use nonlinear regression with nlinfit:
sigfunc = @(A, x)(A(1) ./ (A(2) + exp(-x)));
A0 = [1 1]; %// Initial values for the two coefficients, fed into the iterative algorithm
A_fit = nlinfit(x, y, sigfunc, A0);
Here sigfunc is just an example of a sigmoid function, and A is the vector of fitting coefficients.
nlinfit, and especially gatool, are big hammers for this problem. A sigmoid is not a specific function. Most commonly it is taken to be the same as the logistic function (also often the most efficient to calculate):
y = 1./(1+exp(-x));
or a generalized logistic. But all manner of curves can have sigmoidal shapes; if you know that your data corresponds to one in particular, the fit can be improved and more efficient methods can be applied.

For example, the error function (erf) has a sigmoidal shape and shows up in the CDF of the normal distribution. If you know that your data is the result of a Gaussian process (i.e., the data is the CDF) and you have the Stats toolbox, you can use the normfit function, which is based on maximum likelihood estimation (MLE). If you end up needing to write a custom fitting function, say for performance reasons, I'd investigate MLE techniques for the particular form of sigmoid that you'd like to fit.
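If the logistic above is a reasonable model, one base-MATLAB sketch (no toolbox needed) is to fit a shifted and scaled logistic y = L./(1+exp(-k*(x-x0))) by least squares with fminsearch; the parameter names L, k and x0 are illustrative, not from the original post:

model = @(p, xx) p(1) ./ (1 + exp(-p(2) .* (xx - p(3))));
sse   = @(p) sum((y - model(p, x)).^2);    % sum of squared errors to minimise
p0    = [max(y), 1, median(x)];            % rough initial guess
pfit  = fminsearch(sse, p0);               % derivative-free minimisation
yfit  = model(pfit, x);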
I would suggest you use MATLAB's Global Optimization Toolbox, and in particular the Genetic Algorithm Solver, which you can use for this problem by optimizing (= finding the best fit for your data) the sigmoid function's parameters through a genetic algorithm. It has a GUI that is easy to use.
You can open the Genetic Algorithm Solver's GUI by running gatool.