Does anybody know of an implementation of the Iterative Closest Point (ICP) algorithm in MATLAB that computes the covariance matrix?
All I have found is the icptoolboxformatlab, but it seems to be offline.
http://censi.mit.edu/research/robot-perception/icpcov/
There is MATLAB code for the 2D case. It provides the ICP covariance too.
I have implemented the 3D variant of point-to-point error metric based ICP covariance estimation, inspired by Andrea Censi's work.
Have a look at
https://sites.google.com/site/icpcovariance/home
Which numerical algorithm does MATLAB use to solve a set of linear equations when we write x = A\B? For example, Gauss-Jordan elimination, LU decomposition, etc.?
Thank you
The best one!¹
The flow chart in the official documentation shows how the algorithm is chosen for full matrices; the flow chart for sparse matrices is a bit larger.
¹ Hopefully this will result in the best algorithm.
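If you want to see which branch of that flow chart a given matrix ends up in, one hedged way (assuming R2017b or later, where the decomposition object is available) is to ask for the automatically selected factorization:

    % Sketch: inspect which factorization MATLAB selects (R2017b+ only;
    % the matrices below are just illustrative).
    A = randn(5) + 5*eye(5);        % general square matrix
    dA = decomposition(A);
    disp(dA.Type)                   % typically 'lu'

    S = A'*A;                       % symmetric positive definite
    dS = decomposition(S);
    disp(dS.Type)                   % typically 'chol'

    R = randn(8, 5);                % rectangular (over-determined)
    dR = decomposition(R);
    disp(dR.Type)                   % typically 'qr'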
I am doing a comparison of some alternate linear regression techniques.
Clearly these will be benchmarked relative to OLS (Ordinary Least Squares).
But I just want a pure OLS method, no preconditioning of the data to uncover ill-conditioning in the data as you find when you use regress().
I had hoped to simply use the classic (X'X)^(-1) X'y expression. However, this would necessitate using the inv() function, and the MATLAB documentation page for inv() recommends using mldivide for least squares estimation instead, as it is superior in terms of execution time and numerical accuracy.
However, I'm concerned as to whether it's okay to use mldivide to find the OLS estimates. Since it is an operator, I can't see what it is doing by stepping into it in the debugger.
Can I assume that mldivide will produce the same answers as theoretical OLS under all conditions (including in the presence of singular/ill-conditioned matrices)?
If not what is the best way to compute pure OLS answers in MATLAB without any preconditioning of the data?
The short answer is:
When the system A*x = b is overdetermined, both algorithms provide the same answer. When the system is underdetermined, PINV will return the solution x that has the minimum norm (min NORM(x)), whereas MLDIVIDE will pick the solution with the fewest non-zero elements.
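A small sketch of that difference on an underdetermined system (the numbers are just illustrative):

    % Underdetermined system: 2 equations, 4 unknowns.
    A = [1 2 3 4; 2 4 6 9];
    b = [1; 2];

    x1 = A\b;          % basic solution: at most rank(A) non-zero entries
    x2 = pinv(A)*b;    % minimum-norm solution

    disp([x1 x2])                        % two different solutions
    disp([norm(x1) norm(x2)])            % the pinv result has the smaller norm
    disp([norm(A*x1-b) norm(A*x2-b)])    % both residuals are on the order of roundoff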
As for how mldivide works, MathWorks also posted a description of how the function operates.
However, you might also want to have a look at this answer for the first part of the discussion about mldivide vs. other methods when the matrix A is square.
Depending on the shape and composition of the matrix, you would use either a Cholesky decomposition for symmetric positive definite matrices, an LU decomposition for other square matrices, or a QR decomposition otherwise. Then you can hold onto the factorization and use linsolve to essentially just do the back-substitution for you.
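For instance, a rough sketch of the factor-once / solve-many pattern for a symmetric positive definite matrix (the matrix and right-hand sides are placeholders):

    % Factor once with Cholesky, then reuse the factor through linsolve.
    A = gallery('lehmer', 5);     % symmetric positive definite test matrix
    B = rand(5, 3);               % several right-hand sides

    R = chol(A);                  % A = R'*R, with R upper triangular

    optsT = struct('UT', true, 'TRANSA', true);   % solve R'*y = b (forward substitution)
    opts  = struct('UT', true);                   % solve R*x  = y (back substitution)

    Y = linsolve(R, B, optsT);
    X = linsolve(R, Y, opts);

    disp(norm(A*X - B))           % residual should be near machine precision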
As to whether mldivide is preferable to pinv when A is either not square or is square but singular, the two options will give you two of the infinitely many solutions. According to those docs, both will give you exact solutions:
Both of these are exact solutions in the sense that norm(A*x-b) and norm(A*y-b) are on the order of roundoff error.
According to the help page, pinv gives a least-squares solution to a system of equations, so to solve the system Ax = b, just do x = pinv(A)*b.
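To tie this back to the original OLS question, here is a hedged comparison of the textbook normal-equations route with mldivide and pinv on synthetic, well-conditioned data (all values are illustrative):

    % Three ways to compute the OLS estimate. They agree when X'*X is well
    % conditioned; the backslash (QR) and pinv (SVD) routes are the safer
    % choices for ill-conditioned designs.
    n = 100;
    X = [ones(n,1) randn(n,2)];        % design matrix with an intercept
    beta_true = [1; 2; -3];
    y = X*beta_true + 0.01*randn(n,1);

    b_normal = inv(X'*X)*X'*y;         % classic normal equations (avoid in practice)
    b_mldiv  = X\y;                    % least squares via mldivide
    b_pinv   = pinv(X)*y;              % pseudoinverse solution

    disp([b_normal b_mldiv b_pinv])    % columns should be nearly identical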
What is the k-nearest-neighbour regression function in MATLAB? Is only a k-NN classification function available? Does anybody know of any useful literature regarding this?
Regards
Farideh
I don't believe the k-NN regression algorithm is directly implemented in MATLAB, but if you do some googling you can find some valid implementations. The algorithm is fairly simple, though (a minimal sketch follows the list below):
Find the k nearest elements using whatever distance metric is suitable.
Compute the inverse-distance weight of each of the k elements.
Compute the weighted mean of the k elements using the inverse-distance weights.
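For instance, with a Euclidean distance and inverse-distance weighting (the function and variable names are made up for illustration; implicit expansion needs R2016b+):

    % knn_regress: inverse-distance-weighted k-NN regression.
    % Xtrain: n-by-d training inputs, ytrain: n-by-1 targets,
    % xquery: 1-by-d query point, k: number of neighbours.
    function yhat = knn_regress(Xtrain, ytrain, xquery, k)
        d = sqrt(sum((Xtrain - xquery).^2, 2));  % Euclidean distance to every training point
        [dsort, order] = sort(d);                % sort by distance
        idx = order(1:k);                        % indices of the k nearest elements
        w = 1 ./ max(dsort(1:k), eps);           % inverse-distance weights (guard against zero)
        yhat = sum(w .* ytrain(idx)) / sum(w);   % weighted mean of the neighbour targets
    end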
I'm wondering if someone is familiar with the nlmefitsa algorithm in MATLAB. This algorithm gives me as a result the fixed-effects parameters (beta) for a mixed-effects model, as well as their covariance matrix (psi), but it also gives me, in the "stats" struct, a covariance matrix called "covb". I know this one has to do with the standard errors and is important for calculating confidence intervals, but to be honest I don't know what exactly the difference is between this "covb" and the "psi" covariance matrix. And how could I use "covb" when simulating new data?
I have a problem where I am fitting a high-order polynomial to (not very) noisy data using linear least squares. Currently I'm using polynomial orders around 15-25, which work surprisingly well: the dependence is very nearly linear, but the accuracy of modelling the 'very nearly' is critical. I'm using MATLAB's polyfit() function and (obviously) normalising the x-data. This generally works fine, but I have come across an issue with some recent datasets: the fitted polynomial has extrema within the x-data interval. For the application I'm working on this is a no-no. The polynomial model must have no stationary points over the x-interval.
So I need to add a constraint to the least-squares problem: the derivative of the fitted polynomial must be strictly positive over a known x-range (or strictly negative - this depends on the data but a simple linear fit will quickly tell me which it is.) I have had a quick look at the available optimisation toolbox functions, but I admit I'm at a loss to know how to go about this. Does anyone have any suggestions?
[I appreciate there are probably better models than polynomials for this data, but in the short term it isn't feasible to change the form of the model]
[A closing note: I have finally got the go-ahead to replace this awful polynomial model! I am going to adopt a nonparametric approach, spline smoothing, using the excellent SPLINEFIT code by Jonas Lundgren. This has the advantage that I'm already using a spline model in the end-user application, so I already have C# code available to evaluate a spline model]
You could use cftool and its option to exclude data points.
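Alternatively, the monotonicity constraint described in the question can be imposed directly on the least-squares fit with lsqlin from the Optimization Toolbox. This is only a rough sketch: the data, degree, and tolerance are placeholders, and the derivative is forced positive on a finite grid rather than everywhere.

    % Fit a degree-n polynomial by constrained least squares, requiring the
    % derivative to be positive on a grid over the x-range.
    % Needs the Optimization Toolbox (lsqlin) and R2016b+ implicit expansion.
    x = linspace(-1, 1, 200)';          % already-normalised x-data (placeholder)
    y = x + 0.01*randn(size(x));        % (not very) noisy, nearly linear data
    n = 15;                             % polynomial degree

    V = x.^(n:-1:0);                    % Vandermonde matrix, highest power first
                                        % (same coefficient order as polyfit)

    xg  = linspace(min(x), max(x), 500)';        % grid for the derivative constraint
    D   = (n:-1:1) .* xg.^(n-1:-1:0);            % d/dx of the non-constant terms
    D   = [D zeros(numel(xg), 1)];               % the constant term has zero derivative
    tol = 1e-6;                                  % require p'(xg) >= tol, i.e. -D*p <= -tol

    p = lsqlin(V, y, -D, -tol*ones(numel(xg), 1));

    yfit = polyval(p, x);                        % evaluate just like a polyfit result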