I regularly have upwards of 10^8 sets of linear least squares to solve (heaps of Ax = B) coming out of Monte Carlo simulations. Until now I have been using a simple loop, but obviously this is slow. Is there a way of vectorizing the process, so that I can send the whole lot at once to the function and it returns 10^8 solutions to the 10^8 sets? I have looked extensively online. All I can find is some Python code that does what I want (stacked_lstsq), but I don't know how to translate it into MATLAB. Any help appreciated.
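One possibility, on recent MATLAB releases, is to stack the small systems into 3-D arrays and solve them page-wise via the normal equations; a minimal sketch, assuming A is m-by-n-by-K, B is m-by-1-by-K, and that pagemtimes (R2020b+) and pagemldivide (R2022a+) are available:

% Sketch: batch-solve K small least-squares problems via the normal equations.
AtA = pagemtimes(A, 'transpose', A, 'none');   % n-by-n-by-K
AtB = pagemtimes(A, 'transpose', B, 'none');   % n-by-1-by-K
X   = pagemldivide(AtA, AtB);                  % n-by-1-by-K solutions

Note that the normal equations square the condition number, so for ill-conditioned A a per-page QR or SVD is the safer route.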
I am facing the following problem: I have a system of 160000 linear equations with 160000 variables. I am going to write two programs, one using the conjugate gradient method and one using the steepest descent method, to solve it. The matrix is block tridiagonal with only 5 nonzero diagonals, so it's not necessary to create and store the matrix. But I am running into the following problem: when I get to the iteration step, there are dot products of vectors involved. I have tried the usual commands, dot(u,v) and u'*v, but when I run the program, MATLAB tells me the data size is too large for memory.
To resolve this problem, I tried to decompose the huge vector into sparse vectors with small support, then calculate the dot products of the small vectors and finally glue them together. But this method seems more complicated and not very efficient, and it is easy (especially for beginners like me) to make mistakes. I wonder if there are more efficient ways to deal with this problem. Thanks in advance.
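As a side note, with only five nonzero diagonals the matrix is tiny in sparse storage (roughly 5n doubles for n = 160000), so one option is to build it with spdiags and hand it to MATLAB's built-in pcg; a minimal sketch, assuming the diagonals are stored as the columns of a hypothetical matrix D at offsets -k, -1, 0, 1, k and that the system is symmetric positive definite:

% Sketch: sparse storage plus the built-in conjugate gradient solver.
n = 160000;
A = spdiags(D, [-k -1 0 1 k], n, n);   % only ~5n nonzeros are stored
x = pcg(A, b, 1e-8, 2000);             % tolerance and maximum iterations

pcg also accepts a function handle in place of A, so you can stay completely matrix-free if you prefer to write the matrix-vector product yourself.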
I have been looking for a MATLAB function that can do a nonlinear total least squares fit, basically fit a custom function to data which has errors in all dimensions. The easiest case is x and y data points with different given standard deviations in x and y for every single point. This is a very common scenario in all natural sciences, and just because most people only know how to do a least squares fit with errors in y does not mean it wouldn't be extremely useful. I know the problem is far more complicated than a simple y-error; this is probably why most people (physicists like myself included) never learned how to do this properly with multidimensional errors.
I would expect that software like MATLAB could do it, but unless I'm bad at reading the otherwise mostly useful help pages, I think even a 'full' MATLAB license doesn't provide such fitting functionality. Other tools like Origin, Igor and SciPy use the freely available Fortran package "ODRPACK95", for instance. There are a few contributions about total least squares or Deming fits on the File Exchange, but they're for linear fits only, which is of little use to me.
I'd be happy for any hint that can help me out
kind regards
First I should point out that I haven't practiced MATLAB much since I graduated last year (also as a Physicist). That being said, I remember using
lsqcurvefit()
in MATLAB to perform non-linear curve fits. Now, this may or may not work, depending on what you mean by a custom function. I'm assuming you want to fit some known expression similar to one of these,
y = A*sin(x)+B
y = A*e^(B*x) + C
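For example, a fit to the first of those forms with lsqcurvefit might look like the following minimal sketch (xdata, ydata and the parameter ordering are placeholders):

% Sketch: fit y = A*sin(x) + B with lsqcurvefit (Optimization Toolbox).
model = @(p, x) p(1) * sin(x) + p(2);          % p(1) = A, p(2) = B
p0    = [1, 0];                                % initial guess for [A, B]
pFit  = lsqcurvefit(model, p0, xdata, ydata);  % least-squares estimates of A and B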
It is extremely difficult to perform a fit without knowing the form, e.g. as above. Ultimately, all mathematical functions can be approximated by polynomials on small enough intervals. This is something you might want to consider, as MATLAB has lots of tools for doing polynomial regression.
In the end, I would actually recommend you write your own fit function. There are tons of examples for this online. The idea is to know the true solution's form, as above, and guess the parameters A, B, C, ... Create an error (or cost) function, which produces a quantitative error (deviation) between your data and the guessed solution. The problem is then reduced to minimizing that error, for which MATLAB has lots of built-in functionality; a sketch follows below.
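To connect this back to the original question about errors in both x and y, here is a minimal sketch of such a hand-rolled cost function using the effective-variance approximation (the x errors are propagated through the model slope); x, y, sx, sy are hypothetical data vectors and the model is y = A*sin(x) + B. This is only an approximation to a full orthogonal distance regression, but it shows the pattern of building the cost yourself and handing it to a minimizer:

% Sketch: weighted misfit with both x and y uncertainties, minimized with fminsearch.
chi2 = @(p) sum( (y - (p(1)*sin(x) + p(2))).^2 ./ ...
                 (sy.^2 + (p(1)*cos(x)).^2 .* sx.^2) );   % effective variance
pFit = fminsearch(chi2, [1, 0]);                          % starting guess for [A, B]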
As part of my tutorial, I would like to show the plots of a) sum 1/n z^n, b) Log(z) = log|z| + i Arg(z), to drive home the point of analytic continuation.
Plotting the second is easy, but not the first one. Any suggestions for MATLAB, Mathematica or C++?
So far I've been trying the symbolic sum function, but it was too heavy for my machine. I might leave it to run for a few hours.
Any quicker ways? Here is a graph of a natural boundary image I found online to get an idea (not enough rep to post).
https://upload.wikimedia.org/wikipedia/en/1/1b/Natural_boundary_example.gif
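One quicker route in MATLAB is to evaluate the truncated series numerically on a grid rather than symbolically; a minimal sketch (the grid size, the number of terms N, and the choice of plotting angle(S) are all arbitrary). Inside the unit disk the partial sums converge to -log(1 - z), which can then be compared against a plot of log|z| + i Arg(z):

% Sketch: evaluate S_N(z) = sum_{n=1}^{N} z^n/n on a grid and plot it.
pts      = linspace(-1.2, 1.2, 600);
[xg, yg] = meshgrid(pts, pts);
z  = xg + 1i*yg;
N  = 200;                        % number of terms in the partial sum
S  = zeros(size(z));
zn = ones(size(z));
for n = 1:N
    zn = zn .* z;                % z^n, built up incrementally
    S  = S + zn / n;
end
S(abs(z) > 1) = NaN;             % the series diverges outside the unit disk
imagesc(pts, pts, angle(S)); axis xy equal tight; colorbar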
I've got a problem with an equation that I'm trying to solve numerically using both MATLAB and the Symbolic Toolbox. I've been through several pages of MATLAB help, picked up a few tricks and tried most of them, still without a satisfying result.
My goal is to solve set of three non-polynomial equations with q1, q2 and q3 angles. Those variables represent joint angles in my industrial manipulator and what I'm trying to achieve is to solve inverse kinematics of this model. My set of equations looks like this: http://imgur.com/bU6XjNP
I'm solving it with
numeric::solve([z1,z2,z3], [q1=x1..x2,q2=x3..x4,q3=x5..x6], MultiSolutions)
I change the xn constants according to my needs. Yet I still get some odd results: the q1 variable is off by approximately 0.1 rad, and q2 and q3 are off by ~0.01 rad. I don't have much experience with numeric solve, so I just need to know: is it supposed to look like that?
And, if not, what would you suggest I try next? Maybe transforming the equations to polynomial form, maybe using a different toolbox?
Or, if trying to do this in Matlab, how can you limit your solutions when using solve()? I'm thinking of an equivalent to Symbolic Toolbox's assume() and assumeAlso.
I would be grateful for your help.
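As an aside, within the MATLAB Symbolic Math Toolbox itself one way to restrict the numeric search region is vpasolve with per-variable search ranges; a minimal sketch, assuming z1, z2, z3 are the symbolic expressions to be zeroed and q1, q2, q3 the symbolic unknowns:

% Sketch: vpasolve with one search range per variable, playing the role
% of the q1 = x1..x2 bounds passed to numeric::solve above.
S = vpasolve([z1 == 0, z2 == 0, z3 == 0], [q1, q2, q3], ...
             [x1 x2; x3 x4; x5 x6]);
[S.q1, S.q2, S.q3]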
The numerical solution of a system of nonlinear equations is generally obtained by an iterative minimization process: finding the global minimum of the norm of the difference between the left- and right-hand sides of the equations. For example, fsolve essentially uses Newton iterations. Those methods perform a "deterministic" optimization: they start from an initial guess and then move through the space of unknowns, essentially following the opposite of the gradient, until the solution is found.
You then have two kinds of issues:
Local minima: the stopping rule of the iteration is related to the gradient of the functional. When the gradient becomes small, the iterations are stopped. But the gradient can become small at local minima, besides the desired global one. When the initial guess is far from the actual solution, you get stuck at a false solution.
Ill-conditioning: large variations of the unknowns can be reflected into large variations of the data. So, small numerical errors on data (for example, machine rounding) can lead to large variations of the unknowns.
Due to the above problems, the solution found by your numerical algorithm is likely to differ (even significantly) from the actual one.
I recommend that you run a consistency test: choose a starting guess, for example when using fsolve, very close to the actual solution and verify that your final result is accurate. You will then discover that, as you move the initial guess farther away from the actual solution, your result is likely to show some (possibly large) errors. Of course, the magnitude of the errors depends on the nature of the system of equations; in some lucky cases they may remain very small. A sketch of such a test is below.
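A minimal sketch of that consistency test, assuming the three equations have been collected into a hypothetical function handle F that returns the residual vector [z1; z2; z3] for a given [q1; q2; q3]:

% Sketch: compare fsolve results from a near and a far starting guess.
q_true = [0.3; -1.2; 0.8];                       % known solution used to build the test
q_near = fsolve(F, q_true + 0.01*randn(3,1));    % start very close: should recover q_true
q_far  = fsolve(F, q_true + 1.0*randn(3,1));     % start far away: may hit a local minimum
disp([q_true, q_near, q_far])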
I'm trying my best to work it out with fmincon in MATLAB. When I call the function, I get one of the two following errors:
Number of function evaluations exceeded, or
Number of iterations exceeded.
And when I look at the solution so far, it is way off the one intended (I know so because I created a minimum vector).
Now even if I increase any of the tolerances or the maximum number of iterations, I still get the same problem.
Any help is appreciated.
First, if your problem can actually be cast as linear or quadratic programming, do that first.
Otherwise, have you tried seeding it with different starting values x0? If it's starting in a bad place, it may be much harder to get to the optimum.
If it's possible for you to provide the gradient of the function, that can help the optimizer tremendously (though obviously only if you can find it some way other than numerical differentiation). Similarly, if you can provide the (full or sparse) Hessian relatively cheaply, you're golden.
You can also try using a different algorithm in the solver.
Basically, fmincon by default has almost no info about the function it's trying to optimize, and providing more can be extremely helpful. If you can tell us more about the objective function, we might be able to give more tips.
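To make the gradient tip above concrete, here is a minimal sketch using documented fmincon options; the quadratic objective and all the constraint matrices are placeholders for your actual problem:

% Sketch: return the gradient from the objective and tell fmincon to use it.
function [f, g] = myObjective(x, Q, c)
    f = 0.5 * x' * Q * x + c' * x;   % objective value (placeholder form)
    g = Q * x + c;                   % its gradient
end

opts = optimoptions('fmincon', 'SpecifyObjectiveGradient', true, ...
                    'MaxIterations', 5000, 'MaxFunctionEvaluations', 1e5);
xOpt = fmincon(@(x) myObjective(x, Q, c), x0, A, b, Aeq, beq, lb, ub, [], opts);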
The L1 norm is not differentiable. That can make it difficult for the algorithm to converge to a point where one of the residuals is zero, and I suspect this is why the iteration limits are being exceeded. If your original problem is
min norm(residual(x),1)
s.t. Aeq*x=beq
you can reformulate the problem differentiably, as follows
min sum(b)
s.t. -b(i)<=residual(x,i)<=b(i)
Aeq*x=beq
where residual(x,i) is the i-th residual, x is the original vector of unknowns, and b is a further unknown vector of bounds that you add to the problem.
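A minimal sketch of setting this up with fmincon, where residualFun, Aeq, beq, x0, n (length of x) and m (number of residuals) are placeholders for your actual problem; if residual(x) happens to be linear in x, the whole thing collapses to a linear program and linprog is the better tool:

% Sketch: stack z = [x; b] and solve the reformulated, differentiable problem.
obj  = @(z) sum(z(n+1:end));                          % minimize sum(b)
rfun = @(z) residualFun(z(1:n));                      % m-by-1 residual vector
cons = @(z) deal([ rfun(z) - z(n+1:end); ...          %  r(x) - b <= 0
                  -rfun(z) - z(n+1:end)], []);        % -r(x) - b <= 0, no equalities
AeqZ = [Aeq, zeros(size(Aeq,1), m)];                  % Aeq*x = beq, b not involved
z0   = [x0; abs(residualFun(x0))];                    % a feasible-ish starting point
z    = fmincon(obj, z0, [], [], AeqZ, beq, [], [], cons);
x    = z(1:n);                                        % recover the original unknowns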