I am trying to solve a "linearized" linear system of equations, which requires two parameters to be estimated by iteration because of the linearization. The actual problem is nonlinear, but it is linearized using a Fourier series method.
I have been solving the linear system with plain matrix operations and SVDs, which does not take much time, but these matrices depend on the two parameters that are to be solved iteratively. In the end I just need to make sure that one of the parameters I solve for iteratively matches the response I get from the system. This is the criterion to be minimized.
I have been using "fmincon" and "multi-start" to solve for the two parameters, and I get some results, but it is taking longer than I expected. There is a local-minima issue too, which is why I had to include "multi-start".
Does anyone have an idea whether another method would solve this problem more easily?
I really appreciate it.
A global optimization method that one may use is simulated annealing.
MATLAB has a relevant routine for this: simulannealbnd, in the Global Optimization Toolbox.
There is also free simulated annealing software that you may try.
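A minimal sketch of the MATLAB route, assuming the Global Optimization Toolbox is available; the objective here is a toy stand-in for the real error function, and the bounds are made up:

err = @(p) sin(5*p(1))*cos(5*p(2)) + norm(p - [0.3, -0.1])^2;  % toy error with many local minima
lb = [-1, -1];  ub = [1, 1];                                   % made-up parameter bounds
p0 = [0, 0];                                                   % starting point
pBest = simulannealbnd(err, p0, lb, ub)                        % annealed estimate of the minimizer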
I got an improvement in my problem. I already posted it in the comments, but I think it is worth putting here, since what I did turned up something unexpected:
So I ran a Monte Carlo simulation over the two variables to be solved iteratively and plotted how the error changes with respect to the input variables. I realized that there are tons of local minima in the error of the response, and that is why fmincon could not solve the problem by itself: it was quickly falling into one of those local-minima holes, and I needed a very refined multi-start for fmincon to reach the global minimum. This is a very interesting observation, because I was not expecting such a rough error distribution with respect to the two parameters.
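For anyone curious, here is a rough sketch of that kind of two-parameter scan; err is a hypothetical stand-in for my response-matching error:

err = @(p1, p2) sin(5*p1).*cos(5*p2) + (p1 - 0.3).^2 + (p2 + 0.1).^2;  % toy surface with many local minima
[P1, P2] = meshgrid(linspace(-1, 1, 200));
E = err(P1, P2);                          % error at every grid point
surf(P1, P2, E, 'EdgeColor', 'none')      % visualize how rough the landscape is
[~, i] = min(E(:));                       % coarse global estimate...
p0 = [P1(i), P2(i)]                       % ...to use as a refined start for fmincon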
Is there any efficient solver/optimizer in MATLAB that you know of to get the global minimum in cases where there are many local minima? Or any other method?
Thanks,
I'm trying to use a naive Bayes classifier to classify my dataset. My questions are:
1- Usually when we try to calculate the likelihood we use the formula:
P(c|x) = P(c|x1) * P(c|x2) * ... * P(c|xn) * P(c). But some examples say that, in order to avoid getting very small results, we should use P(c|x) = exp(log(c|x1) + log(c|x2) + ... + log(c|xn) + log P(c)). Can anyone explain the difference between these two formulas to me? Are they both used to calculate the "likelihood", or is the second one used to calculate something called "information gain"?
2- In some cases, when we try to classify our datasets, some of the joint probabilities are zero. Some people use the "Laplace smoothing" technique to avoid these zero joints. Doesn't this technique influence the accuracy of our classification?
Thanks in advance for all your time. I'm just new to this algorithm and trying to learn more about it. Are there any recommended papers I should read? Thanks a lot.
I'll take a stab at your first question, assuming you lost most of the P's in your second equation. I think the equation you are ultimately driving towards is:
log P(c|x) = log P(c|x1) + log P(c|x2) + ... + log P(c)
If so, the examples are pointing out that in many statistical calculations, it's often easier to work with the logarithm of a distribution function, as opposed to the distribution function itself.
Practically speaking, it's related to the fact that many statistical distributions involve an exponential function. For example, you can find where the maximum of a Gaussian distribution K*exp(-s_0*(x-x_0)^2) occurs by solving the mathematically less complex problem (if we're going through the whole formal process of taking derivatives and finding equation roots) of finding where the maximum of its logarithm, log(K) - s_0*(x-x_0)^2, occurs.
This leads to many places where "take the logarithm of both sides" is a standard step in an optimization calculation.
Also, computationally, when you are optimizing likelihood functions that may involve many multiplicative terms, adding logarithms of small floating-point numbers is less likely to cause numerical problems than multiplying small floating point numbers together is.
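A quick MATLAB illustration of that last point (the probabilities here are arbitrary):

p = 1e-5 * ones(1, 100);   % 100 small conditional probabilities
direct = prod(p)           % underflows to exactly 0 in double precision
logsum = sum(log(p))       % about -1151.3, perfectly representable
% Compare classes in log space; exponentiating logsum would underflow too.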
I have a curve which looks roughly/qualitatively like the curves displayed in those 3 images.
The only thing I know is that the first part of the curve is, by the hardware's design, supposed to be linear, and the second part is some sort of logarithmic curve (it might be a combination of two logarithmic curves), i.e. a linlog camera. But I couldn't tell the mathematical structure of the equation, e.g. whether it looks like a*log(b)+c or a*(log(c+b))^2, etc. Is there a way to best-fit/find a good regression for this type of curve, and is there a certain way to do this specifically in MATLAB? :-) I've got the student version, i.e. all toolboxes etc.
fminsearch is a very general way to find best-fit parameters once you have decided on a parametric equation, and the Optimization Toolbox has a range of more sophisticated methods.
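As a minimal sketch, assuming a hypothetical candidate form a*log(b*x+1)+c and synthetic stand-in data:

xdata = linspace(0, 10, 200)';                                 % stand-in for the measured curve
ydata = 2*log(1.5*xdata + 1) + 0.5 + 0.05*randn(size(xdata));
model = @(p, x) p(1)*log(abs(p(2))*x + 1) + p(3);              % candidate equation; abs keeps the log argument positive during the search
sse   = @(p) sum((ydata - model(p, xdata)).^2);                % sum-squared-error cost
pBest = fminsearch(sse, [1, 1, 0])                             % best-fit parameters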
Comparing the merits of one parametric equation against another, however, is a deep topic. The main thing to be aware of is that you can always tweak the equation, adding another term or parameter or whatever, and get a better fit in terms of lower sum-squared-error or whatever other goodness-of-fit metric you decide is appropriate. That doesn't mean it's a good thing to keep adding parameters: your solution might be becoming overly complex. In the end the most reliable way to compare how well two different parametric models are doing is to cross-validate: optimize the parameters on a subset of the data, and evaluate only on data that the optimization procedure has not yet seen.
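A minimal holdout sketch along those lines (the data and the two model forms are illustrative placeholders):

xdata = linspace(0, 10, 200)';
ydata = 2*log(1.5*xdata + 1) + 0.5 + 0.05*randn(size(xdata));
tr = 1:2:numel(xdata);  te = 2:2:numel(xdata);                  % fit on half the data, score on the rest
m1 = @(p, x) p(1)*log(abs(p(2))*x + 1) + p(3);                  % simpler model
m2 = @(p, x) p(1)*log(abs(p(2))*x + 1) + p(3)*x + p(4);         % extra linear term
fit = @(m, p0) fminsearch(@(p) sum((ydata(tr) - m(p, xdata(tr))).^2), p0);
p1 = fit(m1, [1, 1, 0]);  p2 = fit(m2, [1, 1, 0, 0]);
err1 = sum((ydata(te) - m1(p1, xdata(te))).^2)                  % held-out error, model 1
err2 = sum((ydata(te) - m2(p2, xdata(te))).^2)                  % held-out error, model 2: lower wins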
You can try the "function finder" on my curve fitting web site zunzun.com and see what it comes up with - it is free. If you have any trouble please email me directly and I'll do my best to help.
James Phillips
zunzun#zunzun.com
I am trying to solve a system of nonlinear equations using fsolve; let's say
F(x;lambda) = 0, where lambda is a vector of parameters, and x the vector I want to solve for.
I am using MATLAB's fsolve.
I have two values of the parameter lambda that I want to solve the system for. For one value of lambda I get a solution, which seems all right.
For the other value of lambda I get a solution again (MATLAB exits with a flag of 1). However, I know this is not an actual solution: for example, I know that some of the dimensions of x have to be equal to each other, and this is not the case in the solution I get from fsolve.
I have tried both the trust-region and the Levenberg-Marquardt algorithms, and I am not getting any better results. (Explicitly enforcing those x's to be the same still seems to give solutions that are not consistent with what I would expect from the properties of the system.)
My question is: do the algorithms used by fsolve depend on any kind of stability of the system? Could it be that by changing the parameter lambda in the second case I make the system unstable, and could that make it hard for fsolve to solve it correctly?
Thank you, George
fsolve isn't "failing" - as commented by jucestain, it's giving you a local minimum, which is not necessarily a global minimum. This is what it's designed to do.
To improve your chances of obtaining a global minimum, you can do one or more of the following:
Know that your initial guess is good
Run the optimisation several times with a grid of initial guesses, and pick the best result (see the sketch after this list)
Add constraints to prevent the solver straying into areas you know to have local minima
Modify your cost function to remove local minima
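For the second option, a minimal sketch with fsolve; the toy system here is only a stand-in for F(x; lambda):

F = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];        % toy system standing in for F(x; lambda)
opts = optimoptions('fsolve', 'Display', 'off');
best.x = [];  best.resnorm = inf;
for a = linspace(-2, 2, 5)
    for b = linspace(-2, 2, 5)
        [x, fval, flag] = fsolve(F, [a; b], opts);  % one solve per grid point
        if flag > 0 && norm(fval) < best.resnorm
            best.x = x;  best.resnorm = norm(fval); % keep the best root so far
        end
    end
end
best.x                                              % best solution found over the grid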
If you ever come across a non-linear solver that can guarantee a global minimum, do let us know!
I have a system of (first order) ODEs with fairly expensive to compute derivatives.
However, the derivatives can be computed considerably cheaper to within given error bounds, either because the derivatives are computed from a convergent series and bounds can be placed on the maximum contribution from dropped terms, or through use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers that can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE, and the GNU Scientific Library's solver.)
That is, for all the solvers I've seen, you provide a derivs callback function accepting a t and an array of x, and returning an array of dx/dt; but ideally I'm looking for one which gives the callback t, the xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations possible.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person wanting something like this) would be greatly appreciated.
Roughly speaking, if you know f' up to absolute error eps, and integrate from x0 to x1, the error of the integral coming from the error in the derivative is going to be <= eps*(x1 - x0). There is also discretization error, coming from your ODE solver. Consider how big eps*(x1 - x0) can be for you and feed the ODE solver with f' values computed with error <= eps.
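A quick numeric sanity check of that bound in MATLAB; the derivative and its perturbation are made up for illustration:

eps0 = 1e-3;
f_true = @(t, x) cos(t);                % exact derivative
f_pert = @(t, x) cos(t) + eps0;         % derivative known only to within eps0
[~, xa] = ode45(f_true, [0, 2], 0);
[~, xb] = ode45(f_pert, [0, 2], 0);
abs(xb(end) - xa(end))                  % about eps0*(x1 - x0) = 2e-3, as the bound predicts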
I'm not sure this is a well-posed question.
In many algorithms, e.g., nonlinear equation solving f(x) = 0, an estimate of a derivative f'(x) is all that's required for use in something like Newton's method, since you only need to go in the "general direction" of the answer.
However, in this case, the derivative is a primary part of the (ODE) equation you're solving - get the derivative wrong, and you'll just get the wrong answer; it's like trying to solve f(x) = 0 with only an approximation for f(x).
As another answer has suggested, if you set up your ODE as f(x) + g(x), where g(x) is an error term, you should be able to relate errors in your derivatives to errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably the key. My derivs function basically returns intervals, and an integrator using interval arithmetic would maintain the xs as intervals. All I'm interested in is obtaining a sufficiently small error bound on the xs at a final t. An obvious approach would be to iteratively re-integrate, improving the quality of the sample introducing the most error each iteration, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" with regard to overall efficiency). I suspect adaptive step-size control could fit nicely into such a scheme, with the step size chosen to keep the "implicit" discretization error comparable with the "explicit" error (i.e. the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant stuff (VNODE and its references in particular).
If you have a stiff system, you will be using some form of implicit method, in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you the strict quadratic convergence of the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large), you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
Have you looked into using odeset? It allows you to set options for an ODE solver, then you pass the options structure as the fourth argument to whichever solver you call. The error control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I could come up with, having last used the MATLAB ODE functions years ago.
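For example, a minimal sketch (the ODE and the tolerance values are just illustrative):

opts = odeset('RelTol', 1e-6, 'AbsTol', 1e-8);   % error-control options
f = @(t, x) -x;                                  % stand-in for the expensive derivative
[t, x] = ode45(f, [0, 10], 1, opts);             % options passed as the fourth argument
plot(t, x)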
In addition: For the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need error limits to be passed from the solver?
Not sure I'm contributing much, but in the pharma modeling world we use LSODE, DVERK, and DGPADM. DVERK is a nice, fast, simple order-5/6 Runge-Kutta solver. DGPADM is a good matrix-exponential solver. If your ODEs are linear, the matrix exponential is best by far. But your problem is a little different.
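To illustrate the matrix-exponential idea in MATLAB terms (the linear system below is made up, with MATLAB's expm standing in for what DGPADM does):

A  = [0, 1; -4, -0.5];     % made-up linear system x' = A*x
x0 = [1; 0];
t  = 2.5;
x  = expm(A*t) * x0        % exact solution x(t) = expm(A*t)*x0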
BTW, the T argument is only in there for generality. I've never seen an actual system that depended on T.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, seems to me I heard of special methods used for that, based on conic-section curves.
Check into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions, with two evaluations of each of the above functions per element.
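A minimal 1-D sketch of the linear-basis/midpoint-quadrature version in MATLAB; the coefficients k, b, and f and the homogeneous Dirichlet boundary conditions are placeholders:

k = @(x) 1 + 0*x;  b = @(x) 0*x;  f = @(x) 1 + 0*x;   % placeholder coefficient functions
n = 50;  h = 1/n;  x = (0:n)'*h;                      % n linear elements on [0,1]
A = zeros(n+1);  rhs = zeros(n+1, 1);
for e = 1:n
    xm = (x(e) + x(e+1))/2;                 % midpoint: one evaluation each of k, b, f
    Ke = k(xm)/h * [1, -1; -1, 1] ...       % diffusion term (u' is constant per element)
       + b(xm)/2 * [-1, 1; -1, 1];          % advection term, midpoint rule
    Fe = f(xm)*h/2 * [1; 1];                % load term, midpoint rule
    idx = [e, e+1];
    A(idx, idx) = A(idx, idx) + Ke;         % assemble element contributions
    rhs(idx) = rhs(idx) + Fe;
end
A(1, :) = 0;    A(1, 1) = 1;        rhs(1) = 0;     % u(0) = 0
A(end, :) = 0;  A(end, end) = 1;    rhs(end) = 0;   % u(1) = 0
u = A \ rhs;                                        % nodal solution
plot(x, u)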