How to let fminsearch only search over integers? - matlab

I'm using MATLAB's fminsearch function to minimize a function:
c = cvpartition(200,'KFold',10);
minfn = @(z)kfoldLoss(fitcsvm(cdata,grp,'CVPartition',c,...
'KernelFunction','rbf','BoxConstraint',exp(z(2)),...
'KernelScale',exp(z(1))));
opts = optimset('TolX',5e-4,'TolFun',5e-4);
[searchmin fval] = fminsearch(minfn,randn(2,1),opts)
The minimization is over two parameters.
Now I would like to minimize a third parameter, but this parameter can only take positive integer values, i.e. 1,2,3,...
How can I tell fminsearch to only consider positive integers?
Second, if my third parameter gets initialized to 10 but its actual best value is 100, does fminsearch converge quickly in such cases?

You can't tell fminsearch to consider only integers. The algorithm it uses (Nelder-Mead simplex search) is not suitable for discrete optimization, which in general is much harder than continuous optimization.
If there are only relatively few plausible values for your integer parameter(s), you could just loop over them all, but that might be too expensive. Or you could cook up your own 1-dimensional discrete optimization function and have it call fminsearch for each value of the integer parameter it tries. (E.g., you could imitate some standard 1-dimensional continuous optimization algorithm, and just return once you've found a parameter value that's, say, better than both its neighbours.) You may well be able to adapt this function to the particular problem you're trying to solve.
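The brute-force loop might look like this (the range 1:20 and the variable names are illustrative, and I'm assuming minfn has been extended so that z(3) is the integer parameter):
bestFval = Inf;
for k = 1:20                                  % plausible integer values
    [zk, fvalk] = fminsearch(@(z) minfn([z; k]), randn(2,1), opts);
    if fvalk < bestFval
        bestFval = fvalk;
        bestZ = [zk; k];                      % continuous params plus the integer
    end
end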

As @Gareth McCaughan said, you can't tell fminsearch to restrict the search space to integers. If you want to search for solvers that can handle this type of problem, search for "mixed integer programming." Mixed integer is for part-continuous, part-integer problems. And "programming" is jargon for optimization (a horribly confusing name, but like the QWERTY keyboard, we're stuck with it).
Be aware though that integer programming is in general NP-hard! Larger problems may be entirely intractable.

In the case I handled, I was looking for a vector index that satisfies a condition; the index is a positive integer.
My workaround for fminsearch was to interpolate the error function. Suppose fminsearch proposes 5.1267 as the new index. I then calculated the error function at indices 5 and 6 and returned an interpolation between them. This led to stable and satisfying results.
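In code, the idea looks roughly like this (err here is an illustrative discrete error function; substitute your own):
err = @(k) (k - 7).^2;                 % illustrative discrete error function
w   = @(idx) idx - floor(idx);         % fractional part of the proposed index
smoothErr = @(idx) (1 - w(idx)).*err(floor(idx)) + w(idx).*err(floor(idx) + 1);
idxStar = fminsearch(smoothErr, 5);    % fminsearch works on the interpolant
kStar   = round(idxStar)               % snap back to an integer index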

Related

Matlab Variable Precision

I am writing a Matlab script that solves a system of differential equations via the Runge-Kutta method. Due to the iterative nature of this approach, errors accumulate very quickly. Hence I'm interested in carrying an extremely exaggerated number of decimal points, say, 100.
I have identified the digits function, which allows me to define the variable precision accuracy. However, it seems that I have to specify the vpa function in every equation where I want this precision used. Is there a way to put a command in the script header and have the specified number of decimal places used in all calculations? Matlab help is unusually unclear about this.
There is no way to tell MATLAB to use vpa everywhere. Typically you don't specify it in every equation; instead, cast all inputs and constants to vpa and let the arithmetic propagate.
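A minimal sketch of that pattern, assuming the Symbolic Math Toolbox is available (100 digits; the arithmetic shown is illustrative):
digits(100);                     % default precision for vpa results
h = vpa('0.01');                 % cast constants to vpa once at the top...
y = vpa('1');
f = @(t, y) -y;                  % illustrative ODE right-hand side
y = y + h*f(vpa('0'), y)         % ...then all derived arithmetic stays at 100 digits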

Tolerances in Numerical quadrature - MATLAB

What is the difference between AbsTol and RelTol in MATLAB when performing numerical quadrature?
I have a triple integral that is supposed to generate a number between 0 and 1, and I am wondering what the best tolerances for my application would be.
Any other ideas on decreasing the execution time of integral3 are also welcome.
Also does anyone know whether integral3 or quadgk is faster?
When performing the integration, MATLAB (or most any other integration software) computes a low-order solution qLow and a high-order solution qHigh.
There are a number of different methods of computing the true error (i.e., how far either qLow or qHigh is from the actual solution qTrue), but MATLAB simply computes an absolute error as the difference between the high and low order integral solutions:
errAbs = abs(qLow - qHigh).
If the integral is truly a large value, that difference may be large in an absolute sense but not a relative sense. For example, errAbs might be 1E3 while qTrue is 1E12; in that case, the method could be said to have converged relatively, since at least 8 digits of accuracy have been reached.
So MATLAB also considers the relative error:
errRel = abs(qLow - qHigh)/abs(qHigh).
You'll notice I'm treating qHigh as qTrue since it is our best estimate.
Over a given sub-region, if the error estimate falls below either the absolute limit or the relative limit times the current integral estimate, the integral is considered converged. If not, the region is divided, and the calculation repeated.
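In code, the acceptance test on each sub-region amounts to something like this (all names illustrative):
qLow = 0.12345; qHigh = 0.12346;      % low- and high-order estimates on a sub-region
errAbs = abs(qLow - qHigh);
AbsTol = 1e-10; RelTol = 1e-6;
converged = (errAbs <= AbsTol) || (errAbs <= RelTol*abs(qHigh))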
For the integral function, and for integral2/integral3 with the 'iterated' method, the low/high solutions are a Gauss-Kronrod (7,15) pair (the same 7th/15th-order set used by quadgk).
For integral2/integral3 with the 'tiled' method, the low/high solutions are a Gauss-Kronrod (3,7) pair (I've never used this option, so I'm not sure how it compares to the others).
Since all of these methods come down to a Gauss-Kronrod quadrature rule, I'd say sticking with integral3 and letting it do the adaptive refinement as needed is the best course.
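For a probability-like integrand such as yours, setting the tolerances explicitly might look like this (the integrand is illustrative):
f = @(x,y,z) exp(-(x.^2 + y.^2 + z.^2)/2) / (2*pi)^1.5;   % 3-D standard normal PDF
q = integral3(f, -3,3, -3,3, -3,3, 'AbsTol',1e-9, 'RelTol',1e-6)   % close to 1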

User defined Jacobian pattern in MATLAB's lsqnonlin being ignored

I am using MATLAB's lsqnonlin function, and I am attempting to set a user-defined Jacobian pattern via the option JacobPattern. I set a preference for the trust-region-reflective algorithm, and the output from lsqnonlin indicates that this was indeed the algorithm used by the solver (it is required for the JacobPattern option).
The problem I am finding is that if my JacobPattern is too sparse (e.g. just a few rows of ones in a 500x500 Jacobian), it is being ignored by the solver and the full Jacobian is being computed instead.
This behaviour is not documented; can anyone shed any further light on it? I would like to be able to force the solver to use my JacobPattern no matter how absurdly sparse it is, or how shallow a gradient is found with it.
Update:
I have done some more experiments, and it appears the Jacobian is only recomputed if there are any all-zero rows in the Jacobian pattern. Any number of all-zero columns is fine, as long as there is at least one '1' in each row. Although this helps to avoid the problem, the question still remains: why does the solver require each dependent variable to have an associated gradient? In any case, I would expect the ignoring of a user-defined option to be at least worthy of a warning...
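For reference, a minimal setup that exhibits this looks roughly like the following (the residual function and the pattern are illustrative, not my actual problem):
n = 500;
fun = @(x) x - 1;                          % illustrative residual function
Jpat = sparse(n, n);
Jpat(1:10, :) = 1;                         % a few dense rows; the rest all zero
opts = optimoptions('lsqnonlin', ...
    'Algorithm', 'trust-region-reflective', ...
    'JacobPattern', Jpat);
x = lsqnonlin(fun, zeros(n,1), [], [], opts);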
My guess is the following:
If you take a look at what the Jacobian actually means, you'll see that an all-zero row means the corresponding function (one component of the vector function) is independent of every variable. It is thus completely pointless to include it in the optimization.
As for purposefully handing a wrong Jacobian to the algorithm: why would you want to do that?

Integration with matlab

I want to solve this problem:
(dead image link: http://img265.imageshack.us/img265/6598/greenshot20100727091025.png; judging from the symbolic answer below, it showed a five-fold integral of x+y+z+u+v over fixed limits)
I don't want to use int; I want to use the quad family (quad, dblquad, triplequad), but I can't. Can you help me?
I assume that your real problem is more complex than this trivial one. The best solution is just to use a symbolic integral. Why is numerical integration difficult?
Numerical integration in ONE dimension typically requires on the order of say 100 function evaluations. (The exact number will be very dependent on the accuracy required, the limits, etc.) This makes a 2-d integral typically require on the order of 100^2 = 10000 function evals. So an adaptive, 5-d integral will require on the order of 100^5 = 1e10 function evaluations. (This is only a very rough order of magnitude estimate here.) My point is, you simply don't want to do that!
Better is to reduce the problem in complexity. If your integral is separable (as is this one) then do so! Reduce a 5-d problem into multiple 1-d problems.
Also, in many cases I see people wanting to do a numerical integration of a Gaussian PDF. Note that this is easily solved using a call to erf or erfc, coupled with a transformation. The point is that in many cases special functions are defined precisely to reduce the complexity of a problem.
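For example, the integral of the standard normal PDF over [a, b] needs no quadrature at all:
a = -1; b = 2;
p = 0.5*(erf(b/sqrt(2)) - erf(a/sqrt(2)))
% sanity check: integral(@(x) exp(-x.^2/2)/sqrt(2*pi), a, b)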
I should add that in many cases, the key to solving a difficult problem in mathematics is to use mathematics to reduce the problem to something simpler. If you can find a way to reduce the dimensionality of your problem just a bit, it will become much more tractable.
The integral you show is:
- analytically solvable: always do analytically what you can;
- equal to a constant: constant expressions should be eliminated from numerical calculations;
- not easy to compute accurately in MATLAB.
You can use cumtrapz to integrate over each variable alone, and call trapz for the final integration (a nested-trapz variant is sketched below). Remember that this will blow up the error on any problem that is more complicated than a simple sum of linear functions.
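A hedged sketch of that idea on this question's integral (the limits match the symbolic answer below; memory grows as n^5, so keep n small):
n = 21;
x = linspace(1, 5, n); y = linspace(-2, 3, n); z = linspace(0, 1, n);
u = linspace(-1, 1, n); v = linspace(0, 1, n);
[X, Y, Z, U, V] = ndgrid(x, y, z, u, v);
F = X + Y + Z + U + V;
q = trapz(x, trapz(y, trapz(z, trapz(u, trapz(v, F, 5), 4), 3), 2), 1)
% the integrand is linear, so the trapezoidal rule gives 180 exactly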
Mathematica is more suited to nD integrations, if you have access to that.
MATLAB can do symbolic integration:
>> x = sym('x'); y = sym('y'); z = sym('z'); u = sym('u'); v = sym('v');
>> int(int(int(int(int(x+y+z+u+v,1,5),-2,3),0,1),-1,1),0,1)
ans =
180
Just noticed you want to do numeric, not symbolic integration
If you look at the source of dblquad and triplequad
>> edit dblquad
you see that they just call the lower versions.
It should be possible for you to add a quadquad and a quintquad (or, recursively, an n-quad, as sketched below).
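A hedged sketch of such a recursive n-quad, using the modern integral in place of the deprecated quad (save as nquad.m; the name and argument conventions are my own invention):
function q = nquad(f, lo, hi)
% NQUAD Integrate f(x1,...,xn) over the box [lo(i), hi(i)] by nesting 1-D quadrature.
if isscalar(lo)
    q = integral(@(x) arrayfun(f, x), lo, hi);
else
    inner = @(x1) nquad(@(varargin) f(x1, varargin{:}), lo(2:end), hi(2:end));
    q = integral(inner, lo(1), hi(1), 'ArrayValued', true);
end
end
For the integral above, nquad(@(x,y,z,u,v) x+y+z+u+v, [1 -2 0 -1 0], [5 3 1 1 1]) returns 180 to within the default tolerances. But heed the cost warning in the other answer: five nested adaptive quadratures are expensive.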

Looking for ODE integrator/solver with a relaxed attitude to derivative precision

I have a system of (first order) ODEs with fairly expensive to compute derivatives.
However, the derivatives can be computed considerably cheaper to within given error bounds, either because the derivatives are computed from a convergent series and bounds can be placed on the maximum contribution from dropped terms, or through use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers which can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE, and the GNU Scientific Library's solvers.)
That is, for all the solvers I've seen, you provide a derivs callback accepting a t and an array of x, and returning an array of dx/dt; ideally I'm looking for one which gives the callback t, xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person wanting something like this) would be greatly appreciated.
Roughly speaking, if you know f' up to absolute error eps, and integrate from x0 to x1, the error of the integral coming from the error in the derivative is going to be <= eps*(x1 - x0). There is also discretization error, coming from your ODE solver. Consider how big eps*(x1 - x0) can be for you and feed the ODE solver with f' values computed with error <= eps.
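A quick numeric sanity check of that bound (the derivative and the perturbation are illustrative; tolerances are tightened so discretization error doesn't mask the effect):
eps_d = 1e-6; x0 = 0; x1 = 10;
opts = odeset('RelTol', 1e-10, 'AbsTol', 1e-12);
[~, yExact] = ode45(@(t, y) cos(t),         [x0 x1], 0, opts);
[~, yPert ] = ode45(@(t, y) cos(t) + eps_d, [x0 x1], 0, opts);
abs(yPert(end) - yExact(end))   % approximately eps_d*(x1 - x0) = 1e-5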
I'm not sure this is a well-posed question.
In many algorithms, e.g, nonlinear equation solving, f(x) = 0, an estimate of a derivative f'(x) is all that's required for use in something like Newton's method since you only need to go in the "general direction" of the answer.
However, in this case, the derivative is a primary part of the (ODE) equation you're solving - get the derivative wrong, and you'll just get the wrong answer; it's like trying to solve f(x) = 0 with only an approximation for f(x).
As another answer has suggested, if you set up your ODE as f(x) + g(x), where g(x) is an error term, you should be able to relate errors in your derivatives to errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably the key. My derivs function basically returns intervals, so an integrator using interval arithmetic would maintain the x's as intervals. All I'm interested in is obtaining a sufficiently small error bound on the xs at a final t. An obvious approach would be to iteratively re-integrate, each iteration improving the quality of the sample introducing the most error, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" as regards overall efficiency). I suspect adaptive step-size control would fit nicely into such a scheme, with the step size chosen to keep the "implicit" discretization error comparable with the "explicit" error (i.e., the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant stuff (VNODE and its references in particular).
If you have a stiff system, you will be using some form of implicit method in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you strict quadratic convergence on the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large) you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
Have you looked into using odeset? It allows you to set options for an ODE solver, then you pass the options structure as the fourth argument to whichever solver you call. The error control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I could come up with, having last used the MATLAB ODE functions years ago.
In addition: For the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need error limits to be passed from the solver?
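A minimal example of the tolerance options mentioned above (the system is illustrative):
derivs = @(t, x) [x(2); -x(1)];            % illustrative system (harmonic oscillator)
opts = odeset('RelTol', 1e-6, 'AbsTol', 1e-9);
[t, x] = ode45(derivs, [0 10], [1; 0], opts);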
Not sure I'm contributing much, but in the pharma modeling world, we use LSODE, DVERK, and DGPADM. DVERK is a nice fast simple order 5/6 Runge-Kutta solver. DGPADM is a good matrix-exponent solver. If your ODEs are linear, matrix exponent is best by far. But your problem is a little different.
BTW, the T argument is only in there for generality. I've never seen an actual system that depended on T.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, seems to me I heard of special methods used for that, based on conic-section curves.
Check into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions, with two evaluations of each of the above functions per element.
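For concreteness, a minimal sketch of the linear-element, midpoint-quadrature version (the boundary conditions u(0) = u(1) = 0 and the coefficients k, b, f are my illustrative choices):
k = @(x) 1 + 0*x; b = @(x) 2 + 0*x; f = @(x) sin(pi*x);   % illustrative coefficients
n = 100;                                   % number of elements
xn = linspace(0, 1, n+1)';
A = sparse(n+1, n+1); F = zeros(n+1, 1);
for e = 1:n
    h = xn(e+1) - xn(e);
    xm = (xn(e) + xn(e+1))/2;              % element midpoint
    km = k(xm); bm = b(xm); fm = f(xm);    % one evaluation of each per element
    Ae = km/h*[1 -1; -1 1] + bm/2*[-1 1; -1 1];   % stiffness + convection
    idx = [e, e+1];
    A(idx, idx) = A(idx, idx) + Ae;
    F(idx) = F(idx) + fm*h/2;              % midpoint-rule load vector
end
A(1,:) = 0; A(1,1) = 1; F(1) = 0;          % impose u(0) = 0
A(end,:) = 0; A(end,end) = 1; F(end) = 0;  % impose u(1) = 0
u = A \ F;
plot(xn, u)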