How to stop MPSolver of Google OR-Tools at the first feasible solution found? - or-tools

I have a MIP (BP, maximization) that takes too long to compute, and I'd like MPSolver to return the first feasible solution it finds. I'd also like to know whether I'm using the RELATIVE_MIP_GAP solver parameter correctly.
I have tried two things:
Callback
I have searched the docs and found no callback hook for MPSolver's solution process (only for CpSolver) with which one could implement stopping at the first feasible solution found.
Relative gap as termination criterion
I tried using RELATIVE_MIP_GAP like so (in Kotlin):
val mpSolverParameters = MPSolverParameters().apply {
setDoubleParam(MPSolverParameters.DoubleParam.RELATIVE_MIP_GAP, 1.0)
}
solver.solve(mpSolverParameters)
I've seen in a documentation comment somewhere that a value of 0.05 for RELATIVE_MIP_GAP means a 5% gap, so 1.0 should denote a 100% gap.
But it did not work. I know this because when I set a time limit, the solver returned a solution when the limit expired, but when I ran the same problem without a time limit, it just kept going and did not return anything, even after much more time than the earlier, time-limited run had taken.
If I understand relative gaps correctly, with a value of 1.0 for this parameter the solver should stop at the first feasible solution it finds, because the objective value of any integer solution is within a 100% relative difference of the objective value of the continuous (LP) relaxation. I should add that my objective function is always positive, so there is no problem of these two values having different signs.
Solution remarks
Both of Laurent Perron's suggestions work for my case.
If using SCIP as the solver, we may call solver.setSolverSpecificParametersAsString("limits/solutions = 1") to get the first feasible solution, but that will be a poor-quality one. The value passed can be increased as we see fit.
A time limit also works: call setTimeLimit(timeInMs) on the solver object. Solving will then return the best feasible solution found so far, or the not-solved status if no solution has been found at all.
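To make the two remarks above concrete, here is a minimal sketch. The question is in Kotlin, but the sketch uses OR-Tools' Python wrapper (pywraplp) so it is self-contained; the tiny model, the 30-second limit, and the solution limit of 1 are placeholders, not part of the original problem.

from ortools.linear_solver import pywraplp

# Minimal placeholder model: two boolean variables, maximization.
solver = pywraplp.Solver.CreateSolver("SCIP")
x = solver.BoolVar("x")
y = solver.BoolVar("y")
solver.Add(x + y <= 1)
solver.Maximize(2 * x + 3 * y)

# Stop SCIP after the first feasible solution (solver-specific parameter).
solver.SetSolverSpecificParametersAsString("limits/solutions = 1")

# And/or cap the wall-clock time (milliseconds); the best solution found so far is returned.
solver.SetTimeLimit(30000)

status = solver.Solve()
if status in (pywraplp.Solver.OPTIMAL, pywraplp.Solver.FEASIBLE):
    print("objective =", solver.Objective().Value())
else:
    print("no feasible solution found within the limits")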
Still not sure why RELATIVE_MIP_GAP didn't work; it is part of the generic API, not a solver-specific parameter.

Can you try CP-SAT? Does your problem fit there, meaning no continuous variables?
Can you remove the objective function?

Related

use number of solutions rather than maximum time to end solve attempts

I am using the CP-SAT solver on a job-shop scheduling problem (JSP).
I am iterating, so the solver runs many times (basically simulating each day for a year). I do not need to find the optimal solution, just a reasonably good one, so I would like to be a bit smarter about ending the solve than simply allowing it to run for X seconds each time. For example, I would like to take the 5th solution each time, or even to stop once the current solution's makespan is only 5% (for example) shorter than the previous solution's.
Is this possible? I am only aware of solver.parameters.max_time_in_seconds as a way of limiting the calculation time. Intermediate solutions are printed by SolutionPrinter, but I think this is output only and there is no way to break the solver during a run?
Wrong, you can stop the search in a callback; see this recipe:
https://github.com/google/or-tools/blob/stable/ortools/sat/docs/solver.md#stopping-search-early
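For completeness, here is a minimal sketch in the spirit of that recipe, using the CP-SAT Python API (method names may differ slightly across OR-Tools versions). The toy model is a placeholder, and the limit of 5 solutions and the 5% threshold simply mirror the numbers in the question.

from ortools.sat.python import cp_model

class StopAfterNSolutions(cp_model.CpSolverSolutionCallback):
    """Stops the search after N solutions or when improvement becomes small."""

    def __init__(self, solution_limit):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self._solution_limit = solution_limit
        self._solution_count = 0
        self._previous_objective = None

    def on_solution_callback(self):
        self._solution_count += 1
        objective = self.ObjectiveValue()
        print(f"solution {self._solution_count}, objective = {objective}")
        # Stop after the N-th feasible solution ...
        if self._solution_count >= self._solution_limit:
            self.StopSearch()
        # ... or when the relative improvement over the previous one drops below 5%.
        if self._previous_objective is not None:
            if abs(self._previous_objective - objective) < 0.05 * abs(self._previous_objective):
                self.StopSearch()
        self._previous_objective = objective

# Placeholder model; replace with your JSP model and makespan objective.
model = cp_model.CpModel()
x = model.NewIntVar(0, 10, "x")
y = model.NewIntVar(0, 10, "y")
model.Add(x + y <= 12)
model.Maximize(x + 2 * y)

solver = cp_model.CpSolver()
callback = StopAfterNSolutions(5)
status = solver.Solve(model, callback)  # older versions: solver.SolveWithSolutionCallback(model, callback)
print(solver.StatusName(status))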

scipy.integrate.odeint time-dependent step size

I have the following problem:
I have to use an ODE solver to solve a chemical reaction equation. The rate constants are functions of time and can change suddenly (a pulse from an electric discharge).
One way to handle this is to keep the step size very small, hmax < dt. This results in a high computational effort and is very time consuming. My question is: is there an efficient way to make this work? I thought about defining hmax(puls_ON) with puls_ON=True during the pulse and puls_ON=False between pulses. However, since the internal step size grows over time, the solver may not even notice the pulse, because the relevant interval keeps growing, hmax=hmax(t).
A time grid would be the best option, I think, but I don't believe this is possible with odeint?
Or is it possible to somehow force the solver to integrate at a specific point in time (e.g. t0 ->(hmax=False)->tpuls_1_start->(hmax=dt)->tpuls_1_end->(hmax=False)->puls_2_start.....)?
thx
There is an optional parameter tcrit for odeint that you could try:
Vector of critical points (e.g. singularities) where integration care should be taken.
I don't know what it actually does but it may help to not simply step over the pulse.
If that does not work you can of course manually split your integration into different intervals. Integrate until your tpuls_1_start. Then restart the integration using the results from the previous one as initial values.
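A minimal sketch of that manual splitting, for a single first-order decay with a hypothetical pulse window (all names and numbers below are made up for illustration, not taken from the question):

import numpy as np
from scipy.integrate import odeint

def rhs(y, t, k_background, k_pulse, t_on, t_off):
    # The rate constant jumps during the pulse window [t_on, t_off].
    k = k_pulse if t_on <= t <= t_off else k_background
    return -k * y

# Hypothetical parameters: slow background decay, fast decay during a short pulse.
k_background, k_pulse = 0.1, 500.0
t_on, t_off = 1.0, 1.001
y0 = [1.0]
args = (k_background, k_pulse, t_on, t_off)

# 1) Integrate up to the start of the pulse.
t1 = np.linspace(0.0, t_on, 101)
y1 = odeint(rhs, y0, t1, args=args)

# 2) Integrate across the pulse with a small maximum step so it is resolved.
t2 = np.linspace(t_on, t_off, 101)
y2 = odeint(rhs, y1[-1], t2, args=args, hmax=(t_off - t_on) / 50)

# 3) Continue after the pulse with the default (large) steps.
t3 = np.linspace(t_off, 10.0, 101)
y3 = odeint(rhs, y2[-1], t3, args=args)

# Stitch the pieces together, dropping the duplicated endpoints.
t = np.concatenate([t1, t2[1:], t3[1:]])
y = np.concatenate([y1, y2[1:], y3[1:]])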

Searching for max-min in MATLAB

I am writing MATLAB code where I calculate a max-min.
I am using MATLAB's fminimax to solve the following problem:
ki=G(i,:);
ki(i)=0;
fs(i)=-((G(i,i)*pt(i)+sum(ki.*pt)+C1)-(C2*(sum(ki.*pt)+C1)));
G is the system matrix and pt is the optimization variable.
When the actual system matrix is used, fminimax stops after one iteration and returns the initial value of pt, no matter what that initial value is, i.e. no solution is found (the initial value is what the documentation calls X0). The system has the following scales: G is on the order of 1e-11, pt on the order of 1e-1, and C1 on the order of 1e-14.
When I try a randomly generated test matrix with different parameters, fminimax finds a solution and everything works fine: G on the order of 1e-2, pt on the order of 1e-2, C1 on the order of 1e-7.
I tried scaling the actual system: fminimax then lasted more than one iteration, but it still returned the initial value of pt, i.e. it couldn't find a solution.
I tried changing the tolerances of fminimax via options (StepTolerance, OptimalityTolerance, ConstraintTolerance, and FunctionTolerance). There was no impact at all; still no solution.
I thought the problem might be that the precision of fminimax is not high enough, or that it is simply not suited to the problem. I also find it slow.
I downloaded CPLX and wanted to transform the max-min problem into a linear program, using a method I found in a book. However, when I tried my code on a simple minimax problem it did not give the same solution.
I thought of using CVX for example, but the problem is not convex.
What might be the problem?
P.S. The system matrix G has different realizations, and I tried several of them. However, fminimax responds the same way for all of them, i.e. it was not able to find an adequate solution.
I am not convinced that the optimization solvers are broken. If the problem is nonconvex, then there can be multiple local minimizers. Given the information you have provided, we have no way of knowing whether your initial condition is already at (or very near) a local minimizer.
The first thing to do is get more information from the optimization exit condition... Did it finish because it hit the iteration limit? (I hope not, since it isn't doing many iterations.) Did it finish because a tolerance was hit (e.g. the function did not change by more than xxxx)? Or perhaps it could not find a feasible solution? (I don't know if you have any constraints that need to be met.)
More than likely, I would guess that you are starting at a local minimizer without realizing it. So you need to determine whether you are indeed at a local minimizer by looking at the Jacobian of the function evaluated at your initial guess. Either calculate it analytically or use a finite-difference approximation.
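As a rough illustration of that check (sketched in Python/NumPy rather than MATLAB so it runs standalone; the objective below is a made-up placeholder, not the fs(i) from the question):

import numpy as np

def finite_difference_gradient(f, x0, h=1e-6):
    # Central-difference approximation of the gradient of f at x0.
    x0 = np.asarray(x0, dtype=float)
    grad = np.zeros_like(x0)
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = h
        grad[i] = (f(x0 + step) - f(x0 - step)) / (2.0 * h)
    return grad

# Placeholder objective standing in for max_i fs(i); replace with your own.
def objective(pt):
    return np.max(pt**2 - pt)

x0 = np.array([0.1, 0.2, 0.3])
print(finite_difference_gradient(objective, x0))
# Entries that are numerically zero suggest x0 is already a stationary point,
# or that the objective is so badly scaled that the solver sees it as flat.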

How to get the iteration number of lsqnonlin

I am doing parameter estimation in MATLAB using the lsqnonlin function.
In my work, I need to plot a graph showing the error as a function of the lsqnonlin iteration. So I need to know which iteration is running at each point in time inside lsqnonlin. Could anybody help me with how to extract the iteration number while lsqnonlin is running?
Thanks,
You want to pass it an options structure setting 'Display' to either 'iter' or 'iter-detailed':
http://www.mathworks.com/help/optim/ug/lsqnonlin.html#f265106
I've never used it myself, but looking at the help for lsqnonlin, it seems there is an option to set a custom output function, which gets called during every iteration of the solver. Looking at the specification, it seems that the values optimValues.iteration and optimValues.fval get passed into that function, which are probably the things you are interested in.
You should thus define your own function with the right signature; depending on your wishes, this function prints to the command line, makes a plot, saves the intermediate results in a vector, etc. Finally, you need to pass this function to the solver as a function handle via the options, e.g. options = optimset('OutputFcn', @your_outputfun) and then lsqnonlin(..., options).
The simple way to do this would be:
Start with a low number of (maximum) iterations
Get the result
Increase the number of iterations
Get the result
If the maximum number of iterations was used, go to step 3
This is what I would recommend in most cases when performance is not a big issue.
However, if you cannot afford to do it like this, try edit lsqnonlin and go digging until you find the point where the number of iterations is tracked. Then change the function so that it stores the results you need at that point (don't forget to change it back afterwards).
The good news is that all relevant files seem to be editable; the bad news is that it is not so clear where you can find the current number of iterations. A quick search led me to fminbnd, but I did not manage to confirm that this is actually used by lsqnonlin.

max likelihood fminsearch

I used MATLAB's fminsearch for a negative maximum-likelihood model for a binomially distributed function. I don't get any error message, but the parameter I want to estimate always stays at its start value. Apparently there is a mistake somewhere. I know I am asking a totally general question, but is it possible that anybody has had the same problem and knows how to deal with it?
Thanks a lot,
@woodchips, thank you a lot. Step by step, I've tried to do what you advised. First of all, I actually maximized (-log(likelihood)) and this is not the problem. I think I found the problem, but I still have some questions, if I'm not bothering you. I have a model(param) to maximize, starting at paramstart=p1. The model is built for (-log(likelihood(F))) and my F is a vectorized function like F(t,Z,X,T,param,m2,m3,k,l). I have data (tdata,kdata,ldata); X and T are grids, Z is a function on this grid, and (m1,m2,m3) are given parameters. When I look at the value of F(tdata,Z,X,T,m1,m2,m3,kdata,ldata), I get a good output. But I think fminsearch treats F(tdata,Z,X,T,p,m2,m3,kdata,ldata) like a constant, and that's why the estimated parameter always equals the start value. I would be happy for any advice on how to tweak that.
You have some options you can try to tweak; I'd start with the algorithm.
When the function value practically doesn't change around your start point, that's also problematic. Maybe switching to the log-likelihood helps.
I always use fminunc or fmincon. They also allow providing the Hessian (typically better than an estimated one) or 'typical values' so the algorithm doesn't spend time in infeasible regions.
It is virtually always true that you should NEVER maximize a likelihood function, but ALWAYS maximize the log of that function. Floating point issues will almost always corrupt the problem otherwise. That your optimization starts and stops at the same point is a good indicator this is the problem.
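A quick illustration of the floating-point issue (the sample size, data, and parameter values below are invented for the demo):

import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n, p_true = 50, 0.3
k = rng.binomial(n, p_true, size=2000)   # 2000 hypothetical binomial observations

for p in (0.30, 0.31):
    likelihood = np.prod(binom.pmf(k, n, p))        # product of many small numbers: underflows to 0.0
    log_likelihood = np.sum(binom.logpmf(k, n, p))  # stays finite and varies with p
    print(p, likelihood, log_likelihood)

# The raw likelihood is 0.0 for both values of p, so an optimizer sees a perfectly flat
# objective and stops at the start value; the log-likelihood still distinguishes them.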
You may well need to dig a little deeper than the above, but even so, this next test is the test I recommend that all users of optimization tools do for every one of their problems, BEFORE they throw a function into an optimizer. Evaluate your objective for several points in the vicinity. Does it yield significantly different values? If not, then look to see why not. Are you creating a non-smooth objective to optimize, or a zero objective? I.e., zero to within the supplied tolerances?
If it does yield different values but still does not converge, then make sure you know how to call the optimizer correctly. Yeah, right, like nobody has ever made this mistake before. This is actually a very common cause of optimizer failure.
If it does yield good values that vary, and you ARE calling the optimizer correctly, then think if there are regions into which the optimizer is trying to diverge that yield garbage results. Is the objective generating complex or imaginary results?