use number of solutions rather than maximum time to end solve attempts - or-tools

I am using the CP-SAT solver on a JSP.
I am iterating, so the solver runs many times (basically simulating each day for a year). I do not need to find the optimal solution, just a reasonably good one, so I would like to be a bit smarter about ending the solver than simply allowing it to run for X seconds each time. For example, I would like to take the 5th solution each time, or even to stop once the current solution's makespan is only 5% (for example) shorter than the previous solution's.
Is this possible? I am only aware of solver.parameters.max_time_in_seconds as a way of limiting the calculation time. Intermediate solutions are printed by SolutionPrinter, but I think this is output only and there is no way to break off the solver during a run?

Wrong, you can stop the search in a callback; see this recipe:
https://github.com/google/or-tools/blob/stable/ortools/sat/docs/solver.md#stopping-search-early
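For concreteness, here is a minimal Python sketch of that recipe (the class and parameter names are my own, made up for illustration): a solution callback that stops the search after a fixed number of solutions, or once the makespan improves by less than a given fraction over the previous solution.

from ortools.sat.python import cp_model

class StopAfterNSolutions(cp_model.CpSolverSolutionCallback):
    """Stop after a fixed number of improving solutions, or once the
    relative improvement over the previous solution becomes small."""

    def __init__(self, solution_limit=5, min_rel_improvement=0.05):
        cp_model.CpSolverSolutionCallback.__init__(self)
        self._solution_limit = solution_limit
        self._min_rel_improvement = min_rel_improvement
        self._solution_count = 0
        self._prev_objective = None

    def on_solution_callback(self):
        self._solution_count += 1
        objective = self.ObjectiveValue()
        if self._solution_count >= self._solution_limit:
            self.StopSearch()
        elif self._prev_objective is not None:
            # Minimized makespan: stop once the gain over the previous
            # solution drops below, e.g., 5%.
            if self._prev_objective - objective < self._min_rel_improvement * self._prev_objective:
                self.StopSearch()
        self._prev_objective = objective

# model = ... your JSP model with a minimized makespan ...
solver = cp_model.CpSolver()
callback = StopAfterNSolutions(solution_limit=5, min_rel_improvement=0.05)
status = solver.Solve(model, callback)  # older versions use SolveWithSolutionCallback

You can still keep max_time_in_seconds set as a safety net on top of this.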

Related

Optimising first solution strategy for VRP

I'm trying to pick the best first solution strategy to use on a VRP.
My use case is that an individual case takes around 60 seconds to solve on average, but I need to run hundreds or thousands of cases sequentially, so my whole solution takes hours.
I can trade off finding the optimal solution against time; a good solution is usually good enough.
Using the different strategies, I get solve times between 1 and 120 seconds.
My questions:
Is it reasonable to assume that the best strategy for one case will also be the best for other cases, given my model does not change much - just different pickup nodes and time windows?
Has anyone tried first testing each strategy then picking the best to use for the rest of the cases?
If I were to set the time limit to e.g. 1 second, would the strategy that gives the lowest objective function after, say, 1 s also be likely to give the best solution after 60 s, or unlimited time?
Many thanks!
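One way to test the "probe a few strategies, then reuse the winner" idea is roughly the following Python sketch (build_routing_model and representative_case are placeholders for your own model-building code and data):

from ortools.constraint_solver import pywrapcp, routing_enums_pb2

def pick_strategy(case, candidate_strategies, seconds=1):
    # Solve one case once per candidate strategy under a short time limit
    # and keep the strategy that gave the lowest objective.
    best_strategy, best_objective = None, None
    for strategy in candidate_strategies:
        manager, routing = build_routing_model(case)  # your own model builder
        params = pywrapcp.DefaultRoutingSearchParameters()
        params.first_solution_strategy = strategy
        params.time_limit.FromSeconds(seconds)
        solution = routing.SolveWithParameters(params)
        if solution and (best_objective is None
                         or solution.ObjectiveValue() < best_objective):
            best_objective = solution.ObjectiveValue()
            best_strategy = strategy
    return best_strategy

candidates = [
    routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC,
    routing_enums_pb2.FirstSolutionStrategy.SAVINGS,
    routing_enums_pb2.FirstSolutionStrategy.PARALLEL_CHEAPEST_INSERTION,
]
best = pick_strategy(representative_case, candidates, seconds=1)
# ...then reuse `best` for the remaining cases of the batch.

Whether the 1-second winner stays the winner at 60 seconds depends on how much the local search (rather than the first solution) contributes, so it is worth checking on a handful of cases first.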

How to stop MPSolver of Google OR-Tools at the first feasible solution found?

I have a MIP (BP, maximization) that takes too long to compute, and I'd like to have MPSolver return the first feasible solution it finds. I'd also like to know whether I am using the RELATIVE_MIP_GAP solver parameter correctly.
I have tried two things:
Callback
I have searched the docs and have not found any callback facility for MPSolver's solution process (only for CpSolver) with which one could implement stopping at the first feasible solution found.
Relative gap as termination criterion
I tried using RELATIVE_MIP_GAP like so (this is Kotlin language):
val mpSolverParameters = MPSolverParameters().apply {
    setDoubleParam(MPSolverParameters.DoubleParam.RELATIVE_MIP_GAP, 1.0)
}
solver.solve(mpSolverParameters)
I've seen in a documentation comment somewhere that a value of 0.05 for RELATIVE_MIP_GAP means a 5% gap, so 1.0 should denote a 100% gap.
But it did not work. I know this because when I set a time limit, the solver returned a solution at the end of that time limit, but when I ran the same problem without a time limit, it just kept going and did not return anything, even after much more time than the earlier time-limited run took.
If I understand relative gaps correctly, setting this parameter to 1.0 should make the solver stop immediately at any feasible solution found, because the objective value of any integer solution is within a 100% relative difference of the objective value of any continuous assignment of the variables. I should add that my objective function is always positive, so there is no problem of the two having different signs.
Solution remarks
Both of Laurent Perron's suggestions work for my case.
If using SCIP as the solver, we may call solver.setSolverSpecificParametersAsString("limits/solutions = 1") to get the first feasible solution, but that will be a poor-quality one. We may increase the value passed as we see fit.
Check out the time limit too: call setTimeLimit(timeInMs) on your solver object. It will return the best feasible solution found so far, or the unsolved state if no solution has been found at all.
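For reference, the two suggestions combined look roughly like this in the Python API (the Kotlin calls above are the direct equivalents; model-building code omitted):

from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")
# ... declare variables, constraints and the maximization objective ...

# SCIP-specific: stop after the first feasible solution (raise to taste).
solver.SetSolverSpecificParametersAsString("limits/solutions = 1")
# Safety net: also cap the run (the argument is in milliseconds).
solver.SetTimeLimit(60 * 1000)

status = solver.Solve()
if status in (pywraplp.Solver.OPTIMAL, pywraplp.Solver.FEASIBLE):
    print("Objective:", solver.Objective().Value())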
Still not sure why RELATIVE_MIP_GAP didn't work; it is part of the API, not a solver-specific parameter.
Can you try CP-SAT? Does your problem fit there, meaning no continuous variables?
Can you remove the objective function?

scipy.integrate.odeint time dependend stepsize

I have the following problem:
I have to use an ODE solver to solve a chemical reaction equation. The rate constants are functions of time and can change suddenly (pulse from an electric discharge).
One way to solve this is to keep the step size very small, hmax < dt. This results in a high computational effort, which is time consuming. My question is: is there an efficient way to make this work? I thought about defining hmax(puls_ON) with puls_ON=True within the pulse and puls_ON=False between pulses. However, since dt increases in time, the solver may not even recognize the pulse, because the time interval keeps growing, hmax=hmax(t).
A time grid would be the best option, I think, but I don't think this is possible with odeint?
Or is it possible to somehow force the solver to integrate at a specific point in time (e.g. t0 ->(hmax=False)->tpuls_1_start->(hmax=dt)->tpuls_1_end->(hmax=False)->puls_2_start.....)?
thx
There is an optional parameter tcrit for odeint that you could try:
Vector of critical points (e.g. singularities) where integration care should be taken.
I don't know what it actually does but it may help to not simply step over the pulse.
If that does not work you can of course manually split your integration into different intervals. Integrate until your tpuls_1_start. Then restart the integration using the results from the previous one as initial values.
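As a minimal sketch of that manual splitting (the rate law, pulse times and step sizes below are made up for illustration):

import numpy as np
from scipy.integrate import odeint

t_pulse_start, t_pulse_end = 1.0, 1.001   # example pulse window

def rhs(y, t):
    # Rate constant jumps during the discharge pulse.
    k = 5.0 if t_pulse_start <= t <= t_pulse_end else 1e-3
    return -k * y

y0 = np.array([1.0])

# Integrate interval by interval, restarting odeint at each pulse boundary
# with the last state of the previous interval as the new initial value.
segments = [(0.0, t_pulse_start, 0.0),           # before the pulse: default hmax
            (t_pulse_start, t_pulse_end, 1e-5),  # during the pulse: small hmax
            (t_pulse_end, 5.0, 0.0)]             # after the pulse: default hmax

ts, ys = [], []
for t0, t1, hmax in segments:
    t = np.linspace(t0, t1, 200)
    y = odeint(rhs, y0, t, hmax=hmax)
    y0 = y[-1]                                   # restart from the last state
    ts.append(t)
    ys.append(y)

t_all, y_all = np.concatenate(ts), np.concatenate(ys)

This way the solver cannot step over the pulse, and the cost of the small hmax is only paid inside the pulse window.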

why if I put a filter on an output I modify the source signal? is this a simulink bug?

I know it sounds strange and that's a bad way to write a question, but let me show you this odd behavior.
As you can see, this signal, r5, is nice and clean: exactly what I expected from my simulation.
Now look at this:
This is EXACTLY the same simulation; the only difference is that the filter is now not connected. I tried for hours to find a reason, but it seems like a bug.
This is my file; you can test it yourself by disconnecting the filter.
----edited.
Tried it with Simulink 2014 and on a friend's 2013, on two different computers... if someone can test it on 2015 it would be great.
(Attaching the filter to any other r, r1-r4 included, ''fixes'' the noise on ALL of r1-r8; I tried putting it on other signals but the noise won't go away.)
The expected result is exactly the smooth one. This file has proven quite robust in other simulations (so I guess the math inside the blocks is good), and this case happens only with one of the two ''link number'' inputs (one input on the top left) set to 4, even if a small noise appears with one ''link number'' set to 3.
thanks in advance for any help.
It seems to me that the only thing the filter could affect is the time step used in the integration, assuming you are using a dynamic time step (which is the default). So, my guess is that (if this is not a bug) your system is numerically unstable/chaotic. It could also be related to noise, caused by differentiation. Differentiating noise over a smaller time step mostly makes things even worse.
Solvers such as ode23 and ode45 use a dynamic time step. ode23 compares a second and third order integration and selects the third one if the difference between the two is not too big. If the difference is too big, it does another calculation with a smaller timestep. ode45 does the same with a fourth and fifth order calculation, more accurate, but more sensitive. Instabilities can occur if a smaller time step makes things worse, which could occur if you differentiate noise.
To overcome the problem, try using a fixed time step, change your precision/solver, or better: avoid differentiation, use some type of state estimator to obtain derivatives or calculate analytically.

What's the best way to measure and track performance over various calls at runtime?

I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime?
Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time?
Many thanks.
Edit: Maybe this is better asked by saying: how do I use the Xcode debug tools to do a stack trace?
You want to use the built-in performance tools called 'Instruments'; check out Apple's guide to Instruments. Specifically, you probably want the System Instruments. There's also the Tuning Guide, which could be useful to you, and Shark.
Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time?
Short version of previous answer.
Learn an IDE or debugger. Make sure it has a "pause" button, or that you can interrupt it while your program is running and taking too long.
If your code runs too quickly to be manually paused, wrap a temporary loop of 10 to 1000 times around it.
When you pause it, make a copy of the call stack, into some text editor. Repeat several times.
Your answer will be in those stacks. If the CPU is spending most of its time in a statement, that statement will be at the bottom of most of the stack samples. If there is some function call that causes most of the time to be used, that function call will be on most of the stacks. It doesn't matter if it's recursive - that just means it shows up more than once on a stack.
Don't think about measuring microseconds, or counting calls. Think about "percent of time active". That's what stack samples tell you, and that's roughly what you'll save if you fix it.
It's that simple.
BTW, when you fix that problem, you will get a speedup factor. Then, other issues in your code will be magnified by that factor, so they will be easier to find. This way, you can keep going until you've squeezed every cycle out of it.
The first thing I tell people is to recognize the difference between
1) timing routines and counting how many times they are called, and
2) finding code that you can fruitfully optimize.
For (1) there are instrumenting profilers.
To be really successful at (2) you need a rare type of profiler.
You need a sampling profiler that
samples the entire call stack, not just the program counter
samples at random wall clock times, not just CPU, so as to capture possible I/O problems
samples when you want it to (not when waiting for user input)
for output, gives you, for each line of code that appears on stack samples, the percent of samples containing that line. That is a direct measure of the total time that could be saved if that line were not there.
(I actually do it by hand, interrupting the program under the debugger.)
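The idea is not tied to a particular tool. As a toy illustration (in Python, purely to show the mechanics; a debugger's pause button or Instruments' Time Profiler plays the same role on macOS/iOS), here is a crude wall-clock stack sampler run against a stand-in workload:

import collections
import sys
import threading
import time
import traceback

def slow_work():
    # Stand-in for the code under investigation.
    total = 0
    for i in range(20_000_000):
        total += i * i
    return total

worker = threading.Thread(target=slow_work)
worker.start()

line_hits = collections.Counter()
n_samples = 0
while worker.is_alive():
    frame = sys._current_frames().get(worker.ident)
    if frame is not None:
        n_samples += 1
        # Count each source line at most once per sample, so the result reads
        # as "percent of samples on which this line appeared".
        for entry in {(f.filename, f.lineno, f.name)
                      for f in traceback.extract_stack(frame)}:
            line_hits[entry] += 1
    time.sleep(0.01)
worker.join()

for (filename, lineno, name), count in line_hits.most_common(5):
    print(f"{100 * count / n_samples:5.1f}%  {name}  {filename}:{lineno}")

The lines that show up on most samples are where the time goes, which is exactly the information the manual pause-and-read-the-stack method gives you.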
Don't get sidetracked by problems you don't have, such as
accuracy of measurement. If a line of code appears on 30% of call stack samples, its actual cost could be anywhere in a range around 30%. If you can find a way to eliminate it or invoke it a lot less, you will save what it costs, even if you don't know in advance exactly what its cost is.
efficiency of sampling. Since you don't need accuracy of time measurement, you don't need a large number of samples. Even if you get a large number of samples, they don't skew the results significantly, because they don't fail to spot the costly lines of code.
call graphs. They make nice graphics, but are not what you need to know. An arc on a call graph corresponds to a line of code in the best case, usually multiple lines, so knowing cost of an arc only tells the cost of a line in the best case. Call graphs concentrate on functions, when what you need to find is lines of code. Call graphs get wrapped up in the issue of recursion, which is irrelevant.
It's important to understand what to expect. Many programmers, using traditional profilers, can get a 20% improvement, consider that terrific, count the profiler a winner, and stop there. Others, working with large programs, can often get speedup factors of 20 times.
This is done by fixing a series of problems, each one giving a multiplicative speedup factor. As soon as the profiler fails to find the next problem, the process stops. That's why "good enough" isn't good enough.
Here is a brief explanation of the method.