Using Gurobi to run a MIQP: how can I improve time performance? - matlab

I am using Gurobi to run a MIQP (Mixed Integer Quadratic Programming) with linear constraints in Matlab. The solver is very slow and I would like your help to understand whether I can do something about it.
These are the lines I use to launch the problem:
clear model;
clear params;
model.A     = [Aineq; Aeq];        % constraint matrix (Gurobi expects a sparse matrix here)
model.rhs   = [bineq; beq];        % right-hand sides
model.sense = [repmat('<', size(Aineq,1), 1); repmat('=', size(Aeq,1), 1)];
model.Q     = Q;                   % quadratic part of the objective
model.obj   = c;                   % linear part of the objective
model.vtype = type;                % variable types ('C', 'I', 'B', ...)
model.lb    = total_lb;
model.ub    = total_ub;
params.MIPGap = 10^(-1);           % relative MIP gap at which the solver stops
result = gurobi(model, params);
This is a screenshot of the output in the Matlab window.
Question 1: This is the first time I am trying to run a MIQP, and I would like your advice on what I can do to improve performance. Let me tell you what I have tried so far:
I cheated by imposing params.MIPGap=10^(-1). This shortens the node exploration phase. What are the cons of doing this?
I have big-M coefficients and I have tightened them to the smallest possible values.
I have tried setting params.ScaleFlag=2; params.ObjScale=2 but it makes things slower.
I have changed params.Method but it does not seem to help (unless you have some specific recommendation).
I have increased params.Threads but it does not seem to help.
Question 2 (minor): Why do I get a negative objective in the root simplex log? How can the objective function be negative?

Without having the full model here, there is not much advice to give. Tight big-M formulations are important, but you said you have already checked them. Sometimes splitting them up might help, but this is a complex field.
What might give great benefits for some problems is the Gurobi parameter tuning tool. So try to export your model and feed the tuning tool with it. It automatically tries different combinations of the hundreds of tuning parameters and might give some nice results.
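For example (a minimal sketch; the file name is just a placeholder, and grbtune is the command-line tuning tool that ships with Gurobi):
% Export the model built above so the tuning tool can read it
gurobi_write(model, 'my_miqp.mps');
% Then, from a shell (assuming the Gurobi binaries are on the PATH):
%   grbtune my_miqp.mps
% grbtune reports parameter settings that reduced the solve time; copy the
% suggested settings into the params struct, e.g. params.MIPFocus = 1;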

Regarding the question about negative objectives in the simplex logs, I can think of a couple of possible explanations. First, note that the negative objective values occur in the presence of dual infeasibilities in the dual simplex run. In such a case, I'm not sure exactly what the primal objective values correspond to. Second, if you have a MIQP with products of binaries in the objective, Gurobi may convexify the objective in a way that makes it possible for a negative objective to appear in the reformulated model even when the original model must have a nonnegative objective in any feasible solution.


matlab running all linprog algorithms (is there a matlab-list of algorithms?)

Matlab offers multiple algorithms for solving Linear Programs.
For example Matlab R2012b offers: 'active-set', 'trust-region-reflective', 'interior-point', 'interior-point-convex', 'levenberg-marquardt', 'trust-region-dogleg', 'lm-line-search', or 'sqp'.
But other versions of Matlab support different algorithms.
I would like to run a loop over all algorithms that are supported by the user's Matlab version, and I would like them to be ordered according to Matlab's recommendation order.
I would like to implement something like this:
i = 1;
x = [];
while isempty(x)
    options = optimset(options, 'Algorithm', Here_I_need_a_list_of_Algorithms{i});
    x = linprog(f, A, b, Aeq, beq, lb, ub, x0, options);
    i = i + 1;   % try the next algorithm if no solution was returned
end
In 99% of cases this code should be equivalent to
x = linprog(f,A,b,Aeq,beq,lb,ub,x0,options);
but sometimes the algorithm gives back an empty array because of numerical problems (exitflag -4). If there is a chance that one of the other algorithms can find a solution, I would like to try them too.
So my question is:
Is there a way to automatically get a list of all linprog algorithms supported by the installed Matlab version, ordered as Matlab recommends them?
I think looping through all algorithms can make sense in other scenarios too. For example, when you need very precise results and have a lot of time, you could run them all and then evaluate which gives the best results.
Or one might want to loop through all algorithms to find out which algorithm is best for LPs with a certain structure.
There's no automatic way to do this as far as I know. If you really want to do it, the easiest thing to do would be to go to the online documentation, and check through previous versions (online documentation is available for old versions, not just the most recent release), and construct some variables like this:
r2012balgos = {'active-set', 'trust-region-reflective', 'interior-point', 'interior-point-convex', 'levenberg-marquardt', 'trust-region-dogleg', 'lm-line-search', 'sqp'};
...
r2017aalgos = {...};
v = ver('matlab');
switch v.Release
    case '(R2012b)'
        algos = r2012balgos;
    ...
    case '(R2017a)'
        algos = r2017aalgos;
end
% loop through each of the algorithms
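Once algos is filled in, the loop itself could look something like this (a rough sketch; f, A, b, Aeq, beq, lb, ub and x0 are assumed to be defined already):
x = [];
i = 1;
while isempty(x) && i <= numel(algos)
    options = optimset('Algorithm', algos{i});   % pick the i-th algorithm
    [x, ~, exitflag] = linprog(f, A, b, Aeq, beq, lb, ub, x0, options);
    if exitflag < 0        % failed run: discard the result and try the next one
        x = [];
    end
    i = i + 1;
end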
Seems boring, but it should only take you about 30 minutes.
There's a reason MathWorks aren't making this as easy as you might hope, though, because what you're asking for isn't a great idea.
It is possible to construct artificial problems where one algorithm finds a solution and the others don't. But in practice, typically if the recommended algorithm doesn't find a solution this doesn't indicate that you should switch algorithms, it indicates that your problem wasn't well-formulated, and you should consider modifying it, perhaps by modifying some constraints, or reformulating the objective function.
And after all, why stop with just looping through the alternative algorithms? Why not also loop through lots of values for other options such as constraint tolerances, optimality tolerances, maximum number of function evaluations, etc.? These may have just as much likelihood of affecting things as a choice of algorithm. And soon you're running an optimisation algorithm to search through the space of meta-parameters for your original optimisation.
That's not a great plan - probably better to just choose one of the recommended algorithms, stick to that, and if things don't work out then focus on improving your formulation of the problems rather than over-tweaking the optimisation itself.

How to optimize more than 3 objective functions on MATLAB? gamultiobj is not efficient

I am using MATLAB's gamultiobj optimization,
as I have 6 to 12 objective functions; the gamultiobj function handles the problem inefficiently, always terminating because the number of generations is exceeded, not because the changes in the objective functions become smaller.
I looked at the gamultiobj options documentation, but it didn't help:
http://www.mathworks.com/help/gads/examples/multiobjective-genetic-algorithm-options.html
1- How can I increase the capability of the gamultiobj function to handle this number of objective functions?
2- Is there a better way at all (using MATLAB)?
Well,
this is my update:
1- I increased the number of generations and the population size, and assigned a proper initial population using the common ga options; it worked better. (I didn't know that these options also work with gamultiobj; it isn't stated explicitly anywhere in the documentation.)
2- After running and inspecting the results, I realized that gamultiobj can handle many objective functions efficiently, provided that they are independent. When the objective functions are strongly dependent (which is unfortunately the case for my problem), the solver's efficiency decreases dramatically.
Thanks!
You should increase the number of generations and possibly play with options such as crossover, mutation, and the constraint bounds within which you want the solution.
The bounds need to be specified correctly, and an initial population is also pretty much needed to steer the solver toward the set of parameters that you want to optimize.
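As a rough sketch of what the above amounts to (gaoptimset option names as in the R2012-era releases; fitnessfcn, nvars, lb, ub and the matrix P0 of initial individuals are placeholders):
opts = gaoptimset('Generations', 500, ...          % raise the generation limit
                  'PopulationSize', 200, ...       % larger population
                  'InitialPopulation', P0);        % user-supplied starting points
[x, fval] = gamultiobj(fitnessfcn, nvars, [], [], [], [], lb, ub, opts);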

Maximum Likelihood, Matlab

I'm writing code that executes MLE (maximum likelihood estimation). At each step I get the gradient at one point and then move along it to another point. But I have a problem with determining the magnitude of the move. How do I determine the best step size for good convergence? Can you give me advice on how to avoid other pitfalls, such as the presence of several maxima?
Regarding the presence of several maxima: this issue will occur when dealing with a function that is not convex. It can be partially solved by multi-start optimization, which essentially means that you run the simulation multiple times in order to find as many maxima as possible and then select the 'highest' maximum from among them. Note that this does not guarantee global optimality, as the global optimum might be hard to reach (i.e. the local optima have a larger domain of attraction).
Regarding the optimal step size for convergence: you might want to look at backtracking line search. A short explanation of it can be found in the answer to this question.
We might be able to give you more specific help if you could give us some code to look at, as jkalden already pointed out.
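For illustration, here is a minimal sketch of gradient ascent with backtracking line search on a log-likelihood; loglik and gradloglik are placeholder function handles, and the constants are typical choices rather than recommendations:
theta = theta0;                          % starting point
for iter = 1:200
    g = gradloglik(theta);               % ascent direction
    if norm(g) < 1e-6, break; end        % stop when the gradient is small
    t = 1;                               % initial step size
    % backtracking: shrink the step until the Armijo condition holds
    while loglik(theta + t*g) < loglik(theta) + 0.5*t*(g.'*g)
        t = 0.5*t;
    end
    theta = theta + t*g;                 % take the accepted step
end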

Numerical Integral of large numbers in Fortran 90

So I have the following integral that I need to do numerically:
Int[ Exp(0.5*(a*Cos(x) + b*Sin(x) + c*Cos(2x) + d*Sin(2x))) ] for x = 0..2*Pi
The problem is that the integrand at any given value of x can be extremely large, e.g. e^2000, so larger than I can deal with in double precision.
I haven't had much luck googling for the following: how do you deal with large numbers in Fortran? Not high precision; I don't care about knowing it beyond double precision, and at the end I'll just be taking the log, but I need to be able to handle the large numbers until I can take the log.
Are there integration packages that can handle arbitrarily large numbers? Mathematica clearly can, so there must be something like this out there.
Cheers
This is probably an extended comment rather than an answer but here goes anyway ...
As you've already observed Fortran isn't equipped, out of the box, with the facility for handling such large numbers as e^2000. I think you have 3 options.
Use mathematics to reduce your problem to one which does (or a number of related ones which do) fall within the numerical range that your Fortran compiler can compute.
Use Mathematica or one of the other computer algebra systems (eg Maple, SAGE, Maxima). All (I think) of these can be integrated into a Fortran program, with varying degrees of difficulty.
Use a library for high-precision (often also called arbitrary-precision or multiple-precision) arithmetic. Your favourite search engine will turn up a number of these for you, some written in Fortran (and therefore easy to integrate), some written in C/C++ or other languages (and therefore slightly harder to integrate). You might start your search at Lawrence Berkeley or the GNU bignum library.
(Yes I know that I wrote that you have 3 options, but your question suggests that you aren't ready to consider this yet) You could write your own high-/arbitrary-/multiple-precision functions. Fortran provides everything you need to construct such a library, there is a lot of work already done in the field to learn from, and it might be something of interest to you.
In practice it generally makes sense to apply as much mathematics as possible to a problem before resorting to a computer; that process can not only assist in solving the problem but also guide your selection or construction of a program to solve what's left of it.
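To make option 1 concrete: since only the log of the integral is needed, you can factor out the largest exponent so that nothing ever overflows. A minimal sketch of the idea (shown in MATLAB only because that is the language used elsewhere on this page; a, b, c, d are the coefficients from the question):
f = @(x) 0.5*(a*cos(x) + b*sin(x) + c*cos(2*x) + d*sin(2*x));   % the exponent
x = linspace(0, 2*pi, 10001);
M = max(f(x));                                   % largest exponent on the grid
% log( Int exp(f(x)) dx ) = M + log( Int exp(f(x) - M) dx ), and exp(f - M) <= 1
logI = M + log(trapz(x, exp(f(x) - M)));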
I agree with High Performance Mark that the best option here numerically is to use analysis to scale or simplify the result first.
I will mention that if you do want to brute force it, gfortran (as of 4.6, with the libquadmath library) has support for quadruple precision reals, which you can use by selecting the appropriate kind. As long as your answers (and the intermediate results!) don't get too much bigger than what you're describing, that may work, but it will generally be much slower than double precision.
This requires looking deeper at the problem you are trying to solve and the behavior of the underlying mathematics. To add to the good advice already provided by Mark and Jonathan, consider expanding the exponential and trig functions into Taylor series and truncating to the desired level of precision.
Also, take a step back and ask what you are trying to accomplish by calculating this value. As an example, I recently had to debug why I was getting outlandish results from a property correlation which was calculating the vapor pressure of a fluid to see if condensation was occurring. I spent a long time trying to understand what was wrong with the temperature being fed into the correlation until I realized the case causing the error was a simulation of vapor detonation. The problem was not in the numerics but in the logic of checking for condensation during a literal explosion; physically, a condensation check made no sense. The real problem was that the code was asking an unnecessary question; it already had the answer.
I highly recommend Forman Acton's Numerical Methods That (Usually) Work and Real Computing Made Real. Both focus on problems like this and suggest techniques to tame ill-mannered computations.

max likelihood fminsearch

I used Matlab's fminsearch for a negative maximum likelihood model for a binomially distributed function. I don't get any error notice, but the parameters which I want to estimate always take the start value. Apparently, there is a mistake. I know that I am asking a totally general question, but is it possible that anybody has had the same mistake and knows how to deal with it?
Thanks a lot,
@woodchips, thank you a lot. Step by step, I've tried to do what you advised me. First of all, I do already work with (-log(likelihood)), so this is not the problem. I think I found the problem, but I still have some questions, if I'm not bothering you. I have a model(param) to maximize with start point paramstart = p1. This model is built for (-log(likelihood(F))), and my F is a vectorized function like F(t,Z,X,T,param,m2,m3,k,l). I have data like (tdata,kdata,ldata); X and T are grids, Z is a function on this grid, and (m1,m2,m3) are given parameters. When I want to see the value of F(tdata,Z,X,T,m1,m2,m3,kdata,ldata), I get a good output. But I think fminsearch treats F(tdata,Z,X,T,p,m2,m3,kdata,ldata) as a constant, and that's why the estimated parameter always equals the start value. I would be happy for any advice on how to fix that.
You have some options you can try to tweak. I'd start with the algorithm.
When the function value practically doesn't change around your start point, that's also problematic. Maybe switching to the log-likelihood helps.
I always use fminunc or fmincon. They also allow providing the Hessian (typically better than an estimated one) or 'typical values', so the algorithm doesn't spend time in infeasible regions.
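For instance (a hedged sketch using current optimoptions names; negloglikfun is a placeholder that returns both the objective value and its gradient, and the Hessian can be supplied in a similar way):
opts = optimoptions('fminunc', 'Algorithm', 'trust-region', ...
                    'SpecifyObjectiveGradient', true);   % use the supplied gradient
[phat, fval] = fminunc(@negloglikfun, p1, opts);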
It is virtually always true that you should NEVER maximize a likelihood function, but ALWAYS maximize the log of that function. Floating point issues will almost always corrupt the problem otherwise. That your optimization starts and stops at the same point is a good indicator this is the problem.
You may well need to dig a little deeper than the above, but even so, this next test is the test I recommend that all users of optimization tools do for every one of their problems, BEFORE they throw a function into an optimizer. Evaluate your objective for several points in the vicinity. Does it yield significantly different values? If not, then look to see why not. Are you creating a non-smooth objective to optimize, or a zero objective? I.e., zero to within the supplied tolerances?
If it does yield different values but still not converge, then make sure you know how to call the optimizer correctly. Yeah, right, like nobody has ever made this mistake before. This is actually a very common cause of failure of optimizers.
If it does yield good values that vary, and you ARE calling the optimizer correctly, then think if there are regions into which the optimizer is trying to diverge that yield garbage results. Is the objective generating complex or imaginary results?
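As a concrete version of that first test (a rough sketch; F, the data and paramstart come from the question above, and the way the likelihood is assembled here is only an assumption):
% Wrap everything except the parameter in an anonymous function, so that
% fminsearch really does vary p on each call
negloglik = @(p) -sum(log(F(tdata, Z, X, T, p, m2, m3, kdata, ldata)));
p1 = paramstart;
% Evaluate the objective at the start point and at nearby perturbed points.
% If these values are all (nearly) identical, the optimizer has nothing to
% work with and will simply return the start value.
vals = arrayfun(@(d) negloglik(p1 + d), [-0.1 -0.01 0 0.01 0.1])
% Only once the objective visibly varies is it worth calling the optimizer:
phat = fminsearch(negloglik, p1);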