I have been trying to write a MILP in MATLAB, using the Gurobi solver through its MATLAB interface.
It seems the solver has found a heuristic optimal solution, but it does not stop iterating and keeps searching. I am posting a screenshot of the process.
Can anyone tell me how to set a stopping criterion for Gurobi in MATLAB? I have tried looking through the Gurobi documentation, but it didn't help me much.
Even though Gurobi may have found the optimal solution very quickly, it does not yet know that it is optimal. Only once the gap has reached zero can we be sure there is no better integer solution. You can set a gap tolerance (the MIPGap parameter), but for a proven optimal solution you need to leave it at (or close to) zero.
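As a rough illustration of setting such limits through the Gurobi MATLAB interface (assuming you have already built the model struct, here called model; the values are only examples):

    % MIPGap and TimeLimit are standard Gurobi parameters; values are placeholders.
    params = struct();
    params.MIPGap    = 1e-2;   % stop once the relative gap is below 1 %
    params.TimeLimit = 600;    % or stop after 600 seconds, whichever comes first
    result = gurobi(model, params);
    disp(result.objval)        % best objective found within those limits

With MIPGap > 0 the solver stops as soon as it can prove the incumbent is within that tolerance of the best bound, but the reported solution is then no longer a proven optimum.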
I am currently trying to minimize a function with linear inequality and equality constraints. The problem is that fmincon (a MATLAB tool) cannot find a feasible solution. I have already tried everything from this list: http://de.mathworks.com/help/optim/ug/when-the-solver-fails.html
Maybe the problem is too large for fmincon. I have to solve it with ~3300 inequality constraints and 1 equality constraint. The objective is a scalar function of 9 variables: S = sum((X_i - 1)^2).
In addition, I have to solve this problem ~3300 times (once per inequality constraint), so I cannot wait too long for one minimization.
I do not know whether fmincon is capable of handling this problem and would like to hear suggestions for alternative optimization tools. MATLAB would be perfect (or C/C++), and I cannot afford to purchase any software.
I hope you can help me.
So you want to solve a quadratic problem with ~3300 constraints and you expect it to be fast. I think the real issue isn't the programming, but that you'll have to analyse your problem more instead of just using brute force.
If you think there is nothing more to be done, one idea is to use heuristics, but then you aren't sure you get the exact solution. Using heuristics requires that you know your problem well enough to apply the right one.
Another possibility is to figure out which constraints are really the ones that matter. Maybe you can identify, say, 10 such constraints, solve the problem with those, and then add one additional constraint after another, using the previous solution as the initial guess and hoping that the solution does not change suddenly.
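A rough sketch of that incremental idea, assuming the inequality constraints are stored row-wise as A*x <= b, the single equality constraint as Aeq*x = beq, and the objective is the sum of squares from the question (all names are placeholders):

    obj = @(x) sum((x - 1).^2);                 % S = sum((X_i - 1)^2), 9 variables
    x   = ones(9, 1);                           % initial guess
    coreIdx = 1:10;                             % the few constraints you expect to matter
    restIdx = setdiff(1:size(A, 1), coreIdx);
    x = fmincon(obj, x, A(coreIdx, :), b(coreIdx), Aeq, beq);
    for k = restIdx
        coreIdx = [coreIdx, k];                 % add one more constraint ...
        x = fmincon(obj, x, A(coreIdx, :), b(coreIdx), Aeq, beq);  % ... warm-start from previous x
    end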
I have a program using the PSO algorithm with a penalty function for constraint satisfaction. But when I run the program, the output for each iteration is:
"Iteration 1: Best Cost = Inf"
.
Does anyone know why I always get an Inf answer?
There could be many reasons for that, and no answer will be accurate unless you provide an MWE with the code you have already tried, or at least some context on the function you are analysing.
For instance, while studying the PSO algorithm you might first use it on functions that have analytical solutions. That way you can study the behaviour of the algorithm before applying it to a harder problem, and fine-tune its parameters.
My guess is that you are either not providing the right function (I have done that myself; getting a sign wrong is easy!), not providing the right constraints (same logic applies), or your weights for the penalty function and the velocity update are way off.
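For example, a generic penalty-function wrapper (not the OP's code; myObjective, myConstraints and x0 are hypothetical names) could look like the sketch below, and evaluating it at a known feasible point is a quick sanity check against Inf:

    w = 1e3;                                          % penalty weight; too small lets infeasible
                                                      % points win, too large causes huge values
    g = @(x) myConstraints(x);                        % constraints written as g(x) <= 0
    penalised = @(x) myObjective(x) + w * sum(max(0, g(x)).^2);
    penalised(x0)                                     % should be finite at a feasible point;
                                                      % Inf here means the objective or the
                                                      % constraints themselves blow up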
When running the GlobalSearch solver on a nonlinear constrained optimization problem I have, I often get very different solutions each run. For the cases that I have an analytical solution, the numerical results are less dispersed than the non-analytical cases but are still different each run. It would be nice to get the same results at least for these analytical cases so that I know the optimization routine is working properly. Is there a good explanation of this in the Global Optimization Toolbox User Guide that I missed?
Also, why does GlobalSearch use a different number of local solver runs each run?
Thanks!
A full description of how the GlobalSearch algorithm works can be found here.
In summary, the GlobalSearch method iteratively performs local optimization. Basically, it starts by using fmincon to search for a local minimum near the initial conditions you have provided. Then a set of "trial points", based on how good the initial result was, is generated using the scatter-search algorithm. Then there is some more local optimization and a rating of how good the minima around these points are.
There are a couple of things that can cause the algorithm to give you different answers:
1. Changing the initial conditions you give it
2. The scatter search algorithm itself
The fact that you are getting different answers each time likely means that your function is highly non-convex. The best thing that I know of that you can do in this scenario is just to try the optimization algorithm at several different initial conditions and see what result you get back the most frequently.
It also looks like there is a 'PlotFcns' property, which would let you get a better idea of what the functions the solver is generating for you look like.
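As a minimal sketch (the objective, bounds and start point below are placeholders, and newer releases may spell the plot property 'PlotFcn'):

    rng(1);                                        % fix the seed so the scatter-search trial
                                                   % points, and hence the result, are repeatable
    problem = createOptimProblem('fmincon', ...
        'objective', @(x) (x(1) - 1)^2 + x(2)^2, ...   % placeholder objective
        'x0', [5; 5], 'lb', [-10; -10], 'ub', [10; 10]);
    gs = GlobalSearch('Display', 'iter', 'PlotFcns', @gsplotbestf);
    [xBest, fBest] = run(gs, problem);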
You can use the ga or gamultiobj functions alongside the GlobalSearch API; I would recommend this. A purely local solver won't escape the local minima of a non-convex problem on its own, and even genetic algorithms don't guarantee the global solution. If you run ga and then use its final minimum as the start of your fmincon search, it should give the same answer consistently. There may be better approaches, but if the search space is unknown you may never know.
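A hedged sketch of that ga-then-fmincon workflow (the objective and bounds are placeholders):

    obj = @(x) (x(1) - 1)^2 + x(2)^2;                   % placeholder objective
    lb  = [-10 -10];  ub = [10 10];
    xGA  = ga(obj, 2, [], [], [], [], lb, ub);          % coarse global search
    xOpt = fmincon(obj, xGA, [], [], [], [], lb, ub);   % local refinement from the ga result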
Are there any faster and more efficient solvers than fmincon? I'm using fmincon for a specific problem and I run out of memory for a modestly sized vector variable. I don't have any supercomputers or cloud-computing options at my disposal either. I know that any alternative solver may still run out of memory, but I'm just trying to see where the problem is.
P.S. I don't want a solution that would change the way I'm approaching the actual problem. I know convex optimization is the way to go, and I have already done enough work to get this far.
P.P.S. I saw the other question regarding open-source alternatives. That's not what I'm looking for. I'm looking for more efficient solvers, in case someone faced the same problem and shifted to a better one.
Hmmm...
Without further information, I'd guess that fmincon runs out of memory because it needs the Hessian, which, given that your decision variable has 10^4 elements, will be a 10^4-by-10^4 matrix.
It also takes a lot of time to fill in the values of the Hessian, because fmincon normally uses finite differences for that if you don't specify the derivatives explicitly.
There's a couple of things you can do to speed things up here.
If you know beforehand that there will be a lot of zeros in your Hessian, you can pass sparsity patterns of the Hessian matrix via HessPattern. This saves a lot of memory and computation time.
If it is fairly easy to come up with explicit formulae for the Hessian of your objective function, create a function that computes the Hessian and pass it on to fmincon via the HessFcn option in optimset.
The same holds for the gradients: the GradConstr option (for your non-linear constraint functions) and/or the GradObj option (for your objective function) apply here.
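A sketch of what setting these options might look like with the legacy optimset syntax; the derivative and Hessian functions (myObj, myCon, myHessian) and the sparsity pattern are placeholders you would have to write yourself:

    options = optimset('fmincon');
    options = optimset(options, ...
        'Algorithm',  'interior-point', ...
        'GradObj',    'on', ...             % myObj returns [f, gradf]
        'GradConstr', 'on', ...             % myCon returns [c, ceq, gradc, gradceq]
        'Hessian',    'user-supplied', ...
        'HessFcn',    @myHessian);          % Hessian of the Lagrangian
    % For the sparsity-pattern route (trust-region-reflective algorithm):
    % options = optimset(options, 'HessPattern', mySparsityPattern);
    x = fmincon(@myObj, x0, [], [], [], [], lb, ub, @myCon, options);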
There are probably a few options I forgot here that could also help you. Just go through all the options in the Optimization Toolbox's optimset and see whether they can help you.
If all this doesn't help, you'll really have to switch optimizers. Given that fmincon is the pride and joy of MATLAB's optimization toolbox, there really isn't anything much better readily available, and you'll have to search elsewhere.
TOMLAB is a very good commercial solution for MATLAB. If you don't mind going to C or C++, there's SNOPT (which is what TOMLAB/SNOPT is based on), and there are a bunch of things you could try in the GSL (although I haven't seen anything quite as advanced as SNOPT in there).
I don't know which version of MATLAB you have, but I know for a fact that in R2009b (and possibly also later) fmincon has a few real weaknesses for certain types of problems. I know this very well, because I once lost a very prestigious competition (the GTOC) because of it. Our approach turned out to be exactly the same as that of the winners, except that they had access to SNOPT, which made their few-million-variable optimization problem converge in a couple of iterations, whereas fmincon could not be brought to converge at all, whatever we tried (and trust me, WE TRIED). To this day I still don't know exactly why this happens, but I verified it myself when I had access to SNOPT. One day, when I have an infinite amount of time, I'll figure it out and report it to The MathWorks. But until then... I have lost a bit of trust in fmincon :)
I tried fmincon today and found that it converges really fast, and the values it gives are also near-perfect. I am not sure how. At the start it takes a big step: I had two parameters initialised at 1 and 1, and suddenly they jump to 51 and 130. That's a big step, and I am not sure whether this is a good thing. But I want to know how fmincon converges so fast and finds the solution. Any insights?
MATLAB's fmincon function implements several algorithms, so the speed of convergence depends on the objective function and the type of the constraints. MATLAB automatically chooses the best possible algorithm; in most cases it will be the interior-point algorithm. This family of algorithms is known for fast convergence on really big problems, and most interior-point algorithms take about 20-60 iterations to converge. Bottom line: yes, it is absolutely normal for fmincon to converge really fast. If you need more details, set the Display option to 'iter-detailed' using optimset and you will see details for each iteration.
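For example (x0, A, b, lb, ub and the objective are placeholders for your own problem):

    options = optimset('Display', 'iter-detailed');   % print per-iteration details
    [x, fval] = fmincon(@myObjective, x0, A, b, [], [], lb, ub, [], options);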