I am using a brute force method to optimize a solution in one of my recent projects and it is working quite well. Basically the optimization process involves searching for a global maximum in the space of all possible solutions. I was curious if there are other techniques which can be used to speed up a brute force search or other methods entirely. This is an area that I have little experience in but, as I said, I am quite curious.
Genetic algorithms are a good way to find maxima, even when it is not possible to test all solutions.
It's a widespread technique and there are implementations in most programming languages.
Simulated annealing is useful for escaping local maxima, but it is not guaranteed to find the global maximum. It basically uses random 'jumps' in an attempt to find a better location/value than the current one, and this can speed up searches.
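As a hedged illustration of the idea, here is a minimal simulated annealing sketch in Python; the objective, the neighbour move and the cooling schedule are all placeholder choices, not a definitive implementation:

```python
import math
import random

def simulated_annealing(f, x0, neighbour, temp0=1.0, cooling=0.995, steps=10_000):
    """Maximise f starting from x0. All parameter values here are illustrative."""
    x, best = x0, x0
    t = temp0
    for _ in range(steps):
        cand = neighbour(x)            # random 'jump' near the current point
        delta = f(cand) - f(x)
        # Always accept improvements; accept worse points with probability
        # exp(delta / t), which shrinks as the temperature cools down.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand
        if f(x) > f(best):
            best = x
        t *= cooling
    return best

# Example: a bumpy 1-D objective with many local maxima.
f = lambda x: math.sin(5 * x) + math.cos(3 * x) - 0.1 * x * x
result = simulated_annealing(f, 0.0, lambda x: x + random.uniform(-0.5, 0.5))
print(result, f(result))
```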
Brute force is essentially just searching every possible combination, but how is minimax any different? Minimax searches every combination as well and then returns the best score, doesn't it?
I understand that when we use alpha-beta pruning we take out branches that will have no effect on our min/max value, but this occurs after we have already performed minimax, so wouldn't this still be considered brute force? Maybe I'm misunderstanding what I've been reading so far, so any help would be great!
Thank You!
Alpha-beta pruning does not occur "after" minimax, it occurs "during" minimax. And the pruning does in fact eliminate a vast number of "combinations" -- down to the square root of the brute-force minimax node count in the best case, and roughly n^(3/4) in the average case.
Minimax is a much more efficient way to walk through the decision tree than brute force: only nodes that might still contain a better solution are visited, in contrast to brute force, where all nodes are visited without pruning of any kind.
Wiki:
Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, that eliminates many subtrees at an early stage in the search.
In minimax you cut parts of the tree depending on the available moves and their scores, meaning that you do not go over all the possibilities, and this makes it superior to a brute-force approach.
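To make "the pruning happens during the search" concrete, here is a rough sketch of minimax with alpha-beta pruning in Python; the game-specific callbacks (children, score, is_terminal) and the toy tree are invented purely for illustration:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, children, score, is_terminal):
    """Minimax with alpha-beta pruning; branches are cut while the tree is walked."""
    if depth == 0 or is_terminal(node):
        return score(node)
    if maximizing:
        value = -math.inf
        for child in children(node):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         children, score, is_terminal))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the minimizer will never let the game reach here
                break
        return value
    value = math.inf
    for child in children(node):
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                     children, score, is_terminal))
        beta = min(beta, value)
        if beta <= alpha:       # prune: the maximizer already has something better
            break
    return value

# Toy usage: a game tree given as nested lists, where leaves are scores.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
best = alphabeta(tree, 10, -math.inf, math.inf, True,
                 children=lambda n: n,
                 score=lambda n: n,
                 is_terminal=lambda n: not isinstance(n, list))
print(best)  # 6 -- and the subtree [9, 1] is cut off after its first leaf
```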
The problem is simple: I need to find the optimal strategy to implement accurate HyperLogLog unions based on Redis' representation thereof--this includes handling their sparse/dense representations if the data structure is exported for use elsewhere.
Two Strategies
There are two strategies, one of which seems vastly simpler. I've looked at the actual Redis source and I'm having a bit of trouble (not big in C, myself) figuring out whether it's better from a precision and efficiency perspective to use their built-in structures/routines or develop my own. For what it's worth, I'm willing to sacrifice space and, to some degree, accuracy (stdev +-2%) in the pursuit of efficiency with extremely large sets.
1. Inclusion Principle
By far the simplest of the two--essentially I would just use the lossless union (PFMERGE) in combination with this principle to calculate an estimate of the overlap. Tests seem to show this running reliably in many cases, although I'm having trouble getting an accurate handle on in-the-wild efficiency and accuracy (some cases can produce errors of 20-40% which is unacceptable in this use case).
Basically:
intersectionCardinality = aCardinality + bCardinality - unionCardinality
or, in the case of three sets (the pattern extends to more),
intersectionCardinality = aCardinality + bCardinality + cCardinality - abUnionCardinality - acUnionCardinality - bcUnionCardinality + abcUnionCardinality
seems to work in many cases with good accuracy, but I don't know if I trust it. While Redis has many built-in low-cardinality modifiers designed to circumvent known HLL issues, I don't know if the issue of wild inaccuracy (using inclusion/exclusion) is still present with sets of high disparity in size...
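A minimal sketch of this first strategy with the redis-py client (the key names, set sizes and local connection are illustrative assumptions, not anything from Redis itself):

```python
import redis  # assumes redis-py and a local Redis instance; keys/sizes are made up

r = redis.Redis()
r.pfadd("hll:a", *("user:%d" % i for i in range(0, 15000)))
r.pfadd("hll:b", *("user:%d" % i for i in range(10000, 25000)))

card_a = r.pfcount("hll:a")
card_b = r.pfcount("hll:b")
union = r.pfcount("hll:a", "hll:b")   # PFCOUNT over several keys returns the union cardinality

# inclusion-exclusion: |A intersect B| ~ |A| + |B| - |A union B|
print(card_a + card_b - union)        # true overlap is 5000, give or take HLL error
```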
2. Jaccard Index Intersection/MinHash
This way seems more interesting, but a part of me feels like it may computationally overlap with some of Redis' existing optimizations (i.e., I'm not implementing my own HLL algorithm from scratch).
With this approach I'd use a random sampling of bins with a MinHash algorithm (I don't think an LSH implementation is worth the trouble). This would be a separate structure, but by using MinHash to get the Jaccard index of the sets, you can then effectively multiply the union cardinality by that index for a more accurate intersection count.
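A rough sketch of what I have in mind (SIG_SIZE, the salted-SHA1 "hash functions" and the toy sets are all invented for illustration; this is a plain-Python MinHash, not Redis code):

```python
import hashlib

SIG_SIZE = 128  # more 'hash functions' -> tighter Jaccard estimate

def minhash_signature(items):
    # Simulate SIG_SIZE independent hash functions by salting a single SHA-1.
    sig = []
    for i in range(SIG_SIZE):
        salt = str(i).encode()
        sig.append(min(int(hashlib.sha1(salt + str(x).encode()).hexdigest(), 16)
                       for x in items))
    return sig

def jaccard_estimate(sig_a, sig_b):
    matches = sum(1 for a, b in zip(sig_a, sig_b) if a == b)
    return matches / len(sig_a)

a = {"user:%d" % i for i in range(0, 3000)}
b = {"user:%d" % i for i in range(2000, 5000)}
j = jaccard_estimate(minhash_signature(a), minhash_signature(b))
# intersection ~ Jaccard * union cardinality, where the union cardinality would
# come from the HLL (e.g. PFCOUNT over both keys, as in strategy 1)
print(j * len(a | b))  # true intersection here is 1000
```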
Problem is, I'm not very well versed in HLLs, and while I'd love to dig into the Google paper, I need a viable implementation in short order. Chances are I'm overlooking some basic consideration, either in Redis' existing optimizations or in the algorithm itself, that allows for computationally cheap intersection estimates with pretty lax confidence bounds.
Thus, my question:
How do I most effectively get a computationally cheap intersection estimate of N huge sets (billions of members), using Redis, if I'm willing to sacrifice space (and, to a small degree, accuracy)?
I read this paper some time back; it will probably answer most of your questions. The inclusion principle inevitably compounds error margins across a large number of sets. The MinHash approach would be the way to go.
http://tech.adroll.com/media/hllminhash.pdf
There is a third strategy to estimate the intersection size of any two sets given as HyperLogLog sketches: Maximum likelihood estimation.
For more details see the paper available at
http://oertl.github.io/hyperloglog-sketch-estimation-paper/.
I am fairly new to Mixed Integer Linear Programming and I was hoping someone could clarify a performance question for me. Basically I am performing a calculation with about 34 decision variables and my calculation time is around 5 seconds. I would ideally like to get the calculation time down into the sub-1-second range.
Currently I am using the CBC solver with MATLAB, but as I understand it this is a single-threaded solver. Most of the MILP solvers I've seen seem to pride themselves on large-problem performance with 1k+ variables, knocking compute time down from days to hours, but only a few of the expensive ones are even multi-threaded. Processor speed would seem to only go so far with such a problem, so there has to be something that can be done on the software side.
In a situation like mine, what factors play a role in the calculation time? In theory, would a solver like Gurobi be able to ramp up and make a discernible difference over CBC on such a small problem?
When solving MIPs, multi-threading usually doesn't help that much, compared to other applications. One thing more sophisticated solvers do better than CBC is presolving: it may very well happen that all decision variables can be eliminated/fixed in the root node. In general, it is not possible to predict the solving time of a MIP based on its size; the solving process is simply too complex.
To answer your question: I do suspect that you will see a significant speedup when trying another solver (Gurobi, Xpress, CPLEX, SCIP, etc.). It is not guaranteed, though, and you might also witness a slowdown.
Most commercial MILP solvers would consider most problems with only 1000 variables to be tiny. It is very common to solve problems with hundreds of thousands or millions of variables. Note, however, that there are also some really small problems that are considered very hard, or that are still open in the sense that nobody has solved them to a proven optimal solution. Basically, the time taken to solve these problems is enormously problem-dependent.
If you want to see how fast another solver can solve your problem, try submitting it as e.g. an MPS file to the NEOS solver service at http://www.neos-server.org/neos/
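For illustration only, here is a hedged sketch using Python's PuLP (not the MATLAB/CBC setup described above) of a toy stand-in model, showing the MPS export you could upload to NEOS and how cheap a solver swap is once the model exists in such a form. All variables and constraints below are placeholders:

```python
import pulp

# Toy MILP standing in for the real 34-variable model.
prob = pulp.LpProblem("toy_milp", pulp.LpMaximize)
x = [pulp.LpVariable("x%d" % i, lowBound=0, upBound=10, cat="Integer") for i in range(34)]
prob += pulp.lpSum(x)                      # objective
prob += pulp.lpSum(x[:10]) <= 40           # an arbitrary constraint

prob.writeMPS("toy_milp.mps")              # file you could submit to NEOS

# Solve with CBC (bundled with PuLP); swapping in a commercial solver is one
# line, provided a licence and the corresponding PuLP interface are installed.
prob.solve(pulp.PULP_CBC_CMD(msg=False))
# prob.solve(pulp.GUROBI_CMD())            # hypothetical alternative
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```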
Are there any faster and more memory-efficient solvers than fmincon? I'm using fmincon for a specific problem and I run out of memory for a modestly sized vector variable. I don't have any supercomputers or cloud computing options at my disposal, either. I know that any alternative solver may still run out of memory, but I'm just trying to see where the problem is.
P.S. I don't want a solution that would change the way I'm approaching the actual problem. I know convex optimization is the way to go and I have already done enough work to get this far.
P.P.S. I saw the other question regarding open-source alternatives. That's not what I'm looking for. I'm looking for more efficient solvers, in case someone has faced the same problem and shifted to a better one.
Hmmm...
Without further information, I'd guess that fmincon runs out of memory because it needs the Hessian, which, with a decision variable of size 10^4, is a 10^4-by-10^4 matrix.
It also takes a lot of time to compute the values of the Hessian, because fmincon normally uses finite differences for that if you don't specify derivatives explicitly.
There are a couple of things you can do to speed things up here.
If you know beforehand that there will be a lot of zeros in your Hessian, you can pass the Hessian's sparsity pattern via HessPattern. This saves a lot of memory and computation time.
If it is fairly easy to come up with explicit formulae for the Hessian of your objective function, create a function that computes the Hessian and pass it on to fmincon via the HessFcn option in optimset.
The same holds for the gradients: the GradConstr option (for your non-linear constraint functions) and/or GradObj (for your objective function) apply here.
There are probably a few options I forgot here that could also help you. Just go through all the options in the Optimization Toolbox's optimset and see if they could help you.
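As an aside, the general idea behind GradObj/HessFcn (supplying analytic derivatives so the solver never has to finite-difference a huge Hessian) looks like this in Python/SciPy; it is shown only as a hedged parallel, since your problem lives in MATLAB:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.full(50, 1.5)
# jac/hess supply analytic derivatives, so the solver never builds a
# finite-difference Hessian -- the same idea as GradObj/HessFcn in fmincon.
res = minimize(rosen, x0, method="trust-constr", jac=rosen_der, hess=rosen_hess)
print(res.fun)
```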
If all this doesn't help, you'll really have to switch optimizers. Given that fmincon is the pride and joy of MATLAB's optimization toolbox, there really isn't anything much better readily available, and you'll have to search elsewhere.
TOMLAB is a very good commercial solution for MATLAB. If you don't mind going to C or C++, there's SNOPT (which is what TOMLAB/SNOPT is based on). And there are a bunch of things you could try in the GSL (although I haven't seen anything quite as advanced as SNOPT in there...).
I don't know which version of MATLAB you have, but I know for a fact that in R2009b (and possibly also later), fmincon has a few real weaknesses for certain types of problems. I know this very well, because I once lost a very prestigious competition (the GTOC) because of it. Our approach turned out to be exactly the same as that of the winners, except that they had access to SNOPT, which made their few-million-variable optimization problem converge in a couple of iterations, whereas fmincon could not be brought to converge at all, whatever we tried (and trust me, WE TRIED). To this day I still don't know exactly why this happens, but I verified it myself when I had access to SNOPT. Once, when I have an infinite amount of time, I'll figure this out and report it to the MathWorks. But until then... I've lost a bit of trust in fmincon :)
I find genetic algorithm simulations like this incredibly entrancing and I think it'd be fun to make my own. But the problem with most simulations like this is that they're usually just hill-climbing toward a predictable ideal result that could have been crafted with human guidance pretty easily. An interesting simulation would have countless different solutions that are significantly different from each other and surprising to the human observing them.
So how would I go about trying to create something like that? Is it even reasonable to expect to achieve what I'm describing? Are there any "standard" simulations (in the sense that the game of life is sort of standardized) that I could draw inspiration from?
Depends on what you mean by interesting; that's a pretty subjective term. I once programmed a graph analyzer for fun. The program would first let you plot any f(x) of your choice and set the bounds. The second step was creating a tree holding the most common binary operators (+-*/) in a randomly generated function of x. The program would create a pool of such random functions, test how well they fit the original curve in question, then crossbreed and mutate some of the functions in the pool.
The results were quite cool. A totally weird function would often be a pretty good approximation to the query function. Perhaps not the most useful program, but fun nonetheless.
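A rough, self-contained sketch of that idea (expression trees over + - * / and x, evolved to fit a target curve; all parameters and helper names are invented for illustration and the operators are deliberately crude):

```python
import random
import operator

OPS = [(operator.add, '+'), (operator.sub, '-'),
       (operator.mul, '*'), (operator.truediv, '/')]
XS = [x / 10.0 for x in range(-20, 21)]        # sample points on the curve
TARGET = lambda x: x * x - 3 * x + 2           # the curve we try to approximate

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return ('x',) if random.random() < 0.5 else ('const', random.uniform(-5, 5))
    return ('op', random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree[0] == 'x':
        return x
    if tree[0] == 'const':
        return tree[1]
    _, (fn, _sym), left, right = tree
    try:
        return fn(evaluate(left, x), evaluate(right, x))
    except ZeroDivisionError:
        return float('inf')

def fitness(tree):                              # lower is better (squared error)
    err = 0.0
    for x in XS:
        v = evaluate(tree, x)
        if v != v or abs(v) == float('inf'):    # reject NaN / overflow
            return float('inf')
        diff = v - TARGET(x)
        err += diff * diff
    return err

def crossover(a, b):                            # crude subtree swap at the root
    if a[0] == 'op' and b[0] == 'op':
        return ('op', a[1], a[2], b[3])
    return a

def mutate(tree):
    return random_tree() if random.random() < 0.3 else tree

pool = [random_tree() for _ in range(200)]
for generation in range(50):
    pool.sort(key=fitness)                      # best approximations first
    survivors = pool[:50]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(150)]
    pool = survivors + children
print(min(fitness(t) for t in pool))            # squared error of the best function found
```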
Well, for starters, that genetic algorithm is not doing hill-climbing, otherwise it would get stuck at the first local maximum/minimum.
Also, how can you say it doesn't produce surprising results? Look at this vehicle here, for example, produced around generation 7 in one of the runs I tried. It's a very old model of a bicycle. How can you say that's not a surprising result, when it took humans millennia to come up with the same design?
To get interesting emergent behavior (that is, unpredictable yet useful behavior), it is probably necessary to give the genetic algorithm an interesting task to learn and not just a simple optimisation problem.
For instance, the Car Builder that you referred to (although quite nice in itself) just uses a fixed road as the fitness function. This makes it easy for the genetic algorithm to find an optimal solution; however, if the road changed slightly, that optimal solution might not work anymore, because the fitness of a solution may have grown dependent on trivially small details in the landscape and not be robust to changes in it. In reality, cars did not evolve on one fixed test road either, but on many different roads and terrains. Using an ever-changing road as the (dynamic) fitness function, generated by random factors but within certain realistic boundaries for slopes etc., would be a more realistic and useful fitness function.
I think EvoLisa is a GA that produces interesting results. In one sense, the output is predictable, as you are trying to match a known image. On the other hand, the details of the output are pretty cool.