Can we get the top 3 best-scoring solutions using constraints in OptaPlanner?
For example, I have a use case where I need to show the user the top 3 results with the highest scores, so that the user can select the solution that fits their needs.
That sounds like Pareto optimization (see the docs). It is not yet officially supported in OptaPlanner.
But users have hacked it before, by implementing their own BestSolutionRecaller (the class that holds the best solution(s)) and replacing the DefaultSolver's bestSolutionRecaller with it. This implies "taking the red pill" and "following the rabbit hole down to wonderland". Good luck :)
Important note: Pareto optimization goes much further than just remembering the n best solutions. It's about remembering the n best solutions that aren't dominated by any of the other best solutions. So it entails changing the score comparison (and breaking the transitivity of score comparison).
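If all you really need is to show the user the three best-scoring solutions encountered during solving (without the dominance logic above), a lighter-weight workaround is the public solver event API. A minimal sketch, assuming a recent OptaPlanner version (where BestSolutionChangedEvent exposes getNewBestScore()), a HardSoftScore score type, and a placeholder planning solution class MySolution plus a solverConfig.xml of your own:

import java.util.Comparator;
import java.util.TreeMap;

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;

public class TopThreeExample {

    public static void main(String[] args) {
        // MySolution, solverConfig.xml and loadProblem() are placeholders for your own
        // planning solution class, solver configuration and problem loading code.
        SolverFactory<MySolution> solverFactory =
                SolverFactory.createFromXmlResource("solverConfig.xml");
        Solver<MySolution> solver = solverFactory.buildSolver();

        // Keeps the 3 best-scoring solutions seen so far, highest score first.
        TreeMap<HardSoftScore, MySolution> top3 = new TreeMap<>(Comparator.reverseOrder());

        solver.addEventListener(event -> {
            // Fired for every new best solution; remember it and trim back to 3 entries.
            top3.put((HardSoftScore) event.getNewBestScore(), event.getNewBestSolution());
            while (top3.size() > 3) {
                top3.pollLastEntry();
            }
        });

        solver.solve(loadProblem());
        // top3 now holds up to 3 of the best-scoring solutions found during solving.
    }

    private static MySolution loadProblem() {
        // Placeholder: build or load your planning problem here.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}

Note that the three solutions you get this way are just the last few improvements of the same search, so they tend to be very similar; true Pareto optimization, as described above, is a different beast.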
Related
I have estimated a complex hierarchical model with many random effects, but I don't really know what the best approach is to checking for convergence. I have complex longitudinal data from a few hundred individuals and estimate quite a few parameters for every individual. Because of that, I have way too many traceplots to inspect visually. Or should I really spend a day going through all the traceplots? What would be a better way to check for convergence?

Do I have to calculate Gelman and Rubin's Rhat for every parameter on the person level? And when can I conclude that the model converged? When absolutely all of the thousands of parameters reached convergence? Is it even sensible to expect that? Or is there something like "overall convergence"? And what does it mean when some person-level parameters did not converge? Does it make sense to use autorun.jags from the R2jags package with such a model, or will it just run forever?

I know these are a lot of questions, but I just don't know how to approach this.
The measure I am using for convergence is the potential scale reduction factor (psrf)*, computed with the gelman.diag function from the R package coda.
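For reference, the psrf for a single parameter is essentially Gelman and Rubin's $\widehat{R}$: with $m$ chains of length $n$, chain means $\bar\theta_j$, grand mean $\bar\theta$, and within-chain variances $s_j^2$,

$$W = \frac{1}{m}\sum_{j=1}^{m} s_j^2, \qquad B = \frac{n}{m-1}\sum_{j=1}^{m}\left(\bar\theta_j - \bar\theta\right)^2, \qquad \widehat{R} = \sqrt{\frac{\frac{n-1}{n}\,W + \frac{1}{n}\,B}{W}}.$$

Values close to 1 (a common rule of thumb is below 1.1) indicate that between-chain and within-chain variability agree; gelman.diag additionally applies a small sampling-variability correction on top of this.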
But nevertheless, I also quickly inspect all the traceplots visually, even though I have tens or hundreds of them too. It can be really fast if you save them to PNG files and then flip through them quickly using e.g. IrfanView (let me know if you need me to expand on this).
The reason you should inspect the traceplots is well described by an example from Marc Kery (author of great Bayesian books): see "Never blindly trust Rhat for convergence in a Bayesian analysis"; the email contains a self-explanatory image illustrating the problem.
That example talks about the Rhat statistic while I use the psrf, but it's pretty likely that the psrf suffers from this too... so it is better to check the chains.
*) Gelman, A. & Rubin, D. B. Inference from iterative simulation using multiple sequences. Stat. Sci. 7, 457–472 (1992).
At the moment, my problem has four metrics. Each of these measures something entirely different (each has different units, a different range, etc.) and each is weighted externally. I am using Drools for scoring.
I have only one score level (SimpleLongScore) and I have to find a way to appropriately combine the individual scores of these metrics into one long value.
The most significant problem at the moment is that the range of values for the metrics can be wildly different.
So if, for example, a move improves the score of a metric with a small possible range by, say, 10%, that improvement could be completely dwarfed by an alternative move which improves a metric with a larger range by only 1%, because OptaPlanner only considers the actual score value rather than the possible range of values and how changes affect them proportionally (to my knowledge).
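(To put made-up numbers on it: if metric A ranges from 0 to 100 and metric B from 0 to 1,000,000, a 10% improvement of A moves the combined score by at most 10, while a 1% improvement of B can move it by 10,000, so B dominates even though A improved more in relative terms.)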
So, is there a way to handle this cleanly which is already part of OptaPlanner that I cannot find?
Is the only feasible solution to implement Pareto scoring? Because that seems like a hack-y nightmare.
So far I have code/math to compute the best-possible and worst-possible scores for a metric, which I access from within the Drools rules, and then I can compute where in that range a move puts us. But this also feels quite hack-y, and it will cause issues with incremental scoring if we want to scale non-linearly within that range.
I keep coming back to thinking I should just bite the bullet and implement Pareto scoring.
Thanks!
Take a look at #ConstraintConfiguration and #ConstraintWeight in the docs.
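A minimal sketch of what that looks like, assuming a recent OptaPlanner version; the class name, constraint names and default weights below are made up, and the weights can be tuned (or loaded from your external weighting) so that one "unit" of impact means roughly the same thing for every metric:

import org.optaplanner.core.api.domain.constraintweight.ConstraintConfiguration;
import org.optaplanner.core.api.domain.constraintweight.ConstraintWeight;
import org.optaplanner.core.api.score.buildin.simplelong.SimpleLongScore;

// One weight per metric, adjustable per dataset without touching the rules.
@ConstraintConfiguration
public class MetricWeightConfiguration {

    // A metric with a small range gets a large weight, and vice versa,
    // so that relative improvements become comparable.
    @ConstraintWeight("Metric A")
    private SimpleLongScore metricAWeight = SimpleLongScore.of(10_000L);

    @ConstraintWeight("Metric B")
    private SimpleLongScore metricBWeight = SimpleLongScore.of(1L);

    // getters and setters ...
}

Your planning solution then holds an instance of this class in a field annotated with @ConstraintConfigurationProvider, and (if I remember the Drools integration correctly) each score rule whose name matches a constraint weight can simply call scoreHolder.penalize(kcontext) or scoreHolder.reward(kcontext), and the matching weight is applied automatically.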
Also take a look at the chapter "explaining the score", which can tell you exactly which constraint had which score impact on the best solution found.
If, however, you really need Pareto optimization, i.e. multiple best solutions that don't dominate each other, then know that OptaPlanner doesn't support that yet, although I know of 2 cases that implemented it in OptaPlanner by hacking BestSolutionRecaller.
That being said, 99% of the cases that think of Pareto optimization are 100% happy with #ConstraintWeight instead, because users don't want multiple best solutions (except during simulations); they just want one in production.
In MiniZinc, is it possible to sample the solution space? Let's say my model has many solutions; running --all-solutions will initially return very similar solutions.
1) Is there a way to sample the solution space? Perhaps with BFS? The purpose is follow-up analysis of the solutions.
2) Are there any methods to estimate the size of the search space in CP?
My problem is a staff rostering problem.
Regards,
H
It is not possible to choose BFS in MiniZinc, but there are search annotations. With the search annotations you can choose the order in which the variables are branched on. You can also choose which value is tried first. Unfortunately, MiniZinc does not support random variable selection.
In your case I would branch with dom_w_deg variable selection and a random value choice, but any other variable selection can work; try them.
solve :: seq_search([int_search(some_array, dom_w_deg, indomain_random, complete)]) satisfy;
Do note that not all solvers support the usage of search annotations.
Another alternative is to add constraints that exclude solutions that are too similar to the ones you have already found.
You can always compute an upper bound on the search space by taking the product of the domain sizes of all the variables. This does not take any constraints into account, so the real search space can be much smaller.
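For example (made-up figures), a rostering model with 20 shift variables that each have 5 candidate employees has at most 5^20 ≈ 9.5 × 10^13 assignments before any constraints are applied.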
Another option is to visualize the search tree with Gist (Gecode's visualizer) or similar tools.
(search tree screenshot omitted; source: marco at www.imada.sdu.dk)
You can expand and retract parts of the search tree and see which variables have been branched on.
Another OpenCV question:
Without me having to implement two versions, can anyone enlighten me as to what the differences are between cvPOSIT and cvFindExtrinsicCameraParams2, and maybe the advantages of each?
The inputs and outputs appear to be the same.
From my experience, cvFindExtrinsicCameraParams2() works for coplanar points (so it is probably an implementation of http://dl.acm.org/citation.cfm?id=228149), while cvPOSIT() doesn't. But I am not 100% sure.
It appears that cvPOSIT() only exists in OpenCV's old C API and not in the new C++ API. Conversely, cvFindExtrinsicCameraParams2() is in both. While not a perfect indicator, my best guess is that they both implement the POSIT algorithm with minor modifications and the former exists only for legacy reasons.
Beyond that, your guess is as good as mine. If you want a definitive answer, I suggest asking on the OpenCV mailing list.
I've used cvPOSIT already. It only works on 3D non-coplanar points on the object, because it is based on the algorithm from DeMenthon & Davis (1995), "Model-Based Object Pose in 25 Lines of Code". So you will have to find a workaround for coplanar features.
cvFindExtrinsicCameraParams2(), on the other hand, also works on planar features: it solves for the transformation using cvFindHomography and then refines the result with Levenberg-Marquardt optimization. For non-coplanar points, the initial estimate is instead obtained by DLT (Direct Linear Transformation), so it is not the "...25 Lines of Code" algorithm anymore.
I'm not sure about their relative performance, i.e. which one is faster. As far as I know, the "...25 Lines of Code" algorithm is very fast and still suitable for real-time vision.
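For completeness: in the newer C++/Python/Java API both use cases are handled by solvePnP() (its default iterative mode is, as far as I know, the successor of cvFindExtrinsicCameraParams2, with the homography/DLT initialisation and Levenberg-Marquardt refinement described above), while POSIT only ever existed in the legacy C API. A minimal sketch using the Java bindings, with made-up intrinsics and point correspondences, assuming the OpenCV native library is installed:

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.MatOfPoint3f;
import org.opencv.core.Point;
import org.opencv.core.Point3;

public class PoseFromPoints {

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Made-up correspondences: 4 coplanar model points and where they appear in the image.
        MatOfPoint3f objectPoints = new MatOfPoint3f(
                new Point3(0, 0, 0), new Point3(1, 0, 0),
                new Point3(1, 1, 0), new Point3(0, 1, 0));
        MatOfPoint2f imagePoints = new MatOfPoint2f(
                new Point(320, 240), new Point(400, 240),
                new Point(400, 320), new Point(320, 320));

        // Made-up pinhole intrinsics (fx, fy, cx, cy) and zero distortion.
        Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
        cameraMatrix.put(0, 0, 800);
        cameraMatrix.put(1, 1, 800);
        cameraMatrix.put(0, 2, 320);
        cameraMatrix.put(1, 2, 240);
        MatOfDouble distCoeffs = new MatOfDouble(0, 0, 0, 0);

        // rvec/tvec receive the extrinsics: rotation as a Rodrigues vector plus a translation.
        Mat rvec = new Mat();
        Mat tvec = new Mat();
        Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

        System.out.println("rvec = " + rvec.dump());
        System.out.println("tvec = " + tvec.dump());
    }
}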
I'm currently writing an optimization algorithm in MATLAB, at which I completely suck, so I could really use your help. I'm really struggling to find a good way of representing a graph (well, it's really more like a tree with several roots) that would look more or less like this:
(tree diagram: http://img100.imageshack.us/img100/3232/graphe.png)
Basically 11/12/13 are our roots (stage 0), the 2x nodes are stage 1, the 3x nodes stage 2 and the 4x nodes stage 3. As you can see, nodes from stage X are only connected to some of the nodes from stage (X+1) (so they don't have to be connected to all of them).
Important: each node has to hold several values (at least 3-4); one will be its number and there are at least two other variables (which will be used to optimize the decisions).
I do have a simple representation using matrices, but it's really hard to maintain, so I was wondering: is there a better way to do it?
Second question: once I'm done with that representation, I need to calculate how good each route (from a root to the end) is, e.g. I need to compare whether 11-21-31-41 is the best or whether 11-21-31-42 is better. To do that I will be using the variables that each node holds. But the values have to be calculated recursively: say we start at 11, but to calculate how good 11-21-31-41 is we first need to go to 41, do some calculations, go to 31, do some calculations, go to 21, do some calculations, and only then can we calculate 11 using all the previous results. Same with 11-21-31-42 (we start with 42, then 31 -> 21 -> 11). I need to check all the possible routes that way. And here's the question: how do I do it? Maybe a BFS/DFS? But I'm not quite sure how to store all the results.
Those are some lengthy questions, but I hope I'm not asking you to do my homework (I have all the algorithms; it's just that I'm not really good at MATLAB, and my teacher wouldn't let me do it in Java).
Granted, it may not be the most efficient solution, but if you have access to Matlab 2008+, you can define a node class to represent your graph.
The Matlab documentation has a nice example on linked lists, which you can use as a template.
Basically, a node would have a property 'linksTo', which points to the indices of the nodes it links to, and a method to calculate the cost of each of the links (possibly with some additional property that describes each link). Then all you need is a function that moves down each link and brings the cost(s) with it when it moves back up.
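To make that last sentence concrete, here is a rough sketch of the recursion (written in Java purely for illustration; the node fields, the localCost function and the "best of the children" aggregation are all made up and would translate into methods on your MATLAB node class):

import java.util.ArrayList;
import java.util.List;

// Minimal node type: an id, a couple of per-node values used by the cost
// calculation, and links to the nodes of the next stage.
class Node {
    int id;
    double valueA;
    double valueB;
    List<Node> linksTo = new ArrayList<>();

    Node(int id, double valueA, double valueB) {
        this.id = id;
        this.valueA = valueA;
        this.valueB = valueB;
    }
}

public class RouteEvaluator {

    // Best (lowest) total cost of any route from 'node' down to a leaf.
    // Children are evaluated first, so each node's own calculation happens
    // on the way back up, exactly as described in the question.
    static double bestRouteCost(Node node) {
        if (node.linksTo.isEmpty()) {
            return localCost(node, 0.0);
        }
        double best = Double.POSITIVE_INFINITY;
        for (Node next : node.linksTo) {
            best = Math.min(best, bestRouteCost(next));
        }
        return localCost(node, best);
    }

    // Placeholder for whatever per-node calculation combines the node's own
    // values with the result coming back from the stage below.
    static double localCost(Node node, double downstreamCost) {
        return node.valueA * node.valueB + downstreamCost;
    }

    public static void main(String[] args) {
        Node n41 = new Node(41, 1, 2);
        Node n42 = new Node(42, 2, 2);
        Node n31 = new Node(31, 1, 1);
        n31.linksTo.add(n41);
        n31.linksTo.add(n42);
        Node n21 = new Node(21, 3, 1);
        n21.linksTo.add(n31);
        Node n11 = new Node(11, 1, 5);
        n11.linksTo.add(n21);
        System.out.println("Best route cost from 11: " + bestRouteCost(n11));
    }
}

If you need every route's value rather than just the best one, the same traversal works; instead of taking the minimum you would collect one result per leaf (e.g. in a list keyed by the route).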