So far I am using the routing package of ortools with the arc cost evaluator:
costCallbackIndex = model.registerTransitCallback(this::costCallback);
model.setArcCostEvaluatorOfAllVehicles(costCallbackIndex);
But I have realized that I am more interested in maximizing the number of pickups and deliveries (they are optional, using model.addDisjunction with a drop penalty) than in the overall sum of meters travelled. Since I am dealing with a lot of pickups and deliveries, I don't want to put the extra strain of minimizing arc costs on the solver.
One option is to define an arc cost callback that always returns 0, but this might confuse the solver. I could also not call the method setArcCostEvaluatorOfAllVehicles at all and hence suggest to the solver that I am not interested in arc costs.
What is the recommended approach?
Why don't you just refrain from using setArcCostEvaluatorOfAllVehicles?
Related
I have a problem where I want to optimize the global cost of the vehicles' routes. For that I register a cost dimension returning the cost associated with each arc. But at the same time I have restrictions in other dimensions such as Time & Distance. How can I achieve this? Maybe I only have to use AddDimension for each callback, but I don't know how to set the objective function in RoutingModel.
It's all in your question: should a restriction (aka constraint) be part of the objective cost, or is it just, well, a constraint the solver needs to fulfill?
Adding a new dimension using AddDimension() is a good start IMHO.
note: You may also add the span or global span of any dimension to the objective cost by setting the corresponding coefficient (zero by default, so the dimension won't participate in the objective cost and will only add "constraints" to the solution).
ref: https://github.com/google/or-tools/blob/b37d9c786b69128f3505f15beca09e89bf078a89/ortools/constraint_solver/routing.h#L2482-L2496
At the moment, my problem has four metrics. Each of these measures something entirely different (each has different units, a different range, etc.) and each is weighted externally. I am using Drools for scoring.
I have only one score level (SimpleLongScore) and I have to find a way to appropriately combine the individual scores of these metrics into one long value.
The most significant problem at the moment is that the range of values for the metrics can be wildly different.
So if, for example, after a move the score of a metric with a small possible range improves by, say, 10%, that could be completely dwarfed by an alternative move which improves the score of the metric with a larger range by only 1%, because OptaPlanner only considers the actual score value rather than the possible range of values and how changes affect them proportionally (to my knowledge).
So, is there a way to handle this cleanly which is already part of OptaPlanner that I cannot find?
Is the only feasible solution to implement Pareto scoring? Because that seems like a hack-y nightmare.
So far I have code/math to compute the best-possible and worst-possible scores for a metric, which I access from within the Drools rules, and then I can compute where in that range a move puts us, but this also feels quite hack-y and will cause issues with incremental scoring if we want to scale non-linearly within that range.
I keep coming back to thinking I should just bite the bullet and implement Pareto scoring.
Thanks!
Take a look at @ConstraintConfiguration and @ConstraintWeight in the docs.
Also take a look at the chapter "explaining the score", which can tell you exactly which constraint had which score impact on the best solution found.
If, however, you need Pareto optimization, i.e. you need multiple best solutions that don't dominate each other, then know that OptaPlanner doesn't support that yet, but I know of 2 cases that implemented it in OptaPlanner by hacking BestSolutionRecaller.
That being said, 99% of the cases that think of Pareto optimization are 100% happy with @ConstraintWeight instead, because users don't want multiple best solutions (except during simulations), they just want one in production.
Setup and Goal:
I want to solve a vehicle routing problem with time windows and optional stops. I am looking for a solution that serves as many stops as possible. There is no need to minimize arc costs; I am only interested in the number of serviced stops. For each stop I have defined a cost for not servicing it. I define the cost of an arc (i,j) to be the transit time in seconds, because my understanding is that this is useful in problems with time windows.
Problem:
The arc costs interfere with the penalties for dropping stops. The solver might drop a stop because getting there is too expensive. But I don't care about the cost of getting to a stop.
Possible Solutions:
1. Multiply the stop drop penalties by a large number so that it is much more expensive to drop a stop than to travel between stops.
2. Remove the arc cost from the objective function but keep the arc cost available for the first solution strategy.
Question:
Is option 2 possible in or-tools? What's the best practice for guiding the first solution strategy without accumulating costs in doing so?
I am using MATLAB's gamultiobj optimization.
As I have 6 to 12 objective functions, gamultiobj handles the problem inefficiently: it always terminates because the number of generations is exceeded, not because the changes in the objective functions have become small.
I looked at the gamultiobj options documentation, but it didn't help:
http://www.mathworks.com/help/gads/examples/multiobjective-genetic-algorithm-options.html
1. How can I increase the capability of the gamultiobj function to handle this number of objective functions?
2. Is there a better way at all (using MATLAB)?
Well, this is my update:
1. I increased the number of generations and the population size, and assigned a proper initial population using the common ga options, and it worked better (I didn't know that those options also work with gamultiobj; it isn't stated explicitly anywhere in the documentation).
2. After running and inspecting the results, I realized that gamultiobj can handle many objective functions efficiently provided that they are independent. When the objective functions are strongly dependent (which, unfortunately, is the case for my problem), the gamultiobj solver's efficiency decreases dramatically.
Thanks!
You should increase the number of generations and possibly play with options such as crossover, mutation, and the constraint bounds within which you're going to get the solution.
The bounds have to be specified correctly, and an initial population is also pretty much needed to steer the solver toward the set of parameters that you want to optimize.
My question is specific to iPhone, iPod, and iPad, since I am assuming that the architecture makes a big difference. I'm hoping there is either a specification somewhere (for the various chips perhaps), or a reliable way to measure T for each specific instruction. I know I can use any number of tools to measure aggregate processor time used, memory used, etc. I want to quantify at a lower level.
So, I'm able to figure out how many times I go through the main part of the algorithm. For example, I iterate n * (n-1) times in a naive implementation, and between n (best case) and n + n * (n-1) (worst case) in another. I can also make a reasonable count of the total number of instructions (+ - = % * /, and logic statements), and I can compare those counts, but that's assuming the weight of each operation is the same. Also, I don't have any idea how to weight the actual time value of a logic statement (if, else, for, while) vs a mathematical operator... is "if" as much work as "+" each time I use it? I would love to know where to find this information.
So, for clarity, my goal is to discover how much processor time I am demanding of the CPU (or GPU or any U) so that I can design an optimal algorithm around processor time. Can someone give me an idea of where to start for iOS hardware?
Edit: This link to ClockServices.c and SIMD stuff in the developer portal might be a good start for people interested in this. A few more cups of coffee tonight and I might get through it ;)
On a modern platform, processor time isn't the only limiting factor. Often, memory access is.
Still, processor time:
Your basic approach to estimating the processor load is OK, though, and sensible: make a rough estimate of the cost based on your knowledge of typical platforms.
In this article, Table 1 shows the times for typical primitive operations in .NET. While your platform may vary, the relative time is usually very similar. Maybe you can find - or even make - one for iStuff.
(I haven't come across one as thorough for other platforms, except processor / instruction set manuals, but those deal with assembly instructions.)
memory locality:
A cache miss can cost you hundreds of cycles, a disk access a thousand times as much. So controlling your memory access patterns (i.e. reducing the working set, restructuring and accessing data in a cache-friendly way) is an important part of evaluating an algorithm.
Xcode has Instruments to measure the performance of each function/operation; you can simply use them.