Finding the k shortest paths in OPL CPLEX with Dijkstra's algorithm

I am interested in finding the k shortest paths from a source node to a destination node using Dijkstra's algorithm. I have already solved this problem using dvar boolean to declare a boolean variable for each link, which takes the value 1 if the link is selected for the path and 0 otherwise. The problem is that this formulation computes the shortest path for each flow separately, which is very time consuming. Now I would like to get rid of the flow-conservation constraints and use some kind of column-generation approach that solves the problem at once rather than computing the path for each variable. I am looking forward to hearing from you.

If you are concerned about performance, then it is probably not a good idea to attack this problem with a general integer programming solver. As you can see, for example, here, there are dedicated algorithms available to solve the k-shortest-paths problem efficiently.
If you want to stick with OPL, then you could implement those algorithms using OPLScript.
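If you later want to prototype outside OPL, the k-shortest-paths enumeration is a few lines in Python with networkx, whose shortest_simple_paths implements a variant of Yen's algorithm on top of Dijkstra (the example graph below is illustrative):

import itertools
import networkx as nx

def k_shortest_paths(G, source, target, k, weight="weight"):
    # shortest_simple_paths yields simple paths in order of increasing
    # total weight (a variant of Yen's algorithm built on Dijkstra).
    return list(itertools.islice(
        nx.shortest_simple_paths(G, source, target, weight=weight), k))

# Small weighted digraph for demonstration.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("s", "a", 1), ("s", "b", 2),
    ("a", "t", 2), ("b", "t", 1), ("a", "b", 1),
])
for path in k_shortest_paths(G, "s", "t", k=3):
    print(path)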

Related

Solving a non-convex optimization problem with a global optimization algorithm in MATLAB

I have a simple unconstrained non-convex optimization problem. Since problems of this type have multiple local minima, I am looking for a global optimization algorithm that yields a unique/global minimum. On the internet I came across global optimization algorithms like genetic algorithms, simulated annealing, etc., but for solving a simple one-variable unconstrained non-convex optimization problem, using these high-level algorithms doesn't seem to be a good idea. Could anyone recommend a simple global algorithm for solving such a problem? I would highly appreciate ideas on this.
"Since problems of these type have multiple local minima". It's not true, the real situation is the following:
Maybe you have one local minimum
Maybe you have infinite set of local miminums
Maybe you have finite number of local minimums
Maybe minimum is not attained
Maybe problem is unbounded below
Also, the big picture is that there are exact methods which truly solve such problems (numerically, and they are slow), but there is also a looser usage in which a method that does not necessarily find the minimum value of the function is still said to "solve" the problem.
In fact, M^n ~ M (they have the same cardinality) for any finite n and any infinite set M. So the fact that your problem is one-dimensional does not help: from a theoretical point of view it is as hard as a problem with 1,000,000 parameters drawn from the same set M.
If you are interested in approximately solving the problem to a known precision ε over a bounded domain, then split the domain into 1/ε regions, sample (evaluate) the function at the midpoint of each region, and select the minimum.
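A minimal Python sketch of that midpoint grid search (the test function is illustrative; the ε-accuracy guarantee additionally needs a Lipschitz-type assumption on f):

import numpy as np

def grid_minimize(f, a, b, eps):
    # Split [a, b] into regions of width at most eps, evaluate f at each
    # midpoint, and return the best sample.
    n = int(np.ceil((b - a) / eps))
    midpoints = a + (np.arange(n) + 0.5) * (b - a) / n
    values = np.array([f(x) for x in midpoints])
    i = int(np.argmin(values))
    return midpoints[i], values[i]

# Example: a non-convex function with several local minima on [-3, 3].
print(grid_minimize(lambda x: np.sin(5 * x) + 0.1 * x**2, -3.0, 3.0, 1e-4))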
The method I describe below is an exact method. Other methods, such as particle estimation, sequential convex programming, alternating directions, particle swarm, the Nelder-Mead simplex method, multistart gradient/subgradient descent, or any descent algorithm like Newton's method or coordinate descent, have no guarantees for non-convex problems, and some of them cannot even be applied if the function is non-convex.
If you are interested in truly solving the problem to some precision on the function value, then you should pay attention to the method called branch-and-bound, which provably finds the minimum. The algorithms you described do not, I think, solve the problem and find the minimum in this strong sense:
The basic idea of branch-and-bound is to partition the domain into convex sets (in your case, intervals) and iteratively improve the lower and upper bounds on the optimal value.
You need a routine to find an upper bound on the optimal (min.) value: you can do this, e.g., simply by sampling the subdomain and taking the smallest value, or by running a local optimization method from a random starting point.
But you also need a lower bound on the optimal (min.) value, derived from some principle, and this is the hard part. Possibilities include:
convex relaxation of integer variables to make them real variables;
the Lagrange dual function;
a Lipschitz constant of the function, etc.
This is a sophisticated step.
If these two values are close, we are done; otherwise, partition or refine the partition.
Collect the lower and upper bounds of the child subproblems, then take the minimum of the upper bounds and the minimum of the lower bounds over the children. If a child returns a worse lower bound than its parent, it can be tightened using the parent's bound.
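A minimal one-dimensional branch-and-bound sketch in Python, using a known Lipschitz constant for the lower bound (the constant and the test function are assumptions for illustration):

import heapq, math

def branch_and_bound(f, a, b, lipschitz, tol=1e-6):
    # On [lo, hi] with midpoint m, a Lipschitz f satisfies
    # f(x) >= f(m) - lipschitz * (hi - lo) / 2, which gives the lower bound.
    def lower_bound(lo, hi, f_mid):
        return f_mid - lipschitz * (hi - lo) / 2.0

    mid = (a + b) / 2.0
    f_mid = f(mid)
    best_x, best_f = mid, f_mid                    # incumbent = upper bound
    heap = [(lower_bound(a, b, f_mid), a, b)]      # intervals keyed by lower bound
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb > best_f - tol:                      # no interval can improve: done
            break
        m = (lo + hi) / 2.0
        for sub_lo, sub_hi in ((lo, m), (m, hi)):  # refine the partition
            sm = (sub_lo + sub_hi) / 2.0
            fsm = f(sm)
            if fsm < best_f:
                best_x, best_f = sm, fsm           # improve the upper bound
            heapq.heappush(heap, (lower_bound(sub_lo, sub_hi, fsm), sub_lo, sub_hi))
    return best_x, best_f

# |d/dx (sin(5x) + 0.1 x^2)| <= 5.6 on [-3, 3], so lipschitz=6 is valid.
print(branch_and_bound(lambda x: math.sin(5 * x) + 0.1 * x**2, -3.0, 3.0, lipschitz=6.0))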
References:
For a much better explanation, please look at:
EE364B, Lecture 18, Prof. Stephen Boyd, Stanford University. It is available on YouTube and on iTunes U. If you are new to this area, I recommend looking at Stephen P. Boyd's EE263, EE364A, and EE364B courses. You will love them.
Since this is a one-dimensional problem, things are easier.
A simple steepest-descent procedure may be used as follows.
Suppose the interval of search is a < x < b.
Start the descent from a, minimizing your function, say f(x). You recover the first minimum Xm1. You should use a fine step, not too large.
Shift this point by adding a small positive constant, Xm1 + ε. Then maximize f (or minimize -f) starting from this point. You get a maximum of f; perturb it by ε, start a minimization from there, and so on and so forth.
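A rough Python sketch of this alternating sweep (the step sizes and test function are illustrative, and no guarantee is implied for minima narrower than the step):

import math

def descend_right(f, x, b, step=1e-4):
    # Walk right in small steps while f keeps decreasing (crude 1-D descent).
    while x + step <= b and f(x + step) < f(x):
        x += step
    return x

def sweep_minima(f, a, b, eps=1e-3, step=1e-4):
    # Alternate descents on f and -f to visit successive local minima on [a, b].
    found = []
    x = a
    while x < b:
        x = descend_right(f, x, b, step)                      # slide into a local minimum
        found.append((x, f(x)))
        x = descend_right(lambda t: -f(t), x + eps, b, step)  # climb to the next maximum
        x += eps                                              # perturb past the maximum
    return min(found, key=lambda p: p[1])

print(sweep_minima(lambda x: math.sin(5 * x) + 0.1 * x**2, -3.0, 3.0))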

Alternatives to lsqlin in MATLAB

OK, so I have a script that, among other things, runs the lsqlin optimization function millions of times. To speed up this code I codegen it (which automatically creates MEX files). This is a follow-up of Linear systems of inequations.
The problem here is that lsqlin, like other optimization functions, is not transformed and needs to be called externally, which leads to a loss of efficiency.
I already found the MINQ toolbox but could not understand how to translate from lsqlin to it. I also found the QPC toolbox, which requires a licence that I am currently waiting for.
Can anyone suggest another toolbox, and how to convert from lsqlin to it?
The general idea to codegen an lsqlin script (as can be seen, an extrinsic call is made rather than a full conversion):
CODE:
function main_script()
    coder.extrinsic('lsqlin_script')  % lsqlin cannot be compiled, so it is called extrinsically
    for i = 1:10^7
        % A, b, X0 are assumed to be defined/updated here
        X = lsqlin_script(A, b, X0);
        ...
    end
end

function X = lsqlin_script(A, b, X0)
    X = lsqlin(eye(2), X0, A, b, [], [], [], [], X0, optimoptions('lsqlin', 'Display', 'Off'));
end
RUN:
codegen main_script.m
main_script_mex(INPUTS)
If you would describe your original problem, I think you could expect more answers.
A possible approach to avoid lsqlin:
Calculate the orthogonal projection of Pxyz onto every plane defined by A and b.
Check whether the projections satisfy the inequality requirements. From those which do, select the closest point to Pxyz. If no valid point is found, then the closest point lies on the intersection of planes: calculate the shortest distance from Pxyz to every intersection line and follow the steps applied for the projections onto the planes.
As you can see, this is not fully elaborated; you should work out the details if you think it may solve your problem.
For these calculations you do not need an optimization function.
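A small Python sketch of the projection idea (the data names A, b, and p are assumptions; the fallback to intersection lines and vertices from the answer is left out):

import numpy as np

def closest_point_on_planes(A, b, p):
    # Project p onto each boundary plane a_i . x = b_i of the feasible set
    # A x <= b, and return the feasible projection closest to p (or None).
    best, best_dist = None, np.inf
    for a_i, b_i in zip(A, b):
        proj = p - (a_i @ p - b_i) / (a_i @ a_i) * a_i   # orthogonal projection
        if np.all(A @ proj <= b + 1e-9):                  # keep feasible ones only
            dist = np.linalg.norm(proj - p)
            if dist < best_dist:
                best, best_dist = proj, dist
    return best  # None means: continue with intersection lines, then vertices

A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
print(closest_point_on_planes(A, b, np.array([2.0, 0.5])))  # -> [1.0, 0.5]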

Is BFS the best search algorithm?

I know about Dijkstra's algorithm, the Floyd-Warshall algorithm, and the Bellman-Ford algorithm for finding the cheapest paths between 2 vertices in graphs.
But when all the edges have the same cost, the cheapest path is the path with the minimal number of edges? Am I right? There is no reason to implement Dijkstra or Floyd-Warshall; the best algorithm is breadth-first search from the source until we reach the target? In the worst case you will have to traverse all the vertices, so the complexity is O(V)? There is no better solution? Am I right?
But there are tons of articles on the internet which talk about shortest paths in grids with obstacles, and they mention Dijkstra or A*. Even on Stack Overflow - Algorithm to find the shortest path, with obstacles
or here http://qiao.github.io/PathFinding.js/visual/
So, are all those people stupid? Or am I stupid? Why do they recommend such complicated things as Dijkstra to beginners who just want to move their enemies toward the main character in a regular grid? It is like when someone asks how to find the minimum number in a list and you recommend that he implement heapsort and then take the first element from the sorted array.
BFS (breadth-first search) is just a way of traversing a graph. Its goal is to visit all the vertices. That's all. Another way of traversing the graph is, for example, DFS.
Dijkstra's is an algorithm whose goal is to find the shortest path from a given vertex v to all other vertices.
Dijkstra's is not so complicated, even for beginners. It traverses the graph like BFS while doing something more: storing and updating information about the shortest path to the currently visited vertex.
If you want to find the shortest path between 2 vertices v and q, you can do that with a little modification of Dijkstra's: just stop when you reach the vertex q.
The last algorithm, A*, is perhaps the cleverest (and probably the most difficult). It uses a heuristic, a magic fairy, which advises you where to go. If you have a good heuristic function, this algorithm outperforms BFS and Dijkstra's. A* can be seen as an extension of Dijkstra's algorithm (the heuristic function is the extension).
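To make the relationship concrete, here is a minimal Python Dijkstra with the early stop at q described above (the adjacency-list representation is an assumption):

import heapq

def dijkstra(graph, v, q):
    # graph maps node -> list of (neighbor, edge weight) pairs.
    dist = {v: 0}
    heap = [(0, v)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == q:
            return d                  # early stop: q's distance is final when popped
        if d > dist.get(u, float("inf")):
            continue                  # stale heap entry
        for w, cost in graph[u]:
            nd = d + cost
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")

graph = {"v": [("a", 2), ("b", 5)], "a": [("b", 1), ("q", 4)],
         "b": [("q", 1)], "q": []}
print(dijkstra(graph, "v", "q"))      # 4, via v -> a -> b -> q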
But when all the edges have the same cost, the cheapest path is the
path with minimal number of edges? Am I right?
Right.
There is no reason to implement Dijkstra or Floyd-Warshall, the best
algorithm is Breadth-First search? Am I right?
When it comes to such a simple case where all edges have the same weight, you can use whatever method you like; everything will work. However, A* with a good heuristic should be faster than BFS and Dijkstra. In the simulation you mentioned, you can observe that.
So, are all those people stupid? Or am I stupid? Why do they recommend so complicated things like Dijkstra to beginners, who just want to move their enemies to the main character in a regular grid?
They have a different problem, which changes the solution. Read the problem description very carefully:
(...) The catch being any point (excluding A and B) can have an
obstacle obstructing the path, and thus must be detoured.
Enemies can have obstacles on the way to the main character, so, for example, A* is a good choice in such a case.
BFS is like a "brute force" for finding the shortest path in an unweighted graph. Dijkstra's is like a "brute force" for weighted graphs. If you were to use Dijkstra's on an unweighted graph, it would be exactly equivalent to BFS.
So, Dijkstra's can be considered an extension of BFS. It is not really a "complicated" algorithm; it's only slightly more complex than BFS.
A* is an extension of Dijkstra's that uses a heuristic to speed up the pathfinding.
"But when all the edges have the same cost, the cheapest path is the path with the minimal number of edges?" Yes.
If you had really understood the post that you linked, you would have noticed that the problem they want to solve is different from yours.

Dijkstra's algorithm vs. Uniform Cost Search (time complexity)

My question is as follows: according to different sources, Dijkstra's algorithm is nothing but a variant of uniform cost search. We know that Dijkstra's algorithm finds the shortest path between a source and all destinations (single-source). However, we can always modify Dijkstra's to find the shortest path between a START and a GOAL state (when the goal is popped from the priority queue, we simply stop); even so, in the worst case this still amounts to finding the shortest path from START to all other nodes (suppose the goal is the furthest node in the graph).
If we implement Dijkstra's algorithm using a min-priority queue (with a Fibonacci heap), the running time will be
O(V log V + E), where E is the number of edges and V the number of vertices.
Since uniform cost search is the same as Dijkstra's (with a slightly different implementation), the running time of UCS should be similar to Dijkstra's, right? However, according to my AI class, uniform cost search is exponential in the worst case, taking O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution, b is the branching factor, and ε is the minimum edge cost.
How can both algorithms be the same while they have different running times? Is the running time the same, but the way we look at it is different?
I would appreciate your help. Thank you.
Is the running time the same, but the way we look at it is different?
Yes. Uniform cost search can be used on infinitely large graphs, on which Dijkstra's original algorithm would never terminate. In such situations it is no use defining complexity in terms of V and E, as both might be infinite and the resulting big-O figure meaningless.
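A sketch of uniform cost search over an implicit graph, where states are produced on demand by a successors function rather than stored up front (the example state space is illustrative):

import heapq

def uniform_cost_search(start, is_goal, successors):
    # successors(state) yields (next_state, step_cost) pairs on demand,
    # so the graph may be infinite; only states cheaper than the goal
    # are ever expanded.
    frontier = [(0, start)]
    best = {start: 0}
    while frontier:
        cost, state = heapq.heappop(frontier)
        if is_goal(state):
            return cost
        if cost > best.get(state, float("inf")):
            continue
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return None

# Example: integers with moves +1 (cost 1) and *2 (cost 3); reach 37 from 1.
print(uniform_cost_search(1, lambda s: s == 37,
                          lambda s: [(s + 1, 1), (s * 2, 3)]))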

Optimal solution for: All possible acyclic paths in a graph problem

I am dealing with an undirected graph. I need to find all possible acyclic paths within the graph:
given G(V, E),
find all subsets of V that form acyclic paths.
I am using either Python SciPy or MATLAB, whichever would be appropriate.
Is there any clever solution for this?
I am trying to achieve it with a breadth-first search (see wiki).
I also have this toolbox in MATLAB: http://www.mathworks.com/matlabcentral/fileexchange/4266-grtheory-graph-theory-toolbox but it seems there is no straightforward solution for my problem.
PS. The problem is practically stated as the Transit Network Design Problem: find a transport network that minimizes the costs of passengers and operators (i.e., an optimal subway network for an urban area).
Thanks in advance
Rafal
I think the problem as stated in your PS may be NP-hard. If so, there are straightforward solutions only for graphs with a very limited number of nodes (N <= ~20). Other solutions will be approximate, yielding only local optima. The solution to your problem as stated in the question is simply to enumerate all permutations of node orders. Again, this becomes computationally infeasible beyond comparatively low numbers of nodes (possibly higher than 20, but not by much).
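For small graphs, the brute-force enumeration can be written directly; a minimal Python sketch that grows every simple (acyclic) path by depth-first extension (the adjacency-dict representation is an assumption, and each undirected path is reported once per direction):

def all_simple_paths(adj):
    # adj maps node -> set of neighbors in an undirected graph.
    paths = []

    def extend(path, visited):
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                paths.append(path + [nxt])
                extend(path + [nxt], visited | {nxt})

    for start in adj:
        extend([start], {start})
    return paths

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(len(all_simple_paths(adj)))  # the count grows exponentially with graph size

For a fixed pair of endpoints, networkx provides the same enumeration as nx.all_simple_paths(G, source, target).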
Do you need only the shortest paths between all pairs of vertices, or really all paths?