Dijkstra's algorithm when all edges have the same weight

If all edges have the same weight in a given graph, will Dijkstra's algorithm still find the shortest path between 2 vertices?
Thanks!

Yes, Dijkstra's algorithm can find the shortest path even when all edges have the same weight. Dijkstra has time complexity O((V+E) log V). However, you should use BFS instead to do the same thing, because BFS has time complexity O(V+E), so BFS is asymptotically faster than Dijkstra.

Yes, it would, but you might want to take a look at breadth-first search, which solves the case you are referring to.
To find the path, run BFS from the source so every node is flagged with its distance, then make a recursive function that starts in the destination node with flagged distance n and moves to one of the neighbour nodes with flagged distance n-1.
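Here is a minimal sketch of that idea in Python, assuming the graph is given as an adjacency-list dict and is undirected (or that reverse edges are available); the names are my own.

```python
from collections import deque

def bfs_distances(graph, source):
    """Flag every reachable vertex with its BFS distance from source.
    `graph` is assumed to map a vertex to a list of neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:          # first visit = shortest distance in an unweighted graph
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def reconstruct_path(graph, dist, source, target):
    """Walk back from the target: from a vertex flagged n, move to any
    neighbour flagged n-1 until the source (flagged 0) is reached."""
    if target not in dist:
        return None                    # target unreachable
    path = [target]
    current = target
    while current != source:
        current = next(v for v in graph[current] if dist.get(v) == dist[current] - 1)
        path.append(current)
    return list(reversed(path))

# Example usage on a small undirected graph
graph = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
dist = bfs_distances(graph, 'A')
print(reconstruct_path(graph, dist, 'A', 'D'))   # e.g. ['A', 'B', 'D']
```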


Find the shortest path between all the vertices in a graph without giving start or end points

I know about the travelling salesman problem, but is there any other algorithm/problem which better fits my needs/description? I need to describe my problem with the help of such a mathematical description.
I have a set of nodes with a known start and end point. So I just need to calculate the shortest way to visit all the three points between those two. Dijkstra and similar algorithms try to find the shortest path between two points, so here they probably won't visit all points in between. Or is there an algorithm which finds the shortest way and visits all points between two points?
You can achieve this using ant colony optimization algorithms.
The complexity of the general case of your problem is at least as high as for the Travelling Salesman problem. Just imagine the case where your two endpoints are basically in the same position, then your problem becomes equivalent to the Travelling Salesman.
If you never expect more than five points in your graph though, do you really need to bother with fancy algorithms? You could just do an exhaustive search for the best solution. Since the only decision is the order in which you visit the three points in the middle, you will only have to test 3! = 6 different paths. Even if I misunderstand you and you want the overall shortest open path that visits all nodes, that would still only be 5! = 120 different paths to test (60 if distances are the same in both directions).
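A sketch of that exhaustive search, assuming the pairwise shortest-path distances between the five points are already known (for example from a few Dijkstra runs); the names and the distance table are my own illustration.

```python
from itertools import permutations

def shortest_route(start, end, middle, dist):
    """Try every ordering of the middle points (3! = 6 for three points)
    and return the cheapest start -> ... -> end route.
    `dist[a][b]` is assumed to hold the shortest-path distance between a and b."""
    best_order, best_cost = None, float('inf')
    for order in permutations(middle):
        stops = [start, *order, end]
        cost = sum(dist[u][v] for u, v in zip(stops, stops[1:]))
        if cost < best_cost:
            best_order, best_cost = order, cost
    return [start, *best_order, end], best_cost

# Hypothetical symmetric distance table between five points
dist = {
    'S': {'A': 2, 'B': 9, 'C': 3, 'T': 7},
    'A': {'S': 2, 'B': 4, 'C': 6, 'T': 8},
    'B': {'S': 9, 'A': 4, 'C': 5, 'T': 1},
    'C': {'S': 3, 'A': 6, 'B': 5, 'T': 9},
    'T': {'S': 7, 'A': 8, 'B': 1, 'C': 9},
}
print(shortest_route('S', 'T', ['A', 'B', 'C'], dist))   # (['S', 'A', 'C', 'B', 'T'], 14)
```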

Is BFS the best search algorithm?

I know about Dijkstra's algorithm, the Floyd-Warshall algorithm and the Bellman-Ford algorithm for finding the cheapest paths between 2 vertices in graphs.
But when all the edges have the same cost, the cheapest path is the path with the minimal number of edges? Am I right? There is no reason to implement Dijkstra or Floyd-Warshall; the best algorithm is breadth-first search from the source, until we reach the target? In the worst case, you will have to traverse all the vertices, so the complexity is O(V)? There is no better solution? Am I right?
But there are tons of articles on the internet which talk about shortest paths in grids with obstacles, and they mention Dijkstra or A*. Even on Stack Overflow - Algorithm to find the shortest path, with obstacles
or here http://qiao.github.io/PathFinding.js/visual/
So, are all those people stupid? Or am I stupid? Why do they recommend such complicated things like Dijkstra to beginners, who just want to move their enemies towards the main character in a regular grid? It is like when someone asks how to find the minimum number in a list, and you recommend implementing heap sort and then taking the first element from the sorted array.
BFS (breadth-first search) is just a way of traversing a graph. Its goal is to visit all the vertices. That's all. Another way of traversing the graph is, for example, DFS.
Dijkstra is an algorithm whose goal is to find the shortest path from a given vertex v to all other vertices.
Dijkstra is not so complicated, even for beginners. It traverses the graph like BFS does, but also does something more: it stores and updates information about the shortest path found so far to the currently visited vertex.
If you want to find the shortest path between 2 vertices v and q, you can do that with a little modification of Dijkstra: just stop when you reach the vertex q.
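A minimal sketch of Dijkstra with that early stop, assuming a weighted adjacency-list dict with non-negative weights; the names are mine.

```python
import heapq

def dijkstra(graph, source, target):
    """Dijkstra's algorithm that stops as soon as `target` is settled.
    `graph[u]` is assumed to be a list of (neighbour, weight) pairs with
    non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        if u == target:               # early stop: the target's distance is final
            return d
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float('inf')               # target not reachable

graph = {'v': [('a', 2), ('b', 5)], 'a': [('b', 1), ('q', 7)], 'b': [('q', 2)]}
print(dijkstra(graph, 'v', 'q'))      # 5  (v -> a -> b -> q)
```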
The last algorithm, A*, is probably the cleverest (and probably the most difficult). It uses a heuristic, a magic fairy, which advises you where to go. If you have a good heuristic function, this algorithm outperforms BFS and Dijkstra. A* can be seen as an extension of Dijkstra's algorithm (the heuristic function is the extension).
But when all the edges have the same cost, the cheapest path is the path with the minimal number of edges? Am I right?
Right.
There is no reason to implement Dijkstra or Floyd-Warshall; the best algorithm is breadth-first search? Am I right?
When it comes to such a simple case where all edges have the same weight, you can use whatever method you like; everything will work. However, A* with a good heuristic should be faster than BFS and Dijkstra. You can observe that in the simulation you mentioned.
So, are all those people stupid? Or am I stupid? Why do they recommend such complicated things like Dijkstra to beginners, who just want to move their enemies towards the main character in a regular grid?
They have a different problem, which changes the solution. Read the problem description very carefully:
(...) The catch being any point (excluding A and B) can have an obstacle obstructing the path, and thus must be detoured.
Enemies can have obstacles in the way to the main character. So, for example, A* is a good choice in such a case.
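A sketch of A* on a uniform-cost grid with obstacles, using the Manhattan distance as the heuristic; the grid layout and names are my own illustration, not taken from the linked post.

```python
import heapq

def astar_grid(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Manhattan distance is an admissible heuristic when every step costs 1."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    heap = [(h(start), 0, start)]          # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while heap:
        f, g, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return g                       # number of steps on the shortest obstacle-aware path
        if g > best_g.get((r, c), float('inf')):
            continue                       # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float('inf')):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None                            # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(astar_grid(grid, (0, 0), (2, 0)))    # 8: Manhattan distance is 2, but the wall forces a detour
```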
BFS is like a "brute-force" to find the shortest path in an unweighted graph. Dijkstra's is like a "brute-force" for weighted graphs. If you were to use Dijkstra's on an unweighted graph, it would be exactly equivalent to BFS.
So, Dijkstra's can be considered an extension of BFS. It is not really a "complicated" algorithm; it's only slightly more complex than BFS.
A* is an extension to Dijkstra's that uses a heuristic to speed up the pathfinding.
But when all the edges have the same cost, the cheapest path is the path with the minimal number of edges? Yes.
If you really understood the post that you linked, you would have noticed that the problem they want to solve is different than yours.

Dijkstra's algorithm vs Uniform Cost Search (time complexity)

My question is as follows: According to different sources, Dijkstra's algorithm is nothing but a variant of Uniform Cost Search. We know that Dijkstra's algorithm finds the shortest path between a source and all destinations (single-source). However, we can always modify Dijkstra to find the shortest path between a START and a GOAL state (when the goal is popped from the priority queue, we simply stop); but even then, the worst-case scenario is still finding the shortest path from START to all other nodes (suppose the goal is the furthest node in the graph).
If we implement Dijkstra's algorithm using a min-priority heap, the running time will be O(V log V + E), where E is the number of edges and V the number of vertices.
Since Uniform Cost Search is the same as Dijkstra (with a slightly different implementation), the running time of UCS should be similar to Dijkstra's, right? However, according to my AI class, Uniform Cost Search is exponential in the worst case: it takes O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution, ε is the smallest step cost, and b is the branching factor.
How can both algorithms be the same while they have different running times? Is the running time the same, but the way we look at it is different?
I would appreciate your help :):) Thank you
Is the running time the same, but the way we look at it is different?
Yes. Uniform cost search can be used on infinitely large graphs, on which Dijkstra's original algorithm would never terminate. In such situations, it's no use defining complexity in terms of V and E as both might be infinite and the resulting big-O figure meaningless.
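A sketch of why the complexity is stated in terms of b, C*, and ε: UCS only needs a successor function, so the graph can be generated lazily and may be unbounded, and V and E never have to be enumerated. The toy state space below is a made-up example.

```python
import heapq

def uniform_cost_search(start, is_goal, successors):
    """UCS over an implicitly defined graph: `successors(state)` yields
    (next_state, step_cost) pairs, so the graph is expanded on demand."""
    frontier = [(0, start)]
    best = {start: 0}
    while frontier:
        cost, state = heapq.heappop(frontier)
        if cost > best.get(state, float('inf')):
            continue                          # stale queue entry
        if is_goal(state):
            return cost                       # first goal popped is optimal
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best.get(nxt, float('inf')):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return None

# Toy unbounded state space: integers, where each n branches to n+1 (cost 1) and 2n (cost 3).
successors = lambda n: [(n + 1, 1), (2 * n, 3)]
print(uniform_cost_search(1, lambda n: n == 10, successors))   # 7, e.g. 1->2->3->4->5->10
```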

Finding the "tightest" subset in Euclidean space

I am given a set of points x_1, x_2, ..., x_n ∈ R^d. I wish to find a subset of k points such that the sum of the pairwise distances between these k points is minimal. Naively this is an O(n choose k) problem, but I am looking for a faster algorithm.
I can think of two alternative equivalent formulations:
The minimal edge-weight clique problem: think of the points as a complete graph whose edge weights are the distances, and find the minimum-weight k-clique. This is equivalent to the maximum edge-weight clique problem, which is known to be NP-complete. However, I have the benefit of knowing that my graph is embedded in R^d and that all the weights are positive, so perhaps that might help?
The minimal unconstrained sub-matrix problem: I am given the symmetric distance matrix, and I want to find a k×k principal submatrix (minor) with minimal sum.
I'd appreciate any help in this.
The most obvious optimization doesn't really require any different formulation.
Just greedily find a near-optimal candidate first. Try to refine it in linear time by swapping members. Then do an exhaustive search, but stop whenever the new candidates are worse than the greedy candidate, to prune the search space.
E.g.
1. Compute the mean
2. Order the objects by squared distance from the mean
3. Test all n-k+1 intervals of length k in this order, choose the best
4. For any non-chosen object, try to swap it with one of the chosen objects, if it improves the score
Now you should have a reasonably good candidate for pruning.
Then do an exhaustive search, and stop whenever it is worse than this candidate.
Note: steps 1-3 are an inspiration taken from fast convex hull algorithms.
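A sketch of steps 1-4 to obtain the pruning candidate, using the sum of pairwise Euclidean distances as the score; the function names and the random test data are my own.

```python
import numpy as np

def pairwise_distance_sum(points):
    """Sum of Euclidean distances over all pairs in `points` (shape (k, d))."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(-1)).sum() / 2.0

def greedy_candidate(X, k):
    """Steps 1-4: order by squared distance from the mean, scan the
    n-k+1 windows of length k, then try single-element swaps."""
    order = np.argsort(((X - X.mean(axis=0)) ** 2).sum(axis=1))   # steps 1-2
    n = len(X)
    best_idx, best_score = None, np.inf
    for start in range(n - k + 1):                                # step 3: all windows of length k
        idx = order[start:start + k]
        score = pairwise_distance_sum(X[idx])
        if score < best_score:
            best_idx, best_score = idx.copy(), score
    chosen = set(best_idx.tolist())
    for out in range(n):                                          # step 4: greedy swaps
        if out in chosen:
            continue
        for i in range(k):
            trial = best_idx.copy()
            trial[i] = out
            score = pairwise_distance_sum(X[trial])
            if score < best_score:
                best_idx, best_score, chosen = trial, score, set(trial.tolist())
                break
    return best_idx, best_score

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
idx, score = greedy_candidate(X, 5)
print(sorted(idx.tolist()), round(score, 3))   # candidate subset and its score, used to prune the exhaustive search
```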

Why can't we apply Dijkstra's algorithm for a graph with negative weights?

Why can't we apply Dijkstra's algorithm for a graph with negative weights?
What does it mean to find the least expensive path from A to B, if every time you travel from C to D you get paid?
If there is a negative weight between two nodes, the "shortest path" is to loop backwards and forwards between those two nodes forever. The more hops, the "shorter" the path gets.
This has nothing to do with the algorithm, and everything to do with the impossibility of answering such a question.
Edit:
The above claim assumes bidirectional links. If there are no cycles with an overall negative weight, you do not have a way to loop around forever and keep getting paid.
In such a case, Dijkstra's algorithm may still fail:
Consider two paths:
an optimal path that racks up a cost of 100, before crossing the final edge which has a -25 weight, giving a total of 75, and
a suboptimal path that has no negatively-weighted edges with a total cost of 90.
Dijkstra's algorithm will investigate the suboptimal path first and will declare itself finished when it finds it. It will never follow up on the subpath that is already worse than the first solution found.
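To make the failure concrete, here is a hypothetical three-edge graph matching the numbers above (costs 100, -25 and 90), run through an early-stopping Dijkstra; the vertex names and code are my own illustration.

```python
import heapq

def dijkstra_early_stop(graph, source, target):
    """Textbook Dijkstra that settles each vertex once and stops at the target.
    Correct only for non-negative edge weights."""
    heap, settled = [(0, source)], set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        if u == target:
            return d
        for v, w in graph.get(u, []):
            if v not in settled:
                heapq.heappush(heap, (d + w, v))
    return float('inf')

# Optimal path S -> A -> T costs 100 + (-25) = 75, but the direct edge S -> T costs 90.
graph = {'S': [('A', 100), ('T', 90)], 'A': [('T', -25)], 'T': []}
print(dijkstra_early_stop(graph, 'S', 'T'))   # prints 90, missing the cheaper 75 path
```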
I will give you a counterexample. Consider the following graph:
http://img853.imageshack.us/img853/7318/3fk8.png
Suppose you begin in vertex A and you want the shortest path to D. Dijkstra's algorithm would do the following steps:
Mark A as visited and add vertices B and C to the queue
Fetch from the queue the vertex with the minimal distance. It is B
Mark B as visited and add vertex D to the queue
Fetch from the queue. Now it is vertex D
Mark D as visited
Dijkstra says the shortest path from A to D has length 2, but it is obviously not true.
Imagine you had a directed graph with a directed cycle in it, and the total "distance" around that cycle was negative. If on your way from the start to the end vertex you could pass through that directed cycle, you could simply go around and around the directed cycle an arbitrary number of times.
And that means you could make your path across the graph have an infinitely negative distance (or effectively so).
However, as long as there are no directed cycles around your graph, you could get away with using Dijkstra's Algorithm without anything exploding on you.
All that being said, if you have a graph with negative weights, you could use the Bellman-Ford algorithm. Because of the generality of this algorithm, however, it is a bit slower: the Bellman-Ford algorithm takes O(V·E) time, whereas Dijkstra's takes O(E + V log V).
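A sketch of Bellman-Ford on an edge list, run on the same hypothetical negative-edge example as above; it also detects negative cycles with one extra pass. The names are mine.

```python
def bellman_ford(edges, vertices, source):
    """Bellman-Ford: relax every edge V-1 times, O(V*E) total.
    `edges` is a list of (u, v, weight) triples."""
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement means a negative-weight cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

edges = [('S', 'A', 100), ('A', 'T', -25), ('S', 'T', 90)]
print(bellman_ford(edges, ['S', 'A', 'T'], 'S'))   # {'S': 0, 'A': 100, 'T': 75} - finds the 75 that Dijkstra missed
```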