Amortized analysis - Dijkstra's algorithm
The initialization step takes O(n) operations to set the n distance estimates d[x]: all to infinity, except the source, which is set to 0. Then, within the while loop, we make O(n) calls to find the node x ∈ F with minimum d[x] (FindMin), since nodes are never re-inserted into F. Additionally, there will be O(n) calls to delete that minimum node from F (DeleteMin). Finally, each edge is relaxed at most once, each relaxation in constant time, for a total of O(m) work. During these relaxations, there will be O(m) potential decreases of distance estimates d[u] (DecreaseKey).
Thus, the total runtime of Dijkstra’s algorithm depends on how quickly our data structure can support each call to FindMin, DeleteMin, and DecreaseKey. That is, the runtime is on the order of
n · (FindMin(n) + DeleteMin(n)) + m · DecreaseKey(n).
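To make this accounting concrete, here is a minimal Python sketch of the loop being analyzed, written against an abstract priority queue holding F. The names (dijkstra, SimplePQ, insert, find_min, delete_min, decrease_key, adj) are choices made for illustration and are not part of the notes; SimplePQ is only a stand-in so the sketch runs, and any structure supporting these four operations could be swapped in.

import math

class SimplePQ:
    """Stand-in priority queue (a plain dict) used only to make the sketch
    runnable; any structure supporting these four operations would do."""
    def __init__(self):
        self.key = {}
    def insert(self, v, k):
        self.key[v] = k
    def empty(self):
        return not self.key
    def find_min(self):
        return min(self.key, key=self.key.get)
    def delete_min(self):
        del self.key[self.find_min()]
    def decrease_key(self, v, k):
        self.key[v] = k

def dijkstra(n, adj, source, pq=None):
    """adj[u] is a list of (v, weight) pairs; weights are nonnegative."""
    F = pq if pq is not None else SimplePQ()
    d = [math.inf] * n                   # O(n) initialization
    d[source] = 0
    for v in range(n):                   # n Inserts
        F.insert(v, d[v])
    while not F.empty():
        u = F.find_min()                 # at most n FindMin calls
        F.delete_min()                   # at most n DeleteMin calls
        for v, w in adj[u]:              # each edge relaxed at most once: O(m) total
            if d[u] + w < d[v]:          # with nonnegative weights, v is still in F here
                d[v] = d[u] + w
                F.decrease_key(v, d[v])  # at most m DecreaseKey calls
    return d

The comments mark exactly where the n FindMin/DeleteMin calls and the m DecreaseKey calls in the bound above come from.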
Consider implementing Dijkstra using the following data structures.
• Store F as an array where each slot corresponds to a node and where F(j) = d[j] if j ∈ F, else NIL.
DecreaseKey runs in O(1) since the key is directly indexed by node.
FindMin and DeleteMin run in O(n) since the array is not sorted and must be scanned.
The total runtime is O(n · n + m) = O(n^2), since m ≤ n^2. (An array-backed sketch of these operations appears at the end of this section.)
• Store F as a Red-Black tree.
FindMin, DeleteMin, and DecreaseKey each run in O(log n) time; DecreaseKey is implemented by deleting the node and reinserting it with the new key.
The total runtime is O((m + n) log n), which improves over the array implementation except when the graph is dense, i.e., m = Θ(n^2), in which case the O(n^2) array implementation is faster.
• Store F as a Fibonacci Heap. The Fibonacci Heap is a more complex data structure that supports Insert in O(1), FindMin in O(1), DecreaseKey in O(1), and DeleteMin in O(log n) “amortized” time over a sequence of calls to each operation. The meaning of amortized time in this case is as follows: starting from an empty Fibonacci Heap, any sequence of operations that includes a Inserts, b FindMins, c DecreaseKeys, and d DeleteMins takes O(a + b + c + d log n) total time.
In Dijkstra's algorithm there are n Inserts (during initialization), at most n FindMins, n DeleteMins, and m DecreaseKeys, so the amortized bounds give a total of O(n + n + m + n log n) = O(m + n log n). Thus, the outcome is that Dijkstra's algorithm can be implemented to run in O(m + n log n) time.
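For concreteness, the first (array-based) representation from the list above can back the same four operations: F(j) = d[j] if j ∈ F, else NIL becomes a Python list with None playing the role of NIL, so DecreaseKey is an O(1) indexed write while FindMin and DeleteMin each scan the array in O(n). This is a sketch under the same naming assumptions as the loop sketch earlier, not a reference implementation.

class ArrayPQ:
    """F stored as an array: F[j] = d[j] if j is in F, else None (NIL)."""
    def __init__(self, n):
        self.F = [None] * n
        self.size = 0
    def insert(self, v, k):               # O(1)
        self.F[v] = k
        self.size += 1
    def empty(self):
        return self.size == 0
    def find_min(self):                   # O(n): the unsorted array is scanned
        best = None
        for j, k in enumerate(self.F):
            if k is not None and (best is None or k < self.F[best]):
                best = j
        return best
    def delete_min(self):                 # O(n): dominated by the scan in find_min
        self.F[self.find_min()] = None
        self.size -= 1
    def decrease_key(self, v, k):         # O(1): direct indexed update
        self.F[v] = k

For example, dijkstra(3, [[(1, 5)], [(2, 2)], []], 0, pq=ArrayPQ(3)) returns [0, 5, 7]; over the whole run the n FindMin/DeleteMin scans cost O(n^2) while the DecreaseKeys cost O(m), matching the O(n^2) bound in the first bullet.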