Posts

Showing posts from April, 2020

Naive algorithm for pattern searching

Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char txt[]) that prints all occurrences of pat[] in txt[]. You may assume that n > m.

Examples:

    Input:  txt[] = "THIS IS A TEST TEXT", pat[] = "TEST"
    Output: Pattern found at index 10

    Input:  txt[] = "AABAACAADAABAABA", pat[] = "AABA"
    Output: Pattern found at index 0
            Pattern found at index 9
            Pattern found at index 12

Pattern searching is an important problem in computer science. When we search for a string in a notepad/word file, a browser, or a database, pattern searching algorithms are used to show the results.

What is the best case? The best case occurs when the first character of the pattern is not present in the text at all, e.g. txt[] = "AABCCAADDEE", pat[] = "FAA". The number of comparisons in the best case is O(n). What is the worst case? The worst case occurs when all characters of the text and the pattern are the same, or when only the last character of the pattern differs; the naive algorithm then performs O(m*(n-m+1)) comparisons.
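The sliding-and-comparing procedure described above can be sketched in Python. The function name `naive_search` is my own choice, and it returns the match indices rather than printing them:

```python
def naive_search(pat, txt):
    """Report every index where pat occurs in txt (naive sliding)."""
    n, m = len(txt), len(pat)
    hits = []
    for i in range(n - m + 1):            # try every shift of the pattern
        j = 0
        while j < m and txt[i + j] == pat[j]:
            j += 1                        # compare characters one by one
        if j == m:                        # all m characters matched
            hits.append(i)
    return hits

print(naive_search("AABA", "AABAACAADAABAABA"))   # prints [0, 9, 12]
```

Note how the best case described above falls out of the code: if `txt[i] != pat[0]` at every shift, the inner loop exits immediately and only n - m + 1 comparisons are made.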

Rabin-Karp algorithm

Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char txt[]) that prints all occurrences of pat[] in txt[]. You may assume that n > m.

Examples:

    Input:  txt[] = "THIS IS A TEST TEXT", pat[] = "TEST"
    Output: Pattern found at index 10

    Input:  txt[] = "AABAACAADAABAABA", pat[] = "AABA"
    Output: Pattern found at index 0
            Pattern found at index 9
            Pattern found at index 12

The Naive String Matching algorithm slides the pattern over the text one position at a time. After each slide, it checks the characters at the current shift one by one, and if all of them match, it prints the match. Like the Naive algorithm, the Rabin-Karp algorithm also slides the pattern one by one. But unlike the Naive algorithm, Rabin-Karp compares the hash value of the pattern with the hash value of the current substring of the text, and only if the hash values match does it compare the characters individually to confirm the occurrence.
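A Python sketch of the rolling-hash idea above, using the common parameter choices d = 256 (alphabet size) and a small prime modulus q; the function name and defaults are mine, not from the post:

```python
def rabin_karp(pat, txt, d=256, q=101):
    """Find all occurrences of pat in txt using a rolling hash.

    d is the alphabet size, q a prime modulus for the hash."""
    n, m = len(txt), len(pat)
    if m == 0 or m > n:
        return []
    h = pow(d, m - 1, q)                  # weight of the window's leading char
    p = t = 0
    for i in range(m):                    # hash of pat and of the first window
        p = (d * p + ord(pat[i])) % q
        t = (d * t + ord(txt[i])) % q
    hits = []
    for i in range(n - m + 1):
        if p == t and txt[i:i + m] == pat:    # verify on a hash match
            hits.append(i)
        if i < n - m:                         # roll the hash one position
            t = (d * (t - ord(txt[i]) * h) + ord(txt[i + m])) % q
    return hits
```

The character-by-character verification on a hash match is what guards against spurious hits (two different strings hashing to the same value modulo q).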

Multithreaded Merge Sort

Merge-Sort: The Serial Version

We have Merge-Sort(A, p, r), which sorts the elements in A[p..r]:

    1. if (p < r) then
    2.     q = (p + r) / 2
    3.     Merge-Sort(A, p, q)
    4.     Merge-Sort(A, q + 1, r)
    5.     Merge(A, p, q, r)

Merge-Sort: The Parallel Version

We have Merge-Sort(A, p, r), which sorts the elements in A[p..r]:

    1. if (p < r) then
    2.     q = (p + r) / 2
    3.     spawn Merge-Sort(A, p, q)
    4.     Merge-Sort(A, q + 1, r)   // Not necessary to spawn this; the current thread handles it
    5.     sync
    6.     Merge(A, p, q, r)

Multithreaded Matrix Multiplication

Pseudo-code of Matrix-Multiply

Matrix-Multiply(C, A, B, n)   // store the result of A × B in C, with n a power of 2 for simplicity

    1.  if (n == 1)
    2.      C[1, 1] = A[1, 1] * B[1, 1]
    3.  else
    4.      allocate a temporary matrix T[1..n, 1..n]
    5.      partition A, B, C, T into n/2 × n/2 submatrices
    6.      spawn Matrix-Multiply(C11, A11, B11, n/2)
    7.      spawn Matrix-Multiply(C12, A11, B12, n/2)
    8.      spawn Matrix-Multiply(C21, A21, B11, n/2)
    9.      spawn Matrix-Multiply(C22, A21, B12, n/2)
    10.     spawn Matrix-Multiply(T11, A12, B21, n/2)
    11.     spawn Matrix-Multiply(T12, A12, B22, n/2)
    12.     spawn Matrix-Multiply(T21, A22, B21, n/2)
    13.     spawn Matrix-Multiply(T22, A22, B22, n/2)
    14.     sync
    15.     Matrix-Add(C, T, n)

Matrix Add Code

Matrix-Add(C, T, n)   // add matrix T into C in place to produce C = C + T

    1.  if (n == 1)
    2.      C[1, 1] = C[1, 1] + T[1, 1]
    3.  else
    4.      partition C and T into n/2 × n/2 submatrices
    5.      spawn Matrix-Add(C11, T11, n/2)
    6.      spawn Matrix-Add(C12, T12, n/2)
    7.      spawn Matrix-Add(C21, T21, n/2)
    8.      spawn Matrix-Add(C22, T22, n/2)
    9.      sync
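The pseudocode above can be sketched in Python with lists of lists, spawning the eight recursive products as threads. As with the merge sort sketch, CPython threads will not run these products in true parallel, and spawning at every recursion level creates many threads, so this is a demo-scale illustration of the structure, not a practical implementation:

```python
import threading

def add_into(C, T):
    """C = C + T, elementwise (the Matrix-Add step, done iteratively here)."""
    for i in range(len(C)):
        for j in range(len(C)):
            C[i][j] += T[i][j]

def split(M):
    """Partition M into four n/2 x n/2 quadrants M11, M12, M21, M22."""
    h = len(M) // 2
    return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
            [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

def join(M11, M12, M21, M22):
    """Reassemble four quadrants into one matrix."""
    return [a + b for a, b in zip(M11, M12)] + \
           [a + b for a, b in zip(M21, M22)]

def multiply(A, B):
    """Divide-and-conquer product of two n x n matrices, n a power of 2."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    out = {}
    def task(name, X, Y):
        out[name] = multiply(X, Y)
    jobs = [("C11", A11, B11), ("C12", A11, B12),
            ("C21", A21, B11), ("C22", A21, B12),
            ("T11", A12, B21), ("T12", A12, B22),
            ("T21", A22, B21), ("T22", A22, B22)]
    threads = [threading.Thread(target=task, args=j) for j in jobs]
    for t in threads:        # "spawn" all eight subproducts
        t.start()
    for t in threads:        # "sync"
        t.join()
    for c, tname in (("C11", "T11"), ("C12", "T12"),
                     ("C21", "T21"), ("C22", "T22")):
        add_into(out[c], out[tname])     # Matrix-Add(C, T)
    return join(out["C11"], out["C12"], out["C21"], out["C22"])
```

For example, `multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])` yields `[[19, 22], [43, 50]]`.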

Tractable and Intractable Problems

So far, almost all of the problems that we have studied have had complexities that are polynomial, i.e. whose running time T(n) has been O(n^k) for some fixed value of k. Typically k has been small, 3 or less.

We will let P denote the class of all problems whose solution can be computed in polynomial time, i.e. O(n^k) for some fixed k, whether it is 3, 100, or something else. We consider all such problems efficiently solvable, or tractable. Notice that this is a very relaxed definition of tractability, but our goal in this lecture and the next few is to understand which problems are intractable, a notion that we formalize as not being solvable in polynomial time. Notice how not being in P is certainly a strong way of being intractable.

We will focus on a class of problems, called the NP-complete problems: a class of very diverse problems that share the following property: we only know how to solve them in time much larger than polynomial, namely exponential time.

Amortized analysis - Splay Tree

A splay tree is an efficient implementation of binary search trees that takes advantage of locality in the incoming lookup requests. Locality in this context is a tendency to look up the same element multiple times: a stream of requests exhibits no locality if every element is equally likely to be accessed at each point, while for many applications elements tend to be accessed repeatedly.

A good example of an application with this property is a network router. Routers must decide on which outgoing wire to route incoming packets, based on the IP address in the packets. The router needs a big table (a map) that can be used to look up an IP address and find out which outgoing connection to use. If an IP address has been used once, it is likely to be used again, perhaps many times. Splay trees are designed to provide good performance in this situation.

In addition, splay trees offer amortized O(lg n) performance: a sequence of M operations on an n-node splay tree takes O(M lg n) time in total.
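A minimal sketch of the core splay operation in Python, using the standard recursive zig-zig/zig-zag rotations; the class and function names are mine. Every lookup splays the accessed key to the root, which is what makes repeated accesses to the same key cheap:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def right_rotate(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def left_rotate(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring the node with `key` (or the last node on its search path) to the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                       # zig-zig
            root.left.left = splay(root.left.left, key)
            root = right_rotate(root)
        elif key > root.left.key:                     # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right:
                root.left = left_rotate(root.left)
        return root if root.left is None else right_rotate(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                      # zag-zag
            root.right.right = splay(root.right.right, key)
            root = left_rotate(root)
        elif key < root.right.key:                    # zag-zig
            root.right.left = splay(root.right.left, key)
            if root.right.left:
                root.right = right_rotate(root.right)
        return root if root.right is None else left_rotate(root)

def insert(root, key):
    """Insert key by splaying its search path to the root, then splitting."""
    if root is None:
        return Node(key)
    root = splay(root, key)
    if root.key == key:
        return root                  # already present
    node = Node(key)
    if key < root.key:
        node.right, node.left, root.left = root, root.left, None
    else:
        node.left, node.right, root.right = root, root.right, None
    return node

def inorder(node, out):
    if node:
        inorder(node.left, out)
        out.append(node.key)
        inorder(node.right, out)
```

After `root = splay(root, k)` for a key k that exists in the tree, k is at the root, so an immediately repeated lookup costs O(1); the amortized O(lg n) bound covers arbitrary access sequences.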

Amortized analysis - Dijkstra's algorithm

The initialization step takes O(n) operations to set n distance estimates to infinity and one to 0. Then, within the while loop, we make O(n) calls to find the node with minimum d[x] in F (this is called FindMin), since nodes are never re-inserted into F. Additionally, there will be O(n) calls to delete the node with minimum d[x] from F (DeleteMin). Finally, each edge will be relaxed at most once, each in constant time, for a total of O(m) work. During these relaxations, there will be O(m) potential decreases of distance estimates d[u] (DecreaseKey).

Thus, the total runtime of Dijkstra's algorithm depends on how quickly our data structure can support each call to FindMin, DeleteMin, and DecreaseKey. That is, the runtime is on the order of

    n · (FindMin(n) + DeleteMin(n)) + m · DecreaseKey(n)

Consider implementing Dijkstra using the following data structures.

• Store F as an array where each slot corresponds to a node and where F(j) = d[j] if j ∈ F, else NIL. DecreaseKey runs in O(1), since slot F(j) can be updated directly by index, while FindMin and DeleteMin each require an O(n) scan of the array.
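The array-based variant can be sketched in Python as follows; the function name and the adjacency-dict format are my own choices. A linear scan plays the role of FindMin/DeleteMin, and the O(1) DecreaseKey is just the in-place update of d[v], giving the classic O(n^2 + m) total:

```python
def dijkstra_array(adj, s):
    """Shortest distances from s. adj maps u -> list of (v, weight) pairs."""
    INF = float("inf")
    d = {u: INF for u in adj}    # distance estimates: all infinity...
    d[s] = 0                     # ...except the source, set to 0
    F = set(adj)                 # unfinished nodes (the "array" F)
    while F:
        u = min(F, key=lambda x: d[x])   # FindMin: O(n) linear scan
        F.remove(u)                      # DeleteMin: drop u from F
        for v, w in adj[u]:
            if d[u] + w < d[v]:          # relax edge (u, v)
                d[v] = d[u] + w          # DecreaseKey: O(1) in-place update
    return d
```

For example, on the graph A→B (1), A→C (5), B→C (2), the shortest distance to C is 3 via B.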

Binomial Heap

The main application of a Binary Heap is to implement a priority queue. A Binomial Heap is an extension of the Binary Heap that provides a faster union (merge) operation, together with the other operations provided by a Binary Heap. A Binomial Heap is a collection of Binomial Trees.

What is a Binomial Tree? A Binomial Tree of order 0 has a single node. A Binomial Tree of order k can be constructed by taking two Binomial Trees of order k-1 and making one the leftmost child of the other.

A Binomial Tree of order k has the following properties:
a) It has exactly 2^k nodes.
b) It has depth k.
c) There are exactly C(k, i) nodes at depth i, for i = 0, 1, ..., k.
d) The root has degree k, and the children of the root are themselves Binomial Trees of orders k-1, k-2, ..., 0 from left to right.

Binomial Heap: a Binomial Heap is a set of Binomial Trees where each Binomial Tree follows the min-heap property, and there can be at most one Binomial Tree of any degree. For example, a Binomial Heap with 12 nodes is a collection of two Binomial Trees, of orders 2 and 3 (since 12 = 4 + 8).
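Properties (a)-(d) can be checked empirically with a few lines of Python. Here a tree node is represented simply as the list of its child subtrees (a representation I chose for brevity; real binomial heaps also store keys and sibling links):

```python
def binomial_tree(k):
    """Build a binomial tree of order k; a node is the list of its children."""
    if k == 0:
        return []                        # order 0: a single node, no children
    a, b = binomial_tree(k - 1), binomial_tree(k - 1)
    return [b] + a                       # b becomes the leftmost child of a

def count_nodes(t):
    return 1 + sum(count_nodes(c) for c in t)

def depth_counts(t, d=0, counts=None):
    """Map each depth to the number of nodes at that depth."""
    counts = counts if counts is not None else {}
    counts[d] = counts.get(d, 0) + 1
    for c in t:
        depth_counts(c, d + 1, counts)
    return counts
```

For order k = 4 this construction yields 2^4 = 16 nodes, a root of degree 4, and 1, 4, 6, 4, 1 nodes at depths 0 through 4, matching the binomial coefficients C(4, i).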