Data Structures and Algorithms Made Easy - Narasimha Karumanchi. All rights reserved.
That means any problem that can be solved recursively can also be solved iteratively. By the time you complete reading the entire book, you will encounter many recursion problems. The Towers of Hanoi is a mathematical puzzle. It consists of three rods (or pegs or towers) and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks stacked on one rod in order of size, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, satisfying the following rules: only one disk may be moved at a time; each move takes the top disk from one rod and places it on another rod; and no disk may be placed on top of a smaller disk. Once we solve Towers of Hanoi with three disks, we can solve it with any number of disks with the same algorithm. Space Complexity: O(n), for the recursive stack space. Backtracking is an improvement of the brute force approach. It systematically searches for a solution to a problem among all available options. In backtracking, we start with one possible option out of many and try to solve the problem; if we are able to solve it with the selected move, we print the solution; otherwise we backtrack, select some other option, and try again.
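A recursive sketch of the Towers of Hanoi solution in C (the function and peg names here are illustrative, not taken from the book; the move count is added to make the exponential cost visible):

```c
#include <stdio.h>

/* Move n disks from peg 'from' to peg 'to' using 'aux' as a spare.
   Returns the total number of single-disk moves, which is 2^n - 1. */
long TowersOfHanoi(int n, char from, char to, char aux) {
    if (n == 0)
        return 0;
    long moves = TowersOfHanoi(n - 1, from, aux, to);  /* clear the way */
    printf("Move disk %d from %c to %c\n", n, from, to);
    moves += 1 + TowersOfHanoi(n - 1, aux, to, from);  /* re-stack on top */
    return moves;
}
```

Calling `TowersOfHanoi(3, 'A', 'C', 'B')` prints the seven moves for three disks; the returned count grows as 2^n - 1, which is why the recursion takes O(2^n) time.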
If none of the options works out, we will claim that there is no solution for the problem. Backtracking is a form of recursion. The usual scenario is that you are faced with a number of options, and you must choose one of these. This procedure is repeated over and over until you reach a final state. The tree is a way of representing some initial starting position (the root node) and a final goal state (one of the leaves).
Backtracking allows us to deal with situations in which a raw brute-force approach would explode into an impossible number of options to consider. Backtracking is a sort of refined brute force.
At each node, we eliminate choices that are obviously not possible and proceed to recursively check only those that have potential. If a choice fails, we backtrack to the last point where another alternative was available; in general, that will be the most recent decision point. Eventually, more and more of these decision points will have been fully explored, and we will have to backtrack further and further.
If we backtrack all the way to our initial state and have explored all alternatives from there, we can conclude the particular problem is unsolvable. In such a case, we will have done all the work of the exhaustive recursion and will know that there is no viable solution. Problem: generate all strings of n bits. Assume A[0..n-1] is an array of size n, and let T(n) be the running time of binary(n). Assume the function printf takes time O(1). Using the Subtraction and Conquer Master theorem on T(n) = 2T(n - 1) + O(1), we get T(n) = O(2^n). Since there are 2^n strings to print, this means the algorithm for generating bit-strings is optimal. For the k-ary generalization, let us assume we keep the current k-ary string in an array A[0..n-1] and call the function k-string(n, k). Let T(n) be the running time of k-string(n).
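A sketch of the bit-string generator analyzed above (the array name A follows the text; the fixed MAXN bound and the `generated` counter are added here only for illustration):

```c
#include <stdio.h>

#define MAXN 16
static int A[MAXN];       /* current bit-string */
static long generated;    /* how many strings have been emitted */

/* Fix position n-1 to 0 and then to 1, recursing on the remaining
   positions; each completed string of 'len' bits is printed once. */
void binary(int n, int len) {
    if (n < 1) {
        for (int i = 0; i < len; i++)
            printf("%d", A[i]);
        printf("\n");
        generated++;
        return;
    }
    A[n - 1] = 0;
    binary(n - 1, len);
    A[n - 1] = 1;
    binary(n - 1, len);
}

/* Convenience wrapper: generate all strings of n bits, return count. */
long GenerateBinary(int n) {
    generated = 0;
    binary(n, n);
    return generated;
}
```

The two recursive calls per position give the recurrence T(n) = 2T(n - 1) + O(1) discussed above.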
For more problems, refer to the String Algorithms chapter. Problem: given a matrix whose cells are each 1 or 0, the filled cells that are connected form a region. Two cells are said to be connected if they are adjacent to each other horizontally, vertically or diagonally. There may be several regions in the matrix.
How do you find the largest region (in terms of the number of cells) in the matrix? The simplest idea is a depth-first search from each filled cell, marking cells as they are visited. Returning to the bit-string recursion: at each level of the recursion tree, the number of subproblems doubles from the previous level, while the amount of work done in each subproblem halves. Formally, the ith level has 2^i subproblems, each requiring 2^(n-i) work.
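A flood-fill sketch for the largest-region problem above (the grid dimensions and function names are illustrative assumptions; the grid is cleared as it is scanned):

```c
#define ROWS 4
#define COLS 5

/* DFS flood fill: count the region of 1s reachable from (r, c) through
   the 8 neighbouring directions, clearing cells as we visit them. */
int RegionSize(int grid[ROWS][COLS], int r, int c) {
    if (r < 0 || r >= ROWS || c < 0 || c >= COLS || grid[r][c] != 1)
        return 0;
    grid[r][c] = 0;                      /* mark visited */
    int size = 1;
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++)
            if (dr != 0 || dc != 0)
                size += RegionSize(grid, r + dr, c + dc);
    return size;
}

/* Largest region over the whole matrix (destroys the grid contents). */
int LargestRegion(int grid[ROWS][COLS]) {
    int best = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            if (grid[r][c] == 1) {
                int s = RegionSize(grid, r, c);
                if (s > best)
                    best = s;
            }
    return best;
}
```

Each cell is visited a constant number of times, so the whole scan is linear in the number of cells.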
Thus the ith level requires exactly 2^n work in total. The depth of this tree is n, because at the ith level the originating call will be T(n - i). Thus the total complexity for T(n) is O(n2^n). A linked list is a data structure used for storing collections of data; one of its properties is that it allocates memory as the list grows. There are many other data structures that do the same thing as linked lists. Before discussing linked lists it is important to understand the difference between linked lists and arrays.
Both linked lists and arrays are used to store collections of data, and since both are used for the same purpose, we need to differentiate their usage.
That means in which cases arrays are suitable and in which cases linked lists are suitable. The array elements can be accessed in constant time by using the index of the particular element as the subscript. To access an array element, the address of an element is computed as an offset from the base address of the array and one multiplication is needed to compute what is supposed to be added to the base address to get the memory address of the element.
First the size of an element of that data type is calculated, and then it is multiplied by the index of the element to get the value to be added to the base address. This process takes one multiplication and one addition. Since these two operations take constant time, we can say that array access can be performed in constant time. The size of the array is static: we must specify the array size before using it. Also, when allocating the array at the beginning, it may sometimes not be possible to get memory for the complete array if the array size is big.
To insert an element at a given position, we may need to shift the existing elements. This will create a position for us to insert the new element at the desired position. If the position at which we want to add an element is at the beginning, then the shifting operation is more expensive.
Dynamic Arrays A dynamic array (also called a growable array, resizable array, dynamic table, or array list) is a random access, variable-size list data structure that allows elements to be added or removed. One simple way of implementing dynamic arrays is to start with a fixed-size array. As soon as that array becomes full, create a new array of double the size of the original array.
Similarly, reduce the array size to half if the elements in the array are less than half. We will see the implementation for dynamic arrays in the Stacks, Queues and Hashing chapters.
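A minimal sketch of the doubling strategy in C (the type and function names are assumptions for illustration; error checking on malloc/realloc is omitted for brevity):

```c
#include <stdlib.h>

typedef struct {
    int *data;
    int size;       /* number of elements in use */
    int capacity;   /* number of allocated slots */
} DynArray;

void DynArrayInit(DynArray *a) {
    a->capacity = 1;
    a->size = 0;
    a->data = malloc(a->capacity * sizeof(int));
}

/* Append one element, doubling the backing array when it is full;
   the copying cost amortizes to O(1) per append. */
void DynArrayPush(DynArray *a, int value) {
    if (a->size == a->capacity) {
        a->capacity *= 2;
        a->data = realloc(a->data, a->capacity * sizeof(int));
    }
    a->data[a->size++] = value;
}
```

Starting from capacity 1, ten appends trigger doublings to 2, 4, 8, and 16 slots; only O(log n) reallocations happen for n appends.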
Advantages of Linked Lists Linked lists have both advantages and disadvantages. The advantage of linked lists is that they can be expanded in constant time. To create an array, we must allocate memory for a certain number of elements.
To add more elements to the array when full, we must create a new array and copy the old array into the new array. This can take a lot of time. We can prevent this by allocating lots of space initially but then we might allocate more than we need and waste memory. With a linked list, we can start with space for just one allocated element and add on new elements easily without the need to do any copying and reallocating. Issues with Linked Lists Disadvantages There are a number of issues with linked lists.
The main disadvantage of linked lists is access time to individual elements. Arrays are random-access, which means it takes O(1) to access any element in the array. Linked lists take O(n), in the worst case, to access an element. Another advantage of arrays is spatial locality in memory. Arrays are defined as contiguous blocks of memory, and so any array element will be physically near its neighbors. This greatly benefits from modern CPU caching methods.
Although the dynamic allocation of storage is a great advantage, the overhead with storing and retrieving data can make a big difference. Sometimes linked lists are hard to manipulate. If the last item is deleted, the last but one must then have its pointer changed to hold a NULL reference.
This requires that the list be traversed to find the last but one link, and have its pointer set to a NULL reference. Finally, linked lists waste memory in terms of extra reference points. A singly linked list consists of a number of nodes in which each node has a next pointer to the following element. The link of the last node in the list is NULL, which indicates the end of the list. Following is a type declaration for a linked list of integers. The ListLength function takes a linked list as input and counts the number of nodes in the list.
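The node declaration and the ListLength function might look like the following sketch (the struct layout matches the description above; the field names are the conventional ones):

```c
#include <stddef.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Walk the list from head, counting nodes until the NULL terminator. */
int ListLength(struct ListNode *head) {
    int count = 0;
    for (struct ListNode *current = head; current != NULL; current = current->next)
        count++;
    return count;
}
```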
The function given below can be used for printing the list data with an extra print function. Time Complexity: O(n), for scanning the list of size n. Space Complexity: O(1), for a temporary variable. Singly Linked List Insertion Insertion into a singly linked list has three cases. To insert an element in the linked list at some position p, assume that after inserting the element the position of this new node is p.
Inserting a Node in Singly Linked List at the Beginning In this case, a new node is inserted before the current head node. Inserting a Node in Singly Linked List at the Ending In this case, we need to modify two next pointers (the last node's next pointer and the new node's next pointer).
Inserting a Node in Singly Linked List at the Middle Let us assume that we are given a position where we want to insert the new node. In this case also, we need to modify two next pointers. That means we traverse 2 nodes and insert the new node. For simplicity let us assume that the second node is called position node.
The new node points to the next node of the position where we want to add this node. Let us write the code for all three cases. We must update the first element pointer in the calling function, not just in the called function.
For this reason we need to send a double pointer. The following code inserts a node in the singly linked list. We can implement the three variations of the insert operation separately. Time Complexity: O(n), since, in the worst case, we may need to insert the node at the end of the list. Space Complexity: O(1), for creating one temporary variable. Deleting the first node can be done in two steps: move the head pointer to the second node, then dispose of the removed node. Deleting the last node is a bit trickier, because the algorithm must find the node which is previous to the tail.
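The insertion with a double pointer described above might be sketched as follows (the function name and 1-based position convention are assumptions; all three cases are folded into one routine):

```c
#include <stdlib.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Insert 'data' at a 1-based 'position'. The head is passed by
   reference (double pointer) so that insertion at the front updates
   the caller's head pointer as well. */
void InsertInLinkedList(struct ListNode **head, int data, int position) {
    struct ListNode *newNode = malloc(sizeof(struct ListNode));
    newNode->data = data;
    if (position == 1 || *head == NULL) {       /* at the beginning */
        newNode->next = *head;
        *head = newNode;
        return;
    }
    struct ListNode *prev = *head;              /* walk to position-1 */
    for (int i = 1; i < position - 1 && prev->next != NULL; i++)
        prev = prev->next;
    newNode->next = prev->next;                 /* middle or end */
    prev->next = newNode;
}
```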
It can be done in three steps: traverse the list with two pointers, one trailing the other; by the time we reach the end of the list, we will have two pointers, one pointing to the tail node and the other pointing to the node before the tail node; then update the latter's next pointer and dispose of the tail. Deleting an Intermediate Node in Singly Linked List In this case, the node to be removed is always located between two nodes.
Head and tail links are not updated in this case. Such a removal can be done in two steps: change the previous node's next pointer to skip the removed node, then dispose of it. Time Complexity: O(n); in the worst case, we may need to delete the node at the end of the list. Space Complexity: O(1), for one temporary variable.
After freeing the current node, go to the next node with a temporary variable and repeat this process for all nodes. Time Complexity: O(n), for scanning the complete list of size n. A node in a singly linked list cannot be removed unless we have the pointer to its predecessor. The primary disadvantages of doubly linked lists are: each node requires an extra pointer (more space), and insertions or deletions take a bit longer (more pointer operations). Similar to a singly linked list, let us implement the operations of a doubly linked list.
If you understand the singly linked list operations, then doubly linked list operations are obvious. Following is a type declaration for a doubly linked list of integers. Doubly Linked List Insertion Insertion into a doubly linked list has three cases (same as a singly linked list). For insertion at the beginning, the previous and next pointers need to be modified, and it can be done in two steps. Inserting a Node in Doubly Linked List at the Middle As discussed in singly linked lists, traverse the list to the position node and insert the new node.
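A sketch of the node type and the front-insertion case (names are illustrative; note the extra step of fixing the old head's previous pointer, which singly linked lists do not need):

```c
#include <stdlib.h>
#include <stddef.h>

struct DLLNode {
    int data;
    struct DLLNode *prev;
    struct DLLNode *next;
};

/* Insert at the front of a doubly linked list: set both links of the
   new node, then fix the old head's prev pointer. */
void DLLInsertFront(struct DLLNode **head, int data) {
    struct DLLNode *newNode = malloc(sizeof(struct DLLNode));
    newNode->data = data;
    newNode->prev = NULL;
    newNode->next = *head;
    if (*head != NULL)
        (*head)->prev = newNode;
    *head = newNode;
}
```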
Also, the new node's previous pointer points to the position node. Now, let us write the code for all of these three cases. Time Complexity: O(n); in the worst case, we may need to insert the node at the end of the list. Doubly Linked List Deletion Similar to singly linked list deletion, here we have three cases. For deleting the first node, update the head pointer and then dispose of the temporary node. Deleting the Last Node in Doubly Linked List This operation is a bit trickier than removing the first node, because the algorithm should find the node which is previous to the tail first.
This can be done in three steps: by the time we reach the end of the list, we will have two pointers, one pointing to the tail and the other pointing to the node before the tail. Deleting an Intermediate Node in Doubly Linked List In this case, the node to be removed is always located between two nodes, and the head and tail links are not updated. The removal can be done in two steps: update the surrounding nodes' next and previous pointers to bypass the node, then dispose of it. Unlike the lists above, circular linked lists do not have ends. While traversing a circular linked list we should be careful; otherwise we will be traversing the list infinitely.
In circular linked lists, each node has a successor. Note that unlike singly linked lists, there is no node with a NULL pointer in a circular linked list. In some situations, circular linked lists are useful. For example, when several processes are using the same computer resource (CPU) for the same amount of time, we have to ensure that no process accesses the resource before all other processes do (round-robin algorithm).
The following is a type declaration for a circular linked list of integers: In a circular linked list, we access the elements using the head node similar to head node in singly linked list and doubly linked lists. Counting Nodes in a Circular Linked List The circular list is accessible through the node marked head. To count the nodes, the list has to be traversed from the node marked head, with the help of a dummy node current, and stop the counting when current reaches the starting node head.
Otherwise, set the current pointer to the first node, and keep on counting till the current pointer reaches the starting node. Printing the Contents of a Circular Linked List We assume here that the list is being accessed by its head node. Since all the nodes are arranged in a circular fashion, the tail node of the list will be the node previous to the head node. Let us assume we want to print the contents of the nodes starting with the head node. Print its contents, move to the next node and continue printing till we reach the head node again.
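The counting traversal described above can be sketched as follows (the CLLNode name is an assumption; the cursor stops when it wraps back to head):

```c
#include <stddef.h>

struct CLLNode {
    int data;
    struct CLLNode *next;
};

/* Count nodes by walking from head until the cursor wraps back
   around to head again. */
int CircularListLength(struct CLLNode *head) {
    if (head == NULL)
        return 0;
    int count = 1;
    for (struct CLLNode *current = head->next; current != head; current = current->next)
        count++;
    return count;
}
```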
Space Complexity: O(1), for a temporary variable. Inserting a Node at the End of a Circular Linked List Let us add a node containing data at the end of a circular list headed by head. The new node will be placed just after the tail node (which is the last node of the list), which means it will have to be inserted between the tail node and the first node. That means in a circular list we should stop at the node whose next node is head. Inserting a Node at the Front of a Circular Linked List The only difference between inserting a node at the beginning and at the end is that, after inserting the new node, we just need to update the head pointer.
The steps for doing this are given below. That means in a circular list we should stop at the node which is the previous node of the insertion point. For deleting the last node, the node before it has to be named as the tail node, and its next field has to point to the first node. Consider the following list: to delete the last node 40, the list has to be traversed till you reach 7. Time Complexity: O(n). Space Complexity: O(1), for a temporary variable. Deleting the First Node in a Circular List The first node can be deleted by simply replacing the next field of the tail node with the next field of the first node.
The tail node is the node previous to the head node, which is the node we want to delete. Also, update the tail node's next pointer to point to the next node of head, as shown below. Create a temporary node which will point to head.
Applications of Circular List Circular linked lists are used in managing the computing resources of a computer. We can use circular lists for implementing stacks and queues. That means elements in doubly linked list implementations consist of data, a pointer to the next node and a pointer to the previous node in the list as shown below. This implementation is based on pointer difference. Each node uses only one pointer field to traverse the list back and forth.
New Node Definition The ptrdiff pointer field contains the difference between the pointer to the next node and the pointer to the previous node. As an example, consider the following linked list.
A memory-efficient implementation of a doubly linked list is possible with minimal compromise of timing efficiency. However, it takes O(n) to search for an element in a linked list. There is a simple variation of the singly linked list called the unrolled linked list. An unrolled linked list stores multiple elements in each node (let us call it a block for our convenience). In each block, a circular linked list is used to connect all nodes. Assume that there will be no more than n elements in the unrolled linked list at any time.
To simplify this problem, all blocks, except the last one, should contain exactly ⌈√n⌉ elements. Searching for an element in Unrolled Linked Lists In unrolled linked lists, we can find the kth element in O(√n): Traverse the list of blocks to the one that contains the kth node, i.e., the ⌈k/⌈√n⌉⌉th block. This takes O(√n) since we may go through no more than √n blocks. Find the (k mod ⌈√n⌉)th node in the circular linked list of this block. This also takes O(√n) since there are no more than ⌈√n⌉ nodes in a single block. Suppose that we insert a node x after the ith node, and x should be placed in the jth block. Nodes in the jth block and in the blocks after the jth block have to be shifted toward the tail of the list so that each of them still has ⌈√n⌉ nodes. In addition, a new block needs to be added to the tail if the last block of the list is out of space, i.e., it has more than ⌈√n⌉ nodes.
Performing Shift Operation Note that each shift operation, which includes removing a node from the tail of the circular linked list in a block and inserting a node to the head of the circular linked list in the block after, takes only O 1.
The total time complexity of an insertion operation for unrolled linked lists is therefore O(√n); there are at most O(√n) blocks and therefore at most O(√n) shift operations. A temporary pointer is needed to store the tail of A. In block A, move the next pointer of the head node to point to the second-to-last node, so that the tail node of A can be removed.
Let the next pointer of the node, which will be shifted the tail node of A , point to the tail node of B. Let the next pointer of the head node of B point to the node temp points to. Finally, set the head pointer of B to point to the node temp points to.
Now the node temp points to becomes the new head node of B. We have completed the shift operation to move the original tail node of A to become the new head node of B. Unrolled lists have some advantages: if the number of elements in each block is appropriately sized (e.g., at most the size of one cache line), we get noticeably better cache performance from the improved memory locality.
Comparing Linked Lists and Unrolled Linked Lists To compare the overhead for an unrolled list, elements in doubly linked list implementations consist of data, a pointer to the next node, and a pointer to the previous node in the list, as shown below.
Assuming we have 4-byte pointers, each node is going to take 8 bytes. But the allocation overhead for the node could be anywhere between 8 and 16 bytes. So, if we want to store 1K items in this list, we are going to have 16KB of overhead. An unrolled list will look something like this: thinking about our 1K items from above, it would take about 4KB of overhead. Also, note that we can tune the array size to whatever gets us the best overhead for our application.
Binary search trees work well when the elements are inserted in a random order. Some sequences of operations, such as inserting the elements in sorted order, produce degenerate data structures that give very poor performance. If it were possible to randomly permute the list of items to be inserted, trees would work well with high probability for any input sequence. In most cases queries must be answered on-line, so randomly permuting the input is impractical.
Balanced tree algorithms rearrange the tree as operations are performed to maintain certain balance conditions and assure good performance. Skip lists are a probabilistic alternative to balanced trees. A skip list is a data structure that can be used as an alternative to balanced binary trees (refer to the Trees chapter).
As compared to a binary tree, skip lists allow quick search, insertion and deletion of elements. This is achieved by using probabilistic balancing rather than strictly enforced balancing. A skip list is basically a linked list with additional pointers such that intermediate nodes can be skipped.
It uses a random number generator to make some decisions. In an ordinary sorted linked list, search, insert, and delete are O(n) because the list must be scanned node-by-node from the head to find the relevant node. If somehow we could scan down the list in bigger steps (skip down, as it were), we would reduce the cost of scanning. This is the fundamental idea behind skip lists.
The find, insert, and remove operations on ordinary binary search trees are efficient, O(log n), when the input data is random; but less efficient, O(n), when the input data is ordered. Skip list performance for these same operations, and for any data set, is about as good as that of randomly built binary search trees, namely O(log n). The nodes in a skip list have many next references (also called forward references). We speak of a skip list node having levels, one level per forward reference.
The number of levels in a node is called the size of the node. In an ordinary sorted list, insert, remove, and find operations require sequential traversal of the list. This results in O(n) performance per operation. Skip lists allow intermediate nodes in the list to be skipped during a traversal, resulting in an expected performance of O(log n) per operation. For stack-based problems, refer to the Stacks chapter. Problem: find the nth node from the end of a linked list. Brute-Force Method: start with the first node and count the number of nodes present after that node. Continue this until the number of nodes after the current node is n - 1.
Time Complexity: O(n²), for scanning the remaining list from the current node, for each node. Can we do better with a hash table? Yes. As an example consider the following list; the key is the position of the node in the list and the value is the address of that node:
Position in List - Address of Node
1 - Address of the node with value 5
2 - Address of the node with value 1
3 - Address of the node with value 17
4 - Address of the node with value 4
By the time we traverse the complete list (for creating the hash table), we can find the list length.
Let us say the list length is M. Since we need to create a hash table of size M, the space complexity is O(M). If we observe the previous solution, what we are actually doing is finding the size of the linked list; that is, we are using the hash table only to find the size of the linked list. But we can find the length of the linked list just by starting at the head node and traversing the list. So, we can find the length of the list without creating the hash table.
This solution needs two scans; hence there is no need to create the hash table. Efficient Approach: use two pointers, pNthNode and pTemp. Initially, both point to the head node of the list. pTemp moves forward n nodes first; from there, both move forward one node at a time until pTemp reaches the end of the list.
As a result, pNthNode points to the nth node from the end of the linked list. Detecting a loop, Brute-Force Approach: as an example, consider the following linked list which has a loop in it. The difference between this list and a regular list is that, in this list, there are two nodes whose next pointers are the same. That means the repetition of next pointers indicates the existence of a loop.
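The two-pointer technique for the nth node from the end can be sketched as follows (pointer names follow the text; the early NULL return covers lists shorter than n):

```c
#include <stddef.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Advance pTemp n nodes ahead of pNthNode, then move both one node
   at a time; when pTemp falls off the end, pNthNode is on the nth
   node from the end. */
struct ListNode *NthNodeFromEnd(struct ListNode *head, int n) {
    struct ListNode *pTemp = head, *pNthNode = head;
    for (int i = 0; i < n; i++) {
        if (pTemp == NULL)
            return NULL;                 /* list has fewer than n nodes */
        pTemp = pTemp->next;
    }
    while (pTemp != NULL) {
        pTemp = pTemp->next;
        pNthNode = pNthNode->next;
    }
    return pNthNode;
}
```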
If there is a node with the same address, that indicates that some other node is pointing to the current node, and we can say a loop exists. Continue this process for all the nodes of the linked list. Does this method work? As per the algorithm, we are checking the next pointer addresses, but how do we find the end of the linked list? Otherwise we will end up in an infinite loop.
If we start with a node in a loop, this method may work depending on the size of the loop. Using Hash Tables we can solve this problem. This is possible only if the given linked list has a loop in it.
Time Complexity: O(n), for scanning the linked list. Note that we are doing a scan of only the input. Space Complexity: O(n), for the hash table.
Consider the following algorithm, which is based on sorting. Time Complexity: O(nlogn), for sorting the next pointers array. Space Complexity: O(n), for the next pointers array. Problem with the above algorithm: it works only if we can find the length of the list. But if the list has a loop then we may end up in an infinite loop.
Due to this reason the algorithm fails. Efficient (Memoryless) Approach: this problem was solved by Floyd.
The solution is named the Floyd cycle finding algorithm. It uses two pointers moving at different speeds to walk the linked list.
Once they enter the loop they are expected to meet, which denotes that there is a loop. This works because the only way a faster moving pointer would point to the same location as a slower moving pointer is if somehow the entire list or a part of it is circular. Think of a tortoise and a hare running on a track. The faster running hare will catch up with the tortoise if they are running in a loop.
As an example, consider the following example and trace out the Floyd algorithm. From the diagrams below we can see that after the final step they are meeting at some point in the loop which may not be the starting point of the loop.
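Floyd's tortoise-and-hare detection can be sketched as follows (pointer names follow the text):

```c
#include <stdbool.h>
#include <stddef.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* Floyd's cycle detection: the slow pointer moves one node per step,
   the fast pointer two; they can only meet if the list loops. */
bool HasLoop(struct ListNode *head) {
    struct ListNode *slowPtr = head, *fastPtr = head;
    while (fastPtr != NULL && fastPtr->next != NULL) {
        slowPtr = slowPtr->next;
        fastPtr = fastPtr->next->next;
        if (slowPtr == fastPtr)
            return true;
    }
    return false;
}
```

On a loop-free list the fast pointer reaches NULL in O(n) steps; on a looping list both pointers end up inside the loop, where the fast one gains one node per step until they coincide.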
There are two possibilities for L: it ends with a NULL pointer (a snake), or it loops back onto itself (a snail). Give an algorithm that tests whether a given list L is a snake or a snail. This is the same as the loop-detection problem above. Next problem: if there is a cycle, find the start node of the loop.
The solution is an extension to the solution of the previous problem. After finding the loop in the linked list, we initialize slowPtr to the head of the linked list. From that point onwards, both slowPtr and fastPtr move only one node at a time.
The point at which they meet is the start of the loop. Generally we use this method for removing loops. This problem is at the heart of number theory; furthermore, the tortoise ends up at the midpoint between the hare and the beginning of the sequence because of the way they move. Can this be done by brute force? Yes, but the complexity might be high; trace out an example. Next problem: if there is a cycle, find the length of the loop. This solution is also an extension of the basic cycle detection problem.
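The loop-start procedure described above can be sketched as follows (detection and the head-reset phase are combined into one function; pointer names follow the text):

```c
#include <stddef.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* After slowPtr and fastPtr meet inside the loop, reset slowPtr to
   head; moving both one node at a time, they meet again exactly at
   the first node of the loop. */
struct ListNode *FindLoopStart(struct ListNode *head) {
    struct ListNode *slowPtr = head, *fastPtr = head;
    while (fastPtr != NULL && fastPtr->next != NULL) {
        slowPtr = slowPtr->next;
        fastPtr = fastPtr->next->next;
        if (slowPtr == fastPtr) {            /* loop detected */
            slowPtr = head;
            while (slowPtr != fastPtr) {
                slowPtr = slowPtr->next;
                fastPtr = fastPtr->next;
            }
            return slowPtr;                  /* start of the loop */
        }
    }
    return NULL;                             /* no loop */
}
```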
After finding the loop in the linked list, keep slowPtr where it is. The fastPtr keeps moving until it comes back to slowPtr; while moving fastPtr, use a counter variable which increments by 1 at each step, giving the loop length. For inserting into a sorted list: traverse the list, find a position for the element, and insert it.
Recursive version: we will find it easier to start from the bottom up, by asking and answering tiny questions (this is the approach in The Little Lisper). What is the reverse of a one-element list? The element itself.
The reverse of a longer list is the reverse of its tail followed by the first element. Space Complexity: O(n), for the recursive stack. Intersection of two lists: the head or start pointers of both lists are known, but the intersecting node is not known.
Also, the number of nodes in each of the lists before they intersect is unknown and may be different in each list. Give an algorithm for finding the merging point. Brute-Force Approach: One easy solution is to compare every node pointer in the first list with every other node pointer in the second list by which the matching node pointers will lead us to the intersecting node.
But the time complexity in this case will be O(mn), which is high. Consider the following algorithm, which is based on sorting, and see why this algorithm fails. Any problem with the above algorithm? In the algorithm, we are storing all the node pointers of both the lists and sorting.
But we are forgetting the fact that there can be many repeated elements. This is because after the merging point, all node pointers are the same for both the lists. The algorithm works fine only in one case and it is when both lists have the ending node at their merge point.
By combining sorting and search techniques we can reduce the complexity to O(max(m, n)). Finding the middle node, Brute-Force Approach: for each node, count how many nodes there are in the list, and see whether it is the middle node of the list. With a hash table, the reasoning is the same as that of the earlier problems. Time Complexity: the time for creating the hash table, O(n). Space Complexity: O(n), since we need to create a hash table of size n. Efficient Approach: use two pointers; move one pointer at twice the speed of the second.
When the first pointer reaches the end of the list, the second pointer will be pointing to the middle node. To print a list in reverse, traverse recursively till the end of the linked list; while coming back, start printing the elements. To check whether the list length is even or odd, use a 2x pointer that moves two nodes at a time. At the end, if the length is even, the pointer will be NULL; otherwise it will point to the last node. Assume the sizes of the lists are m and n. Refer to the Trees chapter.
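The slow/fast middle-finding technique above can be sketched as follows (for even-length lists this version returns the second of the two middle nodes):

```c
#include <stddef.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

/* The fast pointer moves two nodes per step, the slow pointer one;
   when the fast pointer reaches the end, the slow pointer is at the
   middle node. */
struct ListNode *FindMiddle(struct ListNode *head) {
    struct ListNode *slowPtr = head, *fastPtr = head;
    while (fastPtr != NULL && fastPtr->next != NULL) {
        slowPtr = slowPtr->next;
        fastPtr = fastPtr->next->next;
    }
    return slowPtr;
}
```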
Refer to the Sorting chapter. If the number of nodes in the list is odd, then make the first list one node longer than the second list.
As an example, consider the following circular list. After the split, the above list will look like: Circular Doubly Linked Lists. To check whether a list is a palindrome: get the middle of the linked list; reverse the second half of the linked list; compare the first half and the second half; reconstruct the original linked list by reversing the second half again and attaching it back to the first half. Output for different K values: this is an extension of swapping nodes in a linked list. If there are not enough nodes, return; otherwise, we can return the head.
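The four palindrome-checking steps above can be sketched as follows (the helper uses an iterative reversal for brevity; the second reversal restores the original list):

```c
#include <stdbool.h>
#include <stddef.h>

struct ListNode {
    int data;
    struct ListNode *next;
};

static struct ListNode *Reverse(struct ListNode *head) {
    struct ListNode *prev = NULL;
    while (head != NULL) {
        struct ListNode *next = head->next;
        head->next = prev;
        prev = head;
        head = next;
    }
    return prev;
}

/* Steps: 1. find the middle, 2. reverse the second half,
   3. compare the halves, 4. reverse again to restore the list. */
bool IsPalindrome(struct ListNode *head) {
    struct ListNode *slowPtr = head, *fastPtr = head;
    while (fastPtr != NULL && fastPtr->next != NULL) {   /* 1 */
        slowPtr = slowPtr->next;
        fastPtr = fastPtr->next->next;
    }
    struct ListNode *second = Reverse(slowPtr);          /* 2 */
    bool palindrome = true;                              /* 3 */
    for (struct ListNode *p = head, *q = second; q != NULL; p = p->next, q = q->next)
        if (p->data != q->data) {
            palindrome = false;
            break;
        }
    Reverse(second);                                     /* 4 */
    return palindrome;
}
```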
Create a linked list and at the same time keep it in a hash table.
For n elements we have to keep all the elements in a hash table, which gives a preprocessing time of O(n). Hence, by using amortized analysis, we can say that element access can be performed within O(1) time. Time Complexity: O(1) [amortized]. Space Complexity: O(n), for the hash table.
N people have decided to elect a leader by arranging themselves in a circle and eliminating every Mth person around the circle, closing ranks as each person drops out. Find which person will be the last one remaining with rank 1. Assume the input is a circular linked list with N nodes and each node has a number range 1 to N associated with it. The head node has number 1 as data. Give an algorithm for cloning the list. We can use a hash table to associate newly created nodes with the instances of node in the given list.
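A sketch of the Josephus election with a circular list (the function name is an assumption; the list is built, then every Mth person is removed until one remains):

```c
#include <stdlib.h>

struct CLLNode {
    int data;
    struct CLLNode *next;
};

/* Build a circular list of people 1..N, then repeatedly skip M-1
   people and eliminate the Mth until one person remains. */
int JosephusSurvivor(int N, int M) {
    struct CLLNode *head = malloc(sizeof(struct CLLNode));
    struct CLLNode *p = head;
    head->data = 1;
    for (int i = 2; i <= N; i++) {
        p->next = malloc(sizeof(struct CLLNode));
        p = p->next;
        p->data = i;
    }
    p->next = head;                       /* close the circle; p is person N */
    for (int remaining = N; remaining > 1; remaining--) {
        for (int i = 0; i < M - 1; i++)   /* skip M-1 people */
            p = p->next;
        struct CLLNode *eliminated = p->next;
        p->next = eliminated->next;
        free(eliminated);
    }
    int survivor = p->data;
    free(p);
    return survivor;
}
```

For example, with N = 5 and M = 2 the elimination order is 2, 4, 1, 5, leaving person 3 as the leader.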
We scan the original list again and set the pointers to build the new list. Problem: given only a pointer to a node, delete that node from the linked list. We cannot reach its predecessor, so what do we do? We can easily get away by moving the data from the next node into the current node and then deleting the next node. For separating even and odd nodes, we can use the splitting logic: while traversing the list, split the linked list into two, one of even nodes and one of odd nodes. Now, to get the final list, we can simply append the odd node linked list after the even node linked list.
To split the linked list, traverse the original linked list and move all odd nodes to a separate linked list of all odd nodes. At the end of the loop, the original list will have all the even nodes and the odd node list will have all the odd nodes. To keep the ordering of all nodes the same, we must insert all the odd nodes at the end of the odd node list.
For this problem the value of n is not known in advance. For this problem the value of n is not known in advance, and it is the same as finding the kth element from the end of the linked list. Given a singly linked list, write a function to find the element, where n is the number of elements in the list.
Assume the value of n is not known in advance. Next, given two sorted linked lists, merge them into a third list so that the merged list is in ascending order. The while loop takes O(min(n, m)) time as it runs at most min(n, m) times; the other steps run in O(1). Therefore the total time complexity is O(min(n, m)).
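A sketch of the merge (Python; the `Node` class and function name are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def merge_sorted(l1, l2):
    # A dummy node simplifies handling of the merged list's head.
    dummy = tail = Node(0)
    while l1 and l2:                  # runs at most min(n, m) times
        if l1.data <= l2.data:
            tail.next, l1 = l1, l1.next
        else:
            tail.next, l2 = l2, l2.next
        tail = tail.next
    tail.next = l1 if l1 else l2      # append the remainder in O(1)
    return dummy.next
```

Appending the leftover tail in one pointer assignment is why the loop cost is bounded by the shorter list.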
The median is the middle number in a sorted list of numbers if we have an odd number of elements. If we have an even number of elements, the median is the average of the two middle numbers. We can solve this problem with both sorted and unsorted linked lists. First, let us try with an unsorted linked list. In an unsorted linked list, we can insert the element either at the head or at the tail.
The advantage of this approach is that the insertion operation takes O(1); the disadvantage is that finding the median takes O(n). Now, let us try with a sorted linked list. Insertion at a known location is O(1) in any linked list, but locating the correct position in a sorted list takes O(n). For an efficient algorithm, refer to the Priority Queues and Heaps chapter.
The result should be stored in a third linked list. Also note that the head node contains the most significant digit of the number. Since integer addition starts from the least significant digit, we first need to visit the last node of both lists and add them up, create a new node to store the result, take care of the carry (if any), link the resulting node to the node which will store the addition of the second least significant digits, and continue.
First of all, we need to take into account the difference in the number of digits in the two numbers. So before starting the recursion, we need to do some calculation and move the longer list's pointer forward so that we reach the last node of both lists at the same time.
The other thing we need to take care of is the carry. If two digits add up to 10 or more, we need to forward the carry to the next node and add it there. If the most significant digit addition results in a carry, we need to create an extra node to store the carry. The function below is actually a wrapper function which does all the housekeeping: calculating the lengths of the lists, calling the recursive implementation, creating an extra node for the carry in the most significant digit, and adding any remaining nodes left in the longer list.
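The recursive addition with the wrapper described above might look like this (a Python sketch; all names are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def length(head):
    n = 0
    while head:
        n += 1
        head = head.next
    return n

def add_same_length(a, b):
    # Recurse to the least significant digits, then build the
    # result list on the way back, propagating the carry.
    if a is None:
        return None, 0
    rest, carry = add_same_length(a.next, b.next)
    s = a.data + b.data + carry
    return Node(s % 10, rest), s // 10

def add_prefix(head, stop, rest, carry):
    # Carry the leftover through the extra prefix of the longer list.
    if head is stop:
        return rest, carry
    tail, c = add_prefix(head.next, stop, rest, carry)
    s = head.data + c
    return Node(s % 10, tail), s // 10

def add_lists(h1, h2):
    # Wrapper: align pointers, recurse, handle final carry.
    n1, n2 = length(h1), length(h2)
    if n1 < n2:
        h1, h2, n1, n2 = h2, h1, n2, n1
    cur = h1
    for _ in range(n1 - n2):
        cur = cur.next
    low, carry = add_same_length(cur, h2)
    head, carry = add_prefix(h1, cur, low, carry)
    if carry:
        head = Node(carry, head)   # extra node for the final carry
    return head
```

For example, adding 6→1→7 (617) and 2→9→5 (295) yields 9→1→2 (912).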
Time Complexity: O(max(List1 length, List2 length)). Space Complexity: O(min(List1 length, List2 length)) for the recursive stack. The problem can also be solved using stacks. Simple insertion sort is easily adaptable to singly linked lists. To insert an element, the linked list is traversed until the proper position is found, or until the end of the list is reached.
The element is inserted into the list by merely adjusting the pointers, without shifting any elements as in an array. This reduces the time required for insertion but not the time required for searching for the proper position.
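Insertion sort on a singly linked list can be sketched as follows (a Python sketch; names are illustrative):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def insertion_sort(head):
    sorted_head = None                 # head of the growing sorted list
    while head:
        nxt = head.next
        # Traverse the sorted list until the proper position is found.
        if sorted_head is None or head.data <= sorted_head.data:
            head.next = sorted_head
            sorted_head = head
        else:
            cur = sorted_head
            while cur.next and cur.next.data < head.data:
                cur = cur.next
            # Insert by adjusting pointers only; nothing is shifted.
            head.next = cur.next
            cur.next = head
        head = nxt
    return sorted_head
```

Searching for the position is still O(n) per element, so the overall cost remains O(n^2); only the insertion step itself is O(1).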
Find the middle of the linked list; we can do it with the slow and fast pointer approach. After finding the middle node, we reverse the right half; then we do an in-place merge of the two halves of the linked list.

The next solution is based on merge sort logic. Assume the given two linked lists are sorted. Since the elements are in sorted order, we run a loop till we reach the end of either list. We compare the values of list1 and list2; if the values are equal, we add the value to the common list.
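The intersection of two sorted lists can be sketched as follows (a Python sketch; common values are copied into a new list):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def sorted_intersect(l1, l2):
    # Walk both sorted lists in step, like the merge phase of merge sort.
    dummy = tail = Node(0)
    while l1 and l2:
        if l1.data == l2.data:
            tail.next = Node(l1.data)      # copy the common value
            tail = tail.next
            l1, l2 = l1.next, l2.next
        elif l1.data < l2.data:
            l1 = l1.next                   # advance the smaller side
        else:
            l2 = l2.next
    return dummy.next
```

The loop ends as soon as either list is exhausted, giving O(n + m) time in the worst case.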
A stack is a simple data structure used for storing data, similar to linked lists. In a stack, the order in which the data arrives is important. A pile of plates in a cafeteria is a good example of a stack. The plates are added to the stack as they are cleaned, and they are placed on the top. When a plate is required, it is taken from the top of the stack. The first plate placed on the stack is the last one to be used.
A stack is an ordered list in which insertion and deletion are done at one end, called top. The last element inserted is the first one to be deleted.
Special names are given to the two changes that can be made to a stack. When an element is inserted in a stack, the concept is called push, and when an element is removed from the stack, the concept is called pop.
Trying to pop an empty stack is called underflow, and trying to push an element onto a full stack is called overflow. Generally, we treat them as exceptions. Let us assume a developer is working on a long-term project. The manager then gives the developer a new task which is more important. The developer puts the long-term project aside and begins work on the new task.
The phone rings, and this is the highest priority as it must be answered immediately. The developer pushes the present task into the pending tray and answers the phone.
When the call is complete the task that was abandoned to answer the phone is retrieved from the pending tray and work progresses.
If another call arrives, it may have to be handled in the same manner; but eventually the new task will be finished, and the developer can draw the long-term project from the pending tray and continue with that.

For simplicity, assume the data is an integer type. The main stack operations are:
- Push: inserts data onto the stack.
- Pop: removes and returns the last inserted element from the stack.
- Top: returns the last inserted element without removing it.
- Size: returns the number of elements stored in the stack.
- IsEmptyStack: indicates whether any elements are stored in the stack or not.
- IsFullStack: indicates whether the stack is full or not.

Exceptions: attempting the execution of an operation may sometimes cause an error condition, called an exception. In the Stack ADT, the operations pop and top cannot be performed if the stack is empty; attempting pop (or top) on an empty stack throws an exception.
Trying to push an element onto a full stack throws an exception.

One application of stacks is simulating queues (refer to the Queues chapter).

Array Implementation: in the array, we add elements from left to right and use a variable to keep track of the index of the top element.
The array storing the stack elements may become full. A push operation will then throw a full stack exception; similarly, if we try deleting an element from an empty stack, it will throw an empty stack exception. With this representation, push, pop, and top each take O(1) time, and the space complexity is O(n) for n elements. Trying to push a new element into a full stack causes an implementation-specific exception.
We take one index variable, top, which points to the index of the most recently inserted element in the stack. To insert (or push) an element, we increment the top index and then place the new element at that index.
Similarly, to delete (or pop) an element, we take the element at the top index and then decrement the top index. We represent an empty stack with a top value equal to -1.
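A fixed-size array implementation along these lines (a Python sketch; class and method names are illustrative):

```python
class FixedStack:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.top = -1                      # -1 represents an empty stack

    def is_empty(self):
        return self.top == -1

    def is_full(self):
        return self.top == len(self.data) - 1

    def push(self, value):
        if self.is_full():
            raise OverflowError("stack overflow")
        self.top += 1                      # increment top, then place
        self.data[self.top] = value

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        value = self.data[self.top]        # take element at top, then
        self.top -= 1                      # decrement top
        return value
```

Every operation touches only the top index, so push, pop, and the emptiness checks are all O(1).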
The issue that still needs to be resolved is what we do when all the slots in the fixed-size array stack are occupied. First try: what if we increment the size of the array by 1 every time the stack is full? This way of incrementing the array size is too expensive: every push beyond the initial capacity forces a full copy of the existing elements. Alternative Approach (Repeated Doubling): let us improve the complexity by using the array doubling technique.
If the array is full, create a new array of twice the size and copy the items. With this approach, pushing n items takes time proportional to n (not n^2). That means we do the doubling at sizes 1, 2, 4, 8, and so on. Another way of analyzing the same approach: if we observe carefully, we are doing the doubling operation log n times. Now, let us generalize the discussion: for n push operations, we double the array size log n times.
That means we will have log n terms in the expression below. The total time T(n) of a series of n push operations is proportional to

    1 + 2 + 4 + ... + n/4 + n/2 + n < 2n

so T(n) is O(n), and the amortized time of a push operation is O(1).

Performance: let n be the number of elements in the stack. With this representation, push, pop, and top each take O(1) time (amortized, in the case of push), and the space used is O(n). One drawback: too many doublings may cause a memory overflow exception.

Linked List Implementation: the other way of implementing stacks is by using linked lists.
Push is implemented by inserting the element at the beginning of the list. (For the amortized analysis above, we start with an empty stack represented by an array of size 1; for details, refer to the Implementation section.)

Balancing of Symbols: stacks can be used to check whether a given expression has balanced symbols. This algorithm is very useful in compilers. The parser reads one character at a time; the opening and closing delimiters are then compared, and if they match, the parsing of the string continues.
If they do not match, the parser indicates that there is an error on the line. A linear-time, O(n), stack-based algorithm can be given as: if the current character is an opening delimiter, push it onto the stack; if it is a closing delimiter, pop the stack and check that the popped symbol is the matching opening delimiter. The algorithm can be traced on any sample input. Since we scan the input only once, the time complexity is O(n).
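A sketch of the balanced-symbols check (a Python sketch; the delimiter set is illustrative):

```python
def is_balanced(expr):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)               # push opening delimiters
        elif ch in pairs:
            # a closing delimiter must match the most recent opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                       # leftovers mean unbalanced
```

For example, `is_balanced("(a+[b*c]-{d/e})")` is true, while `is_balanced("((a+b)")` is false because one opener is never closed.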
The space complexity is O(n) [for the stack]. Before discussing the algorithm, let us first see the definitions of infix, prefix and postfix expressions. An infix expression is a single letter, or an operator preceded by one infix string and followed by another infix string. A prefix expression is a single letter, or an operator followed by two prefix strings. Every prefix string longer than a single variable contains an operator, a first operand and a second operand.
A postfix expression (also called Reverse Polish Notation) is a single letter, or an operator preceded by two postfix strings. Every postfix string longer than a single variable contains first and second operands followed by an operator. Prefix and postfix notations are methods of writing mathematical expressions without parentheses. The time to evaluate a postfix or prefix expression is O(n), where n is the number of elements in the expression.
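Postfix evaluation with a stack can be sketched as follows (a Python sketch; it assumes space-separated tokens):

```python
def evaluate_postfix(tokens):
    stack = []
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()        # right operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()
```

Using the running example, `evaluate_postfix("2 3 4 * +".split())` yields 14.0, since 3*4 is reduced first and then added to 2.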
Now, let us focus on the algorithm. Therefore, for the infix-to-postfix conversion algorithm, we have to define the operator precedence (or priority) inside the algorithm.
The table shows the precedence of the operators and their associativity (order of evaluation). Notice that between infix and postfix the order of the numbers (operands) is unchanged: it is 2, 3, 4 in both cases. The stack that we use in the algorithm will be used to change the order of operators from infix to postfix.
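The conversion can be sketched as follows (a Python sketch of the stack-based algorithm, assuming only left-associative +, -, *, / with parentheses and space-separated tokens):

```python
def infix_to_postfix(tokens):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2}
    out, stack = [], []
    for tok in tokens:
        if tok in prec:
            # Pop operators of higher or equal precedence
            # (equal because these operators are left-associative).
            while stack and stack[-1] != '(' and prec[stack[-1]] >= prec[tok]:
                out.append(stack.pop())
            stack.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()                    # discard the '('
        else:
            out.append(tok)                # operands pass straight through
    while stack:
        out.append(stack.pop())
    return out
```

On the running example, `infix_to_postfix("2 + 3 * 4".split())` produces `['2', '3', '4', '*', '+']`: the operands keep their order while the stack reorders the operators.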