Techniques
Technique | Definition |
---|---|
0/1 Knapsack (Dynamic Programming) | Problem where you need to determine the most valuable combination of items to include in a knapsack without exceeding its weight capacity. Each item must either be included wholly (1) or excluded entirely (0); fractions of an item are not allowed, which distinguishes it from the fractional knapsack problem |
Backtracking | Backtracking is a depth-first search (DFS) technique that solves computational problems by building candidate solutions incrementally and reversing (backtracking) as soon as a partial solution becomes non-viable. This lets it discard whole groups of candidates without fully examining them, continuing with another option until a solution is found or the options are exhausted. Pruning makes it practical for exponentially large solution spaces, but it can still be slow when few partially built solutions can be rejected early |
Bitwise Operations | Bitwise operators manipulate individual bits of binary numbers. They include AND (`&`), OR (`\|`), XOR (`^`), NOT (`~`), left shift (`<<`), and right shift (`>>`). They operate at the binary level, comparing or shifting bits according to the operator used |
Dynamic Programming (DP) | Method that solves complex problems by breaking them down into simpler subproblems. It optimizes computation time by storing the results of these subproblems in an array or table so they can be reused when the same subproblem recurs; storing results this way is known as memoization when done top-down and tabulation when the table is filled bottom-up |
Fast & Slow Pointers | A technique primarily used with linked list data structures. It involves two pointers that move through the list at different speeds; the gap between them reveals properties such as whether the list contains a cycle or where its middle node is |
Island (Matrix Traversal) Pattern | Helps to identify, count or process distinct groups, known as "islands", in a matrix or 2D array. An island is a group of connected cells (for example, 1s in a binary grid) that are horizontally or vertically adjacent. The pattern uses Depth-First Search (DFS) or Breadth-First Search (BFS) to traverse the matrix, starting from a cell and moving to adjacent cells until the entire island is covered |
K-way Merge | Used to merge k sorted lists into one sorted list. A min-heap or priority queue combines the lists efficiently, giving a time complexity of O(n log k), where n is the total number of elements and k is the number of sorted lists |
Memoization | It's a technique that speeds up programs by storing the results of expensive function calls or computations and reusing them when the same inputs occur. It's particularly useful in problems with overlapping subproblems, like dynamic programming. Although it enhances performance, it can also increase memory usage due to the cached values. Therefore, it represents a trade-off between time and space complexity |
Merge Intervals | Common algorithmic problem, which involves merging overlapping intervals from a given collection. The problem requires sorting the intervals by start time and then iterating through them, merging whenever the end of one interval is greater than or equal to the start of the next. This problem is fundamental in areas like interval scheduling and computational geometry |
Monotonic Stack | Variation of a stack (LIFO) in which the elements are kept in consistently increasing or decreasing order; before a new item is pushed, any items that would violate that order are popped off |
Recursion | Function that calls itself to solve smaller instances of the same problem. It's commonly used in problems with repetitive structure, such as searching and sorting algorithms. A recursive algorithm must progress towards a base case to avoid infinite recursion. It's a powerful tool, but should be used cautiously to prevent high memory usage and stack overflow errors |
Search | Family of techniques for locating a target element or state within a data structure or solution space. Common examples include linear search, binary search on sorted data, and graph searches such as DFS and BFS |
Sliding Window | Used in algorithms to limit the amount of data processed at one time, often in array or list manipulation. The 'window' slides along the array or list, giving control over memory usage and computational complexity |
Sorting | Process of arranging the elements of a collection into a defined order, such as ascending numeric order. Classic algorithms include quicksort, mergesort, and heapsort, and sorting often serves as a preprocessing step for other techniques such as two pointers or merge intervals |
Subsets | A subset is a set that contains all, some, or none of the elements of another set, known as the superset. The subsets pattern involves generating every possible subset (the power set) of a given collection, typically with BFS, backtracking, or bit manipulation |
Top 'K' Elements | Refers to identifying the 'K' highest-ranking or most common elements in a collection like an array or list. The value of 'K' can vary and the 'top' elements are determined based on factors such as numerical value, frequency, or a custom logic. This problem is often solved using data structures and algorithms such as priority queues, max-heaps, hash maps, or binary search trees |
Trie | A trie (prefix tree) is a tree-like data structure used for efficient key retrieval in a dataset of strings. Each node of the trie holds references to its child nodes, and a key is represented by the path of characters from the root to a node |
Two Heaps | Involves a max-heap and a min-heap for efficiently tracking the extremes of two halves of a collection. This setup is particularly useful in streaming-data situations where you need to find the median of the values seen so far quickly |
Two Pointers | Involves two pointers to traverse through an array or linked list efficiently. This technique is valuable for reducing time and space complexity in problem-solving. It can be employed in various ways, including moving pointers towards each other, away from each other, or in the same direction, depending on the specific problem |
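
The short Python sketches below illustrate the techniques from the table. They are minimal examples under simple assumptions; all function names and sample inputs are illustrative rather than taken from any particular library or source.

A bottom-up sketch of the 0/1 knapsack, iterating capacities downwards so each item is used at most once:

```python
def knapsack_01(weights, values, capacity):
    """Max total value using each item at most once (0/1 knapsack)."""
    # dp[c] = best value achievable with remaining capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downwards so each item is counted at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack_01([1, 3, 4], [15, 20, 30], 4))  # 35 (take the items of weight 1 and 3)
```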
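
A minimal backtracking sketch that generates permutations by making a choice, recursing, and then undoing the choice:

```python
def permutations(items):
    """Generate all permutations of items via backtracking."""
    result, current = [], []
    used = [False] * len(items)

    def backtrack():
        if len(current) == len(items):
            result.append(current[:])
            return
        for i, item in enumerate(items):
            if used[i]:
                continue          # prune: already part of the partial solution
            used[i] = True
            current.append(item)
            backtrack()
            current.pop()         # undo the choice (backtrack)
            used[i] = False

    backtrack()
    return result

print(permutations([1, 2, 3]))  # 6 permutations
```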
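
A quick demonstration of the bitwise operators on two small binary numbers:

```python
a, b = 0b1100, 0b1010
print(bin(a & b))   # 0b1000  - bits set in both
print(bin(a | b))   # 0b1110  - bits set in either
print(bin(a ^ b))   # 0b110   - bits set in exactly one
print(bin(~a))      # -0b1101 - bitwise NOT (Python uses unbounded two's complement)
print(bin(a << 1))  # 0b11000 - shift left (multiply by 2)
print(bin(a >> 2))  # 0b11    - shift right (floor-divide by 4)
```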
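
A bottom-up dynamic programming sketch for Fibonacci numbers, where each entry is built from previously stored subproblem results:

```python
def fib(n):
    """Bottom-up DP (tabulation): reuse stored results of smaller subproblems."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # combine stored subproblems
    return table[n]

print(fib(10))  # 55
```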
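
Floyd's cycle detection is one common fast & slow pointers sketch; the fast pointer moves two steps for every one step of the slow pointer:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Fast & slow pointers: the two pointers meet only if a cycle exists."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a   # 1 -> 2 -> 3 -> 1 (cycle)
print(has_cycle(a))                # True
```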
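
A DFS-based island-counting sketch: each unvisited 1 starts a new island, and the traversal "sinks" every connected cell:

```python
def count_islands(grid):
    """Count groups of horizontally/vertically connected 1s using DFS."""
    rows, cols = len(grid), len(grid[0])

    def sink(r, c):
        if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 1:
            grid[r][c] = 0                              # mark cell as visited
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                sink(r + dr, c + dc)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                count += 1
                sink(r, c)
    return count

print(count_islands([[1, 1, 0],
                     [0, 0, 0],
                     [0, 1, 1]]))  # 2
```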
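
A k-way merge sketch using a min-heap of (value, list index, element index); the standard library also offers `heapq.merge` for the same job:

```python
import heapq

def k_way_merge(lists):
    """Merge k sorted lists into one sorted list in O(n log k)."""
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    merged = []
    while heap:
        value, i, j = heapq.heappop(heap)
        merged.append(value)
        if j + 1 < len(lists[i]):
            # Push the next element from the list the popped value came from.
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return merged

print(k_way_merge([[1, 4, 7], [2, 5], [3, 6, 8]]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```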
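
A memoization sketch using `functools.lru_cache` to cache results of previously seen inputs:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down recursion; the cache avoids recomputing overlapping subproblems."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(50))  # 12586269025, computed quickly thanks to caching
```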
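
A merge intervals sketch: sort by start time, then either extend the last merged interval or start a new one:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals."""
    intervals.sort(key=lambda iv: iv[0])
    merged = []
    for start, end in intervals:
        if merged and merged[-1][1] >= start:          # overlaps the previous interval
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

print(merge_intervals([[1, 4], [2, 5], [7, 9]]))  # [[1, 5], [7, 9]]
```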
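
A monotonic stack sketch for the "next greater element" problem; the stack holds indices whose values are strictly decreasing:

```python
def next_greater(nums):
    """For each element, find the next element to its right that is greater (-1 if none)."""
    result = [-1] * len(nums)
    stack = []                            # indices with decreasing values
    for i, value in enumerate(nums):
        while stack and nums[stack[-1]] < value:
            result[stack.pop()] = value   # value is the next greater element
        stack.append(i)
    return result

print(next_greater([2, 1, 5, 3, 4]))  # [5, 5, -1, 4, -1]
```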
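
The classic recursion example, with an explicit base case to stop the chain of self-calls:

```python
def factorial(n):
    """Recursion: solve a smaller instance of the same problem, stop at the base case."""
    if n <= 1:                       # base case
        return 1
    return n * factorial(n - 1)      # smaller instance

print(factorial(5))  # 120
```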
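
One common search sketch is binary search over a sorted list, halving the candidate range on every step:

```python
def binary_search(sorted_nums, target):
    """Binary search on a sorted list; returns an index or -1 if absent."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        if sorted_nums[mid] < target:
            lo = mid + 1     # target can only be in the right half
        else:
            hi = mid - 1     # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```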
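
A fixed-size sliding window sketch that finds the maximum sum of any window of size k without recomputing each sum from scratch:

```python
def max_window_sum(nums, k):
    """Maximum sum of any contiguous window of size k."""
    window = sum(nums[:k])
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]   # slide: add the new element, drop the oldest
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9 (window [5, 1, 3])
```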
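
A small sorting example using Python's built-in sort with a custom key, a common preprocessing step for other patterns:

```python
records = [("bob", 30), ("alice", 25), ("carol", 35)]

# Sort by the second field (age), ascending and then descending.
print(sorted(records, key=lambda r: r[1]))
print(sorted(records, key=lambda r: r[1], reverse=True))
```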
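
A subsets (power set) sketch that extends every existing subset with each new element:

```python
def subsets(nums):
    """Build the power set iteratively."""
    result = [[]]
    for n in nums:
        # Every existing subset either excludes n (kept as-is) or includes it (new copy).
        result += [subset + [n] for subset in result]
    return result

print(subsets([1, 2, 3]))  # 8 subsets, from [] to [1, 2, 3]
```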
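
A top 'K' elements sketch that keeps a min-heap of size k; whatever remains in the heap is the k largest values:

```python
import heapq

def top_k(nums, k):
    """Return the k largest elements using a size-k min-heap."""
    heap = []
    for n in nums:
        heapq.heappush(heap, n)
        if len(heap) > k:
            heapq.heappop(heap)   # discard the smallest of the k+1 candidates
    return sorted(heap, reverse=True)

print(top_k([3, 1, 5, 12, 2, 11], 3))  # [12, 11, 5]
```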
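
A minimal trie sketch supporting insertion and prefix lookup:

```python
class Trie:
    def __init__(self):
        self.children = {}      # character -> child Trie node
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True     # mark the end of a complete key

    def starts_with(self, prefix):
        node = self
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

trie = Trie()
trie.insert("cat")
trie.insert("car")
print(trie.starts_with("ca"))  # True
print(trie.starts_with("do"))  # False
```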
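
A two heaps sketch for the streaming median: a max-heap (simulated with negated values) holds the lower half and a min-heap holds the upper half:

```python
import heapq

class MedianFinder:
    """Max-heap for the lower half (negated values), min-heap for the upper half."""
    def __init__(self):
        self.lower = []   # max-heap via negation
        self.upper = []   # min-heap

    def add(self, num):
        heapq.heappush(self.lower, -num)
        # Move the largest of the lower half up, then rebalance the sizes.
        heapq.heappush(self.upper, -heapq.heappop(self.lower))
        if len(self.upper) > len(self.lower):
            heapq.heappush(self.lower, -heapq.heappop(self.upper))

    def median(self):
        if len(self.lower) > len(self.upper):
            return -self.lower[0]
        return (-self.lower[0] + self.upper[0]) / 2

finder = MedianFinder()
for n in (5, 2, 8, 1):
    finder.add(n)
print(finder.median())  # 3.5
```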
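
A two pointers sketch that finds a pair with a given sum in a sorted array by moving the pointers inward from both ends:

```python
def pair_with_sum(sorted_nums, target):
    """Return a pair summing to target from a sorted list, or None."""
    left, right = 0, len(sorted_nums) - 1
    while left < right:
        total = sorted_nums[left] + sorted_nums[right]
        if total == target:
            return sorted_nums[left], sorted_nums[right]
        if total < target:
            left += 1        # need a larger sum
        else:
            right -= 1       # need a smaller sum
    return None

print(pair_with_sum([1, 2, 4, 6, 10], 8))  # (2, 6)
```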