Backtracking (Subsets/Permutations) — O(2^n) subsets, O(n!) permutations
Category: Backtracking | Difficulty: Intermediate
**Backtracking** systematically explores all possible candidates for a solution. At each step, it makes a choice, recurses, then undoes the choice (backtracks) to try the next option. Early pruning skips invalid branches.
Key Points for Backtracking (Subsets/Permutations)
- Explore all candidates, backtrack when constraint is violated
- For subsets: at each index, choose to include or exclude
- For permutations: swap elements or use a visited array
- Prune early to avoid unnecessary exploration
- Use a path/current list, add to result when complete, then undo
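The include/exclude choice and the undo step can be sketched as follows (a minimal subsets example, not tied to any particular problem):

```python
def subsets(nums):
    """Generate all subsets via include/exclude backtracking."""
    res, path = [], []

    def backtrack(i):
        if i == len(nums):
            res.append(path[:])   # append a copy, not the mutable path itself
            return
        backtrack(i + 1)          # choice 1: exclude nums[i]
        path.append(nums[i])      # choice 2: include nums[i]
        backtrack(i + 1)
        path.pop()                # undo the choice (backtrack)

    backtrack(0)
    return res
```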
Common Mistakes with Backtracking (Subsets/Permutations)
- Forgetting to undo the choice (backtrack) after recursion
- Not skipping duplicates when input has repeated elements
- Modifying the result list reference instead of copying it
Related Patterns
DFS Template, 2D Dynamic Programming
BFS Template — O(V + E) time, O(V) space
Category: Graph | Difficulty: Basic
**Breadth-First Search (BFS)** explores a graph level by level using a queue. It visits all neighbors at distance d before moving to distance d+1, guaranteeing the shortest path in unweighted graphs.
Key Points for BFS Template
- Use a queue (FIFO) to process nodes level by level
- Mark nodes as visited before enqueueing to avoid duplicates
- Track distance/level with a counter or tuple in the queue
- Guarantees shortest path in unweighted graphs
- Can be applied to grids by treating cells as nodes
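A minimal sketch of the template, assuming the graph is a dict mapping each node to a neighbor list:

```python
from collections import deque

def bfs_distances(graph, source):
    """Level-by-level BFS; returns shortest hop counts in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in dist:            # mark visited before enqueueing
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist
```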
Common Mistakes with BFS Template
- Marking visited after dequeue instead of before enqueue (causes duplicates)
- Not handling disconnected components
- Forgetting to check bounds in grid BFS
Related Patterns
DFS Template, Topological Sort, Tree BFS (Level Order)
Binary Search — O(log n) time, O(1) space
Category: Array & String | Difficulty: Basic
**Binary Search** halves the search space each iteration, finding a target or boundary condition in logarithmic time. It applies not only to sorted arrays but to any scenario with a monotonic predicate.
Key Points for Binary Search
- Requires sorted input or monotonic condition
- Use left + (right - left) // 2 to avoid overflow
- Decide which half to discard based on the condition
- Boundary variants: find leftmost/rightmost occurrence
- Can search on answer space (binary search on result)
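The leftmost-boundary variant looks like this (a sketch using the half-open `left < right` convention):

```python
def lower_bound(arr, target):
    """Leftmost index at which `target` could be inserted to keep `arr` sorted."""
    left, right = 0, len(arr)
    while left < right:
        mid = left + (right - left) // 2   # avoids overflow in fixed-width languages
        if arr[mid] < target:
            left = mid + 1                 # discard the left half entirely
        else:
            right = mid                    # keep mid as a candidate boundary
    return left
```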
Common Mistakes with Binary Search
- Integer overflow with (left + right) / 2
- Infinite loop from incorrect mid calculation or bounds update
- Off-by-one error: left <= right vs left < right
Related Patterns
Two Pointers, 1D Dynamic Programming
Bit Manipulation Patterns — O(1) per operation, O(n) for array scans
Category: Bit Manipulation | Difficulty: Intermediate
**Bit Manipulation** uses bitwise operators (AND, OR, XOR, NOT, shifts) to solve problems in O(1) per operation. Key tricks involve checking/setting/clearing individual bits, XOR for finding unique elements, and bitmask enumeration for subsets.
Key Points for Bit Manipulation Patterns
- x & (x - 1) clears the lowest set bit
- x & (-x) isolates the lowest set bit
- Check kth bit: (x >> k) & 1 returns 1 if set, 0 if not
- Set kth bit: x | (1 << k); Clear kth bit: x & ~(1 << k); Toggle: x ^ (1 << k)
- XOR of a number with itself is 0; XOR with 0 is the number
- XOR all elements to find the single unique number in a duplicate array
- Bit shifts: << multiplies by 2, >> divides by 2
- Swap without temp: a ^= b; b ^= a; a ^= b
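Two of the tricks above in code, as a brief illustration:

```python
def count_set_bits(x):
    """Count 1-bits by repeatedly clearing the lowest set bit with x & (x - 1)."""
    count = 0
    while x:
        x &= x - 1    # clears the lowest set bit
        count += 1
    return count

def single_number(nums):
    """Every element appears twice except one; XOR cancels the pairs."""
    acc = 0
    for n in nums:
        acc ^= n      # a ^ a = 0, a ^ 0 = a
    return acc
```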
Common Mistakes with Bit Manipulation Patterns
- Operator precedence: in C-family languages == binds tighter than & and |, so x & 1 == 0 parses as x & (1 == 0) — parenthesize bit tests
- Sign extension with right shift on negative numbers (use >>> in Java)
- Forgetting that Python integers have arbitrary precision (no overflow)
- Off-by-one on bit position: bit 0 is the rightmost (least significant) bit
Related Patterns
Binary Search, 1D Dynamic Programming
DFS Template — O(V + E) time, O(V) space
Category: Graph | Difficulty: Basic
**Depth-First Search (DFS)** explores as far as possible along each branch before backtracking. It uses either the call stack (recursive) or an explicit stack (iterative).
Key Points for DFS Template
- Use recursion (call stack) or an explicit stack (LIFO)
- Mark visited before recursing to avoid infinite loops
- Useful for detecting cycles, connected components, and paths
- Iterative DFS uses a stack; push neighbors in reverse so they pop in the recursive visit order
- Can track entry/exit times for advanced applications
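An iterative sketch, again assuming the graph is a dict of neighbor lists:

```python
def dfs_component(graph, start):
    """Iterative DFS collecting the connected component containing `start`."""
    visited = {start}          # mark before pushing to avoid revisits
    stack = [start]
    order = []
    while stack:
        node = stack.pop()     # LIFO: explores deep before wide
        order.append(node)
        for nxt in reversed(graph[node]):   # reversed to match recursive order
            if nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
    return order
```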
Common Mistakes with DFS Template
- Stack overflow on large graphs — use iterative DFS instead
- Not marking visited before recursion (causes infinite loops)
- Confusing DFS order with BFS shortest-path guarantees
Related Patterns
BFS Template, Topological Sort, Tree DFS (In/Pre/Post)
Dijkstra's Algorithm — O((V + E) log V) time, O(V) space
Category: Graph | Difficulty: Intermediate
**Dijkstra's Algorithm** finds the shortest path from a single source to all other vertices in a graph with non-negative edge weights. It greedily selects the unvisited node with the smallest tentative distance.
Key Points for Dijkstra's Algorithm
- Finds shortest path from source to all vertices in weighted graphs
- Uses a min-heap (priority queue) to always process the nearest unvisited node
- Does NOT work with negative edge weights — use Bellman-Ford instead
- Lazy deletion: skip nodes already finalized when popped from heap
- Can reconstruct path by tracking predecessors
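A compact heap-based sketch with lazy deletion, assuming the graph maps each node to (neighbor, weight) pairs:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from `source` in a non-negative weighted graph."""
    dist = {node: float('inf') for node in graph}   # initialize to infinity
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:          # lazy deletion: skip stale heap entries
            continue
        for nxt, w in graph[node]:
            if d + w < dist[nxt]:
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))
    return dist
```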
Common Mistakes with Dijkstra's Algorithm
- Using with negative edge weights (incorrect results)
- Not using lazy deletion (processing stale entries from the heap)
- Forgetting to initialize distances to infinity
Related Patterns
BFS Template, Union-Find
1D Dynamic Programming — O(n) time, O(n) or O(1) space
Category: Dynamic Programming | Difficulty: Intermediate
**1D Dynamic Programming** solves optimization or counting problems by breaking them into overlapping subproblems along a single dimension. Each state dp[i] captures the optimal answer considering the first i elements.
Key Points for 1D Dynamic Programming
- Define state: dp[i] = optimal answer considering first i elements
- Find recurrence: dp[i] = f(dp[i-1], dp[i-2], ...)
- Identify base cases: dp[0], dp[1], etc.
- Optimize space if dp[i] only depends on a constant number of previous states
- Bottom-up (tabulation) avoids recursion overhead; top-down (memoization) is more intuitive
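The state/recurrence/space-optimization recipe in miniature, using House Robber as the example problem:

```python
def rob(nums):
    """House Robber: dp[i] = max loot from the first i houses.

    dp[i] depends only on dp[i-1] and dp[i-2], so two variables suffice (O(1) space).
    """
    prev2, prev1 = 0, 0
    for x in nums:
        prev2, prev1 = prev1, max(prev1, prev2 + x)   # skip house vs rob it
    return prev1
```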
Common Mistakes with 1D Dynamic Programming
- Wrong base case initialization
- Off-by-one errors in the recurrence
- Not considering all transitions in the recurrence relation
Related Patterns
2D Dynamic Programming, Binary Search
2D Dynamic Programming — O(m*n) time, O(m*n) or O(n) space
Category: Dynamic Programming | Difficulty: Intermediate
**2D Dynamic Programming** extends DP to problems with two dimensions of state — such as comparing two strings, navigating grids, or choosing items with weight constraints. The table is filled based on transition dependencies.
Key Points for 2D Dynamic Programming
- State has two dimensions: dp[i][j] = optimal answer for subproblem (i, j)
- Common patterns: grid paths, string matching, knapsack
- Fill table row by row or diagonally depending on dependencies
- Space optimization: if row i depends only on row i-1, use two 1D arrays
- Base cases: first row and/or first column
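A small grid-path example showing the row-by-row fill and the rolled-row space optimization:

```python
def unique_paths(m, n):
    """Count down/right paths in an m x n grid; one rolled 1D row (O(n) space)."""
    dp = [1] * n                  # base case: a single path along the first row
    for _ in range(1, m):
        for j in range(1, n):
            dp[j] += dp[j - 1]    # from above (old dp[j]) plus from the left
    return dp[-1]
```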
Common Mistakes with 2D Dynamic Programming
- Wrong direction of table filling (must respect dependencies)
- Not initializing base cases for both dimensions
- Applying space optimization when state depends on more than the previous row
Related Patterns
1D Dynamic Programming, Backtracking (Subsets/Permutations)
Fast & Slow Pointers — O(n) time, O(1) space
Category: Linked List | Difficulty: Basic
**Fast & Slow Pointers** (Floyd's Tortoise and Hare) uses two pointers moving at different speeds to detect cycles, find midpoints, and solve linked list problems in constant space.
Key Points for Fast & Slow Pointers
- Slow moves 1 step, fast moves 2 steps per iteration
- If there's a cycle, fast and slow will meet
- When fast reaches end, slow is at the middle
- To find cycle start: reset one pointer to head after meeting
- Also known as Floyd's Tortoise and Hare algorithm
Common Mistakes with Fast & Slow Pointers
- Not checking fast and fast.next for null before advancing
- Confusing the cycle detection with cycle start detection
- Forgetting that for even-length lists, slow stops at the second middle
Related Patterns
Linked List Reversal, Two Pointers
Greedy (Intervals & Scheduling) — O(n log n) time, O(1) space
Category: Greedy | Difficulty: Intermediate
**Greedy algorithms** make locally optimal choices at each step, hoping to reach a globally optimal solution. For interval/scheduling problems, the greedy choice typically involves sorting by end time and selecting greedily.
Key Points for Greedy (Intervals & Scheduling)
- Sort by end time for maximum non-overlapping intervals
- Sort by start time for minimum meeting rooms
- Greedy choice: always pick the option that leaves the most room for future choices
- Proving greedy correctness: exchange argument or stays-ahead argument
- Often combined with sorting + heap for scheduling problems
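The sort-by-end-time choice in code, for the classic activity-selection (max non-overlapping intervals) problem:

```python
def max_non_overlapping(intervals):
    """Sort by end time, then greedily keep each interval starting after the last end."""
    count, last_end = 0, float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:     # finishing earliest leaves the most room
            count += 1
            last_end = end
    return count
```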
Common Mistakes with Greedy (Intervals & Scheduling)
- Sorting by start time when you should sort by end time (or vice versa)
- Not proving the greedy choice is optimal — greedy doesn't always work
- Confusing activity selection (max count) with interval scheduling (min rooms)
Related Patterns
Merge Intervals, Heap / Top-K Pattern
Heap / Top-K Pattern — O(n log k) time, O(k) space
Category: Heap | Difficulty: Intermediate
The **Heap / Top-K** pattern uses a heap (priority queue) to efficiently track the K largest, smallest, or most frequent elements. A min-heap of size K naturally retains the K largest items — the smallest of the "top K" is always at the root.
Key Points for Heap / Top-K Pattern
- Use a min-heap of size k to find k largest elements
- Use a max-heap of size k to find k smallest elements
- Maintain heap invariant: push then pop if size exceeds k
- For median finding, use two heaps (max-heap for lower half, min-heap for upper half)
- Python heapq is a min-heap; negate values for max-heap behavior
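The size-k min-heap invariant looks like this with heapq:

```python
import heapq

def k_largest(nums, k):
    """Min-heap of size k: the root is always the smallest of the current top k."""
    heap = []
    for n in nums:
        heapq.heappush(heap, n)
        if len(heap) > k:          # push then pop keeps the invariant
            heapq.heappop(heap)
    return sorted(heap, reverse=True)
```

Note that when k exceeds the array length, the heap simply never overflows and every element is returned.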
Common Mistakes with Heap / Top-K Pattern
- Using a max-heap for top-K largest (should use min-heap of size k)
- Forgetting to negate values in Python for max-heap simulation
- Not handling the case when k > array length
Related Patterns
1D Dynamic Programming, Binary Search
Kadane's Algorithm — O(n) time, O(1) space
Category: Array & String | Difficulty: Basic
**Kadane's Algorithm** finds the contiguous subarray with the maximum sum in O(n) time. It's essentially 1D DP with space optimization — at each position, decide whether to extend the current subarray or start fresh.
Key Points for Kadane's Algorithm
- Track current subarray sum and global maximum
- Reset current sum to 0 (or current element) when it goes negative
- Works in a single pass — no need for prefix sums
- Can be extended to track the subarray boundaries (start/end indices)
- Handles all-negative arrays by initializing max to first element
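The extend-or-restart decision, including the all-negative initialization:

```python
def max_subarray(nums):
    """Kadane: at each element, either extend the running sum or start fresh."""
    best = cur = nums[0]          # init to nums[0] so all-negative arrays work
    for x in nums[1:]:
        cur = max(x, cur + x)     # start fresh vs extend the current subarray
        best = max(best, cur)
    return best
```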
Common Mistakes with Kadane's Algorithm
- Initializing max_sum to 0 instead of arr[0] (fails on all-negative arrays)
- Forgetting to handle the empty array edge case
- Not resetting current_sum correctly — should compare element vs current_sum + element
Related Patterns
Prefix Sum, Sliding Window, 1D Dynamic Programming
Linked List Reversal — O(n) time, O(1) space
Category: Linked List | Difficulty: Basic
**Linked List Reversal** is a fundamental operation that changes the direction of pointers in a linked list. It forms the basis for many interview problems including palindrome checks, reorder list, and reverse in k-groups.
Key Points for Linked List Reversal
- Use three pointers: prev, current, next
- Save next before breaking the link
- Iterative approach is preferred for O(1) space
- For partial reversal, track the node before the segment
- Recursive approach uses O(n) call stack space
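The three-pointer dance in code (the `ListNode` class is a generic stand-in):

```python
class ListNode:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def reverse_list(head):
    """Iterative three-pointer reversal; returns the new head (the old tail)."""
    prev, cur = None, head
    while cur:
        nxt = cur.next        # save next before breaking the link
        cur.next = prev       # reverse the pointer
        prev, cur = cur, nxt  # advance both
    return prev
```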
Common Mistakes with Linked List Reversal
- Losing reference to the next node before reassigning
- Not returning the new head (previously the tail)
- Off-by-one when reversing a sub-section of the list
Related Patterns
Fast & Slow Pointers, Two Pointers
Lowest Common Ancestor — O(n) time, O(h) space
Category: Tree | Difficulty: Intermediate
**Lowest Common Ancestor (LCA)** finds the deepest node in a tree that is an ancestor of two given nodes. The recursive approach checks both subtrees — if both return non-null, the current node is the LCA.
Key Points for Lowest Common Ancestor
- LCA is the deepest node that is an ancestor of both target nodes
- Recursive approach: if current node is p or q, return it
- If left and right subtrees both return non-null, current node is the LCA
- For BST: use value comparisons to go left or right
- Binary lifting pre-processes for O(log n) per query with multiple queries
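The recursive check described above, for a general binary tree (node identity, not values):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lca(root, p, q):
    """If both subtrees report a hit, the current node is the LCA."""
    if root is None or root is p or root is q:
        return root
    left = lca(root.left, p, q)
    right = lca(root.right, p, q)
    if left and right:
        return root
    return left or right   # also covers "one node is an ancestor of the other"
```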
Common Mistakes with Lowest Common Ancestor
- Not handling the case where one node is an ancestor of the other
- Confusing the BST variant (compare values) with the general variant (check identity)
- Forgetting that both p and q are guaranteed to exist in the tree
Related Patterns
Tree DFS (In/Pre/Post), DFS Template
LRU Cache — O(1) get/put, O(capacity) space
Category: Design Patterns | Difficulty: Advanced
The **LRU (Least Recently Used) Cache** is a design pattern that evicts the least recently accessed item when the cache reaches capacity. It combines a hash map for O(1) key lookup with a doubly linked list for O(1) order maintenance.
Key Points for LRU Cache
- Combine a hash map with a doubly linked list
- Hash map provides O(1) lookup by key
- Doubly linked list maintains access order for O(1) eviction
- On access: move node to the front (most recently used)
- On capacity overflow: remove from the tail (least recently used)
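A compact sketch using Python's OrderedDict, which is itself a hash map backed by a doubly linked list, so it stands in for the hand-rolled combination described above:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # a get also refreshes recency
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
```

In an interview you may be asked to build the doubly linked list by hand; the operations map one-to-one onto `move_to_end` and `popitem`.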
Common Mistakes with LRU Cache
- Not updating the linked list on get (only on put)
- Forgetting to remove the evicted key from the hash map
- Off-by-one in capacity management
Related Patterns
Linked List Reversal, Heap / Top-K Pattern
Merge Intervals — O(n log n) time, O(n) space
Category: Array & String | Difficulty: Intermediate
**Merge Intervals** is a foundational pattern for handling overlapping ranges. Sort by start time, then greedily merge adjacent intervals whose ranges overlap.
Key Points for Merge Intervals
- Sort intervals by start time first
- Compare current interval's start with previous interval's end
- Merge overlapping intervals by extending the end time
- For insertion, find the correct position then merge surrounding overlaps
- Sweep line variant: process start/end events separately
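The sort-then-extend loop in code:

```python
def merge(intervals):
    """Sort by start, then extend the last merged interval when ranges overlap."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:          # overlaps the previous end
            merged[-1][1] = max(merged[-1][1], end)    # max handles full containment
        else:
            merged.append([start, end])
    return merged
```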
Common Mistakes with Merge Intervals
- Forgetting to sort the intervals first
- Using start of next instead of end of current for overlap check
- Not handling the edge case where one interval fully contains another
Related Patterns
Two Pointers, Greedy (Intervals & Scheduling)
Monotonic Queue — O(n) time, O(k) space
Category: Stack & Queue | Difficulty: Intermediate
A **Monotonic Queue** (monotonic deque) maintains elements in sorted order within a sliding window, enabling O(1) access to the window's maximum or minimum. Elements are added at the back and can be removed from both ends.
Key Points for Monotonic Queue
- Deque maintains elements in decreasing (or increasing) order
- Front of deque always holds the current window's max (or min)
- Remove from back when a new element violates monotonic order
- Remove from front when the element falls outside the window
- Each element is added and removed at most once — amortized O(1)
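The classic sliding-window-maximum application, storing indices so window bounds can be checked:

```python
from collections import deque

def sliding_window_max(nums, k):
    """Decreasing deque of indices; the front always holds the window max."""
    dq, out = deque(), []
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] <= x:   # drop smaller elements from the back
            dq.pop()
        dq.append(i)
        if dq[0] <= i - k:                # front fell outside the window
            dq.popleft()
        if i >= k - 1:
            out.append(nums[dq[0]])
    return out
```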
Common Mistakes with Monotonic Queue
- Not removing expired elements from the front of the deque
- Storing values instead of indices (can't check window bounds)
- Confusing with monotonic stack — queue maintains sliding window, stack does not
Related Patterns
Monotonic Stack, Sliding Window
Monotonic Stack — O(n) time, O(n) space
Category: Stack & Queue | Difficulty: Intermediate
A **Monotonic Stack** maintains elements in sorted (increasing or decreasing) order. When a new element violates the order, elements are popped until the property is restored. Since each element enters and exits the stack at most once, the total work is O(n).
Key Points for Monotonic Stack
- Maintain a stack in strictly increasing or decreasing order
- Pop elements that violate the monotonic property
- Each element is pushed and popped at most once — total O(n)
- Useful for finding next greater/smaller element
- Can store indices instead of values for more information
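A next-greater-element sketch illustrating the pop-until-restored loop:

```python
def next_greater(nums):
    """Indices wait on a decreasing stack until a larger value resolves them."""
    res = [-1] * len(nums)    # elements left on the stack stay -1
    stack = []                # holds indices whose values are decreasing
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            res[stack.pop()] = x          # x is the next greater for that index
        stack.append(i)
    return res
```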
Common Mistakes with Monotonic Stack
- Confusing when to use increasing vs decreasing stack
- Not handling remaining elements in the stack after iteration
- Using values instead of indices when position information is needed
Related Patterns
Sliding Window, Two Pointers
Prefix Sum — O(n) build, O(1) query
Category: Array & String | Difficulty: Basic
The **Prefix Sum** pattern pre-computes cumulative sums so that any range sum query can be answered in constant time. It extends beyond sums to any associative operation (XOR, product, etc.).
Key Points for Prefix Sum
- Build prefix array where prefix[i] = sum of arr[0..i-1]
- Range sum query: sum(l, r) = prefix[r+1] - prefix[l]
- Use hash map for subarray sum equals k problems
- Works with XOR, product, and other associative operations
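The hash-map variant for "subarray sum equals k", which works even with negative numbers:

```python
def subarray_sum_count(nums, k):
    """Count subarrays summing to k using running prefix sums and a hash map."""
    seen = {0: 1}             # the empty prefix, so prefixes equal to k count
    total = count = 0
    for x in nums:
        total += x
        count += seen.get(total - k, 0)   # earlier prefixes completing a k-sum
        seen[total] = seen.get(total, 0) + 1
    return count
```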
Common Mistakes with Prefix Sum
- Off-by-one error in prefix array indexing
- Forgetting to initialize hash map with {0: 1} for subarray sum
- Not considering negative numbers in subarray sum problems
Related Patterns
Sliding Window, Two Pointers
Segment Tree — O(n) build, O(log n) query/update
Category: Segment Tree | Difficulty: Advanced
A **Segment Tree** is a binary tree that stores aggregate information (sum, min, max, GCD) for array segments. It supports point updates and range queries in O(log n), and with **lazy propagation**, range updates also run in O(log n).
Key Points for Segment Tree
- Binary tree where each node stores aggregate info for a range
- Build in O(n), query and point update in O(log n)
- Lazy propagation enables O(log n) range updates
- Tree is stored in a flat array of size 4*n for simplicity
- Supports sum, min, max, GCD, or any associative operation
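A range-sum sketch with point updates (lazy propagation omitted for brevity):

```python
class SegmentTree:
    """Range-sum segment tree stored in a flat array of size 4*n."""
    def __init__(self, arr):
        self.n = len(arr)
        self.tree = [0] * (4 * self.n)
        self._build(arr, 1, 0, self.n - 1)

    def _build(self, arr, node, lo, hi):
        if lo == hi:
            self.tree[node] = arr[lo]
            return
        mid = (lo + hi) // 2
        self._build(arr, 2 * node, lo, mid)
        self._build(arr, 2 * node + 1, mid + 1, hi)
        self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]

    def update(self, i, val, node=1, lo=0, hi=None):
        if hi is None:
            hi = self.n - 1
        if lo == hi:
            self.tree[node] = val
            return
        mid = (lo + hi) // 2
        if i <= mid:
            self.update(i, val, 2 * node, lo, mid)
        else:
            self.update(i, val, 2 * node + 1, mid + 1, hi)
        self.tree[node] = self.tree[2 * node] + self.tree[2 * node + 1]

    def query(self, l, r, node=1, lo=0, hi=None):
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:        # no overlap
            return 0
        if l <= lo and hi <= r:     # full overlap
            return self.tree[node]
        mid = (lo + hi) // 2
        return (self.query(l, r, 2 * node, lo, mid)
                + self.query(l, r, 2 * node + 1, mid + 1, hi))
```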
Common Mistakes with Segment Tree
- Array size too small — use 4*n to be safe
- Forgetting to push down lazy values before querying children
- Off-by-one errors in the build/query range boundaries
Related Patterns
Prefix Sum, Binary Search, Tree DFS (In/Pre/Post)
Sliding Window — O(n) time, O(k) space
Category: Two Pointers & Sliding Window | Difficulty: Basic
The **Sliding Window** pattern maintains a contiguous subarray or substring that expands and contracts to find an optimal range satisfying given constraints.
Key Points for Sliding Window
- Maintain a window [left, right] that expands and contracts
- Expand right to include new elements, shrink left to maintain constraints
- Track window state with a hash map or counter
- Fixed-size windows slide by moving both pointers together
- Variable-size windows grow/shrink based on conditions
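A variable-size window in code, for the classic longest-substring-without-repeats problem:

```python
def longest_unique_substring(s):
    """Expand right; shrink left until the duplicate leaves the window."""
    counts = {}
    left = best = 0
    for right, ch in enumerate(s):
        counts[ch] = counts.get(ch, 0) + 1
        while counts[ch] > 1:          # constraint violated: duplicate in window
            counts[s[left]] -= 1       # update window state while shrinking
            left += 1
        best = max(best, right - left + 1)
    return best
```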
Common Mistakes with Sliding Window
- Not updating window state when shrinking from the left
- Off-by-one errors when calculating window size
- Forgetting to handle the empty input case
Related Patterns
Two Pointers, Prefix Sum
Topological Sort — O(V + E) time, O(V) space
Category: Graph | Difficulty: Intermediate
**Topological Sort** produces a linear ordering of vertices in a DAG such that for every directed edge (u, v), u comes before v. It is essential for dependency resolution.
Key Points for Topological Sort
- Only works on Directed Acyclic Graphs (DAGs)
- Kahn's algorithm uses in-degree counting + BFS
- DFS approach records nodes in reverse post-order
- If result size < number of nodes, a cycle exists
- Multiple valid orderings may exist
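Kahn's algorithm as a sketch, assuming the graph maps every node to its list of successors:

```python
from collections import deque

def topo_sort(graph):
    """Kahn's algorithm; returns [] when a cycle makes ordering impossible."""
    indeg = {node: 0 for node in graph}       # initialize in-degree for all nodes
    for node in graph:
        for nxt in graph[node]:
            indeg[nxt] += 1
    queue = deque(n for n in graph if indeg[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return order if len(order) == len(graph) else []   # short result means a cycle
```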
Common Mistakes with Topological Sort
- Applying to graphs with cycles (no valid ordering exists)
- Forgetting to detect cycles via result size check
- Not initializing in-degree array for all nodes
Related Patterns
BFS Template, DFS Template
Tree BFS (Level Order) — O(n) time, O(w) space
Category: Tree | Difficulty: Basic
**Tree BFS (Level Order)** traverses a binary tree one level at a time, processing all nodes at depth d before moving to depth d+1. It uses a queue and processes nodes in batches equal to the level size.
Key Points for Tree BFS (Level Order)
- Use a queue to process nodes level by level
- Track level boundaries using queue size at each level
- Width w is the maximum number of nodes at any level (up to n/2)
- Can collect values per level into sublists
- Useful for zigzag, right-side view, and average-per-level problems
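The queue-size snapshot that marks each level boundary:

```python
from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    """Collect node values level by level into sublists."""
    if root is None:                     # handle empty tree
        return []
    out, queue = [], deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):      # size snapshot = exactly one level
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        out.append(level)
    return out
```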
Common Mistakes with Tree BFS (Level Order)
- Not capturing queue size before processing the level
- Mixing up BFS level processing with regular queue drain
- Forgetting to handle empty tree (null root)
Related Patterns
Tree DFS (In/Pre/Post), BFS Template
Tree DFS (In/Pre/Post) — O(n) time, O(h) space
Category: Tree | Difficulty: Basic
**Tree DFS** traversals visit every node in a binary tree using depth-first order. The three classic orderings — pre-order, in-order, and post-order — determine when the current node is processed relative to its subtrees.
Key Points for Tree DFS (In/Pre/Post)
- Pre-order: process node BEFORE children (root-left-right)
- In-order: process node BETWEEN children (left-root-right) — gives sorted order in BST
- Post-order: process node AFTER children (left-right-root)
- Recursive approach mirrors the traversal naturally
- Space is O(h) where h = tree height; O(log n) balanced, O(n) worst case
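Two of the orderings as compact recursions (list concatenation keeps the sketch short; an accumulator is more efficient in practice):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(root):
    """Left-root-right; on a BST this yields values in sorted order."""
    if root is None:                     # null base case
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)

def preorder(root):
    """Root-left-right: process the node before its children."""
    if root is None:
        return []
    return [root.val] + preorder(root.left) + preorder(root.right)
```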
Common Mistakes with Tree DFS (In/Pre/Post)
- Confusing pre/in/post ordering for specific problems
- Not handling the null/None base case
- Forgetting that in-order on BST gives sorted output
Related Patterns
Tree BFS (Level Order), DFS Template
Trie — O(L) per operation, O(N*L) space
Category: Trie & Strings | Difficulty: Intermediate
A **Trie** (prefix tree) is a tree-like data structure that stores strings character by character. Each path from root to a marked node represents a word. Tries excel at prefix-based operations.
Key Points for Trie
- Each node stores children (hash map or array of 26 for lowercase)
- Mark end-of-word with a boolean flag
- Insert, search, and prefix search all run in O(L) where L = word length
- Space-efficient when many words share common prefixes
- Can be extended with counts, links to other data structures, etc.
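A hash-map-based sketch; the `'$'` sentinel key here is an arbitrary end-of-word marker:

```python
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True                # end-of-word flag

    def search(self, word):
        """Exact match: the path must exist AND end at a flagged node."""
        node = self._walk(word)
        return node is not None and '$' in node

    def starts_with(self, prefix):
        """Prefix match: the path merely has to exist."""
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            if ch not in node:
                return None
            node = node[ch]
        return node
```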
Common Mistakes with Trie
- Not marking end-of-word (after inserting only 'apple', a search for 'app' would falsely succeed without the flag)
- Confusing search (exact match) with startsWith (prefix match)
- Memory overhead when words have few common prefixes
Related Patterns
Backtracking (Subsets/Permutations), DFS Template
Two Pointers — O(n) time, O(1) space
Category: Two Pointers & Sliding Window | Difficulty: Basic
The **Two Pointers** technique uses two references that traverse a data structure — typically from opposite ends or in the same direction — to solve problems in linear time without extra space.
Key Points for Two Pointers
- Initialize pointers at opposite ends or same start
- Move based on comparison with target
- Handle duplicates by skipping equal elements
- Works on sorted arrays for pair-finding problems
- Can also use same-direction pointers for partitioning
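The opposite-end variant for pair finding in a sorted array:

```python
def two_sum_sorted(nums, target):
    """Return indices of the pair summing to target, or None if absent."""
    left, right = 0, len(nums) - 1
    while left < right:
        s = nums[left] + nums[right]
        if s == target:
            return (left, right)
        if s < target:
            left += 1       # need a larger sum
        else:
            right -= 1      # need a smaller sum
    return None
```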
Common Mistakes with Two Pointers
- Infinite loop when both pointers aren't advancing
- Not handling duplicate elements properly
- Forgetting to sort the array when required
Related Patterns
Sliding Window, Binary Search
Union-Find — O(α(n)) per operation, O(n) space
Category: Graph | Difficulty: Intermediate
**Union-Find** (Disjoint Set Union) efficiently tracks dynamic connectivity — whether two elements belong to the same group — and merges groups. With path compression and union by rank, each operation runs in nearly constant time.
Key Points for Union-Find
- Path compression flattens the tree on find operations
- Union by rank/size keeps trees balanced
- Near-constant amortized time per operation — O(α(n))
- Great for dynamic connectivity queries
- Track number of connected components with a counter
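Both optimizations plus the component counter in one sketch:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.components = n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the groups of a and b; returns False if already connected."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.size[ra] < self.size[rb]:   # union by size keeps trees flat
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.components -= 1
        return True
```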
Common Mistakes with Union-Find
- Forgetting path compression (degrades to O(n) per find)
- Not using union by rank/size (creates tall trees)
- Confusing find(x) with x — always use find to get the root
Related Patterns
BFS Template, DFS Template