LRU Cache
The Intuition
Picture a small desk that can hold exactly 5 books. You're studying, and whenever you need a book, you pull it out and place it on top of the stack. The book you used most recently is always on top. The book you haven't touched in the longest time slowly sinks to the bottom as other books pile on.
Now your desk is full, all 5 slots taken, and you need a new book from the shelf. Which book do you remove? The one at the very bottom, the one you haven't touched in the longest time. You toss it back on the shelf, put your new book on top, and keep going. That's the entire LRU eviction policy. "Least recently used" just means "the book that sank to the bottom."
A doubly linked list IS that stack of books. Moving a book to the top means unlinking it from wherever it sits and re-linking it at the head. Removing the bottom book means unlinking the tail. Both operations take constant time because you just rewire a few pointers. No shifting, no scanning.
But there's a problem. If someone asks "do you have the algorithms book on your desk?" you'd have to dig through the entire stack to find it. That's O(n) and defeats the purpose. So you keep a hashmap on the side: book title maps to its exact position in the stack. Now you can jump straight to any book in O(1), unlink it, and move it to the top. The hashmap gives you instant lookup. The linked list gives you instant reordering and eviction. Together, they make both get and put O(1).
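The two-structure idea can be seen in miniature with Python's `OrderedDict`, which is itself a hash map fused with a linked list: lookups are O(1), and `move_to_end` / `popitem(last=False)` are the "move book to top" and "remove bottom book" operations. A minimal sketch of a capacity-3 cache in action:

```python
from collections import OrderedDict

cache = OrderedDict()
capacity = 3

# Fill the desk: order is insertion order, oldest first.
for key in ["a", "b", "c"]:
    cache[key] = key.upper()

cache.move_to_end("a")       # "touch" a: it becomes most recently used
cache["d"] = "D"             # desk is now over capacity

if len(cache) > capacity:
    cache.popitem(last=False)  # evict the least recently used: "b"

print(list(cache))  # ['c', 'a', 'd']
```

Note that evicting "b" rather than "a" is exactly the point: "a" was saved by the touch, even though it was inserted first.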
When to use: the LRU Cache design question, page replacement algorithms, implementing caching layers, or any problem requiring bounded-size storage that evicts the least-recently-used item.
When not to use: eviction should be frequency-based (use LFU), cache size is unbounded (a plain hash map suffices), or you need TTL-based expiration (use a different data structure with timers).
Variations:
- OrderedDict shortcut (Python): Python's OrderedDict handles ordering natively.
- Manual doubly linked list: Required in interviews to demonstrate understanding.
- LFU Cache: Evict the least frequently used item. Uses a frequency map plus linked lists per frequency.
- TTL extension: Add time-to-live for automatic expiration.
Edge Cases:
- Cache capacity is 1 (every new insert evicts the previous entry)
- Get called on a non-existent key (return -1)
- Put called with a key that already exists (update value, refresh order)
- Capacity exceeded by one insert (evict least recently used)
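For the LFU variation, here is a minimal sketch (class name `LFUCache` and its helpers are illustrative, not from the template below) of the standard "frequency map plus one ordered bucket per frequency" design, where each bucket preserves LRU order to break frequency ties:

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    """Sketch: evict the least frequently used key; ties broken by recency."""

    def __init__(self, capacity):
        self.cap = capacity
        self.vals = {}                            # key -> value
        self.freq = {}                            # key -> use count
        self.buckets = defaultdict(OrderedDict)   # count -> keys in LRU order
        self.min_freq = 0

    def _touch(self, key):
        """Move key from its current frequency bucket to the next one up."""
        f = self.freq[key]
        del self.buckets[f][key]
        if not self.buckets[f] and self.min_freq == f:
            self.min_freq = f + 1
        self.freq[key] = f + 1
        self.buckets[f + 1][key] = None

    def get(self, key):
        if key not in self.vals:
            return -1
        self._touch(key)
        return self.vals[key]

    def put(self, key, value):
        if self.cap <= 0:
            return
        if key in self.vals:
            self.vals[key] = value
            self._touch(key)
            return
        if len(self.vals) >= self.cap:
            # Evict the LRU key among those with the lowest frequency.
            evicted, _ = self.buckets[self.min_freq].popitem(last=False)
            del self.vals[evicted]
            del self.freq[evicted]
        self.vals[key] = value
        self.freq[key] = 1
        self.buckets[1][key] = None
        self.min_freq = 1

cache = LFUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" now has a higher use count than "b"
cache.put("c", 3)      # evicts "b", the least frequently used
print(cache.get("b"))  # -1
```

The extra `min_freq` bookkeeping is what keeps eviction O(1); without it you would scan frequencies to find the lowest non-empty bucket.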
Key Points
- Combine a hash map with a doubly linked list
- Hash map provides O(1) lookup by key
- Doubly linked list maintains access order for O(1) eviction
- On access: move the node to the front (most recently used)
- On capacity overflow: remove from the tail (least recently used)
Code Template
from collections import OrderedDict

class LRUCache:
    """LRU Cache using OrderedDict (Python shortcut)."""

    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used


# Manual implementation with doubly linked list
class Node:
    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None


class LRUCacheManual:
    def __init__(self, capacity):
        self.cap = capacity
        self.cache = {}  # key -> Node
        # Sentinel head and tail nodes eliminate edge cases at the ends.
        self.head, self.tail = Node(), Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self._remove(node)
        self._add_front(node)  # refresh recency on every access
        return node.val

    def put(self, key, value):
        if key in self.cache:
            self._remove(self.cache[key])
        node = Node(key, value)
        self._add_front(node)
        self.cache[key] = node
        if len(self.cache) > self.cap:
            lru = self.tail.prev  # node just before the tail sentinel
            self._remove(lru)
            del self.cache[lru.key]

Common Mistakes
- Not updating the linked list on get (only on put)
- Forgetting to remove the evicted key from the hash map
- Off-by-one in capacity management
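The first mistake is worth seeing in action. Below is a deliberately buggy sketch (the class name `BuggyLRU` is illustrative) whose get skips the recency update, so a recently read key gets evicted anyway:

```python
from collections import OrderedDict

class BuggyLRU:
    """Mistake #1: get() returns the value but never refreshes recency."""

    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, key):
        return self.cache.get(key, -1)  # BUG: missing move_to_end(key)

    def put(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)

c = BuggyLRU(2)
c.put(1, 1)
c.put(2, 2)
c.get(1)          # should make key 1 most recent, but doesn't
c.put(3, 3)       # evicts key 1 instead of key 2
print(c.get(1))   # -1 (a correct LRU cache would return 1)
```

A correct implementation must treat reads and writes identically with respect to recency: both move the key to the most-recently-used position.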