Inventory Reservation with Concurrent Holds
Booking system: 100 concurrent users grab the last seat. Only one wins. The losers see 'sold out' before completing checkout. Pattern: optimistic locking with version-stamped CAS, OR pessimistic per-item lock, OR atomic decrement on a Redis counter.
The problem
Build the reservation engine for concert tickets / hotel rooms / airline seats. 1000 concurrent users want the last 10 seats. Constraints:
- No oversells, never sell the same seat twice.
- No undersells, if 10 seats exist, all 10 should be sellable.
- Soft holds, user puts seat in cart for N minutes; releases if not confirmed.
- High concurrency, last-second buying is bursty.
What this tests
Decomposing the problem into atomic primitives plus a state machine (available → held → sold or expired). Senior candidates discuss optimistic vs. pessimistic locking, soft holds, distributed locking, and idempotency.
The state machine
Every seat (or room, or any inventory unit) is in exactly one of three states. Reads are common (the seat map is shown to every user looking at the page), writes are rare but contended (last few seats with hundreds of users racing for them). Each state transition must be atomic: no in-between state where the seat is neither held nor available.
What each transition is doing:
| From | To | Trigger | Required guarantee |
|---|---|---|---|
| Available | Held | User clicks "reserve" | Exactly one user wins the seat under contention |
| Held | Sold | Payment succeeded for that hold | The seat must still belong to this user (TTL not expired) |
| Held | Available | TTL fired without a confirm | Time-based, runs even with no user action |
| Held | Available | User cancelled the hold explicitly | Same as expire, but user-initiated |
The implementation has to make every one of these transitions atomic against any other transition on the same seat. Two users cannot both move a seat from Available to Held. A confirm cannot succeed on a seat that has already expired. Any in-between state visible to a reader is a bug. The data structure also has to support concurrent reads (every page view reads the map) alongside atomic writes on the small number of hot seats that everyone is trying to grab.
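As a minimal sketch of those guarantees (class and method names are illustrative, not from a real library), the per-seat state machine can be enforced with a single compare-and-set on an enum, so no reader ever observes an in-between state:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch: each seat carries one atomic state, and every
// transition is a single compareAndSet, so no two transitions on the
// same seat can both succeed and readers always see a valid state.
class Seat {
    enum State { AVAILABLE, HELD, SOLD }

    private final AtomicReference<State> state = new AtomicReference<>(State.AVAILABLE);

    boolean hold()    { return state.compareAndSet(State.AVAILABLE, State.HELD); }  // user clicks "reserve"
    boolean confirm() { return state.compareAndSet(State.HELD, State.SOLD); }       // payment succeeded
    boolean release() { return state.compareAndSet(State.HELD, State.AVAILABLE); }  // TTL expiry or cancel
    State current()   { return state.get(); }  // cheap concurrent read for the seat map
}
```

Under contention, exactly one `hold()` wins: `compareAndSet` succeeds for at most one caller per transition, which is the "exactly one user wins the seat" guarantee from the table.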
Production patterns
What real ticketing services do
- Virtual queue: cap concurrent attempts at the inventory layer. 1M users → 50K in queue → 10K through to inventory.
- Soft holds with short TTL: 3-10 minutes typical. Forces commit-or-release.
- Idempotency keys: client-supplied; the server stores the first response keyed by the idempotency key. Repeats return the same answer.
- Optimistic on warm path, pessimistic on hot keys: Stripe, Shopify both adapt strategy based on observed contention.
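A minimal sketch of the idempotency-key pattern from the list above (the class name and generic shape are illustrative): the server runs the operation at most once per key and replays the stored first response for any repeat.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative sketch: computeIfAbsent invokes the operation at most once
// per key; concurrent and later retries with the same key get the stored
// first response instead of re-executing the reservation.
class IdempotencyCache<R> {
    private final ConcurrentHashMap<String, R> responses = new ConcurrentHashMap<>();

    R execute(String idempotencyKey, Supplier<R> operation) {
        return responses.computeIfAbsent(idempotencyKey, k -> operation.get());
    }
}
```

A production version would also persist keys durably and expire them; the in-memory map only shows the replay semantics.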
When this is hard
When the inventory is in multiple stores (e.g., warehouses across regions). Now distributed coordination is required: saga pattern, two-phase commit, or eventual consistency with conflict resolution. Way beyond a single-node mutex.
What kills naive solutions
"I'll use a single mutex for all inventory." This serializes every reservation through one lock. At 10K req/sec, the mutex becomes the bottleneck and tail latency explodes. Sharding by item ID lets unrelated items proceed in parallel.
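A sketch of that sharding idea (illustrative names): keep one lock per item in a concurrent map, so contended items serialize only against themselves and unrelated items never touch each other's lock.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of per-item (pessimistic) locking: reservations for
// different item IDs proceed in parallel; only same-item calls serialize.
class ShardedInventory {
    private final ConcurrentHashMap<String, Integer> stock = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    ShardedInventory(Map<String, Integer> initial) { stock.putAll(initial); }

    boolean reserve(String itemId, int quantity) {
        ReentrantLock lock = locks.computeIfAbsent(itemId, k -> new ReentrantLock());
        lock.lock();                                   // contention limited to this one item
        try {
            int current = stock.getOrDefault(itemId, 0);
            if (current < quantity) return false;      // not enough stock
            stock.put(itemId, current - quantity);
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```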
Implementations
Each item has an AtomicInteger inventory count. To reserve: read the count, check availability, then CAS-decrement. On CAS failure, retry. The classic compare-and-swap retry loop.
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class Inventory {
    private final ConcurrentHashMap<String, AtomicInteger> stock = new ConcurrentHashMap<>();

    public boolean reserve(String itemId, int quantity) {
        AtomicInteger count = stock.computeIfAbsent(itemId, k -> new AtomicInteger(0));
        while (true) {
            int current = count.get();
            if (current < quantity) return false;       // not enough stock
            if (count.compareAndSet(current, current - quantity)) return true;
            // CAS failed: another thread changed the count; re-read and retry
        }
    }

    public void release(String itemId, int quantity) {
        AtomicInteger count = stock.get(itemId);
        if (count != null) count.addAndGet(quantity);   // no-op for unknown items
    }
}
```

User adds to cart → reserve for 10 minutes. If they don't check out, release. Implemented as inventory with a per-hold expiry; cleanup runs on a scheduled executor.
```java
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class BookingSystem {
    private final ConcurrentHashMap<String, AtomicInteger> stock = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<UUID, Hold> holds = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    record Hold(String itemId, int quantity, long expiresAt) {}

    public BookingSystem() {
        scheduler.scheduleAtFixedRate(this::expireHolds, 1, 1, TimeUnit.SECONDS);
    }

    public Optional<UUID> reserve(String itemId, int quantity, long ttlMs) {
        AtomicInteger count = stock.get(itemId);
        if (count == null) return Optional.empty();     // unknown item
        while (true) {
            int current = count.get();
            if (current < quantity) return Optional.empty();
            if (count.compareAndSet(current, current - quantity)) {
                UUID holdId = UUID.randomUUID();
                holds.put(holdId, new Hold(itemId, quantity, System.currentTimeMillis() + ttlMs));
                return Optional.of(holdId);
            }
            // CAS failed: retry against the fresh count
        }
    }

    public boolean confirm(UUID holdId) {
        Hold hold = holds.remove(holdId);               // atomic: confirm and expire can't both win
        if (hold == null) return false;                 // unknown, cancelled, or already expired
        if (hold.expiresAt() < System.currentTimeMillis()) {
            // Expired but not yet swept by the scheduler: restore stock, reject the confirm
            stock.get(hold.itemId()).addAndGet(hold.quantity());
            return false;
        }
        return true;                                    // hold consumed; the unit is sold
    }

    private void expireHolds() {
        long now = System.currentTimeMillis();
        holds.entrySet().removeIf(e -> {
            if (e.getValue().expiresAt() > now) return false;
            stock.get(e.getValue().itemId()).addAndGet(e.getValue().quantity()); // restore stock
            return true;
        });
    }
}
```

Key points
- Optimistic: version + CAS; read inventory, attempt the update, retry if the version changed
- Pessimistic: a lock per item; serializes high-contention items but gives cleaner semantics
- Soft holds: reserve for N minutes, expire if checkout doesn't complete
- Distributed: Redis DECR, Postgres SELECT FOR UPDATE, or two-phase commit
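The version-plus-CAS variant in the first bullet can be sketched with an immutable snapshot that carries a version number (names are illustrative; this mirrors what a database row version column does, not any specific library):

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of version-stamped optimistic concurrency: readers take
// a snapshot, compute the new state, and CAS the whole snapshot; if another
// writer published a newer version in between, the CAS fails and we re-read.
class VersionedStock {
    record Snapshot(int available, long version) {}

    private final AtomicReference<Snapshot> ref;

    VersionedStock(int initial) { ref = new AtomicReference<>(new Snapshot(initial, 0)); }

    boolean reserve(int quantity) {
        while (true) {
            Snapshot s = ref.get();                        // read state + version together
            if (s.available() < quantity) return false;    // oversell check
            Snapshot next = new Snapshot(s.available() - quantity, s.version() + 1);
            if (ref.compareAndSet(s, next)) return true;   // fails if anyone moved the version
            // lost the race: retry from the fresh snapshot
        }
    }

    int available() { return ref.get().available(); }
}
```

Because every writer publishes a fresh Snapshot object, reference-identity CAS is enough here; in a database the same check becomes `UPDATE ... WHERE version = :read_version`.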