Airbnb — Booking & Reservations
Difficulty: Intermediate
Hosts, listings, bookings, and a calendar that never double-books. The nuance hotel management misses: each property is unique with its own availability, price, and rules.
Design Patterns for Airbnb
- Strategy Pattern
- State Pattern
- Observer Pattern
Key Abstractions in Airbnb
- Listing: A property a host offers — address, amenities, nightly price, house rules
- Calendar: Per-listing availability. Tracks booked ranges with O(log n) conflict detection.
- Booking: Guest-side reservation with state machine (requested, confirmed, checked-in, completed, cancelled)
- PricingStrategy: Base price plus dynamic rules — weekend premium, minimum stay, last-minute discount
- Review: Two-way: guest reviews host, host reviews guest. Both revealed only after both submit.
Key Points for Airbnb
- Per-listing calendar with sorted interval tree. Booking check is O(log n), not a scan of all reservations.
- Half-open date intervals [checkin, checkout). One night's checkout equals the next night's checkin without conflict.
- Price is a strategy, not a scalar. Airbnb's real revenue is in dynamic pricing rules.
- Reviews are two-blind — neither party sees the other's review until both submit, preventing retaliation.
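The half-open interval check above can be sketched with a sorted list of booked ranges. A minimal Python sketch, using integers to stand in for dates; `bisect` against the sorted list gives the O(log n) conflict check, and the `[checkin, checkout)` convention makes back-to-back stays conflict-free:

```python
from bisect import bisect_left, insort

class Calendar:
    """Per-listing availability. Booked ranges are half-open [checkin, checkout),
    kept sorted by checkin so a conflict check is O(log n)."""
    def __init__(self):
        self._ranges = []  # sorted list of (checkin, checkout) tuples

    def is_free(self, checkin, checkout):
        i = bisect_left(self._ranges, (checkin,))
        # Left neighbor must end by our checkin; right neighbor must
        # start at or after our checkout. Half-open, so == is fine.
        if i > 0 and self._ranges[i - 1][1] > checkin:
            return False
        if i < len(self._ranges) and self._ranges[i][0] < checkout:
            return False
        return True

    def book(self, checkin, checkout):
        if not self.is_free(checkin, checkout):
            raise ValueError("dates conflict with an existing booking")
        insort(self._ranges, (checkin, checkout))
```

Note how one guest's checkout date equals the next guest's checkin without tripping the check.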
Common Mistakes with Airbnb
- Using a global booking table keyed only by listing_id. Conflict check becomes a range scan over millions of rows.
- Storing prices as a single number on the listing. Weekends, peak season, and last-minute rules live nowhere.
- Letting hosts cancel confirmed bookings freely. Policies and penalties are load-bearing business logic.
- Modeling availability as 'free/booked' only. Real listings have blocked-by-host dates, maintenance windows, minimum stays.
Related Designs
Hotel Management, Meeting Scheduler, Car Rental
Airline Management — Booking & Reservations
Difficulty: Advanced
Flights, seats, bookings, cancellations. The hard part isn't any single piece — it's making concurrent seat selection atomic so two passengers never land on 3A.
Design Patterns for Airline Management
- Strategy Pattern
- State Pattern
- Factory Method
Key Abstractions in Airline Management
- Flight: A specific flight instance — route, date, aircraft, crew, status
- Aircraft: Seat map template. Multiple flights reuse the same map.
- Seat: Position on the aircraft. Class (economy/business/first), extras (exit row, aisle).
- Booking: Links passenger, flight, seat. Has its own state machine (held, confirmed, checked-in, cancelled).
- FareStrategy: Pricing rules — class-based, dynamic, promotional codes
- SeatHoldManager: Short-lived holds during payment to prevent double-booking
Key Points for Airline Management
- Two-phase seat booking: hold (short TTL, reversible) then confirm (payment succeeded).
- Per-flight lock on seat assignment. Two concurrent passengers never get the same seat.
- Booking is a state machine: HELD -> CONFIRMED -> CHECKED_IN, or HELD -> EXPIRED, or CONFIRMED -> CANCELLED.
- Aircraft defines the seat map; flights reuse it. A 737 configuration is created once.
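The hold-then-confirm flow can be sketched as a small hold manager. Assumptions to flag: the injected clock is for testability, expiry is checked lazily on access, and real per-flight locking (a mutex or database transaction) is elided:

```python
import time

class SeatHoldManager:
    """Two-phase booking: hold a seat for a short TTL while payment runs,
    then confirm. Expired holds free the seat automatically."""
    def __init__(self, ttl_seconds=300, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._holds = {}      # seat -> (passenger, expires_at)
        self._confirmed = {}  # seat -> passenger

    def hold(self, seat, passenger):
        self._expire(seat)
        if seat in self._confirmed or seat in self._holds:
            return False
        self._holds[seat] = (passenger, self.clock() + self.ttl)
        return True

    def confirm(self, seat, passenger):
        """Call only after payment succeeds."""
        self._expire(seat)
        held = self._holds.get(seat)
        if held is None or held[0] != passenger:
            return False
        del self._holds[seat]
        self._confirmed[seat] = passenger
        return True

    def _expire(self, seat):
        held = self._holds.get(seat)
        if held and held[1] <= self.clock():
            del self._holds[seat]  # hold timed out, seat is free again
```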
Common Mistakes with Airline Management
- Storing seats inside the booking. Now finding 'who sits in 3A on flight XYZ' is a full-table scan.
- Confirming a booking before payment settles. Passenger holds a seat they never paid for.
- Skipping the hold phase. Two passengers hit confirm simultaneously; one gets a refund and a lifetime of anger.
- Forgetting to release holds on payment failure or timeout. Seat inventory bleeds away.
Related Designs
Hotel Management, Movie Ticket Booking, Concert Ticket Booking
Amazon Locker — Real-World Systems
Difficulty: Intermediate
Self-service package pickup with OTP codes and dimensional matching. A locker is a parking-spot-for-boxes: find-smallest-that-fits, reserve, unlock with a code.
Design Patterns for Amazon Locker
- Strategy Pattern
- State Pattern
- Factory Method
Key Abstractions in Amazon Locker
- Locker: One slot in a station. Size, state, current reservation.
- LockerStation: Bank of lockers at one location. Routes incoming packages to the right size.
- Package: Dimensions, weight, recipient, and delivery deadline
- Reservation: Links a package to a locker, generates an OTP, tracks expiry
- AllocationStrategy: Picks the best locker — smallest-that-fits is the default
- OtpService: Generates, verifies, and invalidates one-time pickup codes
Key Points for Amazon Locker
- Locker state machine: AVAILABLE → RESERVED → OCCUPIED → AVAILABLE, or OCCUPIED → EXPIRED for no-shows.
- Smallest-that-fits keeps large lockers free for packages that actually need them.
- OTP is hashed — the station verifies by hashing input, never storing plaintext.
- Expiry is lazy: packages past their pickup deadline get reclaimed by a periodic sweep.
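Smallest-that-fits is a one-pass filter-and-min. A sketch under simplifying assumptions: rigid axis-aligned dimensions, no package rotation, and volume as the "smallest" metric:

```python
from dataclasses import dataclass

@dataclass
class Package:
    width: int
    height: int
    depth: int

@dataclass
class Locker:
    id: str
    width: int
    height: int
    depth: int
    occupied: bool = False

    def fits(self, pkg):
        return (pkg.width <= self.width and pkg.height <= self.height
                and pkg.depth <= self.depth)

    def volume(self):
        return self.width * self.height * self.depth

def smallest_that_fits(lockers, pkg):
    """Default AllocationStrategy: among free lockers the package fits in,
    pick the least volume, keeping big lockers free for big boxes."""
    candidates = [l for l in lockers if not l.occupied and l.fits(pkg)]
    return min(candidates, key=lambda l: l.volume(), default=None)
```

Returning `None` when nothing fits lets the station route the package to a neighboring location instead of failing silently.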
Common Mistakes with Amazon Locker
- Assigning any available locker. A shoebox lands in a 3-foot locker and a fridge can't fit anywhere.
- Plaintext OTPs in the database. Compromise leaks every active pickup code.
- Forgetting a reservation step. Package gets assigned but then a race condition double-assigns the locker.
- No expiry flow. Packages pile up; lockers run out; the system silently degrades.
Related Designs
Parking Lot, Hotel Management, Notification System
AOP Framework — Foundational
Difficulty: Advanced
Cross-cutting concerns without touching business code. Build a method interception system like Spring AOP where logging, timing, caching, and retry wrap any method call through composable aspects.
Design Patterns for AOP Framework
- Proxy Pattern
- Decorator Pattern
- Chain of Responsibility
- Template Method
Key Abstractions in AOP Framework
- Aspect: Cross-cutting concern interface defining before, after, or around advice logic
- LoggingAspect / TimingAspect / CachingAspect / RetryAspect: Concrete aspects — each hooks into the around-advice lifecycle (before + invoke + after)
- Pointcut: Predicate selecting which methods on which targets get intercepted
- AspectChain: Chain of responsibility that stacks multiple aspects and invokes them in order
- ProxyFactory: Creates proxy objects that wrap real targets and route calls through the aspect chain
- JoinPoint: Runtime context about the intercepted call: target, method name, arguments, return value
Key Points for AOP Framework
- Proxies intercept method calls without modifying the target class. The target has no idea it is being wrapped.
- Pointcuts select which methods get advice. You might apply logging to every public method but caching only to read operations.
- Around advice controls the entire call lifecycle: it can inspect arguments, decide whether to proceed, modify the return value, or catch exceptions.
- Aspect ordering matters. If logging runs before timing, the log includes the timing overhead. The chain defines the execution order explicitly.
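The aspect chain can be folded into nested closures around the real call. A minimal sketch; `make_proxy` is a hypothetical stand-in for the ProxyFactory that handles one method call rather than generating a full proxy class:

```python
class Aspect:
    """Around-advice interface: receives the call context and a proceed()
    callable representing the rest of the chain."""
    def around(self, join_point, proceed):
        return proceed()

class LoggingAspect(Aspect):
    def __init__(self, log):
        self.log = log
    def around(self, join_point, proceed):
        self.log.append(f"before {join_point['method']}")
        result = proceed()
        self.log.append(f"after {join_point['method']} -> {result}")
        return result

def make_proxy(target, aspects):
    """Route a named method call through the aspect chain. The first
    aspect in the list runs outermost, so list order is execution order."""
    def call(method, *args):
        join_point = {"target": target, "method": method, "args": args}
        def invoke():
            return getattr(target, method)(*args)
        proceed = invoke
        for aspect in reversed(aspects):
            proceed = (lambda a, nxt: lambda: a.around(join_point, nxt))(aspect, proceed)
        return proceed()
    return call
```

The target class (`Calc` below) never changes; the wrapping is invisible to it.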
Common Mistakes with AOP Framework
- Applying aspects to every method regardless of need, adding interception overhead to hot paths and hurting performance
- Not preserving the method signature on the proxy, so callers get attribute errors or wrong return types
- Aspect order dependency bugs: changing the order silently changes behavior, and nobody notices until production
- Self-invocation bypassing the proxy: when a method calls this.otherMethod() internally, the call goes directly to the real object and skips all aspects
Related Designs
Middleware Pipeline, Logging Framework, Dependency Injection Container
API Gateway — Platform Scale
Difficulty: Advanced
Three structural patterns in one system. Proxy guards the door, Adapter translates protocols, and Facade hides the mess of microservices behind a single clean endpoint.
Design Patterns for API Gateway
- Proxy Pattern
- Adapter Pattern
- Facade Pattern
Key Abstractions in API Gateway
- Gateway: Proxy that intercepts all client requests and applies middleware before forwarding
- ServiceAdapter: Adapter interface wrapping different backend protocols behind a uniform handle() method
- ServiceRegistry: Maps route paths to their target service adapters for request routing
- Middleware: Pluggable request processor for auth, rate limiting, logging, and caching
- ResponseAggregator: Facade that orchestrates calls to multiple services and returns a single combined response
Key Points for API Gateway
- The Gateway is a proxy: same interface as backend services, but with added cross-cutting concerns.
- Adapters let you integrate REST, RPC, and legacy SOAP services without the gateway knowing the difference.
- Middleware chain is ordered. Auth runs first, then rate limit, then cache check, then forward.
- Facade aggregation means clients make one call instead of orchestrating three service calls themselves.
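Facade aggregation with partial-failure handling might look like this sketch, where backend service adapters are reduced to plain callables for brevity:

```python
class ResponseAggregator:
    """Facade sketch: calls several backend adapters and combines results.
    A failed service contributes an error entry instead of crashing the call."""
    def __init__(self, adapters):
        self.adapters = adapters  # name -> callable(request) -> dict

    def fetch(self, request):
        combined, errors = {}, {}
        for name, adapter in self.adapters.items():
            try:
                combined[name] = adapter(request)
            except Exception as exc:
                errors[name] = str(exc)  # partial failure: record, don't crash
        return {"data": combined, "errors": errors}
```

The client makes one call and gets whatever the healthy services produced, plus an explicit record of what failed.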
Common Mistakes with API Gateway
- Putting business logic in the gateway. The gateway routes and guards. Business logic belongs in the services.
- Hardcoding service URLs. Use a registry so services can be added, removed, or moved without code changes.
- Running middleware in the wrong order. Rate limiting before auth means unauthenticated users consume your rate limit budget.
- Not handling partial failures in aggregation. If one of three services fails, the facade should return partial data, not crash.
Related Designs
Rate Limiter, Payment System
ATM — Financial & Payments
Difficulty: Intermediate
State pattern for ATM lifecycle management, Chain of Responsibility for greedy bill dispensing across denominations, and clean separation between authentication, transaction, and cash handling concerns.
Design Patterns for ATM
- State Pattern
- Chain of Responsibility
Key Abstractions in ATM
- ATM: Context class that delegates all operations to its current state
- ATMState: Interface defining valid operations: insertCard, enterPin, withdraw, ejectCard
- IdleState: Waiting for card insertion. Rejects pin entry and withdrawal.
- CardInsertedState: Card present, awaiting PIN. Rejects withdrawal until authenticated.
- AuthenticatedState: PIN verified. Allows withdrawal and card ejection.
- CashDispenser: Chain of Responsibility handler for denomination-based bill dispensing
- Account: Holds balance, validates PIN, processes debits
- Transaction: Immutable record of a completed withdrawal
Key Points for ATM
- State pattern gives each state its own class, so invalid operations are rejected explicitly instead of through a runtime if/else maze
- Chain of Responsibility for dispensing: $100 -> $50 -> $20 -> $10. Greedy largest-first minimizes bill count.
- Each state only implements the operations valid for that state. Everything else throws an explicit error.
- Adding a new denomination means adding one handler to the chain. No existing code changes.
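The dispensing chain can be sketched as linked handlers, largest denomination first. One caveat worth noting: a greedy walk can fail on amounts a smarter search could satisfy (e.g. $60 from one $50 and three $20s), which is part of why the amount must be validated against available denominations:

```python
class DispenseHandler:
    """Chain of Responsibility: each handler owns one denomination and
    passes the remainder to the next handler."""
    def __init__(self, denomination, next_handler=None):
        self.denomination = denomination
        self.next = next_handler

    def dispense(self, amount, available):
        # Take as many of this bill as the amount and inventory allow.
        bills = min(amount // self.denomination, available.get(self.denomination, 0))
        remainder = amount - bills * self.denomination
        plan = {self.denomination: bills} if bills else {}
        if remainder == 0:
            return plan
        if self.next is None:
            raise ValueError(f"cannot dispense remaining ${remainder}")
        plan.update(self.next.dispense(remainder, available))
        return plan

# Largest-first chain: $100 -> $50 -> $20 -> $10
chain = DispenseHandler(100, DispenseHandler(50, DispenseHandler(20, DispenseHandler(10))))
```

Adding a $5 handler is one more link in the chain; no existing handler changes.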
Common Mistakes with ATM
- Using if/else chains on an enum instead of the State pattern. You end up with every method checking state, and forgetting one check means dispensing cash without a PIN.
- Not validating withdrawal amount against available denominations. $35 can't be dispensed with $20 and $10 bills alone.
- Forgetting to reset ATM state after card ejection or transaction failure. The machine gets stuck in an invalid state.
- Dispensing without checking if the ATM's physical cash supply covers the withdrawal. Balance check alone isn't enough.
Related Designs
Vending Machine, Digital Wallet, Payment System
Bowling Alley — Games & Simulations
Difficulty: Intermediate
Ten frames, strikes, spares, bonus rolls. The trick is that a frame's score depends on rolls that haven't happened yet — a deferred scoring model solves it cleanly.
Design Patterns for Bowling Alley
- State Pattern
- Strategy Pattern
- Observer Pattern
Key Abstractions in Bowling Alley
- Frame: One turn with up to two rolls (three in the tenth). Knows its own pin state.
- Game: Ten frames plus bonus rolls. Drives frame transitions and enforces game end.
- ScoreCalculator: Computes running score with strike/spare bonuses. Deferred until future rolls land.
- Player: Owns a Game. Multi-player alley is a list of Players rotating turns.
Key Points for Bowling Alley
- Strike: 10 pins on first roll. Bonus = next two rolls. Frame ends immediately.
- Spare: 10 pins across two rolls. Bonus = next one roll. Frame ends after second roll.
- Tenth frame is special — a strike or spare earns one or two bonus rolls inside the same frame.
- ScoreCalculator reads all frames and walks the roll stream. Calculating partial scores is always safe.
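The deferred scoring model reduces to one walk over the roll stream, reading ahead for bonuses. A sketch; `rolls` is the flat list of pins knocked down per roll, including tenth-frame bonus rolls, and unfinished bonuses simply score what is known so far:

```python
def score(rolls):
    """Walk the roll stream frame by frame. Strike and spare bonuses read
    ahead into future rolls, so partial games are always safe to score."""
    total, i = 0, 0
    for _ in range(10):
        if i >= len(rolls):
            break
        if rolls[i] == 10:                      # strike: 10 + next two rolls
            total += 10 + sum(rolls[i + 1:i + 3])
            i += 1                              # a strike is ONE roll
        elif i + 1 < len(rolls) and rolls[i] + rolls[i + 1] == 10:
            total += 10 + (rolls[i + 2] if i + 2 < len(rolls) else 0)  # spare
            i += 2
        else:
            total += sum(rolls[i:i + 2])        # open frame
            i += 2
    return total
```

A perfect game is 12 strikes, not 10: the tenth-frame strike earns two bonus rolls.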
Common Mistakes with Bowling Alley
- Calculating score inside Frame.addRoll(). Future rolls haven't happened yet; the score is a lie.
- Hardcoding the tenth-frame rules inside Game. State pattern for frames keeps the tenth frame isolated.
- Forgetting the perfect game bonus. 12 strikes, not 10. The tenth frame demands two more rolls on a strike.
- Counting a strike as two rolls. It's one roll for ten pins — the bonus is not the same as a second roll.
Related Designs
Snake and Ladder, Tic Tac Toe, Chess
Browser History — Foundational
Difficulty: Starter
Back, forward, and visit. Looks like two stacks but it's really one doubly-linked list with a cursor — and the cursor discards the forward branch whenever something new loads.
Design Patterns for Browser History
- Memento Pattern
- Iterator Pattern
Key Abstractions in Browser History
- HistoryEntry: One visited URL plus a timestamp and optional title
- History: Doubly-linked list with a current-node pointer. Back/forward move the pointer.
- Tab: Owns its own History. Each browser tab is independent.
- Browser: Manages tabs and the active tab
Key Points for Browser History
- One linked list with a cursor, not two stacks. Back is cursor.prev, forward is cursor.next.
- Visiting a new URL discards everything after the cursor. Just like undo/redo — new action branches the future.
- Back/forward are O(1) — just move the pointer. No copying.
- Cap history length from the head (oldest entries) when memory matters.
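The cursor-on-a-linked-list model fits in a few lines. A minimal sketch; note how visit() repoints `cursor.next`, which is exactly where the forward branch gets discarded:

```python
class Node:
    __slots__ = ("url", "prev", "next")
    def __init__(self, url):
        self.url, self.prev, self.next = url, None, None

class History:
    """Doubly-linked list with a cursor. Back/forward move the cursor in O(1)."""
    def __init__(self, homepage):
        self.cursor = Node(homepage)

    def visit(self, url):
        node = Node(url)
        node.prev = self.cursor
        self.cursor.next = node   # old forward chain is unlinked here
        self.cursor = node

    def back(self):
        if self.cursor.prev:
            self.cursor = self.cursor.prev
        return self.cursor.url

    def forward(self):
        if self.cursor.next:
            self.cursor = self.cursor.next
        return self.cursor.url
```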
Common Mistakes with Browser History
- Implementing back with a bare stack of visited pages. Forward breaks the moment someone goes back and then visits a new page.
- Mutating the current entry on forward. The entry is immutable; only the cursor moves.
- Sharing history across tabs. Each tab needs its own independent timeline.
- Forgetting to clear forward on visit. That leaves dead entries the user can never reach.
Related Designs
Undo/Redo Manager, Text Editor, Workflow Engine
Car Rental — Booking & Reservations
Difficulty: Intermediate
Strategy pattern for flexible pricing tiers and state pattern for vehicle lifecycle. Date-range availability search across a fleet of sedans, SUVs, and trucks.
Design Patterns for Car Rental
- Strategy Pattern
- State Pattern
- Observer Pattern
Key Abstractions in Car Rental
- RentalSystem: Orchestrator. Handles reservations, returns, and fleet queries through a single entry point.
- Vehicle: Entity with a type hierarchy (Sedan, SUV, Truck). Tracks plate, model, and current state.
- Reservation: Booking that binds a customer to a vehicle over a date range with a computed price.
- Customer: Renter profile with name, license, and contact details.
- PricingStrategy: Computes rental cost. Daily rate, weekly discount, insurance add-ons are all swappable strategies.
- VehicleState: State machine: Available, Reserved, Rented, Maintenance. Controls which operations are legal.
Key Points for Car Rental
- Strategy pattern for pricing: daily, weekly, and promotional rates are interchangeable without modifying reservation logic
- State pattern for vehicle lifecycle ensures illegal transitions (e.g., renting a vehicle under maintenance) are caught immediately
- Observer notifies the fleet dashboard when vehicle state changes, keeping availability search results accurate
- Date-range overlap check is the core of availability search. Two reservations conflict when start_a < end_b and start_b < end_a.
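The overlap predicate from the last point, written out for half-open date ranges (integers stand in for dates):

```python
def overlaps(start_a, end_a, start_b, end_b):
    """Two half-open ranges conflict exactly when each starts before the
    other ends. Touching endpoints (end_a == start_b) do not conflict."""
    return start_a < end_b and start_b < end_a
```

Availability search is then: a vehicle is free for a requested range iff no existing reservation on it overlaps that range.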
Common Mistakes with Car Rental
- Storing vehicle availability as a boolean. A vehicle can be available today but reserved for next week. You need date-range checks.
- Putting pricing logic inside Reservation. That makes it impossible to swap pricing strategies or add promotional rates without touching bookings.
- Skipping the state pattern and using string flags. You end up with scattered if-else blocks that miss edge cases like double-renting.
- Not handling reservation cancellations. A cancelled reservation must transition the vehicle back to Available for that date range.
Related Designs
Hotel Management, Parking Lot
Chat Room — Platform Scale
Difficulty: Intermediate
Users talk to the room, not to each other. The Mediator routes every message, so adding group chats, moderation, or message filtering never touches the User class.
Design Patterns for Chat Room
- Mediator Pattern
- Observer Pattern
Key Abstractions in Chat Room
- ChatRoom: Mediator that routes all messages between users and manages room membership
- User: Colleague that sends and receives messages only through the ChatRoom mediator
- Message: Immutable record with sender, content, timestamp, and optional target for DMs
- ChatObserver: Interface notified of room events like user joins, leaves, and new messages
- MessageFilter: Pluggable filter for profanity, spam, or content moderation in the mediator pipeline
Key Points for Chat Room
- Users never hold references to other users. All communication goes through the ChatRoom mediator.
- Adding DMs, group chats, or broadcast is a change inside ChatRoom only. User stays untouched.
- Observer handles room-level events (joins, leaves). Mediator handles message routing. Different concerns.
- Message history lives in the mediator, giving you search and pagination without touching User.
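A minimal mediator sketch: users hold only a room reference, and the room owns routing, history, and the skip-offline-users rule. Details beyond the ChatRoom and User abstractions above are illustrative:

```python
class ChatRoom:
    """Mediator: all messages flow through the room, never user to user."""
    def __init__(self):
        self.users = {}     # name -> User
        self.history = []   # (sender, content) pairs live here, not on users

    def join(self, user):
        self.users[user.name] = user

    def leave(self, name):
        self.users.pop(name, None)

    def send(self, sender, content, target=None):
        self.history.append((sender, content))
        if target is not None:
            user = self.users.get(target)
            if user:                      # silently skip offline DM targets
                user.receive(sender, content)
            return
        for name, user in self.users.items():
            if name != sender:            # broadcast to everyone else
                user.receive(sender, content)

class User:
    def __init__(self, name, room):
        self.name, self.room, self.inbox = name, room, []
        room.join(self)

    def say(self, content, to=None):
        self.room.send(self.name, content, target=to)

    def receive(self, sender, content):
        self.inbox.append((sender, content))
```

Adding moderation or group sub-channels changes only `ChatRoom.send`; `User` stays untouched.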
Common Mistakes with Chat Room
- Letting users reference each other directly. That creates O(n^2) coupling and breaks when users join or leave.
- Mixing message routing with event notification. Mediator routes messages. Observer broadcasts events. Keep them separate.
- Not handling disconnected users. The mediator should silently skip offline users, not throw errors.
- Storing messages in User objects. Messages belong to the room. Users come and go, but chat history persists.
Related Designs
Notification System, Pub/Sub System
Chess — Games & Simulations
Difficulty: Advanced
Each piece owns its move generation via Strategy. Command pattern wraps every move so you can undo it during check validation. Board stays clean.
Design Patterns for Chess
- Strategy Pattern
- Command Pattern
- Template Method
Key Abstractions in Chess
- Game: Turn management, game state transitions, coordinates between board and move validation
- Board: 8x8 grid, executes and undoes moves, tracks piece positions by color
- Piece: Abstract base with color, position, and delegated move generation
- King / Queen / Rook / Bishop / Knight / Pawn: Concrete pieces, each implementing its own legal move generation
- MoveValidator: Filters pseudo-legal moves by checking if they leave the king in check
- Move: Value object with from/to squares, captured piece, and special flags (castling, promotion)
- GameState: Enum: IN_PROGRESS, CHECK, CHECKMATE, STALEMATE
Key Points for Chess
- Each piece generates its own pseudo-legal moves. MoveValidator filters out ones that leave the king exposed.
- Command pattern on Move allows execute/undo. You need undo to test whether a candidate move leaves you in check.
- Board tracks all pieces by color for fast iteration when checking if any enemy piece attacks the king's square.
- GameState is computed after every move: if the opponent has no legal moves, it's either checkmate or stalemate.
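The execute/undo pair on Move can be sketched against a board simplified to a dict of square -> piece (a stand-in for the full Board class). The captured piece is stashed on the command so undo can restore it, which is what makes cheap check validation possible:

```python
class Move:
    """Command: execute/undo on the board, used to test candidate moves
    for check without cloning the whole board."""
    def __init__(self, frm, to):
        self.frm, self.to, self.captured = frm, to, None

    def execute(self, board):
        self.captured = board.get(self.to)   # remember any captured piece
        board[self.to] = board.pop(self.frm)

    def undo(self, board):
        board[self.frm] = board.pop(self.to)
        if self.captured is not None:
            board[self.to] = self.captured   # restore the capture
```

Check validation becomes: execute the candidate, ask whether the king is attacked, undo, and keep or discard the move.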
Common Mistakes with Chess
- Putting move generation logic in the Board class instead of each piece. You end up with one massive switch statement.
- Not implementing undo on moves. Without it, check detection requires cloning the entire board for every candidate move.
- Using inheritance for color (WhiteKing, BlackKing). Color is data, not behavior. Use composition.
- Forgetting that a move is only legal if it doesn't leave your own king in check, including discovered checks.
Related Designs
Tic Tac Toe, Snake & Ladder
Circuit Breaker — Foundational
Difficulty: Intermediate
State machine resilience pattern: CLOSED allows calls, OPEN rejects fast with Null Object fallbacks, HALF_OPEN probes cautiously. Proxy wraps remote services transparently.
Design Patterns for Circuit Breaker
- State Pattern
- Proxy Pattern
- Null Object Pattern
- Observer Pattern
- Template Method
Key Abstractions in Circuit Breaker
- CircuitState: State interface defining handle_request, can_proceed, record_success, record_failure
- ClosedState: Concrete state that allows all calls and counts failures, transitioning to OpenState when the threshold is reached
- OpenState: Concrete state that rejects all calls immediately, transitioning to HalfOpenState after the cooldown period expires
- HalfOpenState: Concrete state that allows one probe call, transitioning to ClosedState on success or back to OpenState on failure
- CircuitBreaker: Context that holds current state and delegates request handling to it
- ServiceProxy: Proxy wrapping the remote service with the circuit breaker; the caller doesn't know the circuit breaker exists
- NullResponse: Null Object fallback when the circuit is open: cached data, a default value, or an empty result, with no exception handling needed
- CircuitBreakerListener: Observer notified on state transitions for monitoring dashboards and alerting
Key Points for Circuit Breaker
- State machine: CLOSED counts failures. When threshold hit, transitions to OPEN. OPEN rejects all calls for a cooldown period. After cooldown, transitions to HALF_OPEN. HALF_OPEN allows one probe call: success returns to CLOSED, failure returns to OPEN.
- Proxy: ServiceProxy.call() has the same signature as the real service. The caller is unaware of the circuit breaker wrapping their calls.
- Null Object: when OPEN, return a NullResponse instead of throwing. The caller gets graceful degradation (cached data, empty result, default message) with no exception handling needed.
- Observer: monitoring dashboards subscribe to state changes. Alert when circuit opens, celebrate when it closes.
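A compressed sketch of the state machine, with states as strings so the transitions stay visible in one place; the full design above splits these branches into State classes. The injected clock and `fallback` parameter are illustrative choices:

```python
import time

class CircuitBreaker:
    def __init__(self, threshold=3, cooldown=30.0, clock=time.monotonic):
        self.threshold, self.cooldown, self.clock = threshold, cooldown, clock
        self.state, self.failures, self.opened_at = "CLOSED", 0, None

    def call(self, fn, fallback=None):
        if self.state == "OPEN":
            if self.clock() - self.opened_at >= self.cooldown:
                self.state = "HALF_OPEN"     # cooldown over: allow one probe
            else:
                return fallback              # Null Object: no exception
        try:
            result = fn()
        except Exception:
            self._record_failure()
            return fallback
        self.state, self.failures = "CLOSED", 0  # success closes the circuit
        return result

    def _record_failure(self):
        if self.state == "HALF_OPEN":
            self.state, self.opened_at = "OPEN", self.clock()  # probe failed
            return
        self.failures += 1
        if self.failures >= self.threshold:
            self.state, self.opened_at = "OPEN", self.clock()
```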
Common Mistakes with Circuit Breaker
- No cooldown period: circuit stays open forever or flaps between open/closed rapidly
- Throwing exceptions when open: forces every caller to handle CircuitOpenException. Null Object is cleaner.
- Tripping on every failure instead of only relevant ones: timeouts should trip the circuit, 404s should not
- Single failure threshold for all services: a flaky service needs different thresholds than a critical one
Related Designs
Rate Limiter, Connection Pool, API Gateway
Coffee Shop Ordering — Real-World Systems
Difficulty: Starter
Wrap a base drink with toppings, and each one adds its own cost and description. The Decorator pattern means you never modify the original drink class when adding new toppings.
Design Patterns for Coffee Shop Ordering
- Decorator Pattern
- Strategy Pattern
Key Abstractions in Coffee Shop Ordering
- Beverage: Component interface with cost() and description() that all drinks and decorators implement
- ToppingDecorator: Abstract decorator wrapping a Beverage, delegating cost and description to the wrapped drink
- PricingStrategy: Strategy interface for applying discounts like happy hour or loyalty pricing
- Order: Collection of beverages with a pricing strategy applied to calculate the final total
Key Points for Coffee Shop Ordering
- Decorators compose at runtime. A Latte with milk and caramel is three objects wrapping each other.
- New toppings never touch existing code. Write a new decorator class and you are done.
- Strategy swaps pricing logic without changing how beverages calculate their base cost.
- cost() recurses through the decorator chain. Each layer adds its price to the inner beverage's price.
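A minimal decorator sketch; the drink name and prices are made up, but the recursion through cost() is the real mechanism:

```python
class Beverage:
    """Component interface all drinks and decorators implement."""
    def cost(self): raise NotImplementedError
    def description(self): raise NotImplementedError

class Latte(Beverage):
    def cost(self): return 4.00
    def description(self): return "Latte"

class ToppingDecorator(Beverage):
    """Wraps any Beverage; concrete toppings add their own cost on top."""
    def __init__(self, inner):
        self.inner = inner

class Milk(ToppingDecorator):
    def cost(self): return self.inner.cost() + 0.50
    def description(self): return self.inner.description() + " + milk"

class Caramel(ToppingDecorator):
    def cost(self): return self.inner.cost() + 0.75
    def description(self): return self.inner.description() + " + caramel"
```

A new topping is a new subclass of ToppingDecorator; Latte and every existing topping stay untouched.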
Common Mistakes with Coffee Shop Ordering
- Making Beverage a concrete class instead of an interface/abstract class. Decorators need a common type to wrap.
- Putting topping logic inside the Beverage class with flags like hasWhipCream, hasMilk. That is a combinatorial explosion.
- Forgetting that decorator order can matter. Double milk charges twice because you have two MilkDecorators in the chain.
- Coupling pricing strategy to the beverage. Pricing is an order-level concern, not a drink-level one.
Related Designs
Restaurant Management, Online Shopping
Collaborative Editor — Platform Scale
Difficulty: Advanced
Google Docs at its core. Operational Transformation vs CRDTs — the real question is how two people typing at the same position converge to the same document.
Design Patterns for Collaborative Editor
- Command Pattern
- Observer Pattern
- Strategy Pattern
Key Abstractions in Collaborative Editor
- Operation: A single edit: insert(pos, text), delete(pos, len). Carries an author ID and vector clock.
- Document: The converged text state. Applies operations and exposes content.
- OTEngine: Transforms concurrent operations so they commute. Insert at 5 + insert at 3 → adjust the 5 to 6.
- Site: One client session. Owns its local doc, vector clock, and pending-op buffer.
- Server: Ordering authority. Receives ops from all sites, transforms, broadcasts.
Key Points for Collaborative Editor
- OT's core rule: transform(opA, opB) must satisfy opA·T(opB) ≡ opB·T(opA) — both users converge.
- Every op carries a vector clock so the server knows its causal context.
- Client applies local ops immediately, then reconciles when server confirms.
- Tie-break on concurrent inserts at the same position by author ID — deterministic, unambiguous.
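The insert-vs-insert transform, including the author-ID tie-break, can be sketched as a pure function. Ops as plain dicts are an illustrative simplification; a full OT engine also transforms deletes and composes transforms against whole histories:

```python
def transform_insert(op_a, op_b):
    """Transform insert op_a against concurrent insert op_b so that applying
    op_b then the transformed op_a converges with the opposite order.
    Ops: {"pos": int, "text": str, "author": str}."""
    if op_b["pos"] < op_a["pos"] or (
        op_b["pos"] == op_a["pos"] and op_b["author"] < op_a["author"]
    ):  # op_b lands at or before op_a's spot: shift op_a right
        return {**op_a, "pos": op_a["pos"] + len(op_b["text"])}
    return dict(op_a)

def apply_insert(doc, op):
    return doc[:op["pos"]] + op["text"] + doc[op["pos"]:]
```

Both application orders produce the same document, which is exactly the convergence rule stated above.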
Common Mistakes with Collaborative Editor
- Applying raw ops from another user without transformation. Doc states diverge immediately.
- Using server time as the ordering. Vector clocks capture causality correctly; wall time doesn't.
- Blocking local edits until server ACK. Typing feels laggy; collab editors live or die on local-first feel.
- Forgetting to reapply pending ops after a server-side rebase. Local user sees their last edit disappear.
Related Designs
Text Editor, Undo/Redo Manager, Pub/Sub System
Concert Ticket Booking — Booking & Reservations
Difficulty: Advanced
Seat locking with TTL prevents double-booking during the payment window. Optimistic concurrency for seat selection, pessimistic locks for confirmation. Tiered pricing by venue section.
Design Patterns for Concert Ticket Booking
- Strategy Pattern
- Observer Pattern
- State Pattern
Key Abstractions in Concert Ticket Booking
- BookingSystem: Orchestrator. Manages events, seat locking, booking confirmation, and cancellation flows.
- Event: Concert with a name, venue, date, and auto-generated seating layout from venue sections.
- Seat: Individual seat with section, row, number, and a state machine (Available/Locked/Booked).
- Booking: Confirmed reservation linking a user to specific seats. Tracks payment reference and total.
- PaymentGateway: Strategy interface for payment processing. Supports credit card, UPI, wallet backends.
- SeatLock: Temporary hold on seats during payment. Has a TTL. Expired locks auto-release seats.
Key Points for Concert Ticket Booking
- Temporary seat locking is the centerpiece. When a user selects seats and goes to payment, those seats are held for a configurable TTL. No one else can grab them during that window.
- Lock expiry prevents ghost holds. If payment is not completed within the timeout, seats automatically become available again. No manual cleanup needed.
- Strategy pattern for payment processing. Credit card, UPI, and digital wallet are different payment backends with the same interface: charge(amount) returns a receipt.
- Observer pattern for waitlist. When a sold-out event gets cancellations, waitlisted users are notified that seats opened up.
Common Mistakes with Concert Ticket Booking
- Skipping the lock step and going straight from available to booked. Payment takes seconds or minutes. During that gap, another user can book the same seat.
- Locks without expiry. A user locks seats, gets distracted, never pays. Those seats are stuck in limbo forever. Every lock needs a TTL.
- Checking availability and booking in separate non-atomic operations. Between the check and the write, another thread can grab the same seat.
- Flat seat list without sections. Concert venues have VIP, floor, and balcony tiers with different pricing. A flat list loses that structure entirely.
Related Designs
Movie Ticket Booking, Online Auction
Connection Pool — Foundational
Difficulty: Intermediate
Object Pool for reusing expensive database connections. Proxy wrappers return connections on close() instead of destroying them. Singleton per data source prevents resource exhaustion.
Design Patterns for Connection Pool
- Object Pool Pattern
- Proxy Pattern
- Factory Pattern
- State Pattern
- Singleton Pattern
Key Abstractions in Connection Pool
- Connection: Interface — execute query, close
- PooledConnection: Proxy — wraps real connection, overrides close() to return to pool
- ConnectionFactory: Factory — creates driver-specific connections: MySQL, Postgres
- ConnectionState: Enum: AVAILABLE, IN_USE, VALIDATING, EXPIRED
- ConnectionPool: Object Pool + Singleton — manages borrow/return lifecycle
- PoolConfig: Max size, min idle, max wait time, validation query
Key Points for Connection Pool
- Object Pool avoids repeated create/destroy of expensive resources: each connection costs a TCP handshake plus auth
- Proxy is transparent: calling code uses the same Connection interface and doesn't know about the pool
- State tracks each connection's lifecycle, preventing use of expired connections
- Singleton per data source ensures one pool per database, preventing connection count explosion
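The proxy's close()-returns-to-pool trick in sketch form. Eager connection creation and a Queue-based free list are simplifications; a real pool also validates connections before handing them out:

```python
import queue

class PooledConnection:
    """Proxy: same interface as the raw connection, but close() returns it
    to the pool instead of tearing it down."""
    def __init__(self, raw, pool):
        self._raw, self._pool, self._closed = raw, pool, False

    def execute(self, sql):
        if self._closed:
            raise RuntimeError("connection already returned to pool")
        return self._raw.execute(sql)

    def close(self):
        if not self._closed:
            self._closed = True
            self._pool._release(self._raw)

class ConnectionPool:
    def __init__(self, factory, max_size=5, max_wait=2.0):
        self._max_wait = max_wait
        self._available = queue.Queue(max_size)
        for _ in range(max_size):
            self._available.put(factory())  # eager creation keeps the sketch simple

    def borrow(self):
        try:
            raw = self._available.get(timeout=self._max_wait)
        except queue.Empty:
            raise TimeoutError("pool exhausted: no connection within max_wait")
        return PooledConnection(raw, self)

    def _release(self, raw):
        self._available.put(raw)
```

The max-wait timeout is the guard against the no-timeout mistake below: a leaky caller blocks borrowers for seconds, not forever.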
Common Mistakes with Connection Pool
- Not returning connections (connection leak): the pool exhausts and all threads block
- Skipping validation before borrowing: stale connections throw on first query
- No max wait timeout: threads block forever when pool is exhausted
- Creating multiple pools for the same data source: doubles connection count, hits DB max_connections
Related Designs
Rate Limiter, Multi-Level Cache, Circuit Breaker
Course Registration — Platform Scale
Difficulty: Intermediate
Prerequisite validation and capacity-based enrollment with pluggable waitlist strategies (FIFO vs priority). Schedule conflict detection prevents overlapping time slots before enrollment commits.
Design Patterns for Course Registration
- Observer Pattern
- Strategy Pattern
Key Abstractions in Course Registration
- RegistrationSystem: Orchestrator that coordinates enrollment, prerequisite checks, capacity limits, and waitlist management
- Course: Academic offering with a capacity cap, prerequisites list, schedule, and enrolled student roster
- Student: Enrollee with a set of completed courses, current enrollments, and a schedule
- Enrollment: Record linking a student to a course with status tracking: ENROLLED, WAITLISTED, DROPPED
- TimeSlot: Day-and-time interval; overlap check prevents students from enrolling in two concurrent courses
- WaitlistStrategy: Strategy for ordering waitlisted students: FIFO (first-come) or priority-based (GPA, seniority)
Key Points for Course Registration
- Prerequisite validation runs before capacity checks. No point reserving a seat if the student is not qualified.
- Schedule conflict detection happens before enrollment commits. Two courses at the same time slot are rejected upfront.
- Waitlist is strategy-driven. FIFO is default, but swapping to priority-based (by GPA or seniority) is a single strategy swap.
- Observer notifies waitlisted students when a seat opens up from a drop. The system auto-promotes the next eligible student.
Common Mistakes with Course Registration
- Checking capacity but not prerequisites. A student who has not passed Calc I should never get a seat in Calc II regardless of availability.
- Letting students enroll in time-conflicting courses. If two courses overlap on Tuesday 10-11am, the second enrollment must be rejected.
- Hardcoding FIFO waitlist logic. When the registrar wants priority-based promotion (seniors first), you have to rewrite the waitlist.
- Forgetting to notify waitlisted students on drop. Without observer, seats freed by drops sit empty until someone manually checks.
Related Designs
Library Management, Hotel Management
CricInfo — Platform Scale
Difficulty: Advanced
Observer for live score updates pushed to every connected client. State pattern manages match lifecycle transitions from NOT_STARTED through IN_PROGRESS and INNINGS_BREAK to COMPLETED.
Design Patterns for CricInfo
- Observer Pattern
- State Pattern
- Strategy Pattern
Key Abstractions in CricInfo
- CricketMatch: Top-level orchestrator holding teams, innings, toss result, lifecycle state, and observer management
- Team: Named squad of players with a batting order
- Player: Individual with accumulated batting stats (runs, balls faced) and bowling stats (wickets, runs conceded)
- Innings: One team's batting session containing overs, running total, and wicket count
- Over: Collection of up to 6 legal deliveries bowled by one bowler
- Scorecard: Live score tracker that computes batting card, bowling card, and summary from raw ball data
- MatchObserver: Observer interface for pushing live updates to subscribers: website, mobile app, TV overlay
Key Points for CricInfo
- Observer pattern pushes live score updates to website, mobile app, TV overlay, and push notifications simultaneously
- State pattern manages match lifecycle: NOT_STARTED to IN_PROGRESS to INNINGS_BREAK to COMPLETED with explicit transitions
- Scorecard recomputes from raw ball data on every call. No cached totals that can drift out of sync.
- Ball-by-ball event tracking is the atomic unit. Every aggregate (innings total, strike rate, economy) derives from individual deliveries.
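Deriving aggregates from raw ball data instead of cached totals can be sketched as below. The Ball shape and function name are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ball:
    runs: int                 # runs scored off this delivery
    is_wide: bool = False
    is_no_ball: bool = False
    is_wicket: bool = False

def innings_summary(balls):
    """Recompute total, legal deliveries, and wickets from scratch on every call."""
    total = legal = wickets = 0
    for b in balls:
        total += b.runs + (1 if b.is_wide or b.is_no_ball else 0)  # extras add a penalty run
        if not (b.is_wide or b.is_no_ball):
            legal += 1        # only legal deliveries advance the over
        if b.is_wicket:
            wickets += 1
    return total, legal, wickets
```

Because nothing is cached, the scoreboard can never drift out of sync with the ball-by-ball log.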
Common Mistakes with CricInfo
- Caching total runs as a mutable field on Innings. One missed update and the scoreboard silently shows wrong numbers.
- Using string comparisons for match state transitions. An enum with a state machine prevents invalid jumps like COMPLETED to IN_PROGRESS.
- Putting all scoring logic inside the Match class. Match should delegate to Innings, which delegates to Over. Keep each level focused.
- Treating extras as regular runs. Wides and no-balls have different rules: neither counts as a legal delivery, so the over does not advance, and each adds a penalty run.
Related Designs
Notification System, Pub/Sub System
Deck of Cards — Games & Simulations
Difficulty: Starter
The canonical OOD warm-up. Build a deck that can shuffle, deal, and support any card game — Poker, Blackjack, Rummy — without rewriting the core.
Design Patterns for Deck of Cards
- Factory Method
- Strategy Pattern
- Template Method
Key Abstractions in Deck of Cards
- Card: Immutable value object holding suit and rank
- Deck: Collection of cards with shuffle and deal operations
- Hand: Cards held by a player, scored via a game-specific evaluator
- HandEvaluator: Strategy interface, so Poker ranks hands differently from Blackjack
- GameEngine: Template method that drives turn order, dealing, and winner selection
Key Points for Deck of Cards
- Card is immutable. Two cards with the same suit and rank are equal — no hidden state.
- Deck owns shuffling. Fisher-Yates in place keeps deal O(1) from the top.
- HandEvaluator is a Strategy so Poker, Blackjack, and Rummy can share the same Deck and Hand.
- Enums for Suit and Rank. String-typed cards cause silent bugs when a typo compiles fine.
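A compact sketch of the immutable Card and the index-pointer deal. An injectable `random.Random` makes shuffles reproducible in tests; the tuple constants are a simplification of the Suit/Rank enums the text recommends.

```python
import random
from dataclasses import dataclass

SUITS = ("hearts", "diamonds", "clubs", "spades")
RANKS = tuple(range(2, 15))   # 11=J, 12=Q, 13=K, 14=A

@dataclass(frozen=True)        # immutable: equality and hashing stay stable
class Card:
    suit: str
    rank: int

class Deck:
    def __init__(self, rng=None):
        self.cards = [Card(s, r) for s in SUITS for r in RANKS]
        (rng or random).shuffle(self.cards)   # random.shuffle is Fisher-Yates in place
        self._top = 0

    def deal(self) -> Card:
        card = self.cards[self._top]   # O(1): advance an index, no list shifts
        self._top += 1
        return card
```

Passing `random.Random(seed)` in tests makes every shuffle deterministic, so a failing deal sequence can be reproduced exactly.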
Common Mistakes with Deck of Cards
- Making Card mutable. Once a card is dealt its identity shouldn't change — equality and hashing break.
- Hardcoding Poker scoring inside Hand. Reuse dies the moment Blackjack shows up.
- Using Math.random() without seeding in tests. Nondeterministic shuffles make bugs impossible to reproduce.
- Dealing by removing cards from the front of a list, which shifts every remaining element in O(n). Deal from the top with an index pointer instead.
Related Designs
Chess, Snake and Ladder, Tic Tac Toe
Dependency Injection Container — Foundational
Difficulty: Advanced
A mini Spring-like IoC container. Register classes with scopes, resolve dependency graphs automatically, and swap implementations without changing client code.
Design Patterns for Dependency Injection Container
- Factory Pattern
- Singleton Pattern
- Prototype Pattern
- Proxy Pattern
- Registry Pattern
Key Abstractions in Dependency Injection Container
- Container: Central registry and resolver that maps interfaces to implementations and drives resolution
- BeanDefinition: Metadata record: concrete class, scope, and declared dependencies
- Scope: Enum: Singleton (one shared instance) or Prototype (fresh instance per resolve)
- DependencyResolver: Walks constructor parameters, topologically resolves the full dependency graph
- BeanLifecycle: Init/destroy hooks called after construction and before container shutdown
Key Points for Dependency Injection Container
- The Registry pattern gives you O(1) lookup from an interface to its BeanDefinition, so the container never scans linearly
- Singleton scope stores one instance in a cache and returns it on every resolve call. Prototype scope calls the factory fresh each time.
- Constructor injection with automatic resolution means you declare what a class needs in its constructor, and the container figures out the build order by inspecting parameter types
- Circular dependency detection runs before any object is created. If A needs B and B needs A, you get a clear error instead of infinite recursion.
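Scope handling and cycle detection can be sketched as below. For brevity this takes an explicit dependency list at registration rather than inspecting constructor parameter types as the text describes; the API names are illustrative.

```python
class Container:
    def __init__(self):
        self._defs = {}          # key -> (factory, deps, scope)
        self._singletons = {}    # singleton cache
        self._resolving = set()  # keys currently on the resolution stack

    def register(self, key, factory, deps=(), scope="singleton"):
        self._defs[key] = (factory, tuple(deps), scope)

    def resolve(self, key):
        factory, deps, scope = self._defs[key]
        if scope == "singleton" and key in self._singletons:
            return self._singletons[key]
        if key in self._resolving:
            # A -> B -> A: fail with a clear error before any recursion blows the stack
            raise RuntimeError(f"circular dependency involving {key!r}")
        self._resolving.add(key)
        try:
            instance = factory(*(self.resolve(d) for d in deps))
        finally:
            self._resolving.discard(key)
        if scope == "singleton":
            self._singletons[key] = instance
        return instance
```

The resolution stack (`_resolving`) is what turns infinite recursion into a diagnosable error.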
Common Mistakes with Dependency Injection Container
- Not detecting circular dependencies: the resolver recurses forever and blows the stack
- Creating a new instance on every resolve call even for singleton scope, breaking identity guarantees and wasting memory
- Tight coupling between the container internals and registered classes, so adding a new class forces changes inside the container
- Not supporting interface-to-implementation binding: callers must ask for concrete classes, defeating the whole point of inversion of control
Related Designs
Plugin Architecture, Connection Pool, Feature Flag System
Digital Wallet — Financial & Payments
Difficulty: Intermediate
Double-entry bookkeeping for every transaction, idempotency keys to prevent duplicate transfers, and a strategy pattern for payment methods so adding UPI or crypto never touches the transfer logic.
Design Patterns for Digital Wallet
- Strategy Pattern
- Observer Pattern
- Command Pattern
Key Abstractions in Digital Wallet
- Wallet: Account with a balance, supports credit and debit with concurrency safety
- Transaction: Immutable record of a credit or debit with amount, type, and timestamp
- TransferService: Orchestrates P2P transfers using double-entry bookkeeping
- PaymentMethod: Strategy interface for bank transfer, card, UPI, and other funding sources
- TransactionLog: Append-only audit trail of every wallet operation
Key Points for Digital Wallet
- Double-entry bookkeeping: every transfer creates two entries, one debit and one credit. The two entries always sum to zero.
- Idempotency keys on every transfer request. Retries are safe because duplicate keys are rejected.
- Strategy pattern for payment methods. Adding a new funding source means one new class, zero changes to TransferService.
- Command pattern stores each operation. Undo, audit trail, and replay all come from the same log.
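The points above can be sketched together: double-entry ledger writes, an idempotency-key check, and consistent lock ordering. Class and field names are assumptions for illustration.

```python
import threading
from decimal import Decimal

class Wallet:
    def __init__(self, owner, balance=Decimal("0")):
        self.owner, self.balance = owner, balance
        self.lock = threading.Lock()

class TransferService:
    def __init__(self):
        self._seen = set()   # processed idempotency keys
        self.ledger = []     # append-only (owner, signed amount) entries

    def transfer(self, idempotency_key, src, dst, amount):
        if idempotency_key in self._seen:
            return                        # duplicate retry: safe no-op
        # Lock wallets in a consistent (owner) order to avoid deadlock.
        first, second = sorted((src, dst), key=lambda w: w.owner)
        with first.lock, second.lock:
            if src.balance < amount:
                raise ValueError("insufficient funds")
            src.balance -= amount         # double entry: the debit...
            dst.balance += amount         # ...and its matching credit, atomically
            self.ledger += [(src.owner, -amount), (dst.owner, amount)]
            self._seen.add(idempotency_key)
```

Note the use of `Decimal` rather than floats, per the common-mistakes list below.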
Common Mistakes with Digital Wallet
- Single-entry bookkeeping. If you only debit the sender without crediting the receiver atomically, a crash leaves money in limbo.
- Missing idempotency keys. Network retries will double-charge users. Every external-facing endpoint needs one.
- Using floating point for money. Use Decimal or BigDecimal. Floating point rounding errors compound over thousands of transactions.
- Not locking wallets in a consistent order during transfers. Two concurrent transfers between Alice and Bob can deadlock.
Related Designs
Payment System, Splitwise
Discount Engine — Financial & Payments
Difficulty: Intermediate
Composable eligibility rules via Specification pattern, discount strategies for different calculation types, Decorator-layered pricing, and Null Object for graceful no-discount handling.
Design Patterns for Discount Engine
- Specification Pattern
- Composite Pattern
- Strategy Pattern
- Decorator Pattern
- Null Object Pattern
Key Abstractions in Discount Engine
- Specification: Interface with is_satisfied_by(context) returning bool, plus and_spec/or_spec/not_spec combinators for composing business rules
- MinOrderAmountSpec, PremiumUserSpec, CategorySpec, DateRangeSpec: Concrete specifications that each encapsulate a single eligibility check against the order context
- AndSpecification, OrSpecification, NotSpecification: Composite specs that compose child specifications into boolean expression trees evaluated recursively
- DiscountStrategy: Strategy interface for discount calculation: PercentageDiscount, FixedAmountDiscount, BuyXGetYDiscount
- DiscountDecorator: Decorator that wraps a base discount to layer additional logic: LoyaltyBonus adds an extra percentage, StackableDiscount layers multiple discounts
- NullDiscount: Null Object that returns the original price unchanged, eliminating null checks throughout the discount pipeline
- DiscountRule: Combines a Specification with a DiscountStrategy: if the spec matches the context, apply this discount
Key Points for Discount Engine
- Specification Pattern: business rules become first-class composable objects. MinOrderAmountSpec(50).and_spec(PremiumUserSpec()) reads like a business rule. Each spec is independently testable.
- Null Object: when no discount applies, NullDiscount flows through the pipeline without friction. No if-null checks, no special-casing for when no discount is found.
- Composite: complex eligibility is a tree of AND/OR/NOT specs. The tree evaluates recursively, and new leaf specs snap in without touching existing composition logic.
- Strategy: discount calculation is decoupled from eligibility. The same spec can trigger different discount strategies depending on the promotion campaign.
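A minimal sketch of the spec combinators, using the names from the abstractions above. The dict-shaped order context (`total`, `is_premium` keys) is an assumption.

```python
class Specification:
    def is_satisfied_by(self, context) -> bool:
        raise NotImplementedError
    def and_spec(self, other):
        return AndSpecification(self, other)
    def not_spec(self):
        return NotSpecification(self)

class AndSpecification(Specification):
    def __init__(self, *children):
        self.children = children
    def is_satisfied_by(self, context):
        # Composite: evaluate the whole boolean tree recursively
        return all(c.is_satisfied_by(context) for c in self.children)

class NotSpecification(Specification):
    def __init__(self, inner):
        self.inner = inner
    def is_satisfied_by(self, context):
        return not self.inner.is_satisfied_by(context)

class MinOrderAmountSpec(Specification):
    def __init__(self, minimum):
        self.minimum = minimum
    def is_satisfied_by(self, context):
        return context["total"] >= self.minimum

class PremiumUserSpec(Specification):
    def is_satisfied_by(self, context):
        return context["is_premium"]
```

New leaf specs snap in without touching the combinators, and each leaf is testable in isolation.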
Common Mistakes with Discount Engine
- Hardcoding eligibility rules in if/else chains: you can't compose rules, can't test independently, and adding a new rule means editing a growing conditional block.
- Returning null when no discount matches: forces null checks everywhere downstream and makes the pipeline brittle.
- Coupling eligibility logic to discount calculation: you can't reuse the premium user check for other features like priority support or early access.
- Not making specs immutable: shared specs modified in one context silently corrupt another, leading to non-deterministic discount behavior.
Related Designs
Payment System, Digital Wallet, Inventory Management
Document Builder — Real-World Systems
Difficulty: Intermediate
Composite document tree with Builder construction, Visitor-based rendering to multiple formats, and Prototype for cloning templates. A textbook example of structural patterns working together.
Design Patterns for Document Builder
- Builder Pattern
- Composite Pattern
- Visitor Pattern
- Prototype Pattern
- Iterator Pattern
Key Abstractions in Document Builder
- DocumentElement: Composite node interface. Every element in the document tree implements accept(visitor) for double-dispatch rendering.
- Section: Composite node that contains child elements. Sections can nest other sections, paragraphs, tables, and images.
- Paragraph / Table / Image: Leaf elements holding content. They accept visitors but have no children of their own.
- DocumentBuilder: Builds a document step-by-step with addSection, addParagraph, addTable. Separates construction from representation.
- DocumentVisitor: Interface with visitSection, visitParagraph, visitTable, visitImage. Adding a new output format means one new visitor class.
- HtmlRenderer / PlainTextRenderer: Concrete visitors that traverse the document tree and produce HTML or plain text output respectively.
- DocumentTemplate: Prototype that holds a pre-configured document structure. Clone it with deep copy to create customizable instances.
Key Points for Document Builder
- Builder separates construction from representation: the same build process creates different document structures without telescoping constructors.
- Composite tree lets you treat sections and leaves uniformly with recursive rendering. No special-casing for containers vs. content.
- Visitor adds new operations (HTML, PDF, word count) without modifying element classes. Double-dispatch keeps elements closed for modification.
- Prototype for templates avoids reconstructing common layouts from scratch. Deep clone keeps the original template immutable.
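The Composite-plus-Visitor double dispatch can be sketched with just two node types and one renderer. This is a reduced illustration of the abstractions above, not the full element set.

```python
class Paragraph:
    def __init__(self, text):
        self.text = text
    def accept(self, visitor):
        # Double dispatch: the node routes to the visit method for its own type
        return visitor.visit_paragraph(self)

class Section:
    def __init__(self, title, children=()):
        self.title = title
        self.children = list(children)
    def accept(self, visitor):
        return visitor.visit_section(self)

class PlainTextRenderer:
    def visit_paragraph(self, p):
        return p.text + "\n"
    def visit_section(self, s):
        # Composite: render the container, then recurse uniformly into children
        body = "".join(child.accept(self) for child in s.children)
        return s.title.upper() + "\n" + body
```

Adding an HtmlRenderer would be a second visitor class; no element class changes.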
Common Mistakes with Document Builder
- Putting rendering logic inside element classes. Adding a new format like PDF means editing every single element class, which violates Open/Closed.
- Skipping Composite and handling sections vs. paragraphs vs. tables with separate code paths and type-checking scattered everywhere.
- Forgetting deep clone in Prototype. Shared mutable children mean modifying a cloned document silently corrupts the original template.
- Building documents with constructors instead of Builder. Optional sections become awkward and callers must know the full construction sequence.
Related Designs
Text Editor, In-Memory File System, Expression Evaluator
File Storage (Dropbox) — Platform Scale
Difficulty: Advanced
Chunked uploads, content-addressed storage, versioning. The win is deduplication — two users uploading the same file only store the chunks once.
Design Patterns for File Storage (Dropbox)
- Strategy Pattern
- Memento Pattern
- Composite Pattern
Key Abstractions in File Storage (Dropbox)
- File: User-facing entity — owner, path, versions, current head
- Version: A snapshot of the file — list of chunk hashes, timestamp, size
- Chunk: Fixed-size blob (4MB typical). Content-addressed by SHA-256 of its bytes.
- BlobStore: Low-level store keyed by chunk hash. Typically S3 or equivalent.
- MetadataStore: Tree of folders, files, and versions. The source of truth for listings.
- Uploader: Orchestrates chunking, dedup, upload, download; also exposes delete/undelete/purge for the trash flow
Key Points for File Storage (Dropbox)
- Content-addressed storage: hash the chunk, key the blob by hash. Same bytes uploaded twice = one blob.
- Chunking enables resumable uploads. A failed 100MB upload resumes from chunk N, not byte 0.
- Metadata and blob stores are separate. Metadata is small and transactional; blobs are huge and eventually consistent.
- Versions are immutable lists of chunk hashes. Restore a version by pointing file.head at the old version ID.
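Content-addressed dedup can be sketched in a few lines. The in-memory dict stands in for S3; a tiny chunk size is used here instead of the 4MB the text mentions.

```python
import hashlib

class BlobStore:
    """Chunks keyed by content hash: identical bytes are stored exactly once."""
    def __init__(self):
        self.blobs = {}

    def put(self, chunk: bytes) -> str:
        digest = hashlib.sha256(chunk).hexdigest()
        self.blobs.setdefault(digest, chunk)   # no-op if this content already exists
        return digest

def upload(store, data, chunk_size):
    """Split into chunks and return a version: an immutable tuple of hashes."""
    return tuple(store.put(data[i:i + chunk_size])
                 for i in range(0, len(data), chunk_size))

def download(store, version):
    return b"".join(store.blobs[digest] for digest in version)
```

Two files sharing a prefix share the stored chunks, which is the whole cost model.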
Common Mistakes with File Storage (Dropbox)
- Storing full files on every upload. At scale, duplication destroys the cost model.
- Letting the metadata store hold the bytes. Postgres doesn't want to be a blob store.
- Locking the whole file on write. Conflicting writes should produce a conflict copy, not block.
- Weak hashing (MD5, CRC). Collisions are rare but catastrophic — two different files sharing a chunk corrupts both.
Related Designs
URL Shortener, Notification System, Search Index
Elevator System — Real-World Systems
Difficulty: Intermediate
Multiple elevators with pluggable scheduling. Each elevator runs its own state machine while a central dispatcher assigns requests using the LOOK algorithm to prevent starvation.
Design Patterns for Elevator System
- Strategy Pattern
- Observer Pattern
- State Pattern
Key Abstractions in Elevator System
- ElevatorSystem: Facade that accepts floor requests and dispatches them to the best elevator
- Elevator: Manages its own state, current floor, direction, and pending stop queue
- Request: Captures a floor number and desired direction from a hall call
- Scheduler: Strategy for assigning incoming requests to elevators. Swappable.
- ElevatorState: Enum: IDLE, MOVING_UP, MOVING_DOWN, DOOR_OPEN. Governs valid transitions.
Key Points for Elevator System
- LOOK algorithm over naive nearest-elevator: it services requests in one sweep direction before reversing, preventing starvation of distant floors
- Each elevator is an independent unit with its own lock and stop queue. Parallelism comes naturally.
- Scheduler is decoupled from Elevator. Single responsibility: one picks, the other moves.
- State enum makes transitions explicit. An elevator can't go from DOOR_OPEN to MOVING_DOWN without closing first.
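The LOOK sweep can be sketched as a pure next-stop function: keep moving in the current direction while requests remain ahead, reverse only when none do. The function shape is illustrative.

```python
def next_stop(current, direction, stops):
    """Return (floor, direction) for the next stop under LOOK, or None when idle."""
    if not stops:
        return None   # IDLE: nothing pending, stop sweeping

    def nearest_ahead(d):
        up = [f for f in stops if f >= current]
        down = [f for f in stops if f <= current]
        return (min(up) if up else None) if d == "up" else (max(down) if down else None)

    nxt = nearest_ahead(direction)
    if nxt is None:   # nothing left in this direction: reverse once
        direction = "down" if direction == "up" else "up"
        nxt = nearest_ahead(direction)
    return nxt, direction
```

Because the sweep finishes one direction before reversing, a distant hall call is never starved by closer calls behind the elevator.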
Common Mistakes with Elevator System
- Putting scheduling logic inside Elevator. Now every elevator duplicates the same decision code, and swapping algorithms means editing every elevator.
- Using a flat list of pending floors without direction awareness. That leads to unnecessary direction changes and wasted trips.
- Not handling concurrent requests. Two people pressing buttons at the same time can corrupt the stop queue without proper locking.
- Forgetting the IDLE state. An elevator with no pending requests should stop moving, not keep sweeping endlessly.
Related Designs
Parking Lot, Traffic Signal
Event Sourcing System — Platform Scale
Difficulty: Advanced
Immutable event log with Command-based changes, Mediator coordinating sources and sinks, Composite transformation pipelines, and swappable delivery strategies.
Design Patterns for Event Sourcing System
- Mediator Pattern
- Command Pattern
- Composite Pattern
- Strategy Pattern
- Observer Pattern
Key Abstractions in Event Sourcing System
- Event: Command — immutable data change with type, payload, timestamp, sequence number
- EventStore: Append-only log of events
- EventMediator: Mediator — coordinates between producers, transformers, and consumers
- CompositePipeline: Composite — Filter inside Map inside Batch; complex transforms are trees of simpler ones
- DeliveryStrategy: Strategy — at-least-once, at-most-once, exactly-once semantics
- EventSubscriber: Observer — consumes events, tracks offset
Key Points for Event Sourcing System
- Command: every state change is an immutable event object (InsertEvent, UpdateEvent, DeleteEvent). Events are replayable.
- Mediator: producers and consumers never know about each other. The mediator routes, buffers, and manages back-pressure.
- Composite: transformation pipelines compose. A FilterTransform wrapping a MapTransform wrapping a BatchTransform.
- Strategy: delivery guarantee is swappable. The same event stream can be consumed at-least-once by one subscriber and exactly-once by another.
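The composite pipeline point can be sketched as a tree of transforms. Representing events as plain dicts is an assumption for brevity.

```python
class Transform:
    def apply(self, events):
        raise NotImplementedError

class Identity(Transform):
    """Leaf: ends the chain."""
    def apply(self, events):
        return events

class FilterTransform(Transform):
    def __init__(self, predicate, inner):
        self.predicate, self.inner = predicate, inner
    def apply(self, events):
        # Filter first, then hand the survivors to the wrapped transform
        return self.inner.apply([e for e in events if self.predicate(e)])

class MapTransform(Transform):
    def __init__(self, fn, inner):
        self.fn, self.inner = fn, inner
    def apply(self, events):
        return self.inner.apply([self.fn(e) for e in events])
```

A FilterTransform wrapping a MapTransform wrapping an Identity leaf is exactly the tree shape the key point describes; a BatchTransform would slot in the same way.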
Common Mistakes with Event Sourcing System
- Mutable events: defeats the entire event sourcing model. Events must be append-only.
- No sequence numbers: you can't detect gaps or duplicates without ordering
- Coupling producers to consumers: producers should fire-and-forget to the mediator
- Ignoring back-pressure: fast producers can overwhelm slow consumers
Related Designs
Pub/Sub System, Workflow Engine, Notification System
Expression Evaluator — Foundational
Difficulty: Advanced
Parse math expressions into an AST, then walk the tree with visitors for evaluation, pretty-printing, or optimization. Interpreter builds the structure, Visitor adds operations without modifying it.
Design Patterns for Expression Evaluator
- Interpreter Pattern
- Visitor Pattern
Key Abstractions in Expression Evaluator
- Expression: Abstract AST node with accept(visitor) for double dispatch to the right visit method
- NumberExpr / BinaryExpr / UnaryExpr: Concrete AST nodes representing literals, binary operations, and unary operations
- ExpressionVisitor: Visitor interface with a visit method per node type, enabling new operations without modifying nodes
- Parser: Recursive descent parser that tokenizes input and builds an AST respecting operator precedence
Key Points for Expression Evaluator
- Interpreter defines the grammar as a class hierarchy. Each node type is a rule in the grammar.
- Visitor adds new operations to the AST without modifying any Expression class. Open-closed principle.
- Double dispatch: Expression.accept(visitor) calls visitor.visit_X(self), routing to the correct visit method.
- Parser handles operator precedence through recursive descent: addition/subtraction at one level, multiplication/division at another.
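Precedence via recursive descent can be sketched with two grammar levels. Tuples stand in for the AST node classes here to keep the sketch short; a real implementation would use NumberExpr/BinaryExpr with accept().

```python
import re

def tokenize(src):
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def parse_expr(self):            # expr := term (('+'|'-') term)*
        node = self.parse_term()
        while self.peek() in ("+", "-"):
            node = ("bin", self.advance(), node, self.parse_term())
        return node

    def parse_term(self):            # term := factor (('*'|'/') factor)*
        node = self.parse_factor()
        while self.peek() in ("*", "/"):
            node = ("bin", self.advance(), node, self.parse_factor())
        return node

    def parse_factor(self):          # factor := NUMBER | '(' expr ')'
        tok = self.advance()
        if tok == "(":
            node = self.parse_expr()
            self.advance()           # consume ')'
            return node
        return ("num", int(tok))

def evaluate(node):
    if node[0] == "num":
        return node[1]
    _, op, left, right = node
    a, b = evaluate(left), evaluate(right)
    if op == "/" and b == 0:
        raise ZeroDivisionError("division by zero in expression")
    return {"+": a + b, "-": a - b, "*": a * b, "/": a // b}[op]
```

Because `*` and `/` bind at the term level, below `+` and `-`, precedence falls out of the call structure rather than explicit priority tables.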
Common Mistakes with Expression Evaluator
- Putting evaluate() directly on Expression nodes. It works until you need a second operation. Then you edit every node class again.
- Ignoring operator precedence in the parser. 2 + 3 * 4 should be 14, not 20.
- Not handling division by zero. The EvaluateVisitor should check the denominator before dividing.
- Using instanceof checks instead of the Visitor pattern. That defeats the purpose and scatters operation logic.
Related Designs
In-Memory File System, Logging Framework
Feature Flag System — Platform Scale
Difficulty: Advanced
Proxy-based gating keeps flag checks invisible to callers. Decorator-layered evaluation stacks kill-switches and overrides without combinatorial explosion. Swappable targeting strategies let you go from percentage rollouts to user-segment rules at config time.
Design Patterns for Feature Flag System
- Proxy Pattern
- Decorator Pattern
- Singleton Pattern
- Strategy Pattern
- Factory Pattern
- Observer Pattern
Key Abstractions in Feature Flag System
- FeatureFlag: Data class holding a flag's name, enabled state, and targeting rules that strategies evaluate against
- FlagEvaluator: Strategy interface with evaluate(context) -> bool. Implementations include PercentageEvaluator, UserSegmentEvaluator, etc.
- EvaluationDecorator: Wraps a base FlagEvaluator and adds a layer such as kill-switch or manual override before delegating
- FlagManager: Singleton that serves as the central registry for all feature flags, ensuring consistent state across the application
- FeatureProxy: Proxy that wraps a service call and transparently checks a flag before routing to the new or old implementation
- FlagChangeListener: Observer that gets notified whenever a flag's state changes, enabling cache invalidation, logging, or metric emission
Key Points for Feature Flag System
- Proxy pattern makes flag checks invisible to calling code. The caller invokes the same interface regardless of which implementation runs behind it.
- Decorator pattern lets you stack evaluation layers: kill-switch, override, percentage. Each decorator adds one concern, with no monolithic if/else chain.
- Singleton for FlagManager guarantees every part of the application reads from the same flag registry. Two registries means two sources of truth and subtle bugs.
- Strategy pattern for targeting means you can swap from percentage-based rollout to user-segment targeting by changing the evaluator, not the flag system.
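The decorator stack with a fail-closed outer layer can be sketched as below. The `crc32` bucketing and the class names are illustrative assumptions.

```python
import zlib

class FlagEvaluator:
    def evaluate(self, context) -> bool:
        raise NotImplementedError

class PercentageEvaluator(FlagEvaluator):
    def __init__(self, percent):
        self.percent = percent
    def evaluate(self, context):
        # Stable bucket per user, so the same user always gets the same answer
        bucket = zlib.crc32(context["user_id"].encode()) % 100
        return bucket < self.percent

class KillSwitchDecorator(FlagEvaluator):
    def __init__(self, inner, switches):
        self.inner, self.switches = inner, switches
    def evaluate(self, context):
        if self.switches.get("kill_all"):
            return False               # kill-switch wins over everything below it
        return self.inner.evaluate(context)

class FailClosedDecorator(FlagEvaluator):
    def __init__(self, inner):
        self.inner = inner
    def evaluate(self, context):
        try:
            return self.inner.evaluate(context)
        except Exception:
            return False               # evaluation error -> flag off, never fail open
```

Each decorator adds exactly one concern, and the outermost layer guarantees the safe default the common-mistakes list calls out.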
Common Mistakes with Feature Flag System
- Not defaulting to 'off' on evaluation errors. If the evaluator throws or the flag is missing, the safe default is disabled. Failing open exposes unfinished features to everyone.
- Serving stale flags from a local cache without a TTL or push-based invalidation. A flag turned off in the dashboard stays on for minutes in production.
- Coupling flag evaluation logic directly into business code with scattered if/else checks. Flag removal becomes painful and leaves dead code everywhere.
- Missing an audit trail for flag changes. When an incident happens and a flag was toggled, you need to know who changed it, when, and what the previous state was.
Related Designs
Rate Limiter, API Gateway, Notification System
File Compression System — Real-World Systems
Difficulty: Intermediate
Swappable compression strategies, Decorator-layered stream processing (compress, encrypt, checksum), Composite archive tree, and Builder for step-by-step archive construction.
Design Patterns for File Compression System
- Strategy Pattern
- Decorator Pattern
- Composite Pattern
- Builder Pattern
- Template Method
Key Abstractions in File Compression System
- CompressionStrategy: Interface defining compress/decompress on raw bytes. Strategies are swappable at runtime.
- RLECompression / HuffmanCompression: Concrete strategies. RLE for repetitive data, Huffman for general text with frequency-based encoding.
- StreamDecorator: Wraps a data stream to add processing layers. EncryptionDecorator and ChecksumDecorator compose transparently.
- ArchiveEntry: Composite node. Files are leaves holding compressed data; directories contain children for recursive traversal.
- ArchiveBuilder: Fluent builder for archive construction. addFile, addDirectory, setCompression, setEncryption, build with validation.
- Compressor: Template Method skeleton: read chunk, compress, write output, update checksum. Subclasses override hooks.
Key Points for File Compression System
- Strategy makes compression algorithm swappable: choose RLE for repetitive data, Huffman for general text
- Decorator for stream processing layers that compose: encryption wraps compression wraps raw data
- Composite for archive structure: directories contain files and sub-directories, recursive traversal
- Builder validates constraints: encryption requires a key, compression level must be valid
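One concrete strategy can be sketched end to end; the byte-pair encoding below is a simple RLE variant, not a production codec.

```python
class CompressionStrategy:
    def compress(self, data: bytes) -> bytes:
        raise NotImplementedError
    def decompress(self, data: bytes) -> bytes:
        raise NotImplementedError

class RLECompression(CompressionStrategy):
    """Run-length encoding as (count, byte) pairs. Effective on repetitive data."""
    def compress(self, data):
        out = bytearray()
        i = 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i] and j - i < 255:
                j += 1                       # extend the run, capped at 255
            out += bytes([j - i, data[i]])   # run length, then the byte
            i = j
        return bytes(out)

    def decompress(self, data):
        out = bytearray()
        for i in range(0, len(data), 2):
            out += bytes([data[i + 1]]) * data[i]
        return bytes(out)
```

Swapping in HuffmanCompression would mean implementing the same two-method interface; callers never change.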
Common Mistakes with File Compression System
- Hardcoding compression algorithm: you can't optimize per file type
- Applying encryption before compression: encrypted data doesn't compress well
- Not using Composite for directories: leads to flat file lists with path string parsing
- Loading entire file into memory: you need streaming for large archives
Related Designs
In-Memory File System, Document Builder, Multi-Level Cache
Food Delivery (Swiggy) — Platform Scale
Difficulty: Advanced
Order lifecycle via state machine, delivery agent assignment via strategy pattern, and real-time tracking via observer. A three-sided marketplace where coordination timing is everything.
Design Patterns for Food Delivery (Swiggy)
- Strategy Pattern
- Observer Pattern
- State Pattern
Key Abstractions in Food Delivery (Swiggy)
- DeliveryPlatform: Orchestrator that coordinates restaurants, customers, agents, and order lifecycle
- Restaurant: Menu provider that confirms or rejects incoming orders
- Customer: Orderer who places orders and receives delivery status updates
- DeliveryAgent: Driver with location and current load who picks up and delivers orders
- Order: Core entity with state machine: PLACED -> CONFIRMED -> PREPARING -> OUT_FOR_DELIVERY -> DELIVERED
- MenuItem: Individual food item with name, price, and availability flag
- AssignmentStrategy: Algorithm for picking a delivery agent: nearest, load-balanced, or rating-based
Key Points for Food Delivery (Swiggy)
- State machine on Order enforces valid transitions. An order cannot jump from PLACED to DELIVERED.
- Strategy pattern for agent assignment: swap between nearest-agent, load-balanced, or rating-based without changing order logic
- Observer notifies customer and restaurant on every status change for real-time tracking
- Restaurant confirmation is an explicit state. The order waits for the restaurant to accept before preparation begins.
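The order state machine can be sketched with an explicit transition table. Cancellation and rejection branches are omitted here to keep the sketch minimal.

```python
from enum import Enum, auto

class OrderState(Enum):
    PLACED = auto()
    CONFIRMED = auto()
    PREPARING = auto()
    OUT_FOR_DELIVERY = auto()
    DELIVERED = auto()

# Each state lists the only states it may move to.
TRANSITIONS = {
    OrderState.PLACED: {OrderState.CONFIRMED},
    OrderState.CONFIRMED: {OrderState.PREPARING},
    OrderState.PREPARING: {OrderState.OUT_FOR_DELIVERY},
    OrderState.OUT_FOR_DELIVERY: {OrderState.DELIVERED},
    OrderState.DELIVERED: set(),      # terminal
}

class Order:
    def __init__(self):
        self.state = OrderState.PLACED

    def transition_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"invalid transition {self.state.name} -> {new_state.name}")
        self.state = new_state
```

The table makes illegal jumps (PLACED straight to DELIVERED) fail loudly instead of silently corrupting the lifecycle.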
Common Mistakes with Food Delivery (Swiggy)
- Skipping the CONFIRMED state. Without restaurant confirmation, you charge the customer before knowing if the food can be made.
- Assigning a delivery agent at order placement. The agent should be assigned when food is almost ready, not 30 minutes early.
- Hardcoding agent assignment logic. Load balancing during peak hours and nearest-agent during off-peak need different strategies.
- Not tracking agent load. Assigning five orders to one agent while others sit idle kills delivery times.
Related Designs
Ride Sharing, Restaurant Management
Form Validation Engine — Foundational
Difficulty: Starter
Composable validation chains with Builder construction, Composite form groups for recursive validation, and Strategy for fail-fast vs collect-all modes. A beginner-friendly structural pattern showcase.
Design Patterns for Form Validation Engine
- Chain of Responsibility
- Composite Pattern
- Builder Pattern
- Strategy Pattern
Key Abstractions in Form Validation Engine
- Validator: Chain of Responsibility: each validator checks one rule and delegates to the next
- RequiredValidator, EmailValidator, MinLengthValidator, MaxLengthValidator, RegexValidator: Concrete validators: each encapsulates a single validation rule
- FormGroup: Composite: contains fields and sub-groups, validates recursively
- ValidationBuilder: Builder: fluent API: field("email").required().email().maxLength(255).build()
- ValidationMode: Strategy: FailFast stops at first error, CollectAll gathers every error
Key Points for Form Validation Engine
- Chain of Responsibility: validators link together. Required check passes to format check passes to length check. Each is independent.
- Composite: form groups nest. Address group (street, city, zip) inside a user form. Validating the form validates all groups recursively.
- Builder: fluent API makes rule construction readable and prevents invalid combinations. The builder produces an immutable validator chain.
- Strategy: fail-fast returns on first error (fast feedback), collect-all returns every error at once (form UX).
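The chain plus the two modes can be sketched together; the `chain` helper and error-string return convention are assumptions for illustration.

```python
class Validator:
    """Chain link: each validator checks one rule, then delegates to the next."""
    def __init__(self):
        self.next = None

    def check(self, value):
        return None   # error message, or None when the rule passes

    def validate(self, value, fail_fast=True):
        errors, node = [], self
        while node is not None:
            error = node.check(value)
            if error:
                errors.append(error)
                if fail_fast:
                    break      # FailFast strategy: stop at the first error
            node = node.next   # CollectAll keeps walking the chain
        return errors

class RequiredValidator(Validator):
    def check(self, value):
        return "required" if not value else None

class MinLengthValidator(Validator):
    def __init__(self, n):
        super().__init__()
        self.n = n
    def check(self, value):
        return f"min length {self.n}" if len(value) < self.n else None

def chain(*validators):
    for a, b in zip(validators, validators[1:]):
        a.next = b
    return validators[0]
```

A fluent ValidationBuilder would just assemble a chain like this behind the scenes.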
Common Mistakes with Form Validation Engine
- Giant if/else blocks for validation: adding a new rule requires editing the monolith
- Not supporting nested form groups: address fields validated separately from the form they belong to
- Hardcoding fail-fast behavior: some UIs need all errors at once for inline display
- Mutable validator chains: sharing a chain between forms and then modifying it corrupts both
Related Designs
Middleware Pipeline, Rate Limiter, Logging Framework
Game Framework — Games & Simulations
Difficulty: Advanced
A reusable game engine where adding a new game means plugging in rules, not rewriting the loop. Template Method defines the game lifecycle, Strategy swaps scoring and win conditions, and Command enables move history with undo.
Design Patterns for Game Framework
- Template Method
- Strategy Pattern
- State Pattern
- Command Pattern
- Observer Pattern
- Factory Pattern
Key Abstractions in Game Framework
- Game: Template Method: defines the game loop skeleton: initialize, play turns, check win, end. Subclasses never override the loop itself, only the hooks.
- GameRules: Strategy: validates moves, checks win conditions, calculates scores. Swap this to change the game entirely.
- GamePhase: State enum tracking whose turn it is and whether the game is running or finished. Prevents illegal transitions like playing after game over.
- Move: Command: encapsulates a player action with execute/undo. Enables move history and replay.
- Player: Participant with name, score, and game-specific attributes. The engine treats all players identically.
- Board: Generic board abstraction: grid-based or card-based. Games provide their own board type.
- GameEventListener: Observer: notified on move made, turn changed, game over. UI and logging subscribe here.
- GameFactory: Factory: creates the right Game+Rules+Board combo from a game type string.
Key Points for Game Framework
- Template Method keeps the game loop identical across all games: initialize board, loop (get move, validate, apply, check win, switch turn), end game. Each game only overrides the hooks.
- Strategy for rules means Tic-Tac-Toe's win check (3 in a row) and Chess's win check (checkmate) are both just implementations of the same GameRules interface.
- Command pattern for moves gives you undo for free. Every move stores enough state to reverse itself. Move history is just a list of commands.
- Factory hides game creation complexity. `GameFactory.create("tic-tac-toe")` returns a fully wired Game with the right board, rules, and player count.
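The fixed loop with overridable hooks can be sketched with a toy game; FirstToThree and its scripted moves are invented here purely to exercise the template.

```python
class Game:
    """Template Method: play() is the fixed loop; subclasses only fill in hooks."""
    def play(self):
        self.initialize()
        while not self.is_over():
            self.play_turn()
        return self.winner()

    # Hooks for concrete games to override:
    def initialize(self): ...
    def is_over(self): ...
    def play_turn(self): ...
    def winner(self): ...

class FirstToThree(Game):
    """Toy game: the first player to score three points wins."""
    def __init__(self, scripted_moves):
        self.moves = iter(scripted_moves)   # scripted turns keep the sketch deterministic
        self.scores = {"A": 0, "B": 0}

    def initialize(self):
        pass

    def is_over(self):
        return 3 in self.scores.values()

    def play_turn(self):
        self.scores[next(self.moves)] += 1

    def winner(self):
        return max(self.scores, key=self.scores.get)
```

Chess or Tic-Tac-Toe would override the same four hooks; the loop in `play()` never changes.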
Common Mistakes with Game Framework
- Putting game-specific logic in the engine. The engine should know nothing about chess rules or card hands. If you edit the engine to add a new game, the abstraction is broken.
- Skipping the Command pattern for moves. Without it, undo requires snapshots of the entire game state (expensive and fragile).
- Making the Board too specific. A chess board is 8x8, a card game has no grid. The board abstraction needs to accommodate both.
- Not using State for game phases. Without explicit states (SETUP, PLAYING, PAUSED, FINISHED), you get boolean flags like `isStarted`, `isOver`, `isPaused` that conflict with each other.
Related Designs
Chess, Tic Tac Toe, Snake & Ladder
Game World (Tile Engine) — Games & Simulations
Difficulty: Intermediate
A 1000x1000 map has a million tiles but only a handful of terrain types. Flyweight shares the heavy data across all tiles of the same type. Bridge decouples game entities from how they get rendered.
Design Patterns for Game World (Tile Engine)
- Flyweight Pattern
- Bridge Pattern
Key Abstractions in Game World (Tile Engine)
- TileType: Flyweight holding shared intrinsic state: terrain name, walkability, movement cost, and display symbol
- TileTypeFactory: Flyweight factory ensuring exactly one TileType instance exists per terrain type
- Tile: Map cell referencing a shared TileType plus its own position and items as extrinsic state
- GameEntity: Bridge abstraction for game objects, delegating visual output to a Renderer implementation
- Renderer: Bridge implementation interface for rendering entities and tiles in different formats
Key Points for Game World (Tile Engine)
- A million tiles share maybe 5 TileType objects. Memory drops from O(tiles * properties) to O(tiles * 1 reference + types * properties).
- TileTypeFactory is the gatekeeper. It caches flyweights and never creates duplicates.
- Bridge means adding a new renderer (WebGL, SVG) never touches Player, Enemy, or any entity class.
- Intrinsic state (terrain properties) is shared and immutable. Extrinsic state (position, items) is per-tile and mutable.
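The intrinsic/extrinsic split can be sketched as follows. The class and field names are illustrative, not a prescribed interface:

```python
class TileType:
    """Flyweight: shared, immutable intrinsic state for one terrain kind."""
    __slots__ = ("name", "walkable", "cost", "symbol")
    def __init__(self, name, walkable, cost, symbol):
        self.name, self.walkable, self.cost, self.symbol = name, walkable, cost, symbol

class TileTypeFactory:
    """Gatekeeper: caches flyweights so exactly one TileType exists per terrain."""
    _cache = {}
    @classmethod
    def get(cls, name, walkable, cost, symbol):
        if name not in cls._cache:
            cls._cache[name] = TileType(name, walkable, cost, symbol)
        return cls._cache[name]

class Tile:
    """Extrinsic state only: position and items; terrain is a shared reference."""
    __slots__ = ("x", "y", "type", "items")
    def __init__(self, x, y, tile_type):
        self.x, self.y, self.type, self.items = x, y, tile_type, []

# 10,000 tiles, one shared TileType object behind all of them
tiles = [Tile(x, y, TileTypeFactory.get("grass", True, 1, "."))
         for x in range(100) for y in range(100)]
```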
Common Mistakes with Game World (Tile Engine)
- Storing position inside TileType. Position is extrinsic state that varies per tile. Putting it in the flyweight defeats sharing.
- Creating a new TileType for every tile. Without the factory cache, you get a million objects instead of five.
- Hardcoding rendering inside GameEntity. That couples your domain model to a specific output format. Bridge keeps them independent.
- Making the flyweight mutable. If one tile changes the 'walkable' property on its TileType, every tile sharing that type changes too.
Related Designs
Chess, Tic Tac Toe
Hangman — Games & Simulations
Difficulty: Starter
Guess letters before the wrong guesses run out. Tiny in scope, which makes it the best warm-up for modeling game state and separating rules from rendering.
Design Patterns for Hangman
- State Pattern
- Strategy Pattern
- Template Method
Key Abstractions in Hangman
- Word: The target. Knows its letters, its masked view, and which positions a letter lives in.
- Game: Owns state — remaining guesses, revealed positions, guessed-set, status
- WordProvider: Strategy for picking the target — dictionary, fixed-for-tests, category-based
- GuessResult: Value object returned per guess — hit, miss, duplicate, invalid
- PlayerStats: Per-player wins, losses, current streak, best streak
- HangmanSession: Plays a sequence of Games for one Player; updates PlayerStats on each terminal status
Key Points for Hangman
- Track guessed letters in a set, not a list. Duplicate guesses are detected in O(1).
- Masked view is derived from Word + guessed set. Don't store the masked string — recompute.
- Game status is a state machine: IN_PROGRESS → WON (all letters guessed) or IN_PROGRESS → LOST (out of guesses).
- WordProvider as Strategy keeps the game deterministic in tests (FixedWordProvider) while production uses dictionary.
- Session wraps Game so stats tracking is outside the single-game core — same Game class, optional stats.
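The derived-not-stored masked view and the four-way guess result can be shown in two small functions. This is a sketch of the idea, with invented function names:

```python
def masked_view(word, guessed):
    """Derived from the target plus the guessed set; never stored as mutable state."""
    return "".join(ch if ch.lower() in guessed else "_" for ch in word)

def make_guess(word, guessed, letter):
    """Returns one of 'invalid', 'duplicate', 'hit', 'miss'; normalizes case at the boundary."""
    letter = letter.lower()
    if len(letter) != 1 or not letter.isalpha():
        return "invalid"
    if letter in guessed:
        return "duplicate"          # O(1) membership check on the set
    guessed.add(letter)
    return "hit" if letter in word.lower() else "miss"
```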
Common Mistakes with Hangman
- Storing the masked word as mutable state. Drift between target and masked is how bugs creep in.
- Treating the guess as 'correct or incorrect' — there's also 'already guessed' and 'invalid character' for UX.
- Case-insensitive matching but case-sensitive storage. Normalize at the boundary.
- Running out of guesses silently. Signal it through status + event, not just a remaining counter.
Related Designs
Tic Tac Toe, Snake and Ladder, Deck of Cards
HashMap — Foundational
Difficulty: Starter
An array of buckets, a hash function, and linked lists for collisions. Simple pieces that combine into the most used data structure in software engineering.
Design Patterns for HashMap
- Strategy Pattern
- Iterator Pattern
Key Abstractions in HashMap
- HashMap: Public API for put, get, remove. Manages capacity and triggers resizing.
- Entry: Key-value-hash triple. Stores the precomputed hash to avoid rehashing during resize.
- HashFunction: Strategy interface for computing hash codes. Swappable for different key types.
Key Points for HashMap
- Separate chaining handles collisions by appending to a linked list at each bucket index.
- Power-of-2 capacity lets you replace modulo with bitwise AND. index = hash & (capacity - 1) is faster.
- Rehashing at 0.75 load factor keeps average chain length short, preserving O(1) amortized access.
- Storing the hash in each Entry avoids recomputing it during resize. Small cost, big win at scale.
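A compact sketch tying these points together: power-of-2 indexing, hash-before-equals, duplicate keys updating in place, and resize that re-indexes against the new capacity using the stored hash:

```python
class HashMap:
    """Separate chaining; power-of-2 capacity so index = hash & (capacity - 1)."""
    def __init__(self, capacity=16):
        self.capacity = capacity              # must be a power of 2
        self.size = 0
        self.buckets = [[] for _ in range(capacity)]

    def _index(self, h):
        return h & (self.capacity - 1)        # bitwise AND replaces modulo

    def put(self, key, value):
        h = hash(key)
        bucket = self.buckets[self._index(h)]
        for entry in bucket:
            if entry[0] == h and entry[1] == key:   # compare hash before equals
                entry[2] = value                    # duplicate key: update, don't append
                return
        bucket.append([h, key, value])
        self.size += 1
        if self.size > 0.75 * self.capacity:
            self._resize()

    def get(self, key):
        h = hash(key)
        for eh, ek, ev in self.buckets[self._index(h)]:
            if eh == h and ek == key:
                return ev
        return None

    def _resize(self):
        old = [e for b in self.buckets for e in b]
        self.capacity *= 2                    # re-index against the NEW capacity
        self.buckets = [[] for _ in range(self.capacity)]
        for h, k, v in old:                   # stored hash: no rehash needed on resize
            self.buckets[self._index(h)].append([h, k, v])
```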
Common Mistakes with HashMap
- Using key equality without checking hash first. Compare hashes before calling equals() to short-circuit expensive comparisons.
- Forgetting to handle null keys or duplicate keys on put(). Duplicates should update the value, not create a second entry.
- Not resizing. Without resize, chains grow linearly and your O(1) map degrades to O(n).
- Rehashing with the old capacity instead of the new one. Every entry must be re-indexed against the new size.
Related Designs
LRU Cache, Rate Limiter
Hospital Management — Real-World Systems
Difficulty: Intermediate
Patients, doctors, appointments, and prescriptions. The scheduling side is an interval-tree variant; the medical record side is append-only for audit.
Design Patterns for Hospital Management
- State Pattern
- Observer Pattern
- Factory Method
Key Abstractions in Hospital Management
- Patient: Identity, contact, insurance. Owns a MedicalRecord (append-only).
- Doctor: Identity, specialty, availability schedule
- Appointment: Links doctor, patient, slot. State machine (requested, confirmed, checked-in, completed, cancelled).
- MedicalRecord: Patient's chart — append-only list of encounters, diagnoses, prescriptions
- Prescription: Medication, dosage, duration, prescriber, timestamp
- Encounter: What actually happened in an appointment — diagnoses, notes, prescriptions issued
Key Points for Hospital Management
- MedicalRecord is append-only. Corrections append a new entry referencing the original, never overwriting.
- Doctor availability modeled as intervals. Conflict check uses the same trick as meeting scheduler.
- Appointment state transitions audited — who changed what, when. Medical is audit-heavy by law.
- Prescription has both prescriber and dispenser — the same person isn't always both.
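The append-only rule is small in code but load-bearing. A minimal sketch of the correction-by-reference idea (field names are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class RecordEntry:
    """Immutable chart entry; a correction references the original, never overwrites it."""
    kind: str                       # e.g. 'diagnosis', 'note', 'correction'
    text: str
    corrects: Optional[int] = None  # id of the entry being corrected

class MedicalRecord:
    """Append-only: entries are only ever added, so full history survives for audit."""
    def __init__(self):
        self._entries = []

    def append(self, entry):
        self._entries.append(entry)
        return len(self._entries) - 1   # entry id, usable in later corrections

    def correct(self, entry_id, text):
        return self.append(RecordEntry("correction", text, corrects=entry_id))

    def history(self):
        return tuple(self._entries)     # read-only view; no deletes, no edits
```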
Common Mistakes with Hospital Management
- Mutable medical records. Deleting an old diagnosis is malpractice territory; history must be preserved.
- Doctors share a global calendar. Wrong model — each doctor has their own, plus hospital-wide blocking.
- Treating an appointment as a time slot alone. The encounter (what happened) is a separate object attached on check-in.
- Skipping insurance verification. A full real-world design has a claim lifecycle; an interview answer can acknowledge it without modeling it fully.
Related Designs
Meeting Scheduler, Hotel Management, Library Management
Hotel Management — Booking & Reservations
Difficulty: Intermediate
Room inventory with date-range bookings and pluggable pricing. Strategy handles seasonal rates and weekend premiums. Factory creates room types. Observer fires check-in/out events.
Design Patterns for Hotel Management
- Strategy Pattern
- Factory Method
- Observer Pattern
Key Abstractions in Hotel Management
- Hotel: Facade. Coordinates room inventory, bookings, and guest management.
- Room: Physical room with number, type, floor, and amenity list.
- RoomType: Enum: SINGLE, DOUBLE, SUITE. Drives pricing and capacity.
- Reservation: Binds a guest to a room for a check-in/check-out date range.
- Guest: Contact info and reservation history.
- PricingStrategy: Interface for rate calculation: standard, seasonal, dynamic.
- BookingService: Checks date-range availability and creates reservations with concurrency control.
Key Points for Hotel Management
- Date-range availability checking is the core complexity. A room is available only if no existing reservation overlaps the requested date range.
- Strategy pattern for pricing. Standard pricing uses a flat rate per room type. Seasonal pricing applies multipliers for holidays. Weekend pricing bumps Friday/Saturday rates.
- Factory method for room creation. Adding a PENTHOUSE type means one new case in the factory, not scattered construction logic.
- Observer fires events on check-in and check-out. Housekeeping subscribes to check-out. Billing subscribes to check-in for deposit capture.
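The date-range overlap check is a one-liner on half-open intervals. A minimal sketch of the availability logic (no concurrency control shown):

```python
from datetime import date

def overlaps(a_in, a_out, b_in, b_out):
    """Half-open [check_in, check_out): back-to-back stays don't conflict."""
    return a_in < b_out and b_in < a_out

class BookingService:
    """Available = no existing reservation overlaps the requested range."""
    def __init__(self):
        self.reservations = {}          # room_id -> list of (check_in, check_out)

    def is_available(self, room_id, check_in, check_out):
        return all(not overlaps(check_in, check_out, b, e)
                   for b, e in self.reservations.get(room_id, []))

    def book(self, room_id, check_in, check_out):
        if not self.is_available(room_id, check_in, check_out):
            raise ValueError("room not available for that date range")
        self.reservations.setdefault(room_id, []).append((check_in, check_out))
```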
Common Mistakes with Hotel Management
- Checking availability by a single date instead of a date range. A room booked Jan 5-8 is unavailable on Jan 6, even though no reservation starts on that date.
- Putting pricing logic inside the Room class. Room describes physical attributes. Pricing is a business rule that changes by season, by demand, by customer loyalty tier.
- No overlap detection on reservations. Two guests end up booked into the same room for overlapping dates. Classic concurrency bug.
- Using inheritance for room types instead of composition. SINGLE, DOUBLE, SUITE differ in attributes, not behavior. An enum plus a config record works better.
Related Designs
Restaurant Management, Concert Ticket Booking
In-Memory File System — Platform Scale
Difficulty: Intermediate
Composite pattern for hierarchical file/directory structure with path resolution. Files and directories share a common interface, so recursive operations like size calculation and deletion work uniformly across the tree.
Design Patterns for In-Memory File System
- Composite Pattern
- Iterator Pattern
Key Abstractions in In-Memory File System
- FileSystem: Facade with root directory and path resolution. All operations go through here.
- FileSystemEntry: Abstract base for File and Directory. Enables uniform treatment across the tree.
- File: Leaf node holding a name and string content
- Directory: Composite node with a children map for O(1) child lookup by name
Key Points for In-Memory File System
- Composite pattern lets files and directories share a common interface. Recursive operations like size() work without type-checking.
- Children stored as HashMap, not a list. Name-based lookups are O(1) instead of scanning every entry.
- Path resolution walks the tree from root by splitting on '/'. Missing segments fail fast with clear errors.
- FileSystemEntry is abstract with polymorphic behavior. No instanceof calls in client code.
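A minimal Composite sketch: one polymorphic `size()`, a HashMap of children, and fail-fast path resolution with no instanceof in client code (method names are illustrative):

```python
class FileSystemEntry:
    """Composite base: files and directories share one interface, so size() recurses uniformly."""
    def __init__(self, name):
        self.name = name
    def size(self):
        raise NotImplementedError
    def child(self, name):
        raise FileNotFoundError(f"{self.name!r} is not a directory")

class File(FileSystemEntry):
    def __init__(self, name, content=""):
        super().__init__(name)
        self.content = content
    def size(self):
        return len(self.content)

class Directory(FileSystemEntry):
    def __init__(self, name):
        super().__init__(name)
        self.children = {}              # HashMap: O(1) child lookup by name
    def add(self, entry):
        self.children[entry.name] = entry
        return entry
    def child(self, name):
        if name not in self.children:
            raise FileNotFoundError(name)
        return self.children[name]
    def size(self):
        return sum(c.size() for c in self.children.values())

def resolve(root, path):
    """Walk from root splitting on '/'; a missing segment fails fast."""
    node = root
    for part in path.split("/"):
        if part:
            node = node.child(part)
    return node
```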
Common Mistakes with In-Memory File System
- Using isinstance/instanceof checks everywhere instead of polymorphism. Adding a new entry type means touching every operation.
- Storing children in a list. Linear scan for every lookup is fine for 10 files, painful for 10,000.
- Forgetting that rm on a directory should be recursive. Leaving orphaned children is a silent resource leak.
- Not validating paths. Operations on '/foo/bar' should fail cleanly when '/foo' does not exist.
Related Designs
Stack Overflow, Logging Framework
Inventory Management — Real-World Systems
Difficulty: Advanced
DDD-inspired inventory system with Specification for composable product queries and reorder rules, Repository for clean data access, and Strategy for configurable replenishment algorithms.
Design Patterns for Inventory Management
- Specification Pattern
- Repository Pattern
- Observer Pattern
- Strategy Pattern
- State Pattern
Key Abstractions in Inventory Management
- Specification: Interface: is_satisfied_by(item) with and_spec/or_spec/not_spec combinators for composable queries
- InStockSpec, CategorySpec, PriceRangeSpec, BelowReorderPointSpec: Concrete specifications for filtering products by availability, category, price range, and reorder eligibility
- ProductRepository: Repository: add, find_by_id, find_matching(spec), find_all. Collection-like abstraction over data access
- Product: Entity: id, name, category, price, quantity, reorder_point. Core domain object
- ReplenishmentStrategy: Strategy: JustInTime orders exactly what's needed, SafetyStock adds a buffer, EconomicOrderQuantity optimizes batch size
- OrderState: State enum: RECEIVED, PICKING, PACKING, SHIPPED, DELIVERED. Tracks warehouse order lifecycle
- StockObserver: Observer: alerts on low stock events and triggers automatic reorder workflows
Key Points for Inventory Management
- Specification + Repository: `repo.find_matching(InStockSpec().and_spec(CategorySpec('electronics')))` accepts spec objects for querying. This pair shows up constantly in DDD for good reason.
- Two uses of Specification in one system: (1) product search/filtering (2) reorder eligibility rules. Same pattern, different domains.
- Repository: abstracts data access behind a collection-like interface. Swap InMemoryRepository for DatabaseRepository without changing business logic.
- Strategy: replenishment algorithms calculate different reorder quantities. JustInTime orders exactly what's needed, SafetyStock adds a buffer.
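The Specification + Repository pairing from the first point can be sketched like this. `CompositeSpec` is an assumed helper for the combinators; products are plain dicts to keep the sketch short:

```python
class Specification:
    """Composable predicate; and_spec/or_spec build new specs from existing ones."""
    def is_satisfied_by(self, item):
        raise NotImplementedError
    def and_spec(self, other):
        return CompositeSpec(lambda i: self.is_satisfied_by(i) and other.is_satisfied_by(i))
    def or_spec(self, other):
        return CompositeSpec(lambda i: self.is_satisfied_by(i) or other.is_satisfied_by(i))

class CompositeSpec(Specification):
    def __init__(self, predicate):
        self._predicate = predicate
    def is_satisfied_by(self, item):
        return self._predicate(item)

class InStockSpec(Specification):
    def is_satisfied_by(self, item):
        return item["quantity"] > 0

class CategorySpec(Specification):
    def __init__(self, category):
        self.category = category
    def is_satisfied_by(self, item):
        return item["category"] == self.category

class ProductRepository:
    """Collection-like data access: queries take spec objects, not bespoke finder methods."""
    def __init__(self):
        self._items = []
    def add(self, item):
        self._items.append(item)
    def find_matching(self, spec):
        return [i for i in self._items if spec.is_satisfied_by(i)]
```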
Common Mistakes with Inventory Management
- Embedding query logic in service classes: leads to duplicated filtering across different features
- Not separating specification from repository: putting WHERE clause logic inside the repository method names (findByPriceAndCategory)
- Mutable specifications: shared specs modified in one feature corrupt another
- Single replenishment strategy for all products: perishables need JustInTime, electronics need SafetyStock
Related Designs
Online Shopping, Discount Engine, Parking Lot
Invoice & Billing — Financial & Payments
Difficulty: Intermediate
Line items, tax rules, discounts, invoices. The model every SaaS eventually needs and most interview candidates model badly — floats, mutable invoices, missing audit.
Design Patterns for Invoice & Billing
- Strategy Pattern
- Composite Pattern
- State Pattern
Key Abstractions in Invoice & Billing
- Invoice: Immutable once finalized. Customer, line items, totals, tax, status.
- LineItem: Description, quantity, unit price, discount, tax rate
- TaxRule: Strategy — US sales tax, VAT, compound tax, zero-tax
- DiscountRule: Strategy — percent off, fixed off, BOGO, tiered
- Payment: One settlement against an invoice. An invoice may have many (partial payments).
- InvoiceStatus: State machine: DRAFT → ISSUED → PARTIALLY_PAID → PAID, or ISSUED → VOIDED
Key Points for Invoice & Billing
- Money in integer cents. Floats turn $0.10 + $0.20 into $0.30000000000000004 — audit nightmare.
- Invoice is immutable once issued. Corrections produce a credit memo; the original is preserved.
- Tax and discount are strategies. Every jurisdiction has its own rules; every promotion is its own logic.
- Totals computed once at finalize time. Avoids drift between stored total and recomputed sum.
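Integer-cents math and finalize-time totals in a minimal sketch; the basis-points tax field and floor rounding are assumptions for illustration, not the one true rounding policy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LineItem:
    description: str
    quantity: int
    unit_price_cents: int    # money as integer cents, never float
    tax_rate_bp: int = 0     # tax in basis points: 825 means 8.25%

def finalize_total_cents(items):
    """Computed once at finalize time; integer math has no float drift."""
    total = 0
    for it in items:
        subtotal = it.quantity * it.unit_price_cents
        tax = subtotal * it.tax_rate_bp // 10_000   # floor; real systems pin down a rounding policy
        total += subtotal + tax
    return total
```

Compare `0.1 + 0.2 != 0.3` in floats with `10 + 20 == 30` in cents: the float version is exactly the audit nightmare from the bullet above.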
Common Mistakes with Invoice & Billing
- Storing money as float/double. Catastrophic rounding errors at scale.
- Mutating issued invoices. Disconnects audit trail from customer-visible state.
- Calculating the total at every render. A small perf issue on its own; the bigger question is 'did the tax rate change and silently invalidate historical reports?'
- Tax inside LineItem hardcoded. EU B2B, reverse-charge, and zero-rated exports all need pluggable rules.
Related Designs
Payment System, Stock Brokerage, Digital Wallet
Job Scheduler — Platform Scale
Difficulty: Advanced
Cron-like scheduler with retries, priorities, and cancellation. A min-heap keyed by next-run time plus a worker pool handles millions of jobs per node.
Design Patterns for Job Scheduler
- Strategy Pattern
- Command Pattern
- Observer Pattern
Key Abstractions in Job Scheduler
- Job: Value object: id, command, schedule, retry policy, status
- Schedule: Strategy returning the next fire time — cron, fixed-rate, fixed-delay, one-shot
- JobQueue: Min-heap keyed by next-run time. Worker threads block on the head via a condition variable.
- Scheduler: Facade. Register, cancel, and query jobs. Owns the queue, worker threads, and a bounded task executor for per-job timeouts.
Key Points for Job Scheduler
- Min-heap keyed by next-run timestamp. O(log n) insert, O(1) peek at soonest job.
- Schedule is a Strategy so cron, rate-based, and delay-based jobs share one queue.
- Retry policy on the Job itself (max attempts, backoff). Worker handles the mechanics.
- Cancellation sets a flag; the worker checks before executing. Avoids racing in-flight jobs.
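The heap, the cancellation flag, and fixed-rate re-enqueue can be shown in a single-threaded sketch (worker threads and durable storage omitted; `run_due` stands in for the dispatcher loop):

```python
import heapq

class Scheduler:
    """Min-heap keyed by next-run time: O(log n) insert, O(1) peek at the soonest job."""
    def __init__(self):
        self._heap = []                 # (next_run, seq, job_id, interval)
        self._cancelled = set()
        self._seq = 0                   # tie-breaker for equal timestamps

    def schedule(self, job_id, first_run, interval=None):
        heapq.heappush(self._heap, (first_run, self._seq, job_id, interval))
        self._seq += 1

    def cancel(self, job_id):
        self._cancelled.add(job_id)     # flag only; the worker checks before executing

    def run_due(self, now):
        """Pop and 'run' everything due at `now`; fixed-rate jobs re-enqueue themselves."""
        ran = []
        while self._heap and self._heap[0][0] <= now:
            next_run, _, job_id, interval = heapq.heappop(self._heap)
            if job_id in self._cancelled:
                continue
            ran.append(job_id)
            if interval is not None:
                self.schedule(job_id, next_run + interval, interval)
        return ran
```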
Common Mistakes with Job Scheduler
- Busy-waiting on 'is the next job ready yet'. Use condition variables and sleep-until.
- Timer-per-job. A million jobs = a million threads. Heap + single dispatcher scales.
- Retrying forever. Max-attempts plus exponential backoff is the bare minimum.
- Losing jobs on crash. Production needs a durable store; in-memory is test-only.
Related Designs
Workflow Engine, Rate Limiter, Notification System
LFU Cache — Foundational
Difficulty: Intermediate
Least-frequently-used eviction in O(1). The trick is a doubly-linked list of frequency buckets — each bucket is itself a DLL of keys at that frequency.
Design Patterns for LFU Cache
- Strategy Pattern
- Composite Pattern
Key Abstractions in LFU Cache
- Entry: Key, value, frequency count, optional expiresAt for TTL
- FreqBucket: DLL of entries sharing the same access frequency. MRU at head, LRU at tail for O(1) tie-break.
- LFUCache: Public API. Maintains the key-to-entry index, orchestrates promotions, and tracks hit/miss/eviction stats.
Key Points for LFU Cache
- O(1) for both get and put using nested DLLs — one list of buckets, each bucket is a list of keys.
- Track minFreq explicitly so eviction lands on the right bucket instantly, no scanning.
- On tie within a frequency bucket, evict the least recently used entry — that's why buckets are DLLs not sets.
- Promotion: increment frequency, move entry from bucket f to bucket f+1. Create bucket f+1 if missing.
- TTL is lazy — get() removes an expired entry and reports a miss; no background reaper needed for correctness.
- Stats (hits, misses, evictions, expirations) are the signal for tuning capacity and TTL in production.
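The nested-DLL structure maps neatly onto Python's OrderedDict (one per frequency bucket, LRU order within each). A sketch of the O(1) promote/evict mechanics, without TTL or stats:

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    """O(1) get/put: frequency buckets (OrderedDict each) plus an explicit min_freq."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.key_freq = {}                       # key -> frequency
        self.buckets = defaultdict(OrderedDict)  # freq -> keys in LRU order -> value
        self.min_freq = 0

    def _promote(self, key):
        f = self.key_freq[key]
        value = self.buckets[f].pop(key)
        if not self.buckets[f]:
            del self.buckets[f]
            if self.min_freq == f:               # bucket emptied: bump min_freq
                self.min_freq = f + 1
        self.key_freq[key] = f + 1
        self.buckets[f + 1][key] = value
        return value

    def get(self, key):
        if key not in self.key_freq:
            return None
        return self._promote(key)

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.key_freq:
            self._promote(key)                   # update keeps the frequency; never reset to 1
            self.buckets[self.key_freq[key]][key] = value
            return
        if len(self.key_freq) >= self.capacity:
            evicted, _ = self.buckets[self.min_freq].popitem(last=False)  # LRU tie-break
            del self.key_freq[evicted]
        self.key_freq[key] = 1
        self.buckets[1][key] = value
        self.min_freq = 1
```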
Common Mistakes with LFU Cache
- Using a min-heap keyed by frequency. O(log n) per op, and updating a heap entry is painful.
- Forgetting to update minFreq when a bucket empties. Next eviction hits the wrong bucket.
- Not breaking frequency ties with LRU. Pure LFU is ambiguous when two keys share min frequency.
- Resetting frequency to 1 on put of an existing key. That's not an update — it's a wipe.
Related Designs
LRU Cache, Multi-Level Cache, HashMap
Library Management — Real-World Systems
Difficulty: Intermediate
Separate the catalog from physical copies. A Book is metadata. A BookCopy is the thing on the shelf. This distinction drives every other design decision in the system.
Design Patterns for Library Management
- Observer Pattern
- Strategy Pattern
Key Abstractions in Library Management
- Library: Facade coordinating books, members, loans, and search
- Book: Catalog entity with title, author, ISBN. Has multiple physical copies.
- BookCopy: Individual physical copy with its own availability status
- Member: Borrower with checkout limits and active loan tracking
- Loan: Checkout record linking a member to a book copy with due date and fine calculation
- SearchStrategy: Strategy interface for searching by title, author, or ISBN
Key Points for Library Management
- Book vs BookCopy separation mirrors real libraries. Five people can request the same title because five copies exist independently.
- Strategy pattern on search lets you add new search criteria without modifying the Library class.
- Fine calculation is its own strategy. Flat rate per day, tiered rates, or grace periods are all just different implementations.
- Observer pattern notifies members when a reserved book becomes available. No polling required.
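The Book/BookCopy split plus the Observer-style reservation notification can be sketched together (class and method names are illustrative):

```python
class Book:
    """Catalog entity: metadata only; physical copies are separate objects."""
    def __init__(self, title, isbn):
        self.title, self.isbn = title, isbn
        self.copies = []
        self._waitlist = []                 # members observing availability

    def reserve(self, member):
        self._waitlist.append(member)

    def copy_returned(self, copy):
        copy.available = True
        if self._waitlist:                  # push-notify the next reserver; no polling
            member = self._waitlist.pop(0)
            member.notify(f"'{self.title}' is available")

class BookCopy:
    """The thing on the shelf: its own availability, independent of siblings."""
    def __init__(self, book, copy_id):
        self.book, self.copy_id = book, copy_id
        self.available = True
        book.copies.append(self)

class Member:
    def __init__(self, name):
        self.name = name
        self.inbox = []
    def notify(self, message):
        self.inbox.append(message)
```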
Common Mistakes with Library Management
- Treating Book and BookCopy as one entity. Then you cannot track which specific copy is checked out or damaged.
- Hardcoding checkout limits. Different membership tiers should have different limits.
- Calculating fines at checkout time instead of return time. Fines depend on how late the book actually comes back.
- Not handling the case where a member returns a book that someone else has reserved. The reservation queue needs to trigger automatically.
Related Designs
Hotel Management, Car Rental
LinkedIn — Platform Scale
Difficulty: Advanced
Connections, feed, messaging, and search. The connections graph is the hard part — 1st/2nd/3rd-degree expansion is where most designs fall apart at scale.
Design Patterns for LinkedIn
- Observer Pattern
- Strategy Pattern
- Facade Pattern
Key Abstractions in LinkedIn
- Profile: A member's identity: headline, experience, skills
- ConnectionGraph: Undirected graph of accepted connections. Backs 1st/2nd/3rd-degree queries.
- ConnectionRequest: State machine: pending → accepted / ignored / withdrawn
- Feed: Per-member ranked timeline of posts from connections
- MessageThread: Conversation between two or more members with delivered/read receipts
- FeedRanker: Strategy that scores posts — chronological, engagement-based, or affinity
Key Points for LinkedIn
- Connection is undirected — accepting an invite adds an edge both ways.
- Degree computation is BFS bounded to 3 levels. Past degree-3 the graph explodes.
- Feed is pull + cache. Build on read for inactive users, precompute for power users.
- Messaging is append-only per thread; receipts (delivered, read) update separately from messages.
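The undirected edge and the degree-bounded BFS can be sketched as follows (a level-by-level frontier expansion capped at three hops):

```python
class ConnectionGraph:
    """Undirected: accepting an invite adds the edge both ways."""
    def __init__(self):
        self.adj = {}

    def connect(self, a, b):
        self.adj.setdefault(a, set()).add(b)
        self.adj.setdefault(b, set()).add(a)

    def degree_of(self, src, dst, max_degree=3):
        """BFS bounded to max_degree levels; past degree 3 the frontier explodes."""
        if src == dst:
            return 0
        seen = {src}
        frontier = {src}
        for degree in range(1, max_degree + 1):
            frontier = {n for u in frontier
                        for n in self.adj.get(u, ()) if n not in seen}
            if dst in frontier:
                return degree
            seen |= frontier
        return None                     # out of network
```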
Common Mistakes with LinkedIn
- Storing connections directed (follower/followed). Breaks the 'we are connected' symmetry every feature depends on.
- Computing degree-3 on every profile view. Traversing millions of edges per view blows up server CPU.
- Putting feed ranking inside the feed class. New ranking experiment = rewriting the feed.
- Treating messages as mutable — editing alters the conversation history. Use an edit flag plus history.
Related Designs
Social Media Feed, Notification System, Chat Room
Logging Framework — Foundational
Difficulty: Starter
Chain of Responsibility for handlers, Strategy for formatters, Singleton for the root logger. Three patterns solving three orthogonal concerns in one system.
Design Patterns for Logging Framework
- Chain of Responsibility
- Singleton Pattern
- Strategy Pattern
Key Abstractions in Logging Framework
- Logger: Singleton root logger. Accepts log calls and dispatches to the handler chain.
- LogLevel: Enum (DEBUG, INFO, WARN, ERROR) with ordinal comparison for filtering
- LogHandler: Abstract handler in the chain. Each handler decides whether to process and/or pass along.
- ConsoleHandler: Writes formatted log records to stdout
- FileHandler: Appends formatted log records to a file
- LogFormatter: Strategy interface for formatting. JSON, plain text, or custom formats.
- LogRecord: Immutable value object carrying timestamp, level, message, and logger name
Key Points for Logging Framework
- Chain of Responsibility lets you stack handlers. A log record flows through console, file, and remote handlers without the logger knowing how many exist.
- Each handler has its own minimum level. Console might show DEBUG, while file only captures WARN and above.
- Singleton ensures one logger instance across the entire application. All modules log through the same configured pipeline.
- LogRecord is immutable. Once created, handlers can read it freely without synchronization concerns.
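The handler chain with per-handler levels and an immutable record can be sketched like this (handlers collect into lists here as a stand-in for console and file output):

```python
from dataclasses import dataclass

DEBUG, INFO, WARN, ERROR = 10, 20, 30, 40    # ordinal levels make filtering a comparison

@dataclass(frozen=True)
class LogRecord:
    """Immutable: handlers may read it concurrently without synchronization."""
    level: int
    message: str

class LogHandler:
    """Chain of Responsibility: process if at/above my level, then pass to the next link."""
    def __init__(self, min_level):
        self.min_level = min_level
        self.next_handler = None
        self.lines = []                        # stand-in for real output

    def handle(self, record):
        if record.level >= self.min_level:
            self.lines.append(record.message)
        if self.next_handler:
            self.next_handler.handle(record)

class Logger:
    def __init__(self, head):
        self.head = head
    def log(self, level, message):
        self.head.handle(LogRecord(level, message))

console = LogHandler(DEBUG)                    # console shows everything
file_handler = LogHandler(WARN)                # file captures WARN and above
console.next_handler = file_handler
logger = Logger(console)
```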
Common Mistakes with Logging Framework
- Making the logger non-thread-safe. Multiple threads writing to the same handler without synchronization produces garbled output.
- Formatting inside the logger instead of delegating to the handler's formatter. That couples the logger to a specific output format.
- Forgetting to flush or close file handlers. Buffered writes get lost on crash if you never flush.
- Using string concatenation for log messages even when the level is filtered out. Build the message lazily or check the level first.
Related Designs
Notification System, Pub/Sub System
LRU Cache — Foundational
Difficulty: Starter
Doubly linked list plus hashmap. One gives you O(1) lookups, the other gives you O(1) eviction ordering. Together they solve the cache problem.
Design Patterns for LRU Cache
- Strategy Pattern
- Template Method
Key Abstractions in LRU Cache
- Cache: Public API for get and put, enforces capacity limits
- EvictionPolicy: Strategy interface so you can swap LRU for LFU or FIFO
- DoublyLinkedList: Maintains access ordering with O(1) move-to-front
- DLLNode: Wraps key-value pairs with prev/next pointers
Key Points for LRU Cache
- HashMap for O(1) lookup, DLL for O(1) eviction ordering. Together they cover both needs.
- Strategy pattern makes the eviction policy swappable without touching cache internals
- The DLL tail is always the eviction candidate. No scanning required.
- Move-to-front on both get() and put() keeps the ordering correct
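A sketch of the mechanics. Python's dict is insertion-ordered, so it can play both the hashmap and the DLL roles here; in a language without that, you wire the DLL by hand exactly as the abstractions above describe:

```python
class LRUCache:
    """O(1) get/put: front of the ordered dict is the LRU eviction candidate."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = {}                     # key -> value, oldest first

    def get(self, key):
        if key not in self._data:
            return None
        self._data[key] = self._data.pop(key)   # move-to-back on get(), not just put()
        return self._data[key]

    def put(self, key, value):
        if self.capacity <= 0:              # capacity=0 edge case: reject gracefully
            return
        if key in self._data:
            self._data.pop(key)             # re-insert moves it to the back
        elif len(self._data) >= self.capacity:
            oldest = next(iter(self._data)) # front of the ordering is the victim
            del self._data[oldest]          # removed from the map too: no leak
        self._data[key] = value
```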
Common Mistakes with LRU Cache
- Forgetting to update node position on get(), not just put(). Stale ordering breaks LRU correctness.
- Not removing the evicted key from the hashmap after DLL removal. That's a silent memory leak.
- Skipping thread safety. Production caches need locks or concurrent structures around both the map and list.
- Ignoring capacity=0 edge case. Should reject all puts gracefully.
Related Designs
Rate Limiter, HashMap, Notification System
Meeting Scheduler — Real-World Systems
Difficulty: Intermediate
Rooms, recurring series, RSVP, single-instance vs series cancel. The trick is modeling Meeting as a series and MeetingInstance as the occurrence — each attendee has their own calendar, and a decline removes the slot from that attendee's calendar only.
Design Patterns for Meeting Scheduler
- Strategy Pattern
- State Pattern
- Observer Pattern
- Facade Pattern
Key Abstractions in Meeting Scheduler
- Meeting: The series. Owns organizer, attendees, room, and a map of MeetingInstances keyed by instance id.
- MeetingInstance: One concrete occurrence. Start, end, per-attendee RSVP status, cancelled flag.
- Recurrence: Strategy — OneShot, Daily, Weekly. Expands a start + seriesEnd into a list of occurrence starts.
- Room: Physical room with capacity and a TreeMap of booked MeetingInstances for O(log n) conflict check.
- UserCalendar: Per-user TreeMap of busy MeetingInstances. A declined instance is removed from this calendar only.
- RoomSelectionStrategy: Picks a room that's free for every instance in the series — smallest-that-fits, nearest-floor
- Scheduler: Facade. Book, respond, cancel-instance, cancel-series, free-busy queries.
- NotificationBus: Publishes booked / response / cancelled events to calendar integrations.
Key Points for Meeting Scheduler
- Meeting is the series; MeetingInstance is the occurrence. Cancelling Tuesday in a weekly series is a one-instance operation.
- Each user has their own TreeMap calendar. Room conflict and user conflict are two separate O(log n) checks.
- RSVP is per-instance. DECLINED removes the instance from that user's calendar only — others still see it and the room stays booked.
- Half-open intervals [start, end). Back-to-back meetings don't conflict.
- Booking atomically checks every instance against the room and every participant's calendar — either all reserve or none do.
- Only the organizer may cancel, and cancellation flags the instance (preserves history) while removing it from every calendar.
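The O(log n) half-open conflict check can be sketched with a sorted interval list and binary search (stdlib `bisect` stands in for the TreeMap):

```python
import bisect

class Calendar:
    """Sorted, non-overlapping [start, end) intervals; conflict check is O(log n)."""
    def __init__(self):
        self.starts = []                # parallel sorted arrays
        self.ends = []

    def conflicts(self, start, end):
        i = bisect.bisect_right(self.starts, start)
        if i > 0 and self.ends[i - 1] > start:      # predecessor runs past our start
            return True
        if i < len(self.starts) and self.starts[i] < end:   # successor starts before our end
            return True
        return False

    def book(self, start, end):
        if self.conflicts(start, end):
            raise ValueError("slot conflicts with an existing booking")
        i = bisect.bisect_right(self.starts, start)
        self.starts.insert(i, start)
        self.ends.insert(i, end)
```

Booking a series atomically means running `conflicts` for every instance against the room's calendar and every attendee's calendar before inserting anything.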
Common Mistakes with Meeting Scheduler
- Treating a meeting as a single row. A weekly series with one cancelled Tuesday becomes impossible to represent.
- Keeping a declined attendee's slot on their calendar. Future availability queries return a wrong answer.
- Deleting cancelled meetings. Audit trail is gone and 'who cancelled what' questions can't be answered.
- Closed intervals [a, b]. A 10:00-11:00 meeting conflicts with an 11:00-12:00 meeting for no reason.
- Linear scan of all bookings. Fine at ten meetings, broken at ten thousand.
- Checking the room but not individual attendees. The room is free; Bob is double-booked; nobody notices until the meeting starts.
- Letting anyone cancel. Authorization belongs on the cancel path — only the organizer.
Related Designs
Hotel Management, Concert Ticket Booking, Workflow Engine
Middleware Pipeline — Platform Scale
Difficulty: Advanced
HTTP middleware chain where each link processes the request, optionally short-circuits, or delegates to the next. Builder assembles the pipeline fluently. Template Method defines the middleware skeleton.
Design Patterns for Middleware Pipeline
- Chain of Responsibility
- Decorator Pattern
- Builder Pattern
- Template Method
- Adapter Pattern
Key Abstractions in Middleware Pipeline
- HttpRequest / HttpResponse: Immutable data classes carrying method, path, headers, and body through the pipeline
- Middleware: Abstract base class using Template Method — defines handle() skeleton with pre_process and post_process hooks
- AuthMiddleware / LoggingMiddleware / CorsMiddleware / CompressionMiddleware: Concrete middleware implementations that override pre_process and post_process hooks
- ResponseDecorator: Wraps an HttpResponse to add headers or compress the body without modifying the original
- PipelineBuilder: Fluent builder that assembles middleware in explicit order via .use(auth).use(cors).use(compress).build()
- MiddlewareAdapter: Wraps a plain function or third-party handler to conform to the Middleware interface
Key Points for Middleware Pipeline
- Chain of Responsibility: each middleware decides to process and forward, or short-circuit (e.g., auth rejects with 401 and stops the chain).
- Builder: pipeline construction order matters. Auth before logging before compression. Builder makes this explicit and fluent.
- Template Method: Middleware base class defines handle() skeleton: pre_process, then next.handle, then post_process. Subclasses only override hooks.
- Decorator on responses: compression and header injection are response wrappers that compose without altering the original response object.
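A minimal sketch of the Template Method skeleton, the short-circuit, and the fluent builder. The dict-based requests/responses, the `"token" == "secret"` check, and the `App` terminal handler are all invented for illustration:

```python
class Middleware:
    """Template Method: handle() is fixed; subclasses override only the hooks."""
    def __init__(self):
        self.next = None

    def handle(self, request):
        short_circuit = self.pre_process(request)
        if short_circuit is not None:       # e.g. auth rejects with 401
            return short_circuit
        response = self.next.handle(request) if self.next else {"status": 404}
        return self.post_process(request, response)

    def pre_process(self, request):
        return None
    def post_process(self, request, response):
        return response

class AuthMiddleware(Middleware):
    def pre_process(self, request):
        if request.get("token") != "secret":
            return {"status": 401}          # stop the chain; nothing below runs

class HeaderMiddleware(Middleware):
    def post_process(self, request, response):
        response["headers"] = {"X-Pipeline": "v1"}
        return response

class App(Middleware):
    """Terminal handler at the end of the chain."""
    def pre_process(self, request):
        return {"status": 200}

class PipelineBuilder:
    """Fluent builder: construction order is explicit; auth sits before headers."""
    def __init__(self):
        self.chain = []
    def use(self, mw):
        self.chain.append(mw)
        return self
    def build(self, app):
        for a, b in zip(self.chain, self.chain[1:]):
            a.next = b
        self.chain[-1].next = app
        return self.chain[0]

pipeline = PipelineBuilder().use(AuthMiddleware()).use(HeaderMiddleware()).build(App())
```

Built once at startup, then reused for every request; per-request state lives in the request object, not in the middleware.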
Common Mistakes with Middleware Pipeline
- Wrong middleware order: compression before auth means you compress error responses too, wasting CPU on 401s.
- Not calling next in the chain: silently drops the request and returns None or an empty response.
- Mutating the shared request object: race conditions with concurrent requests when headers or body are modified in place.
- Building the pipeline at runtime on every request instead of once at startup: unnecessary allocation and ordering bugs.
Related Designs
API Gateway, Rate Limiter, Feature Flag System
Minesweeper — Games & Simulations
Difficulty: Intermediate
Grid, mines, flood fill. The elegance is that every cell knows its neighbor-mine count at generation time — gameplay becomes pure state transitions.
Design Patterns for Minesweeper
- State Pattern
- Strategy Pattern
- Observer Pattern
Key Abstractions in Minesweeper
- Cell: One grid square. Holds mine-flag, adjacent-mine count, and reveal state.
- Board: Grid of cells, mine layout, and reveal logic (flood fill for zero-adjacency)
- MinePlacer: Strategy that places mines — random, fixed-seed, or first-click-safe
- Game: Owns the board, status, and timer. Routes reveal/flag/chord actions. Declares win/lose.
Key Points for Minesweeper
- Pre-compute adjacent-mine count once at board generation. Reveal becomes O(1) per cell.
- Flood fill on reveal when adjacent count is 0 — BFS from the clicked cell through zero-cells.
- First-click-safe: place mines *after* the first reveal and away from the clicked cell.
- Win condition: all non-mine cells revealed, regardless of flags. Flags are just a UI hint.
- Chord click: on a revealed number cell with matching flagged neighbors, auto-reveal the rest. Wrong flags → loss.
- Timer starts on first reveal and stops on terminal status — clock doesn't tick before the player commits.
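The flood-fill reveal above can be sketched as a BFS over precomputed counts — a minimal version, assuming the board is a grid of adjacent-mine counts and mines is a set of coordinates:

```python
from collections import deque

def reveal(board, mines, r, c):
    """BFS flood fill: reveal (r, c); if its adjacent-mine count is 0,
    keep expanding through zero-count neighbors. The visited set
    (`revealed`) is what prevents the infinite loop."""
    rows, cols = len(board), len(board[0])
    revealed, queue = set(), deque([(r, c)])
    while queue:
        r, c = queue.popleft()
        if (r, c) in revealed or (r, c) in mines:
            continue
        revealed.add((r, c))
        if board[r][c] == 0:                     # expand only through zero-cells
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        queue.append((nr, nc))
    return revealed
```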
Common Mistakes with Minesweeper
- Checking 'all flags on mines' for the win condition. That forces the user to flag perfectly — not how the game works.
- Placing mines at board creation. If the first click lands on a mine, the game ends before it starts — bad UX.
- Revealing recursively without a visited set. Infinite loop on the first zero-cell click.
- Storing only mine positions and recomputing counts per reveal. Slow and error-prone.
Related Designs
Sudoku Solver, Snake and Ladder, Tic Tac Toe
Movie Ticket Booking — Booking & Reservations
Difficulty: Advanced
Concurrent seat booking with TTL-based locks and dynamic pricing. Facade pattern wraps cinemas, shows, and seat selection into a clean BookMyShow-style API.
Design Patterns for Movie Ticket Booking
- Strategy Pattern
- Observer Pattern
- State Pattern
Key Abstractions in Movie Ticket Booking
- BookMyShow: Facade. Single entry point for searching shows, locking seats, and confirming bookings.
- Cinema: Theater with a name, location, and multiple screens. Each screen runs independent shows.
- Show: A movie playing on a specific screen at a specific time. Owns the seat map for that screening.
- Seat: Individual seat with type (Regular, Premium, VIP), row, number, and state machine.
- Booking: Confirmed reservation with status transitions: Pending, Confirmed, Cancelled.
- PricingStrategy: Dynamic pricing by show time, seat type, and day of week. Matinee vs evening, weekday vs weekend.
Key Points for Movie Ticket Booking
- Seat locking with TTL is non-negotiable for concurrent bookings. When a user selects seats and goes to checkout, those seats must be held. Without locking, two users pay for the same seat.
- Dynamic pricing via strategy pattern. A Friday night show charges more than a Tuesday matinee. VIP seats cost more than regular. These rules compose without touching the booking flow.
- Show scheduling per screen prevents overlaps. A screen cannot run two movies at the same time. The system validates time slots before creating a show.
- Observer notifies users when a sold-out show gets cancellations. Waitlisted users get a push notification with available seat count.
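A sketch of TTL-based seat locking, assuming a single in-memory process (a real system would back this with Redis or a database) — SeatLockManager and its method names are hypothetical:

```python
import time

class SeatLockManager:
    """Holds per-seat locks with a TTL. An expired lock counts as free,
    so abandoned checkouts self-heal without manual intervention."""
    def __init__(self, ttl_seconds=600, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock           # injectable for testing
        self.locks = {}              # seat_id -> (user_id, locked_at)

    def try_lock(self, seat_id, user_id):
        holder = self.locks.get(seat_id)
        if holder and self.clock() - holder[1] < self.ttl:
            return holder[0] == user_id      # live lock: only holder keeps it
        self.locks[seat_id] = (user_id, self.clock())
        return True

    def confirm(self, seat_id, user_id):
        """Booking may only be confirmed while the lock is still live."""
        holder = self.locks.get(seat_id)
        return bool(holder) and holder[0] == user_id \
            and self.clock() - holder[1] < self.ttl
```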
Common Mistakes with Movie Ticket Booking
- Making Cinema a God class that manages seats directly. Cinema owns screens, screens own shows, shows own seats. Three levels of delegation, not one.
- Flat pricing for all shows. A 10 AM Tuesday matinee and a 9 PM Friday premiere have very different demand curves. Pricing must reflect this.
- No TTL on seat locks. A user locks 4 seats for a blockbuster premiere, then puts their phone down. Those seats are gone for every other user until you manually intervene.
- Skipping show-time validation. Two shows on the same screen at overlapping times means double-selling the same physical seats.
Related Designs
Concert Ticket Booking, Restaurant Management
Multi-Level Cache — Foundational
Difficulty: Intermediate
L1/L2/L3 cache layers composed as decorators, each checking its own storage before delegating down. Eviction strategies (LRU, LFU) are swappable per level, so you can tune each tier independently without modifying the chain.
Design Patterns for Multi-Level Cache
- Decorator Pattern
- Proxy Pattern
- Mediator Pattern
- Strategy Pattern
- Observer Pattern
Key Abstractions in Multi-Level Cache
- CacheLayer: Interface defining get/put operations. Every cache level and decorator implements this contract.
- CacheDecorator: Wraps an inner CacheLayer, checks its own local storage first, and delegates to the wrapped layer on a miss.
- EvictionStrategy: Interface for swappable eviction policies. Implementations include LRU (recency-based) and LFU (frequency-based).
- CacheMediator: Coordinates invalidation and write-propagation across all cache levels, preventing direct coupling between layers.
- CacheProxy: Entry point for clients. Intercepts reads and writes, routing them through the cache hierarchy transparently.
- CacheListener: Observer that receives cache events (hit, miss, eviction) for monitoring, logging, and metrics collection.
Key Points for Multi-Level Cache
- Decorator pattern lets you stack L1 -> L2 -> L3 without any layer knowing about the others. Adding a new level means wrapping one more decorator. Zero changes to existing layers.
- The mediator centralizes cache coherence. When L1 invalidates a key, the mediator propagates that to L2 and L3 instead of each layer talking to every other layer directly.
- Strategy pattern for eviction means each level can use the policy that fits its profile: L1 (small, fast) uses LRU for recency, L2 (larger, slower) uses LFU for frequency.
- Write-through propagates writes to all levels immediately for consistency. Write-back defers lower-level writes for throughput. The choice is a consistency-vs-latency tradeoff.
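The decorator stacking can be sketched like this — a minimal two-level chain with LRU eviction, where BackingStore stands in for the slow source of truth (names are illustrative):

```python
from collections import OrderedDict

class BackingStore:
    """Stand-in for the slow backing layer; counts how often it's hit."""
    def __init__(self, data):
        self.data, self.hits = dict(data), 0
    def get(self, key):
        self.hits += 1
        return self.data.get(key)

class CacheDecorator:
    """Checks its own LRU store first; on a miss, delegates to the wrapped
    layer and promotes the result. Stacking two gives L1 -> L2 -> store."""
    def __init__(self, inner, capacity=2):
        self.inner, self.capacity = inner, capacity
        self.store = OrderedDict()
    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)          # LRU touch
            return self.store[key]
        value = self.inner.get(key)
        if value is not None:
            self.put(key, value)                 # promote into this level
        return value
    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)       # evict least-recently-used

db = BackingStore({"k": "v"})
cache = CacheDecorator(CacheDecorator(db))       # L1 wrapping L2
```

Adding an L3 is one more wrap; no existing layer changes.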
Common Mistakes with Multi-Level Cache
- Coupling cache levels directly so L1 calls L2 which calls L3. Adding or removing a level breaks everything. Decorator composition eliminates this rigidity.
- Ignoring cache stampede. When a popular key expires simultaneously across levels, hundreds of threads hit the backing store at once. Use locking or request coalescing on miss.
- Setting TTL only on L1 and forgetting lower levels. Stale data persists in L2/L3 and gets promoted back to L1 on the next miss, serving outdated values indefinitely.
- Hardcoding the eviction policy into the cache layer. You can't tune per-level without forking the class. Strategy pattern keeps policies swappable.
Related Designs
LRU Cache, Rate Limiter, Database Connection Pool
Music Streaming (Spotify) — Platform Scale
Difficulty: Advanced
Iterator for playlist traversal with pluggable playback strategies (shuffle, repeat, sequential). Observer pushes now-playing updates to all connected UI components without the player knowing who is listening.
Design Patterns for Music Streaming (Spotify)
- Observer Pattern
- Strategy Pattern
- Iterator Pattern
Key Abstractions in Music Streaming (Spotify)
- MusicService: Facade coordinating user actions, playlist management, and playback
- Song: Immutable metadata (title, artist, album, duration) plus a reference to the audio source
- Playlist: Ordered collection of songs with an owner, add/remove, and iterator creation
- User: Listener with a subscription tier, owned playlists, and listening history
- Player: Playback controller managing current song, play/pause/skip state, and observer notifications
- PlaybackStrategy: Defines how the next song is picked: sequential, shuffle, or repeat
- RecommendationEngine: Suggests songs based on listening history and what the user has not heard yet
Key Points for Music Streaming (Spotify)
- Iterator pattern decouples playlist traversal from the playlist data structure. The player calls next() and does not care about order logic.
- Strategy pattern makes playback modes (shuffle, repeat, sequential) swappable at runtime without touching the Player class
- Observer pattern pushes now-playing updates to UI, social feeds, and scrobbling services simultaneously
- Playlist is a first-class entity with its own identity and encapsulation, not just a bare list of song IDs
- Song is immutable. It holds a reference to the audio, not the bytes. Safe to share across playlists without copying.
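A sketch of the pluggable playback strategies — the Player loop stays fixed while the strategy object is swapped at runtime (class names are illustrative):

```python
import random

class SequentialStrategy:
    def next_index(self, current, n):
        return (current + 1) % n

class RepeatOneStrategy:
    def next_index(self, current, n):
        return current

class ShuffleStrategy:
    def __init__(self, rng=None):
        self.rng = rng or random.Random()
    def next_index(self, current, n):
        return self.rng.randrange(n)

class Player:
    """The play loop never changes; assigning a new strategy object
    switches the playback mode without touching Player code."""
    def __init__(self, playlist, strategy):
        self.playlist, self.strategy, self.current = playlist, strategy, 0
    def skip(self):
        self.current = self.strategy.next_index(self.current, len(self.playlist))
        return self.playlist[self.current]

player = Player(["intro", "verse", "outro"], SequentialStrategy())
```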
Common Mistakes with Music Streaming (Spotify)
- Baking shuffle logic directly into the Player. Adding repeat-one later means rewriting the play loop.
- Storing full audio data inside the Song object. Songs should reference audio, not contain it.
- Skipping an iterator and using raw index math for playlist traversal. Shuffle and repeat edge cases break immediately.
- Making Playlist a plain list with no encapsulation. You lose the ability to enforce ordering rules or notify on changes.
Related Designs
Notification System, Pub/Sub System
MVC Framework — Platform Scale
Difficulty: Advanced
A web framework from scratch. Front controller dispatches requests to route-matched handlers, middleware processes the pipeline, and adapters normalize different handler signatures. How Flask and Spring MVC work under the hood.
Design Patterns for MVC Framework
- Front Controller Pattern
- Strategy Pattern
- Adapter Pattern
- Template Method
- Chain of Responsibility
Key Abstractions in MVC Framework
- App: Single entry point for all incoming requests. Receives the raw request, runs middleware, delegates to the router, and returns the final response.
- Router: Maps URL patterns to handler functions. Supports path parameters like /users/:id by converting route patterns into regex and extracting named groups on match.
- HandlerAdapter: Normalizes different handler signatures to a common interface. A handler taking (request) and one taking (request, path_params) both work without the caller knowing the difference.
- Middleware: Chain of responsibility for cross-cutting concerns like auth, logging, and CORS. Each middleware runs before and after the handler, with the option to short-circuit.
- Request / Response: Data classes carrying method, path, headers, query parameters, and body through the framework. Response holds status code, body, and headers.
- ViewResolver: Template Method for rendering output. Subclasses decide the format: JSON serialization, HTML templating, or plain text. The handler returns data, the resolver turns it into a response body.
Key Points for MVC Framework
- Front Controller is the single entry point for every request. One object coordinates routing, middleware, and response rendering instead of scattering that logic across many servlets or handler files.
- Router uses pattern matching with path parameter extraction. The pattern /users/:id becomes a regex that matches /users/42 and pulls out id=42 as a named parameter.
- HandlerAdapter lets you register any function shape without forcing everything to implement one rigid interface. A simple health check taking zero args and a full handler taking request plus params both work through the same dispatch path.
- Middleware runs before and after the handler in a chain. Auth middleware can reject a request before it ever reaches the router. Logging middleware wraps the entire lifecycle to capture timing.
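The pattern-to-regex conversion above can be sketched in a few lines — a minimal Router, with the handler shape (request, path_params) assumed for illustration:

```python
import re

class Router:
    """Converts /users/:id into a regex with a named group; match()
    returns the handler plus the extracted path parameters."""
    def __init__(self):
        self.routes = []

    def add(self, pattern, handler):
        # :id -> (?P<id>[^/]+), anchored so /users/42/posts doesn't match
        regex = re.sub(r":(\w+)", r"(?P<\1>[^/]+)", pattern)
        self.routes.append((re.compile(f"^{regex}$"), handler))

    def match(self, path):
        for regex, handler in self.routes:
            m = regex.match(path)
            if m:
                return handler, m.groupdict()
        return None, {}

router = Router()
router.add("/users/:id", lambda req, params: f"user {params['id']}")
```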
Common Mistakes with MVC Framework
- One endpoint class per URL instead of a central router. This works for five routes. At fifty, you have fifty files with duplicated setup logic and no way to see the full URL space at a glance.
- Hardcoding the response format so every handler builds its own JSON string. When you need XML or HTML, you rewrite every handler instead of swapping in a different ViewResolver.
- Tangling routing logic with business logic. The function that matches /users/:id should not also query the database. Separation means you can test routing and business logic independently.
- Applying middleware globally when it should be per-route. Auth on the health check endpoint means your load balancer needs credentials. Route-scoped middleware fixes this.
Related Designs
Middleware Pipeline, API Gateway, Plugin Architecture
Notification System — Platform Scale
Difficulty: Intermediate
Multi-channel notification delivery with user preferences and templated messages. Strategy pattern selects the channel, Template Method standardizes formatting, and Observer triggers notifications from domain events.
Design Patterns for Notification System
- Observer Pattern
- Strategy Pattern
- Template Method
Key Abstractions in Notification System
- NotificationService: Orchestrator that resolves user preferences, selects channels, renders templates, and dispatches delivery
- NotificationChannel: Strategy interface for delivery. Email, SMS, and Push each implement send() differently.
- Template: Message template with named placeholders. Renders final content by substituting context values.
- UserPreference: Per-user channel preferences. Controls which channels a user receives notifications on.
- NotificationQueue: Async delivery buffer with retry logic. Decouples notification creation from actual sending.
Key Points for Notification System
- Strategy pattern makes channels pluggable. Adding a new channel means one new class, zero changes to the orchestrator.
- Template Method standardizes message formatting. Every channel follows the same render-then-send flow, but each customizes the rendering step.
- User preferences are first-class. Never blast every channel. Respect what the user opted into.
- Async queue with retry handles transient failures. SMTP timeouts and push gateway errors are inevitable.
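A sketch combining the channel Strategy with the render-then-send Template Method — channel classes and the dispatch helper are illustrative names:

```python
class NotificationChannel:
    """Template Method: notify() fixes the render-then-send flow;
    channels override only the step they need to customize."""
    def notify(self, user, template, context):
        body = self.render(template, context)
        return self.send(user, body)
    def render(self, template, context):
        return template.format(**context)
    def send(self, user, body):
        raise NotImplementedError

class EmailChannel(NotificationChannel):
    def send(self, user, body):
        return f"email to {user}: {body}"

class SmsChannel(NotificationChannel):
    def render(self, template, context):   # SMS customizes rendering: 160 chars
        return super().render(template, context)[:160]
    def send(self, user, body):
        return f"sms to {user}: {body}"

CHANNELS = {"email": EmailChannel(), "sms": SmsChannel()}

def dispatch(user, prefs, template, context):
    """Only channels the user opted into fire — never blast everything."""
    return [CHANNELS[c].notify(user, template, context)
            for c in prefs if c in CHANNELS]
```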
Common Mistakes with Notification System
- Hardcoding channel logic in the service. Every new channel turns into another if-else branch that nobody wants to touch.
- Skipping user preferences. Sending push notifications to users who disabled them is a fast path to uninstalls.
- Synchronous delivery in the request path. An SMTP timeout should not cause a 500 on your checkout endpoint.
- No retry mechanism. Transient failures in SMS gateways or push services are normal. Dropping notifications silently is not.
Related Designs
Pub/Sub System, Task Management
Online Auction — Platform Scale
Difficulty: Intermediate
Auction platform with time-based bidding, automatic expiry, and winner determination. State machine controls the auction lifecycle while Observer keeps bidders notified in real time.
Design Patterns for Online Auction
- State Pattern
- Observer Pattern
- Strategy Pattern
Key Abstractions in Online Auction
- AuctionService: Facade, single entry point for creating auctions, placing bids, and closing
- Auction: Core entity holding item, bids, time window, and current lifecycle state
- AuctionStatus: Enum for lifecycle states: SCHEDULED, ACTIVE, CLOSED, CANCELLED
- Bid: Immutable record of bidder, amount, and timestamp
- Item: What is being auctioned: name, description, starting price
- User: Participant who can be a seller or bidder
- BidValidator: Encapsulates per-auction-type bid rules (English, Dutch, sealed-bid)
Key Points for Online Auction
- State pattern for auction lifecycle prevents bids on closed or cancelled auctions at the type level
- Observer pattern notifies outbid users and announces auction results without coupling to notification delivery
- Strategy pattern supports multiple auction types: English ascending, Dutch descending, sealed-bid
- Bid validation is a separate concern from bid storage because business rules change independently
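A minimal sketch of the lifecycle-plus-validation rules for an English ascending auction (the simplest of the three types; method names are illustrative):

```python
from enum import Enum

class AuctionStatus(Enum):
    SCHEDULED = "scheduled"
    ACTIVE = "active"
    CLOSED = "closed"
    CANCELLED = "cancelled"

class Auction:
    """Bids are only legal while ACTIVE, and must beat the current highest."""
    def __init__(self, starting_price):
        self.status = AuctionStatus.SCHEDULED
        self.highest = starting_price
        self.bids = []

    def open(self):
        if self.status is not AuctionStatus.SCHEDULED:
            raise ValueError("only a scheduled auction can open")
        self.status = AuctionStatus.ACTIVE

    def close(self):
        self.status = AuctionStatus.CLOSED
        return self.bids[-1] if self.bids else None   # winning bid, if any

    def place_bid(self, bidder, amount):
        if self.status is not AuctionStatus.ACTIVE:
            raise ValueError("auction is not accepting bids")
        if amount <= self.highest:
            raise ValueError("bid must exceed the current highest")
        self.highest = amount
        self.bids.append((bidder, amount))
```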
Common Mistakes with Online Auction
- Letting anyone bid on a CLOSED or CANCELLED auction. State pattern makes invalid transitions impossible.
- Not validating that a new bid exceeds the current highest. You end up with garbage data and angry sellers.
- Coupling notification logic into the Auction class. When you add SMS or push, you are rewriting auction code.
- Using strings for auction status instead of enums. One typo and your state transitions silently break.
Related Designs
Stock Brokerage, Concert Ticket Booking
Online Coding Judge — Platform Scale
Difficulty: Advanced
LeetCode-style judge. Problems, submissions, and a sandboxed runner that executes untrusted code against test cases without the judge catching fire.
Design Patterns for Online Coding Judge
- Strategy Pattern
- State Pattern
- Command Pattern
Key Abstractions in Online Coding Judge
- Problem: Statement, input/output schema, limits (time, memory), and test cases
- TestCase: Input plus expected output, with optional hidden flag for final grading
- Submission: Code + language + user + problem. Has state (queued, running, verdict).
- LanguageRunner: Strategy per language — compile(code) plus execute(code, stdin, limits) in a sandbox
- CompileResult: Value object — success flag plus compile errors. Default success for interpreted languages.
- Verdict: Accepted, WrongAnswer, TLE, MLE, RuntimeError, CompileError — one per submission
- Judge: Facade. Compiles once, then grades test cases fail-fast; produces the verdict.
Key Points for Online Coding Judge
- Sandbox isolation is non-negotiable — untrusted code can't touch filesystem, network, or other submissions.
- Language as a Strategy lets Python, Java, Go run the same submission contract with different toolchains.
- Test cases run serially per submission (fail-fast on first WA/TLE) but submissions run in parallel across workers.
- Verdicts are ordered by severity — compile error dominates WA dominates TLE dominates anything else.
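The fail-fast grading loop, together with the output normalization that avoids spurious WAs, can be sketched like this (run is a stand-in for the sandboxed execute step; only the Accepted/WrongAnswer path is shown):

```python
def normalize(out):
    """Strip trailing whitespace per line and trailing blank lines so
    CRLF vs LF and trailing-space differences don't flip the verdict."""
    lines = [line.rstrip() for line in out.replace("\r\n", "\n").split("\n")]
    while lines and lines[-1] == "":
        lines.pop()
    return "\n".join(lines)

def grade(run, test_cases):
    """Fail-fast: stop at the first failing test case — the verdict is
    already decided, so running the rest just wastes CPU."""
    for stdin, expected in test_cases:
        actual = run(stdin)
        if normalize(actual) != normalize(expected):
            return "WrongAnswer"
    return "Accepted"
```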
Common Mistakes with Online Coding Judge
- Running untrusted code in the same process as the judge. One infinite loop or fork bomb kills the platform.
- Sharing stdin/stdout with the runner. Need pipes with strict byte limits to prevent output flooding.
- Running all test cases even after the first failure. Wastes CPU; verdict is decided.
- Comparing output with ==. Whitespace and line-ending differences cause spurious WA. Normalize first.
Related Designs
Job Scheduler, Workflow Engine, Rate Limiter
Online Shopping — Platform Scale
Difficulty: Advanced
Product catalog, shopping cart, and order pipeline with inventory management. Cart is mutable and references products by ID. Orders are immutable snapshots with reserved inventory.
Design Patterns for Online Shopping
- Strategy Pattern
- State Pattern
- Observer Pattern
Key Abstractions in Online Shopping
- ShoppingService: Facade, single entry point for browsing, cart operations, and order placement
- Product: Catalog item with name, price, description, and inventory count
- Cart: User's mutable collection of items and quantities before checkout
- CartItem: Links a product reference to a desired quantity in the cart
- Order: Immutable snapshot created from cart at checkout, with shipping and payment details
- OrderStatus: Enum for order lifecycle: PENDING, CONFIRMED, SHIPPED, DELIVERED, CANCELLED
- InventoryManager: Tracks stock levels and handles reservation on order placement
- DiscountStrategy: Pricing modifier: percentage off, flat discount, coupon-based
Key Points for Online Shopping
- Cart items reference products by ID, not by object, so price changes reflect at checkout time
- Inventory reservation happens at order placement to prevent overselling across concurrent checkouts
- Order is an immutable snapshot of the cart. Cart keeps changing after checkout, but the order does not.
- Observer pattern sends low-stock alerts and order status notifications without coupling to inventory logic
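A sketch of reservation at order placement — a single-process version guarded by a lock (a multi-node deployment would need a database transaction or equivalent):

```python
import threading

class InventoryManager:
    """Check-and-decrement under one lock, so two concurrent checkouts
    can't both take the last unit."""
    def __init__(self, stock):
        self.stock = dict(stock)     # product_id -> available count
        self.lock = threading.Lock()

    def reserve(self, product_id, qty):
        with self.lock:
            if self.stock.get(product_id, 0) < qty:
                return False         # oversell prevented
            self.stock[product_id] -= qty
            return True

    def release(self, product_id, qty):
        """Return reserved stock, e.g. on order cancellation."""
        with self.lock:
            self.stock[product_id] = self.stock.get(product_id, 0) + qty
```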
Common Mistakes with Online Shopping
- Storing product objects directly in the cart. If the price updates, the cart shows stale data.
- Not reserving inventory at checkout. Two users checking out the last item both succeed and one gets nothing.
- Making Order mutable after creation. Orders should be a frozen record of what the customer agreed to.
- Hardcoding discount logic. Promotions change weekly. Strategy pattern keeps pricing rules swappable.
Related Designs
Movie Ticket Booking, Payment System
ORM Data Mapper — Platform Scale
Difficulty: Advanced
Map objects to database rows with lazy-loading proxies, identity map to avoid duplicates, unit of work for batched writes, and a fluent query builder. A mini SQLAlchemy/Hibernate.
Design Patterns for ORM Data Mapper
- Proxy Pattern
- Repository Pattern
- Builder Pattern
- Identity Map Pattern
- Unit of Work Pattern
Key Abstractions in ORM Data Mapper
- EntityMapper: Maps class fields to table columns, handles row-to-object and object-to-row conversion
- IdentityMap: Cache ensuring one object per primary key, so the same row never becomes two different objects
- UnitOfWork: Tracks new, dirty, and deleted objects, flushes all changes as a single batch
- QueryBuilder: Fluent API: query(User).where("age", ">", 21).order_by("name") produces SQL without string concatenation
- LazyProxy: Proxy that loads related objects on first access, deferring expensive JOINs until actually needed
- Session: Facade combining IdentityMap, UnitOfWork, and QueryBuilder into one entry point
Key Points for ORM Data Mapper
- The Identity Map guarantees one object per primary key per session. Two queries for the same row return the same Python/Java object. Mutate it in one place, and you see the change everywhere. Without this, you get inconsistent copies and lost updates.
- Unit of Work collects all inserts, updates, and deletes and flushes them in one batch. Nothing hits the database until you call flush(). If any write fails, you roll back the entire batch. One transaction boundary, one round-trip, and no partial writes corrupting your data.
- Lazy loading defers expensive JOINs until you actually access the related field. If you load a User and never touch their orders, those orders are never fetched. The Proxy pattern makes this invisible to the caller.
- The Query Builder produces SQL through method chaining instead of string concatenation. No f-strings, no format calls, no injection risk. Each method returns self, so you compose queries fluently.
- Unit of Work acts like a local transaction log. You register_new(), register_dirty(), register_deleted() throughout your business logic. The session accumulates changes in memory. When you flush, it writes all changes inside a single database transaction. If anything fails, the session state stays dirty and you can retry or roll back cleanly.
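The Identity Map and Unit of Work above can be sketched together — here the "database" is just a list that records writes, and loader stands in for a row fetch:

```python
class IdentityMap:
    """One object per primary key per session: repeated loads return the
    exact same object, so an edit in one place is visible everywhere."""
    def __init__(self, loader):
        self.loader = loader          # fetches a fresh object by pk
        self.cache = {}
    def get(self, pk):
        if pk not in self.cache:
            self.cache[pk] = self.loader(pk)
        return self.cache[pk]

class UnitOfWork:
    """Accumulates changes in memory; nothing reaches the store until
    flush(), which writes the whole batch at once."""
    def __init__(self, store):
        self.store = store
        self.new, self.dirty = [], []
    def register_new(self, obj):
        self.new.append(obj)
    def register_dirty(self, obj):
        if obj not in self.dirty:
            self.dirty.append(obj)
    def flush(self):
        for obj in self.new:
            self.store.append(("insert", obj))
        for obj in self.dirty:
            self.store.append(("update", obj))
        self.new.clear()
        self.dirty.clear()
```

A real flush would also wrap the batch in one database transaction so a failure rolls back every write.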
Common Mistakes with ORM Data Mapper
- N+1 query problem: lazy loading in a loop loads N related objects one by one. If you iterate 100 users and access each user's orders, you get 1 query for users plus 100 queries for orders. Use eager loading or batch fetching when you know you will need the relation.
- Skipping the identity map: two queries for the same primary key return different objects. You mutate one and the other still has stale data. Updates fight each other and the last write wins silently.
- Flushing after every single change instead of batching: turns a 10-statement transaction into 10 separate round-trips. Latency adds up fast, especially over a network.
- Exposing raw SQL strings built with concatenation: opens the door to SQL injection. Always use parameterized queries or a builder that escapes values.
Related Designs
Inventory Management, Connection Pool, Search Index Engine
Parking Lot — Real-World Systems
Difficulty: Intermediate
Multi-floor, multi-vehicle-type parking with real-time availability. Layered domain model where each class owns exactly one responsibility.
Design Patterns for Parking Lot
- Strategy Pattern
- Factory Method
- Observer Pattern
Key Abstractions in Parking Lot
- ParkingLot: Facade, single entry point for park/unpark operations
- ParkingFloor: Manages spots on one level, knows availability by type
- ParkingSpot: Individual slot that knows its type, size, and occupant
- Vehicle: Abstract base. Car, Truck, Motorcycle with different size requirements.
- ParkingStrategy: Spot selection algorithm: nearest, spread-out, or type-optimized
- Ticket: Proof of parking, links vehicle to spot with entry time
Key Points for Parking Lot
- Strategy pattern for spot selection: nearest-to-entrance vs spread-load vs type-optimized
- Factory method for vehicle creation avoids switch statements scattered across the codebase
- Observer notifies display boards when availability changes, decoupled from parking logic
- Enum for SpotType, not strings. Compile-time safety on slot size matching.
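Two of the spot-selection strategies can be sketched as follows — the Spot shape (floor, distance-from-entrance, free flag) is an assumption for illustration:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Spot:
    floor: int
    distance: int        # from the entrance
    free: bool = True

class NearestStrategy:
    """Pick the free spot closest to the entrance."""
    def pick(self, spots):
        free = [s for s in spots if s.free]
        return min(free, key=lambda s: s.distance, default=None)

class SpreadLoadStrategy:
    """Pick a spot on the floor with the most free capacity,
    spreading traffic across levels."""
    def pick(self, spots):
        free = [s for s in spots if s.free]
        if not free:
            return None
        per_floor = Counter(s.floor for s in free)
        return max(free, key=lambda s: per_floor[s.floor])
```

Because both expose the same pick() contract, the lot can swap allocation policy without touching park/unpark code.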
Common Mistakes with Parking Lot
- Treating ParkingLot as a God class. It should delegate to floors, not manage individual spots.
- Using a flat list of spots instead of floor-based grouping. That kills O(1) availability lookups per floor.
- Hardcoding spot assignment logic. Changing the allocation strategy then means rewriting core parking code.
- Forgetting concurrency. Two cars arriving at the same time can be assigned the same spot without locks.
Related Designs
Elevator System, Traffic Signal, Car Rental
Pastebin — Platform Scale
Difficulty: Intermediate
Short ID, optional expiry, optional password. The scaling trick is generating unique IDs without a central counter — base62 of a random source does the job.
Design Patterns for Pastebin
- Strategy Pattern
- Factory Method
- Chain of Responsibility
Key Abstractions in Pastebin
- Paste: Immutable record: id, content, created, expires, visibility, password hash
- IdGenerator: Strategy for unique short IDs — base62 random, Snowflake, or counter
- Storage: Repository abstraction — in-memory, SQL, or blob store behind the same interface
- PasswordHasher: Salt + hash generator used to store paste passwords without exposing the plaintext
- PastebinService: Facade. Create, fetch, and delete pastes.
Key Points for Pastebin
- Random base62 of 7 chars gives 62^7 = 3.5 trillion IDs. Collisions are vanishingly rare.
- Lazy expiry: delete on read, sweep periodically. No per-paste timer threads.
- Password is hashed (bcrypt/argon2), never stored in plaintext. Verify on read.
- Storage interface means the same service works in-memory for tests and against Postgres in prod.
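The random base62 ID generation can be sketched in a few lines — since collisions are rare but not impossible, a real service would still retry on a storage-layer uniqueness conflict:

```python
import secrets

ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def new_id(length=7):
    """7 random base62 chars give 62**7 ≈ 3.5 trillion possibilities.
    No central counter needed, and IDs aren't guessable/enumerable."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```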
Common Mistakes with Pastebin
- Auto-increment integer IDs. They're guessable — anyone can scrape every paste by iterating.
- Storing password in plaintext or with a weak hash. Compromise leaks everyone's secrets.
- Checking expiry only on the sweep thread. A just-expired paste can leak between ticks.
- Forgetting to return the same 'not found' response for missing, expired, and wrong-password pastes. Oracle attack.
Related Designs
URL Shortener, Notification System, Rate Limiter
Payment System — Financial & Payments
Difficulty: Advanced
Strategy pattern for payment methods so adding UPI or crypto is one new class, chain of responsibility for layered fraud checks, a state machine enforcing payment lifecycle transitions, and idempotency keys to prevent double charges.
Design Patterns for Payment System
- Strategy Pattern
- Chain of Responsibility
- Observer Pattern
Key Abstractions in Payment System
- PaymentProcessor: Orchestrator that validates, fraud-checks, and routes payments through the correct method
- PaymentMethod: Strategy interface with subtypes CreditCard, DebitCard, UPI, and NetBanking
- PaymentGateway: Adapter wrapping external payment providers behind a uniform charge/refund interface
- Transaction: Core entity with a state machine governing its lifecycle from INITIATED through COMPLETED or FAILED
- FraudChecker: Chain of responsibility for layered validation: velocity checks, amount limits, blacklist matching
Key Points for Payment System
- Strategy for payment methods: each method validates itself and talks to its own gateway. Adding a new method never touches PaymentProcessor.
- Chain of responsibility for fraud checks. Each checker either approves the payment and passes it along, or rejects it. New rules are new links in the chain.
- State machine on Transaction enforces legal transitions. PROCESSING cannot jump to REFUNDED. FAILED cannot become COMPLETED.
- Idempotency keys prevent double charges. If a key was already processed, return the existing result without re-executing.
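The idempotency-key check can be sketched as a lookup before the charge — here charge_fn stands in for the gateway call, and the processed map would live in durable storage in a real system:

```python
class PaymentProcessor:
    """If an idempotency key was already processed, return the stored
    result instead of charging again — retries become safe."""
    def __init__(self, charge_fn):
        self.charge_fn = charge_fn
        self.processed = {}          # idempotency_key -> result

    def charge(self, key, amount):
        if key in self.processed:
            return self.processed[key]      # replay, not re-execution
        result = self.charge_fn(amount)
        self.processed[key] = result
        return result
```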
Common Mistakes with Payment System
- Hardcoding payment methods with if/else chains. Every new method means modifying the processor, which means risk and merge conflicts.
- Running fraud checks after charging the customer. Validate before you touch their money, not after.
- Using a free-form status string instead of an enum with enforced transitions. You will eventually see FAILED payments magically become COMPLETED.
- Missing idempotency on the charge endpoint. Network retries are a fact of life, and without key-based deduplication, users get double-charged.
Related Designs
Digital Wallet, ATM
Plugin Architecture — Foundational
Difficulty: Intermediate
Extensible plugin system with Abstract Factory for component families, Adapter for third-party integration, and Facade to shield plugins from host internals. The blueprint behind VS Code extensions and Chrome add-ons.
Design Patterns for Plugin Architecture
- Abstract Factory Pattern
- Adapter Pattern
- Facade Pattern
- Observer Pattern
- Prototype Pattern
Key Abstractions in Plugin Architecture
- Plugin: Interface defining the plugin lifecycle: initialize, activate, deactivate
- PluginFactory: Abstract Factory that creates a family of components: commands, UI panels, event handlers
- PluginAdapter: Adapter that wraps third-party APIs to conform to the Plugin interface
- PluginSDK: Facade providing a simplified API over host internals: file system, UI, events
- PluginRegistry: Manages plugin lifecycle, discovery, loading, and unloading
- PluginEventBus: Observer that broadcasts host events like file_opened and build_started to plugins
- PluginTemplate: Prototype that clones pre-configured plugin scaffolds for rapid creation
Key Points for Plugin Architecture
- Abstract Factory ensures each plugin provides a coherent family of components. You can't accidentally mix a dark-theme's UI with a linter's commands.
- Adapter makes third-party code fit the Plugin interface without modifying it: wrap external APIs and the host never knows the difference.
- Facade protects plugins from internal changes. The host can refactor freely as long as the SDK stays stable.
- Prototype for scaffolding: clone a 'starter plugin' template rather than constructing from scratch.
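A sketch of the Abstract Factory keeping a plugin's component family coherent — the LinterPlugin and its string-valued components are illustrative stand-ins:

```python
from abc import ABC, abstractmethod

class PluginFactory(ABC):
    """Each plugin supplies its whole component family through one
    factory, so the host can't mix parts from different plugins."""
    @abstractmethod
    def create_command(self): ...
    @abstractmethod
    def create_panel(self): ...

class LinterPlugin(PluginFactory):
    def create_command(self):
        return "lint:run"
    def create_panel(self):
        return "lint:problems-panel"

def install(factory):
    # The host sees only the abstract factory, never concrete classes.
    return {"command": factory.create_command(),
            "panel": factory.create_panel()}
```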
Common Mistakes with Plugin Architecture
- Exposing host internals directly to plugins. Any internal refactor breaks all plugins.
- No plugin isolation. One broken plugin crashes the entire host.
- Tight coupling between plugins. Plugin A importing Plugin B's classes directly.
- Forgetting plugin lifecycle management. Plugins that leak resources when deactivated go unnoticed until production.
Related Designs
API Gateway, Notification System, Cross-Platform UI Toolkit
Printer Spooler — Foundational
Difficulty: Intermediate
Multi-printer queue with priorities and paper/toner constraints. A priority queue per printer plus a job router handles the routing; cancellation and retries handle the rest.
Design Patterns for Printer Spooler
- Strategy Pattern
- State Pattern
- Observer Pattern
Key Abstractions in Printer Spooler
- PrintJob: What to print — pages, copies, duplex, priority, submitting user
- Printer: One physical device. Owns its queue, paper/toner state, and current job.
- PrintQueue: Priority queue of jobs for a printer
- JobRouter: Strategy picking which printer a new job goes to — round-robin, least-loaded, nearest
- SpoolService: Facade. Submit, cancel, status. Manages printer pool and router.
Key Points for Printer Spooler
- Priority queue per printer, not a global queue. Printers work in parallel.
- Job state machine: QUEUED → PRINTING → DONE, or QUEUED → CANCELLED, or PRINTING → FAILED.
- Paper/toner modeled as consumables. Low-supply state stops accepting jobs and surfaces an alert.
- Router is a Strategy — round-robin for fairness, least-loaded for throughput, nearest for office floors.
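The per-printer priority queue can be sketched with a heap — a tie-breaking counter keeps submission order stable within a priority level:

```python
import heapq
import itertools

class PrintQueue:
    """Priority queue for one printer: lower priority value prints first;
    the counter makes ordering FIFO within the same priority."""
    HIGH, NORMAL, LOW = 0, 1, 2

    def __init__(self):
        self.heap = []
        self.counter = itertools.count()

    def submit(self, job, priority=NORMAL):
        heapq.heappush(self.heap, (priority, next(self.counter), job))

    def next_job(self):
        return heapq.heappop(self.heap)[2] if self.heap else None
```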
Common Mistakes with Printer Spooler
- Single global queue with all printers pulling. Heavy contention and unfair routing under load.
- Cancellation that kills the currently-printing job. The sheet of paper is already halfway through — cancel only affects queued jobs.
- Forgetting to decrement paper count per page. Printer runs empty and users see 'printing' with nothing coming out.
- Priority as a float. Use an enum — HIGH, NORMAL, LOW. Comparable, predictable, debuggable.
Related Designs
Thread Pool, Rate Limiter, Workflow Engine
Pub/Sub System — Platform Scale
Difficulty: Advanced
Topic-based message broker with subscriber management. Observer pattern at its core, Strategy pattern for delivery semantics. Decouples producers from consumers completely.
Design Patterns for Pub/Sub System
- Observer Pattern
- Strategy Pattern
Key Abstractions in Pub/Sub System
- MessageBroker: Facade coordinating topic creation, subscriptions, and message routing
- Topic: Named channel that holds a subscriber list and dispatches messages
- Subscriber: Interface with an on_message callback. Any class can subscribe by implementing it.
- Message: Immutable payload with topic, content, timestamp, and unique ID for deduplication
- SubscriptionManager: Tracks topic-to-subscriber mappings, handles subscribe/unsubscribe lifecycle
Key Points for Pub/Sub System
- Observer pattern is the backbone. Topics maintain subscriber lists and notify on publish.
- Strategy pattern controls delivery semantics: synchronous vs asynchronous, at-most-once vs at-least-once.
- Message IDs enable deduplication. Without them, network retries cause duplicate processing.
- Topic-based routing chosen over content-based for predictability. Subscribers know exactly what they signed up for.
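A minimal sketch of the Observer backbone plus ID-based deduplication described above; `Topic`, `Message`, and `on_message` follow the abstractions listed, but the exact shapes are illustrative.

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Message:
    """Immutable payload; the unique ID is what makes dedup possible."""
    topic: str
    content: str
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

class Topic:
    """Observer subject: holds the subscriber list, notifies on publish."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def subscribe(self, sub):
        self.subscribers.append(sub)

    def publish(self, msg):
        for sub in self.subscribers:
            sub.on_message(msg)

class DedupSubscriber:
    """Drops redeliveries by remembering message IDs already seen."""
    def __init__(self):
        self.seen = set()
        self.received = []

    def on_message(self, msg):
        if msg.id in self.seen:
            return
        self.seen.add(msg.id)
        self.received.append(msg.content)

topic = Topic("orders")
sub = DedupSubscriber()
topic.subscribe(sub)
m = Message("orders", "order-1")
topic.publish(m)
topic.publish(m)  # simulated network redelivery; deduped by ID
```

Without the ID field on `Message`, the second publish would be indistinguishable from a new message — exactly the mistake called out below.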
Common Mistakes with Pub/Sub System
- Making subscribers concrete classes instead of interfaces. Every new integration requires modifying the broker.
- No message IDs. Retries and redeliveries silently duplicate work without deduplication support.
- Synchronous-only delivery. One slow subscriber blocks all others on the same topic.
- Forgetting thread safety on the subscriber list. Concurrent subscribe and publish causes ConcurrentModificationException.
Related Designs
Notification System, Logging Framework
Rate Limiter — Foundational
Difficulty: Intermediate
Token bucket for smooth throughput, sliding window for strict counting. Strategy pattern lets you swap algorithms without rewiring the system.
Design Patterns for Rate Limiter
- Strategy Pattern
- Factory Pattern
Key Abstractions in Rate Limiter
- RateLimiter: Facade that checks if a request is allowed and delegates to the active strategy
- RateLimitStrategy: Interface for swappable algorithms. Each strategy decides allow or reject independently.
- TokenBucketStrategy: Refills tokens at a fixed rate, each request costs one token. Allows controlled bursts.
- SlidingWindowStrategy: Tracks timestamps in a rolling window. Strict count-based limiting with no burst allowance.
- RateLimitConfig: Immutable config holding capacity, refill rate, and window size parameters
Key Points for Rate Limiter
- Token bucket allows short bursts up to bucket capacity, then throttles to the refill rate. Good for APIs that tolerate spikes.
- Sliding window counts requests in a rolling time window. Stricter than token bucket but uses more memory per client.
- Strategy pattern means you pick the algorithm at config time. Changing from token bucket to sliding window requires zero code changes in the limiter itself.
- Lazy token refill avoids background threads. Calculate tokens earned since last request on each allow() call.
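The lazy-refill point is easiest to see in code. A minimal token bucket sketch — the injectable `clock` parameter is an illustrative choice that makes the refill math testable without real time passing.

```python
import time

class TokenBucket:
    """Lazy refill: tokens earned since the last call are computed inside
    allow() itself, so no background refill thread is needed."""
    def __init__(self, capacity, refill_rate, clock=time.monotonic):
        self.capacity = capacity
        self.refill_rate = refill_rate  # tokens per second
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic usage with a fake clock: burst of 2, then throttled.
t = [0.0]
bucket = TokenBucket(capacity=2, refill_rate=1.0, clock=lambda: t[0])
```

In a real deployment `allow()` would also need a lock around the read-modify-write on `tokens` — the concurrency mistake listed below.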
Common Mistakes with Rate Limiter
- Using a fixed window instead of sliding. Requests at the boundary of two windows can double your actual rate. A client can send N requests at 11:59 and N more at 12:00.
- Running a timer thread to refill tokens. Wasteful. Compute the refill lazily when allow() is called by checking elapsed time.
- Forgetting thread safety on the token count. Two concurrent requests can both read tokens=1 and both pass, exceeding the limit.
- Not cleaning up old timestamps in sliding window. Without pruning, memory grows without bound for high-traffic clients.
Related Designs
LRU Cache, URL Shortener, Notification System
RBAC Permission System — Platform Scale
Difficulty: Intermediate
Hierarchical permissions with Composite groups, Chain of Responsibility for evaluation cascading, Proxy for resource guarding, and Flyweight for shared permission objects.
Design Patterns for RBAC Permission System
- Composite Pattern
- Chain of Responsibility
- Proxy Pattern
- Flyweight Pattern
- Decorator Pattern
Key Abstractions in RBAC Permission System
- Permission: Flyweight : shared immutable objects like READ, WRITE, DELETE per resource
- PermissionGroup: Composite : contains permissions and sub-groups. 'Admin' includes 'Editor' includes 'Viewer'
- PermissionEvaluator: Chain of Responsibility : user override → role → group → default policy
- ResourceProxy: Proxy : wraps protected resources, checks authorization before delegating
- PermissionCheckDecorator: Decorator : TimeBasedDecorator restricts to business hours, IPBasedDecorator restricts by network. All wrap a PermissionEvaluator.
- Role, User: Domain objects with assigned permission groups
Key Points for RBAC Permission System
- Flyweight: Permission.READ is the same object everywhere. Thousands of users reference the same Permission instances, not clones.
- Composite: PermissionGroup.hasPermission() recursively checks self and all child groups. 'Admin' → 'Editor' → 'Viewer' inheritance is tree traversal.
- Chain of Responsibility: evaluation order matters. User-level overrides beat role-level which beats group defaults.
- Proxy: ResourceProxy.access() checks permissions transparently. The resource itself has zero security logic.
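The Flyweight and Composite points above combine naturally; this sketch shows interned `Permission` objects and the recursive group check. Names mirror the abstractions listed, but the `__new__`-based interning is just one illustrative way to get the flyweight behavior.

```python
class Permission:
    """Flyweight: one shared instance per name, so Permission('READ')
    is the same object everywhere."""
    _pool = {}

    def __new__(cls, name):
        if name not in cls._pool:
            obj = super().__new__(cls)
            obj.name = name
            cls._pool[name] = obj
        return cls._pool[name]

class PermissionGroup:
    """Composite: holds permissions and sub-groups; the check recurses."""
    def __init__(self, name):
        self.name = name
        self.permissions = set()
        self.children = []

    def add(self, perm):
        self.permissions.add(perm)

    def add_group(self, group):
        self.children.append(group)

    def has_permission(self, perm):
        if perm in self.permissions:
            return True
        return any(c.has_permission(perm) for c in self.children)

# 'Admin' includes 'Editor' includes 'Viewer', as in the key points.
viewer = PermissionGroup("Viewer"); viewer.add(Permission("READ"))
editor = PermissionGroup("Editor"); editor.add(Permission("WRITE")); editor.add_group(viewer)
admin = PermissionGroup("Admin"); admin.add(Permission("DELETE")); admin.add_group(editor)
```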
Common Mistakes with RBAC Permission System
- Checking permissions with string comparison ('admin' == role) instead of structured evaluation
- Not handling permission inheritance: you must check parent groups recursively
- Creating new Permission objects per check instead of reusing Flyweights: wastes memory in large systems
- Hardcoding time/IP constraints in the evaluator: Decorator lets you compose constraints flexibly
Related Designs
API Gateway, Feature Flag System, Plugin Architecture
Regex Matcher — Foundational
Difficulty: Advanced
Parse pattern into an AST, compile to NFA, simulate against input. Supports literals, '.', '*', '+', '?', character classes, and grouping — without catastrophic backtracking.
Design Patterns for Regex Matcher
- Interpreter Pattern
- Composite Pattern
- Visitor Pattern
Key Abstractions in Regex Matcher
- Node: AST node — Literal, AnyChar, CharClass, Star, Plus, Optional, Concat, Alternate, StartAnchor, EndAnchor
- Parser: Builds the AST. Handles precedence, grouping, and desugars bounded {n,m} into concat + Optional.
- NFABuilder: Thompson's construction — AST → NFA graph, including position-conditional edges for anchors
- NFA: State machine with epsilon and epsilon-pos transitions. Simulation runs all reachable states in lockstep.
- Matcher: Walks the input; epsilon closure is position-aware so ^ and $ fire only at boundaries.
Key Points for Regex Matcher
- Thompson's NFA is O(n*m) where n is pattern length and m is input length. Backtracking regex is exponential.
- Parse the pattern once, match many inputs. Separating compile from match is the key performance decision.
- State machine over the input — advance all active states per input char. No stack, no recursion.
- Epsilon transitions handle quantifiers cleanly: a* is 'go forward or loop back' as free state moves.
- Anchors are zero-width. They're position-conditional epsilons — fire only when the cursor is at start (^) or end ($).
- {n,m} is desugared in the parser into n copies of the child plus (m−n) Optional copies — no new NFA shape required.
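The lockstep simulation is the heart of the design. Below is a deliberately tiny sketch — Thompson-style construction for literals, `.`, and postfix `*` only (no groups, anchors, or classes), with `compile_nfa`/`matches` as illustrative names. It shows the 'advance all active states per input char' loop from the key points.

```python
def compile_nfa(pattern):
    """Build a tiny NFA: trans maps (state, symbol) -> state, eps maps
    state -> epsilon-reachable states. Supports literals, '.', postfix '*'."""
    trans, eps, counter = {}, {}, [0]

    def new_state():
        counter[0] += 1
        return counter[0] - 1

    def add_eps(a, b):
        eps.setdefault(a, set()).add(b)

    start = new_state()
    current = start
    i = 0
    while i < len(pattern):
        c = pattern[i]
        starred = i + 1 < len(pattern) and pattern[i + 1] == '*'
        s, a = new_state(), new_state()
        trans[(s, c)] = a
        add_eps(current, s)
        if starred:
            add_eps(current, a)  # zero occurrences: skip over
            add_eps(a, s)        # loop back for more occurrences
            i += 2
        else:
            i += 1
        current = a
    return trans, eps, start, current

def closure(states, eps):
    """Epsilon closure: everything reachable by free moves."""
    stack, seen = list(states), set(states)
    while stack:
        for nxt in eps.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def matches(pattern, text):
    trans, eps, start, accept = compile_nfa(pattern)
    active = closure({start}, eps)
    for ch in text:  # all active states advance in lockstep: O(n*m)
        moved = {trans[(st, sym)] for st in active
                 for sym in (ch, '.') if (st, sym) in trans}
        active = closure(moved, eps)
    return accept in active
```

No stack, no recursion, no backtracking: `(a|a)*`-style blowup cannot happen because each input character is consumed exactly once by the whole state set.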
Common Mistakes with Regex Matcher
- Implementing regex with recursion and backtracking. Works on simple patterns, explodes on `(a|a)*b` against `aaaaaa`.
- Mixing lexer, parser, and matcher. Every feature addition touches three places at once.
- Modeling '*' as 'zero or more' without the NFA framing. The implementation becomes a maze of nested loops.
- Forgetting anchors. Without `^`/`$` support, a bare pattern like `a` is ambiguous: the matcher must define and report whether it performs substring or full-string matching.
Related Designs
Expression Evaluator, Text Editor, Form Validation
Restaurant Management — Booking & Reservations
Difficulty: Intermediate
Table allocation, order lifecycle, and kitchen queue coordination. Strategy picks tables, State drives orders through their lifecycle, and Observer wires the kitchen to the wait staff.
Design Patterns for Restaurant Management
- Strategy Pattern
- State Pattern
- Observer Pattern
Key Abstractions in Restaurant Management
- Restaurant: Facade. Entry point for reservations, ordering, and billing.
- Table: Physical seating unit with a capacity and occupancy status.
- Reservation: Binds a party to a future time slot and table assignment.
- Order: Collection of menu items moving through PLACED/PREPARING/READY/SERVED states.
- MenuItem: Name, price, category. Immutable value object.
- Kitchen: Processes order queue. Notifies observers when an order is ready.
- Bill: Aggregates order totals, applies tax and optional tip.
Key Points for Restaurant Management
- Strategy pattern for table allocation: first-fit grabs the first table that fits the party, best-fit picks the smallest adequate table to minimize wasted seats
- State pattern on Order so each status transition is explicit. PLACED can move to PREPARING, PREPARING to READY, READY to SERVED. No random jumps.
- Observer decouples Kitchen from wait staff. Kitchen fires an event when food is ready. The waiter listener handles delivery. Adding a notification to the customer's phone is just another observer.
- Enums for OrderStatus and TableStatus. No magic strings drifting through the codebase.
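The two allocation strategies from the first key point fit in a few lines each. A minimal sketch with illustrative names; a fuller design would put these behind a common `TableAllocationStrategy` interface.

```python
class Table:
    def __init__(self, table_id, capacity):
        self.table_id = table_id
        self.capacity = capacity
        self.occupied = False

def first_fit(tables, party_size):
    """Grab the first free table that fits the party."""
    for t in tables:
        if not t.occupied and t.capacity >= party_size:
            return t
    return None

def best_fit(tables, party_size):
    """Pick the smallest adequate table to minimize wasted seats."""
    candidates = [t for t in tables if not t.occupied and t.capacity >= party_size]
    return min(candidates, key=lambda t: t.capacity, default=None)

tables = [Table(1, 8), Table(2, 4), Table(3, 2)]
```

For a party of 3, first-fit seats them at the 8-top (wasting five seats) while best-fit picks the 4-top — the difference the strategy swap exists to express.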
Common Mistakes with Restaurant Management
- Letting Restaurant manage individual order state transitions. That's the Order's job. Restaurant should delegate.
- Skipping the Kitchen abstraction and having Order prepare itself. That conflates what you ordered with how it gets made.
- Using strings for order status instead of an enum. One typo ('PREPRING') and your state machine silently breaks.
- Forgetting thread safety on table assignment. Two hosts seating parties simultaneously can double-book the same table.
Related Designs
Hotel Management, Food Delivery
Ride Sharing (Uber) — Platform Scale
Difficulty: Advanced
Driver matching via strategy pattern, ride lifecycle via state machine, and surge pricing that adapts to demand. The core of any ride-hailing platform in clean OOP.
Design Patterns for Ride Sharing (Uber)
- Strategy Pattern
- Observer Pattern
- State Pattern
Key Abstractions in Ride Sharing (Uber)
- RideService: Orchestrator that coordinates rider requests, driver matching, and ride lifecycle
- Rider: Requester who initiates a ride with pickup and dropoff locations
- Driver: Provider with real-time location, rating, and availability status
- Ride: Core entity with state machine: REQUESTED -> MATCHED -> EN_ROUTE -> IN_PROGRESS -> COMPLETED
- MatchingStrategy: Algorithm for selecting a driver: nearest-first, rating-based, or hybrid
- PricingStrategy: Fare calculation: distance-based flat rate or surge pricing during peak demand
- Location: Lat/lng coordinate pair with Haversine distance calculation
Key Points for Ride Sharing (Uber)
- Strategy pattern for driver matching: swap between nearest-driver, highest-rated, or load-balanced without touching core logic
- State machine on Ride enforces valid transitions. You cannot go from REQUESTED to IN_PROGRESS directly.
- Observer pattern notifies riders of driver location updates and ride status changes
- Pricing strategy is decoupled from ride logic. Surge pricing multiplier is a strategy swap, not an if-else chain.
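Since the mistakes below call out Euclidean-vs-Haversine, here is a sketch of the `Location` distance math and a nearest-first matching strategy. The driver dict shape is illustrative.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km; lat/lng are not Cartesian coordinates."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest_driver(drivers, pickup):
    """Nearest-first MatchingStrategy: one min() over Haversine distance."""
    return min(drivers, key=lambda d: haversine_km(*d["loc"], *pickup))

drivers = [
    {"id": "d1", "loc": (37.80, -122.41)},   # near downtown SF
    {"id": "d2", "loc": (34.05, -118.25)},   # Los Angeles
]
```

Swapping to rating-based matching changes only the `key` function, which is the Strategy point above in miniature.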
Common Mistakes with Ride Sharing (Uber)
- Putting matching logic inside RideService directly. That becomes unmaintainable when you need A/B testing between matching algorithms.
- Using string-based ride status instead of an enum with enforced transitions. You will end up with rides in impossible states.
- Calculating distance with Euclidean formula instead of Haversine. Lat/lng coordinates are not Cartesian.
- Forgetting to mark a driver as unavailable after matching. Two riders can get matched to the same driver.
Related Designs
Food Delivery, Parking Lot
Search Index Engine — Platform Scale
Difficulty: Advanced
Inverted index with Composite field structure, Visitor-based query operations, swappable tokenization strategies, and Decorator for query enrichment. The engine behind Elasticsearch.
Design Patterns for Search Index Engine
- Composite Pattern
- Visitor Pattern
- Strategy Pattern
- Builder Pattern
- Decorator Pattern
Key Abstractions in Search Index Engine
- IndexNode: Composite : root index contains field-level sub-indices
- FieldIndex: Leaf : inverted index for a single field: term → doc IDs
- IndexVisitor: Visitor : EvalVisitor for search, StatsVisitor for index statistics
- TokenizerStrategy: Strategy : WhitespaceTokenizer, NgramTokenizer, LowercaseTokenizer
- IndexBuilder: Builder : addField with type, set analyzer, set boost, build
- QueryDecorator: Decorator : BoostDecorator multiplies scores, FilterDecorator excludes results
Key Points for Search Index Engine
- Composite: a SearchIndex contains multiple FieldIndex objects (title, body, tags). Querying the root traverses all relevant sub-indices.
- Visitor: search, scoring, statistics collection are visitors that traverse the index tree without modifying index classes.
- Strategy: tokenization is per-field. Title might use whitespace tokenizer, body uses ngram, tags use exact match.
- Builder: index schema is assembled step-by-step. Builder validates required fields exist and analyzers are set.
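The Composite point — a root index fanning queries out over per-field inverted indices — can be sketched directly. Class names follow the abstractions above; the default whitespace tokenizer stands in for the per-field Strategy.

```python
from collections import defaultdict

class FieldIndex:
    """Leaf: inverted index for one field, term -> set of doc IDs."""
    def __init__(self, tokenizer=str.split):
        self.tokenizer = tokenizer   # per-field Strategy slot
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        for term in self.tokenizer(text.lower()):
            self.postings[term].add(doc_id)

    def search(self, term):
        return self.postings.get(term.lower(), set())

class SearchIndex:
    """Composite root: querying it traverses all field sub-indices."""
    def __init__(self):
        self.fields = {}

    def add_field(self, name, tokenizer=str.split):
        self.fields[name] = FieldIndex(tokenizer)

    def index(self, doc_id, doc):
        for name, text in doc.items():
            self.fields[name].add(doc_id, text)

    def search(self, term):
        hits = set()
        for field in self.fields.values():
            hits |= field.search(term)
        return hits

idx = SearchIndex()
idx.add_field("title")
idx.add_field("body")
idx.index(1, {"title": "Design Patterns", "body": "strategy and visitor"})
idx.index(2, {"title": "Visitor Guide", "body": "travel tips"})
```

Because fields are separate leaves, a scoring visitor could later weight `title` hits above `body` hits — impossible with the flat index called out in the mistakes below.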
Common Mistakes with Search Index Engine
- Using a single flat index for all fields: you can't weight title matches higher than body matches
- Putting search logic inside index classes: adding TF-IDF scoring requires modifying FieldIndex
- Hardcoding tokenization: English whitespace doesn't work for CJK languages
- Not building the index schema upfront: leads to inconsistent field types across documents
Related Designs
Typeahead Autocomplete, Document Builder, Expression Evaluator
Smart Home Controller — Real-World Systems
Difficulty: Intermediate
SmartHomeHub mediates between devices with zero direct coupling. Commands enable scenes and undo. Composite models room/floor hierarchy. Facade exposes simple routines like goodNight().
Design Patterns for Smart Home Controller
- Mediator Pattern
- Command Pattern
- Composite Pattern
- Observer Pattern
- Facade Pattern
Key Abstractions in Smart Home Controller
- SmartDevice: Interface -- turnOn, turnOff, getStatus. All devices implement this contract.
- Light, Thermostat, DoorLock, SecurityCamera: Concrete devices with device-specific behavior (brightness, temperature, lock state, recording).
- SmartHomeHub: Mediator -- devices register here, hub orchestrates all inter-device interactions.
- DeviceCommand: Command -- encapsulates device actions with execute/undo for scenes, scheduling, and replay.
- DeviceGroup: Composite -- rooms contain devices, floors contain rooms. Recursive operations on the hierarchy.
- SmartHomeFacade: Facade -- goodMorning(), goodNight(), leaveHome() routines that coordinate dozens of device actions.
- SensorListener: Observer -- reacts to sensor events like motion detected or temperature threshold crossed.
Key Points for Smart Home Controller
- Mediator eliminates device-to-device coupling: thermostat tells hub "temp is 80F", hub tells AC to activate
- Command enables scene macros, scheduling, and undo: "good night" is a sequence of commands
- Composite for physical hierarchy: "turn off 2nd floor" recursively turns off all devices in all rooms
- Facade shields the user from complexity: one method triggers dozens of coordinated device actions
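The "scene as a sequence of commands" point sketches cleanly: a macro command executes children in order and undoes them in reverse. Class names here are illustrative stand-ins for `DeviceCommand` and a scene macro.

```python
class Light:
    def __init__(self):
        self.on = False
    def turn_on(self):
        self.on = True
    def turn_off(self):
        self.on = False

class LightOffCommand:
    """Command with undo: remembers prior state so undo restores it."""
    def __init__(self, light):
        self.light = light
        self.was_on = None

    def execute(self):
        self.was_on = self.light.on
        self.light.turn_off()

    def undo(self):
        if self.was_on:
            self.light.turn_on()

class Scene:
    """Macro command: 'good night' is just a sequence of commands."""
    def __init__(self, commands):
        self.commands = commands

    def execute(self):
        for c in self.commands:
            c.execute()

    def undo(self):
        for c in reversed(self.commands):
            c.undo()

bedroom, hallway = Light(), Light()
bedroom.turn_on()
good_night = Scene([LightOffCommand(bedroom), LightOffCommand(hallway)])
good_night.execute()
```

Undo only re-lights what was on before — the per-command snapshot is what makes the macro reversible.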
Common Mistakes with Smart Home Controller
- Devices directly referencing each other: thermostat imports AC class, motion sensor imports light class
- No command abstraction: can't undo, can't replay, can't schedule
- Flat device list instead of hierarchy: "turn off living room" requires filtering by room tag instead of tree traversal
- Missing error handling for offline devices: one failed device shouldn't block the entire routine
Related Designs
Elevator System, Traffic Signal, Chat Room
Snake & Ladder — Games & Simulations
Difficulty: Starter
Board with position jumps, configurable snakes and ladders, and a dice strategy so you can swap fair for loaded. Builder pattern keeps board construction clean.
Design Patterns for Snake & Ladder
- Strategy Pattern
- Builder Pattern
- Observer Pattern
Key Abstractions in Snake & Ladder
- Game: Orchestrator managing player turns, dice rolls, and win detection
- Board: Grid of cells with registered snakes and ladders, resolves final position after a move
- Player: Tracks name, current position, and move history
- Dice: Strategy interface for rolling. FairDice gives uniform distribution, LoadedDice can be weighted.
- Snake: Position jump where start > end. Landing on head sends you to tail.
- Ladder: Position jump where start < end. Landing on bottom sends you to top.
- GameState: Enum tracking IN_PROGRESS, WON, or DRAW
- BoardBuilder: Fluent builder for constructing boards with validated snake and ladder placement
Key Points for Snake & Ladder
- Snakes and ladders are both position jumps. A Snake has start > end, a Ladder has start < end. Same interface, different direction.
- Strategy pattern on Dice lets you inject fair, loaded, or deterministic dice for testing.
- Builder pattern for Board handles validation: no snake head at a ladder bottom, no cycles, no overlapping positions.
- Board is separate from Game so you can test position resolution without dealing with turns or players.
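A minimal sketch of the position-jump resolution and the injectable dice Strategy; `FixedDice` is the test double the key points argue for.

```python
import random

class FairDice:
    def roll(self):
        return random.randint(1, 6)

class FixedDice:
    """Deterministic dice for tests -- the point of the Strategy here."""
    def __init__(self, values):
        self.values = list(values)
    def roll(self):
        return self.values.pop(0)

class Board:
    def __init__(self, size=100, jumps=None):
        self.size = size
        # Snakes and ladders are the same structure: a position jump.
        # Snake: head -> tail (down). Ladder: bottom -> top (up).
        self.jumps = jumps or {}

    def resolve(self, position, roll):
        target = position + roll
        if target > self.size:
            return position  # rolled past the last cell: stay put
        return self.jumps.get(target, target)

board = Board(100, {3: 22, 27: 5})  # ladder 3->22, snake 27->5
```

Because `Board.resolve` is free of turn logic, all the jump rules are testable without constructing a `Game` or any players.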
Common Mistakes with Snake & Ladder
- Putting snake/ladder logic directly in Game instead of Board. Makes it impossible to test board rules in isolation.
- Using a raw integer for dice without an abstraction. You lose the ability to inject test doubles.
- Forgetting to handle the case where a player rolls past position 100 and should stay put.
- Allowing a snake head and ladder bottom on the same cell. That's an ambiguous board state.
Related Designs
Tic Tac Toe, Chess
Social Media Feed — Platform Scale
Difficulty: Advanced
Social media platform with personalized feed generation, follow graph, and ranked content. Observer pattern for fan-out on post creation, Strategy pattern for swappable feed ranking (chronological vs engagement-weighted), and Iterator pattern for paginated feed traversal.
Design Patterns for Social Media Feed
- Observer Pattern
- Strategy Pattern
- Iterator Pattern
Key Abstractions in Social Media Feed
- FeedPlatform: Facade coordinating users, posts, follows, and feed generation
- User: Has profile, list of followers and following, and creates posts
- Post: Content with text, author, timestamp, likes count, and comments list
- Comment: Text response to a post with author and timestamp
- Feed: Aggregates and ranks posts from followed users, supports pagination
- FeedStrategy: Strategy interface for ranking feed posts (Chronological, Engagement)
- FollowManager: Manages follow/unfollow relationships and observer notifications
Key Points for Social Media Feed
- Fan-out-on-read vs fan-out-on-write is THE key architectural decision. Fan-out-on-read computes the feed at query time by merging posts from all followed users. Fan-out-on-write precomputes feeds when a post is created. This LLD uses fan-out-on-read for simplicity, but production systems use a hybrid.
- Strategy pattern makes feed ranking testable and swappable. Chronological is simple (sort by timestamp). Engagement-weighted factors in likes, comments, and recency. Adding a new ranking algorithm means one new class.
- Observer pattern decouples post creation from feed updates and notifications. When a user posts, followers don't need to poll. The system pushes updates.
- Pagination via iterator prevents loading entire feed into memory. Real feeds have thousands of posts. Load 20 at a time with a cursor.
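Fan-out-on-read plus pagination can be sketched as a lazy k-way merge: each followed user's post list (kept newest-first) is merged at query time, and only one page is materialized. The function name and dict shapes are illustrative.

```python
import heapq
from itertools import islice

def chronological_feed(posts_by_user, following, page_size, offset=0):
    """Fan-out-on-read: merge each followed user's posts (already sorted
    newest-first) lazily; pull only the requested page into memory."""
    streams = [posts_by_user.get(u, []) for u in following]
    merged = heapq.merge(*streams, key=lambda p: p["ts"], reverse=True)
    return list(islice(merged, offset, offset + page_size))

posts_by_user = {
    "alice": [{"ts": 5, "id": "a5"}, {"ts": 1, "id": "a1"}],
    "bob":   [{"ts": 4, "id": "b4"}, {"ts": 2, "id": "b2"}],
}
```

`heapq.merge` is lazy, so even with 500 followed users only `page_size` posts are ever sorted into the output — the memory mistake below never arises. Swapping in an engagement-weighted key function is the Strategy point in one argument.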
Common Mistakes with Social Media Feed
- Loading all posts from all followed users into memory at once. With 500 follows each posting daily, that's thousands of posts to sort. Use pagination and limit the merge window.
- Not separating feed ranking from feed assembly. Hardcoding chronological sort means every new ranking experiment requires editing the feed generation code.
- Letting users like/comment without tracking who did it. Without deduplication, a user can like the same post infinite times.
- Circular follow relationships causing infinite notification loops. The observer must not trigger re-notification when updating derived state.
Related Designs
Notification System, Pub/Sub System
Splitwise — Financial & Payments
Difficulty: Advanced
Expense splitting with multiple strategies, debt simplification via a greedy graph algorithm, and a balance ledger that tracks net amounts instead of individual transactions.
Design Patterns for Splitwise
- Strategy Pattern
- Observer Pattern
- Command Pattern
Key Abstractions in Splitwise
- ExpenseService: Facade for creating expenses and querying balances
- Expense: Immutable record of who paid, how much, and the split breakdown
- SplitStrategy: Strategy interface for equal, exact, and percentage split calculations
- BalanceLedger: Net balance tracker with O(1) lookup per user pair
- DebtSimplifier: Greedy graph algorithm to minimize number of settlements
- User: Participant with name and unique ID
Key Points for Splitwise
- Strategy pattern for split logic: equal, exact, and percentage splits without if/else chains
- Balance ledger stores net amounts, not individual transactions. O(1) per-pair lookup.
- Debt simplification is a greedy graph problem. Sort creditors and debtors, match greedily.
- Command pattern records expense history, so undo support and audit trail come for free
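The greedy simplification from the key points, sketched directly over the net-balance ledger. The tuple shapes are illustrative; amounts are assumed to be exact (integers or cents) so the ledger sums to zero.

```python
def simplify_debts(net_balances):
    """Greedy settlement: repeatedly match the largest creditor with the
    largest debtor. net_balances: user -> net amount (positive = is owed)."""
    creditors = sorted((amt, u) for u, amt in net_balances.items() if amt > 0)
    debtors = sorted((-amt, u) for u, amt in net_balances.items() if amt < 0)
    settlements = []
    while creditors and debtors:
        c_amt, c = creditors.pop()   # largest creditor
        d_amt, d = debtors.pop()     # largest debtor
        paid = min(c_amt, d_amt)
        settlements.append((d, c, paid))  # d pays c
        if c_amt > paid:
            creditors.append((c_amt - paid, c))
            creditors.sort()
        if d_amt > paid:
            debtors.append((d_amt - paid, d))
            debtors.sort()
    return settlements

# A paid 30 on behalf of the group; B owes 10 and C owes 20.
balances = {"A": 30, "B": -10, "C": -20}
```

Three users with mixed debts settle in two transfers instead of a web of pairwise IOUs — the whole point of simplification.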
Common Mistakes with Splitwise
- Storing gross amounts instead of net balances. That leads to O(n) scan on every 'who owes whom' query.
- Not validating that split amounts sum to the expense total. You get silent balance drift over time.
- Treating percentage splits as exact after rounding. The last person should absorb the rounding error.
- Forgetting that the payer can also be a participant in the split. Common off-by-one on share count.
Related Designs
Digital Wallet, Payment System, ATM
Spreadsheet Engine — Real-World Systems
Difficulty: Advanced
Reactive cell dependency graph with Observer propagation, Composite expression trees for formulas, Interpreter for parsing, and Memento for undo/redo. The Google Sheets interview classic.
Design Patterns for Spreadsheet Engine
- Observer Pattern
- Composite Pattern
- Interpreter Pattern
- Memento Pattern
- Visitor Pattern
Key Abstractions in Spreadsheet Engine
- Cell: Holds a raw value or formula, observes its dependencies for changes, and notifies dependent cells to recalculate
- Expression: Composite interface for formula AST nodes — supports evaluate(context) and accept(visitor) for double dispatch
- NumberLiteral / CellReference / BinaryOp / FunctionCall: Composite leaf and branch nodes representing constants, cell lookups, arithmetic, and aggregations like SUM
- FormulaParser: Interpreter that tokenizes and parses formula strings like '=A1+SUM(B1:B3)' into an expression tree
- SpreadsheetMemento: Snapshot of all cell values and formulas at a point in time, enabling full undo/redo without replaying history
- ExpressionVisitor: Visits the expression tree for evaluation, dependency extraction, and cycle detection — no modifications to expression classes
Key Points for Spreadsheet Engine
- Observer with topological ordering: when A1 changes, recalculate B1=A1+5 before C1=B1*2. You must evaluate the dependency graph in topological order so no cell reads a stale value.
- Composite: formulas are expression trees. SUM(A1, A2+3) is a FunctionCall containing a CellReference and a BinaryOp. Every node implements the same Expression interface.
- Interpreter: the formula mini-language (=, +, -, *, SUM, MAX) is parsed into an AST via recursive descent and evaluated by walking the tree.
- Visitor: evaluate(), getDependencies(), and detectCycles() are visitors over the expression tree. You can add new operations without modifying the expression node classes.
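The topological-ordering point deserves a concrete sketch. Given each cell's dependencies (what it reads), this derives the dependents graph and orders recalculation so no cell reads a stale value. The function name and dict shapes are illustrative.

```python
from collections import defaultdict

def recalc_order(deps, changed):
    """deps: cell -> set of cells its formula reads.
    Returns the cells downstream of `changed` in an order where every
    cell appears after everything it depends on."""
    dependents = defaultdict(set)
    for cell, reads in deps.items():
        for r in reads:
            dependents[r].add(cell)

    order, seen = [], set()

    def visit(cell):  # DFS post-order over dependents, then reverse
        for d in sorted(dependents[cell]):
            if d not in seen:
                seen.add(d)
                visit(d)
                order.append(d)

    visit(changed)
    return order[::-1]

# B1 = A1 + 5, C1 = B1 * 2: changing A1 must recalc B1 before C1.
deps = {"B1": {"A1"}, "C1": {"B1"}}
```

A diamond (two cells reading A1, one cell reading both) also comes out in a valid order, which naive breadth-first propagation does not guarantee.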
Common Mistakes with Spreadsheet Engine
- No cycle detection: A1=B1, B1=A1 causes infinite recalculation. You must detect cycles before accepting a formula.
- Evaluating dependents in arbitrary order: can read stale values. Topological sort of the dependency graph is required.
- Storing formatted strings instead of typed values: breaks arithmetic on cells displayed as currency or percentages.
- Not using Memento: undo requires replaying all edits from the start, which is O(n) per undo instead of O(1).
Related Designs
Expression Evaluator, Text Editor, Document Builder
Stack Overflow — Platform Scale
Difficulty: Advanced
Q&A platform with reputation system, voting mechanics, and search ranking. Observer pattern for reputation events, Strategy pattern for search, and State pattern for question lifecycle. Reputation thresholds gate privileged actions like voting and closing.
Design Patterns for Stack Overflow
- Observer Pattern
- Strategy Pattern
- State Pattern
Key Abstractions in Stack Overflow
- Platform: Facade coordinating users, questions, answers, votes, tags, and search
- Question: Holds title, body, tags, answers list, vote count, and open/closed state
- Answer: Response to a question with its own vote count and accepted/rejected status
- User: Author with reputation score that gates privileged actions like voting and closing
- Tag: Categorization label attached to questions for filtering and trending
- SearchStrategy: Strategy interface for ranking search results by relevance, votes, or recency
Key Points for Stack Overflow
- Reputation is the central mechanic. It gates who can vote, comment, edit, and close questions. Without thresholds, the system has no quality control.
- Observer pattern propagates reputation changes. When a vote happens, both the voter and the content author get reputation adjustments.
- Strategy pattern for search ranking. Users switch between relevance, votes, and date sorting without changing the search infrastructure.
- State pattern governs question lifecycle. Open questions accept answers. Closed questions reject them. The state object enforces this cleanly.
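The separation between vote action and reputation effect (the last mistake below) is just Observer wiring. A minimal sketch with illustrative class names; the +10 delta is an assumption, not a claim about Stack Overflow's actual formula.

```python
class ReputationLedger:
    """Observer: applies incremental deltas and stores a running total,
    rather than recomputing reputation from vote history on every read."""
    def __init__(self):
        self.rep = {}

    def on_vote(self, user, delta):
        self.rep[user] = self.rep.get(user, 0) + delta

    def can(self, user, threshold):
        """Reputation thresholds gate privileged actions."""
        return self.rep.get(user, 0) >= threshold

class VotingService:
    """Subject: fires reputation events instead of mutating scores inline."""
    UPVOTE_DELTA = 10  # assumed value for illustration

    def __init__(self):
        self.observers = []

    def register(self, obs):
        self.observers.append(obs)

    def upvote(self, author):
        for obs in self.observers:
            obs.on_vote(author, self.UPVOTE_DELTA)

ledger = ReputationLedger()
service = VotingService()
service.register(ledger)
service.upvote("alice")
service.upvote("alice")
```

Changing the reputation formula now touches only the ledger, not every call site that casts a vote.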
Common Mistakes with Stack Overflow
- Letting anyone vote without reputation thresholds. Sock puppet accounts and vote manipulation become trivial.
- Computing reputation on the fly from vote history every time. This is expensive. Store it as a running total and update incrementally.
- Hardcoding search ranking logic. Relevance, votes, and date sorting should be swappable strategies, not if-else chains.
- No separation between vote action and reputation effect. Changing the reputation formula means hunting through every place votes are cast.
Related Designs
Notification System, Task Management
Stock Brokerage — Financial & Payments
Difficulty: Advanced
Price-time priority order matching with per-stock order books, sorted buy/sell queues for O(log n) insertion, partial fill tracking, and Observer notifications on every executed trade.
Design Patterns for Stock Brokerage
- Observer Pattern
- Strategy Pattern
- Command Pattern
Key Abstractions in Stock Brokerage
- Exchange: Top-level matching engine that routes orders to the correct order book and updates portfolios
- Order: Buy or sell request with price, quantity, and fill tracking. MarketOrder and LimitOrder are subtypes.
- OrderBook: Per-stock bid/ask queues sorted by price-time priority, with its own lock for parallel matching
- Portfolio: User's cash balance and stock holdings, updated atomically on each trade
- Trade: Immutable record of an executed match: buyer, seller, price, quantity, timestamp
Key Points for Stock Brokerage
- One OrderBook per stock. AAPL and GOOGL match independently, so they can run in parallel without contention.
- Price-time priority: best price wins. Ties broken by earliest submission. This is how NYSE and NASDAQ work.
- Market orders match immediately at the best available price. Limit orders wait in the book until a compatible counter-order arrives.
- Partial fills are first-class. A buy for 100 shares might match against three separate sells of 30, 30, and 40.
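Price-time priority and partial fills can be sketched with two heaps. A simplified illustration — trades execute at the resting ask price, and the monotonically increasing sequence number is what implements the time half of price-time priority.

```python
import heapq
import itertools

class OrderBook:
    """Per-stock book. Bids: max-heap via negated price; asks: min-heap.
    Heap entries are [sort_price, seq, qty]; seq breaks price ties by
    arrival order (time priority)."""
    def __init__(self):
        self.bids, self.asks = [], []
        self.seq = itertools.count()

    def add(self, side, price, qty):
        entry = [price if side == "sell" else -price, next(self.seq), qty]
        heapq.heappush(self.asks if side == "sell" else self.bids, entry)
        return self.match()

    def match(self):
        trades = []
        # Cross while best bid >= best ask.
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            bid, ask = self.bids[0], self.asks[0]
            qty = min(bid[2], ask[2])            # partial fill is the norm
            trades.append((ask[0], qty))         # trade at resting ask price
            bid[2] -= qty
            ask[2] -= qty
            if bid[2] == 0:
                heapq.heappop(self.bids)
            if ask[2] == 0:
                heapq.heappop(self.asks)
        return trades

book = OrderBook()
book.add("sell", 10, 30)
book.add("sell", 10, 30)
book.add("sell", 10, 40)
```

A buy for 100 shares now fills against all three resting sells — the exact partial-fill scenario from the key points.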
Common Mistakes with Stock Brokerage
- Using a single order book for all stocks. Matching AAPL orders would block GOOGL for no reason.
- Not handling partial fills. A large order routinely matches against several smaller counter-orders at different price levels.
- Sorting the sell side in descending order instead of ascending. The lowest ask should match first, not the highest.
- Forgetting to update portfolio holdings after a trade executes. The order is filled but the user's position stays stale.
Related Designs
Online Auction, Payment System
Sudoku Solver — Games & Simulations
Difficulty: Intermediate
Backtracking with constraint propagation. The design question isn't 'how do I solve Sudoku' — it's how to model the board so validation, generation, and solving reuse the same code.
Design Patterns for Sudoku Solver
- Strategy Pattern
- Template Method
- Composite Pattern
Key Abstractions in Sudoku Solver
- Board: 9x9 grid with cells, plus derived views (rows, columns, boxes)
- Cell: Position plus current value (0 for empty). Knows its row, column, and 3x3 box.
- Solver: Strategy interface. Backtracking (MRV), DLX, or constraint-propagation variants plug in.
- Generator: Produces puzzles by solving a random seed then blanking N cells
Key Points for Sudoku Solver
- Track used digits per row, column, and box as bitmasks. Validation becomes a single bitwise AND.
- Backtracking picks the cell with fewest candidates first (MRV heuristic). Cuts search space massively.
- Board, Validator, and Solver are separate. Generation reuses Validator; solving reuses Board.
- Solver is a Strategy so the same Board can be solved by backtracking, DLX, or constraint propagation.
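The bitmask idea in code: one 9-bit mask per row, column, and box, so a placement check is a single OR-and-AND. The class name is illustrative; a real solver would own one of these alongside the grid.

```python
class BitmaskBoard:
    """Used-digit tracking per row/col/box as 9-bit masks.
    Bit (d-1) set means digit d is already used in that unit."""
    def __init__(self):
        self.rows = [0] * 9
        self.cols = [0] * 9
        self.boxes = [0] * 9

    @staticmethod
    def _box(r, c):
        return (r // 3) * 3 + c // 3

    def can_place(self, r, c, d):
        bit = 1 << (d - 1)
        used = self.rows[r] | self.cols[c] | self.boxes[self._box(r, c)]
        return not (used & bit)

    def place(self, r, c, d):
        bit = 1 << (d - 1)
        self.rows[r] |= bit
        self.cols[c] |= bit
        self.boxes[self._box(r, c)] |= bit

    def unplace(self, r, c, d):
        """Undo on backtrack -- skipping this leaves a stale bit set."""
        mask = ~(1 << (d - 1))
        self.rows[r] &= mask
        self.cols[c] &= mask
        self.boxes[self._box(r, c)] &= mask

board = BitmaskBoard()
board.place(0, 0, 5)
```

`unplace` mirrors `place` exactly; the paired calls around each backtracking attempt are what keep the masks truthful.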
Common Mistakes with Sudoku Solver
- Scanning row/col/box on every move. That's 27 cell checks per placement. Bitmasks reduce it to a single bitwise test.
- Forgetting to undo placements on backtrack. The mask has a stale bit and the solver deadlocks.
- Choosing cells in reading order. Picking the most-constrained cell first (MRV) is 100x faster.
- Mixing UI state into the board model. A console-free Board is what lets the solver run in tests.
Related Designs
Chess, Minesweeper, Expression Evaluator
Task Management — Real-World Systems
Difficulty: Intermediate
State machine for task lifecycle, observers for notifications, commands for undo. Three patterns that handle the three hardest parts of any project tracker.
Design Patterns for Task Management
- Observer Pattern
- State Pattern
- Command Pattern
Key Abstractions in Task Management
- TaskBoard: Orchestrator that manages tasks, users, sprints, and coordinates notifications
- Task: Entity with state machine transitions: TODO to IN_PROGRESS to DONE
- User: Can be assignee or reporter, receives notifications on subscribed tasks
- Sprint: Time-boxed container grouping tasks for a development iteration
- TaskObserver: Interface for receiving notifications when task state changes
Key Points for Task Management
- Task state transitions are explicit: TODO -> IN_PROGRESS -> DONE, with DONE -> TODO for reopening.
- Observer pattern decouples notification logic from task state changes. Add email, Slack, or webhooks without touching Task.
- Command pattern wraps state changes so you can undo them. Undo is just reversing the last command.
- Sprints are containers, not owners. A task can move between sprints without losing its history.
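The transition table plus a command that remembers the previous state (the third mistake below) fit in a short sketch; `MoveCommand` is an illustrative name.

```python
from enum import Enum

class Status(Enum):
    TODO = 1
    IN_PROGRESS = 2
    DONE = 3

# Explicit transitions: TODO -> IN_PROGRESS -> DONE, DONE -> TODO to reopen.
ALLOWED = {
    Status.TODO: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.DONE},
    Status.DONE: {Status.TODO},
}

class Task:
    def __init__(self):
        self.status = Status.TODO

    def move(self, target):
        if target not in ALLOWED[self.status]:
            raise ValueError(f"illegal: {self.status.name} -> {target.name}")
        self.status = target

class MoveCommand:
    """Stores the previous state -- without it, undo can't revert."""
    def __init__(self, task, target):
        self.task, self.target, self.prev = task, target, None

    def execute(self):
        self.prev = self.task.status
        self.task.move(self.target)

    def undo(self):
        self.task.status = self.prev

task = Task()
cmd = MoveCommand(task, Status.IN_PROGRESS)
cmd.execute()
```

Undoing is just restoring `prev`; a TaskBoard would keep these commands on a stack.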
Common Mistakes with Task Management
- Allowing arbitrary state transitions. A task jumping from TODO directly to DONE skips important workflow steps.
- Notifying inside the Task class. That couples Task to every notification channel you will ever support.
- Not storing the previous state in commands. Without it, undo has no idea what to revert to.
- Making Sprint responsible for task state. Sprint groups tasks by time. State transitions are the task's own business.
Related Designs
Notification System, Stack Overflow
Text Editor — Real-World Systems
Difficulty: Intermediate
Undo and redo through state snapshots and command objects. The Memento saves what the document looked like, the Command knows how to get there and back.
Design Patterns for Text Editor
- Memento Pattern
- Command Pattern
Key Abstractions in Text Editor
- Editor: Originator that creates and restores state snapshots on every operation
- EditorState: Immutable memento capturing text content and cursor position at a point in time
- TextBuffer: Underlying text storage with insert and delete at arbitrary positions
- Command: Interface for executable and undoable editor operations like type and delete
- History: Caretaker managing undo and redo stacks of EditorState snapshots
Key Points for Text Editor
- Memento captures state, Command captures action. Together they give you full undo/redo.
- EditorState is immutable. Once created, it never changes. This makes the undo stack reliable.
- History limits stack depth to prevent unbounded memory growth in long editing sessions.
- Commands store just enough to reverse themselves. DeleteCommand saves the deleted text.
Common Mistakes with Text Editor
- Storing mutable references in the memento. If the editor state changes, your snapshot changes too.
- Unbounded undo history. A user editing for hours will eat all your memory without a cap.
- Coupling the memento to the editor's internal structure. The memento should be opaque to everyone except the originator.
- Forgetting to clear the redo stack when a new action happens after an undo.
Related Designs
Task Management, Chat Room
Thread Pool Executor — Foundational
Difficulty: Advanced
A configurable thread pool that accepts tasks, queues overflow, and rejects when full. Core pool, max pool, bounded queue, and rejection policies built from scratch.
Design Patterns for Thread Pool Executor
- Object Pool Pattern
- Command Pattern
- Strategy Pattern
- Observer Pattern
Key Abstractions in Thread Pool Executor
- ThreadPool: Manages worker threads, task submission, and shutdown lifecycle
- Worker: Persistent thread that pulls tasks from the queue and executes them
- Task: Command wrapping a submitted unit of work together with its Future
- Future: Caller-side handle to retrieve a Task result or exception asynchronously
- RejectionPolicy: Strategy for handling tasks when both pool and queue are full: abort, discard, caller-runs
- TaskListener: Observer notified on task completion or failure callbacks
Key Points for Thread Pool Executor
- Core pool threads stay alive even when idle. Max pool threads are created only when the queue is full and torn down when demand drops. Getting that split right is essential for balancing resource use and throughput.
- A bounded queue puts a hard cap on how many tasks can pile up in memory. An unbounded queue sounds convenient until it eats all your heap during a traffic spike.
- Rejection policies are the Strategy pattern in action. When the pool and queue are both full, AbortPolicy throws, DiscardPolicy drops silently, CallerRunsPolicy forces the submitting thread to run the task itself and naturally slows down producers.
- Worker threads block on the queue when it is empty using a Condition variable. No busy-waiting, no polling loops, no wasted CPU cycles.
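The blocking-queue mechanics above can be sketched with a condition variable. This is a deliberately stripped-down Python sketch (names like `BoundedTaskQueue` are assumptions for illustration); a bounded list plus `threading.Condition` shows the no-busy-wait take path, the rejection signal, and the poison-pill shutdown:

```python
import threading

class BoundedTaskQueue:
    """Bounded queue: workers block on a Condition when empty (no busy-waiting)."""
    def __init__(self, capacity):
        self._items = []
        self._capacity = capacity
        self._cond = threading.Condition()

    def try_put(self, task):
        with self._cond:
            if len(self._items) >= self._capacity:
                return False            # full: caller applies its RejectionPolicy
            self._items.append(task)
            self._cond.notify()         # wake one sleeping worker
            return True

    def take(self):
        with self._cond:
            while not self._items:      # loop guards against spurious wakeups
                self._cond.wait()
            return self._items.pop(0)

def worker(queue, results):
    while True:
        task = queue.take()
        if task is None:                # poison pill: orderly shutdown
            return
        try:
            results.append(task())
        except Exception:
            pass                        # a failing task must not kill the worker

# Usage: one worker drains the queue, then stops on the poison pill.
q, results = BoundedTaskQueue(capacity=4), []
t = threading.Thread(target=worker, args=(q, results))
t.start()
q.try_put(lambda: 21 * 2)
q.try_put(None)
t.join()
```

Note the `try/except` inside the worker loop: without it, one bad task silently kills the thread and the pool shrinks, exactly the mistake called out below.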
Common Mistakes with Thread Pool Executor
- Using an unbounded queue: tasks pile up during bursts and the JVM runs out of memory before you ever notice the overload
- Busy-waiting on an empty queue instead of blocking with a Condition: wastes CPU and defeats the purpose of pooling
- Not handling task exceptions inside workers: an unhandled exception kills the worker thread silently and the pool slowly shrinks to zero
- Creating threads without enforcing max pool size: under sustained load the system spawns thousands of threads and context-switching tanks throughput
Related Designs
Connection Pool, Workflow Engine, Rate Limiter
Tic Tac Toe — Games & Simulations
Difficulty: Starter
O(1) win detection using row/column/diagonal counters instead of scanning the board. Works for any NxN size.
Design Patterns for Tic Tac Toe
- State Pattern
- Strategy Pattern
- Observer Pattern
Key Abstractions in Tic Tac Toe
- Game: Orchestrator that manages turns and delegates to board and win checker
- Board: Grid state with placement validation and cell ownership
- Player: Participant with a symbol (X/O) and an optional move strategy
- WinChecker: O(1) win detection using row/col/diagonal accumulators
- MoveStrategy: Strategy for AI players: random, minimax, or human input
- GameState: Enum tracking IN_PROGRESS, X_WINS, O_WINS, DRAW
Key Points for Tic Tac Toe
- O(1) win detection by incrementing counters per row, column, and diagonal instead of scanning the board
- State pattern manages game lifecycle with clean transitions between IN_PROGRESS, WIN, and DRAW
- Strategy pattern for player moves. Same interface for human input and AI algorithms.
- Board is decoupled from win logic so you can swap win conditions without touching placement code
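The counter trick is easiest to see in code. A minimal sketch of the `WinChecker` abstraction above: each move bumps at most four counters, and a win is detected the instant any counter reaches N, with no board scan:

```python
from collections import defaultdict

class WinChecker:
    """O(1) win detection via per-player row/column/diagonal counters."""
    def __init__(self, n):
        self.n = n
        self.rows = defaultdict(int)   # (player, row) -> marks in that row
        self.cols = defaultdict(int)   # (player, col) -> marks in that column
        self.diag = defaultdict(int)   # player -> marks on the main diagonal
        self.anti = defaultdict(int)   # player -> marks on the anti-diagonal

    def record(self, player, r, c):
        """Count one move; return True if it completes a line. O(1) per move."""
        self.rows[(player, r)] += 1
        self.cols[(player, c)] += 1
        if r == c:
            self.diag[player] += 1
        if r + c == self.n - 1:
            self.anti[player] += 1
        return (self.rows[(player, r)] == self.n
                or self.cols[(player, c)] == self.n
                or self.diag[player] == self.n
                or self.anti[player] == self.n)
```

Because the board size is just `n`, the same checker handles 3x3 and 19x19 alike.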
Common Mistakes with Tic Tac Toe
- Scanning the entire board after each move to check for a win. O(n^2) when O(1) is right there.
- Using strings for player symbols instead of enums. Typo bugs like 'x' vs 'X' will bite you.
- Hardcoding 3x3. A general NxN solution is barely more code and shows design maturity.
- Putting game logic inside the Board class. Board should only manage cell state, not turns or wins.
Related Designs
Chess, Snake & Ladder
Traffic Signal — Real-World Systems
Difficulty: Intermediate
State pattern drives clean phase transitions across an intersection. Each signal knows only its own color. A controller coordinates them so conflicting directions never get green simultaneously.
Design Patterns for Traffic Signal
- State Pattern
- Observer Pattern
- Strategy Pattern
Key Abstractions in Traffic Signal
- TrafficController: Manages an intersection, coordinates signal phases so conflicting directions never overlap
- Signal: One direction's light. Holds current state and transitions based on timing config.
- SignalState: Enum: RED, YELLOW, GREEN. Each state knows its duration and valid next state.
- TimingStrategy: Strategy for per-phase durations — FixedTimingStrategy and RushHourTimingStrategy implementations
- Intersection: Groups multiple signals into a coordinated unit with named directions
- EmergencyObserver: Observer that preempts normal cycling when an emergency vehicle approaches
Key Points for Traffic Signal
- State pattern eliminates if/else chains for phase transitions. Each state knows its successor, so invalid transitions are structurally impossible.
- Signals are passive. They don't decide when to change. The controller tells them. This prevents two conflicting directions from going green.
- Timing config as a separate object means rush hour, nighttime, and adaptive schedules are just data swaps, not code changes.
- Observer for emergency vehicles keeps priority override decoupled from normal signal logic.
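A compact sketch of the passive-signal idea, using the `SignalState`, `FixedTimingStrategy`, and `Signal` names above (durations are made-up example values). Each state has exactly one successor, so an illegal jump like GREEN to RED simply cannot be expressed:

```python
from enum import Enum

class SignalState(Enum):
    GREEN = "GREEN"
    YELLOW = "YELLOW"
    RED = "RED"

# Each state knows its only legal successor; invalid jumps are unrepresentable.
NEXT = {
    SignalState.GREEN: SignalState.YELLOW,
    SignalState.YELLOW: SignalState.RED,
    SignalState.RED: SignalState.GREEN,
}

class FixedTimingStrategy:
    """Per-phase durations in seconds (example numbers, swappable as data)."""
    DURATIONS = {SignalState.GREEN: 30, SignalState.YELLOW: 5, SignalState.RED: 35}

    def duration(self, state):
        return self.DURATIONS[state]

class Signal:
    """Passive: holds state but only advances when the controller tells it to."""
    def __init__(self, timing, state=SignalState.RED):
        self.timing = timing
        self.state = state

    def advance(self):
        self.state = NEXT[self.state]
        return self.timing.duration(self.state)   # how long to hold this phase

# Usage: the controller, not the signal, decides when advance() is called.
sig = Signal(FixedTimingStrategy())
green_time = sig.advance()          # RED -> GREEN
```

Swapping in a `RushHourTimingStrategy` is a data change in `DURATIONS`, never an edit to `Signal`.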
Common Mistakes with Traffic Signal
- Letting signals self-transition without a controller. Two perpendicular signals can both go green, causing a simulated collision.
- Using string comparisons for signal states instead of enums. 'GRREN' won't cause a compile error, but it will cause a runtime bug.
- Hardcoding timing durations inside the signal. Now changing rush hour timing means editing signal internals.
- Forgetting the yellow phase. Jumping straight from green to red is unrealistic and breaks any simulation that models driver behavior.
Related Designs
Elevator System, Parking Lot
Typeahead Autocomplete — Platform Scale
Difficulty: Intermediate
Trie with Flyweight shared prefix nodes, swappable ranking strategies, and Decorator layers for result post-processing. Where data structures meet design patterns.
Design Patterns for Typeahead Autocomplete
- Flyweight Pattern
- Strategy Pattern
- Decorator Pattern
- Factory Pattern
- Observer Pattern
Key Abstractions in Typeahead Autocomplete
- TrieNode: Flyweight: shared prefix nodes with intrinsic char, extrinsic frequency
- Trie: Manages the node tree, insert/search operations
- RankingStrategy: Interface: FrequencyRanker, RecencyRanker, TrendingRanker
- ResultDecorator: Wraps results: SpellCheckDecorator, PersonalizationDecorator, DedupDecorator
- SuggestionProvider: Factory creates the right provider per context
- SearchListener: Observer: tracks user selections to feed back into ranking
Key Points for Typeahead Autocomplete
- Flyweight: Trie nodes share prefix paths. 'apple' and 'application' share 4 nodes. The character (intrinsic state) is shared; frequency per context (extrinsic) is external.
- Strategy: the ranking algorithm is hot-swappable. Search bar uses frequency, code editor uses recency.
- Decorator: post-processing layers compose. Each wraps the result list without modifying the core search.
- Observer: when a user clicks a suggestion, that feeds back as a frequency boost signal.
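The Flyweight sharing and frequency-ranking points can be sketched together. A minimal Python trie under the `TrieNode`/`Trie` names above (the ranking here is hardcoded to frequency; a real `RankingStrategy` would be injected):

```python
class TrieNode:
    __slots__ = ("children", "freq")   # shared structure; freq marks word ends

    def __init__(self):
        self.children = {}
        self.freq = 0                  # >0 means a complete word ends here

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word, freq=1):
        node = self.root
        for ch in word:                # 'apple'/'application' share 'appl' nodes
            node = node.children.setdefault(ch, TrieNode())
        node.freq += freq

    def suggest(self, prefix, k=5):
        node = self.root
        for ch in prefix:              # walk to the prefix node, if it exists
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        def dfs(n, path):
            if n.freq:
                out.append((path, n.freq))
            for ch, child in n.children.items():
                dfs(child, path + ch)
        dfs(node, prefix)
        out.sort(key=lambda pair: -pair[1])   # frequency ranking "strategy"
        return [word for word, _ in out[:k]]

# Usage: shared prefixes cost nothing extra per word.
trie = Trie()
trie.insert("apple", freq=5)
trie.insert("application", freq=3)
trie.insert("banana", freq=2)
```

A production version would cap the DFS with per-node top-k caches rather than collecting every completion, per the "limit result count early" mistake below.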
Common Mistakes with Typeahead Autocomplete
- Storing full strings at every node instead of sharing prefixes. That defeats Flyweight and wastes memory.
- Hardcoding ranking logic inside the Trie. Violates SRP and can't adapt to different contexts.
- Rebuilding the entire result list through all decorators on every keystroke. You need caching or debounce.
- Not limiting result count early. Searching millions of completions and truncating at the end is wasteful.
Related Designs
Search Index Engine, LRU Cache, Rate Limiter
Cross-Platform UI Toolkit — Foundational
Difficulty: Intermediate
One factory per platform, each producing a consistent family of widgets. Abstract Factory guarantees you never mix a Windows button with a Mac checkbox.
Design Patterns for Cross-Platform UI Toolkit
- Abstract Factory Pattern
- Prototype Pattern
Key Abstractions in Cross-Platform UI Toolkit
- UIFactory: Abstract factory defining create methods for each widget type in the family
- Widget: Common interface for all UI components with render() and clone() methods
- WindowsFactory / MacFactory / WebFactory: Concrete factories producing platform-consistent widget families
- WidgetRegistry: Prototype registry storing pre-configured widget templates for cloning
Key Points for Cross-Platform UI Toolkit
- Abstract Factory ensures platform consistency. One factory, one platform, no mixing.
- Adding a new platform means one new factory class. Existing code stays untouched.
- Prototype avoids repeating complex widget configuration. Clone a template and customize.
- Client code depends on UIFactory and Widget interfaces, never on platform-specific classes.
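The one-factory-one-platform guarantee looks like this in code. A minimal sketch with a single concrete family (`MacFactory`; the Windows and Web factories would be parallel classes), where the client function touches only the abstract interfaces:

```python
from abc import ABC, abstractmethod

class Button(ABC):
    @abstractmethod
    def render(self): ...

class Checkbox(ABC):
    @abstractmethod
    def render(self): ...

class UIFactory(ABC):
    """Abstract factory: one create method per widget type in the family."""
    @abstractmethod
    def create_button(self): ...
    @abstractmethod
    def create_checkbox(self): ...

class MacButton(Button):
    def render(self): return "mac button"

class MacCheckbox(Checkbox):
    def render(self): return "mac checkbox"

class MacFactory(UIFactory):
    def create_button(self): return MacButton()
    def create_checkbox(self): return MacCheckbox()

def build_form(factory: UIFactory):
    # Client code never names a platform class, so widget families can't mix.
    return [factory.create_button().render(), factory.create_checkbox().render()]
```

Because `build_form` receives one factory, every widget it creates belongs to the same platform; mixing would require passing two factories, which the design makes deliberately awkward.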
Common Mistakes with Cross-Platform UI Toolkit
- Using if-else on a platform string to create widgets. That spreads platform logic across every creation point.
- Adding a new widget type to the factory interface without updating all concrete factories. All factories must implement the full product family.
- Making prototypes mutable after cloning. The registry's template should not change when a clone is modified.
- Confusing Abstract Factory with Factory Method. Abstract Factory creates families of related objects. Factory Method creates one object and lets subclasses decide the type.
Related Designs
Expression Evaluator, In-Memory File System
Undo/Redo Manager — Foundational
Difficulty: Intermediate
Two stacks and a Command interface. Every user action becomes an object that knows how to do and undo itself — the rest is just stack discipline.
Design Patterns for Undo/Redo Manager
- Command Pattern
- Memento Pattern
- Composite Pattern
Key Abstractions in Undo/Redo Manager
- Command: Interface with execute and undo. Every user action is a command object.
- HistoryManager: Two stacks (undo/redo), capacity cap, and a map of named checkpoints
- CompositeCommand: Macro: a list of commands that execute and undo atomically
- DocumentMemento: Snapshot of document state used by named checkpoints
- RestoreCommand: Swaps the document to a memento's content; its undo swaps back — so restore is itself undoable
Key Points for Undo/Redo Manager
- Two stacks, not one. Executing a new command clears the redo stack — branching invalidates the future.
- Commands capture everything needed to redo in their constructor. No hidden dependence on current state.
- Composite commands let macros (replace-all, multi-edit) undo as one click.
- Capacity is bounded with a deque that evicts from the bottom, so long sessions don't blow memory.
- Named checkpoints use the Memento pattern. restoreTo runs through execute(), so restoring to v1 can itself be undone.
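The two-stack discipline can be sketched with one concrete command. A minimal Python sketch, assuming a toy `Document`/`InsertCommand` pair (the real `Command` interface would cover deletes, formatting, and so on); the key lines are the `_redo.clear()` on every new command and the undo that stores its own inverse data:

```python
class Document:
    def __init__(self):
        self.content = ""

class InsertCommand:
    """Command stores everything needed to do and undo itself."""
    def __init__(self, doc, pos, text):
        self.doc, self.pos, self.text = doc, pos, text

    def execute(self):
        c = self.doc.content
        self.doc.content = c[:self.pos] + self.text + c[self.pos:]

    def undo(self):
        c = self.doc.content
        self.doc.content = c[:self.pos] + c[self.pos + len(self.text):]

class HistoryManager:
    def __init__(self):
        self._undo, self._redo = [], []

    def run(self, cmd):
        cmd.execute()
        self._undo.append(cmd)
        self._redo.clear()          # a new command invalidates the old future

    def undo(self):
        if self._undo:
            cmd = self._undo.pop()
            cmd.undo()
            self._redo.append(cmd)

    def redo(self):
        if self._redo:
            cmd = self._redo.pop()
            cmd.execute()
            self._undo.append(cmd)

# Usage: do, undo, redo -- pure stack discipline.
doc, hist = Document(), HistoryManager()
hist.run(InsertCommand(doc, 0, "hello"))
hist.run(InsertCommand(doc, 5, " world"))
hist.undo()                         # back to "hello"
hist.redo()                         # forward to "hello world"
```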
Common Mistakes with Undo/Redo Manager
- Keeping the redo stack alive after a new command. Now 'redo' replays a command from an obsolete branch.
- Storing before/after snapshots for every command. Two 10MB snapshots per command times 1000 commands eats 20GB. Compute inverses or diff.
- Mutating the document inside undo() without restoring side effects — cursor position, scroll, selection.
- Letting commands hold references to stale UI state. The command must hold data, not widgets.
Related Designs
Text Editor, Spreadsheet Engine, Workflow Engine
URL Shortener — Foundational
Difficulty: Starter
Base62 encode a counter and store the mapping. Reads dominate writes 100:1, so optimize your lookup path. The strategy pattern lets you swap encoding schemes without rewiring anything.
Design Patterns for URL Shortener
- Strategy Pattern
- Factory Pattern
Key Abstractions in URL Shortener
- URLShortener: Orchestrator that coordinates code generation, storage, and analytics
- CodeGenerator: Strategy interface for generating short codes: base62, hash-based, or random
- URLStore: Persistence interface for storing and retrieving URL mappings
- Analytics: Tracks click counts, timestamps, and referrer data per short URL
Key Points for URL Shortener
- Base62 encoding gives you [a-zA-Z0-9] characters. A 7-character code supports 62^7 = 3.5 trillion unique URLs.
- Counter-based generation guarantees uniqueness without collision checks. Hash-based needs collision handling.
- Read-to-write ratio is roughly 100:1. Cache the short-to-long mapping aggressively.
- Strategy pattern on CodeGenerator means you can swap base62 for MD5-based hashing without touching the shortener.
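The counter-plus-base62 core fits in a dozen lines. A minimal sketch using the `URLShortener` abstraction above, with an in-memory dict standing in for `URLStore`:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_base62(n):
    """Encode a non-negative counter value as a base62 string."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

class URLShortener:
    def __init__(self):
        self._counter = 0
        self._store = {}        # code -> long URL: this is the hot read path

    def shorten(self, url):
        self._counter += 1      # counter guarantees uniqueness, no collision checks
        code = encode_base62(self._counter)
        self._store[code] = url
        return code

    def resolve(self, code):
        return self._store.get(code)

# Usage: shorten once, resolve many times (the 100:1 read-heavy path).
s = URLShortener()
code = s.shorten("https://example.com/a-very-long-path")
```

Swapping `encode_base62` for a hash-based generator only changes `shorten`; that is the Strategy seam the key point describes.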
Common Mistakes with URL Shortener
- Using a full MD5/SHA hash and truncating it. Truncation dramatically increases collision probability.
- Not handling the same long URL submitted twice. Decide upfront: same short code or different ones.
- Skipping input validation. Malformed URLs, empty strings, and excessively long inputs all need guardrails.
- Ignoring the read path. If you only optimize writes, your redirect latency will suffer under real traffic.
Related Designs
HashMap, Rate Limiter
Vending Machine — Games & Simulations
Difficulty: Intermediate
State pattern turns a mess of if-else chains into clean, self-contained state objects. Each state knows what it can do and when to hand off to the next one. The machine just delegates.
Design Patterns for Vending Machine
- State Pattern
- Strategy Pattern
Key Abstractions in Vending Machine
- VendingMachine: Context object that delegates all behavior to the current state
- State: Interface with methods for each user action: insertMoney, selectProduct, dispense
- Inventory: Manages product stock, pricing, and availability checks
- PaymentProcessor: Handles coin/note acceptance and change calculation using a greedy algorithm
Key Points for Vending Machine
- State pattern eliminates nested conditionals. Each state class handles only the actions valid for that state.
- Transitions are explicit. IdleState moves to HasMoneyState on coin insert. No ambiguity.
- Greedy coin change works because standard denominations form a canonical system. With arbitrary denominations (say 1, 3, 4), greedy makes 6 as 4+1+1 instead of 3+3; non-canonical systems need dynamic programming.
- Inventory is separate from the state machine. Stock management and state transitions are independent concerns.
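The greedy change calculation is short enough to show whole. A minimal sketch (US-style denominations as the default; the canonical-system caveat from the key point above applies):

```python
def make_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy change: correct only for canonical coin systems like these."""
    change = {}
    for coin in denominations:          # largest coin first
        count, amount = divmod(amount, coin)
        if count:
            change[coin] = count
    if amount:
        raise ValueError("cannot make exact change")
    return change
```

This is the piece `PaymentProcessor` runs on cancel as well as on purchase: money in the machine must always come back out in some combination of coins.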
Common Mistakes with Vending Machine
- Putting all state logic in one giant switch statement. Adding a new state means touching every case block.
- Forgetting to return change when the user cancels. Money in the machine must always be accounted for.
- Not checking stock before accepting money. The user inserts coins for an out-of-stock item and gets frustrated.
- Allowing negative inventory. Every dispense must check stock first, then decrement.
Related Designs
ATM, Payment System
Video Conferencing — Real-World Systems
Difficulty: Advanced
Rooms, participants, and media streams. The conferencing model is simple; the hard part is the SFU decision — who forwards whose bits to whom.
Design Patterns for Video Conferencing
- Observer Pattern
- Strategy Pattern
- State Pattern
Key Abstractions in Video Conferencing
- Room: The live session. Owns participants, media router, chat, role management, and status.
- Participant: One attendee. Tracks audio/video/screen state, role (host, cohost, attendee).
- MediaRouter: SFU-style forwarder. Decides which streams each participant receives.
- ChatLog: Append-only conversation log per room. Mutable only via send_message.
Key Points for Video Conferencing
- Separate signaling (who's in the room) from media (the actual packets). They scale differently.
- SFU (Selective Forwarding Unit) scales far better than MCU (Multipoint Control Unit) — no server-side transcoding.
- Each participant publishes their own track; the router decides subscriptions. Active-speaker logic runs on the router.
- Role state machine: attendee → cohost → host. Transitions are audited and broadcast.
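The forward-don't-transcode idea behind an SFU can be caricatured in a few lines. A toy Python sketch of the `MediaRouter` abstraction above, with lists standing in for real media tracks (everything about packets and subscriptions here is an illustrative assumption):

```python
class MediaRouter:
    """SFU-style sketch: the router forwards published packets, never transcodes."""
    def __init__(self):
        self._inboxes = {}              # participant id -> packets received

    def join(self, pid):
        self._inboxes[pid] = []

    def publish(self, sender, packet):
        # Each publisher uploads one copy; the SFU fans it out to subscribers.
        for pid, inbox in self._inboxes.items():
            if pid != sender:
                inbox.append((sender, packet))

    def received(self, pid):
        return list(self._inboxes[pid])

# Usage: three participants, one published frame, two deliveries.
router = MediaRouter()
for pid in ("alice", "bob", "carol"):
    router.join(pid)
router.publish("alice", "frame-1")
```

Contrast with mesh: here Alice uploads one stream regardless of room size, while in a mesh she would upload N-1 copies herself.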
Common Mistakes with Video Conferencing
- Coupling signaling and media into one class. They have totally different latency and throughput profiles.
- Mesh topology beyond 4 people. Every participant sends N-1 streams — bandwidth collapses at 8+ participants.
- No backpressure on slow clients. A client on 2G drags the whole room's bitrate down.
- Modeling host as a flag instead of a role. When the host leaves, who inherits? Role-as-state captures it.
Related Designs
Chat Room, Pub-Sub System, Notification System
Workflow Engine — Platform Scale
Difficulty: Advanced
Workflows are composite trees where each node is a command step. A state machine governs the lifecycle, memento checkpointing captures progress at step boundaries, and observers broadcast every transition. You get a pipeline engine that can nest, undo, pause, and recover from crashes.
Design Patterns for Workflow Engine
- Composite Pattern
- Command Pattern
- Chain of Responsibility
- State Pattern
- Memento Pattern
- Observer Pattern
Key Abstractions in Workflow Engine
- WorkflowStep: Command interface with execute() and undo() methods. Every step in a workflow implements this contract.
- SequentialWorkflow: Composite that contains child steps and executes them in order. Can nest other composites.
- ParallelWorkflow: Composite that contains child steps and executes them concurrently using a thread pool.
- WorkflowState: Enum representing lifecycle states: PENDING, RUNNING, PAUSED, COMPLETED, FAILED.
- WorkflowCheckpoint: Memento that captures the engine's progress at step boundaries for crash recovery.
- WorkflowListener: Observer interface notified on every state change, step completion, or failure event.
Key Points for Workflow Engine
- Composite pattern lets you nest arbitrarily. A SequentialWorkflow can contain ParallelWorkflows, which themselves contain leaf steps. The engine doesn't care about depth.
- Command pattern makes every step reversible. On failure partway through a sequence, the engine calls undo() on each completed step in reverse order for clean rollback.
- Memento checkpointing saves progress after each step completes. On crash recovery, the engine skips already-completed steps and resumes from the last checkpoint.
- The state machine enforces legal transitions: PENDING can move to RUNNING, RUNNING can move to PAUSED, COMPLETED, or FAILED, but COMPLETED can never move back to RUNNING.
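The composite-plus-rollback behavior is the heart of the engine, so here is a stripped-down sketch. `WorkflowStep` is simplified to a leaf with a `fail` flag (the real interface would carry actual work), and `SequentialWorkflow` undoes completed children in reverse order when one fails:

```python
class WorkflowStep:
    """Leaf command: executes or fails, and can undo itself."""
    def __init__(self, name, fail=False):
        self.name, self.fail = name, fail

    def execute(self, log):
        if self.fail:
            raise RuntimeError(f"{self.name} failed")
        log.append(("exec", self.name))

    def undo(self, log):
        log.append(("undo", self.name))

class SequentialWorkflow:
    """Composite: runs children in order; rolls back completed ones on failure."""
    def __init__(self, steps):
        self.steps = steps

    def execute(self, log):
        done = []
        for step in self.steps:
            try:
                step.execute(log)
                done.append(step)
            except RuntimeError:
                for completed in reversed(done):   # clean rollback, newest first
                    completed.undo(log)
                raise                               # surface failure to the parent

    def undo(self, log):
        for step in reversed(self.steps):
            step.undo(log)
```

Because `SequentialWorkflow` itself implements `execute`/`undo`, it nests inside other composites unchanged, which is the arbitrary-depth property the first key point describes.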
Common Mistakes with Workflow Engine
- Not handling partial failures in composite workflows. If step 3 of 5 fails, steps 1 and 2 must be rolled back or the system is left in an inconsistent state.
- Missing checkpoints between steps. Without checkpointing, a crash after step 4 of 10 means re-running all 10 steps from scratch.
- Allowing COMPLETED to RUNNING transition. Once a workflow finishes, re-running it should create a new execution instance, not mutate the completed one.
- Coupling step logic to orchestration. Steps should not know about the engine, other steps, or execution order. They execute their own logic and nothing more.
Related Designs
Task Management, Pub-Sub System