Reactor Pattern and the Event Loop
One thread, one loop. The loop asks the kernel which file descriptors are ready, fires the registered handlers for each, and goes back to the kernel. No threads block. This single design powers Node.js, Python asyncio, Netty, Tokio, libuv, nginx, and the Go netpoller. It is the answer to 'how do I serve 100K connections without 100K threads?'
The shape of the loop
loop:
    ready_fds = demultiplexer.wait()   # epoll_wait, kqueue, IOCP, io_uring
    for fd in ready_fds:
        handler = handlers[fd]
        handler(fd)                    # must not block
    goto loop
That is the entire pattern. Every async runtime in every language is a refinement of this loop. async/await is syntactic sugar that lets handlers be written as if they were straight-line code; under the hood, each await registers a callback and yields back to the loop.
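Concretely, in asyncio terms (a minimal sketch; the 10 ms timer stands in for real I/O completing):

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Pretend some I/O completes 10 ms from now: a loop timer fires set_result.
    loop.call_later(0.01, fut.set_result, b"response")

    # Callback style: explicitly hand the loop a continuation.
    fut.add_done_callback(lambda f: print("callback saw", f.result()))

    # await style: the same registration, written as straight-line code.
    # The coroutine suspends here and the loop is free to run other handlers.
    data = await fut
    print("await saw", data)

asyncio.run(main())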
Why one thread
The first instinct is to multithread the handlers. Don't. The reactor delivers two things by being single-threaded:
- No locks. Two handlers never run at once on the same loop, so loop-local state needs no synchronization. This eliminates a whole class of bugs.
- Cache locality. One thread, one core, hot caches for the loop's state machine.
The cost is that one CPU-bound handler stalls everything. Fix it by pushing CPU work to a separate pool, or by running one loop per core (share-nothing).
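A sketch of that escape hatch in asyncio; hash_password is a made-up stand-in for any CPU-heavy call:

import asyncio
import hashlib
from concurrent.futures import ProcessPoolExecutor

def hash_password(pw: bytes) -> bytes:
    # Deliberately CPU-heavy; run on the loop thread, this would stall every handler.
    return hashlib.pbkdf2_hmac("sha256", pw, b"salt", 1_000_000)

async def handle_login(pool: ProcessPoolExecutor, pw: bytes) -> bytes:
    loop = asyncio.get_running_loop()
    # The hashing runs in a worker process; the loop merely parks this
    # coroutine until the future resolves, so other connections keep moving.
    return await loop.run_in_executor(pool, hash_password, pw)

async def main():
    with ProcessPoolExecutor() as pool:
        digest = await handle_login(pool, b"hunter2")
        print(digest.hex())

if __name__ == "__main__":
    asyncio.run(main())

Passing None as the executor uses the default thread pool instead, which is the right choice for blocking I/O rather than CPU work (the GIL means threads don't help pure-Python CPU loops).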
Single-threaded inside the loop, not single-threaded overall
Most production reactors run one loop per core. Each loop owns its own connections. Across loops, multiple cores work in parallel; within a loop, things are serial. nginx, Tokio in multi-thread mode, and Netty all do this.
Why "colored functions" are a thing
Once a codebase commits to the reactor, every function in the call chain has to know how to yield to the loop. A sync function in the middle of an async stack cannot yield, so anything slow it does blocks the loop. Async functions can call async functions, sync functions can call sync functions, but the boundary between them needs an explicit bridge (run_in_executor, asyncio.to_thread, CompletableFuture.supplyAsync). This is what Bob Nystrom called the "colored function" problem; it's a direct consequence of the reactor design choice.
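What the bridge looks like in asyncio, assuming a sync HTTP client (requests) and a hypothetical endpoint:

import asyncio
import requests  # a sync HTTP client; calling it directly would block the loop

async def fetch_profile(user_id: int) -> dict:
    url = f"https://example.com/users/{user_id}"   # hypothetical endpoint
    # Explicit bridge: run the sync call on a worker thread and park this
    # coroutine until it finishes. The loop keeps serving other handlers.
    resp = await asyncio.to_thread(requests.get, url, timeout=5)
    return resp.json()

asyncio.to_thread was added in Python 3.9; on older versions the equivalent spelling is loop.run_in_executor(None, functools.partial(requests.get, url, timeout=5)).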
Go avoided the problem by hiding the loop inside the runtime: every blocking call is implicitly async (parks the goroutine, frees the M), so all functions are the same color.
When this is the right pattern
- Many connections, mostly idle. Chat, gateways, dashboards, real-time feeds.
- Mostly I/O-bound work. Each handler is short and waits on the network or disk.
- Memory matters: 100K coroutines fit in tens of MB; 100K threads need GB.
When it isn't
- Few connections, lots of CPU per request. ML inference, video transcoding, scientific compute. Use threads or processes; the loop will sit idle while one handler hogs a core.
- Sync libraries everywhere. If the DB driver, auth library, and HTTP client are all sync, every call needs an executor bridge. Sometimes the right answer is "use Java virtual threads or Go and skip the reactor entirely."
- A team that has never written async code. The mental model is different; the bugs are different (loop stalls, missing awaits, swallowed exceptions). It pays to be sure the throughput win matters before paying the learning cost.
What every reactor handles
A real reactor adds the following on top of the basic loop:
- Timers: call_later(delay, fn). Internally a min-heap keyed by deadline; the loop computes its next epoll timeout from the head (see the sketch after this list).
- Cancellation: marking a coroutine as cancelled, raising at the next yield point.
- Synchronization primitives: async Lock, Semaphore, Event, Queue. None of them block the loop; they park the coroutine on a wait list.
- Error handling: unawaited tasks log a warning; raised exceptions in tasks bubble up at await time.
- Thread-safe scheduling: loop.call_soon_threadsafe(fn) lets non-loop threads inject callbacks safely.
Skip these and the reactor handles only the toy case. Use a real one (asyncio, Tokio, Netty, libuv) for everything else.
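To see how the pieces fit, here is a toy reactor with timers (a sketch, not production code): Python's selectors module as the demultiplexer plus a heapq of deadlines, essentially the structure asyncio's selector event loop uses internally.

import heapq
import itertools
import selectors
import time

selector = selectors.DefaultSelector()   # epoll on Linux, kqueue on BSD/macOS
timers = []                              # min-heap of (deadline, seq, callback)
_seq = itertools.count()                 # tie-breaker so equal deadlines never compare callbacks

def call_later(delay, fn):
    heapq.heappush(timers, (time.monotonic() + delay, next(_seq), fn))

def run_forever():
    while True:
        # The next wait timeout comes straight off the head of the timer heap.
        timeout = max(0.0, timers[0][0] - time.monotonic()) if timers else None
        for key, _mask in selector.select(timeout):
            key.data(key.fileobj)        # the handler registered for this fd; must not block
        # Fire every timer whose deadline has passed.
        now = time.monotonic()
        while timers and timers[0][0] <= now:
            _, _, fn = heapq.heappop(timers)
            fn()

Register a socket with selector.register(sock, selectors.EVENT_READ, handler) and this one thread serves both I/O readiness and timeouts; cancellation, async Lock/Queue, and call_soon_threadsafe are bookkeeping layered on this skeleton.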
Primitives (Java)
- java.nio.channels.Selector (the original NIO reactor)
- Netty EventLoop / EventLoopGroup (production-grade reactor)
- Project Reactor / RxJava (reactive streams on top)
Implementations
Selector is Java's wrapper around epoll/kqueue. Register channels with the operations of interest (OP_ACCEPT, OP_READ, OP_WRITE), then loop on selector.select(). Netty wraps this with much better ergonomics, pooling, and pipeline composition; it's the actual production choice.
import java.nio.*;
import java.nio.channels.*;
import java.net.*;
import java.util.*;

ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(8080));
server.configureBlocking(false);

Selector selector = Selector.open();
server.register(selector, SelectionKey.OP_ACCEPT);

ByteBuffer buf = ByteBuffer.allocate(4096);
while (true) {
    selector.select();                              // block in epoll_wait
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey k = it.next();
        it.remove();                                // selected keys are not cleared automatically
        if (k.isAcceptable()) {
            SocketChannel c = server.accept();
            c.configureBlocking(false);             // handlers must never block
            c.register(selector, SelectionKey.OP_READ);
        } else if (k.isReadable()) {
            SocketChannel c = (SocketChannel) k.channel();
            buf.clear();
            int n = c.read(buf);
            if (n <= 0) { c.close(); continue; }    // -1 = peer closed (0 treated the same for brevity)
            buf.flip();
            c.write(buf);                           // echo; real code handles partial writes via OP_WRITE
        }
    }
}
Key points
- Three pieces: a demultiplexer (epoll, kqueue, IOCP, io_uring), a table of handlers keyed by FD, and a loop that drives them.
- Loop body: block in epoll_wait, get a list of ready FDs, run each FD's handler to completion, then loop again.
- Handlers must be non-blocking. Any blocking call (sync I/O, sleep, CPU-heavy work) freezes the entire loop and stalls every other connection.
- CPU-heavy work goes to a worker pool (Node.js worker_threads, Python's run_in_executor); the result comes back as a callback or future on the loop.
- Single-threaded by design. Removes data races inside the handler. Costs multi-core scaling unless multiple loops run (one per core, share-nothing).
- The "colored functions" problem comes from this pattern: handlers and the things they call must agree on the same async ABI (callback, promise, async/await).
Follow-up questions
- Why is the reactor single-threaded?
- What's a proactor and how is it different?
- Why don't goroutines have this problem?
- How is a reactor scaled across cores?
- Where does this break down?
Gotchas
- A blocking call inside a handler freezes the loop. This includes sleep(), input(), requests.get(), psycopg2 queries, and most non-async libraries.
- Forgetting to await an async function: it returns a coroutine object that does nothing (see the snippet after this list). Linters catch this; reviews don't always.
- The loop runs callbacks in the order they're scheduled, not the order their I/O completed. Never assume FIFO across handlers if work is queued via call_soon.
- Exceptions in handlers can crash the loop in some libraries; in asyncio they get logged and lost unless the task is awaited. Always check for unawaited tasks in production.
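The forgotten-await gotcha in code (do_work is a hypothetical handler):

import asyncio

async def do_work():
    await asyncio.sleep(0.1)

async def main():
    do_work()                              # BUG: builds a coroutine object and drops it;
                                           # Python warns "coroutine 'do_work' was never awaited"
    task = asyncio.create_task(do_work())  # correct: schedule it on the loop...
    await task                             # ...and surface its result or exception here

asyncio.run(main())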
Common pitfalls
- Using a reactor for CPU-bound services (ML inference, video). The loop will be idle while one handler hogs the CPU; use threads or processes.
- Mixing sync and async libraries without bridging. Calling a sync DB driver inside an asyncio handler stalls the loop. Use the async driver, or wrap the sync call in run_in_executor.
- Running too many loops on the same core. One reactor per core, no more; otherwise they fight for CPU and the scheduling overhead eats the win.
APIs worth memorising
- Python: asyncio.run, asyncio.get_event_loop, loop.run_in_executor
- Java: java.nio.channels.Selector, Netty EventLoopGroup
- C: epoll_create1, epoll_ctl, epoll_wait (Linux); kqueue, kevent (BSD); CreateIoCompletionPort (Windows)
In the wild
Nginx (worker_processes, each running one loop). Node.js. Python asyncio (FastAPI, aiohttp, asyncpg). Tokio (Rust). Netty (most JVM async networking). Vert.x. The Go netpoller. libuv. Redis (single-threaded loop). Memcached's async path. Almost every modern high-concurrency network service uses some flavor of this.