BlockingQueue Family
BlockingQueue is the Java interface for thread-safe producer-consumer queues. Pick by overflow behaviour and ordering: ArrayBlockingQueue (bounded), LinkedBlockingQueue (default unbounded, dangerous), PriorityBlockingQueue (ordered), SynchronousQueue (no buffer, hand-off), DelayQueue (scheduled), TransferQueue (sender knows when received).
What it is
BlockingQueue<T> is the Java interface for thread-safe queues that block when full or empty. It is the building block for almost every producer-consumer design in Java. The standard library ships several implementations, and the right choice depends on three questions: is bounding required, is ordering required, and is hand-off or buffering wanted.
The four operations
Each blocking queue implements four versions of put and take:
| Operation | Behaviour |
|---|---|
| add(t) / remove() | Throw on failure |
| offer(t) / poll() | Return false / null |
| put(t) / take() | Block forever |
| offer(t, timeout, unit) / poll(timeout, unit) | Block with timeout |
Pick based on what failure means. For mandatory work that must not be lost, put blocks. For optional work that can be dropped under load, offer returns false. For services with strict latency budgets, offer(timeout) gives explicit control.
The implementations
ArrayBlockingQueue. Bounded, fixed at construction. FIFO. Single lock. Predictable, simple, the default safe choice.
LinkedBlockingQueue. Optionally bounded. Two locks (head and tail), so one producer and one consumer can run without blocking each other. The catch: the no-argument constructor is unbounded. Always pass a capacity for production code.
SynchronousQueue. Zero capacity. A put does not return until a consumer is taking. Used by Executors.newCachedThreadPool so that work is handed off to a thread immediately or a new thread is spawned. Coordination primitive, not a buffer.
PriorityBlockingQueue. Ordered by Comparator or natural ordering. Unbounded. Used for prioritised job queues. Note the unboundedness: pair with a semaphore to limit in-flight work.
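A minimal sketch of that semaphore pairing (the BoundedPriorityQueue wrapper is illustrative, not a library class): each permit represents one allowed in-flight element, so producers block once the cap is reached even though the underlying queue never would.

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.Semaphore;

// Hypothetical wrapper: a Semaphore caps in-flight elements, restoring
// the backpressure that the unbounded PriorityBlockingQueue lacks.
class BoundedPriorityQueue<T> {
    private final PriorityBlockingQueue<T> queue = new PriorityBlockingQueue<>();
    private final Semaphore permits;

    BoundedPriorityQueue(int capacity) {
        this.permits = new Semaphore(capacity);
    }

    void put(T item) throws InterruptedException {
        permits.acquire(); // blocks once 'capacity' items are in flight
        queue.put(item);   // never blocks: the queue itself is unbounded
    }

    T take() throws InterruptedException {
        T item = queue.take();
        permits.release(); // free a slot for the next producer
        return item;
    }
}
```
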
DelayQueue. Holds Delayed elements until their delay expires. take blocks until the head element is due. Used for scheduled tasks, retry timers, expiring cache entries.
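Implementing Delayed is the fiddly part: getDelay must count down toward zero, and compareTo must order elements by remaining delay. A sketch along those lines (the RetryItem fields and constructor are assumptions for illustration):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Illustrative Delayed element: store the absolute due time, count down
// in getDelay, and order by remaining delay in compareTo.
class RetryItem implements Delayed {
    private final long dueAtNanos;

    RetryItem(long amount, TimeUnit unit) {
        this.dueAtNanos = System.nanoTime() + unit.toNanos(amount);
    }

    @Override public long getDelay(TimeUnit unit) {
        return unit.convert(dueAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                            other.getDelay(TimeUnit.NANOSECONDS));
    }
}
```

With an element type like this, take() on a DelayQueue returns nothing until the head item's delay has expired.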
TransferQueue / LinkedTransferQueue. Like LinkedBlockingQueue, but adds transfer: the producer can wait until the consumer actually takes the item. Useful for request-response style hand-off.
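A sketch of the transfer hand-off (the names here are illustrative): unlike put, transfer does not return until a consumer has actually taken the element.

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

TransferQueue<String> channel = new LinkedTransferQueue<>();

// Consumer: take() works exactly as on any BlockingQueue.
Thread consumer = new Thread(() -> {
    try {
        String msg = channel.take();
        System.out.println("handled " + msg);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
});
consumer.start();

// transfer() blocks until a consumer has taken the element, so the
// producer knows the request was received, not merely queued.
try {
    channel.transfer("request-1");
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
```
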
How it interacts with ExecutorService
ThreadPoolExecutor takes a BlockingQueue as its work queue. The choice has direct consequences:
- ArrayBlockingQueue (bounded): tasks queue up to capacity, then the rejection policy fires. Predictable backpressure.
- LinkedBlockingQueue (unbounded, the default in newFixedThreadPool): tasks queue forever, then OOM. Avoid for production.
- SynchronousQueue (zero capacity): forces thread creation per task up to maximumPoolSize. Used by newCachedThreadPool.
The lesson: the queue type often matters more than the executor's other parameters. Always specify it.
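Spelled out, a pool with an explicit bounded queue might look like this (the pool sizes, queue capacity, and the CallerRunsPolicy choice are illustrative, not prescriptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// 4 core threads, 8 max, a queue of 100. Once 8 threads are busy and
// 100 tasks are queued, CallerRunsPolicy makes the submitting thread run
// the task itself: backpressure instead of OOM or silent drops.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    4, 8,
    60L, TimeUnit.SECONDS,
    new ArrayBlockingQueue<>(100),
    new ThreadPoolExecutor.CallerRunsPolicy()
);
```
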
When to roll a custom one
Almost never. The standard library covers the cases that come up, and a hand-rolled queue is where subtle bugs live. The exceptions:
- A special data structure is needed (sharded queue, lock-free MPSC). Use JCTools or the LMAX Disruptor.
- Batched dequeue (drain N at once) is needed. Use BlockingQueue.drainTo(collection, n) first; it is built in.
- Wait-free guarantees are required. The standard implementations are lock-based; lock-free queues are a different conversation (see Lock-Free Programming).
Primitives by language
- ArrayBlockingQueue (bounded, FIFO, single lock)
- LinkedBlockingQueue (optionally bounded, FIFO, two locks)
- SynchronousQueue (zero-capacity, direct hand-off)
- PriorityBlockingQueue (ordered by Comparator, unbounded)
- DelayQueue (timed elements)
- TransferQueue / LinkedTransferQueue (sender awaits receive)
Implementation
The bounded queue provides backpressure for free. When full, the producer blocks; when empty, the consumer blocks. No mutex code, no condition variable, no off-by-one mistakes.
```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

BlockingQueue<Task> queue = new ArrayBlockingQueue<>(1000);

// Producer
Thread producer = new Thread(() -> {
    while (true) {
        Task t = nextTask();
        try {
            queue.put(t); // blocks if queue is full
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
});

// Consumer
Thread consumer = new Thread(() -> {
    while (true) {
        try {
            Task t = queue.take(); // blocks if queue is empty
            process(t);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
});

producer.start();
consumer.start();
```

Each operation has four flavours: throw on failure, return a special value, block, or time out. Learn which one each use case wants. The most common production choice is offer/poll with a timeout.
```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

BlockingQueue<Task> q = new ArrayBlockingQueue<>(100);

// Throws on failure
q.add(t);    // throws IllegalStateException if full
q.remove();  // throws NoSuchElementException if empty

// Returns special value
q.offer(t);             // returns false if full
Task popped = q.poll(); // returns null if empty

// Blocks
q.put(t);              // blocks until space
Task taken = q.take(); // blocks until element

// Times out
q.offer(t, 100, TimeUnit.MILLISECONDS); // returns false on timeout
q.poll(100, TimeUnit.MILLISECONDS);     // returns null on timeout
```

Capacity zero. The producer cannot leave the value behind; it has to wait for a consumer to take it directly. Useful for thread pools that should never queue work (Executors.newCachedThreadPool uses this). Pure coordination, no buffering.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

SynchronousQueue<Runnable> handoff = new SynchronousQueue<>();

// Producer waits for consumer
handoff.put(() -> work()); // blocks until take()

// Consumer waits for producer
Runnable r = handoff.take(); // blocks until put()

// Used by ThreadPoolExecutor for "create a thread per task" behaviour:
ExecutorService cached = new ThreadPoolExecutor(
    0, Integer.MAX_VALUE,
    60L, TimeUnit.SECONDS,
    new SynchronousQueue<>() // never queues
);
```

PriorityBlockingQueue orders by Comparator (or natural ordering). It is unbounded, so handle backpressure separately. DelayQueue holds elements until their delay expires; useful for retry queues, expiring caches, scheduled tasks.
```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;

record Job(int priority, String name) implements Comparable<Job> {
    @Override public int compareTo(Job o) { return Integer.compare(priority, o.priority); }
}

PriorityBlockingQueue<Job> jobs = new PriorityBlockingQueue<>();
jobs.put(new Job(3, "low"));
jobs.put(new Job(1, "high"));
Job next = jobs.take(); // returns Job(1, "high")

// Delayed retries:
class RetryItem implements Delayed { /* ... */ }
DelayQueue<RetryItem> retries = new DelayQueue<>();
retries.put(new RetryItem(5, TimeUnit.SECONDS));
RetryItem due = retries.take(); // blocks until 5 seconds pass
```

Key points
- put blocks if full, take blocks if empty. offer/poll have non-blocking and timed variants.
- ArrayBlockingQueue is the default safe choice for production: bounded, simple, predictable.
- LinkedBlockingQueue defaults to UNBOUNDED. Always pass a capacity for production.
- SynchronousQueue has zero buffer: the producer waits for a consumer to take directly. Used by Executors.newCachedThreadPool.
- All implementations are fully thread-safe: put/take/poll/offer can be called from any thread.
Follow-up questions
- Why is LinkedBlockingQueue dangerous by default?
- ArrayBlockingQueue vs LinkedBlockingQueue: which is faster?
- When to use SynchronousQueue?
- What is TransferQueue and why does it matter?
Gotchas
- LinkedBlockingQueue with the no-arg constructor = unbounded = OOM risk under burst
- Mixing add/remove (throw) with put/take (block) leads to inconsistent error handling
- PriorityBlockingQueue is unbounded; use a separate semaphore for bounded priority work
- Iterator on a BlockingQueue is weakly consistent, not a snapshot
- Forgetting to handle InterruptedException in put/take leaves the thread running with the cancellation signal lost