queue Module: Queue, LifoQueue, PriorityQueue
queue.Queue is the standard thread-safe FIFO. LifoQueue is a stack. PriorityQueue is a heap. SimpleQueue is an unbounded FIFO with a faster, simpler implementation. All handle the locking internally; pair with a sentinel (None) to signal shutdown to consumers.
What it is
queue is the standard library's collection of thread-safe queues. Four classes, all built on threading primitives, so the caller does not have to manage locks.
Queue: FIFO with optional max size. The general-purpose choice.
LifoQueue: stack. get() returns the most recently put item.
PriorityQueue: heap. get() returns the smallest item by tuple comparison.
SimpleQueue: FIFO without max size and without task_done/join. Faster on the fast path; useful when bounding isn't required.
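A quick sketch of the ordering differences between the three main classes (the values are arbitrary):

```python
import queue

fifo = queue.Queue()
lifo = queue.LifoQueue()
prio = queue.PriorityQueue()

for n in (3, 1, 2):
    fifo.put(n)
    lifo.put(n)
    prio.put(n)

print([fifo.get() for _ in range(3)])  # FIFO: insertion order -> [3, 1, 2]
print([lifo.get() for _ in range(3)])  # LIFO: most recent first -> [2, 1, 3]
print([prio.get() for _ in range(3)])  # heap: smallest first -> [1, 2, 3]
```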
The producer-consumer template
queue.Queue exists to solve this one problem. The template:
q = queue.Queue(maxsize=N)   # bounded for backpressure

def producer():
    for item in source():
        q.put(item)
    q.put(None)              # sentinel

def consumer():
    while True:
        item = q.get()
        if item is None:
            q.put(None)      # let other consumers see it too
            break
        process(item)
        q.task_done()
This handles backpressure (the bounded queue blocks the producer when full), thread-safe coordination (no external lock), and shutdown (the sentinel value). Almost every Python multi-threaded data pipeline is built on this.
task_done and join
These two methods together let the producer wait for all items to be processed, not just put. Each consumer calls task_done() after processing an item; the producer calls q.join() and blocks until task_done has been called for every put.
Useful for "all the work scheduled has finished" without maintaining a custom counter.
Bounded vs unbounded
The default maxsize=0 is unbounded. For production code, that is almost always wrong. An unbounded queue under bursty load grows without bound and OOMs the process.
Always set maxsize to something reasonable. Backpressure is cheap with Queue: when the queue fills, the producer blocks until consumers catch up. That is the right behaviour for almost every pipeline.
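A minimal sketch of the bounded behaviour — put() with a timeout is used here only so the demonstration does not block forever; in a real pipeline the plain blocking put() is usually what you want:

```python
import queue

q = queue.Queue(maxsize=2)
q.put("a")
q.put("b")                    # queue is now full

try:
    q.put("c", timeout=0.1)   # would block; times out instead
except queue.Full:
    print("producer blocked: queue is full")

q.get()                       # a consumer drains one item
q.put("c")                    # now succeeds immediately
```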
SimpleQueue: when speed matters
SimpleQueue was added in Python 3.7 specifically for cases where Queue's features (max size, task_done) are not needed. It uses a lock-free fast path on CPython. For high-throughput producer-consumer that only needs basic FIFO and shutdown, it is measurably faster.
The trade-off: no maxsize (so no backpressure) and no task_done/join (so no waiting for completion). When either is needed, use Queue.
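A rough single-threaded micro-benchmark to see the difference yourself — absolute numbers vary by machine and Python version, so treat the output as indicative only:

```python
import queue
import timeit

def pump(q, n=10_000):
    # push n items through the queue and drain them
    for i in range(n):
        q.put(i)
    for _ in range(n):
        q.get()

t_queue = timeit.timeit(lambda: pump(queue.Queue()), number=20)
t_simple = timeit.timeit(lambda: pump(queue.SimpleQueue()), number=20)
print(f"Queue:       {t_queue:.3f}s")
print(f"SimpleQueue: {t_simple:.3f}s")   # typically noticeably faster
```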
A note on the three queue worlds
Threads, asyncio, and processes each have their own queue type. They do not interoperate.
- queue.Queue for threads.
- asyncio.Queue for coroutines: await put, await get.
- multiprocessing.Queue for processes; items are pickled across the boundary.
Mixing them is a category error. Calling queue.Queue.get() from an async coroutine blocks the event loop. Using multiprocessing.Queue between threads in the same process works but pays unnecessary serialisation cost. Match the queue type to the concurrency model.
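For contrast, a minimal asyncio.Queue sketch of the same producer-consumer shape — note that every queue operation is awaited rather than blocking:

```python
import asyncio

async def main():
    q = asyncio.Queue(maxsize=10)

    async def producer():
        for i in range(3):
            await q.put(i)        # awaits; never blocks the event loop
        await q.put(None)         # sentinel, same convention as threads

    async def consumer():
        while True:
            item = await q.get()
            if item is None:
                break
            print("got", item)

    await asyncio.gather(producer(), consumer())

asyncio.run(main())
```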
Primitives
- queue.Queue(maxsize=0) (FIFO, optionally bounded)
- queue.LifoQueue (LIFO / stack)
- queue.PriorityQueue (sorted by tuple comparison)
- queue.SimpleQueue (unbounded, lock-free fast path)
- put / get / put_nowait / get_nowait / put(timeout=) / get(timeout=)
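The _nowait variants never wait: they raise queue.Empty or queue.Full immediately instead of blocking. A quick sketch:

```python
import queue

q = queue.Queue(maxsize=1)

try:
    q.get_nowait()        # empty queue: raises immediately
except queue.Empty:
    print("empty")

q.put_nowait("x")
try:
    q.put_nowait("y")     # full queue: raises immediately
except queue.Full:
    print("full")
```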
Implementation
The bounded queue gives backpressure: producer blocks when full, consumer blocks when empty. No explicit lock or condition variable needed. The sentinel (None) signals shutdown to the consumer.
import queue
import threading

q = queue.Queue(maxsize=100)

def producer():
    for item in source():
        q.put(item)        # blocks if full
    q.put(None)            # sentinel

def consumer():
    while True:
        item = q.get()
        if item is None:
            q.put(None)    # propagate to others
            break
        process(item)

threading.Thread(target=producer).start()
for _ in range(4):
    threading.Thread(target=consumer).start()

Call task_done() after each get() and join() on the producer side to wait for all items to be processed. Cleaner than counting in a custom variable.
import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()
        try:
            process(item)
        finally:
            q.task_done()   # mark complete

for _ in range(4):
    t = threading.Thread(target=worker, daemon=True)
    t.start()

for item in items:
    q.put(item)

q.join()   # waits until task_done called for every put
print("all done")

PriorityQueue is a heap ordered by tuple comparison. Shape entries as (priority, counter, payload) so that ties break on insertion order rather than trying to compare payloads (which may not support ordering).
import queue
import itertools

pq = queue.PriorityQueue()
counter = itertools.count()   # tie-breaker

pq.put((3, next(counter), "low priority work"))
pq.put((1, next(counter), "urgent"))
pq.put((2, next(counter), "normal"))

while not pq.empty():
    priority, _, item = pq.get()
    print(item)
# Output: urgent, normal, low priority work

get() and put() with a timeout raise queue.Empty / queue.Full instead of blocking forever. Useful for worker loops that need to check a shutdown flag periodically.
import queue
import threading

q = queue.Queue()
shutdown = threading.Event()

def worker():
    while not shutdown.is_set():
        try:
            item = q.get(timeout=1.0)   # check shutdown every 1s
        except queue.Empty:
            continue
        process(item)
        q.task_done()

threading.Thread(target=worker).start()
# ... later:
shutdown.set()

Key points
- queue.Queue is thread-safe by design. No external lock needed.
- maxsize > 0 makes the queue bounded; put() blocks when full. Backpressure for free.
- Standard shutdown: producer puts a sentinel (None); consumer breaks when it sees one.
- task_done() + join() let the producer wait for all submitted work to be processed.
- queue.Queue is for THREADS. asyncio has its own asyncio.Queue. multiprocessing has multiprocessing.Queue.
Follow-up questions
▸ queue.Queue or asyncio.Queue or multiprocessing.Queue?
▸ Why is queue.SimpleQueue faster than queue.Queue?
▸ What goes in PriorityQueue tuples?
▸ How are workers shut down cleanly?
Gotchas
- !queue.Queue is for threads only; asyncio coroutines must use asyncio.Queue
- !PriorityQueue tuples need a tiebreaker; otherwise equal priorities try to compare payloads
- !task_done() called more times than put() raises ValueError
- !maxsize=0 is the default and means UNBOUNDED. Always set explicitly for production.
- !join() blocks the calling thread; if no consumers are running, it blocks forever
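The task_done() gotcha above is easy to demonstrate — one call per get() is the contract, and CPython enforces it:

```python
import queue

q = queue.Queue()
q.put("job")
q.get()
q.task_done()      # balances the put
try:
    q.task_done()  # one call too many
except ValueError as e:
    print("raised:", e)   # ValueError: task_done() called too many times
```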