Python threading: Lock, RLock, Condition, Event, Barrier
The threading module ships the usual suspects: Lock (mutex), RLock (reentrant), Condition (wait/notify), Event (one-shot signal), Semaphore (counted permits), Barrier (rendezvous for N threads). The GIL makes single bytecodes look atomic, but multi-step operations are not, so these primitives are still needed.
What it is
Python's threading module ships the standard set of synchronisation primitives. The shape is familiar to anyone who has written threaded code in another language; the twist is the Global Interpreter Lock (GIL).
The GIL means at most one Python bytecode runs at a time across all threads. So why are locks needed at all? Because most interesting operations (x += 1, list.append, dict[k] = compute()) are multiple bytecodes, and the GIL can be released between them. Threads can interleave inside what looks like one statement.
The GIL also means thread-based parallelism does not help CPU-bound Python work. Processes are the answer there. Threads are still useful for I/O-bound work because the GIL is released during blocking syscalls.
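A quick way to see the I/O point: time.sleep releases the GIL just like a blocking syscall, so four threads "waiting on I/O" overlap instead of running back to back. A minimal sketch:

```python
import threading
import time

def blocking_call():
    time.sleep(0.2)                    # releases the GIL, like a blocking syscall

start = time.monotonic()
threads = [threading.Thread(target=blocking_call) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.monotonic() - start     # roughly 0.2 s, not the 0.8 s a serial run takes
```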
The primitives
Lock: the basic non-reentrant mutex. Acquire, do work, release. Same thread acquiring twice deadlocks. Prefer with lock: to release on exception.
RLock: reentrant variant. Same thread can acquire multiple times; must release the same number of times. Use when nested calls all need the lock and refactoring is not feasible.
Condition: lock + wait/notify. The classic producer-consumer primitive when the predicate is complex (not just "non-empty"). Always wait inside a while loop on the predicate (spurious wakeups are allowed by the spec).
Event: one-shot, multi-waiter boolean. Once set, every waiter (current and future) sees True immediately. The right tool for shutdown signals and "ready" markers.
Semaphore / BoundedSemaphore: counted permits. Acquire blocks if no permits available; release returns one. BoundedSemaphore catches the bug where release count exceeds acquire count.
Barrier: N threads call wait(); none proceed until all N have arrived. Reusable. The optional action runs once per cycle on whichever thread completes the barrier.
What to use when
For a counter or simple shared state, Lock. For pure integer counters, itertools.count() also works: calling next() on it is effectively atomic in CPython, because the increment happens in C while the GIL is held.
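A sketch of the itertools.count approach; note the atomicity of next() here is a CPython implementation detail, not a language guarantee:

```python
import itertools
import threading

counter = itertools.count()            # next(counter) increments in C, under the GIL

def worker():
    for _ in range(1000):
        next(counter)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

total = next(counter)                  # 4000 increments happened, so this yields 4000
```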
For producer-consumer, queue.Queue. It is built on top of these primitives and handles bounded capacity, full/empty waits, and shutdown via sentinels. Hand-roll with Condition only when the predicate is something more complex.
For shutdown signals or any one-time event, Event. The pattern shutdown.wait(timeout=1.0) lets a worker sleep for up to a second between work items yet wake immediately when shutdown is signalled, all in one call.
For limiting in-flight concurrency (connection pool, bounded fan-out), Semaphore. Acquire to use, release when done.
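A minimal sketch of bounded fan-out with a semaphore. The peak/active bookkeeping is added here only to make the limit observable; real code would just wrap the work in `with sem:`.

```python
import threading
import time

MAX_IN_FLIGHT = 3                      # illustrative limit
sem = threading.BoundedSemaphore(MAX_IN_FLIGHT)
state_lock = threading.Lock()
active = 0
peak = 0

def task():
    global active, peak
    with sem:                          # blocks until a permit is free
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)               # stand-in for the rate-limited work
        with state_lock:
            active -= 1

threads = [threading.Thread(target=task) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
# peak never exceeds MAX_IN_FLIGHT
```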
For multi-thread iterative algorithms, Barrier. Each thread does its share, all sync at the barrier, the merge step runs in the action, repeat.
The GIL helps, but does not eliminate races. Multi-bytecode operations are not atomic: a single dict lookup is atomic in CPython, but computing a value and storing it is not. Use a lock for any read-modify-write.
Holding a lock across an I/O call is dangerous. The thread blocks; other threads waiting for the lock also block. Acquire late, release early.
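A common shape of "acquire late, release early": check a cache under the lock, do the slow call with no lock held, then re-acquire only to store. The fetch helper here is a hypothetical stand-in for real network I/O:

```python
import threading

cache = {}
cache_lock = threading.Lock()

def fetch(url):
    return "payload for " + url        # hypothetical stand-in for a slow network call

def get(url):
    with cache_lock:                   # acquire late: hold only for the cache check
        if url in cache:
            return cache[url]
    value = fetch(url)                 # slow I/O runs with no lock held
    with cache_lock:                   # re-acquire briefly to store the result
        return cache.setdefault(url, value)   # first writer wins if two threads raced
```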
threading.Lock and asyncio.Lock are different objects with different semantics; don't mix them. asyncio.Lock is awaited cooperatively and yields to the event loop while waiting, whereas a threading.Lock acquired inside a coroutine blocks the whole event loop.
Primitives at a glance
- threading.Lock (non-reentrant mutex)
- threading.RLock (reentrant mutex; same thread can re-acquire)
- threading.Condition (wait/notify on a predicate)
- threading.Event (one-shot, multi-waiter signal)
- threading.Semaphore / BoundedSemaphore
- threading.Barrier (rendezvous N threads)
Implementation
counter += 1 is three bytecodes: LOAD, ADD, STORE. Two threads can interleave and lose updates even with the GIL. The lock makes the read-modify-write atomic. For pure integers, an itertools.count or a queue is often cleaner; for arbitrary state, the lock is the standard tool.
import threading

lock = threading.Lock()
counter = 0

def increment():
    global counter
    with lock:
        counter += 1  # safe under the lock

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()

assert counter == 100

A method holds the lock and calls another method on the same object that also tries to acquire. With Lock that deadlocks. With RLock the same thread can re-acquire as many times as it wants; release must match acquire count.
import threading

class Account:
    def __init__(self):
        self._lock = threading.RLock()
        self._balance = 0

    def deposit(self, amount):
        with self._lock:
            self._balance += amount

    def transfer(self, other, amount):
        with self._lock:            # acquire once
            self.withdraw(amount)   # withdraw also acquires (same thread, fine)
            other.deposit(amount)

    def withdraw(self, amount):
        with self._lock:            # second acquire by same thread
            self._balance -= amount

The condition variable holds a lock, allows waiting for a predicate, and lets others notify when the predicate might be true. Always loop on the predicate (spurious wakeups). For most producer-consumer needs queue.Queue is simpler, but Condition is the right primitive when the predicate is more complex than "non-empty".
import threading
from collections import deque

class BoundedQueue:
    def __init__(self, capacity):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._items = deque()
        self._capacity = capacity

    def put(self, item):
        with self._not_full:
            while len(self._items) >= self._capacity:
                self._not_full.wait()    # releases lock, waits
            self._items.append(item)
            self._not_empty.notify()     # wake one consumer

    def get(self):
        with self._not_empty:
            while not self._items:
                self._not_empty.wait()
            item = self._items.popleft()
            self._not_full.notify()
            return item

Event is a thread-safe boolean with wait. Set it once; every waiter and every future waiter sees True. Used for shutdown coordination, "wait until config loaded", and any one-shot signal.
import threading
import time

shutdown = threading.Event()

def worker():
    while not shutdown.is_set():
        do_work()
        if shutdown.wait(timeout=1):  # wait OR poll for shutdown
            break
    cleanup()

threading.Thread(target=worker).start()

time.sleep(60)
shutdown.set()  # all workers see it

All threads call wait(); none proceed until N have arrived. Used for parallel iterative algorithms (every thread completes step K before any starts K+1). Reusable across iterations.
import threading

barrier = threading.Barrier(4, action=lambda: print("merge done"))

def worker(i):
    for step in range(10):
        compute(i, step)
        barrier.wait()  # wait for the other 3
        # action runs here exactly once per step

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()

Key points
- Lock acquired twice on the same thread = deadlock. RLock allows recursive entry.
- Even with the GIL, operations like dict[k] += 1 are NOT atomic (load, add, store across multiple bytecodes).
- Condition.wait() releases the lock and re-acquires on wakeup. Always pair with a predicate loop to handle spurious wakeups.
- Event is the right tool for one-time signals (shutdown flag, ready signal). Don't roll a custom Lock+flag.
- queue.Queue is built on these and provides a thread-safe producer-consumer for free.
Follow-up questions
- Doesn't the GIL make all operations atomic?
- Lock vs RLock: which is the default?
- When is queue.Queue preferable over Lock + Condition?
- Why is Event better than a Lock + boolean for shutdown?
Gotchas
- Holding a threading.Lock across an await (or yield from) blocks the whole asyncio event loop
- Forgetting `with` and using acquire/release manually leaks locks on exception
- Condition.wait() must be called inside a loop on the predicate; a single 'if' is wrong
- Barrier's action runs once per cycle, in one of the waiting threads (the last to arrive, in CPython); an exception in it breaks the barrier for everyone
- BoundedSemaphore raises ValueError if release() exceeds the initial count; Semaphore silently allows it
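The last gotcha in a runnable sketch: a plain Semaphore lets an extra release() silently raise the permit count, while BoundedSemaphore turns the same bug into a ValueError.

```python
import threading

plain = threading.Semaphore(1)
plain.release()                        # silently bumps the permit count to 2
extra_ok = plain.acquire(blocking=False) and plain.acquire(blocking=False)

bounded = threading.BoundedSemaphore(1)
try:
    bounded.release()                  # never acquired: one release too many
    raised = False
except ValueError:
    raised = True                      # BoundedSemaphore catches the bug
```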