sync.Mutex and sync.RWMutex
sync.Mutex is Go's basic mutex (Lock/Unlock). sync.RWMutex allows many readers OR one writer. Always use defer Unlock right after Lock. Mutexes are not reentrant in Go: same goroutine locking twice deadlocks. Pass mutexes by pointer; copying a mutex copies its state (including held-ness).
What it is
sync.Mutex is the basic mutual-exclusion lock in Go. Two methods: Lock and Unlock. The zero value is unlocked and ready to use, so no constructor is required.
sync.RWMutex extends it with a read/write distinction. RLock allows many concurrent readers; Lock allows one writer with no concurrent readers.
Both are non-reentrant: the same goroutine acquiring the same lock twice deadlocks.
The patterns to memorise
Lock then defer Unlock. Always. The defer makes the unlock happen on every exit path (return, panic, late branch). Skipping it leaks the lock the first time something unexpected happens.
Pass struct pointers, not values. A Mutex carries its locked state. Copying the struct produces two independent mutexes. go vet catches most of these, but the rule is: if a struct has a Mutex field, methods take a pointer receiver and callers pass pointers.
Mutex by default, RWMutex only when measured. RWMutex's reader-counter bookkeeping costs real cycles. For short critical sections, plain Mutex is faster despite "blocking readers". Switch only when the workload is read-dominated and a profiler reports the lock as hot.
Acquire late, release early. Hold the lock for the smallest possible region. Don't make network calls or long computations under the lock; do those before or after.
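A minimal sketch of the shape (type and method names hypothetical): take the lock only to copy the shared data, release it, then do the slow formatting outside the critical section.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

type Report struct {
	mu    sync.Mutex
	lines []string
}

func (r *Report) Add(line string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.lines = append(r.lines, line)
}

// Render holds the lock only long enough to snapshot the slice,
// then does the (potentially slow) work outside the lock.
func (r *Report) Render() string {
	r.mu.Lock()
	snapshot := make([]string, len(r.lines))
	copy(snapshot, r.lines)
	r.mu.Unlock() // release early: the join below doesn't touch shared state

	return strings.Join(snapshot, "\n")
}

func main() {
	r := &Report{}
	r.Add("a")
	r.Add("b")
	fmt.Println(r.Render())
}
```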
What Go does differently
Many languages offer reentrant locks (Java's ReentrantLock, Python's threading.RLock). Go deliberately does not. The argument: if code needs reentrancy, it does not know whether it already holds the lock, which means the lock boundary is muddled.
The Go-idiomatic fix is to split the public API from a private helper:
func (s *State) Update(x int) {
s.mu.Lock()
defer s.mu.Unlock()
s.updateLocked(x)
}
func (s *State) updateLocked(x int) {
// assumes s.mu is held
s.value += x
s.notifyLocked()
}
The convention methodLocked says "callers must hold the lock". Public methods acquire and call locked helpers. Locked helpers do not call back to public methods. The discipline replaces reentrancy.
When to reach for something else
For a counter, prefer atomic.Int64 or atomic.Uint64. Atomic operations are faster than a mutex and avoid the deferred unlock entirely.
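A sketch, assuming Go 1.19+ for the atomic.Int64 type (the helper function name is hypothetical):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countConcurrently increments an atomic counter from n goroutines.
func countConcurrently(n int) int64 {
	var hits atomic.Int64 // zero value ready to use, like Mutex
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			hits.Add(1) // no Lock/Unlock, no defer
		}()
	}
	wg.Wait()
	return hits.Load()
}

func main() {
	fmt.Println(countConcurrently(100)) // 100
}
```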
For one-time initialisation, use sync.Once. It handles the locking and the once-only check automatically.
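A sketch of the sync.Once pattern (the config contents are hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	once   sync.Once
	config map[string]string
)

// loadConfig is safe to call from many goroutines concurrently;
// the function passed to Do runs exactly once.
func loadConfig() map[string]string {
	once.Do(func() {
		config = map[string]string{"env": "prod"} // hypothetical setup work
	})
	return config
}

func main() {
	fmt.Println(loadConfig()["env"])
	fmt.Println(loadConfig()["env"]) // second call skips the init
}
```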
For a producer-consumer queue, use a channel. Channels are Go's first-class concurrency primitive; mutexes are for protecting state, not for coordination.
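A minimal producer-consumer sketch (function names hypothetical): the producer closes the channel to signal "no more work", and the consumer's range loop exits cleanly.

```go
package main

import "fmt"

// sumJobs consumes the queue until the producer closes it.
func sumJobs(jobs <-chan int) int {
	sum := 0
	for j := range jobs {
		sum += j
	}
	return sum
}

func main() {
	jobs := make(chan int, 4) // buffered queue

	done := make(chan int)
	go func() { done <- sumJobs(jobs) }() // consumer

	for i := 1; i <= 3; i++ { // producer
		jobs <- i
	}
	close(jobs) // close signals "no more work"; the consumer's range exits
	fmt.Println(<-done) // 6
}
```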
For complex coordination (waits, signals, multiple conditions), use sync.Cond. It is rarely the right tool; channels usually express the same logic more clearly.
Things that bite
A common bug: holding a mutex across a channel send. If the receiver needs the same mutex (perhaps to update some shared state in response), the result is a deadlock. Either do the channel work before acquiring the lock, or use a buffered channel with capacity to absorb the send.
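A sketch of the safe shape (type and field names hypothetical): compute under the lock, release it, then send.

```go
package main

import (
	"fmt"
	"sync"
)

type Broadcaster struct {
	mu   sync.Mutex
	seq  int
	outC chan int
}

// Bad shape (not shown): sending on outC while still holding mu can
// deadlock if the receiver also needs mu.
//
// Good shape: take what you need under the lock, then send after
// releasing it.
func (b *Broadcaster) Next() {
	b.mu.Lock()
	b.seq++
	n := b.seq
	b.mu.Unlock() // release before the channel operation

	b.outC <- n
}

func main() {
	b := &Broadcaster{outC: make(chan int, 1)}
	b.Next()
	fmt.Println(<-b.outC) // 1
}
```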
Another: copying a struct that contains a mutex, especially via embedded mutex in an embedded struct. go vet -copylocks catches most of these. Run vet in CI.
A subtle one: passing a *sync.Mutex to a goroutine that outlives the caller. The goroutine references the mutex; the mutex is fine. But if the caller mutates the struct holding the mutex (replacing the field, for instance), the goroutine sees stale state. Mutex membership in a struct is part of the struct's identity; design accordingly.
Primitives
- sync.Mutex (basic mutex)
- sync.RWMutex (many readers OR one writer)
- sync.Locker interface
- go vet -copylocks (catches mutex copies at build time)
Implementation
The defer ensures Unlock runs even if the function panics or returns early. This is the canonical Go pattern. Skipping defer (e.g., calling Unlock manually at the end) leaks the lock if anything in between panics or returns early.
```go
package main

import "sync"

type Counter struct {
	mu    sync.Mutex
	value int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}

func (c *Counter) Get() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.value
}
```

Many readers can hold RLock simultaneously. Only one writer can hold Lock, and while it does, no readers can. Use RWMutex for caches and lookup tables where reads vastly outnumber writes. For short critical sections, plain Mutex is often faster.
```go
package main

import "sync"

type Cache struct {
	mu   sync.RWMutex
	data map[string]string
}

// NewCache initialises the map; writing to the zero-value nil map would panic.
func NewCache() *Cache {
	return &Cache{data: make(map[string]string)}
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.data[key]
	return v, ok
}

func (c *Cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
}
```

A Mutex carries its locked state. Copying the struct copies the locked state too, producing two mutexes that disagree about who holds the lock. go vet's copylocks check catches most of these statically, including mutexes embedded in other structs; run vet in CI.
```go
package main

import "sync"

type Bad struct {
	mu sync.Mutex
	n  int
}

func giveMeBad(b Bad) { // copy of Bad, copy of mutex; go vet flags this
	b.mu.Lock() // independent of caller's mutex
	b.n++
	b.mu.Unlock()
}

// FIX: take a pointer
type Good struct {
	mu sync.Mutex
	n  int
}

func updateGood(g *Good) { // shared
	g.mu.Lock()
	defer g.mu.Unlock()
	g.n++
}
```

TryLock (added in Go 1.18) returns true if it acquired the lock, false otherwise. Useful for "skip if busy" patterns: a metric collector that should not pile up if collection takes longer than the interval. Use sparingly; most code should wait, not skip.
```go
package main

import "sync"

type Throttled struct {
	mu sync.Mutex
}

func (t *Throttled) MaybeRun(work func()) bool {
	if !t.mu.TryLock() {
		return false // skip; another caller is running
	}
	defer t.mu.Unlock()
	work()
	return true
}
```

Key points
- Mutexes are NOT reentrant in Go. Same goroutine locking twice deadlocks.
- Always Lock then defer Unlock. Skipping defer is the source of most lock leaks.
- Mutex zero value is unlocked and ready to use. No initialisation needed.
- Copying a Mutex copies its state. Embed by value carefully; pass structs containing mutexes by pointer.
- RWMutex is slower than Mutex for write-heavy or short-critical-section workloads. Profile.
Follow-up questions
- Why doesn't Go support reentrant mutexes?
- Mutex or RWMutex by default?
- Can a goroutine that holds a Mutex pass it to another goroutine?
- What goes wrong when defer Unlock is forgotten?
Gotchas
- Mutex is not reentrant; same goroutine locking twice deadlocks
- Copying a struct that contains a Mutex copies the lock state; pass by pointer
- RWMutex is slower than Mutex for short critical sections; profile before assuming
- Forgetting defer Unlock leaks the lock on panic or early return
- Holding a mutex across a channel send/receive can deadlock with the channel's other end