sync.Map vs map+RWMutex
sync.Map is the standard library's concurrent map; reads on its fast path are lock-free. It wins for two specific shapes: write-once-read-many entries, and disjoint-key access from many goroutines. For mixed read/write on the same keys, plain map + sync.RWMutex is usually faster. The Go authors put a warning in the docs to this effect.
What it is
sync.Map is the standard library's concurrent map. It exists in the sync package alongside Mutex and WaitGroup. The API uses any (no generics), so callers perform type assertions on every Load.
The Go authors specifically warn that sync.Map is not the right default. From the docs: "The Map type is optimized for two common use cases: when the entry for a given key is only ever written once but read many times, as in caches that only grow, or when multiple goroutines read, write, and overwrite entries for disjoint sets of keys."
For everything else, plain map with sync.RWMutex is usually faster.
When sync.Map wins
Two specific shapes.
Write-once-read-many. Caches where entries are added once and then read many times. Configuration, route tables, session stores. The reads are lock-free; the writes are infrequent enough that the bookkeeping cost does not matter.
Disjoint-key access. Many goroutines, each touching a different subset of keys. Per-CPU caches, per-tenant state. There is no contention on shared keys, so the per-key lock-free machinery wins.
For both, the win comes from sync.Map's two-tier design: an atomically published read-only map serves as a lock-free fast path, and a mutex-protected dirty map absorbs new writes. Reads that hit the fast path are plain atomic loads with no contention; after enough read misses, the dirty map is promoted to become the new read map. (Go 1.24 replaced this implementation with a concurrent hash-trie, but the documented guidance on which workloads it suits is unchanged.)
When map+RWMutex wins
Pretty much everything else.
Mixed read/write on the same keys. sync.Map's miss counting and dirty-map promotion costs add up; map+RWMutex's straightforward lock is faster.
Write-heavy workloads. Every new key in sync.Map goes through the mutex-protected dirty map; map+RWMutex with a plain Lock is no worse and often better.
Small maps. The overhead per sync.Map entry (more pointers, more atomic operations) dominates with only dozens or hundreds of entries.
Generics, or any other "normal map" feature. map+RWMutex wraps a real Go map, so it supports type parameters, len, and plain range loops (note: neither option preserves iteration order; sort keys explicitly when order matters).
How to choose
Start with map+RWMutex. It is the obvious choice and works for everything. Only switch to sync.Map when the workload fits one of the two specific shapes AND a benchmark confirms the win.
When the shape is unclear, the most useful question is: "of all the operations on this map, what fraction are writes after the initial population, and to what fraction of keys?" If both fractions are low, sync.Map. If either is high, map+RWMutex.
A note on alternatives
For larger-scale concurrent maps (millions of keys, very high throughput), neither sync.Map nor map+RWMutex may be enough. Sharded maps (split keys across N independent maps, each with its own lock) scale further. Several third-party libraries (concurrent-map, xsync) offer typed sharded maps with cleaner ergonomics than rolling a custom one.
But for application code, map+RWMutex is almost always the right answer, sync.Map is the right answer for the two specific shapes above, and sharded maps are the right answer when contention has been measured as the bottleneck.
Primitives by language
- sync.Map.Load(key) (any, bool)
- sync.Map.Store(key, value)
- sync.Map.LoadOrStore(key, value) (actual any, loaded bool)
- sync.Map.LoadAndDelete(key) (value any, loaded bool)
- sync.Map.Delete(key)
- sync.Map.Range(f func(key, value any) bool)
Implementation
Sessions written once when created, read on every request, deleted on logout. Disjoint keys (different sessions accessed by different requests). Sweet spot for sync.Map: lock-free reads, no contention on Load.
```go
package main

import "sync"

type Session struct {
	UserID string
	Token  string
}

var sessions sync.Map // map[string]*Session

func GetSession(token string) *Session {
	v, ok := sessions.Load(token)
	if !ok {
		return nil
	}
	return v.(*Session)
}

func CreateSession(token string, s *Session) {
	sessions.Store(token, s)
}

func GetOrCreate(token string, factory func() *Session) *Session {
	if v, ok := sessions.Load(token); ok {
		return v.(*Session)
	}
	// factory() may run and its result be discarded if another goroutine
	// wins the race; LoadOrStore always returns the value that stuck.
	actual, _ := sessions.LoadOrStore(token, factory())
	return actual.(*Session)
}
```
For mixed read/write on contended keys, this is faster than sync.Map and provides generics and clearer semantics (iteration order is still unspecified, as with any Go map). The trade-off is more boilerplate.
```go
package main

import "sync"

type Cache[V any] struct {
	mu sync.RWMutex
	m  map[string]V
}

func NewCache[V any]() *Cache[V] {
	return &Cache[V]{m: make(map[string]V)}
}

func (c *Cache[V]) Get(key string) (V, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

func (c *Cache[V]) Set(key string, value V) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}

func (c *Cache[V]) GetOrCompute(key string, fn func() V) V {
	c.mu.RLock()
	if v, ok := c.m[key]; ok {
		c.mu.RUnlock()
		return v
	}
	c.mu.RUnlock()

	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.m[key]; ok { // re-check under the write lock
		return v
	}
	v := fn()
	c.m[key] = v
	return v
}
```
Range visits each entry, but the snapshot is weakly consistent: writes during the Range may or may not be visible. Return false from the callback to stop early. Useful for bulk operations like cache eviction or stats collection.
```go
package main

import (
	"fmt"
	"sync"
)

var counts sync.Map

func PrintAll() {
	counts.Range(func(key, value any) bool {
		fmt.Printf("%v = %v\n", key, value)
		return true // continue iterating
	})
}

func PurgeStale(threshold int64) {
	counts.Range(func(key, value any) bool {
		if value.(int64) < threshold {
			counts.Delete(key) // deleting during Range is safe
		}
		return true
	})
}
```
LoadOrStore atomically returns the existing value or inserts a new one. With map+RWMutex, the equivalent requires RLock, check, RUnlock, Lock, re-check, insert, Unlock. LoadOrStore does it in one call without any visible lock.
```go
package main

import "sync"

// Result and actuallyFetch stand in for a real fetch; defined here only so
// the example compiles.
type Result struct{ Body string }

func actuallyFetch(key string) Result { return Result{Body: key} }

// One in-flight call per key. The winner closes done to broadcast the result
// to every waiter (a single buffered channel would wake only one of them).
type call struct {
	done chan struct{}
	res  Result
}

var inflight sync.Map // map[string]*call

// Single-flight: concurrent calls for the same key all wait for the first result.
func FetchOnce(key string) Result {
	c := &call{done: make(chan struct{})}
	actual, loaded := inflight.LoadOrStore(key, c)
	winner := actual.(*call)

	if !loaded {
		// Won the race; this caller runs the fetch.
		defer inflight.Delete(key)
		winner.res = actuallyFetch(key)
		close(winner.done) // wakes all waiters at once
		return winner.res
	}
	// Lost the race; wait for the winner's result.
	<-winner.done
	return winner.res
}
```
Key points
- sync.Map is untyped (any/any). No generics in the standard library version; type assertions everywhere.
- Optimised for write-once-read-many or disjoint-key access. Other patterns: use map+RWMutex.
- Range is weakly consistent: it iterates a snapshot, but mutations during Range may or may not be reflected.
- LoadOrStore is the atomic 'get or insert' that map+RWMutex needs a separate Lock and re-check for.
- If sorted iteration, len, or generics are needed, use plain map with RWMutex. sync.Map is a sharp tool for specific shapes.
Follow-up questions
▸ When is sync.Map actually faster than map+RWMutex?
▸ Why is sync.Map untyped?
▸ Can sync.Map iteration miss entries?
▸ What about ordered iteration?
Gotchas
- sync.Map is untyped; type assertions on every Load are easy to get wrong
- Range is weakly consistent; not safe for code that needs an exact snapshot
- Storing nil values is allowed but easily confused with 'not present'; always check the bool
- For most workloads, map+RWMutex outperforms sync.Map; default to it unless measurements show otherwise
- Range blocks for as long as the callback does; an f that never returns hangs the iteration