Locks in Go - Mutex, RWMutex, and When to Use What
Posted on Wed 25 March 2026 by Sanyam Khurana in Programming
In our concurrency post, we briefly touched on mutexes. But there's more to locking in Go than just sync.Mutex. Go gives you several synchronization primitives, each designed for a specific access pattern. Use the wrong one and you'll either have a race condition, a deadlock, or unnecessary contention killing your performance.
Let's go through every locking mechanism Go offers, understand when each one shines, and see real-world examples of picking the right lock for the job.
sync.Mutex - The Basic Lock
This is the simplest lock. One goroutine holds it at a time. Everyone else waits.
type UserCache struct {
	mu    sync.Mutex
	users map[string]*User
}

func (c *UserCache) Get(id string) *User {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.users[id]
}

func (c *UserCache) Set(id string, user *User) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.users[id] = user
}
Every call to Get or Set acquires the same lock. If 100 goroutines are reading the cache and 1 is writing, all 100 readers have to wait for each other, even though reads don't conflict with reads. That's wasteful.
Key Properties
- Exclusive: only one goroutine holds it at a time
- Not reentrant: if you try to lock a mutex you already hold, you deadlock
- Zero value is usable: you don't need to initialize it
The non-reentrant part catches people coming from Java or Python, where reentrant locks (Java's synchronized blocks and ReentrantLock, Python's threading.RLock) let the same thread acquire the same lock twice. In Go, this is a deadlock:
func (c *UserCache) BadMethod() {
	c.mu.Lock()
	defer c.mu.Unlock()
	// This will deadlock - we already hold c.mu
	c.mu.Lock() // Blocked forever waiting for ourselves
}
There's no sync.ReentrantMutex in Go, and that's intentional. The Go team considers reentrant locks a code smell - if you need to lock the same mutex twice, your function boundaries are wrong. Refactor instead.
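One common refactor, sketched here against the UserCache type from above (GetOrCreate is an illustrative method, not one from the earlier examples), is to keep unexported helpers that assume the lock is already held, and do all locking once at the exported boundary:

```go
package main

import (
	"fmt"
	"sync"
)

type User struct{ Name string }

type UserCache struct {
	mu    sync.Mutex
	users map[string]*User
}

// getLocked assumes the caller already holds c.mu - it never locks itself.
func (c *UserCache) getLocked(id string) *User {
	return c.users[id]
}

// GetOrCreate locks exactly once at the exported boundary and only calls
// lock-free helpers, so no code path can re-lock c.mu.
func (c *UserCache) GetOrCreate(id string) *User {
	c.mu.Lock()
	defer c.mu.Unlock()
	if u := c.getLocked(id); u != nil {
		return u
	}
	u := &User{Name: id}
	c.users[id] = u
	return u
}

func main() {
	c := &UserCache{users: make(map[string]*User)}
	fmt.Println(c.GetOrCreate("alice").Name) // alice
}
```

The convention of a `Locked` (or `locked`) suffix on helpers that expect the lock to be held makes the contract visible at every call site.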
sync.RWMutex - Read-Write Lock
This is the most commonly misunderstood lock in Go. RWMutex allows multiple concurrent readers OR a single writer, but not both.
type UserCache struct {
	mu    sync.RWMutex
	users map[string]*User
}

func (c *UserCache) Get(id string) *User {
	c.mu.RLock() // Multiple goroutines can RLock simultaneously
	defer c.mu.RUnlock()
	return c.users[id]
}

func (c *UserCache) Set(id string, user *User) {
	c.mu.Lock() // Exclusive - blocks all readers and writers
	defer c.mu.Unlock()
	c.users[id] = user
}
Now those 100 readers from our earlier example can all read simultaneously. They only block when a writer shows up.
How RWMutex Works Internally
The rules are:
- Multiple goroutines can hold RLock at the same time - reads are shared
- Lock (the write lock) is exclusive - it waits for all readers to release, then blocks new readers and writers
- When a writer is waiting, new readers also block - this prevents writer starvation
That third point is important. Without it, a steady stream of readers could starve a writer indefinitely. Go's RWMutex is fair to writers - once a writer starts waiting, new readers queue up behind it.
Timeline:
R1 holds RLock
R2 holds RLock
W1 calls Lock() → waits for R1 and R2 to finish
R3 calls RLock() → waits behind W1 (not allowed to skip ahead)
R1 releases → still waiting (R2 still holds)
R2 releases → W1 acquires Lock
W1 releases → R3 acquires RLock
When RWMutex Helps (and When It Doesn't)
RWMutex is not always faster than Mutex. There's overhead in managing the reader count. The payoff depends on your read-to-write ratio:
| Read:Write Ratio | Better Choice |
|---|---|
| 1:1 | sync.Mutex (RWMutex overhead not worth it) |
| 10:1 | sync.RWMutex (readers rarely block) |
| 100:1 | sync.RWMutex (clear win) |
| 1000:1 | Consider sync.Map or atomic operations |
If writes are frequent, readers will constantly be blocked waiting for writers, and the read-write tracking overhead makes RWMutex slower than a plain Mutex. Benchmark your actual workload before assuming RWMutex is faster.
A Real Example: Configuration Store
A configuration store is a textbook case for RWMutex - configs are read thousands of times per second but updated rarely:
type ConfigStore struct {
	mu     sync.RWMutex
	config map[string]string
}

func NewConfigStore() *ConfigStore {
	return &ConfigStore{config: make(map[string]string)}
}

func (s *ConfigStore) Get(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	val, ok := s.config[key]
	return val, ok
}

func (s *ConfigStore) Set(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.config[key] = value
}

// Bulk reload - happens infrequently
func (s *ConfigStore) Reload(newConfig map[string]string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.config = newConfig
}
Hundreds of goroutines can call Get simultaneously without blocking each other. Only Set and Reload cause contention.
sync.Once - Run Exactly Once
Not technically a lock, but it uses one internally. sync.Once guarantees a function is executed exactly once, regardless of how many goroutines call it:
type DBPool struct {
	once sync.Once
	pool *sql.DB
}

func (d *DBPool) GetConnection() *sql.DB {
	d.once.Do(func() {
		var err error
		d.pool, err = sql.Open("postgres", "connection-string")
		if err != nil {
			log.Fatal(err)
		}
	})
	return d.pool
}
Even if 50 goroutines call GetConnection at the same time, the database connection is created exactly once. The other 49 goroutines block until the first one finishes, then they all get the same pool instance.
This is the idiomatic way to implement lazy initialization and singletons in Go.
sync.Once Gotcha: It Doesn't Retry on Failure
If the function passed to Do panics, Once still considers it "done". It won't retry:
var once sync.Once

func() {
	defer func() { recover() }() // recover so the demo can continue
	once.Do(func() {
		panic("oops") // first call panics...
	})
}()

once.Do(func() {
	fmt.Println("This will never run") // ...but Once is already marked done
})
Go 1.21 added three convenience wrappers - sync.OnceFunc, sync.OnceValue, and sync.OnceValues - but note that none of them retries either:

// sync.OnceFunc re-panics on every subsequent call if the first call panicked
initDB := sync.OnceFunc(func() {
	// initialization code
})

If you need retry-on-failure behavior, you'll need to build your own using sync.Mutex and a boolean flag.
sync/atomic - Lock-Free Operations
For simple counters and flags, atomic operations are faster than mutexes because they use CPU-level instructions instead of OS-level locks:
import "sync/atomic"

type Metrics struct {
	requestCount atomic.Int64
	errorCount   atomic.Int64
	isHealthy    atomic.Bool
}

func (m *Metrics) RecordRequest() {
	m.requestCount.Add(1)
}

func (m *Metrics) RecordError() {
	m.errorCount.Add(1)
}

func (m *Metrics) GetRequestCount() int64 {
	return m.requestCount.Load()
}

func (m *Metrics) SetHealthy(healthy bool) {
	m.isHealthy.Store(healthy)
}
Go 1.19 introduced the typed atomic types (atomic.Int64, atomic.Bool, atomic.Pointer[T]), which are much nicer than the old function-based API (atomic.AddInt64(&counter, 1)).
When to Use Atomics vs. Mutex
- Atomics: single values (counters, flags, pointers). No complex invariants.
- Mutex: when you need to update multiple related values atomically, or when the critical section involves non-trivial logic.
// Atomic is fine here - single counter
var count atomic.Int64
count.Add(1)

// Mutex needed here - two values must be consistent
type Stats struct {
	mu    sync.Mutex
	total int
	sum   float64
}

func (s *Stats) Record(value float64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.total++
	s.sum += value
	// Average = s.sum / float64(s.total) must always be consistent
}
If you used two separate atomics for total and sum, a reader could see the new total but the old sum, giving a wrong average.
sync.Map - Concurrent Map
Go's built-in maps are not goroutine-safe. If you need a concurrent map, you have two options: a regular map with a mutex, or sync.Map.
var cache sync.Map

// Store
cache.Store("key", "value")

// Load
val, ok := cache.Load("key")

// Load or store (atomic check-and-set)
actual, loaded := cache.LoadOrStore("key", "default-value")

// Delete
cache.Delete("key")

// Range (iterate)
cache.Range(func(key, value any) bool {
	fmt.Printf("%v: %v\n", key, value)
	return true // return false to stop iteration
})
sync.Map vs. Map + RWMutex
sync.Map is not always better. It's optimized for two specific patterns:
- Write-once, read-many: keys are set once and then read repeatedly (like a cache that's populated at startup)
- Disjoint key sets: different goroutines access different keys (like per-connection state)
For general-purpose concurrent maps where many goroutines read and write the same keys, a map with RWMutex is usually faster:
// Use this for general-purpose concurrent maps
type SafeMap[K comparable, V any] struct {
	mu sync.RWMutex
	m  map[K]V
}

func (s *SafeMap[K, V]) Get(key K) (V, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	val, ok := s.m[key]
	return val, ok
}

func (s *SafeMap[K, V]) Set(key K, val V) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = val
}
The other downside of sync.Map is that it's not type-safe - keys and values are any, so you lose compile-time type checking.
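If you do want sync.Map's behavior with compile-time types, a thin generic wrapper recovers most of the safety (TypedMap is my name for it, not a standard library type):

```go
package main

import (
	"fmt"
	"sync"
)

// TypedMap wraps sync.Map so keys and values are typed at compile time.
type TypedMap[K comparable, V any] struct {
	m sync.Map
}

func (t *TypedMap[K, V]) Store(key K, val V) { t.m.Store(key, val) }

func (t *TypedMap[K, V]) Load(key K) (V, bool) {
	v, ok := t.m.Load(key)
	if !ok {
		var zero V
		return zero, false
	}
	return v.(V), true // safe: only Store puts values in, and they are all V
}

func (t *TypedMap[K, V]) Delete(key K) { t.m.Delete(key) }

func main() {
	var ages TypedMap[string, int]
	ages.Store("alice", 30)
	age, ok := ages.Load("alice")
	fmt.Println(age, ok) // 30 true
}
```

The type assertion inside Load is the one place the any-typed internals leak through, and the wrapper's typed Store method is what makes it safe.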
sync.Cond - Condition Variables
sync.Cond lets goroutines wait for a condition to be true. It's less commonly used than channels, but has its place:
type Queue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []string
}

func NewQueue() *Queue {
	q := &Queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *Queue) Push(item string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, item)
	q.cond.Signal() // Wake up one waiting goroutine
}

func (q *Queue) Pop() string {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.cond.Wait() // Releases lock, waits, reacquires lock
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item
}
Wait() atomically releases the mutex and suspends the goroutine. When Signal() or Broadcast() wakes it up, it reacquires the mutex before returning.
- Signal(): wakes one waiting goroutine
- Broadcast(): wakes all waiting goroutines
In practice, you'll reach for channels more often than sync.Cond. But Cond is useful when you have a complex condition that multiple goroutines need to wait on, especially when using Broadcast to notify all waiters.
Common Patterns and Pitfalls
Copy a Lock? Deadlock.
Mutexes must not be copied. If you copy a struct containing a mutex, the copy has a separate lock, which defeats the purpose:
type Counter struct {
	mu    sync.Mutex
	count int
}

func main() {
	c1 := Counter{}
	c2 := c1 // Bug: c2 has a copy of the mutex
	// c1 and c2 now have independent locks
	// They don't protect each other
	_ = c2 // silence "declared and not used"
}
Use go vet to catch this - it warns about copying sync types. Pass lock-containing structs by pointer, not by value.
Lock Ordering to Prevent Deadlocks
If you need to hold two locks at once, always acquire them in the same order everywhere:
// Always lock accounts in order of ID to prevent deadlock
func Transfer(from, to *Account, amount int) {
// Ensure consistent ordering
first, second := from, to
if from.ID > to.ID {
first, second = to, from
}
first.mu.Lock()
defer first.mu.Unlock()
second.mu.Lock()
defer second.mu.Unlock()
from.Balance -= amount
to.Balance += amount
}
Without consistent ordering, two concurrent transfers (A to B, and B to A) could each hold one lock and wait for the other forever.
Hold Locks for the Shortest Time Possible
Don't do I/O or expensive computation while holding a lock:
// Bad: holding the lock during a network call
func (c *Cache) GetOrFetch(key string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if val, ok := c.data[key]; ok {
		return val, nil
	}
	// Other goroutines are blocked while we make an HTTP call!
	resp, err := http.Get("https://api.example.com/" + key)
	// ...
}
// Good: release the lock, do the fetch, reacquire
func (c *Cache) GetOrFetch(key string) (string, error) {
	c.mu.RLock()
	if val, ok := c.data[key]; ok {
		c.mu.RUnlock()
		return val, nil
	}
	c.mu.RUnlock()

	// Fetch without holding any lock
	val, err := fetchFromAPI(key)
	if err != nil {
		return "", err
	}

	c.mu.Lock()
	c.data[key] = val
	c.mu.Unlock()
	return val, nil
}
Yes, two goroutines might both fetch the same key. That's usually fine - a duplicate fetch is better than blocking all readers on a network call.
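If duplicate fetches do matter, a common refinement is to re-check the map after taking the write lock, so only the first result wins - a sketch assuming a minimal Cache type, with fetchFromAPI standing in for the real network call:

```go
package main

import (
	"fmt"
	"sync"
)

type Cache struct {
	mu   sync.RWMutex
	data map[string]string
}

// fetchFromAPI stands in for the real (slow) network call.
func fetchFromAPI(key string) (string, error) {
	return "value-for-" + key, nil
}

func (c *Cache) GetOrFetch(key string) (string, error) {
	c.mu.RLock()
	if val, ok := c.data[key]; ok {
		c.mu.RUnlock()
		return val, nil
	}
	c.mu.RUnlock()

	val, err := fetchFromAPI(key) // no lock held during the fetch
	if err != nil {
		return "", err
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	// Re-check: another goroutine may have stored the key while we fetched.
	if existing, ok := c.data[key]; ok {
		return existing, nil // keep the first result; discard ours
	}
	c.data[key] = val
	return val, nil
}

func main() {
	c := &Cache{data: make(map[string]string)}
	v, _ := c.GetOrFetch("user42")
	fmt.Println(v) // value-for-user42
}
```

This still allows duplicate fetches, but every caller observes a single consistent value; deduplicating the fetch itself requires extra machinery beyond plain locks.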
Decision Guide
Here's how I decide which synchronization primitive to use:
What are you protecting?
|
+-- A single counter or flag?
| → sync/atomic
|
+-- A map with write-once, read-many pattern?
| → sync.Map
|
+-- One-time initialization?
| → sync.Once
|
+-- Shared state with mostly reads?
| → sync.RWMutex
|
+-- Shared state with balanced reads and writes?
| → sync.Mutex
|
+-- Passing data between goroutines?
| → channels (not a lock at all)
|
+-- Waiting for a complex condition?
→ sync.Cond (or rethink with channels)
Summary
Go gives you a focused set of synchronization primitives, each with a clear purpose:
| Primitive | Use Case | Key Behavior |
|---|---|---|
| sync.Mutex | General mutual exclusion | One holder at a time |
| sync.RWMutex | Read-heavy workloads | Multiple readers OR one writer |
| sync.Once | One-time initialization | Runs function exactly once |
| sync/atomic | Simple counters and flags | Lock-free CPU instructions |
| sync.Map | Write-once read-many maps | Optimized for specific access patterns |
| sync.Cond | Waiting for conditions | Signal/broadcast to waiters |
The most common choice in practice is sync.RWMutex for read-heavy caches and sync.Mutex for everything else. Reach for atomics when performance matters and the operation is simple. And always run go test -race to catch what you miss.
If you've any questions about locks and synchronization in Go, please let us know in the comments section below.