Channels, Buffered, Unbuffered, Patterns
Channels are typed conduits between goroutines. Unbuffered = synchronous handoff (sender blocks until receiver ready). Buffered = bounded queue. Closing signals 'no more values' to receivers. The cornerstone of Go's 'share by communicating' philosophy.
What channels actually are
A channel is a typed FIFO queue with built-in synchronization. Unbuffered channels are rendezvous points: the sender blocks until a receiver shows up (and vice versa). Buffered channels are bounded queues: the sender blocks only when the buffer is full.
The "share by communicating" framing is Go's tagline: "Do not communicate by sharing memory; instead, share memory by communicating." The idea: instead of multiple goroutines sharing memory and locking, one goroutine owns the data and communicates with others via channels. The channel send IS the synchronization; no separate lock is needed.
When unbuffered vs buffered
| Use unbuffered when | Use buffered when |
|---|---|
| A sync point is needed ("X has happened") | Absorbing producer bursts |
| Strict producer-consumer rendezvous | Decoupling producer/consumer rates |
| Signal completion | Backpressure with a buffer |
| Default choice | Measurement shows slack is needed |
Closing rules
Two rules that bite everyone
- Only the sender side closes. A receiver must never close: closing would race with future sends, and a send on a closed channel panics.
- Close exactly once. Closing twice panics. With multiple senders, designate one closer goroutine that waits for all senders to finish (via a sync.WaitGroup).
The pattern for multiple senders + bounded close:

```go
go func() {
	senderWG.Wait() // wait for every sender to finish
	close(ch)       // then close exactly once
}()
```

The closer goroutine waits, then closes once. All senders just send; the closer handles termination.
When NOT to use channels
For raw shared state (a counter, a config object), sync.Mutex or sync/atomic is faster than a channel. Channels excel at handoff: producer-consumer, work distribution, signaling. They're not the answer to every concurrency question.
Rob Pike's rule: "Channels orchestrate; mutexes serialize." If goroutines need to coordinate (one signals another, one passes work to another), use channels. If they need to protect shared state, use a mutex.
Primitives by language
- make(chan T) / make(chan T, capacity)
- ch <- v / v := <-ch / v, ok := <-ch
- close(ch)
- for v := range ch
Implementation
make(chan int) is unbuffered. Send blocks until a receiver reads. Use for producer-consumer rendezvous and synchronization signals.

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered
	go func() {
		ch <- 42 // blocks until someone reads
	}()
	fmt.Println(<-ch) // 42, sender unblocks
}
```

make(chan int, 10) is buffered. The producer can send up to 10 values without blocking. Once the buffer is full, send blocks until receivers drain it. The classic producer-consumer pattern.
```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)
	ch <- 1; ch <- 2; ch <- 3 // all non-blocking
	// ch <- 4 // would block, buffer full

	fmt.Println(<-ch, <-ch, <-ch) // 1 2 3 (FIFO)
}
```

close(ch) signals receivers that no more values will arrive. for v := range ch exits when the channel is closed and drained. The clean way to terminate consumers.
```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	go func() {
		for i := 0; i < 5; i++ {
			ch <- i
		}
		close(ch) // signal end
	}()
	for v := range ch { // exits when ch is closed AND drained
		fmt.Println(v)
	}
}
```

v, ok := <-ch: ok is false when the channel is closed and drained. Use it to distinguish a real value from "channel closed" without ranging.
```go
package main

// process is a placeholder for whatever work the consumer does.
func process(v int) { _ = v }

func receive(ch <-chan int) {
	for {
		v, ok := <-ch
		if !ok {
			// channel closed and drained
			return
		}
		process(v)
	}
}
```

Fan-out: one input channel → N workers. Fan-in: N input channels → one output. Standard composition; the building blocks of pipelines.
```go
package main

import "sync"

// process is a placeholder for per-item work.
func process(v int) int { return v }

// Fan-out: one input → N workers, each reading from the same channel
func fanOut(in <-chan int, workers int) []chan int {
	outs := make([]chan int, workers)
	for i := range outs {
		outs[i] = make(chan int)
	}
	for i := 0; i < workers; i++ {
		go func(out chan<- int) {
			defer close(out)
			for v := range in {
				out <- process(v)
			}
		}(outs[i])
	}
	return outs
}

// Fan-in: N inputs → one output
func fanIn(ins ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(ins))
	for _, in := range ins {
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(in)
	}
	go func() { wg.Wait(); close(out) }()
	return out
}
```

Two strict rules: (1) only the sender side closes; (2) close exactly once. Closing from the receiver, or closing twice, panics. Multiple senders → designate one closer goroutine plus a WaitGroup.
```go
package main

import "sync"

func multipleProducers(producers []chan<- int, finalCh chan int) {
	var wg sync.WaitGroup
	// ... producers send to finalCh and call wg.Done when finished ...
	wg.Add(len(producers))
	// The CLOSER pattern: one goroutine waits for all producers, then closes
	go func() {
		wg.Wait()
		close(finalCh) // exactly once
	}()
}

// BROKEN: each producer closes finalCh → panics on the second close
// BROKEN: receiver closes finalCh → a sender will panic on its next send
```

Key points
- Unbuffered: synchronous; send blocks until receive, and vice versa
- Buffered: send blocks only when the buffer is full; receive blocks only when empty
- close(ch) = "no more values"; receivers see ok=false; sending on a closed channel panics
- Nil channels block forever, useful in select to disable a case
- Channels are not free; for tight CPU loops, a mutex is faster
Follow-up questions
- Buffered or unbuffered, which to use?
- What does closing a nil channel do?
- When are channels slower than a mutex?
- What's the right channel buffer size?
Gotchas
- Closing a channel twice → panic
- Sending on a closed channel → panic (so the receiver should never close)
- Range loop on a never-closed channel → blocks forever (goroutine leak)
- Make sure only ONE goroutine closes the channel; coordinate with a WaitGroup if there are multiple senders
- Buffered channel with capacity equal to the number of sends → potential deadlock if a receive is missed