Implement a concurrent worker pool in Go
go-mid-002
Your answer
Answer as you would in a real interview — explain your thinking, not just the conclusion.
Model answer
A worker pool limits concurrency to N goroutines to prevent resource exhaustion. The standard pattern: create a jobs channel, spawn N worker goroutines that range over it, enqueue all work, then close the channel to signal completion. Workers exit when the channel is drained. sync.WaitGroup tracks when all workers finish. This ensures at most N goroutines run concurrently regardless of how many jobs are submitted — critical for avoiding OOM when job count is unbounded.
Code example
package main

import (
	"fmt"
	"sync"
)

func workerPool(numWorkers, numJobs int) {
	jobs := make(chan int, numJobs)
	var wg sync.WaitGroup

	// Spawn a fixed number of workers
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for job := range jobs {
				fmt.Printf("worker %d: job %d\n", id, job)
			}
		}(w)
	}

	// Enqueue work
	for j := 0; j < numJobs; j++ {
		jobs <- j
	}
	close(jobs) // signal workers: no more jobs
	wg.Wait()   // wait for all workers to finish
}

func main() {
	workerPool(3, 10)
}
Follow-up
How would you modify the pool to support graceful shutdown when a context is cancelled mid-job?