Implement a minimal distributed job queue in Go (workers, retries, dead-letter)
go-sys-003
Your answer
Answer as you would in a real interview — explain your thinking, not just the conclusion.
Model answer
Core requirements: enqueue jobs, have multiple workers consume in parallel, retry failed jobs up to N times with backoff, and route jobs that exhaust their retries to a dead-letter queue. In Go: use a buffered channel as the in-memory queue, a WaitGroup plus a goroutine pool as the workers, exponential backoff with jitter (sleep for base * 2^attempt plus a random jitter) between attempts, and a separate deadLetter channel for permanent failures. For persistence, replace the channel with Postgres (SELECT ... FOR UPDATE SKIP LOCKED, a row-level lock that lets concurrent workers each claim a different row without blocking one another; see the sketch after the code example) or Redis Streams with consumer groups. The key operational concern is idempotency: make job handlers safe to re-run so retries do not cause duplicate side effects.
Code example
import (
    "context"
    "math/rand"
    "sync"
    "time"
)

type Job struct {
    ID      string
    Payload []byte
    Attempt int
}

const maxAttempts = 3

// StartWorkerPool runs n workers that consume from queue until it is closed
// or ctx is cancelled. A failing job is retried in place with exponential
// backoff plus jitter; once it exhausts maxAttempts it goes to deadLetter.
func StartWorkerPool(ctx context.Context, queue <-chan Job, deadLetter chan<- Job, handler func(Job) error, n int) {
    var wg sync.WaitGroup
    for range n { // integer range needs Go 1.22+
        wg.Add(1)
        go func() {
            defer wg.Done()
            for {
                select {
                case job, ok := <-queue:
                    if !ok {
                        return // queue closed and drained
                    }
                    for {
                        if err := handler(job); err == nil {
                            break // success
                        }
                        job.Attempt++
                        if job.Attempt >= maxAttempts {
                            deadLetter <- job // permanent failure
                            break
                        }
                        // Exponential backoff with jitter: ~200ms, ~400ms, ...
                        backoff := time.Duration(1<<job.Attempt)*100*time.Millisecond +
                            time.Duration(rand.Intn(100))*time.Millisecond
                        select {
                        case <-time.After(backoff):
                        case <-ctx.Done():
                            return
                        }
                        // In production, re-enqueue to the persistent store
                        // instead of retrying in place, so a worker crash
                        // cannot lose the job mid-retry.
                    }
                case <-ctx.Done():
                    return
                }
            }
        }()
    }
    wg.Wait()
}
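For the persistence step the model answer mentions, one common Postgres pattern claims a pending row and marks it running in a single statement. A minimal sketch, assuming a hypothetical jobs table with id, payload, attempt, and status columns (the table and helper names are illustrative, not a fixed API):

import (
    "context"
    "database/sql"
)

// claimJob atomically claims one pending job. FOR UPDATE SKIP LOCKED lets
// concurrent workers each lock a different row instead of blocking on the
// same one, so the pool scales across processes and machines.
func claimJob(ctx context.Context, db *sql.DB) (*Job, error) {
    var job Job
    err := db.QueryRowContext(ctx, `
        UPDATE jobs SET status = 'running'
        WHERE id = (
            SELECT id FROM jobs
            WHERE status = 'pending'
            ORDER BY id
            LIMIT 1
            FOR UPDATE SKIP LOCKED
        )
        RETURNING id, payload, attempt`).Scan(&job.ID, &job.Payload, &job.Attempt)
    if err != nil {
        return nil, err // sql.ErrNoRows means no pending jobs right now
    }
    return &job, nil
}

On success the worker marks the row done; on failure it bumps attempt and resets status to pending, or to dead once maxAttempts is reached. A crash leaves a row stuck in running, so a real system would also record a lease timestamp and periodically reset stale claims; that reset is what makes the delivery at-least-once rather than at-most-once.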
Follow-up
How would you ensure exactly-once processing semantics across worker restarts, and what is the trade-off compared to at-least-once?
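One way to answer, sketched: true exactly-once delivery is not achievable over an unreliable network, so the practical approach is at-least-once delivery plus deduplication, recording an idempotency key in the same transaction as the handler's side effects. A minimal sketch, assuming a hypothetical processed_jobs table with a unique id column:

import (
    "context"
    "database/sql"
)

// handleOnce records the job ID and applies the handler's side effects in
// one transaction, so a redelivered job is detected and skipped.
func handleOnce(ctx context.Context, db *sql.DB, job Job, apply func(*sql.Tx) error) error {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op after a successful Commit

    // A second delivery of the same job inserts zero rows, so its side
    // effects are skipped and the duplicate is acked as already done.
    res, err := tx.ExecContext(ctx,
        `INSERT INTO processed_jobs (id) VALUES ($1) ON CONFLICT (id) DO NOTHING`,
        job.ID)
    if err != nil {
        return err
    }
    if n, _ := res.RowsAffected(); n == 0 {
        return nil // duplicate delivery: already processed
    }
    if err := apply(tx); err != nil {
        return err
    }
    return tx.Commit() // dedup record and side effects commit atomically
}

The trade-off versus plain at-least-once: every side effect must flow through the transactional store, which costs throughput and couples handlers to it, and the dedup table grows until pruned. Side effects outside the database (e.g. sending an email) still cannot be made exactly-once; they can only be made safe to repeat.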