Implement a context-aware system design for a real production challenge you've faced
go-mid-006
Your answer
Answer as you would in a real interview — explain your thinking, not just the conclusion.
Model answer
Good candidates at mid level typically pick one of: (1) a simple internal request-routing layer with context propagation, (2) a caching layer with TTL and stampede protection, (3) a webhook delivery system with retries.

For the cache example: the challenge is the 'thundering herd' — when a popular key expires, many requests hit the database simultaneously. Solution: singleflight (golang.org/x/sync/singleflight) ensures only one goroutine performs the fetch on a cache miss; all others wait for its result. Combine this with probabilistic early expiration (jitter on the TTL) so a set of hot keys does not all expire at the same moment.

For webhook delivery: use a worker pool consuming from a channel; each job carries its retry count; failed jobs are re-queued with exponential backoff (e.g. via time.After); jobs that exceed the max retry count go to a dead-letter queue.
Code example
import (
	"context"
	"math/rand"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

type cacheItem struct {
	value     any
	expiresAt time.Time
}

type Cache struct {
	mu    sync.RWMutex
	items map[string]cacheItem
	ttl   time.Duration
	group singleflight.Group
}

func (c *Cache) Get(ctx context.Context, key string, fetch func() (any, error)) (any, error) {
	c.mu.RLock()
	item, ok := c.items[key]
	c.mu.RUnlock()
	if ok && time.Now().Before(item.expiresAt) {
		return item.value, nil
	}
	// Singleflight: only one goroutine fetches on a miss; the rest wait for its result.
	ch := c.group.DoChan(key, func() (any, error) {
		val, err := fetch()
		if err != nil {
			return nil, err
		}
		// Jitter the TTL by up to 10% so hot keys don't all expire at the same moment.
		jitter := time.Duration(rand.Int63n(int64(c.ttl)/10 + 1))
		c.mu.Lock()
		c.items[key] = cacheItem{value: val, expiresAt: time.Now().Add(c.ttl + jitter)}
		c.mu.Unlock()
		return val, nil
	})
	// DoChan (instead of Do) lets the caller honor context cancellation while waiting.
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case res := <-ch:
		return res.Val, res.Err
	}
}
Follow-up
How would you add observability (metrics, tracing) to the webhook delivery system without cluttering business logic?