Implement a token bucket rate limiter in Go with per-user limits
go-sen-002
Your answer
Answer as you would in a real interview — explain your thinking, not just the conclusion.
Model answer
A token bucket allows burst traffic up to the bucket capacity, then enforces a steady refill rate. Two approaches: (1) time-based: store the last refill time and the tokens remaining; on each request, add rate * elapsed tokens (capped at capacity) before spending one. (2) golang.org/x/time/rate.Limiter, which implements the same algorithm internally. For per-user limits, maintain a map[string]*rate.Limiter protected by a sync.RWMutex. Evict idle limiters with a background goroutine to avoid unbounded memory growth. Thread safety is critical: the Limiter itself is goroutine-safe, but the map operations are not.
Code example
package limiter

import (
	"sync"
	"time"

	"golang.org/x/time/rate"
)

type PerUserLimiter struct {
	mu       sync.RWMutex
	limiters map[string]*entry
	rate     rate.Limit
	burst    int
}

type entry struct {
	limiter  *rate.Limiter
	lastSeen time.Time
}

func NewPerUserLimiter(r rate.Limit, burst int) *PerUserLimiter {
	l := &PerUserLimiter{
		limiters: make(map[string]*entry),
		rate:     r,
		burst:    burst,
	}
	go l.cleanup()
	return l
}

// Allow reports whether the given user may proceed, creating a limiter
// on first sight. The write lock guards the map; the rate.Limiter
// itself is goroutine-safe.
func (l *PerUserLimiter) Allow(userID string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()

	e, ok := l.limiters[userID]
	if !ok {
		e = &entry{limiter: rate.NewLimiter(l.rate, l.burst)}
		l.limiters[userID] = e
	}
	e.lastSeen = time.Now()
	return e.limiter.Allow()
}

// cleanup removes limiters not seen for 5 minutes so the map does not
// grow without bound.
func (l *PerUserLimiter) cleanup() {
	for range time.Tick(time.Minute) {
		cutoff := time.Now().Add(-5 * time.Minute)
		l.mu.Lock()
		for id, e := range l.limiters {
			if e.lastSeen.Before(cutoff) {
				delete(l.limiters, id)
			}
		}
		l.mu.Unlock()
	}
}
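The code above uses the library limiter; approach (1) from the model answer, the manual time-based bucket, can be sketched as below. This is a minimal illustration, not the x/time/rate implementation; the names (TokenBucket, ratePerSec) are illustrative.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket stores the last refill time and remaining tokens, and
// tops up lazily on each request: tokens += rate * elapsed, capped at
// capacity.
type TokenBucket struct {
	mu         sync.Mutex
	tokens     float64 // current tokens, <= capacity
	capacity   float64 // burst size
	ratePerSec float64 // refill rate in tokens per second
	lastRefill time.Time
}

func NewTokenBucket(ratePerSec, capacity float64) *TokenBucket {
	return &TokenBucket{
		tokens:     capacity, // start full so an initial burst is allowed
		capacity:   capacity,
		ratePerSec: ratePerSec,
		lastRefill: time.Now(),
	}
}

// Allow refills based on elapsed time, then spends one token if available.
func (b *TokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()

	now := time.Now()
	b.tokens += now.Sub(b.lastRefill).Seconds() * b.ratePerSec
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.lastRefill = now

	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := NewTokenBucket(10, 3) // 10 tokens/sec, burst of 3
	for i := 0; i < 5; i++ {
		fmt.Println(b.Allow()) // burst of 3 passes, then denied until refill
	}
}
```

Mentioning this variant in an interview shows you understand what the library is doing: lazy refill avoids a background ticker per bucket, and the cap on tokens is what bounds the burst.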
Follow-up
How would you distribute this rate limiter across multiple server instances using Redis INCR with expiry (a fixed-window counter, optionally refined into a sliding window)?