Implement a retry decorator with configurable attempts and exponential backoff
py-mid-004
Your answer
Answer as you would in a real interview — explain your thinking, not just the conclusion.
Model answer
A retry decorator wraps a function and re-calls it on failure, up to max_attempts times. Use functools.wraps so the wrapper preserves the original function's metadata (name, docstring). The exponential backoff delay is base * 2^attempt; add jitter (a small random fraction) so many clients retrying at the same moment don't create a thundering herd. Because the decorator takes configuration parameters, you need a decorator factory: a function that accepts the config and returns the actual decorator. For async code, await asyncio.sleep instead of calling time.sleep, which would block the event loop.
Code example
import functools
import random
import time
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])

def retry(
    max_attempts: int = 3,
    base_delay: float = 0.5,
    exceptions: tuple[type[Exception], ...] = (Exception,),
) -> Callable[[F], F]:
    """Decorator factory: retry on exception with exponential backoff + jitter."""
    def decorator(func: F) -> F:
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    if attempt == max_attempts - 1:
                        raise  # re-raise on the final attempt
                    delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
                    print(f"Attempt {attempt + 1} failed: {exc}. Retrying in {delay:.2f}s")
                    time.sleep(delay)
        return wrapper  # type: ignore[return-value]
    return decorator

# Usage
import requests

@retry(max_attempts=4, base_delay=1.0, exceptions=(requests.RequestException,))
def fetch(url: str) -> dict:
    response = requests.get(url, timeout=5)
    response.raise_for_status()
    return response.json()
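The model answer notes that async code should use asyncio.sleep. A minimal sketch of an async-compatible variant, with the same backoff logic, might look like this (the name retry_async is illustrative, not part of any standard API):

import asyncio
import functools
import random

def retry_async(
    max_attempts: int = 3,
    base_delay: float = 0.5,
    exceptions: tuple[type[Exception], ...] = (Exception,),
):
    """Decorator factory for coroutines: same backoff + jitter, but non-blocking."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return await func(*args, **kwargs)
                except exceptions:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts
                    # Non-blocking sleep: other tasks keep running on the event loop
                    await asyncio.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
        return wrapper
    return decorator

One way to support both sync and async functions in a single decorator is to check asyncio.iscoroutinefunction(func) inside the factory and return the appropriate wrapper.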
Follow-up
How would you extend this to only retry on specific exception types, and how would you make it compatible with both sync and async functions?