Implement a generic LRU Cache with O(1) get and put operations
cs-mid-002
Your answer
Answer as you would in a real interview — explain your thinking, not just the conclusion.
Model answer
The classic approach combines a Dictionary<TKey, LinkedListNode<(TKey, TValue)>> for O(1) lookup with a doubly linked LinkedList<(TKey, TValue)> to maintain access order. Storing the key inside each list node matters: when the tail is evicted, that key is what lets you remove the matching dictionary entry in O(1). On Get, find the node via the dictionary, move it to the front (most recently used), and return its value. On Put, if the key exists, update it in place; otherwise, if at capacity, evict the tail node (least recently used) and remove its key from the dictionary, then add the new node to the front. All operations are O(1) because LinkedList<T> supports O(1) AddFirst and O(1) Remove given a node reference; no linear scan is needed.
Code example
public class LruCache<TKey, TValue> where TKey : notnull
{
    private readonly int _cap;
    private readonly Dictionary<TKey, LinkedListNode<(TKey K, TValue V)>> _map;
    private readonly LinkedList<(TKey K, TValue V)> _list; // front = most recently used

    public LruCache(int capacity) {
        if (capacity <= 0) throw new ArgumentOutOfRangeException(nameof(capacity));
        _cap = capacity;
        _map = new(_cap);
        _list = new();
    }

    public bool TryGet(TKey key, out TValue value) {
        if (!_map.TryGetValue(key, out var node)) { value = default!; return false; }
        // Promote to most recently used: O(1) given the node reference.
        _list.Remove(node);
        _list.AddFirst(node);
        value = node.Value.V;
        return true;
    }

    public void Put(TKey key, TValue value) {
        if (_map.TryGetValue(key, out var existing)) {
            // Key already cached: update the value in place and promote.
            existing.Value = (key, value);
            _list.Remove(existing);
            _list.AddFirst(existing);
            return;
        }
        if (_map.Count >= _cap) {
            // Evict the least recently used entry (tail). The key stored
            // in the node lets us remove the dictionary entry in O(1).
            var tail = _list.Last!;
            _list.RemoveLast();
            _map.Remove(tail.Value.K);
        }
        _map[key] = _list.AddFirst((key, value));
    }
}
Follow-up
How would you make this thread-safe for concurrent reads and writes? Compare ReaderWriterLockSlim to a simple lock.
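One possible direction for the follow-up, as a sketch rather than the only answer: the key observation is that Get is not a pure read here, because it promotes the node in the list. Every operation therefore mutates shared state, so ReaderWriterLockSlim's read lock cannot be used for Get, and a plain lock is both simpler and typically faster for this structure. The wrapper name SynchronizedLruCache below is hypothetical, and it assumes the LruCache<TKey, TValue> class above.

```csharp
// Hypothetical thread-safe wrapper around the LruCache above.
// Since TryGet mutates the recency list, every operation counts as a
// write, so a single lock is the natural choice here; ReaderWriterLockSlim
// would only pay off if Get could be made read-only (e.g. by deferring
// promotions), at the cost of extra complexity.
public class SynchronizedLruCache<TKey, TValue> where TKey : notnull
{
    private readonly LruCache<TKey, TValue> _inner;
    private readonly object _gate = new();

    public SynchronizedLruCache(int capacity) => _inner = new(capacity);

    public bool TryGet(TKey key, out TValue value)
    {
        lock (_gate) { return _inner.TryGet(key, out value); }
    }

    public void Put(TKey key, TValue value)
    {
        lock (_gate) { _inner.Put(key, value); }
    }
}
```

In an interview it is worth saying this trade-off out loud: ReaderWriterLockSlim helps when reads vastly outnumber writes and reads truly do not mutate, neither of which holds for a promoting LRU cache.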